mirror of https://github.com/vmware-tanzu/velero.git (synced 2026-04-12 11:47:02 +00:00)

Compare commits: copilot/de...v1.18.0 (29 commits)
Commits:
6adcf06b5b, ffa65605a6, bd8dfe9ee2, 54783fbe28, cb5f56265a, 0c7b89a44e, aa89713559, 5db4c65a92, 87db850f66, c7631fc4a4, 9a37478cc2, 5b54ccd2e0, 43b926a58b, 9bfc78e769, c9e26256fa, 6e315c32e2, 91cbc40956, 556d5826a8, 62939cec18, 7d6a10d3ea, 1c0cf6c51d, 58f0b29091, 5cb4cdba61, 325eb50480, 993b80a350, a909bd1f85, 62a47b9fc5, 31e9dcbb87, f824c3ca3b
@@ -17,6 +17,7 @@ If you're using Velero and want to add your organization to this list,
<a href="https://www.replicated.com/" border="0" target="_blank"><img alt="replicated.com" src="site/static/img/adopters/replicated-logo-red.svg" height="50"></a>
<a href="https://cloudcasa.io/" border="0" target="_blank"><img alt="cloudcasa.io" src="site/static/img/adopters/cloudcasa.svg" height="50"></a>
<a href="https://azure.microsoft.com/" border="0" target="_blank"><img alt="azure.com" src="site/static/img/adopters/azure.svg" height="50"></a>
<a href="https://www.broadcom.com/" border="0" target="_blank"><img alt="broadcom.com" src="site/static/img/adopters/broadcom.svg" height="50"></a>
## Success Stories

Below is a list of adopters of Velero in **production environments** that have
@@ -68,6 +69,9 @@ Replicated uses the Velero open source project to enable snapshots in [KOTS][101
**[Microsoft Azure][105]**<br>
[Azure Backup for AKS][106] is an Azure native, Kubernetes aware, Enterprise ready backup for containerized applications deployed on Azure Kubernetes Service (AKS). AKS Backup utilizes Velero to perform backup and restore operations to protect stateful applications in AKS clusters.<br>

**[Broadcom][107]**<br>
[VMware Cloud Foundation][108] (VCF) offers built-in [vSphere Kubernetes Service][109] (VKS), a Kubernetes runtime that includes a CNCF certified Kubernetes distribution, to deploy and manage containerized workloads. VCF empowers platform engineers with native [Kubernetes multi-cluster management][110] capability for managing Kubernetes (K8s) infrastructure at scale. VCF utilizes Velero for Kubernetes data protection enabling platform engineers to back up and restore containerized workloads manifests & persistent volumes, helping to increase the resiliency of stateful applications in VKS cluster.

## Adding your organization to the list of Velero Adopters

If you are using Velero and would like to be included in the list of `Velero Adopters`, add an SVG version of your logo to the `site/static/img/adopters` directory in this repo and submit a [pull request][3] with your change. Name the image file something that reflects your company (e.g., if your company is called Acme, name the image acme.png). See this for an example [PR][4].
@@ -125,3 +129,8 @@ If you would like to add your logo to a future `Adopters of Velero` section on [

[105]: https://azure.microsoft.com/
[106]: https://learn.microsoft.com/azure/backup/backup-overview

[107]: https://www.broadcom.com/
[108]: https://www.vmware.com/products/cloud-infrastructure/vmware-cloud-foundation
[109]: https://www.vmware.com/products/cloud-infrastructure/vsphere-kubernetes-service
[110]: https://blogs.vmware.com/cloud-foundation/2025/09/29/empowering-platform-engineers-with-native-kubernetes-multi-cluster-management-in-vmware-cloud-foundation/
@@ -13,7 +13,7 @@
# limitations under the License.

# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS velero-builder

ARG GOPROXY
ARG BIN
@@ -49,7 +49,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache

# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS restic-builder
FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS restic-builder

ARG GOPROXY
ARG BIN
@@ -73,7 +73,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache

# Velero image packing section
FROM paketobuildpacks/run-jammy-tiny:latest
FROM paketobuildpacks/run-jammy-tiny:0.2.104

LABEL maintainer="Xun Jiang <jxun@vmware.com>"

@@ -15,7 +15,7 @@
ARG OS_VERSION=1809

# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS velero-builder

ARG GOPROXY
ARG BIN
@@ -7,11 +7,11 @@
| Maintainer | GitHub ID | Affiliation |
|---------------------|---------------------------------------------------------------|--------------------------------------------------|
| Scott Seago | [sseago](https://github.com/sseago) | [OpenShift](https://github.com/openshift) |
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | [VMware](https://www.github.com/vmware/) |
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | [VMware](https://www.github.com/vmware/) |
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | Broadcom |
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | Broadcom |
| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | Broadcom |
| Shubham Pampattiwar | [shubham-pampattiwar](https://github.com/shubham-pampattiwar) | [OpenShift](https://github.com/openshift) |
| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | [VMware](https://www.github.com/vmware/) |
| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | Broadcom |
| Anshul Ahuja | [anshulahuja98](https://github.com/anshulahuja98) | [Microsoft Azure](https://www.github.com/azure/) |
| Tiger Kaovilai | [kaovilai](https://github.com/kaovilai) | [OpenShift](https://github.com/openshift) |

@@ -27,14 +27,3 @@
* JenTing Hsiao ([jenting](https://github.com/jenting))
* Dave Smith-Uchida ([dsu-igeek](https://github.com/dsu-igeek))
* Ming Qiu ([qiuming-best](https://github.com/qiuming-best))

## Velero Contributors & Stakeholders

| Feature Area | Lead |
|------------------------|:------------------------------------------------------------------------------------:|
| Technical Lead | Daniel Jiang [reasonerjt](https://github.com/reasonerjt) |
| Kubernetes CSI Liaison | |
| Deployment | |
| Community Management | Orlin Vasilev [OrlinVasilev](https://github.com/OrlinVasilev) |
| Product Management | Pradeep Kumar Chaturvedi [pradeepkchaturvedi](https://github.com/pradeepkchaturvedi) |
@@ -42,13 +42,11 @@ The following is a list of the supported Kubernetes versions for each Velero ver

| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|-------------------------------------|
| 1.17 | 1.18-latest | 1.31.7, 1.32.3, 1.33.1, and 1.34.0 |
| 1.18 | 1.18-latest | 1.33.7, 1.34.1, and 1.35.0 |
| 1.17 | 1.18-latest | 1.31.7, 1.32.3, 1.33.1, and 1.34.0 |
| 1.16 | 1.18-latest | 1.31.4, 1.32.3, and 1.33.0 |
| 1.15 | 1.18-latest | 1.28.8, 1.29.8, 1.30.4 and 1.31.1 |
| 1.14 | 1.18-latest | 1.27.9, 1.28.9, and 1.29.4 |
| 1.13 | 1.18-latest | 1.26.5, 1.27.3, 1.27.8, and 1.28.3 |
| 1.12 | 1.18-latest | 1.25.7, 1.26.5, 1.26.7, and 1.27.3 |
| 1.11 | 1.18-latest | 1.23.10, 1.24.9, 1.25.5, and 1.26.1 |

Velero supports IPv4, IPv6, and dual stack environments. Support for this was tested against Velero v1.8.
Tiltfile (2 changed lines)
@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(

tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.25 as tilt-helper
FROM golang:1.25.7 as tilt-helper

# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \
changelogs/CHANGELOG-1.18.md (new file, 109 lines)
@@ -0,0 +1,109 @@
## v1.18

### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.18.0

### Container Image
`velero/velero:v1.18.0`

### Documentation
https://velero.io/docs/v1.18/

### Upgrading
https://velero.io/docs/v1.18/upgrade-to-1.18/

### Highlights
#### Concurrent backup
In v1.18, Velero can process multiple backups concurrently. This is a significant usability improvement, especially for multi-tenant and multi-user deployments: backups submitted by different users can run simultaneously without interfering with each other.

See the design at https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/concurrent-backup-processing.md for more details.

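The idea of bounded concurrency with per-tenant serialization can be sketched with a small worker pool. This is not Velero's actual implementation; the `backup` type, the tenant-keyed locking, and the concurrency limit are illustrative assumptions.

```go
// Sketch: process backups concurrently with a bounded worker pool, while
// serializing backups that share a tenant so they don't interfere.
package main

import (
	"fmt"
	"sync"
)

type backup struct {
	name   string
	tenant string
}

// runBackups runs at most maxConcurrent backups at a time; backups of the
// same tenant are serialized via a per-tenant mutex.
func runBackups(backups []backup, maxConcurrent int) []string {
	sem := make(chan struct{}, maxConcurrent) // bounds total concurrency
	var tenantLocks sync.Map                  // tenant name -> *sync.Mutex

	var mu sync.Mutex
	var done []string
	var wg sync.WaitGroup

	for _, b := range backups {
		wg.Add(1)
		go func(b backup) {
			defer wg.Done()
			sem <- struct{}{} // acquire a worker slot
			defer func() { <-sem }()

			l, _ := tenantLocks.LoadOrStore(b.tenant, &sync.Mutex{})
			l.(*sync.Mutex).Lock()
			defer l.(*sync.Mutex).Unlock()

			// ...perform the actual backup work here...
			mu.Lock()
			done = append(done, b.name)
			mu.Unlock()
		}(b)
	}
	wg.Wait()
	return done
}

func main() {
	done := runBackups([]backup{
		{"daily-a", "tenant-1"}, {"daily-b", "tenant-2"}, {"weekly-a", "tenant-1"},
	}, 2)
	fmt.Println(len(done)) // all three complete
}
```

The key design point is that the semaphore caps cluster-wide load while the per-tenant mutex preserves ordering guarantees within a tenant.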
#### Cache volume for data movers
In v1.18, Velero allows users to configure cache volumes for data mover pods during restore for CSI snapshot data movement and fs-backup. This brings the following benefits:
- Solves the problem of data mover pods failing when the pod's ephemeral disk space is limited
- Solves the problem of multiple data mover pods failing to run concurrently on one node when the node's ephemeral disk space is limited
- Together with the backup repository's cache limit configuration, an appropriately sized cache volume helps improve restore throughput

See the design at https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/backup-repo-cache-volume.md for more details.
#### Incremental size for data movers
In v1.18, Velero lets users observe the incremental size of data mover backups for CSI snapshot data movement and fs-backup, so they can directly see the data reduction achieved by incremental backups.

#### Wildcard support for namespaces
In v1.18, Velero accepts glob patterns in namespace filters during backup and restore, so users can filter namespaces in batches.

#### VolumePolicy for PVC phase
In v1.18, Velero VolumePolicy supports actions conditioned on PVC phase, which helps users apply special handling to PVCs in a specific phase, e.g., skipping PVCs in the Pending or Lost phase from the backup.

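A volume policy for this might look like the following hypothetical sketch. The overall `version`/`volumePolicies` shape matches existing Velero resource policies, but the condition key (shown here as `pvcPhase`) and accepted values are assumptions; check the v1.18 volume policy documentation for the exact schema.

```yaml
# Hypothetical sketch: skip PVCs that are not usable for data movement.
version: v1
volumePolicies:
  - conditions:
      pvcPhase: Pending   # assumed condition key, not verified against v1.18
    action:
      type: skip
  - conditions:
      pvcPhase: Lost
    action:
      type: skip
```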
#### Scalability and Resiliency improvements
##### Prevent Velero server OOM Kill for large backup repositories
In v1.18, some backup repository operations are deferred and executed outside the Velero server process, so the Velero server won't be OOM-killed.

#### Performance improvement for VolumePolicy
In v1.18, VolumePolicy processing is optimized for large numbers of pods/PVCs, so its performance is significantly improved.

#### Events for data mover pod diagnostics
In v1.18, events are recorded in the data mover pod diagnostics, giving users more information for troubleshooting when a data mover pod fails.

### Runtime and dependencies
Golang runtime: 1.25.7
kopia: 0.22.3

### Limitations/Known issues

### Breaking changes
#### Deprecation of the PVC selected-node feature
In accordance with the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md#deprecation-policy), the PVC selected-node feature is deprecated in v1.18. Velero handles the PVC's selected-node annotation appropriately, so no user action is required.

### All Changes
* Remove backup from running list when backup fails validation (#9498, @sseago)
* Maintenance Job only uses the first element of the LoadAffinity array (#9494, @blackpiglet)
* Fix issue #9478, add diagnose info on expose peek fails (#9481, @Lyndon-Li)
* Add Role, RoleBinding, ClusterRole, and ClusterRoleBinding in restore sequence. (#9474, @blackpiglet)
* Add maintenance job and data mover pod's labels and annotations setting. (#9452, @blackpiglet)
* Fix plugin init container names exceeding DNS-1123 limit (#9445, @mpryc)
* Add PVC-to-Pod cache to improve volume policy performance (#9441, @shubham-pampattiwar)
* Remove VolumeSnapshotClass from CSI B/R process. (#9431, @blackpiglet)
* Use hookIndex for recording multiple restore exec hooks. (#9366, @blackpiglet)
* Sanitize Azure HTTP responses in BSL status messages (#9321, @shubham-pampattiwar)
* Remove labels associated with previous backups (#9206, @Joeavaikath)
* Add VolumePolicy support for PVC Phase conditions to allow skipping Pending PVCs (#9166, @claude)
* feat: Enhance BackupStorageLocation with Secret-based CA certificate support (#9141, @kaovilai)
* Add `--apply` flag to `install` command, allowing usage of Kubernetes apply to make changes to existing installs (#9132, @mjnagel)
* Fix issue #9194, add doc for GOMAXPROCS behavior change (#9420, @Lyndon-Li)
* Apply volume policies to VolumeGroupSnapshot PVC filtering (#9419, @shubham-pampattiwar)
* Fix issue #9276, add doc for cache volume support (#9418, @Lyndon-Li)
* Add Prometheus metrics for maintenance jobs (#9414, @shubham-pampattiwar)
* Fix issue #9400, connect repo first time after creation so that init params could be written (#9407, @Lyndon-Li)
* Cache volume for PVR (#9397, @Lyndon-Li)
* Cache volume support for DataDownload (#9391, @Lyndon-Li)
* don't copy securitycontext from first container if configmap found (#9389, @sseago)
* Refactor repo provider interface for static configuration (#9379, @Lyndon-Li)
* Fix issue #9365, prevent fake completion notification due to multiple updates of a single PVR (#9375, @Lyndon-Li)
* Add cache volume configuration (#9370, @Lyndon-Li)
* Track actual resource names for GenerateName in restore status (#9368, @shubham-pampattiwar)
* Fix managed fields patch for resources using GenerateName (#9367, @shubham-pampattiwar)
* Support cache volume for generic restore exposer and pod volume exposer (#9362, @Lyndon-Li)
* Add incrementalSize to DU/PVB for reporting new/changed size (#9357, @sseago)
* Add snapshotSize for DataDownload, PodVolumeRestore (#9354, @Lyndon-Li)
* Add cache dir configuration for udmrepo (#9353, @Lyndon-Li)
* Fix the Job build error when BackupRepository name is longer than 63. (#9350, @blackpiglet)
* Add cache configuration to VGDP (#9342, @Lyndon-Li)
* Fix issue #9332, add bytesDone for cache files (#9333, @Lyndon-Li)
* Fix typos in documentation (#9329, @T4iFooN-IX)
* Concurrent backup processing (#9307, @sseago)
* VerifyJSONConfigs verifies every element in Data. (#9302, @blackpiglet)
* Fix issue #9267, add events to data mover prepare diagnostic (#9296, @Lyndon-Li)
* Add option for privileged fs-backup pod (#9295, @sseago)
* Fix issue #9193, don't connect repo in repo controller (#9291, @Lyndon-Li)
* Implement concurrency control for cache of native VolumeSnapshotter plugin. (#9281, @0xLeo258)
* Fix issue #7904, remove the code and doc for PVC node selection (#9269, @Lyndon-Li)
* Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases (#9264, @shubham-pampattiwar)
* Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment (#9256, @shubham-pampattiwar)
* Implement wildcard namespace pattern expansion for backup namespace includes/excludes. This change adds support for wildcard patterns (*, ?, [abc], {a,b,c}) in namespace includes and excludes during backup operations (#9255, @Joeavaikath)
* Protect VolumeSnapshot field from race condition during multi-thread backup (#9248, @0xLeo258)
* Update AzureAD Microsoft Authentication Library to v1.5.0 (#9244, @priyansh17)
* Get pod list once per namespace in pvc IBA (#9226, @sseago)
* Fix issue #7725, add design for backup repo cache configuration (#9148, @Lyndon-Li)
* Fix issue #9229, don't attach backupPVC to the source node (#9233, @Lyndon-Li)
* feat: Permit specifying annotations for the BackupPVC (#9173, @clementnuss)
@@ -1 +0,0 @@
Add `--apply` flag to `install` command, allowing usage of Kubernetes apply to make changes to existing installs
@@ -1 +0,0 @@
feat: Enhance BackupStorageLocation with Secret-based CA certificate support
@@ -1 +0,0 @@
Fix issue #7725, add design for backup repo cache configuration
@@ -1 +0,0 @@
Add VolumePolicy support for PVC Phase conditions to allow skipping Pending PVCs
@@ -1 +0,0 @@
feat: Permit specifying annotations for the BackupPVC
@@ -1 +0,0 @@
Remove labels associated with previous backups
@@ -1 +0,0 @@
Get pod list once per namespace in pvc IBA
@@ -1 +0,0 @@
Fix issue #9229, don't attach backupPVC to the source node
@@ -1 +0,0 @@
Update AzureAD Microsoft Authentication Library to v1.5.0
@@ -1 +0,0 @@
Protect VolumeSnapshot field from race condition during multi-thread backup
@@ -1,10 +0,0 @@
Implement wildcard namespace pattern expansion for backup namespace includes/excludes.

This change adds support for wildcard patterns (*, ?, [abc], {a,b,c}) in namespace includes and excludes during backup operations.
When wildcard patterns are detected, they are expanded against the list of active namespaces in the cluster before the backup proceeds.

Key features:
- Wildcard patterns in namespace includes/excludes are automatically detected and expanded
- Pattern validation ensures unsupported patterns (regex, consecutive asterisks) are rejected
- Empty wildcard results (e.g., "invalid*" matching no namespaces) correctly result in empty backups
- Exact namespace names and "*" continue to work as before (no expansion needed)
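The expansion behavior described above can be sketched with Go's standard `path.Match`, which handles `*`, `?`, and `[abc]` (brace alternation `{a,b,c}` is omitted from this sketch). This is not Velero's implementation; the function name and signature are illustrative.

```go
// Sketch: expand glob patterns against the cluster's active namespaces.
package main

import (
	"fmt"
	"path"
	"sort"
	"strings"
)

// expandNamespaces returns the namespaces matching any include pattern.
// A lone "*" and exact names pass through without expansion; patterns
// matching nothing contribute no namespaces (yielding an empty backup).
func expandNamespaces(patterns, active []string) []string {
	seen := map[string]bool{}
	for _, p := range patterns {
		if p == "*" { // "*" keeps its special meaning: everything
			for _, ns := range active {
				seen[ns] = true
			}
			continue
		}
		if !strings.ContainsAny(p, "*?[") { // exact name, no expansion
			seen[p] = true
			continue
		}
		for _, ns := range active {
			if ok, err := path.Match(p, ns); err == nil && ok {
				seen[ns] = true
			}
		}
	}
	out := make([]string, 0, len(seen))
	for ns := range seen {
		out = append(out, ns)
	}
	sort.Strings(out)
	return out
}

func main() {
	active := []string{"app-dev", "app-prod", "kube-system", "monitoring"}
	fmt.Println(expandNamespaces([]string{"app-*"}, active)) // [app-dev app-prod]
}
```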
@@ -1 +0,0 @@
Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment
@@ -1 +0,0 @@
Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases
@@ -1 +0,0 @@
Fix issue #7904, remove the code and doc for PVC node selection
@@ -1 +0,0 @@
Implement concurrency control for cache of native VolumeSnapshotter plugin.
@@ -1 +0,0 @@
Fix issue #9193, don't connect repo in repo controller
@@ -1 +0,0 @@
Add option for privileged fs-backup pod
@@ -1 +0,0 @@
Fix issue #9267, add events to data mover prepare diagnostic
@@ -1 +0,0 @@
VerifyJSONConfigs verifies every element in Data.
@@ -1 +0,0 @@
Concurrent backup processing
@@ -1 +0,0 @@
Sanitize Azure HTTP responses in BSL status messages
@@ -1 +0,0 @@
Fix typos in documentation
@@ -1 +0,0 @@
Fix issue #9332, add bytesDone for cache files
@@ -1 +0,0 @@
Add cache configuration to VGDP
@@ -1 +0,0 @@
Fix the Job build error when BackupRepository name is longer than 63.
@@ -1 +0,0 @@
Add cache dir configuration for udmrepo
@@ -1 +0,0 @@
Add snapshotSize for DataDownload, PodVolumeRestore
@@ -1 +0,0 @@
Add incrementalSize to DU/PVB for reporting new/changed size
@@ -1 +0,0 @@
Support cache volume for generic restore exposer and pod volume exposer
@@ -1 +0,0 @@
Use hookIndex for recording multiple restore exec hooks.
@@ -1 +0,0 @@
Fix managed fields patch for resources using GenerateName
@@ -1 +0,0 @@
Track actual resource names for GenerateName in restore status
@@ -1 +0,0 @@
Add cache volume configuration
@@ -1 +0,0 @@
Fix issue #9365, prevent fake completion notification due to multiple updates of a single PVR
@@ -1 +0,0 @@
Refactor repo provider interface for static configuration
@@ -1 +0,0 @@
don't copy securitycontext from first container if configmap found
@@ -1 +0,0 @@
Cache volume support for DataDownload
@@ -1 +0,0 @@
Cache volume for PVR
@@ -1 +0,0 @@
Fix issue #9400, connect repo first time after creation so that init params could be written
@@ -1 +0,0 @@
Add Prometheus metrics for maintenance jobs
@@ -1 +0,0 @@
Fix issue #9276, add doc for cache volume support
@@ -1 +0,0 @@
Apply volume policies to VolumeGroupSnapshot PVC filtering
@@ -1 +0,0 @@
Fix issue #9194, add doc for GOMAXPROCS behavior change
@@ -1 +0,0 @@
Remove VolumeSnapshotClass from CSI B/R process.
@@ -1 +0,0 @@
Add PVC-to-Pod cache to improve volume policy performance
@@ -1 +0,0 @@
Fix plugin init container names exceeding DNS-1123 limit
@@ -1 +0,0 @@
Add maintenance job and data mover pod's labels and annotations setting.
@@ -1 +0,0 @@
Add Role, RoleBinding, ClusterRole, and ClusterRoleBinding in restore sequence.
@@ -1 +0,0 @@
Fix issue #9478, add diagnose info on expose peek fails
@@ -1 +0,0 @@
Maintenance Job only uses the first element of the LoadAffinity array
@@ -1 +0,0 @@
Remove backup from running list when backup fails validation
changelogs/unreleased/9508-kaovilai (new file, 1 line)
@@ -0,0 +1 @@
Fix VolumePolicy PVC phase condition filter for unbound PVCs (#9507)

changelogs/unreleased/9537-kaovilai (new file, 1 line)
@@ -0,0 +1 @@
Fix VolumePolicy PVC phase condition filter for unbound PVCs (#9507)

changelogs/unreleased/9539-Joeavaikath (new file, 1 line)
@@ -0,0 +1 @@
Support all glob wildcard characters in namespace validation
go.mod (14 changed lines)
@@ -1,6 +1,6 @@
module github.com/vmware-tanzu/velero

go 1.25.0
go 1.25.7

require (
cloud.google.com/go/storage v1.57.2
@@ -171,11 +171,11 @@ require (
go.opentelemetry.io/contrib/detectors/gcp v1.38.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
go.opentelemetry.io/otel v1.38.0 // indirect
go.opentelemetry.io/otel/metric v1.38.0 // indirect
go.opentelemetry.io/otel/sdk v1.38.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.38.0 // indirect
go.opentelemetry.io/otel/trace v1.38.0 // indirect
go.opentelemetry.io/otel v1.40.0 // indirect
go.opentelemetry.io/otel/metric v1.40.0 // indirect
go.opentelemetry.io/otel/sdk v1.40.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.40.0 // indirect
go.opentelemetry.io/otel/trace v1.40.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
go.uber.org/multierr v1.11.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
@@ -183,7 +183,7 @@ require (
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.47.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/sys v0.40.0 // indirect
golang.org/x/term v0.37.0 // indirect
golang.org/x/time v0.14.0 // indirect
golang.org/x/tools v0.38.0 // indirect
go.sum (24 changed lines)
@@ -748,18 +748,18 @@ go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.6
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
go.opentelemetry.io/otel v1.40.0 h1:oA5YeOcpRTXq6NN7frwmwFR0Cn3RhTVZvXsP4duvCms=
go.opentelemetry.io/otel v1.40.0/go.mod h1:IMb+uXZUKkMXdPddhwAHm6UfOwJyh4ct1ybIlV14J0g=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0 h1:rixTyDGXFxRy1xzhKrotaHy3/KXdPhlWARrCgK+eqUY=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0/go.mod h1:dowW6UsM9MKbJq5JTz2AMVp3/5iW5I/TStsk8S+CfHw=
go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
go.opentelemetry.io/otel/sdk v1.38.0/go.mod h1:ghmNdGlVemJI3+ZB5iDEuk4bWA3GkTpW+DOoZMYBVVg=
go.opentelemetry.io/otel/sdk/metric v1.38.0 h1:aSH66iL0aZqo//xXzQLYozmWrXxyFkBJ6qT5wthqPoM=
go.opentelemetry.io/otel/sdk/metric v1.38.0/go.mod h1:dg9PBnW9XdQ1Hd6ZnRz689CbtrUp0wMMs9iPcgT9EZA=
go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
go.opentelemetry.io/otel/metric v1.40.0 h1:rcZe317KPftE2rstWIBitCdVp89A2HqjkxR3c11+p9g=
go.opentelemetry.io/otel/metric v1.40.0/go.mod h1:ib/crwQH7N3r5kfiBZQbwrTge743UDc7DTFVZrrXnqc=
go.opentelemetry.io/otel/sdk v1.40.0 h1:KHW/jUzgo6wsPh9At46+h4upjtccTmuZCFAc9OJ71f8=
go.opentelemetry.io/otel/sdk v1.40.0/go.mod h1:Ph7EFdYvxq72Y8Li9q8KebuYUr2KoeyHx0DRMKrYBUE=
go.opentelemetry.io/otel/sdk/metric v1.40.0 h1:mtmdVqgQkeRxHgRv4qhyJduP3fYJRMX4AtAlbuWdCYw=
go.opentelemetry.io/otel/sdk/metric v1.40.0/go.mod h1:4Z2bGMf0KSK3uRjlczMOeMhKU2rhUqdWNoKcYrtcBPg=
go.opentelemetry.io/otel/trace v1.40.0 h1:WA4etStDttCSYuhwvEa8OP8I5EWu24lkOzp+ZYblVjw=
go.opentelemetry.io/otel/trace v1.40.0/go.mod h1:zeAhriXecNGP/s2SEG3+Y8X9ujcJOTqQ5RgdEJcawiA=
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5/go.mod h1:nmDLcffg48OtT/PSW0Hg7FvpRQsQh5OSqIylirxKC7o=
go.starlark.net v0.0.0-20201006213952-227f4aabceb5/go.mod h1:f0znQkUKRrkk36XxWbGjMqQM8wGv/xHBVE2qc3B5oFU=
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca h1:VdD38733bfYv5tUZwEIskMM93VanwNIi5bIKnDrJdEY=
@@ -969,8 +969,8 @@ golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

FROM --platform=$TARGETPLATFORM golang:1.25-bookworm
FROM --platform=$TARGETPLATFORM golang:1.25.7-bookworm

ARG GOPROXY

@@ -21,9 +21,11 @@ ENV GO111MODULE=on
ENV GOPROXY=${GOPROXY}

# kubebuilder test bundle is separated from kubebuilder. Need to setup it for CI test.
RUN curl -sSLo envtest-bins.tar.gz https://go.kubebuilder.io/test-tools/1.22.1/linux/$(go env GOARCH) && \
mkdir /usr/local/kubebuilder && \
tar -C /usr/local/kubebuilder --strip-components=1 -zvxf envtest-bins.tar.gz
# Using setup-envtest to download envtest binaries
RUN go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest && \
mkdir -p /usr/local/kubebuilder/bin && \
ENVTEST_ASSETS_DIR=$(setup-envtest use 1.33.0 --bin-dir /usr/local/kubebuilder/bin -p path) && \
cp -r ${ENVTEST_ASSETS_DIR}/* /usr/local/kubebuilder/bin/

RUN wget --quiet https://github.com/kubernetes-sigs/kubebuilder/releases/download/v3.2.0/kubebuilder_linux_$(go env GOARCH) && \
mv kubebuilder_linux_$(go env GOARCH) /usr/local/kubebuilder/bin/kubebuilder && \
@@ -1,5 +1,5 @@
diff --git a/go.mod b/go.mod
index 5f939c481..6ae17f4a1 100644
index 5f939c481..f6205aa3c 100644
--- a/go.mod
+++ b/go.mod
@@ -24,32 +24,31 @@ require (
@@ -14,13 +14,13 @@ index 5f939c481..6ae17f4a1 100644
- golang.org/x/term v0.4.0
- golang.org/x/text v0.6.0
- google.golang.org/api v0.106.0
+ golang.org/x/crypto v0.36.0
+ golang.org/x/net v0.38.0
+ golang.org/x/crypto v0.45.0
+ golang.org/x/net v0.47.0
+ golang.org/x/oauth2 v0.28.0
+ golang.org/x/sync v0.12.0
+ golang.org/x/sys v0.31.0
+ golang.org/x/term v0.30.0
+ golang.org/x/text v0.23.0
+ golang.org/x/sync v0.18.0
+ golang.org/x/sys v0.38.0
+ golang.org/x/term v0.37.0
+ golang.org/x/text v0.31.0
+ google.golang.org/api v0.114.0
)

@@ -64,11 +64,11 @@ index 5f939c481..6ae17f4a1 100644
)

-go 1.18
+go 1.23.0
+go 1.24.0
+
+toolchain go1.23.7
+toolchain go1.24.11
diff --git a/go.sum b/go.sum
index 026e1d2fa..805792055 100644
index 026e1d2fa..4a37e7ac7 100644
--- a/go.sum
+++ b/go.sum
@@ -1,23 +1,24 @@
@@ -170,8 +170,8 @@ index 026e1d2fa..805792055 100644
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.5.0 h1:U/0M97KRkSFvyD/3FSmdP5W5swImpNgle/EHFhOsQPE=
-golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU=
+golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
+golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
+golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
+golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -181,8 +181,8 @@ index 026e1d2fa..805792055 100644
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.5.0 h1:GyT4nK/YDHSqa1c4753ouYCDajOYKTja9Xb/OHtgvSw=
-golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
+golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
+golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
+golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
+golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
-golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
-golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
@@ -194,8 +194,8 @@ index 026e1d2fa..805792055 100644
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
-golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
+golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
+golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -205,21 +205,21 @@ index 026e1d2fa..805792055 100644
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18=
-golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
+golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
+golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
+golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.4.0 h1:O7UWfv5+A2qiuulQk30kVinPoMtoIPeVaKLEgLpVkvg=
-golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
+golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
+golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
+golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
+golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.6.0 h1:3XmdazWV+ubf7QgHSTWeykHOci5oeekaGJBLkrkaw4k=
-golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
+golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
+golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
+golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=

@@ -134,6 +134,7 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
    pv := new(corev1api.PersistentVolume)
    var err error

    var pvNotFoundErr error
    if groupResource == kuberesource.PersistentVolumeClaims {
        if err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &pvc); err != nil {
            v.logger.WithError(err).Error("fail to convert unstructured into PVC")
@@ -142,8 +143,10 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group

        pv, err = kubeutil.GetPVForPVC(pvc, v.client)
        if err != nil {
            v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
            return false, err
            // Any error means PV not available - save to return later if no policy matches
            v.logger.Debugf("PV not found for PVC %s: %v", pvc.Namespace+"/"+pvc.Name, err)
            pvNotFoundErr = err
            pv = nil
        }
    }

@@ -158,7 +161,7 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
        vfd := resourcepolicies.NewVolumeFilterData(pv, nil, pvc)
        action, err := v.volumePolicy.GetMatchAction(vfd)
        if err != nil {
            v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for PV %s", pv.Name)
            v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for %+v", vfd)
            return false, err
        }

@@ -167,15 +170,21 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
        // If there is no match action, go on to the next check.
        if action != nil {
            if action.Type == resourcepolicies.Snapshot {
                v.logger.Infof(fmt.Sprintf("performing snapshot action for pv %s", pv.Name))
                v.logger.Infof("performing snapshot action for %+v", vfd)
                return true, nil
            } else {
                v.logger.Infof("Skip snapshot action for pv %s as the action type is %s", pv.Name, action.Type)
                v.logger.Infof("Skip snapshot action for %+v as the action type is %s", vfd, action.Type)
                return false, nil
            }
        }
    }

    // If resource is PVC, and PV is nil (e.g., Pending/Lost PVC with no matching policy), return the original error
    if groupResource == kuberesource.PersistentVolumeClaims && pv == nil && pvNotFoundErr != nil {
        v.logger.WithError(pvNotFoundErr).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
        return false, pvNotFoundErr
    }

    // If this PV is claimed, see if we've already taken a (pod volume backup)
    // snapshot of the contents of this PV. If so, don't take a snapshot.
    if pv.Spec.ClaimRef != nil {
@@ -209,7 +218,7 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
        return true, nil
    }

    v.logger.Infof(fmt.Sprintf("skipping snapshot action for pv %s possibly due to no volume policy setting or snapshotVolumes is false", pv.Name))
    v.logger.Infof("skipping snapshot action for pv %s possibly due to no volume policy setting or snapshotVolumes is false", pv.Name)
    return false, nil
}

@@ -219,6 +228,7 @@ func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod cor
        return false, nil
    }

    var pvNotFoundErr error
    if v.volumePolicy != nil {
        var resource any
        var err error
@@ -230,10 +240,13 @@ func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod cor
            v.logger.WithError(err).Errorf("fail to get PVC for pod %s", pod.Namespace+"/"+pod.Name)
            return false, err
        }
        resource, err = kubeutil.GetPVForPVC(pvc, v.client)
        pvResource, err := kubeutil.GetPVForPVC(pvc, v.client)
        if err != nil {
            v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
            return false, err
            // Any error means PV not available - save to return later if no policy matches
            v.logger.Debugf("PV not found for PVC %s: %v", pvc.Namespace+"/"+pvc.Name, err)
            pvNotFoundErr = err
        } else {
            resource = pvResource
        }
    }

@@ -260,6 +273,12 @@ func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod cor
            return false, nil
        }
    }

        // If no policy matched and PV was not found, return the original error
        if pvNotFoundErr != nil {
            v.logger.WithError(pvNotFoundErr).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
            return false, pvNotFoundErr
        }
    }

    if v.shouldPerformFSBackupLegacy(volume, pod) {

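The `ShouldPerformSnapshot` change above replaces an early `return false, err` with a deferred error: a failed PV lookup is remembered in `pvNotFoundErr`, policy matching is attempted with PVC-only data, and the saved error is returned only when no policy matched. The following standalone sketch illustrates that control flow; `matchAction`, `lookupPV`, and `shouldSnapshot` are hypothetical stand-ins for this illustration, not Velero APIs.

```go
package main

import (
	"errors"
	"fmt"
)

// matchAction stands in for a resource-policy match result; nil means no policy matched.
type matchAction struct{ Type string }

// lookupPV is a stand-in for kubeutil.GetPVForPVC: it fails when the PVC is
// unbound (Pending/Lost), since no PV exists yet.
func lookupPV(bound bool) (string, error) {
	if !bound {
		return "", errors.New("PVC is not bound to a PV")
	}
	return "example-pv", nil
}

// shouldSnapshot mirrors the pattern in the hunk above: a PV lookup failure is
// saved instead of returned immediately, the policy is evaluated with PVC-only
// data, and the original error surfaces only when no policy matched.
func shouldSnapshot(bound bool, policy *matchAction) (bool, error) {
	var pvNotFoundErr error
	pv, err := lookupPV(bound)
	if err != nil {
		pvNotFoundErr = err // defer: a policy may still match on PVC fields alone
		pv = ""
	}
	if policy != nil {
		if policy.Type == "snapshot" {
			return true, nil
		}
		return false, nil // e.g. a pvcPhase-based skip policy
	}
	if pv == "" && pvNotFoundErr != nil {
		return false, pvNotFoundErr // no policy matched: report the original failure
	}
	return true, nil
}

func main() {
	ok, err := shouldSnapshot(false, &matchAction{Type: "skip"})
	fmt.Println(ok, err) // the skip policy wins even though the PV lookup failed
	ok, err = shouldSnapshot(false, nil)
	fmt.Println(ok, err != nil) // no policy matched: the saved error is returned
}
```

The key design point the hunk encodes: an unbound PVC is no longer an immediate hard error, but it still becomes one when the user gave no policy that would have skipped it.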
@@ -286,7 +286,7 @@ func TestVolumeHelperImpl_ShouldPerformSnapshot(t *testing.T) {
            expectedErr: false,
        },
        {
            name: "PVC not having PV, return false and error case PV not found",
            name: "PVC not having PV, return false and error when no matching policy",
            inputObj: builder.ForPersistentVolumeClaim("default", "example-pvc").StorageClass("gp2-csi").Result(),
            groupResource: kuberesource.PersistentVolumeClaims,
            resourcePolicies: &resourcepolicies.ResourcePolicies{
@@ -1234,3 +1234,312 @@ func TestNewVolumeHelperImplWithCache_UsesCache(t *testing.T) {
    require.NoError(t, err)
    require.False(t, shouldSnapshot, "Expected snapshot to be skipped due to fs-backup selection via cache")
}

// TestVolumeHelperImpl_ShouldPerformSnapshot_UnboundPVC tests that Pending and Lost PVCs with
// phase-based skip policies don't cause errors when GetPVForPVC would fail.
func TestVolumeHelperImpl_ShouldPerformSnapshot_UnboundPVC(t *testing.T) {
    testCases := []struct {
        name             string
        inputPVC         *corev1api.PersistentVolumeClaim
        resourcePolicies *resourcepolicies.ResourcePolicies
        shouldSnapshot   bool
        expectedErr      bool
    }{
        {
            name: "Pending PVC with phase-based skip policy should not error and return false",
            inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-pending").
                StorageClass("non-existent-class").
                Phase(corev1api.ClaimPending).
                Result(),
            resourcePolicies: &resourcepolicies.ResourcePolicies{
                Version: "v1",
                VolumePolicies: []resourcepolicies.VolumePolicy{
                    {
                        Conditions: map[string]any{
                            "pvcPhase": []string{"Pending"},
                        },
                        Action: resourcepolicies.Action{
                            Type: resourcepolicies.Skip,
                        },
                    },
                },
            },
            shouldSnapshot: false,
            expectedErr:    false,
        },
        {
            name: "Pending PVC without matching skip policy should error (no PV)",
            inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-pending-no-policy").
                StorageClass("non-existent-class").
                Phase(corev1api.ClaimPending).
                Result(),
            resourcePolicies: &resourcepolicies.ResourcePolicies{
                Version: "v1",
                VolumePolicies: []resourcepolicies.VolumePolicy{
                    {
                        Conditions: map[string]any{
                            "storageClass": []string{"gp2-csi"},
                        },
                        Action: resourcepolicies.Action{
                            Type: resourcepolicies.Skip,
                        },
                    },
                },
            },
            shouldSnapshot: false,
            expectedErr:    true,
        },
        {
            name: "Lost PVC with phase-based skip policy should not error and return false",
            inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
                StorageClass("some-class").
                Phase(corev1api.ClaimLost).
                Result(),
            resourcePolicies: &resourcepolicies.ResourcePolicies{
                Version: "v1",
                VolumePolicies: []resourcepolicies.VolumePolicy{
                    {
                        Conditions: map[string]any{
                            "pvcPhase": []string{"Lost"},
                        },
                        Action: resourcepolicies.Action{
                            Type: resourcepolicies.Skip,
                        },
                    },
                },
            },
            shouldSnapshot: false,
            expectedErr:    false,
        },
        {
            name: "Lost PVC with policy for Pending and Lost should not error and return false",
            inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
                StorageClass("some-class").
                Phase(corev1api.ClaimLost).
                Result(),
            resourcePolicies: &resourcepolicies.ResourcePolicies{
                Version: "v1",
                VolumePolicies: []resourcepolicies.VolumePolicy{
                    {
                        Conditions: map[string]any{
                            "pvcPhase": []string{"Pending", "Lost"},
                        },
                        Action: resourcepolicies.Action{
                            Type: resourcepolicies.Skip,
                        },
                    },
                },
            },
            shouldSnapshot: false,
            expectedErr:    false,
        },
    }

    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            fakeClient := velerotest.NewFakeControllerRuntimeClient(t)

            var p *resourcepolicies.Policies
            if tc.resourcePolicies != nil {
                p = &resourcepolicies.Policies{}
                err := p.BuildPolicy(tc.resourcePolicies)
                require.NoError(t, err)
            }

            vh := NewVolumeHelperImpl(
                p,
                ptr.To(true),
                logrus.StandardLogger(),
                fakeClient,
                false,
                false,
            )

            obj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.inputPVC)
            require.NoError(t, err)

            actualShouldSnapshot, actualError := vh.ShouldPerformSnapshot(&unstructured.Unstructured{Object: obj}, kuberesource.PersistentVolumeClaims)
            if tc.expectedErr {
                require.Error(t, actualError, "Want error; Got nil error")
                return
            }

            require.NoError(t, actualError)
            require.Equalf(t, tc.shouldSnapshot, actualShouldSnapshot, "Want shouldSnapshot as %t; Got shouldSnapshot as %t", tc.shouldSnapshot, actualShouldSnapshot)
        })
    }
}

// TestVolumeHelperImpl_ShouldPerformFSBackup_UnboundPVC tests that Pending and Lost PVCs with
// phase-based skip policies don't cause errors when GetPVForPVC would fail.
func TestVolumeHelperImpl_ShouldPerformFSBackup_UnboundPVC(t *testing.T) {
    testCases := []struct {
        name             string
        pod              *corev1api.Pod
        pvc              *corev1api.PersistentVolumeClaim
        resourcePolicies *resourcepolicies.ResourcePolicies
        shouldFSBackup   bool
        expectedErr      bool
    }{
        {
            name: "Pending PVC with phase-based skip policy should not error and return false",
            pod: builder.ForPod("ns", "pod-1").
                Volumes(
                    &corev1api.Volume{
                        Name: "vol-pending",
                        VolumeSource: corev1api.VolumeSource{
                            PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
                                ClaimName: "pvc-pending",
                            },
                        },
                    }).Result(),
            pvc: builder.ForPersistentVolumeClaim("ns", "pvc-pending").
                StorageClass("non-existent-class").
                Phase(corev1api.ClaimPending).
                Result(),
            resourcePolicies: &resourcepolicies.ResourcePolicies{
                Version: "v1",
                VolumePolicies: []resourcepolicies.VolumePolicy{
                    {
                        Conditions: map[string]any{
                            "pvcPhase": []string{"Pending"},
                        },
                        Action: resourcepolicies.Action{
                            Type: resourcepolicies.Skip,
                        },
                    },
                },
            },
            shouldFSBackup: false,
            expectedErr:    false,
        },
        {
            name: "Pending PVC without matching skip policy should error (no PV)",
            pod: builder.ForPod("ns", "pod-1").
                Volumes(
                    &corev1api.Volume{
                        Name: "vol-pending",
                        VolumeSource: corev1api.VolumeSource{
                            PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
                                ClaimName: "pvc-pending-no-policy",
                            },
                        },
                    }).Result(),
            pvc: builder.ForPersistentVolumeClaim("ns", "pvc-pending-no-policy").
                StorageClass("non-existent-class").
                Phase(corev1api.ClaimPending).
                Result(),
            resourcePolicies: &resourcepolicies.ResourcePolicies{
                Version: "v1",
                VolumePolicies: []resourcepolicies.VolumePolicy{
                    {
                        Conditions: map[string]any{
                            "storageClass": []string{"gp2-csi"},
                        },
                        Action: resourcepolicies.Action{
                            Type: resourcepolicies.Skip,
                        },
                    },
                },
            },
            shouldFSBackup: false,
            expectedErr:    true,
        },
        {
            name: "Lost PVC with phase-based skip policy should not error and return false",
            pod: builder.ForPod("ns", "pod-1").
                Volumes(
                    &corev1api.Volume{
                        Name: "vol-lost",
                        VolumeSource: corev1api.VolumeSource{
                            PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
                                ClaimName: "pvc-lost",
                            },
                        },
                    }).Result(),
            pvc: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
                StorageClass("some-class").
                Phase(corev1api.ClaimLost).
                Result(),
            resourcePolicies: &resourcepolicies.ResourcePolicies{
                Version: "v1",
                VolumePolicies: []resourcepolicies.VolumePolicy{
                    {
                        Conditions: map[string]any{
                            "pvcPhase": []string{"Lost"},
                        },
                        Action: resourcepolicies.Action{
                            Type: resourcepolicies.Skip,
                        },
                    },
                },
            },
            shouldFSBackup: false,
            expectedErr:    false,
        },
        {
            name: "Lost PVC with policy for Pending and Lost should not error and return false",
            pod: builder.ForPod("ns", "pod-1").
                Volumes(
                    &corev1api.Volume{
                        Name: "vol-lost",
                        VolumeSource: corev1api.VolumeSource{
                            PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
                                ClaimName: "pvc-lost",
                            },
                        },
                    }).Result(),
            pvc: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
                StorageClass("some-class").
                Phase(corev1api.ClaimLost).
                Result(),
            resourcePolicies: &resourcepolicies.ResourcePolicies{
                Version: "v1",
                VolumePolicies: []resourcepolicies.VolumePolicy{
                    {
                        Conditions: map[string]any{
                            "pvcPhase": []string{"Pending", "Lost"},
                        },
                        Action: resourcepolicies.Action{
                            Type: resourcepolicies.Skip,
                        },
                    },
                },
            },
            shouldFSBackup: false,
            expectedErr:    false,
        },
    }

    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            fakeClient := velerotest.NewFakeControllerRuntimeClient(t, tc.pvc)
            require.NoError(t, fakeClient.Create(t.Context(), tc.pod))

            var p *resourcepolicies.Policies
            if tc.resourcePolicies != nil {
                p = &resourcepolicies.Policies{}
                err := p.BuildPolicy(tc.resourcePolicies)
                require.NoError(t, err)
            }

            vh := NewVolumeHelperImpl(
                p,
                ptr.To(true),
                logrus.StandardLogger(),
                fakeClient,
                false,
                false,
            )

            actualShouldFSBackup, actualError := vh.ShouldPerformFSBackup(tc.pod.Spec.Volumes[0], *tc.pod)
            if tc.expectedErr {
                require.Error(t, actualError, "Want error; Got nil error")
                return
            }

            require.NoError(t, actualError)
            require.Equalf(t, tc.shouldFSBackup, actualShouldFSBackup, "Want shouldFSBackup as %t; Got shouldFSBackup as %t", tc.shouldFSBackup, actualShouldFSBackup)
        })
    }
}

@@ -687,15 +687,14 @@ func (ib *itemBackupper) getMatchAction(obj runtime.Unstructured, groupResource
        return nil, errors.WithStack(err)
    }

    pvName := pvc.Spec.VolumeName
    if pvName == "" {
        return nil, errors.Errorf("PVC has no volume backing this claim")
    }

    pv := &corev1api.PersistentVolume{}
    if err := ib.kbClient.Get(context.Background(), kbClient.ObjectKey{Name: pvName}, pv); err != nil {
        return nil, errors.WithStack(err)
    var pv *corev1api.PersistentVolume
    if pvName := pvc.Spec.VolumeName; pvName != "" {
        pv = &corev1api.PersistentVolume{}
        if err := ib.kbClient.Get(context.Background(), kbClient.ObjectKey{Name: pvName}, pv); err != nil {
            return nil, errors.WithStack(err)
        }
    }
    // If pv is nil for unbound PVCs - policy matching will use PVC-only conditions
    vfd := resourcepolicies.NewVolumeFilterData(pv, nil, pvc)
    return ib.backupRequest.ResPolicies.GetMatchAction(vfd)
}
@@ -709,7 +708,10 @@ func (ib *itemBackupper) trackSkippedPV(obj runtime.Unstructured, groupResource
    if name, err := getPVName(obj, groupResource); len(name) > 0 && err == nil {
        ib.backupRequest.SkippedPVTracker.Track(name, approach, reason)
    } else if err != nil {
        log.WithError(err).Warnf("unable to get PV name, skip tracking.")
        // Log at info level for tracking purposes. This is not an error because
        // it's expected for some resources (e.g., PVCs in Pending or Lost phase)
        // to not have a PV name. This occurs when volume policy skips unbound PVCs.
        log.WithError(err).Infof("unable to get PV name, skip tracking.")
    }
}

@@ -719,6 +721,17 @@ func (ib *itemBackupper) unTrackSkippedPV(obj runtime.Unstructured, groupResourc
    if name, err := getPVName(obj, groupResource); len(name) > 0 && err == nil {
        ib.backupRequest.SkippedPVTracker.Untrack(name)
    } else if err != nil {
        // For PVCs in Pending or Lost phase, it's expected that there's no PV name.
        // Log at debug level instead of warning to reduce noise.
        if groupResource == kuberesource.PersistentVolumeClaims {
            pvc := new(corev1api.PersistentVolumeClaim)
            if convErr := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), pvc); convErr == nil {
                if pvc.Status.Phase == corev1api.ClaimPending || pvc.Status.Phase == corev1api.ClaimLost {
                    log.WithError(err).Debugf("unable to get PV name for %s PVC, skip untracking.", pvc.Status.Phase)
                    return
                }
            }
        }
        log.WithError(err).Warnf("unable to get PV name, skip untracking.")
    }
}

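The `trackSkippedPV`/`unTrackSkippedPV` hunks above downgrade the log level when a missing PV name is the expected state of the resource: Pending or Lost PVCs get a debug message, everything else keeps the warning. That decision can be isolated as a small pure function; the names below are illustrative stand-ins for this sketch, not client-go or Velero types.

```go
package main

import "fmt"

// pvcPhase mirrors corev1.PersistentVolumeClaimPhase for this sketch only.
type pvcPhase string

const (
	claimPending pvcPhase = "Pending"
	claimLost    pvcPhase = "Lost"
	claimBound   pvcPhase = "Bound"
)

// untrackLogLevel mirrors the decision in unTrackSkippedPV above: a missing PV
// name on a Pending/Lost PVC is expected, so it is demoted to debug; for a
// bound PVC or any other resource it stays a warning.
func untrackLogLevel(isPVC bool, phase pvcPhase) string {
	if isPVC && (phase == claimPending || phase == claimLost) {
		return "debug"
	}
	return "warning"
}

func main() {
	fmt.Println(untrackLogLevel(true, claimPending)) // debug: unbound PVC is expected
	fmt.Println(untrackLogLevel(true, claimBound))   // warning: a bound PVC should have a PV name
	fmt.Println(untrackLogLevel(false, ""))          // warning: non-PVC resource
}
```

Keeping the phase check separate from the logging call is what the added code effectively does inline, which is also why the test file below can assert on `level=debug` versus `level=warning` in the captured output.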
@@ -17,12 +17,15 @@ limitations under the License.
package backup

import (
    "bytes"
    "testing"

    "github.com/sirupsen/logrus"
    "github.com/stretchr/testify/require"
    "k8s.io/apimachinery/pkg/runtime/schema"
    ctrlfake "sigs.k8s.io/controller-runtime/pkg/client/fake"

    "github.com/vmware-tanzu/velero/internal/resourcepolicies"
    "github.com/vmware-tanzu/velero/pkg/kuberesource"

    "github.com/stretchr/testify/assert"
@@ -269,3 +272,225 @@ func TestAddVolumeInfo(t *testing.T) {
        })
    }
}

func TestGetMatchAction_PendingLostPVC(t *testing.T) {
    scheme := runtime.NewScheme()
    require.NoError(t, corev1api.AddToScheme(scheme))

    // Create resource policies that skip Pending/Lost PVCs
    resPolicies := &resourcepolicies.ResourcePolicies{
        Version: "v1",
        VolumePolicies: []resourcepolicies.VolumePolicy{
            {
                Conditions: map[string]any{
                    "pvcPhase": []string{"Pending", "Lost"},
                },
                Action: resourcepolicies.Action{
                    Type: resourcepolicies.Skip,
                },
            },
        },
    }
    policies := &resourcepolicies.Policies{}
    err := policies.BuildPolicy(resPolicies)
    require.NoError(t, err)

    testCases := []struct {
        name           string
        pvc            *corev1api.PersistentVolumeClaim
        pv             *corev1api.PersistentVolume
        expectedAction *resourcepolicies.Action
        expectError    bool
    }{
        {
            name: "Pending PVC with no VolumeName should match pvcPhase policy",
            pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
                StorageClass("test-sc").
                Phase(corev1api.ClaimPending).
                Result(),
            pv:             nil,
            expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
            expectError:    false,
        },
        {
            name: "Lost PVC with no VolumeName should match pvcPhase policy",
            pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
                StorageClass("test-sc").
                Phase(corev1api.ClaimLost).
                Result(),
            pv:             nil,
            expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
            expectError:    false,
        },
        {
            name: "Bound PVC with VolumeName and matching PV should not match pvcPhase policy",
            pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
                StorageClass("test-sc").
                VolumeName("test-pv").
                Phase(corev1api.ClaimBound).
                Result(),
            pv:             builder.ForPersistentVolume("test-pv").StorageClass("test-sc").Result(),
            expectedAction: nil,
            expectError:    false,
        },
    }

    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            // Build fake client with PV if present
            clientBuilder := ctrlfake.NewClientBuilder().WithScheme(scheme)
            if tc.pv != nil {
                clientBuilder = clientBuilder.WithObjects(tc.pv)
            }
            fakeClient := clientBuilder.Build()

            ib := &itemBackupper{
                kbClient: fakeClient,
                backupRequest: &Request{
                    ResPolicies: policies,
                },
            }

            // Convert PVC to unstructured
            pvcData, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pvc)
            require.NoError(t, err)
            obj := &unstructured.Unstructured{Object: pvcData}

            action, err := ib.getMatchAction(obj, kuberesource.PersistentVolumeClaims, csiBIAPluginName)
            if tc.expectError {
                require.Error(t, err)
            } else {
                require.NoError(t, err)
            }

            if tc.expectedAction == nil {
                assert.Nil(t, action)
            } else {
                require.NotNil(t, action)
                assert.Equal(t, tc.expectedAction.Type, action.Type)
            }
        })
    }
}

func TestTrackSkippedPV_PendingLostPVC(t *testing.T) {
    testCases := []struct {
        name string
        pvc  *corev1api.PersistentVolumeClaim
    }{
        {
            name: "Pending PVC should log at info level",
            pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
                Phase(corev1api.ClaimPending).
                Result(),
        },
        {
            name: "Lost PVC should log at info level",
            pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
                Phase(corev1api.ClaimLost).
                Result(),
        },
        {
            name: "Bound PVC without VolumeName should log at info level",
            pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
                Phase(corev1api.ClaimBound).
                Result(),
        },
    }

    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            ib := &itemBackupper{
                backupRequest: &Request{
                    SkippedPVTracker: NewSkipPVTracker(),
                },
            }

            // Set up log capture
            logOutput := &bytes.Buffer{}
            logger := logrus.New()
            logger.SetOutput(logOutput)
            logger.SetLevel(logrus.DebugLevel)

            // Convert PVC to unstructured
            pvcData, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pvc)
            require.NoError(t, err)
            obj := &unstructured.Unstructured{Object: pvcData}

            ib.trackSkippedPV(obj, kuberesource.PersistentVolumeClaims, "", "test reason", logger)

            logStr := logOutput.String()
            assert.Contains(t, logStr, "level=info")
            assert.Contains(t, logStr, "unable to get PV name, skip tracking.")
        })
    }
}

func TestUnTrackSkippedPV_PendingLostPVC(t *testing.T) {
    testCases := []struct {
        name               string
        pvc                *corev1api.PersistentVolumeClaim
        expectWarningLog   bool
        expectDebugMessage string
    }{
        {
            name: "Pending PVC should log at debug level, not warning",
            pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
                Phase(corev1api.ClaimPending).
                Result(),
            expectWarningLog:   false,
            expectDebugMessage: "unable to get PV name for Pending PVC, skip untracking.",
        },
        {
            name: "Lost PVC should log at debug level, not warning",
            pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
                Phase(corev1api.ClaimLost).
                Result(),
            expectWarningLog:   false,
            expectDebugMessage: "unable to get PV name for Lost PVC, skip untracking.",
        },
        {
            name: "Bound PVC without VolumeName should log warning",
            pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
                Phase(corev1api.ClaimBound).
                Result(),
            expectWarningLog:   true,
            expectDebugMessage: "",
        },
    }

    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            ib := &itemBackupper{
                backupRequest: &Request{
                    SkippedPVTracker: NewSkipPVTracker(),
                },
            }

            // Set up log capture
            logOutput := &bytes.Buffer{}
            logger := logrus.New()
            logger.SetOutput(logOutput)
            logger.SetLevel(logrus.DebugLevel)

            // Convert PVC to unstructured
            pvcData, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pvc)
            require.NoError(t, err)
            obj := &unstructured.Unstructured{Object: pvcData}

            ib.unTrackSkippedPV(obj, kuberesource.PersistentVolumeClaims, logger)

            logStr := logOutput.String()
            if tc.expectWarningLog {
                assert.Contains(t, logStr, "level=warning")
                assert.Contains(t, logStr, "unable to get PV name, skip untracking.")
            } else {
                assert.NotContains(t, logStr, "level=warning")
                if tc.expectDebugMessage != "" {
                    assert.Contains(t, logStr, "level=debug")
                    assert.Contains(t, logStr, tc.expectDebugMessage)
                }
            }
        })
    }
}

@@ -210,11 +210,9 @@ func resultsKey(ns, name string) string {

func (b *backupper) getMatchAction(resPolicies *resourcepolicies.Policies, pvc *corev1api.PersistentVolumeClaim, volume *corev1api.Volume) (*resourcepolicies.Action, error) {
    if pvc != nil {
        pv := new(corev1api.PersistentVolume)
        err := b.crClient.Get(context.TODO(), ctrlclient.ObjectKey{Name: pvc.Spec.VolumeName}, pv)
        if err != nil {
            return nil, errors.Wrapf(err, "error getting pv for pvc %s", pvc.Spec.VolumeName)
        }
        // Ignore err, if the PV is not available (Pending/Lost PVC or PV fetch failed) - try matching with PVC only
        // GetPVForPVC returns nil for all error cases
        pv, _ := kube.GetPVForPVC(pvc, b.crClient)
        vfd := resourcepolicies.NewVolumeFilterData(pv, nil, pvc)
        return resPolicies.GetMatchAction(vfd)
    }

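In the pod-volume backupper hunk above, the PV lookup error is deliberately discarded (`pv, _ := kube.GetPVForPVC(...)`) and a nil PV is passed into `NewVolumeFilterData`, so only PVC-level conditions such as `pvcPhase` can match. A minimal sketch of that PVC-only matching idea; `volumeFilterData` and `matchSkip` are illustrative names for this sketch, not the real Velero types.

```go
package main

import "fmt"

// volumeFilterData loosely mirrors the idea behind resourcepolicies.VolumeFilterData:
// the PV side may be absent while the PVC side is still usable for matching.
type volumeFilterData struct {
	pvName   string // empty when the PVC is unbound and no PV could be fetched
	pvcPhase string
}

// matchSkip mirrors the PVC-only matching path: with the PV unavailable, only
// PVC-level conditions such as pvcPhase can decide whether to skip the volume.
func matchSkip(vfd volumeFilterData, skipPhases []string) bool {
	for _, p := range skipPhases {
		if vfd.pvcPhase == p {
			return true
		}
	}
	return false
}

func main() {
	// Pending PVC, PV lookup failed (pvName empty): the phase condition still matches.
	fmt.Println(matchSkip(volumeFilterData{pvcPhase: "Pending"}, []string{"Pending", "Lost"}))
	// Bound PVC: the phase condition does not match, so no skip.
	fmt.Println(matchSkip(volumeFilterData{pvName: "pv-1", pvcPhase: "Bound"}, []string{"Pending", "Lost"}))
}
```

This is why the replaced code no longer needs the early `errors.Wrapf` return: an unfetchable PV is a legitimate input to policy matching rather than a failure.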
@@ -309,8 +309,8 @@ func createNodeObj() *corev1api.Node {

 func TestBackupPodVolumes(t *testing.T) {
 	scheme := runtime.NewScheme()
-	velerov1api.AddToScheme(scheme)
-	corev1api.AddToScheme(scheme)
+	require.NoError(t, velerov1api.AddToScheme(scheme))
+	require.NoError(t, corev1api.AddToScheme(scheme))
 	log := logrus.New()

 	tests := []struct {
@@ -778,7 +778,7 @@ func TestWaitAllPodVolumesProcessed(t *testing.T) {

 			backuper := newBackupper(c.ctx, log, nil, nil, informer, nil, "", &velerov1api.Backup{})
 			if c.pvb != nil {
-				backuper.pvbIndexer.Add(c.pvb)
+				require.NoError(t, backuper.pvbIndexer.Add(c.pvb))
 				backuper.wg.Add(1)
 			}
@@ -833,3 +833,185 @@ func TestPVCBackupSummary(t *testing.T) {
 	assert.Empty(t, pbs.Skipped)
 	assert.Len(t, pbs.Backedup, 2)
 }
+
+func TestGetMatchAction_PendingPVC(t *testing.T) {
+	// Create resource policies that skip Pending/Lost PVCs
+	resPolicies := &resourcepolicies.ResourcePolicies{
+		Version: "v1",
+		VolumePolicies: []resourcepolicies.VolumePolicy{
+			{
+				Conditions: map[string]any{
+					"pvcPhase": []string{"Pending", "Lost"},
+				},
+				Action: resourcepolicies.Action{
+					Type: resourcepolicies.Skip,
+				},
+			},
+		},
+	}
+	policies := &resourcepolicies.Policies{}
+	err := policies.BuildPolicy(resPolicies)
+	require.NoError(t, err)
+
+	testCases := []struct {
+		name           string
+		pvc            *corev1api.PersistentVolumeClaim
+		volume         *corev1api.Volume
+		pv             *corev1api.PersistentVolume
+		expectedAction *resourcepolicies.Action
+		expectError    bool
+	}{
+		{
+			name: "Pending PVC with pvcPhase skip policy should return skip action",
+			pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
+				StorageClass("test-sc").
+				Phase(corev1api.ClaimPending).
+				Result(),
+			volume: &corev1api.Volume{
+				Name: "test-volume",
+				VolumeSource: corev1api.VolumeSource{
+					PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
+						ClaimName: "pending-pvc",
+					},
+				},
+			},
+			pv:             nil,
+			expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
+			expectError:    false,
+		},
+		{
+			name: "Lost PVC with pvcPhase skip policy should return skip action",
+			pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
+				StorageClass("test-sc").
+				Phase(corev1api.ClaimLost).
+				Result(),
+			volume: &corev1api.Volume{
+				Name: "test-volume",
+				VolumeSource: corev1api.VolumeSource{
+					PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
+						ClaimName: "lost-pvc",
+					},
+				},
+			},
+			pv:             nil,
+			expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
+			expectError:    false,
+		},
+		{
+			name: "Bound PVC with matching PV should not match pvcPhase policy",
+			pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
+				StorageClass("test-sc").
+				VolumeName("test-pv").
+				Phase(corev1api.ClaimBound).
+				Result(),
+			volume: &corev1api.Volume{
+				Name: "test-volume",
+				VolumeSource: corev1api.VolumeSource{
+					PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
+						ClaimName: "bound-pvc",
+					},
+				},
+			},
+			pv:             builder.ForPersistentVolume("test-pv").StorageClass("test-sc").Result(),
+			expectedAction: nil,
+			expectError:    false,
+		},
+		{
+			name: "Pending PVC with no matching policy should return nil action",
+			pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc-no-match").
+				StorageClass("test-sc").
+				Phase(corev1api.ClaimPending).
+				Result(),
+			volume: &corev1api.Volume{
+				Name: "test-volume",
+				VolumeSource: corev1api.VolumeSource{
+					PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
+						ClaimName: "pending-pvc-no-match",
+					},
+				},
+			},
+			pv:             nil,
+			expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip}, // Will match the pvcPhase policy
+			expectError:    false,
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			// Build fake client with PV if present
+			var objs []runtime.Object
+			if tc.pv != nil {
+				objs = append(objs, tc.pv)
+			}
+			fakeClient := velerotest.NewFakeControllerRuntimeClient(t, objs...)
+
+			b := &backupper{
+				crClient: fakeClient,
+			}
+
+			action, err := b.getMatchAction(policies, tc.pvc, tc.volume)
+			if tc.expectError {
+				require.Error(t, err)
+			} else {
+				require.NoError(t, err)
+			}
+
+			if tc.expectedAction == nil {
+				assert.Nil(t, action)
+			} else {
+				require.NotNil(t, action)
+				assert.Equal(t, tc.expectedAction.Type, action.Type)
+			}
+		})
+	}
+}
+
+func TestGetMatchAction_PVCWithoutPVLookupError(t *testing.T) {
+	// Test that when a PVC has a VolumeName but the PV doesn't exist,
+	// the function ignores the error and tries to match with PVC only
+	resPolicies := &resourcepolicies.ResourcePolicies{
+		Version: "v1",
+		VolumePolicies: []resourcepolicies.VolumePolicy{
+			{
+				Conditions: map[string]any{
+					"pvcPhase": []string{"Pending"},
+				},
+				Action: resourcepolicies.Action{
+					Type: resourcepolicies.Skip,
+				},
+			},
+		},
+	}
+	policies := &resourcepolicies.Policies{}
+	err := policies.BuildPolicy(resPolicies)
+	require.NoError(t, err)
+
+	// Pending PVC without a matching PV in the cluster
+	pvc := builder.ForPersistentVolumeClaim("ns", "pending-pvc").
+		StorageClass("test-sc").
+		Phase(corev1api.ClaimPending).
+		Result()
+
+	volume := &corev1api.Volume{
+		Name: "test-volume",
+		VolumeSource: corev1api.VolumeSource{
+			PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
+				ClaimName: "pending-pvc",
+			},
+		},
+	}
+
+	// Empty client - no PV exists
+	fakeClient := velerotest.NewFakeControllerRuntimeClient(t)
+
+	b := &backupper{
+		crClient: fakeClient,
+	}
+
+	// Should succeed even though PV lookup would fail
+	// because the function ignores PV lookup errors and uses PVC-only matching
+	action, err := b.getMatchAction(policies, pvc, volume)
+	require.NoError(t, err)
+	require.NotNil(t, action)
+	assert.Equal(t, resourcepolicies.Skip, action.Type)
+}
@@ -666,10 +666,22 @@ func validateNamespaceName(ns string) []error {
 		return nil
 	}

-	// Kubernetes does not allow asterisks in namespaces but Velero uses them as
-	// wildcards. Replace asterisks with an arbitrary letter to pass Kubernetes
-	// validation.
-	tmpNamespace := strings.ReplaceAll(ns, "*", "x")
+	// Validate the namespace name to ensure it is a valid wildcard pattern
+	if err := wildcard.ValidateNamespaceName(ns); err != nil {
+		return []error{err}
+	}
+
+	// Kubernetes does not allow wildcard characters in namespaces but Velero uses them
+	// for glob patterns. Replace wildcard characters with valid characters to pass
+	// Kubernetes validation.
+	tmpNamespace := ns
+
+	// Replace glob wildcard characters with valid alphanumeric characters
+	// Note: Validation of wildcard patterns is handled by the wildcard package.
+	tmpNamespace = strings.ReplaceAll(tmpNamespace, "*", "x") // matches any sequence
+	tmpNamespace = strings.ReplaceAll(tmpNamespace, "?", "x") // matches single character
+	tmpNamespace = strings.ReplaceAll(tmpNamespace, "[", "x") // character class start
+	tmpNamespace = strings.ReplaceAll(tmpNamespace, "]", "x") // character class end

 	if errMsgs := validation.ValidateNamespaceName(tmpNamespace, false); errMsgs != nil {
 		for _, msg := range errMsgs {
@@ -289,6 +289,54 @@ func TestValidateNamespaceIncludesExcludes(t *testing.T) {
 			excludes: []string{"bar"},
 			wantErr:  true,
 		},
+		{
+			name:     "glob characters in includes should not error",
+			includes: []string{"kube-*", "test-?", "ns-[0-9]"},
+			excludes: []string{},
+			wantErr:  false,
+		},
+		{
+			name:     "glob characters in excludes should not error",
+			includes: []string{"default"},
+			excludes: []string{"test-*", "app-?", "ns-[1-5]"},
+			wantErr:  false,
+		},
+		{
+			name:     "character class in includes should not error",
+			includes: []string{"ns-[abc]", "test-[0-9]"},
+			excludes: []string{},
+			wantErr:  false,
+		},
+		{
+			name:     "mixed glob patterns should not error",
+			includes: []string{"kube-*", "test-?"},
+			excludes: []string{"*-test", "debug-[0-9]"},
+			wantErr:  false,
+		},
+		{
+			name:     "pipe character in includes should error",
+			includes: []string{"namespace|other"},
+			excludes: []string{},
+			wantErr:  true,
+		},
+		{
+			name:     "parentheses in includes should error",
+			includes: []string{"namespace(prod)", "test-(dev)"},
+			excludes: []string{},
+			wantErr:  true,
+		},
+		{
+			name:     "exclamation mark in includes should error",
+			includes: []string{"!namespace", "test!"},
+			excludes: []string{},
+			wantErr:  true,
+		},
+		{
+			name:     "unsupported characters in excludes should error",
+			includes: []string{"default"},
+			excludes: []string{"test|prod", "app(staging)"},
+			wantErr:  true,
+		},
 	}

 	for _, tc := range tests {
@@ -1082,16 +1130,6 @@ func TestExpandIncludesExcludes(t *testing.T) {
 			expectedWildcardExpanded: true,
 			expectError:              false,
 		},
-		{
-			name:                     "brace wildcard pattern",
-			includes:                 []string{"app-{prod,dev}"},
-			excludes:                 []string{},
-			activeNamespaces:         []string{"app-prod", "app-dev", "app-test", "default"},
-			expectedIncludes:         []string{"app-prod", "app-dev"},
-			expectedExcludes:         []string{},
-			expectedWildcardExpanded: true,
-			expectError:              false,
-		},
 		{
 			name:     "empty activeNamespaces with wildcards",
 			includes: []string{"kube-*"},

@@ -1233,13 +1271,6 @@ func TestResolveNamespaceList(t *testing.T) {
 			expectedNamespaces: []string{"kube-system", "kube-public"},
 			preExpandWildcards: true,
 		},
-		{
-			name:               "complex wildcard pattern",
-			includes:           []string{"app-{prod,dev}", "kube-*"},
-			excludes:           []string{"*-test"},
-			activeNamespaces:   []string{"app-prod", "app-dev", "app-test", "kube-system", "kube-test", "default"},
-			expectedNamespaces: []string{"app-prod", "app-dev", "kube-system"},
-		},
 		{
 			name:     "question mark wildcard pattern",
 			includes: []string{"ns-?"},
@@ -417,19 +417,19 @@ func MakePodPVCAttachment(volumeName string, volumeMode *corev1api.PersistentVol
 	return volumeMounts, volumeDevices, volumePath
 }

 // GetPVForPVC returns the PersistentVolume backing a PVC
 // returns PV, error.
 // PV will be nil on error
 func GetPVForPVC(
 	pvc *corev1api.PersistentVolumeClaim,
 	crClient crclient.Client,
 ) (*corev1api.PersistentVolume, error) {
 	if pvc.Spec.VolumeName == "" {
-		return nil, errors.Errorf("PVC %s/%s has no volume backing this claim",
-			pvc.Namespace, pvc.Name)
+		return nil, errors.Errorf("PVC %s/%s has no volume backing this claim", pvc.Namespace, pvc.Name)
 	}
 	if pvc.Status.Phase != corev1api.ClaimBound {
 		// TODO: confirm if this PVC should be snapshotted if it has no PV bound
-		return nil,
-			errors.Errorf("PVC %s/%s is in phase %v and is not bound to a volume",
-				pvc.Namespace, pvc.Name, pvc.Status.Phase)
+		return nil, errors.Errorf("PVC %s/%s is in phase %v and is not bound to a volume",
+			pvc.Namespace, pvc.Name, pvc.Status.Phase)
 	}

 	pv := &corev1api.PersistentVolume{}
@@ -31,70 +31,77 @@ func ShouldExpandWildcards(includes []string, excludes []string) bool {
 }

 // containsWildcardPattern checks if a pattern contains any wildcard symbols
-// Supported patterns: *, ?, [abc], {a,b,c}
+// Supported patterns: *, ?, [abc]
 // Note: . and + are treated as literal characters (not wildcards)
+// Note: ** and consecutive asterisks are NOT supported (will cause validation error)
 func containsWildcardPattern(pattern string) bool {
-	return strings.ContainsAny(pattern, "*?[{")
+	return strings.ContainsAny(pattern, "*?[")
 }

 func validateWildcardPatterns(patterns []string) error {
 	for _, pattern := range patterns {
-		// Check for invalid regex-only patterns that we don't support
-		if strings.ContainsAny(pattern, "|()") {
-			return errors.New("wildcard pattern contains unsupported regex symbols: |, (, )")
-		}
-
-		// Check for consecutive asterisks (2 or more)
-		if strings.Contains(pattern, "**") {
-			return errors.New("wildcard pattern contains consecutive asterisks (only single * allowed)")
-		}
-
-		// Check for malformed brace patterns
-		if err := validateBracePatterns(pattern); err != nil {
+		if err := ValidateNamespaceName(pattern); err != nil {
 			return err
 		}
 	}
 	return nil
 }

+func ValidateNamespaceName(pattern string) error {
+	// Check for invalid characters that are not supported in glob patterns
+	if strings.ContainsAny(pattern, "|()!{},") {
+		return errors.New("wildcard pattern contains unsupported characters: |, (, ), !, {, }, ,")
+	}
+
+	// Check for consecutive asterisks (2 or more)
+	if strings.Contains(pattern, "**") {
+		return errors.New("wildcard pattern contains consecutive asterisks (only single * allowed)")
+	}
+
+	// Check for malformed brace patterns
+	if err := validateBracePatterns(pattern); err != nil {
+		return err
+	}
+
+	return nil
+}
+
 // validateBracePatterns checks for malformed brace patterns like unclosed braces or empty braces
+// Also validates bracket patterns [] for character classes
 func validateBracePatterns(pattern string) error {
-	depth := 0
+	bracketDepth := 0

 	for i := 0; i < len(pattern); i++ {
-		if pattern[i] == '{' {
-			braceStart := i
-			depth++
+		if pattern[i] == '[' {
+			bracketStart := i
+			bracketDepth++

-			// Scan ahead to find the matching closing brace and validate content
-			for j := i + 1; j < len(pattern) && depth > 0; j++ {
-				if pattern[j] == '{' {
-					depth++
-				} else if pattern[j] == '}' {
-					depth--
-					if depth == 0 {
-						// Found matching closing brace - validate content
-						content := pattern[braceStart+1 : j]
-						if strings.Trim(content, ", \t") == "" {
-							return errors.New("wildcard pattern contains empty brace pattern '{}'")
+			// Scan ahead to find the matching closing bracket and validate content
+			for j := i + 1; j < len(pattern) && bracketDepth > 0; j++ {
+				if pattern[j] == ']' {
+					bracketDepth--
+					if bracketDepth == 0 {
+						// Found matching closing bracket - validate content
+						content := pattern[bracketStart+1 : j]
+						if content == "" {
+							return errors.New("wildcard pattern contains empty bracket pattern '[]'")
 						}
-						// Skip to the closing brace
+						// Skip to the closing bracket
 						i = j
 						break
 					}
 				}
 			}

-			// If we exited the loop without finding a match (depth > 0), brace is unclosed
-			if depth > 0 {
-				return errors.New("wildcard pattern contains unclosed brace '{'")
+			// If we exited the loop without finding a match (bracketDepth > 0), bracket is unclosed
+			if bracketDepth > 0 {
+				return errors.New("wildcard pattern contains unclosed bracket '['")
 			}

-			// i is now positioned at the closing brace; the outer loop will increment it
-		} else if pattern[i] == '}' {
-			// Found a closing brace without a matching opening brace
-			return errors.New("wildcard pattern contains unmatched closing brace '}'")
+			// i is now positioned at the closing bracket; the outer loop will increment it
+		} else if pattern[i] == ']' {
+			// Found a closing bracket without a matching opening bracket
+			return errors.New("wildcard pattern contains unmatched closing bracket ']'")
 		}
 	}
@@ -90,7 +90,7 @@ func TestShouldExpandWildcards(t *testing.T) {
 			name:     "brace alternatives wildcard",
 			includes: []string{"ns{prod,staging}"},
 			excludes: []string{},
-			expected: true, // brace alternatives are considered wildcard
+			expected: false, // brace alternatives are not supported
 		},
 		{
 			name: "dot is literal - not wildcard",

@@ -237,9 +237,9 @@ func TestExpandWildcards(t *testing.T) {
 			activeNamespaces: []string{"app-prod", "app-staging", "app-dev", "db-prod"},
 			includes:         []string{"app-{prod,staging}"},
 			excludes:         []string{},
-			expectedIncludes: []string{"app-prod", "app-staging"}, // {prod,staging} matches either
+			expectedIncludes: nil,
 			expectedExcludes: nil,
-			expectError:      false,
+			expectError:      true,
 		},
 		{
 			name: "literal dot and plus patterns",

@@ -259,33 +259,6 @@ func TestExpandWildcards(t *testing.T) {
 			expectedExcludes: nil,
 			expectError:      true, // |, (, ) are not supported
 		},
-		{
-			name:             "unclosed brace patterns should error",
-			activeNamespaces: []string{"app-prod"},
-			includes:         []string{"app-{prod,staging"},
-			excludes:         []string{},
-			expectedIncludes: nil,
-			expectedExcludes: nil,
-			expectError:      true, // unclosed brace
-		},
-		{
-			name:             "empty brace patterns should error",
-			activeNamespaces: []string{"app-prod"},
-			includes:         []string{"app-{}"},
-			excludes:         []string{},
-			expectedIncludes: nil,
-			expectedExcludes: nil,
-			expectError:      true, // empty braces
-		},
-		{
-			name:             "unmatched closing brace should error",
-			activeNamespaces: []string{"app-prod"},
-			includes:         []string{"app-prod}"},
-			excludes:         []string{},
-			expectedIncludes: nil,
-			expectedExcludes: nil,
-			expectError:      true, // unmatched closing brace
-		},
 	}

 	for _, tt := range tests {
@@ -354,13 +327,6 @@ func TestExpandWildcardsPrivate(t *testing.T) {
 			expected:    []string{}, // returns empty slice, not nil
 			expectError: false,
 		},
-		{
-			name:             "brace patterns work correctly",
-			patterns:         []string{"app-{prod,staging}"},
-			activeNamespaces: []string{"app-prod", "app-staging", "app-dev", "app-{prod,staging}"},
-			expected:         []string{"app-prod", "app-staging"}, // brace patterns do expand
-			expectError:      false,
-		},
 		{
 			name:     "duplicate matches from multiple patterns",
 			patterns: []string{"app-*", "*-prod"},

@@ -389,20 +355,6 @@ func TestExpandWildcardsPrivate(t *testing.T) {
 			expected:    []string{"nsa", "nsb", "nsc"}, // [a-c] matches a to c
 			expectError: false,
 		},
-		{
-			name:             "negated character class",
-			patterns:         []string{"ns[!abc]"},
-			activeNamespaces: []string{"nsa", "nsb", "nsc", "nsd", "ns1"},
-			expected:         []string{"nsd", "ns1"}, // [!abc] matches anything except a, b, c
-			expectError:      false,
-		},
-		{
-			name:             "brace alternatives",
-			patterns:         []string{"app-{prod,test}"},
-			activeNamespaces: []string{"app-prod", "app-test", "app-staging", "db-prod"},
-			expected:         []string{"app-prod", "app-test"}, // {prod,test} matches either
-			expectError:      false,
-		},
 		{
 			name:     "double asterisk should error",
 			patterns: []string{"**"},

@@ -410,13 +362,6 @@ func TestExpandWildcardsPrivate(t *testing.T) {
 			expected:    nil,
 			expectError: true, // ** is not allowed
 		},
-		{
-			name:             "literal dot and plus",
-			patterns:         []string{"app.prod", "service+"},
-			activeNamespaces: []string{"app.prod", "appXprod", "service+", "service"},
-			expected:         []string{"app.prod", "service+"}, // . and + are literal
-			expectError:      false,
-		},
 		{
 			name:     "unsupported regex symbols should error",
 			patterns: []string{"ns(1|2)"},
@@ -468,153 +413,101 @@ func TestValidateBracePatterns(t *testing.T) {
 		expectError bool
 		errorMsg    string
 	}{
-		// Valid patterns
+		// Valid square bracket patterns
 		{
-			name:        "valid single brace pattern",
-			pattern:     "app-{prod,staging}",
+			name:        "valid square bracket pattern",
+			pattern:     "ns[abc]",
 			expectError: false,
 		},
 		{
-			name:        "valid brace with single option",
-			pattern:     "app-{prod}",
+			name:        "valid square bracket pattern with range",
+			pattern:     "ns[a-z]",
 			expectError: false,
 		},
 		{
-			name:        "valid brace with three options",
-			pattern:     "app-{prod,staging,dev}",
+			name:        "valid square bracket pattern with numbers",
+			pattern:     "ns[0-9]",
 			expectError: false,
 		},
 		{
-			name:        "valid pattern with text before and after brace",
-			pattern:     "prefix-{a,b}-suffix",
+			name:        "valid square bracket pattern with mixed",
+			pattern:     "ns[a-z0-9]",
 			expectError: false,
 		},
 		{
-			name:        "valid pattern with no braces",
-			pattern:     "app-prod",
+			name:        "valid square bracket pattern with single character",
+			pattern:     "ns[a]",
 			expectError: false,
 		},
 		{
-			name:        "valid pattern with asterisk",
-			pattern:     "app-*",
+			name:        "valid square bracket pattern with text before and after",
+			pattern:     "prefix-[abc]-suffix",
 			expectError: false,
 		},
+		// Unclosed opening brackets
 		{
-			name:        "valid brace with spaces around content",
-			pattern:     "app-{ prod , staging }",
-			expectError: false,
+			name:        "unclosed opening bracket at end",
+			pattern:     "ns[abc",
+			expectError: true,
+			errorMsg:    "unclosed bracket",
 		},
 		{
-			name:        "valid brace with numbers",
-			pattern:     "ns-{1,2,3}",
-			expectError: false,
+			name:        "unclosed opening bracket at start",
+			pattern:     "[abc",
+			expectError: true,
+			errorMsg:    "unclosed bracket",
 		},
 		{
-			name:        "valid brace with hyphens in options",
-			pattern:     "{app-prod,db-staging}",
-			expectError: false,
+			name:        "unclosed opening bracket in middle",
+			pattern:     "ns[abc-test",
+			expectError: true,
+			errorMsg:    "unclosed bracket",
 		},

-		// Unclosed opening braces
+		// Unmatched closing brackets
 		{
-			name:        "unclosed opening brace at end",
-			pattern:     "app-{prod,staging",
+			name:        "unmatched closing bracket at end",
+			pattern:     "ns-abc]",
 			expectError: true,
-			errorMsg:    "unclosed brace",
+			errorMsg:    "unmatched closing bracket",
 		},
 		{
-			name:        "unclosed opening brace at start",
-			pattern:     "{prod,staging",
+			name:        "unmatched closing bracket at start",
+			pattern:     "]ns-abc",
 			expectError: true,
-			errorMsg:    "unclosed brace",
+			errorMsg:    "unmatched closing bracket",
 		},
 		{
-			name:        "unclosed opening brace in middle",
-			pattern:     "app-{prod-test",
+			name:        "unmatched closing bracket in middle",
+			pattern:     "ns-]abc",
 			expectError: true,
-			errorMsg:    "unclosed brace",
+			errorMsg:    "unmatched closing bracket",
 		},
 		{
-			name:        "multiple unclosed braces",
-			pattern:     "app-{prod-{staging",
+			name:        "extra closing bracket after valid pair",
+			pattern:     "ns[abc]]",
 			expectError: true,
-			errorMsg:    "unclosed brace",
+			errorMsg:    "unmatched closing bracket",
 		},

-		// Unmatched closing braces
+		// Empty bracket patterns
 		{
-			name:        "unmatched closing brace at end",
-			pattern:     "app-prod}",
+			name:        "completely empty brackets",
+			pattern:     "ns[]",
 			expectError: true,
-			errorMsg:    "unmatched closing brace",
+			errorMsg:    "empty bracket pattern",
 		},
 		{
-			name:        "unmatched closing brace at start",
-			pattern:     "}app-prod",
+			name:        "empty brackets at start",
+			pattern:     "[]ns",
 			expectError: true,
-			errorMsg:    "unmatched closing brace",
+			errorMsg:    "empty bracket pattern",
 		},
 		{
-			name:        "unmatched closing brace in middle",
-			pattern:     "app-}prod",
+			name:        "empty brackets standalone",
+			pattern:     "[]",
 			expectError: true,
-			errorMsg:    "unmatched closing brace",
-		},
-		{
-			name:        "extra closing brace after valid pair",
-			pattern:     "app-{prod,staging}}",
-			expectError: true,
-			errorMsg:    "unmatched closing brace",
-		},
-
-		// Empty brace patterns
-		{
-			name:        "completely empty braces",
-			pattern:     "app-{}",
-			expectError: true,
-			errorMsg:    "empty brace pattern",
-		},
-		{
-			name:        "braces with only spaces",
-			pattern:     "app-{ }",
-			expectError: true,
-			errorMsg:    "empty brace pattern",
-		},
-		{
-			name:        "braces with only comma",
-			pattern:     "app-{,}",
-			expectError: true,
-			errorMsg:    "empty brace pattern",
-		},
-		{
-			name:        "braces with only commas",
-			pattern:     "app-{,,,}",
-			expectError: true,
-			errorMsg:    "empty brace pattern",
-		},
-		{
-			name:        "braces with commas and spaces",
-			pattern:     "app-{ , , }",
-			expectError: true,
-			errorMsg:    "empty brace pattern",
-		},
-		{
-			name:        "braces with tabs and commas",
-			pattern:     "app-{\t,\t}",
-			expectError: true,
-			errorMsg:    "empty brace pattern",
-		},
-		{
-			name:        "empty braces at start",
-			pattern:     "{}app-prod",
-			expectError: true,
-			errorMsg:    "empty brace pattern",
-		},
-		{
-			name:        "empty braces standalone",
-			pattern:     "{}",
-			expectError: true,
-			errorMsg:    "empty brace pattern",
+			errorMsg:    "empty bracket pattern",
 		},

 		// Edge cases

@@ -623,58 +516,6 @@ func TestValidateBracePatterns(t *testing.T) {
 			pattern:     "",
 			expectError: false,
 		},
-		{
-			name:        "pattern with only opening brace",
-			pattern:     "{",
-			expectError: true,
-			errorMsg:    "unclosed brace",
-		},
-		{
-			name:        "pattern with only closing brace",
-			pattern:     "}",
-			expectError: true,
-			errorMsg:    "unmatched closing brace",
-		},
-		{
-			name:        "valid brace with special characters inside",
-			pattern:     "app-{prod-1,staging_2,dev.3}",
-			expectError: false,
-		},
-		{
-			name:        "brace with asterisk inside option",
-			pattern:     "app-{prod*,staging}",
-			expectError: false,
-		},
-		{
-			name:        "multiple valid brace patterns",
-			pattern:     "{app,db}-{prod,staging}",
-			expectError: false,
-		},
-		{
-			name:        "brace with single character",
-			pattern:     "app-{a}",
-			expectError: false,
-		},
-		{
-			name:        "brace with trailing comma but has content",
-			pattern:     "app-{prod,staging,}",
-			expectError: false, // Has content, so it's valid
-		},
-		{
-			name:        "brace with leading comma but has content",
-			pattern:     "app-{,prod,staging}",
-			expectError: false, // Has content, so it's valid
-		},
-		{
-			name:        "brace with leading comma but has content",
-			pattern:     "app-{{,prod,staging}",
-			expectError: true, // unclosed brace
-		},
-		{
-			name:        "brace with leading comma but has content",
-			pattern:     "app-{,prod,staging}}",
-			expectError: true, // unmatched closing brace
-		},
 	}

 	for _, tt := range tests {
@@ -723,20 +564,6 @@ func TestExpandWildcardsEdgeCases(t *testing.T) {
 		assert.ElementsMatch(t, []string{"ns-1", "ns_2", "ns.3", "ns@4"}, result)
 	})

-	t.Run("complex glob combinations", func(t *testing.T) {
-		activeNamespaces := []string{"app1-prod", "app2-prod", "app1-test", "db-prod", "service"}
-		result, err := expandWildcards([]string{"app?-{prod,test}"}, activeNamespaces)
-		require.NoError(t, err)
-		assert.ElementsMatch(t, []string{"app1-prod", "app2-prod", "app1-test"}, result)
-	})
-
-	t.Run("escaped characters", func(t *testing.T) {
-		activeNamespaces := []string{"app*", "app-prod", "app?test", "app-test"}
-		result, err := expandWildcards([]string{"app\\*"}, activeNamespaces)
-		require.NoError(t, err)
-		assert.ElementsMatch(t, []string{"app*"}, result)
-	})
-
 	t.Run("mixed literal and wildcard patterns", func(t *testing.T) {
 		activeNamespaces := []string{"app.prod", "app-prod", "app_prod", "test.ns"}
 		result, err := expandWildcards([]string{"app.prod", "app?prod"}, activeNamespaces)

@@ -777,12 +604,8 @@ func TestExpandWildcardsEdgeCases(t *testing.T) {
 		shouldError bool
 	}{
 		{"unclosed bracket", "ns[abc", true},
-		{"unclosed brace", "app-{prod,staging", true},
-		{"nested unclosed", "ns[a{bc", true},
 		{"valid bracket", "ns[abc]", false},
-		{"valid brace", "app-{prod,staging}", false},
 		{"empty bracket", "ns[]", true}, // empty brackets are invalid
-		{"empty brace", "app-{}", true}, // empty braces are invalid
 	}

 	for _, tt := range tests {
@@ -15,6 +15,7 @@ params:
   latest: v1.17
   versions:
   - main
+  - v1.18
  - v1.17
  - v1.16
  - v1.15
@@ -16,6 +16,8 @@ Backup belongs to the API group version `velero.io/v1`.

 Here is a sample `Backup` object with each of the fields documented:

+**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.
+
 ```yaml
 # Standard Kubernetes API Version declaration. Required.
 apiVersion: velero.io/v1

@@ -42,11 +44,12 @@ spec:
   resourcePolicy:
     kind: configmap
     name: resource-policy-configmap
-  # Array of namespaces to include in the backup. If unspecified, all namespaces are included.
-  # Optional.
+  # Array of namespaces to include in the backup. Accepts glob patterns (*, ?, [abc]).
+  # Note: '*' alone is reserved for empty fields, which means all namespaces.
+  # If unspecified, all namespaces are included. Optional.
   includedNamespaces:
   - '*'
-  # Array of namespaces to exclude from the backup. Optional.
+  # Array of namespaces to exclude from the backup. Accepts glob patterns (*, ?, [abc]). Optional.
   excludedNamespaces:
   - some-namespace
   # Array of resources to include in the backup. Resources may be shortcuts (for example 'po' for 'pods')
@@ -16,6 +16,8 @@ Restore belongs to the API group version `velero.io/v1`.

 Here is a sample `Restore` object with each of the fields documented:

+**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.
+
 ```yaml
 # Standard Kubernetes API Version declaration. Required.
 apiVersion: velero.io/v1

@@ -45,11 +47,11 @@ spec:
   writeSparseFiles: true
   # ParallelFilesDownload is the concurrency number setting for restore
   parallelFilesDownload: 10
-  # Array of namespaces to include in the restore. If unspecified, all namespaces are included.
-  # Optional.
+  # Array of namespaces to include in the restore. Accepts glob patterns (*, ?, [abc]).
+  # If unspecified, all namespaces are included. Optional.
   includedNamespaces:
   - '*'
-  # Array of namespaces to exclude from the restore. Optional.
+  # Array of namespaces to exclude from the restore. Accepts glob patterns (*, ?, [abc]). Optional.
   excludedNamespaces:
   - some-namespace
   # Array of resources to include in the restore. Resources may be shortcuts (for example 'po' for 'pods')
71
site/content/docs/main/namespace-glob-patterns.md
Normal file
@@ -0,0 +1,71 @@
---
title: "Namespace Glob Patterns"
layout: docs
---

When using `--include-namespaces` and `--exclude-namespaces` flags with backup and restore commands, you can use glob patterns to match multiple namespaces.

## Supported Patterns

Velero supports the following glob pattern characters:

- `*` - Matches any sequence of characters
  ```bash
  velero backup create my-backup --include-namespaces "app-*"
  # Matches: app-prod, app-staging, app-dev, etc.
  ```

- `?` - Matches exactly one character
  ```bash
  velero backup create my-backup --include-namespaces "ns?"
  # Matches: ns1, ns2, nsa, but NOT ns10
  ```

- `[abc]` - Matches any single character in the brackets
  ```bash
  velero backup create my-backup --include-namespaces "ns[123]"
  # Matches: ns1, ns2, ns3
  ```

- `[a-z]` - Matches any single character in the range
  ```bash
  velero backup create my-backup --include-namespaces "ns[a-c]"
  # Matches: nsa, nsb, nsc
  ```

## Unsupported Patterns

The following patterns are **not supported** and will cause validation errors:

- `**` - Consecutive asterisks
- `|` - Alternation (regex operator)
- `()` - Grouping (regex operators)
- `!` - Negation
- `{}` - Brace expansion
- `,` - Comma (used in brace expansion)

## Special Cases

- `*` alone means "all namespaces" and is not expanded
- Empty brackets `[]` are invalid
- Unmatched or unclosed brackets will cause validation errors
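The rules above could be sketched as a small validation helper. This is a hypothetical illustration of the checks the documentation describes, not Velero's actual validator:

```go
package main

import (
	"fmt"
	"strings"
)

// validateNamespaceGlob rejects the unsupported constructs listed above.
// Illustrative sketch only; Velero's real validation may differ.
func validateNamespaceGlob(pattern string) error {
	if pattern == "*" {
		return nil // "*" alone means all namespaces and is not expanded
	}
	if strings.Contains(pattern, "**") {
		return fmt.Errorf("consecutive asterisks are not supported: %q", pattern)
	}
	for _, c := range []string{"|", "(", ")", "!", "{", "}"} {
		if strings.Contains(pattern, c) {
			return fmt.Errorf("unsupported character %s in pattern %q", c, pattern)
		}
	}
	if strings.Contains(pattern, "[]") {
		return fmt.Errorf("empty brackets are invalid: %q", pattern)
	}
	if strings.Count(pattern, "[") != strings.Count(pattern, "]") {
		return fmt.Errorf("unmatched bracket in pattern %q", pattern)
	}
	return nil
}

func main() {
	for _, p := range []string{"app-*", "ns[]", "app-[0-9", "a**b"} {
		fmt.Println(p, "->", validateNamespaceGlob(p))
	}
}
```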

## Examples

Combine patterns with include and exclude flags:

```bash
# Backup all production namespaces except test
velero backup create prod-backup \
  --include-namespaces "*-prod" \
  --exclude-namespaces "test-*"

# Backup specific numbered namespaces
velero backup create numbered-backup \
  --include-namespaces "app-[0-9]"

# Restore namespaces matching multiple patterns
velero restore create my-restore \
  --from-backup my-backup \
  --include-namespaces "frontend-*,backend-*"
```

@@ -17,7 +17,11 @@ Wildcard takes precedence when both a wildcard and specific resource are include

### --include-namespaces

Namespaces to include. Default is `*`, all namespaces.
Namespaces to include. Accepts glob patterns (`*`, `?`, `[abc]`). Default is `*`, all namespaces.

See [Namespace Glob Patterns](namespace-glob-patterns) for more details on supported patterns.

Note: `*` alone is reserved for empty fields, which means all namespaces.

* Backup a namespace and its objects.

@@ -158,7 +162,9 @@ Wildcard excludes are ignored.

### --exclude-namespaces

Namespaces to exclude.
Namespaces to exclude. Accepts glob patterns (`*`, `?`, `[abc]`).

See [Namespace Glob Patterns](namespace-glob-patterns.md) for more details on supported patterns.

* Exclude kube-system from the cluster backup.

58
site/content/docs/v1.18/_index.md
Normal file
@@ -0,0 +1,58 @@
---
toc: "false"
cascade:
  version: v1.18
  toc: "true"
---
![100]

[![Build Status][1]][2]

## Overview

Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you:

* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
* Replicate your production cluster to development and testing clusters.

Velero consists of:

* A server that runs on your cluster
* A command-line client that runs locally

## Documentation

This site is our documentation home with installation instructions, plus information about customizing Velero for your needs, architecture, extending Velero, contributing to Velero and more.

Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.

## Troubleshooting

If you encounter issues, review the [troubleshooting docs][30], [file an issue][4], or talk to us on the [#velero-users channel][25] on the Kubernetes Slack server.

## Contributing

If you are ready to jump in and test, add code, or help with documentation, follow the instructions in our [Start contributing](https://velero.io/docs/v1.18.0/start-contributing/) documentation for guidance on how to set up Velero for development.

## Changelog

See [the list of releases][6] to find out about feature changes.

[1]: https://github.com/vmware-tanzu/velero/workflows/Main%20CI/badge.svg
[2]: https://github.com/vmware-tanzu/velero/actions?query=workflow%3A"Main+CI"

[4]: https://github.com/vmware-tanzu/velero/issues
[6]: https://github.com/vmware-tanzu/velero/releases

[9]: https://kubernetes.io/docs/setup/
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
[12]: https://github.com/kubernetes/kubernetes/blob/main/cluster/addons/dns/README.md
[14]: https://github.com/kubernetes/kubernetes
[24]: https://groups.google.com/forum/#!forum/projectvelero
[25]: https://kubernetes.slack.com/messages/velero-users

[30]: troubleshooting.md

[100]: img/velero.png
21
site/content/docs/v1.18/api-types/README.md
Normal file
@@ -0,0 +1,21 @@
---
title: "Table of Contents"
layout: docs
---

## API types

Here we list the API types that have some functionality that you can only configure via json/yaml vs the `velero` cli
(hooks)

* [Backup][1]
* [Restore][2]
* [Schedule][3]
* [BackupStorageLocation][4]
* [VolumeSnapshotLocation][5]

[1]: backup.md
[2]: restore.md
[3]: schedule.md
[4]: backupstoragelocation.md
[5]: volumesnapshotlocation.md
19
site/content/docs/v1.18/api-types/_index.md
Normal file
@@ -0,0 +1,19 @@
---
layout: docs
title: API types
---

Here's a list of the API types that have some functionality that you can only configure via json/yaml vs the `velero` cli
(hooks)

* [Backup][1]
* [Restore][2]
* [Schedule][3]
* [BackupStorageLocation][4]
* [VolumeSnapshotLocation][5]

[1]: backup.md
[2]: restore.md
[3]: schedule.md
[4]: backupstoragelocation.md
[5]: volumesnapshotlocation.md
213
site/content/docs/v1.18/api-types/backup.md
Normal file
@@ -0,0 +1,213 @@
---
title: "Backup API Type"
layout: docs
---

## Use

Use the `Backup` API type to request the Velero server to perform a backup. Once created, the
Velero Server immediately starts the backup process.

## API GroupVersion

Backup belongs to the API group version `velero.io/v1`.

## Definition

Here is a sample `Backup` object with each of the fields documented:

**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.

```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Backup
# Standard Kubernetes metadata. Required.
metadata:
  # Backup name. May be any valid Kubernetes object name. Required.
  name: a
  # Backup namespace. Must be the namespace of the Velero server. Required.
  namespace: velero
# Parameters about the backup. Required.
spec:
  # CSISnapshotTimeout specifies the time used to wait for
  # CSI VolumeSnapshot status turns to ReadyToUse during creation, before
  # returning error as timeout. The default value is 10 minutes.
  csiSnapshotTimeout: 10m
  # ItemOperationTimeout specifies the time used to wait for
  # asynchronous BackupItemAction operations.
  # The default value is 4 hours.
  itemOperationTimeout: 4h
  # resourcePolicy specifies the referenced resource policies that backup should follow
  # optional
  resourcePolicy:
    kind: configmap
    name: resource-policy-configmap
  # Array of namespaces to include in the backup. Accepts glob patterns (*, ?, [abc]).
  # If unspecified, all namespaces are included. Optional.
  includedNamespaces:
    - '*'
  # Array of namespaces to exclude from the backup. Accepts glob patterns (*, ?, [abc]). Optional.
  excludedNamespaces:
    - some-namespace
  # Array of resources to include in the backup. Resources may be shortcuts (for example 'po' for 'pods')
  # or fully-qualified. If unspecified, all resources are included. Optional.
  includedResources:
    - '*'
  # Array of resources to exclude from the backup. Resources may be shortcuts (for example 'po' for 'pods')
  # or fully-qualified. Optional.
  excludedResources:
    - storageclasses.storage.k8s.io
  # Order of the resources to be collected during the backup process. It's a map with key being the plural resource
  # name, and the value being a list of object names separated by comma. Each resource name has format "namespace/objectname".
  # For cluster resources, simply use "objectname". Optional
  orderedResources:
    pods: mysql/mysql-cluster-replica-0,mysql/mysql-cluster-replica-1,mysql/mysql-cluster-source-0
    persistentvolumes: pvc-87ae0832-18fd-4f40-a2a4-5ed4242680c4,pvc-63be1bb0-90f5-4629-a7db-b8ce61ee29b3
  # Whether to include cluster-scoped resources. Valid values are true, false, and
  # null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
  # resources and the label selector). If false, no cluster-scoped resources are included. If unset,
  # all cluster-scoped resources are included if and only if all namespaces are included and there are
  # no excluded namespaces. Otherwise, if there is at least one namespace specified in either
  # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
  # up are those associated with namespace-scoped resources included in the backup. For example, if a
  # PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
  # cluster-scoped) would also be backed up.
  includeClusterResources: null
  # Array of cluster-scoped resources to exclude from the backup. Resources may be shortcuts
  # (for example 'sc' for 'storageclasses'), or fully-qualified. If unspecified,
  # no additional cluster-scoped resources are excluded. Optional.
  # Cannot work with include-resources, exclude-resources and include-cluster-resources.
  excludedClusterScopedResources: {}
  # Array of cluster-scoped resources to include from the backup. Resources may be shortcuts
  # (for example 'sc' for 'storageclasses'), or fully-qualified. If unspecified,
  # no additional cluster-scoped resources are included. Optional.
  # Cannot work with include-resources, exclude-resources and include-cluster-resources.
  includedClusterScopedResources: {}
  # Array of namespace-scoped resources to exclude from the backup. Resources may be shortcuts
  # (for example 'cm' for 'configmaps'), or fully-qualified. If unspecified,
  # no namespace-scoped resources are excluded. Optional.
  # Cannot work with include-resources, exclude-resources and include-cluster-resources.
  excludedNamespaceScopedResources: {}
  # Array of namespace-scoped resources to include from the backup. Resources may be shortcuts
  # (for example 'cm' for 'configmaps'), or fully-qualified. If unspecified,
  # all namespace-scoped resources are included. Optional.
  # Cannot work with include-resources, exclude-resources and include-cluster-resources.
  includedNamespaceScopedResources: {}
  # Individual objects must match this label selector to be included in the backup. Optional.
  labelSelector:
    matchLabels:
      app: velero
      component: server
  # Individual object when matched with any of the label selector specified in the set are to be included in the backup. Optional.
  # orLabelSelectors as well as labelSelector cannot co-exist, only one of them can be specified in the backup request
  orLabelSelectors:
    - matchLabels:
        app: velero
    - matchLabels:
        app: data-protection
  # Whether or not to snapshot volumes. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
  # a persistent volume provider is configured for Velero.
  snapshotVolumes: null
  # Where to store the tarball and logs.
  storageLocation: aws-primary
  # The list of locations in which to store volume snapshots created for this backup.
  volumeSnapshotLocations:
    - aws-primary
    - gcp-primary
  # The amount of time before this backup is eligible for garbage collection. If not specified,
  # a default value of 30 days will be used. The default can be configured on the velero server
  # by passing the flag --default-backup-ttl.
  ttl: 24h0m0s
  # Whether pod volume file system backup should be used for all volumes by default.
  defaultVolumesToFsBackup: true
  # Whether snapshot data should be moved. If set, data movement is launched after the snapshot is created.
  snapshotMoveData: true
  # The data mover to be used by the backup. If the value is "" or "velero", the built-in data mover will be used.
  datamover: velero
  # UploaderConfig specifies the configuration for the uploader.
  uploaderConfig:
    # ParallelFilesUpload is the number of parallel file uploads to perform when using the uploader.
    parallelFilesUpload: 10
  # Actions to perform at different times during a backup. The only hook supported is
  # executing a command in a container in a pod using the pod exec API. Optional.
  hooks:
    # Array of hooks that are applicable to specific resources. Optional.
    resources:
      -
        # Name of the hook. Will be displayed in backup log.
        name: my-hook
        # Array of namespaces to which this hook applies. If unspecified, the hook applies to all
        # namespaces. Optional.
        includedNamespaces:
          - '*'
        # Array of namespaces to which this hook does not apply. Optional.
        excludedNamespaces:
          - some-namespace
        # Array of resources to which this hook applies. The only resource supported at this time is
        # pods.
        includedResources:
          - pods
        # Array of resources to which this hook does not apply. Optional.
        excludedResources: []
        # This hook only applies to objects matching this label selector. Optional.
        labelSelector:
          matchLabels:
            app: velero
            component: server
        # An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
        pre:
          -
            # The type of hook. This must be "exec".
            exec:
              # The name of the container where the command will be executed. If unspecified, the
              # first container in the pod will be used. Optional.
              container: my-container
              # The command to execute, specified as an array. Required.
              command:
                - /bin/uname
                - -a
              # How to handle an error executing the command. Valid values are Fail and Continue.
              # Defaults to Fail. Optional.
              onError: Fail
              # How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
              timeout: 10s
        # An array of hooks to run after all custom actions and additional items have been
        # processed. Only "exec" hooks are supported.
        post:
          # Same content as pre above.
# Status about the Backup. Users should not set any data here.
status:
  # The version of this Backup. The only version supported is 1.
  version: 1
  # The date and time when the Backup is eligible for garbage collection.
  expiration: null
  # The current phase.
  # Valid values are New, FailedValidation, InProgress, WaitingForPluginOperations,
  # WaitingForPluginOperationsPartiallyFailed, FinalizingafterPluginOperations,
  # FinalizingPartiallyFailed, Completed, PartiallyFailed, Failed.
  phase: ""
  # An array of any validation errors encountered.
  validationErrors: null
  # Date/time when the backup started being processed.
  startTimestamp: 2019-04-29T15:58:43Z
  # Date/time when the backup finished being processed.
  completionTimestamp: 2019-04-29T15:58:56Z
  # Number of volume snapshots that Velero tried to create for this backup.
  volumeSnapshotsAttempted: 2
  # Number of volume snapshots that Velero successfully created for this backup.
  volumeSnapshotsCompleted: 1
  # Number of attempted BackupItemAction operations for this backup.
  backupItemOperationsAttempted: 2
  # Number of BackupItemAction operations that Velero successfully completed for this backup.
  backupItemOperationsCompleted: 1
  # Number of BackupItemAction operations that ended in failure for this backup.
  backupItemOperationsFailed: 0
  # Number of warnings that were logged by the backup.
  warnings: 2
  # Number of errors that were logged by the backup.
  errors: 0
  # An error that caused the entire backup to fail.
  failureReason: ""
```
112
site/content/docs/v1.18/api-types/backupstoragelocation.md
Normal file
@@ -0,0 +1,112 @@
---
title: "Velero Backup Storage Locations"
layout: docs
---

## Backup Storage Location

Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.

Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`, however the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.

A sample YAML `BackupStorageLocation` looks like the following:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  backupSyncPeriod: 2m0s
  provider: aws
  objectStorage:
    bucket: myBucket
  credential:
    name: secret-name
    key: key-in-secret
  config:
    region: us-west-2
    profile: "default"
```

### Example with self-signed certificate

When using object storage with self-signed certificates, you can specify the CA certificate:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero-backups
    # Base64 encoded CA certificate (deprecated - use caCertRef instead)
    caCert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR1VENDQXFHZ0F3SUJBZ0lVTWRiWkNaYnBhcE9lYThDR0NMQnhhY3dVa213d0RRWUpLb1pJaHZjTkFRRUwKQlFBd2JERUxNQWtHQTFVRUJoTUNWVk14RXpBUkJnTlZCQWdNQ2tOaGJHbG1iM0p1YVdFeEZqQVVCZ05WQkFjTQpEVk5oYmlCR2NtRnVZMmx6WTI4eEdEQVdCZ05WQkFvTUQwVjRZVzF3YkdVZ1EyOXRjR0Z1ZVRFV01CUUdBMVVFCkF3d05aWGhoYlhCc1pTNXNiMk5oYkRBZUZ3MHlNekEzTVRBeE9UVXlNVGhhRncweU5EQTNNRGt4T1RVeU1UaGEKTUd3eEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUNEQXBEWEJ4cG1iM0p1YVdFeEZqQVVCZ05WQkFjTURWTmgKYmlCR2NtRnVZMmx6WTI4eEdEQVdCZ05WQkFvTUQwVjRZVzF3YkdVZ1EyOXRjR0Z1ZVRFV01CUUdBMVVFQXd3TgpaWGhoYlhCc1pTNXNiMk5oYkRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS1dqCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  config:
    region: us-east-1
    s3Url: https://minio.example.com
```

#### Using a CA Certificate with Secret Reference (Recommended)

The recommended approach is to use `caCertRef` to reference a Secret containing the CA certificate:

```yaml
# First, create a Secret containing the CA certificate
apiVersion: v1
kind: Secret
metadata:
  name: storage-ca-cert
  namespace: velero
type: Opaque
data:
  ca-bundle.crt: <base64-encoded-certificate>

---
# Then reference it in the BackupStorageLocation
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: myBucket
    caCertRef:
      name: storage-ca-cert
      key: ca-bundle.crt
  # ... other configuration
```

**Note:** You cannot specify both `caCert` and `caCertRef` in the same BackupStorageLocation. The `caCert` field is deprecated and will be removed in a future version.

### Parameter Reference

The configurable parameters are as follows:

#### Main config parameters

{{< table caption="Main config parameters" >}}
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String | Required Field | The name for whichever object storage provider will be used to store the backups. See [your object storage provider's plugin documentation](../supported-providers) for the appropriate value to use. |
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
| `objectStorage/caCert` | String | Optional Field | **Deprecated**: Use `caCertRef` instead. A base64 encoded CA bundle to be used when verifying TLS connections. |
| `objectStorage/caCertRef` | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core) | Optional Field | Reference to a Secret containing a CA bundle to be used when verifying TLS connections. The Secret must be in the same namespace as the BackupStorageLocation. |
| `objectStorage/caCertRef/name` | String | Required Field (when using caCertRef) | The name of the Secret containing the CA certificate bundle. |
| `objectStorage/caCertRef/key` | String | Required Field (when using caCertRef) | The key within the Secret that contains the CA certificate bundle. |
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the object store plugin. See [your object storage provider's plugin documentation](../supported-providers) for details. |
| `accessMode` | String | `ReadWrite` | How Velero can access the backup storage location. Valid values are `ReadWrite`, `ReadOnly`. |
| `backupSyncPeriod` | metav1.Duration | Optional Field | How frequently Velero should synchronize backups in object storage. Default is Velero's server backup sync period. Set this to `0s` to disable sync. |
| `validationFrequency` | metav1.Duration | Optional Field | How frequently Velero should validate the object storage. Default is Velero's server validation frequency (1 minute). Set this to `0s` to disable validation. |
| `credential` | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core) | Optional Field | The credential information to be used with this location. |
| `credential/name` | String | Optional Field | The name of the secret within the Velero namespace which contains the credential information. |
| `credential/key` | String | Optional Field | The key to use within the secret. |
{{< /table >}}
221
site/content/docs/v1.18/api-types/restore.md
Normal file
@@ -0,0 +1,221 @@
---
title: "Restore API Type"
layout: docs
---

## Use

The `Restore` API type is used as a request for the Velero server to perform a Restore. Once created, the
Velero Server immediately starts the Restore process.

## API GroupVersion

Restore belongs to the API group version `velero.io/v1`.

## Definition

Here is a sample `Restore` object with each of the fields documented:

**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.

```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Restore
# Standard Kubernetes metadata. Required.
metadata:
  # Restore name. May be any valid Kubernetes object name. Required.
  name: a-very-special-backup-0000111122223333
  # Restore namespace. Must be the namespace of the Velero server. Required.
  namespace: velero
# Parameters about the restore. Required.
spec:
  # The unique name of the Velero backup to restore from.
  backupName: a-very-special-backup
  # The unique name of the Velero schedule
  # to restore from. If specified, and BackupName is empty, Velero will
  # restore from the most recent successful backup created from this schedule.
  scheduleName: my-scheduled-backup-name
  # ItemOperationTimeout specifies the time used to wait for
  # asynchronous RestoreItemAction operations.
  # The default value is 4 hours.
  itemOperationTimeout: 4h
  # UploaderConfig specifies the configuration for the restore.
  uploaderConfig:
    # WriteSparseFiles is a flag to indicate whether to write files sparsely or not.
    writeSparseFiles: true
    # ParallelFilesDownload is the concurrency number setting for restore.
    parallelFilesDownload: 10
  # Array of namespaces to include in the restore. Accepts glob patterns (*, ?, [abc]).
  # If unspecified, all namespaces are included. Optional.
  includedNamespaces:
    - '*'
  # Array of namespaces to exclude from the restore. Accepts glob patterns (*, ?, [abc]). Optional.
  excludedNamespaces:
    - some-namespace
  # Array of resources to include in the restore. Resources may be shortcuts (for example 'po' for 'pods')
  # or fully-qualified. If unspecified, all resources are included. Optional.
  includedResources:
    - '*'
  # Array of resources to exclude from the restore. Resources may be shortcuts (for example 'po' for 'pods')
  # or fully-qualified. Optional.
  excludedResources:
    - storageclasses.storage.k8s.io

  # restoreStatus selects resources to restore not only the specification, but
  # the status of the manifest. This is especially useful for CRDs that maintain
  # external references. By default, it excludes all resources.
  restoreStatus:
    # Array of resources to include in the restore status. Just like above,
    # resources may be shortcuts (for example 'po' for 'pods') or fully-qualified.
    # If unspecified, no resources are included. Optional.
    includedResources:
      - workflows
    # Array of resources to exclude from the restore status. Resources may be
    # shortcuts (for example 'po' for 'pods') or fully-qualified.
    # If unspecified, all resources are excluded. Optional.
    excludedResources: []

  # Whether or not to include cluster-scoped resources. Valid values are true, false, and
  # null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
  # resources and the label selector). If false, no cluster-scoped resources are included. If unset,
  # all cluster-scoped resources are included if and only if all namespaces are included and there are
  # no excluded namespaces. Otherwise, if there is at least one namespace specified in either
  # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are restored
  # are those associated with namespace-scoped resources included in the restore. For example, if a
  # PersistentVolumeClaim is included in the restore, its associated PersistentVolume (which is
  # cluster-scoped) would also be restored.
  includeClusterResources: null
  # Individual objects must match this label selector to be included in the restore. Optional.
  labelSelector:
    matchLabels:
      app: velero
      component: server
  # Individual object when matched with any of the label selector specified in the set are to be included in the restore. Optional.
  # orLabelSelectors as well as labelSelector cannot co-exist, only one of them can be specified in the restore request
  orLabelSelectors:
    - matchLabels:
        app: velero
    - matchLabels:
        app: data-protection
  # namespaceMapping is a map of source namespace names to
  # target namespace names to restore into. Any source namespaces not
  # included in the map will be restored into namespaces of the same name.
  namespaceMapping:
    namespace-backup-from: namespace-to-restore-to
  # restorePVs specifies whether to restore all included PVs
  # from snapshot. Optional
  restorePVs: true
  # preserveNodePorts specifies whether to restore old nodePorts from backup,
  # so that the exposed port numbers on the node will remain the same after restore. Optional
  preserveNodePorts: true
  # existingResourcePolicy specifies the restore behaviour
  # for the Kubernetes resource to be restored. Optional
  existingResourcePolicy: none
  # ResourceModifier specifies the reference to JSON resource patches
  # that should be applied to resources before restoration. Optional
  resourceModifier:
    kind: ConfigMap
    name: resource-modifier-configmap
  # Actions to perform during or post restore. The only hooks currently supported are
  # adding an init container to a pod before it can be restored and executing a command in a
  # restored pod's container. Optional.
  hooks:
    # Array of hooks that are applicable to specific resources. Optional.
    resources:
      # Name is the name of this hook.
      - name: restore-hook-1
        # Array of namespaces to which this hook applies. If unspecified, the hook applies to all
        # namespaces. Optional.
        includedNamespaces:
          - ns1
        # Array of namespaces to which this hook does not apply. Optional.
        excludedNamespaces:
          - ns3
        # Array of resources to which this hook applies. If unspecified, the hook applies to all resources in the backup. Optional.
        # The only resource supported at this time is pods.
        includedResources:
          - pods
        # Array of resources to which this hook does not apply. Optional.
        excludedResources: []
        # This hook only applies to objects matching this label selector. Optional.
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
app: velero
|
||||
component: server
|
||||
# An array of hooks to run during or after restores. Currently only "init" and "exec" hooks
|
||||
# are supported.
|
||||
postHooks:
|
||||
# The type of the hook. This must be "init" or "exec".
|
||||
- init:
|
||||
# An array of container specs to be added as init containers to pods to which this hook applies to.
|
||||
initContainers:
|
||||
- name: restore-hook-init1
|
||||
image: alpine:latest
|
||||
# Mounting volumes from the podSpec to which this hooks applies to.
|
||||
volumeMounts:
|
||||
- mountPath: /restores/pvc1-vm
|
||||
# Volume name from the podSpec
|
||||
name: pvc1-vm
|
||||
command:
|
||||
- /bin/ash
|
||||
- -c
|
||||
- echo -n "FOOBARBAZ" >> /restores/pvc1-vm/foobarbaz
|
||||
- name: restore-hook-init2
|
||||
image: alpine:latest
|
||||
# Mounting volumes from the podSpec to which this hooks applies to.
|
||||
volumeMounts:
|
||||
- mountPath: /restores/pvc2-vm
|
||||
# Volume name from the podSpec
|
||||
name: pvc2-vm
|
||||
command:
|
||||
- /bin/ash
|
||||
- -c
|
||||
- echo -n "DEADFEED" >> /restores/pvc2-vm/deadfeed
|
||||
- exec:
|
||||
# The container name where the hook will be executed. Defaults to the first container.
|
||||
# Optional.
|
||||
container: foo
|
||||
# The command that will be executed in the container. Required.
|
||||
command:
|
||||
- /bin/bash
|
||||
- -c
|
||||
- "psql < /backup/backup.sql"
|
||||
# How long to wait for a container to become ready. This should be long enough for the
|
||||
# container to start plus any preceding hooks in the same container to complete. The wait
|
||||
# timeout begins when the container is restored and may require time for the image to pull
|
||||
# and volumes to mount. If not set the restore will wait indefinitely. Optional.
|
||||
waitTimeout: 5m
|
||||
# How long to wait once execution begins. Defaults to 30 seconds. Optional.
|
||||
execTimeout: 1m
|
||||
# How to handle execution failures. Valid values are `Fail` and `Continue`. Defaults to
|
||||
# `Continue`. With `Continue` mode, execution failures are logged only. With `Fail` mode,
|
||||
# no more restore hooks will be executed in any container in any pod and the status of the
|
||||
# Restore will be `PartiallyFailed`. Optional.
|
||||
onError: Continue
|
||||
# RestoreStatus captures the current status of a Velero restore. Users should not set any data here.
|
||||
status:
|
||||
# The current phase.
|
||||
# Valid values are New, FailedValidation, InProgress, WaitingForPluginOperations,
|
||||
# WaitingForPluginOperationsPartiallyFailed, Completed, PartiallyFailed, Failed.
|
||||
phase: ""
|
||||
# An array of any validation errors encountered.
|
||||
validationErrors: null
|
||||
# Number of attempted RestoreItemAction operations for this restore.
|
||||
restoreItemOperationsAttempted: 2
|
||||
# Number of RestoreItemAction operations that Velero successfully completed for this restore.
|
||||
restoreItemOperationsCompleted: 1
|
||||
# Number of RestoreItemAction operations that ended in failure for this restore.
|
||||
restoreItemOperationsFailed: 0
|
||||
# Number of warnings that were logged by the restore.
|
||||
warnings: 2
|
||||
# Errors is a count of all error messages that were generated
|
||||
# during execution of the restore. The actual errors are stored in object
|
||||
# storage.
|
||||
errors: 0
|
||||
# FailureReason is an error that caused the entire restore
|
||||
# to fail.
|
||||
failureReason:
|
||||
|
||||
```
|
||||
`site/content/docs/v1.18/api-types/schedule.md` (new file, 216 lines)
---
title: "Schedule API Type"
layout: docs
---

## Use

The `Schedule` API type is used as a repeatable request for the Velero server to perform a backup on a given cron schedule. Once created, the
Velero server starts the backup process, then waits for the next valid point of the given cron expression and executes the backup
process on a repeating basis.

### Schedule Control Fields

The Schedule API provides several fields to control backup execution behavior:

- **paused**: When set to `true`, the schedule is paused and no new backups will be created. When set back to `false`, the schedule is unpaused and will resume creating backups according to the cron schedule.

- **skipImmediately**: Controls whether to skip an immediate backup when a schedule is created or unpaused. By default (when `false`), if a backup is due immediately upon creation or unpausing, it will be executed right away. When set to `true`, the controller will:
  1. Skip the immediate backup
  2. Record the current time in the `lastSkipped` status field
  3. Automatically reset `skipImmediately` back to `false` (one-time use)
  4. Schedule the next backup based on the cron expression, using `lastSkipped` as the reference time

- **lastSkipped**: A status field (not directly settable) that records when a backup was last skipped due to `skipImmediately` being `true`. The controller uses this timestamp, if more recent than `lastBackup`, to calculate the next scheduled backup time.

This "consume and reset" pattern for `skipImmediately` ensures that after skipping one immediate backup, the schedule returns to normal behavior for subsequent runs without requiring user intervention.
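The fields above can be toggled on a live `Schedule` with a merge patch. A hedged sketch (the schedule name `daily-backup` and the file name are illustrative):

```yaml
# unpause.yaml -- merge-patch body, applied with:
#   kubectl patch schedule daily-backup -n velero --type merge --patch-file unpause.yaml
# Unpauses the schedule while skipping the backup that would otherwise fire
# immediately; the controller records lastSkipped and resets skipImmediately
# back to false.
spec:
  paused: false
  skipImmediately: true
```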
## API GroupVersion

Schedule belongs to the API group version `velero.io/v1`.

## Definition

Here is a sample `Schedule` object with each of the fields documented:

```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Schedule
# Standard Kubernetes metadata. Required.
metadata:
  # Schedule name. May be any valid Kubernetes object name. Required.
  name: a
  # Schedule namespace. Must be the namespace of the Velero server. Required.
  namespace: velero
# Parameters about the scheduled backup. Required.
spec:
  # Paused specifies whether the schedule is paused or not.
  paused: false
  # SkipImmediately specifies whether to skip the backup if the schedule is due immediately when unpaused or created.
  # This is a one-time flag that will be automatically reset to false after being consumed.
  # When true, the controller will skip the immediate backup, set the LastSkipped timestamp, and reset this to false.
  skipImmediately: false
  # Schedule is a Cron expression defining when to run the Backup.
  schedule: 0 7 * * *
  # Specifies whether to use OwnerReferences on backups created by this Schedule.
  # Notice: if set to true, when the schedule is deleted, backups will be deleted too. Optional.
  useOwnerReferencesInBackup: false
  # Template is the spec that should be used for each backup triggered by this schedule.
  template:
    # CSISnapshotTimeout specifies how long to wait for the CSI VolumeSnapshot status
    # to turn to ReadyToUse during creation before returning a timeout error.
    # The default value is 10 minutes.
    csiSnapshotTimeout: 10m
    # resourcePolicy specifies the referenced resource policies that the backup should follow.
    # Optional.
    resourcePolicy:
      kind: configmap
      name: resource-policy-configmap
    # Array of namespaces to include in the scheduled backup. If unspecified, all namespaces are included.
    # Optional.
    includedNamespaces:
    - '*'
    # Array of namespaces to exclude from the scheduled backup. Optional.
    excludedNamespaces:
    - some-namespace
    # Array of resources to include in the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
    # or fully-qualified. If unspecified, all resources are included. Optional.
    includedResources:
    - '*'
    # Array of resources to exclude from the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
    # or fully-qualified. Optional.
    excludedResources:
    - storageclasses.storage.k8s.io
    orderedResources:
      pods: mysql/mysql-cluster-replica-0,mysql/mysql-cluster-replica-1,mysql/mysql-cluster-source-0
      persistentvolumes: pvc-87ae0832-18fd-4f40-a2a4-5ed4242680c4,pvc-63be1bb0-90f5-4629-a7db-b8ce61ee29b3
    # Whether to include cluster-scoped resources. Valid values are true, false, and
    # null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
    # resources and the label selector). If false, no cluster-scoped resources are included. If unset,
    # all cluster-scoped resources are included if and only if all namespaces are included and there are
    # no excluded namespaces. Otherwise, if there is at least one namespace specified in either
    # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
    # up are those associated with namespace-scoped resources included in the scheduled backup. For example, if a
    # PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
    # cluster-scoped) would also be backed up.
    includeClusterResources: null
    # Array of cluster-scoped resources to exclude from the backup. Resources may be shortcuts
    # (for example 'sc' for 'storageclasses'), or fully-qualified. If unspecified,
    # no additional cluster-scoped resources are excluded. Optional.
    # Cannot be combined with include-resources, exclude-resources, or include-cluster-resources.
    excludedClusterScopedResources: []
    # Array of cluster-scoped resources to include in the backup. Resources may be shortcuts
    # (for example 'sc' for 'storageclasses'), or fully-qualified. If unspecified,
    # no additional cluster-scoped resources are included. Optional.
    # Cannot be combined with include-resources, exclude-resources, or include-cluster-resources.
    includedClusterScopedResources: []
    # Array of namespace-scoped resources to exclude from the backup. Resources may be shortcuts
    # (for example 'cm' for 'configmaps'), or fully-qualified. If unspecified,
    # no namespace-scoped resources are excluded. Optional.
    # Cannot be combined with include-resources, exclude-resources, or include-cluster-resources.
    excludedNamespaceScopedResources: []
    # Array of namespace-scoped resources to include in the backup. Resources may be shortcuts
    # (for example 'cm' for 'configmaps'), or fully-qualified. If unspecified,
    # all namespace-scoped resources are included. Optional.
    # Cannot be combined with include-resources, exclude-resources, or include-cluster-resources.
    includedNamespaceScopedResources: []
    # Individual objects must match this label selector to be included in the scheduled backup. Optional.
    labelSelector:
      matchLabels:
        app: velero
        component: server
    # Individual objects are included in the backup when they match any of the label selectors in this set. Optional.
    # orLabelSelectors and labelSelector cannot both be set; only one of them can be specified in the backup request.
    orLabelSelectors:
    - matchLabels:
        app: velero
    - matchLabels:
        app: data-protection
    # Whether to snapshot volumes. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
    # a persistent volume provider is configured for Velero.
    snapshotVolumes: null
    # Where to store the tarball and logs.
    storageLocation: aws-primary
    # The list of locations in which to store volume snapshots created for backups under this schedule.
    volumeSnapshotLocations:
    - aws-primary
    - gcp-primary
    # The amount of time before backups created on this schedule are eligible for garbage collection. If not specified,
    # a default value of 30 days will be used. The default can be configured on the Velero server
    # by passing the flag --default-backup-ttl.
    ttl: 24h0m0s
    # Whether pod volume file system backup should be used for all volumes by default.
    defaultVolumesToFsBackup: true
    # Whether snapshot data should be moved. If set, data movement is launched after the snapshot is created.
    snapshotMoveData: true
    # The data mover to be used by the backup. If the value is "" or "velero", the built-in data mover will be used.
    datamover: velero
    # UploaderConfig specifies the configuration for the uploader.
    uploaderConfig:
      # ParallelFilesUpload is the number of parallel file uploads to perform when using the uploader.
      parallelFilesUpload: 10
    # The labels you want on backup objects created from this schedule (instead of copying the labels from the Schedule object itself).
    # When this field is set, the labels from the Schedule resource are not copied to the Backup resource.
    metadata:
      labels:
        labelname: somelabelvalue
    # Actions to perform at different times during a backup. The only hook supported is
    # executing a command in a container in a pod using the pod exec API. Optional.
    hooks:
      # Array of hooks that are applicable to specific resources. Optional.
      resources:
        -
          # Name of the hook. Will be displayed in the backup log.
          name: my-hook
          # Array of namespaces to which this hook applies. If unspecified, the hook applies to all
          # namespaces. Optional.
          includedNamespaces:
          - '*'
          # Array of namespaces to which this hook does not apply. Optional.
          excludedNamespaces:
          - some-namespace
          # Array of resources to which this hook applies. The only resource supported at this time is
          # pods.
          includedResources:
          - pods
          # Array of resources to which this hook does not apply. Optional.
          excludedResources: []
          # This hook only applies to objects matching this label selector. Optional.
          labelSelector:
            matchLabels:
              app: velero
              component: server
          # An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
          pre:
            -
              # The type of hook. This must be "exec".
              exec:
                # The name of the container where the command will be executed. If unspecified, the
                # first container in the pod will be used. Optional.
                container: my-container
                # The command to execute, specified as an array. Required.
                command:
                - /bin/uname
                - -a
                # How to handle an error executing the command. Valid values are Fail and Continue.
                # Defaults to Fail. Optional.
                onError: Fail
                # How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
                timeout: 10s
          # An array of hooks to run after all custom actions and additional items have been
          # processed. Only "exec" hooks are supported.
          post:
            # Same content as pre above.
status:
  # The current phase.
  # Valid values are New, Enabled, FailedValidation.
  phase: ""
  # Date/time of the last backup for a given schedule.
  lastBackup:
  # Date/time when a backup was last skipped due to skipImmediately being true.
  lastSkipped:
  # An array of any validation errors encountered.
  validationErrors:
```
`site/content/docs/v1.18/api-types/volumesnapshotlocation.md` (new file, 46 lines)
---
title: "Velero Volume Snapshot Location"
layout: docs
---

## Volume Snapshot Location

A volume snapshot location is the location in which to store the volume snapshots created for a backup.

Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple possible `VolumeSnapshotLocation` resources per provider, although you can only select one location per provider at backup time.

Each `VolumeSnapshotLocation` describes a provider and a location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.

A sample YAML `VolumeSnapshotLocation` looks like the following:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: aws-default
  namespace: velero
spec:
  provider: aws
  credential:
    name: secret-name
    key: key-in-secret
  config:
    region: us-west-2
    profile: "default"
```

### Parameter Reference

The configurable parameters are as follows:

#### Main config parameters

{{< table caption="Main config parameters" >}}
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String | Required Field | The name for whichever storage provider will be used to create/store the volume snapshots. See [your volume snapshot provider's plugin documentation](../supported-providers) for the appropriate value to use. |
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the volume snapshotter plugin. See [your volume snapshot provider's plugin documentation](../supported-providers) for details. |
| `credential` | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core) | Optional Field | The credential information to be used with this location. |
| `credential/name` | String | Optional Field | The name of the secret within the Velero namespace which contains the credential information. |
| `credential/key` | String | Optional Field | The key to use within the secret. |
{{< /table >}}
`site/content/docs/v1.18/backup-hooks.md` (new file, 126 lines)
---
title: "Backup Hooks"
layout: docs
---

Velero supports executing commands in containers in pods during a backup.

## Backup Hooks

When performing a backup, you can specify one or more commands to execute in a container in a pod
when that pod is being backed up. The commands can be configured to run *before* any custom action
processing ("pre" hooks), or after all custom actions have been completed and any additional items
specified by custom actions have been backed up ("post" hooks). Note that hooks are _not_ executed within a shell
on the containers.

As of Velero 1.15, related items that must be backed up together are grouped into ItemBlocks, and pod hooks run before and after the ItemBlock is backed up.
In particular, this means that if an ItemBlock contains more than one pod (such as in a scenario where an RWX volume is mounted by multiple pods), pre hooks are run for all pods in the ItemBlock, then the items are backed up, then all post hooks are run.

There are two ways to specify hooks: annotations on the pod itself, and in the Backup spec.

### Specifying Hooks As Pod Annotations

You can use the following annotations on a pod to make Velero execute a hook when backing up the pod:

#### Pre hooks

* `pre.hook.backup.velero.io/container`
    * The container where the command should be executed. Defaults to the first container in the pod. Optional.
* `pre.hook.backup.velero.io/command`
    * The command to execute. This command is not executed within a shell by default. If a shell is needed to run your command, include a shell command, like `/bin/sh`, that is supported by the container at the beginning of your command. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`. See [examples of using pre hook commands](#backup-hook-commands-examples). Optional.
* `pre.hook.backup.velero.io/on-error`
    * What to do if the command returns a non-zero exit code. Defaults to `Fail`. Valid values are `Fail` and `Continue`. Optional.
* `pre.hook.backup.velero.io/timeout`
    * How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.


#### Post hooks

* `post.hook.backup.velero.io/container`
    * The container where the command should be executed. Defaults to the first container in the pod. Optional.
* `post.hook.backup.velero.io/command`
    * The command to execute. This command is not executed within a shell by default. If a shell is needed to run your command, include a shell command, like `/bin/sh`, that is supported by the container at the beginning of your command. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`. See [examples of using pre hook commands](#backup-hook-commands-examples). Optional.
* `post.hook.backup.velero.io/on-error`
    * What to do if the command returns a non-zero exit code. Defaults to `Fail`. Valid values are `Fail` and `Continue`. Optional.
* `post.hook.backup.velero.io/timeout`
    * How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.

### Specifying Hooks in the Backup Spec

Please see the documentation on the [Backup API Type][1] for how to specify hooks in the Backup
spec.

## Hook Example with fsfreeze

This example walks you through using both pre and post hooks for freezing a file system. Freezing the
file system is useful to ensure that all pending disk I/O operations have completed prior to taking a snapshot.

### Annotations

The Velero [example/nginx-app/with-pv.yaml][2] serves as an example of adding the pre and post hook annotations directly
to your declarative deployment. Below is an example of what updating an object in place might look like.

```shell
kubectl annotate pod -n nginx-example -l app=nginx \
    pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' \
    pre.hook.backup.velero.io/container=fsfreeze \
    post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]' \
    post.hook.backup.velero.io/container=fsfreeze
```

Now test the pre and post hooks by creating a backup. You can use the Velero logs to verify that the pre and post
hooks are running and exiting without error.

```shell
velero backup create nginx-hook-test

velero backup get nginx-hook-test
velero backup logs nginx-hook-test | grep hookCommand
```

## Backup hook commands examples

### Multiple commands

To run multiple commands, wrap your target commands in a shell and separate them with `;`, `&&`, or other shell conditional constructs.

```shell
pre.hook.backup.velero.io/command='["/bin/bash", "-c", "echo hello > hello.txt && echo goodbye > goodbye.txt"]'
```

#### Using environment variables

You can use environment variables from your pods in your pre and post hook commands by including a shell command before using the environment variable. For example, `MYSQL_ROOT_PASSWORD` is an environment variable defined in a pod called `mysql`. To use `MYSQL_ROOT_PASSWORD` in your pre-hook, you'd include a shell, like `/bin/sh`, before calling your environment variable:

```yaml
pre:
  - exec:
      container: mysql
      command:
        - /bin/sh
        - -c
        - mysql --password=$MYSQL_ROOT_PASSWORD -e "FLUSH TABLES WITH READ LOCK"
      onError: Fail
```

Note that the container must support the shell command you use.

## Backup Hook Execution Results
### Viewing Results

Velero records the execution results of hooks, allowing users to obtain this information by running the following command:

```bash
$ velero backup describe <backup name>
```

The displayed results include the number of hooks that were attempted and the number of hooks that failed. Any detailed failure reasons will be present in the `Errors` section if applicable.

```bash
HooksAttempted:  1
HooksFailed:     0
```


[1]: api-types/backup.md
[2]: https://github.com/vmware-tanzu/velero/blob/v1.18.0/examples/nginx-app/with-pv.yaml
`site/content/docs/v1.18/backup-reference.md` (new file, 167 lines)
---
title: "Backup Reference"
layout: docs
---

## Exclude Specific Items from Backup

It is possible to exclude individual items from being backed up, even if they match the resource/namespace/label selectors defined in the backup spec. To do this, label the item as follows:

```bash
kubectl label -n <ITEM_NAMESPACE> <RESOURCE>/<NAME> velero.io/exclude-from-backup=true
```

## Parallel Files Upload

If using fs-backup with the Kopia uploader or CSI snapshot data movement, you can configure the number of parallel file uploads, which can accelerate the backup:

```bash
velero backup create <BACKUP_NAME> --include-namespaces <NAMESPACE> --parallel-files-upload <NUM> --wait
```

## Specify Backup Orders of Resources of Specific Kind

To back up resources of a specific Kind in a specific order, use the option --ordered-resources to specify a mapping of Kinds to an ordered list of specific resources of that Kind. Resource names are separated by commas and are in the format 'namespace/resourcename'. For cluster-scoped resources, simply use the resource name. Key-value pairs in the mapping are separated by semicolons. Kind names are in plural form.

```bash
velero backup create backupName --include-cluster-resources=true --ordered-resources 'pods=ns1/pod1,ns1/pod2;persistentvolumes=pv4,pv8' --include-namespaces=ns1
velero backup create backupName --ordered-resources 'statefulsets=ns1/sts1,ns1/sts0' --include-namespaces=ns1
```
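The mapping format described above can be illustrated with a short parsing sketch (illustrative only; Velero's actual flag parsing lives in its own source tree):

```python
def parse_ordered_resources(value: str) -> dict[str, list[str]]:
    """Parse an --ordered-resources value: 'kind=name,name;kind=name,...'.

    Key-value pairs are separated by ';'; resource names within each
    pair are separated by ','.
    """
    result: dict[str, list[str]] = {}
    for pair in value.split(";"):
        kind, _, names = pair.partition("=")
        result[kind.strip()] = [n.strip() for n in names.split(",") if n.strip()]
    return result

print(parse_ordered_resources("pods=ns1/pod1,ns1/pod2;persistentvolumes=pv4,pv8"))
# → {'pods': ['ns1/pod1', 'ns1/pod2'], 'persistentvolumes': ['pv4', 'pv8']}
```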
## Schedule a Backup

The **schedule** operation allows you to create a backup of your data at a specified time, defined by a [Cron expression](https://en.wikipedia.org/wiki/Cron).

```
velero schedule create NAME --schedule="* * * * *" [flags]
```

Cron schedules use the following format.

```
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │                                       7 is also Sunday on some systems)
# │ │ │ │ │
# │ │ │ │ │
# * * * * *
```
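As a rough illustration of the field ranges above, here is a minimal range check (a sketch only; Velero delegates real parsing to a full cron library, which also understands steps, names, and the `CRON_TZ=` prefix):

```python
# Field ranges in order: minute, hour, day-of-month, month, day-of-week.
RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 7)]

def field_ok(field: str, lo: int, hi: int) -> bool:
    """Check one cron field: comma lists, '*', ranges, and '/step' suffixes."""
    for part in field.split(","):
        part = part.split("/")[0]  # strip a step, e.g. '*/15' -> '*'
        if part == "*":
            continue
        bounds = part.split("-")
        if not all(b.isdigit() and lo <= int(b) <= hi for b in bounds):
            return False
    return True

def valid_cron(expr: str) -> bool:
    fields = expr.split()
    return len(fields) == 5 and all(
        field_ok(f, lo, hi) for f, (lo, hi) in zip(fields, RANGES))

print(valid_cron("0 3 * * *"))   # → True
print(valid_cron("0 25 * * *"))  # → False (hour out of range)
```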
For example, the command below creates a backup that runs every day at 3am.

```
velero schedule create example-schedule --schedule="0 3 * * *"
```

This command will create the backup, `example-schedule`, within Velero, but the backup will not be taken until the next scheduled time, 3am. Backups created by a schedule are saved with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*. For a full list of available configuration flags use the Velero CLI help command.

```
velero schedule create --help
```
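The `<SCHEDULE NAME>-<TIMESTAMP>` naming convention can be sketched as follows (the helper function is hypothetical; Velero generates the name server-side):

```python
from datetime import datetime

def scheduled_backup_name(schedule: str, now: datetime) -> str:
    """Mimic the <SCHEDULE NAME>-<TIMESTAMP> convention, YYYYMMDDhhmmss."""
    return f"{schedule}-{now.strftime('%Y%m%d%H%M%S')}"

print(scheduled_backup_name("example-schedule", datetime(2025, 1, 2, 3, 0, 0)))
# → example-schedule-20250102030000
```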
Once you create the scheduled backup, you can then trigger it manually using the `velero backup` command.

```
velero backup create --from-schedule example-schedule
```

This command will immediately trigger a new backup based on your template for `example-schedule`. This will not affect the backup schedule, and another backup will trigger at the scheduled time.

### Time zone specification

A time zone can be specified in the schedule cron. The format is `CRON_TZ=<timezone> <cron>`.

Specifying a time zone avoids ambiguity around daylight saving time changes. For example, if the schedule is set to run at 3am and daylight saving time changes, the schedule will still run at 3am in the specified time zone.

Be aware that jobs scheduled during daylight-saving leap-ahead transitions will not be run!

For example, the command below creates a backup that runs every day at 3am in the timezone `America/New_York`.

```
velero schedule create example-schedule --schedule="CRON_TZ=America/New_York 0 3 * * *"
```

As another example, the command below creates a backup that runs every day at 3am in the timezone `Asia/Shanghai`.

```
velero schedule create example-schedule --schedule="CRON_TZ=Asia/Shanghai 0 3 * * *"
```

The supported timezone names are listed in the [IANA Time Zone Database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List) under 'TZ identifier'.
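To see why pinning a time zone keeps the local run time stable, note that 3am local corresponds to different UTC offsets across the DST boundary; a small stdlib sketch:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+, needs system tzdata

# 3am in America/New_York sits at UTC-5 in winter (EST) and UTC-4 in
# summer (EDT), so a schedule pinned to this zone still fires at 3am local.
ny = ZoneInfo("America/New_York")
winter = datetime(2025, 1, 15, 3, 0, tzinfo=ny)
summer = datetime(2025, 7, 15, 3, 0, tzinfo=ny)
print(winter.utcoffset())  # → -1 day, 19:00:00  (UTC-5)
print(summer.utcoffset())  # → -1 day, 20:00:00  (UTC-4)
```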
|
||||
<!--
|
||||
cron's WithLocation functions uses time.Location as parameter, and [time.LoadLocation](https://pkg.go.dev/time#LoadLocation) support names from IANA timezone database in following locations in this order
|
||||
- the directory or uncompressed zip file named by the ZONEINFO environment variable
|
||||
- on a Unix system, the system standard installation location
|
||||
- $GOROOT/lib/time/zoneinfo.zip
|
||||
- the time/tzdata package, if it was imported
|
||||
-->
|
||||
|
||||
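Since the `CRON_TZ=` prefix must name an IANA zone exactly, it can help to verify the identifier before creating the schedule. A minimal check using Python's standard `zoneinfo` module (illustrative, not part of Velero):

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def is_valid_tz(name: str) -> bool:
    """Return True if `name` is a valid IANA time zone identifier."""
    try:
        ZoneInfo(name)
        return True
    except (ZoneInfoNotFoundError, ValueError):
        return False

# Identifiers as used in CRON_TZ=<timezone> above:
print(is_valid_tz("America/New_York"))  # True
print(is_valid_tz("Asia/Shanghai"))     # True
# A common mistake: a display name is not a valid identifier.
print(is_valid_tz("New York"))          # False
```

Any name that passes this check is safe to use in the `CRON_TZ=` prefix on systems with the IANA database installed.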
### Limitation

#### Backup's OwnerReference with Schedule

Backups created from a schedule can carry an owner reference to that schedule. This is enabled with the command:

```
velero schedule create --use-owner-references-in-backup <schedule-name>
```

This way, the schedule is the owner of the backups it creates. This is useful for some GitOps scenarios, or when the resource tree of the cluster is synchronized from elsewhere.
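With the flag enabled, each generated Backup CR carries standard Kubernetes `ownerReferences` metadata pointing at the Schedule. A sketch of what that metadata looks like (the name, uid, and timestamp are illustrative):

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-schedule-20240101030000
  namespace: velero
  ownerReferences:
  - apiVersion: velero.io/v1
    kind: Schedule
    name: example-schedule
    uid: 00000000-0000-0000-0000-000000000000  # placeholder uid
    controller: true
```

It is this `ownerReferences` entry that the Kubernetes garbage collector follows when the schedule is deleted, which leads to the side effect described below.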
Please note there is also a side effect that may not be expected. Because the schedule is the owner, when the schedule is deleted, the related Backup CRs will also be deleted by the Kubernetes GC controller (only the Backup CR is deleted; backup data still exists in object storage and snapshots), but the Velero controller will re-sync these backups from the object store's metadata back into Kubernetes. The Kubernetes GC controller and the Velero controller will then fight indefinitely over whether these backups should exist.

If there is a possibility that the schedule will be disabled and stop creating backups, while the backups it created are still useful, please do not enable this option. For details, see [Backups created by a schedule with useOwnerReferenceInBackup set do not get synced properly](https://github.com/vmware-tanzu/velero/issues/4093).

Some GitOps tools have configurations to avoid pruning the day-2 backups generated from the schedule.
For example, ArgoCD has two ways to do that:
* Add annotations to the schedule. This method makes ArgoCD ignore the schedule when syncing, so the generated backups are ignored too, but it has a side effect: when the schedule is removed from the GitOps manifest, the schedule cannot be deleted by ArgoCD, and the user needs to delete it manually.
``` yaml
annotations:
  argocd.argoproj.io/compare-options: IgnoreExtraneous
  argocd.argoproj.io/sync-options: Delete=false,Prune=false
```
* If ArgoCD is deployed by ArgoCD-Operator, there is another option: [resourceExclusions](https://argocd-operator.readthedocs.io/en/latest/reference/argocd/#resource-exclusions-example). The example below, applied in the `velero` namespace, makes the ArgoCD operator ignore `Backup` and `Restore` in the `velero.io` group for all managed Kubernetes clusters.
``` yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: velero-argocd
  namespace: velero
spec:
  resourceExclusions: |
    - apiGroups:
      - velero.io
      kinds:
      - Backup
      - Restore
      clusters:
      - "*"
```
#### Cannot support backup data immutability

Starting from 1.11, Velero's backups may not work as expected when the target object storage has some kind of "immutability" option configured. These options go by different names (see the links below for some examples). The main reason is that Velero first saves the state of a backup as Finalizing and then checks whether there are any async operations in progress. If there are, it waits for all of them to finish before moving the backup state to Complete; if there are none, the state is moved to Complete right away. In either case, Velero needs to modify the metadata in object storage, which is not possible if some kind of immutability is configured on the object storage.

Even in versions prior to 1.11, there was no explicit support in Velero for object storage with an "immutability" configuration. As a result, you may see some problems even though backups seem to work (e.g., versioned objects not being deleted when a backup is deleted).

Note that backups may still work in some cases, depending on the specific provider and configuration.

* For the AWS S3 service, backups work because S3's object lock only applies to versioned buckets, and the object data can still be updated as a new version. However, when backups are deleted, old versions of the objects will not be deleted.
* Azure Blob Storage supports both version-level immutability and container-level immutability. For the version-level scenario, data immutability can still work with Velero, but the container-level scenario cannot.
* GCP Cloud Storage policy only supports bucket-level immutability, so there is no way to make it work in the GCP environment.

The following are links to the cloud providers' documentation in this regard:

* [AWS S3 Using S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html)
* [Azure Storage Blob Containers - Lock Immutability Policy](https://learn.microsoft.com/en-us/azure/storage/blobs/immutable-policy-configure-version-scope?tabs=azure-portal)
* [GCP cloud storage Retention policies and retention policy locks](https://cloud.google.com/storage/docs/bucket-lock)
## Kubernetes API Pagination

By default, Velero will paginate the LIST API call for each resource type in the Kubernetes API when collecting items into a backup. The `--client-page-size` flag for the Velero server configures the size of each page.

Depending on the cluster's scale, tuning the page size can improve backup performance. You can experiment with higher values, noting their impact on the relevant `apiserver_request_duration_seconds_*` metrics from the Kubernetes apiserver.

Pagination can be entirely disabled by setting `--client-page-size` to `0`. This will request all items in a single unpaginated LIST call.
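For orientation, the flag is passed as a server argument on the Velero Deployment. A hedged sketch of the relevant container-spec fragment (not a complete manifest; the value `700` is an arbitrary example, not a recommendation):

```yaml
# Fragment of the velero Deployment's pod spec
spec:
  containers:
  - name: velero
    args:
    - server
    - --client-page-size=700   # 0 disables pagination entirely
```

Changing this argument restarts the Velero server pod, after which subsequent backups use the new page size.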
## Deleting Backups

Use the following commands to delete Velero backups and data:

* `kubectl delete backup <backupName> -n <veleroNamespace>` will delete the backup custom resource only and will not delete any associated data from object/block storage
* `velero backup delete <backupName>` will delete the backup resource including all data in object/block storage
63 site/content/docs/v1.18/backup-repository-configuration.md Normal file
@@ -0,0 +1,63 @@
---
title: "Backup Repository Configuration"
layout: docs
---

Velero uses selectable backup repositories for various backup/restore methods, e.g., [file-system backup][1] and [CSI snapshot data movement][2]. To achieve the best performance, backup repositories may need to be configured according to the running environment.

Velero uses a BackupRepository CR to represent an instance of a backup repository. The `repositoryConfig` field supports various configurations for the underlying backup repository.

Velero also allows you to specify configurations before the BackupRepository CR is created, through a configMap. The configurations in the configMap are copied to the BackupRepository CR when it is created.
The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace, applying only to the Velero instance in that namespace. The name of the configMap should be specified in the Velero server parameter `--backup-repository-configmap`.

You can specify the configMap name during Velero installation via the CLI:
`velero install --backup-repository-configmap=<ConfigMap-Name>`

In summary, there are two ways to add/change/delete configurations of a backup repository:
- If the BackupRepository CR for the backup repository already exists, modify its `repositoryConfig` field. The changes are applied to the backup repository in due time; this doesn't require a Velero server restart.
- Otherwise, create the backup repository configMap as a template for the BackupRepository CRs that will be created later.
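For the first case, the change is made directly on the existing CR. A sketch of a BackupRepository with `repositoryConfig` set (other spec fields of the CR are omitted; the name and values here are illustrative):

```yaml
apiVersion: velero.io/v1
kind: BackupRepository
metadata:
  name: <backup-repository-name>
  namespace: velero
spec:
  repositoryType: kopia
  repositoryConfig:
    cacheLimitMB: "2048"
    fullMaintenanceInterval: "fastGC"
```

You can apply such an edit with `kubectl edit backuprepository -n velero <backup-repository-name>`; the change takes effect the next time the repository configuration is consumed.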
The backup repository configMap is repository-type specific (i.e., kopia, restic), so for one repository type you only need to create one set of configurations; they are applied to all BackupRepository CRs of the same type. Changes to the `repositoryConfig` field, on the other hand, apply to that specific BackupRepository CR only, so you may need to change every BackupRepository CR of the same type.

Below is an example of the BackupRepository configMap with the configurations:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: <config-name>
  namespace: velero
data:
  <kopia>: |
    {
      "cacheLimitMB": 2048,
      "fullMaintenanceInterval": "fastGC"
    }
  <other-repository-type>: |
    {
      "cacheLimitMB": 1024
    }
```

To create the configMap, save something like the above sample to a file and then run the command below:
```shell
kubectl apply -f <yaml-file-name>
```

When and how the configurations are used is decided by the backup repository itself. Though you can put any configuration into the configMap or `repositoryConfig`, a given configuration may or may not be used by the backup repository, and it may be used at an arbitrary time.

Below are the configurations supported by Velero and the specific backup repositories.

***Kopia repository:***
`cacheLimitMB`: specifies the size limit (in MB) for the local data cache. The more data is cached locally, the less data needs to be downloaded from the backup storage, so better performance may be achieved. Practically, you can specify any size that is smaller than the free disk space, so that the disk space won't run out. This parameter takes effect on repository connection, that is, you can change it before connecting to the repository, e.g., before a backup/restore/maintenance.

`fullMaintenanceInterval`: the full maintenance interval defaults to Kopia's default of 24 hours. The override options below allow faster removal of deleted Velero backups from the Kopia repository:
- normalGC: 24 hours
- fastGC: 12 hours
- eagerGC: 6 hours

Per Kopia's [Maintenance Safety](https://kopia.io/docs/advanced/maintenance/#maintenance-safety), it is expected that Velero backup deletion will not result in immediate Kopia repository data removal. Reducing the full maintenance interval using the options above should help reduce the time taken to remove blobs that are no longer in use.

On the other hand, the not-in-use data is deleted permanently after full maintenance, so shorter full maintenance intervals may weaken data safety if used incorrectly.
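The per-repository-type values in the configMap's `data` are plain JSON strings. A quick, Velero-independent way to sanity-check a snippet before pasting it into the configMap (illustrative):

```python
import json

# The value that would go under the repository-type key (e.g. `kopia`)
# in the ConfigMap's data section.
kopia_config = """
{
  "cacheLimitMB": 2048,
  "fullMaintenanceInterval": "fastGC"
}
"""

cfg = json.loads(kopia_config)  # raises ValueError if the JSON is malformed
# The three interval options documented above:
assert cfg["fullMaintenanceInterval"] in ("normalGC", "fastGC", "eagerGC")
print(cfg["cacheLimitMB"])  # 2048
```

A trailing comma or unquoted key, both easy to introduce when hand-editing YAML-embedded JSON, fails the `json.loads` call immediately.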
[1]: file-system-backup.md
[2]: csi-snapshot-data-movement.md
79 site/content/docs/v1.18/backup-restore-windows.md Normal file
@@ -0,0 +1,79 @@
---
title: "Backup Restore Windows Workloads"
layout: docs
---

## Prerequisites

Velero supports backing up and restoring Windows workloads, whether stateless or stateful.
To keep compatibility with the existing Velero plugins, the Velero server runs on Linux nodes only, so Velero requires at least one Linux node in the cluster. Since it is not recommended to run the Velero server on a control-plane node, a Linux worker node is required. For the resource requirements of the Linux node running the Velero server, see [Customize resource requests and limits][1].

Velero is built and tested with `windows/amd64/ltsc2022` containers only; older Windows versions, e.g., Windows Server 2019, are not supported.

For volume backups, the storage must support CSI and CSI snapshots.

## Installation

As mentioned in [Image building][2], a hybrid image is provided for all platforms, so you don't need to set different images for Linux and Windows clusters; you can always use the all-in-one image, e.g., `velero/velero:v1.16.0` or `velero/velero:main`.

In order to back up/restore volumes for stateful workloads, the Velero node-agent needs to run on the Windows nodes. Velero provides a dedicated daemonset for Windows nodes, called `node-agent-windows`.
Therefore, in a typical cluster with Linux and Windows nodes, there are two daemonsets for the Velero node-agent: the existing `node-agent` daemonset for Linux nodes, and the `node-agent-windows` daemonset for Windows nodes.
If you want to install the `node-agent` daemonset, specify the `--use-node-agent` parameter in the `velero install` command; if you want to install the `node-agent-windows` daemonset, specify the `--use-node-agent-windows` parameter.
## Resource backup restore

Resource backup/restore for Windows workloads is done by the Velero server the same way as for Linux workloads.

Since the Velero server runs on Linux nodes only, all the existing plugins, i.e., BIA, RIA, and BackupStore plugins, can be started by Velero in a cluster with Windows nodes. However, whether and how a plugin is functional for Windows workloads is decided by the plugin itself.
It is recommended that plugin providers test thoroughly with Velero in Windows cluster environments, and:
- If they intend to support Windows workloads, make the necessary modifications to ensure their plugins work well with Windows workloads
- If they don't want to support Windows workloads, or only part of them, ensure the plugins won't cause any failure or crash when they process the unsupported Windows workload items
## Volume backup restore

Support status of Windows workload volumes for each backup method:
- CSI snapshot data movement: block volumes (i.e., vSphere CNS Block Volume, Azure Disk, AWS EBS, GCP Persistent Disk, etc.) are fully supported; file volumes (i.e., vSphere CNS File Volume, Azure File, AWS EFS, GCP Filestore, etc.) are not tested or officially supported. This is the same as for Linux workloads
- CSI snapshot backup: block volumes (i.e., vSphere CNS Block Volume, Azure Disk, AWS EBS, GCP Persistent Disk, etc.) are fully supported; file volumes (i.e., vSphere CNS File Volume, Azure File, AWS EFS, GCP Filestore, etc.) are not tested or officially supported. This is the same as for Linux workloads
- native snapshot backup: supported, the same as for Linux workloads
- file system backup: at present, NOT supported

For volume backups/restores conducted through Velero plugins, the support status is decided by the plugins themselves.
### CSI snapshot data movement

During backup, Velero automatically identifies the OS type of the workload and schedules data mover pods to the right nodes. Specifically, for a Linux workload, Linux nodes in the cluster will be used; for a Windows workload, Windows nodes will be used.
You can view the OS type that a data mover pod runs with in the DataUpload status's `nodeOS` field.

Velero takes several measures to deduce the OS type for workload volumes, from PVCs, VolumeAttachment CRs, nodes, and storage classes. If Velero fails to deduce the OS type, it falls back to Linux, and the data mover pods will be scheduled to Linux nodes. As a result, the data mover pods may not be able to start and the corresponding DataUploads will be cancelled because of timeout, so the backup will partially fail.

Therefore, it is highly recommended that you provide a dedicated storage class for Windows workload volumes and set `csi.storage.k8s.io/fstype` correctly. E.g., for Linux workload volumes, set `csi.storage.k8s.io/fstype=ext4`; for Windows workload volumes, set `csi.storage.k8s.io/fstype=ntfs`.
Specifically, if you have X storage classes for Linux workloads, you need to create another X storage classes for Windows workloads.
This helps Velero deduce the right OS type successfully every time, especially when backing up the following kinds of volumes belonging to a Windows workload:
- The PVC uses the `Immediate` volume binding mode
- No pod mounts the PVC at the time of backup
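A sketch of such a dedicated storage class for Windows volumes (the class name and provisioner are placeholders; substitute your CSI driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: windows-ntfs              # hypothetical name
provisioner: disk.csi.example.com # replace with your CSI driver
parameters:
  csi.storage.k8s.io/fstype: ntfs # lets Velero deduce the Windows OS type
volumeBindingMode: WaitForFirstConsumer
```

A parallel class for Linux workloads would differ only in its name and `csi.storage.k8s.io/fstype: ext4`.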
For restore, Velero automatically inherits the OS type from the backup, so no deduction process is required.

For other information, check [CSI Snapshot Data Movement][3].

## Backup Repository Maintenance job

Backup repository maintenance jobs and pods can run on Windows nodes; that is, maintenance can use the full node resources of a cluster with Windows nodes. For more information, check [Repository Maintenance][4].

## Backup restore hooks

Pre/post backup/restore hooks are supported for Windows workloads; the commands run on the same Windows nodes hosting the workload pods. For more information, check [Backup Hooks][5] and [Restore Hooks][6].

## Limitations

NTFS extended attributes/advanced features are not supported, e.g., Security Descriptors, System/Hidden/ReadOnly attributes, Creation Time, NTFS Streams, etc. That is, after backup/restore, this data will be lost.

[1]: customize-installation.md#customize-resource-requests-and-limits
[2]: build-from-source.md#image-building
[3]: csi-snapshot-data-movement.md
[4]: repository-maintenance.md
[5]: backup-hooks.md
[6]: restore-hooks.md
73 site/content/docs/v1.18/basic-install.md Normal file
@@ -0,0 +1,73 @@
---
title: "Basic Install"
layout: docs
---

Use this doc to get a basic installation of Velero.
Refer to [this document](customize-installation.md) to customize your installation, including setting priority classes for Velero components.

## Prerequisites

- Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatibility-matrix).
- `kubectl` installed locally

Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of [compatible providers][0].

Velero supports storage providers for both cloud-provider environments and on-premises environments. For more details on on-premises scenarios, see the [on-premises documentation][2].

### Velero on Windows

Velero supports backing up and restoring Windows workloads, whether stateless or stateful.
Velero node-agent and data mover pods can run on Windows nodes. To keep compatibility with the existing Velero plugins, the Velero server runs on Linux nodes only, so Velero requires at least one Linux node in the cluster. Velero provides Windows images for specific Windows versions. For more information, see [Backup Restore Windows Workloads][6].

## Install the CLI

### Option 1: MacOS - Homebrew

On macOS, you can use [Homebrew](https://brew.sh) to install the `velero` client:

```bash
brew install velero
```

### Option 2: GitHub release

1. Download the [latest release][1]'s tarball for your client platform.
1. Extract the tarball:

    ```bash
    tar -xvf <RELEASE-TARBALL-NAME>.tar.gz
    ```

1. Move the extracted `velero` binary to somewhere in your `$PATH` (`/usr/local/bin` for most users).

### Option 3: Windows - Chocolatey

On Windows, you can use [Chocolatey](https://chocolatey.org/install) to install the [velero](https://chocolatey.org/packages/velero) client:

```powershell
choco install velero
```

## Install and configure the server components

There are two supported methods for installing the Velero server components:

- the `velero install` CLI command
- the [Helm chart](https://vmware-tanzu.github.io/helm-charts/)

Velero uses storage provider plugins to integrate with a variety of storage systems to support backup and snapshot operations. The steps to install and configure the Velero server components along with the appropriate plugins are specific to your chosen storage provider. To find installation instructions for your chosen storage provider, follow the documentation link for your provider at our [supported storage providers][0] page.

_Note: if your object storage provider is different than your volume snapshot provider, follow the installation instructions for your object storage provider first, then return here and follow the instructions to [add your volume snapshot provider][4]._

## Command line Autocompletion

Please refer to [this part of the documentation][5].

[0]: supported-providers.md
[1]: https://github.com/vmware-tanzu/velero/releases/latest
[2]: on-premises.md
[3]: overview-plugins.md
[4]: customize-installation.md#install-an-additional-volume-snapshot-provider
[5]: customize-installation.md#optional-velero-cli-configurations
[6]: backup-restore-windows.md
198 site/content/docs/v1.18/build-from-source.md Normal file
@@ -0,0 +1,198 @@
---
title: "Build from source"
layout: docs
---

## Prerequisites

* Access to a Kubernetes cluster, version 1.7 or later.
* A DNS server on the cluster
* `kubectl` installed
* [Go][5] installed (minimum version 1.8)

## Get the source

### Option 1) Get latest (recommended)

```bash
mkdir $HOME/go
export GOPATH=$HOME/go
go get github.com/vmware-tanzu/velero
```

Where `go` is your [import path][4] for Go.

For Go development, it is recommended to add the Go import path (`$HOME/go` in this example) to your path.

### Option 2) Release archive

Download the archive named `Source code` from the [release page][22] and extract it into your Go import path as `src/github.com/vmware-tanzu/velero`.

Note that the Makefile targets assume building from a git repository. When building from an archive, you will be limited to the `go build` commands described below.
## Build

There are a number of different ways to build `velero` depending on your needs. This section outlines the main possibilities.

When building with `make`, the binaries are placed under `_output/bin/$GOOS/$GOARCH`. For example, you will find the binary for darwin at `_output/bin/darwin/amd64/velero`, and the binary for linux at `_output/bin/linux/amd64/velero`. `make` will also splice in version and git commit information so that `velero version` displays proper output.

Note: `velero install` will also use the version information to determine which tagged image to deploy. If you would like to override which image gets deployed, use the `image` flag (see below for instructions on how to build images).

### Build the binary

To build the `velero` binary on your local machine, compiled for your OS and architecture, run one of these two commands:

```bash
go build ./cmd/velero
```

```bash
make local
```

### Cross compiling

To build the velero binary targeting linux/amd64 within a build container on your local machine, run:

```bash
make build
```

For any specific platform, run `make build-<GOOS>-<GOARCH>`.

For example, to build for the Mac, run `make build-darwin-amd64`.

Velero's `Makefile` has a convenience target, `all-build`, that builds the following platforms:

* linux-amd64
* linux-arm
* linux-arm64
* linux-ppc64le
* darwin-amd64
* windows-amd64
## Making images and updating Velero

If, after installing Velero, you would like to change the image used by its deployment to one that contains your code changes, you may do so by updating the image:

```bash
kubectl -n velero set image deploy/velero velero=myimagerepo/velero:$VERSION
```

To build a Velero container image, you need to configure `buildx` first.

### Buildx

Docker Buildx is a CLI plugin that extends the docker command with full support for the features provided by the Moby BuildKit builder toolkit. It provides the same user experience as docker build, with many new features like creating scoped builder instances and building against multiple nodes concurrently.

More information is in the [docker docs][23] and in the [buildx github][24] repo.

### Image building

#### Build local image

If you want to build an image with the same OS type and CPU architecture as your local machine, you can keep most of the build parameters at their defaults.
Run the command below to build the local image:
```bash
make container
```
Optionally, set the `$VERSION` environment variable to change the image tag, or `$BIN` to change which binary to build a container image for.
Optionally, you can set the `$REGISTRY` environment variable. For example, if you want to build the `gcr.io/my-registry/velero:main` image, set `$REGISTRY` to `gcr.io/my-registry`. If this variable is not set, the default is `velero`.
The image is kept on the local machine; you can run `docker push` to push the image to the specified registry, or, if none is specified, to Docker Hub by default.

#### Build hybrid image

You can also build a hybrid image that supports multiple OS types or CPU architectures. A hybrid image contains a manifest list with one or more manifests, each of which maps to a single `os type/arch/os version` configuration.
The following `os type/arch/os version` configurations are tested and supported:
* `linux/amd64`
* `linux/arm64`
* `windows/amd64/ltsc2022`

The hybrid image must be pushed to a registry, as the local system doesn't support all the manifests in the image, so the `BUILDX_OUTPUT_TYPE` parameter must be set to `registry`.
By default, `$REGISTRY` is set to `velero`; you can change it to your own registry.

To build a hybrid image, the following one-time setup is necessary:

1. If you are building cross-platform container images
    ```bash
    $ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    ```
2. Create and bootstrap a new docker buildx builder
    ```bash
    $ docker buildx create --use --name builder
    builder
    $ docker buildx inspect --bootstrap
    [+] Building 2.6s (1/1) FINISHED
    => [internal] booting buildkit 2.6s
    => => pulling image moby/buildkit:buildx-stable-1 1.9s
    => => creating container buildx_buildkit_builder0 0.7s
    Name: builder
    Driver: docker-container

    Nodes:
    Name: builder0
    Endpoint: unix:///var/run/docker.sock
    Status: running
    Platforms: linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
    ```
NOTE: Without the above setup, the output of `docker buildx inspect --bootstrap` will be:
```bash
$ docker buildx inspect --bootstrap
Name: default
Driver: docker

Nodes:
Name: default
Endpoint: default
Status: running
Platforms: linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
```
And `REGISTRY=myrepo BUILDX_OUTPUT_TYPE=registry make container` will fail with the error below:
```bash
$ REGISTRY=ashishamarnath BUILDX_PLATFORMS=linux/arm64 BUILDX_OUTPUT_TYPE=registry make container
auto-push is currently not implemented for docker driver
make: *** [container] Error 1
```

Having completed the above one-time setup, the output of `docker buildx inspect --bootstrap` should now look like:

```bash
$ docker buildx inspect --bootstrap
Name: builder
Driver: docker-container

Nodes:
Name: builder0
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v
```

Now build and push the container image by running the `make container` command with `$BUILDX_OUTPUT_TYPE` set to `registry`.

The command below builds a hybrid image with the single configuration `linux/amd64`:
```bash
$ REGISTRY=myrepo BUILDX_OUTPUT_TYPE=registry make container
```

The command below builds a hybrid image with the configurations `linux/amd64` and `linux/arm64`:
```bash
$ REGISTRY=myrepo BUILDX_OUTPUT_TYPE=registry BUILD_ARCH=amd64,arm64 make container
```

The command below builds a hybrid image with the configurations `linux/amd64`, `linux/arm64` and `windows/amd64/ltsc2022`:
```bash
$ REGISTRY=myrepo BUILDX_OUTPUT_TYPE=registry BUILD_OS=linux,windows BUILD_ARCH=amd64,arm64 make container
```

Note: if you want to update the image but not change its name, you will have to trigger Kubernetes to pick up the new image. One way of doing so is by deleting the Velero deployment pod and node-agent pods:

```bash
kubectl -n velero delete pods -l deploy=velero
```

[4]: https://blog.golang.org/organizing-go-code
[5]: https://golang.org/doc/install
[22]: https://github.com/vmware-tanzu/velero/releases
[23]: https://docs.docker.com/buildx/working-with-buildx/
[24]: https://github.com/docker/buildx
171 site/content/docs/v1.18/code-standards.md Normal file
@@ -0,0 +1,171 @@
---
|
||||
title: "Code Standards"
|
||||
layout: docs
|
||||
toc: "true"
|
||||
---
|
||||
|
||||

## Opening PRs

When opening a pull request, please fill out the checklist supplied in the template. This will help others properly categorize and review your pull request.

### PR title

Make sure that the pull request title summarizes the change made (and not just "fixes issue #xxxx"):

Example PR titles:

- "Check for nil when validating foo"
- "Issue #1234: Check for nil when validating foo"

### Cherry-pick PRs

When a PR to main needs to be cherry-picked to a release branch, please wait until the main PR is merged before creating the CP PR. If the CP PR is made before the main PR is merged, there is a risk that PR modifications made in response to review comments will not make it into the CP PR.

The cherry-pick PR title should reference the branch it is cherry-picked to and the fact that it is a CP of a commit to main:

- "[release-1.13 CP] Issue #1234: Check for nil when validating foo"

## Adding a changelog

Authors are expected to include a changelog file with their pull requests. The changelog file should be a new file created in the `changelogs/unreleased` folder. The file should follow the naming convention of `pr-username`, and the contents of the file should be your text for the changelog.

    velero/changelogs/unreleased   <- folder
        000-username               <- file

Add that file to the PR.

You can also generate the file with `make new-changelog CHANGELOG_BODY="Changes you have made"`.

If a PR does not warrant a changelog, the CI check for a changelog can be skipped by applying a `changelog-not-required` label on the PR. If you are making a PR on a release branch, you should still make a new file in the `changelogs/unreleased` folder on the release branch for your change.
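
For example, a changelog file for a hypothetical PR #1234 by a GitHub user named `username` could also be created by hand (the `make new-changelog` target above does the equivalent):

```shell
# Create a changelog entry by hand (PR number and username are made up)
mkdir -p changelogs/unreleased
echo "Check for nil when validating foo" > changelogs/unreleased/1234-username
cat changelogs/unreleased/1234-username
```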

## Copyright header

Whenever a source code file is modified, the copyright notice should be updated to our standard copyright notice. That is, it should read "Copyright the Velero contributors."

For new files, the entire copyright and license header must be added.

Please note that doc files do not need a copyright header.
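
As a sketch, the header on a new Go file is the standard Apache 2.0 boilerplate with the contributor notice; confirm the canonical text against an existing Velero source file:

```go
/*
Copyright the Velero contributors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package example is a placeholder showing header placement.
package example
```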

## Code

- Log messages are capitalized.

- Error messages are kept lower-cased.

- Wrap/add a stack only to errors that are being directly returned from non-Velero code, such as an API call to the Kubernetes server.

  ```go
  errors.WithStack(err)
  ```

- Prefer to use the utilities in the Kubernetes [`sets`](https://godoc.org/github.com/kubernetes/apimachinery/pkg/util/sets) package.

  ```go
  k8s.io/apimachinery/pkg/util/sets
  ```
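
As an illustration of the logging and error conventions above, here is a minimal, stdlib-only Go sketch. The function name `validateFoo` is hypothetical, and the standard `errors` package stands in for `github.com/pkg/errors`, which provides `errors.WithStack`:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// validateFoo follows the convention above: error messages are lower-cased.
// (Hypothetical function for illustration; not part of Velero's API.)
func validateFoo(foo *string) error {
	if foo == nil {
		return errors.New("foo must not be nil")
	}
	return nil
}

func main() {
	if err := validateFoo(nil); err != nil {
		// Log messages are capitalized, while the error itself stays lower-cased.
		log.Printf("Validation failed: %v", err)
		fmt.Println(err)
	}
}
```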

## Imports

For imports, we use the following convention:

`<group><version><api | client | informer | ...>`

Example:

```go
import (
	corev1api "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
	corev1listers "k8s.io/client-go/listers/core/v1"

	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	velerov1client "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/typed/velero/v1"
)
```

## Mocks

We use [mockery](https://github.com/vektra/mockery) to generate mocks for our interfaces.

Example: if you want to change this mock: https://github.com/vmware-tanzu/velero/blob/v1.18.0/pkg/podvolume/mocks/restorer.go

Run:

```bash
go get github.com/vektra/mockery/.../
cd pkg/podvolume
mockery -name=Restorer
```

You might need to run `make update` to update the imports.

## Kubernetes Labels

When generating label values, be sure to pass them through the `label.GetValidName()` helper function. This will help ensure that the values are the proper length and format to be stored and queried.

In general, UIDs are safe to persist as label values.

This function is not relevant to annotation values, which do not have restrictions.

## DCO Sign off

All authors to the project retain copyright to their work. However, to ensure that they are only submitting work that they have rights to, we require everyone to acknowledge this by signing their work.

Any copyright notices in this repo should specify the authors as "the Velero contributors".

To sign your work, just add a line like this at the end of your commit message:

```
Signed-off-by: Joe Beda <joe@heptio.com>
```

This can easily be done with the `--signoff` option to `git commit`.
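
Concretely, using the example identity from above in a throwaway repository:

```shell
# Show the effect of --signoff in a scratch repository
# (the user name/email are just the example identity from above)
git init -q signoff-demo
cd signoff-demo
git config user.name "Joe Beda"
git config user.email "joe@heptio.com"
git commit --allow-empty -q --signoff -m "Check for nil when validating foo"
git log -1 --format=%B   # the message now ends with a Signed-off-by trailer
```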

By doing this you state that you can certify the following (from [https://developercertificate.org/](https://developercertificate.org/)):

```
Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.


Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.
```