Mirror of https://github.com/vmware-tanzu/velero.git (synced 2026-03-27 03:55:04 +00:00)

Compare commits: 39 commits, fix_e2e_ve...v1.18.0-rc
| Author | SHA1 | Date |
|---|---|---|
|  | 54783fbe28 |  |
|  | cb5f56265a |  |
|  | aa89713559 |  |
|  | 5db4c65a92 |  |
|  | 87db850f66 |  |
|  | c7631fc4a4 |  |
|  | 9a37478cc2 |  |
|  | 5b54ccd2e0 |  |
|  | 43b926a58b |  |
|  | 9bfc78e769 |  |
|  | c9e26256fa |  |
|  | 6e315c32e2 |  |
|  | 91cbc40956 |  |
|  | 556d5826a8 |  |
|  | 62939cec18 |  |
|  | 7d6a10d3ea |  |
|  | 1c0cf6c51d |  |
|  | 58f0b29091 |  |
|  | 5cb4cdba61 |  |
|  | 325eb50480 |  |
|  | 993b80a350 |  |
|  | a909bd1f85 |  |
|  | 62a47b9fc5 |  |
|  | 31e9dcbb87 |  |
|  | f824c3ca3b |  |
|  | 386599638f |  |
|  | 9796da389d |  |
|  | dfb1d45831 |  |
|  | 72beb35edc |  |
|  | 7442d20f9d |  |
|  | 4dfb47dd21 |  |
|  | e72fea8ecd |  |
|  | f388a5ce51 |  |
|  | e703e06eeb |  |
|  | 1feaafc03e |  |
|  | e446ce54f6 |  |
|  | b7289b51c7 |  |
|  | 6eae73f0bf |  |
|  | 1425ebb369 |  |
@@ -17,6 +17,7 @@ If you're using Velero and want to add your organization to this list,
 <a href="https://www.replicated.com/" border="0" target="_blank"><img alt="replicated.com" src="site/static/img/adopters/replicated-logo-red.svg" height="50"></a>
 <a href="https://cloudcasa.io/" border="0" target="_blank"><img alt="cloudcasa.io" src="site/static/img/adopters/cloudcasa.svg" height="50"></a>
 <a href="https://azure.microsoft.com/" border="0" target="_blank"><img alt="azure.com" src="site/static/img/adopters/azure.svg" height="50"></a>
+<a href="https://www.broadcom.com/" border="0" target="_blank"><img alt="broadcom.com" src="site/static/img/adopters/broadcom.svg" height="50"></a>
 
 ## Success Stories
 
 Below is a list of adopters of Velero in **production environments** that have
@@ -68,6 +69,9 @@ Replicated uses the Velero open source project to enable snapshots in [KOTS][101
 **[Microsoft Azure][105]**<br>
 [Azure Backup for AKS][106] is an Azure native, Kubernetes aware, Enterprise ready backup for containerized applications deployed on Azure Kubernetes Service (AKS). AKS Backup utilizes Velero to perform backup and restore operations to protect stateful applications in AKS clusters.<br>
 
+**[Broadcom][107]**<br>
+[VMware Cloud Foundation][108] (VCF) offers built-in [vSphere Kubernetes Service][109] (VKS), a Kubernetes runtime that includes a CNCF certified Kubernetes distribution, to deploy and manage containerized workloads. VCF empowers platform engineers with native [Kubernetes multi-cluster management][110] capability for managing Kubernetes (K8s) infrastructure at scale. VCF utilizes Velero for Kubernetes data protection, enabling platform engineers to back up and restore containerized workload manifests and persistent volumes, helping to increase the resiliency of stateful applications in VKS clusters.
+
 ## Adding your organization to the list of Velero Adopters
 
 If you are using Velero and would like to be included in the list of `Velero Adopters`, add an SVG version of your logo to the `site/static/img/adopters` directory in this repo and submit a [pull request][3] with your change. Name the image file something that reflects your company (e.g., if your company is called Acme, name the image acme.png). See this for an example [PR][4].
@@ -125,3 +129,8 @@ If you would like to add your logo to a future `Adopters of Velero` section on [
 
 [105]: https://azure.microsoft.com/
 [106]: https://learn.microsoft.com/azure/backup/backup-overview
+
+[107]: https://www.broadcom.com/
+[108]: https://www.vmware.com/products/cloud-infrastructure/vmware-cloud-foundation
+[109]: https://www.vmware.com/products/cloud-infrastructure/vsphere-kubernetes-service
+[110]: https://blogs.vmware.com/cloud-foundation/2025/09/29/empowering-platform-engineers-with-native-kubernetes-multi-cluster-management-in-vmware-cloud-foundation/
@@ -13,7 +13,7 @@
 # limitations under the License.
 
 # Velero binary build section
-FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder
+FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS velero-builder
 
 ARG GOPROXY
 ARG BIN
@@ -49,7 +49,7 @@ RUN mkdir -p /output/usr/bin && \
     go clean -modcache -cache
 
 # Restic binary build section
-FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS restic-builder
+FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS restic-builder
 
 ARG GOPROXY
 ARG BIN
@@ -73,7 +73,7 @@ RUN mkdir -p /output/usr/bin && \
     go clean -modcache -cache
 
 # Velero image packing section
-FROM paketobuildpacks/run-jammy-tiny:latest
+FROM paketobuildpacks/run-jammy-tiny:0.2.97
 
 LABEL maintainer="Xun Jiang <jxun@vmware.com>"
@@ -15,7 +15,7 @@
 ARG OS_VERSION=1809
 
 # Velero binary build section
-FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder
+FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS velero-builder
 
 ARG GOPROXY
 ARG BIN
@@ -7,11 +7,11 @@
 | Maintainer | GitHub ID | Affiliation |
 |---------------------|---------------------------------------------------------------|--------------------------------------------------|
 | Scott Seago | [sseago](https://github.com/sseago) | [OpenShift](https://github.com/openshift) |
-| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | [VMware](https://www.github.com/vmware/) |
-| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
-| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | [VMware](https://www.github.com/vmware/) |
+| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | Broadcom |
+| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | Broadcom |
+| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | Broadcom |
 | Shubham Pampattiwar | [shubham-pampattiwar](https://github.com/shubham-pampattiwar) | [OpenShift](https://github.com/openshift) |
-| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | [VMware](https://www.github.com/vmware/) |
+| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | Broadcom |
 | Anshul Ahuja | [anshulahuja98](https://github.com/anshulahuja98) | [Microsoft Azure](https://www.github.com/azure/) |
 | Tiger Kaovilai | [kaovilai](https://github.com/kaovilai) | [OpenShift](https://github.com/openshift) |
@@ -27,14 +27,3 @@
 * JenTing Hsiao ([jenting](https://github.com/jenting))
 * Dave Smith-Uchida ([dsu-igeek](https://github.com/dsu-igeek))
 * Ming Qiu ([qiuming-best](https://github.com/qiuming-best))
-
-## Velero Contributors & Stakeholders
-
-| Feature Area | Lead |
-|------------------------|:------------------------------------------------------------------------------------:|
-| Technical Lead | Daniel Jiang [reasonerjt](https://github.com/reasonerjt) |
-| Kubernetes CSI Liaison | |
-| Deployment | |
-| Community Management | Orlin Vasilev [OrlinVasilev](https://github.com/OrlinVasilev) |
-| Product Management | Pradeep Kumar Chaturvedi [pradeepkchaturvedi](https://github.com/pradeepkchaturvedi) |
@@ -42,13 +42,11 @@ The following is a list of the supported Kubernetes versions for each Velero ver
 
 | Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
 |----------------|-------------------------------------------|-------------------------------------|
+| 1.18 | 1.18-latest | 1.33.7, 1.34.1, and 1.35.0 |
 | 1.17 | 1.18-latest | 1.31.7, 1.32.3, 1.33.1, and 1.34.0 |
 | 1.16 | 1.18-latest | 1.31.4, 1.32.3, and 1.33.0 |
 | 1.15 | 1.18-latest | 1.28.8, 1.29.8, 1.30.4 and 1.31.1 |
 | 1.14 | 1.18-latest | 1.27.9, 1.28.9, and 1.29.4 |
 | 1.13 | 1.18-latest | 1.26.5, 1.27.3, 1.27.8, and 1.28.3 |
 | 1.12 | 1.18-latest | 1.25.7, 1.26.5, 1.26.7, and 1.27.3 |
 | 1.11 | 1.18-latest | 1.23.10, 1.24.9, 1.25.5, and 1.26.1 |
 
 Velero supports IPv4, IPv6, and dual stack environments. Support for this was tested against Velero v1.8.
Tiltfile (2 changed lines)
@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
 
 tilt_helper_dockerfile_header = """
 # Tilt image
-FROM golang:1.25 as tilt-helper
+FROM golang:1.25.7 as tilt-helper
 
 # Support live reloading with Tilt
 RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \
changelogs/CHANGELOG-1.18.md (new file, 109 lines)

@@ -0,0 +1,109 @@
## v1.18

### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.18.0

### Container Image
`velero/velero:v1.18.0`

### Documentation
https://velero.io/docs/v1.18/

### Upgrading
https://velero.io/docs/v1.18/upgrade-to-1.18/

### Highlights
#### Concurrent backup
In v1.18, Velero can process multiple backups concurrently. This is a significant usability improvement, especially for multi-tenant or multi-user setups: backups submitted by different users can now run simultaneously without interfering with each other.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/concurrent-backup-processing.md for more details.
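Conceptually, the concurrent processing resembles a worker-per-backup fan-out. The sketch below is illustrative only, with invented names (`processBackup`, `runConcurrently`); Velero's real implementation is the controller change described in the design doc above.

```go
package main

import (
	"fmt"
	"sync"
)

// processBackup stands in for the per-backup reconciliation work; it is a
// placeholder for illustration, not Velero's actual controller code.
func processBackup(name string) string {
	return fmt.Sprintf("backup %s completed", name)
}

// runConcurrently processes every backup in its own goroutine, so backups
// submitted by different users proceed simultaneously instead of queueing
// behind one another.
func runConcurrently(backups []string) []string {
	results := make(chan string, len(backups))
	var wg sync.WaitGroup
	for _, b := range backups {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			results <- processBackup(name)
		}(b)
	}
	wg.Wait()
	close(results)

	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	out := runConcurrently([]string{"tenant-a-daily", "tenant-b-daily", "tenant-c-adhoc"})
	fmt.Println(len(out), "backups processed")
}
```

The real feature also enforces per-resource serialization (two backups touching the same repository still coordinate); this sketch shows only the fan-out idea.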
#### Cache volume for data movers
In v1.18, Velero allows users to configure cache volumes for data mover pods during restore, for CSI snapshot data movement and fs-backup. This brings the following benefits:
- Data mover pods no longer fail when the pod's ephemeral disk space is limited
- Multiple data mover pods can run concurrently on one node even when the node's ephemeral disk space is limited
- Together with the backup repository's cache limit configuration, an appropriately sized cache volume helps improve restore throughput
Check design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/backup-repo-cache-volume.md for more details.
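Conceptually, a cache volume is just an extra volume mounted into the data mover pod so repository cache I/O lands on dedicated storage instead of the node's ephemeral disk. The fragment below is a minimal, hypothetical illustration using a generic Kubernetes ephemeral volume; it is not the pod spec Velero actually generates, and all names are invented:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-data-mover
spec:
  containers:
    - name: data-mover
      image: velero/velero:v1.18.0
      volumeMounts:
        - name: repo-cache
          mountPath: /tmp/cache   # repository cache written here, not to ephemeral disk
  volumes:
    - name: repo-cache
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi
```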
#### Incremental size for data movers
In v1.18, Velero reports the incremental size of data mover backups for CSI snapshot data movement and fs-backup, so users can see the data reduction achieved by incremental backup.

#### Wildcard support for namespaces
In v1.18, Velero supports glob wildcard patterns in namespace filters during backup and restore, so users can include or exclude namespaces in batches.
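The expansion idea can be sketched in Go. This is illustrative only, not Velero's implementation: `path.Match` from the standard library handles `*`, `?`, and `[abc]`, and a minimal single-alternation expansion is bolted on for `{a,b,c}` (the real feature also validates and rejects unsupported patterns).

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// expandNamespaces matches a glob pattern against the cluster's active
// namespaces. Supports *, ? and [abc] via path.Match, plus one {a,b,c}
// alternation which is expanded into plain glob patterns first.
func expandNamespaces(pattern string, active []string) []string {
	patterns := []string{pattern}
	if i := strings.Index(pattern, "{"); i >= 0 {
		if j := strings.Index(pattern, "}"); j > i {
			patterns = nil
			for _, alt := range strings.Split(pattern[i+1:j], ",") {
				patterns = append(patterns, pattern[:i]+alt+pattern[j+1:])
			}
		}
	}

	var matched []string
	for _, ns := range active {
		for _, p := range patterns {
			if ok, _ := path.Match(p, ns); ok {
				matched = append(matched, ns)
				break
			}
		}
	}
	return matched
}

func main() {
	active := []string{"team-a", "team-b", "ops", "kube-system"}
	fmt.Println(expandNamespaces("team-*", active)) // [team-a team-b]
	fmt.Println(expandNamespaces("{ops,kube-system}", active))
}
```

Note that an empty result (e.g. a pattern matching no active namespace) is a valid outcome; per the changelog entry below, it yields an empty backup rather than an error.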
#### VolumePolicy for PVC phase
In v1.18, Velero VolumePolicy supports actions conditioned on PVC phase, letting users apply special handling to PVCs in a specific phase, for example skipping PVCs in Pending or Lost status during backup.
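As an illustration, a resource policies ConfigMap might skip unbound PVCs like this. The overall `volumePolicies`/`conditions`/`action` shape matches Velero's documented resource policies, but the phase condition key used here (`pvcPhase`) is a hypothetical name; check the v1.18 docs for the exact schema.

```yaml
version: v1
volumePolicies:
  # Hypothetical condition key: skip volumes whose PVC is Pending or Lost.
  - conditions:
      pvcPhase:
        - Pending
        - Lost
    action:
      type: skip
```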
#### Scalability and resiliency improvements
##### Prevent Velero server OOM kill for large backup repositories
In v1.18, some backup repository operations are executed lazily outside the Velero server process, so the Velero server won't be OOM killed by them.

#### Performance improvement for VolumePolicy
In v1.18, VolumePolicy evaluation is optimized for large numbers of pods/PVCs, significantly improving performance.
#### Events for data mover pod diagnostics
In v1.18, events are recorded in the data mover pod diagnostics, giving users more information for troubleshooting when a data mover pod fails.

### Runtime and dependencies
Golang runtime: 1.25.7
kopia: 0.22.3

### Limitations/Known issues

### Breaking changes
#### Deprecation of PVC selected node feature
According to the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md#deprecation-policy), the PVC selected node feature is deprecated in v1.18. Velero handles the PVC's selected-node annotation appropriately, so users don't need to take any particular action.

### All Changes
* Remove backup from running list when backup fails validation (#9498, @sseago)
* Maintenance Job only uses the first element of the LoadAffinity array (#9494, @blackpiglet)
* Fix issue #9478, add diagnostic info when expose peek fails (#9481, @Lyndon-Li)
* Add Role, RoleBinding, ClusterRole, and ClusterRoleBinding in restore sequence. (#9474, @blackpiglet)
* Add maintenance job and data mover pod's labels and annotations setting. (#9452, @blackpiglet)
* Fix plugin init container names exceeding DNS-1123 limit (#9445, @mpryc)
* Add PVC-to-Pod cache to improve volume policy performance (#9441, @shubham-pampattiwar)
* Remove VolumeSnapshotClass from CSI B/R process. (#9431, @blackpiglet)
* Use hookIndex for recording multiple restore exec hooks. (#9366, @blackpiglet)
* Sanitize Azure HTTP responses in BSL status messages (#9321, @shubham-pampattiwar)
* Remove labels associated with previous backups (#9206, @Joeavaikath)
* Add VolumePolicy support for PVC Phase conditions to allow skipping Pending PVCs (#9166, @claude)
* feat: Enhance BackupStorageLocation with Secret-based CA certificate support (#9141, @kaovilai)
* Add `--apply` flag to `install` command, allowing usage of Kubernetes apply to make changes to existing installs (#9132, @mjnagel)
* Fix issue #9194, add doc for GOMAXPROCS behavior change (#9420, @Lyndon-Li)
* Apply volume policies to VolumeGroupSnapshot PVC filtering (#9419, @shubham-pampattiwar)
* Fix issue #9276, add doc for cache volume support (#9418, @Lyndon-Li)
* Add Prometheus metrics for maintenance jobs (#9414, @shubham-pampattiwar)
* Fix issue #9400, connect repo first time after creation so that init params can be written (#9407, @Lyndon-Li)
* Cache volume for PVR (#9397, @Lyndon-Li)
* Cache volume support for DataDownload (#9391, @Lyndon-Li)
* don't copy securitycontext from first container if configmap found (#9389, @sseago)
* Refactor repo provider interface for static configuration (#9379, @Lyndon-Li)
* Fix issue #9365, prevent fake completion notification due to multiple updates of a single PVR (#9375, @Lyndon-Li)
* Add cache volume configuration (#9370, @Lyndon-Li)
* Track actual resource names for GenerateName in restore status (#9368, @shubham-pampattiwar)
* Fix managed fields patch for resources using GenerateName (#9367, @shubham-pampattiwar)
* Support cache volume for generic restore exposer and pod volume exposer (#9362, @Lyndon-Li)
* Add incrementalSize to DU/PVB for reporting new/changed size (#9357, @sseago)
* Add snapshotSize for DataDownload, PodVolumeRestore (#9354, @Lyndon-Li)
* Add cache dir configuration for udmrepo (#9353, @Lyndon-Li)
* Fix the Job build error when the BackupRepository name is longer than 63 characters. (#9350, @blackpiglet)
* Add cache configuration to VGDP (#9342, @Lyndon-Li)
* Fix issue #9332, add bytesDone for cache files (#9333, @Lyndon-Li)
* Fix typos in documentation (#9329, @T4iFooN-IX)
* Concurrent backup processing (#9307, @sseago)
* VerifyJSONConfigs verifies every element in Data. (#9302, @blackpiglet)
* Fix issue #9267, add events to data mover prepare diagnostic (#9296, @Lyndon-Li)
* Add option for privileged fs-backup pod (#9295, @sseago)
* Fix issue #9193, don't connect repo in repo controller (#9291, @Lyndon-Li)
* Implement concurrency control for cache of native VolumeSnapshotter plugin. (#9281, @0xLeo258)
* Fix issue #7904, remove the code and doc for PVC node selection (#9269, @Lyndon-Li)
* Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases (#9264, @shubham-pampattiwar)
* Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment (#9256, @shubham-pampattiwar)
* Implement wildcard namespace pattern expansion for backup namespace includes/excludes. This change adds support for wildcard patterns (*, ?, [abc], {a,b,c}) in namespace includes and excludes during backup operations (#9255, @Joeavaikath)
* Protect VolumeSnapshot field from race condition during multi-thread backup (#9248, @0xLeo258)
* Update AzureAD Microsoft Authentication Library to v1.5.0 (#9244, @priyansh17)
* Get pod list once per namespace in pvc IBA (#9226, @sseago)
* Fix issue #7725, add design for backup repo cache configuration (#9148, @Lyndon-Li)
* Fix issue #9229, don't attach backupPVC to the source node (#9233, @Lyndon-Li)
* feat: Permit specifying annotations for the BackupPVC (#9173, @clementnuss)
@@ -1 +0,0 @@
-Add `--apply` flag to `install` command, allowing usage of Kubernetes apply to make changes to existing installs
@@ -1 +0,0 @@
-feat: Enhance BackupStorageLocation with Secret-based CA certificate support
@@ -1 +0,0 @@
-Fix issue #7725, add design for backup repo cache configuration
@@ -1 +0,0 @@
-Add VolumePolicy support for PVC Phase conditions to allow skipping Pending PVCs
@@ -1 +0,0 @@
-feat: Permit specifying annotations for the BackupPVC
@@ -1 +0,0 @@
-Remove labels associated with previous backups
@@ -1 +0,0 @@
-Get pod list once per namespace in pvc IBA
@@ -1 +0,0 @@
-Fix issue #9229, don't attach backupPVC to the source node
@@ -1 +0,0 @@
-Update AzureAD Microsoft Authentication Library to v1.5.0
@@ -1 +0,0 @@
-Protect VolumeSnapshot field from race condition during multi-thread backup
@@ -1,10 +0,0 @@
-Implement wildcard namespace pattern expansion for backup namespace includes/excludes.
-
-This change adds support for wildcard patterns (*, ?, [abc], {a,b,c}) in namespace includes and excludes during backup operations.
-When wildcard patterns are detected, they are expanded against the list of active namespaces in the cluster before the backup proceeds.
-
-Key features:
-- Wildcard patterns in namespace includes/excludes are automatically detected and expanded
-- Pattern validation ensures unsupported patterns (regex, consecutive asterisks) are rejected
-- Empty wildcard results (e.g., "invalid*" matching no namespaces) correctly result in empty backups
-- Exact namespace names and "*" continue to work as before (no expansion needed)
@@ -1 +0,0 @@
-Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment
@@ -1 +0,0 @@
-Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases
@@ -1 +0,0 @@
-Fix issue #7904, remove the code and doc for PVC node selection
@@ -1 +0,0 @@
-Implement concurrency control for cache of native VolumeSnapshotter plugin.
@@ -1 +0,0 @@
-Fix issue #9193, don't connect repo in repo controller
@@ -1 +0,0 @@
-Add option for privileged fs-backup pod
@@ -1 +0,0 @@
-Fix issue #9267, add events to data mover prepare diagnostic
@@ -1 +0,0 @@
-VerifyJSONConfigs verify every elements in Data.
@@ -1 +0,0 @@
-Concurrent backup processing
@@ -1 +0,0 @@
-Sanitize Azure HTTP responses in BSL status messages
@@ -1 +0,0 @@
-Fix typos in documentation
@@ -1 +0,0 @@
-Fix issue #9332, add bytesDone for cache files
@@ -1 +0,0 @@
-Add cache configuration to VGDP
@@ -1 +0,0 @@
-Fix the Job build error when BackupReposiotry name longer than 63.
@@ -1 +0,0 @@
-Add cache dir configuration for udmrepo
@@ -1 +0,0 @@
-Add snapshotSize for DataDownload, PodVolumeRestore
@@ -1 +0,0 @@
-Add incrementalSize to DU/PVB for reporting new/changed size
@@ -1 +0,0 @@
-Support cache volume for generic restore exposer and pod volume exposer
@@ -1 +0,0 @@
-Use hookIndex for recording multiple restore exec hooks.
@@ -1 +0,0 @@
-Fix managed fields patch for resources using GenerateName
@@ -1 +0,0 @@
-Track actual resource names for GenerateName in restore status
@@ -1 +0,0 @@
-Add cache volume configuration
@@ -1 +0,0 @@
-Fix issue #9365, prevent fake completion notification due to multiple update of single PVR
@@ -1 +0,0 @@
-Refactor repo provider interface for static configuration
@@ -1 +0,0 @@
-don't copy securitycontext from first container if configmap found
@@ -1 +0,0 @@
-Cache volume support for DataDownload
@@ -1 +0,0 @@
-Cache volume for PVR
@@ -1 +0,0 @@
-Fix issue #9400, connect repo first time after creation so that init params could be written
@@ -1 +0,0 @@
-Add Prometheus metrics for maintenance jobs
@@ -1 +0,0 @@
-Fix issue #9276, add doc for cache volume support
@@ -1 +0,0 @@
-Apply volume policies to VolumeGroupSnapshot PVC filtering
@@ -1 +0,0 @@
-Fix issue #9194, add doc for GOMAXPROCS behavior change
@@ -1 +0,0 @@
-Remove VolumeSnapshotClass from CSI B/R process.
@@ -1 +0,0 @@
-Add PVC-to-Pod cache to improve volume policy performance
@@ -1 +0,0 @@
-Fix plugin init container names exceeding DNS-1123 limit
@@ -1 +0,0 @@
-Add maintenance job and data mover pod's labels and annotations setting.
changelogs/unreleased/9537-kaovilai (new file, 1 line)

@@ -0,0 +1 @@
+Fix VolumePolicy PVC phase condition filter for unbound PVCs (#9507)
changelogs/unreleased/9539-Joeavaikath (new file, 1 line)

@@ -0,0 +1 @@
+Support all glob wildcard characters in namespace validation
go.mod (2 changed lines)

@@ -1,6 +1,6 @@
 module github.com/vmware-tanzu/velero
 
-go 1.25.0
+go 1.25.7
 
 require (
 	cloud.google.com/go/storage v1.57.2
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FROM --platform=$TARGETPLATFORM golang:1.25-bookworm
+FROM --platform=$TARGETPLATFORM golang:1.25.7-bookworm
 
 ARG GOPROXY
 
@@ -21,9 +21,11 @@ ENV GO111MODULE=on
 ENV GOPROXY=${GOPROXY}
 
-# kubebuilder test bundle is separated from kubebuilder. Need to setup it for CI test.
-RUN curl -sSLo envtest-bins.tar.gz https://go.kubebuilder.io/test-tools/1.22.1/linux/$(go env GOARCH) && \
-    mkdir /usr/local/kubebuilder && \
-    tar -C /usr/local/kubebuilder --strip-components=1 -zvxf envtest-bins.tar.gz
+# Using setup-envtest to download envtest binaries
+RUN go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest && \
+    mkdir -p /usr/local/kubebuilder/bin && \
+    ENVTEST_ASSETS_DIR=$(setup-envtest use 1.33.0 --bin-dir /usr/local/kubebuilder/bin -p path) && \
+    cp -r ${ENVTEST_ASSETS_DIR}/* /usr/local/kubebuilder/bin/
 
 RUN wget --quiet https://github.com/kubernetes-sigs/kubebuilder/releases/download/v3.2.0/kubebuilder_linux_$(go env GOARCH) && \
     mv kubebuilder_linux_$(go env GOARCH) /usr/local/kubebuilder/bin/kubebuilder && \
@@ -1,5 +1,5 @@
diff --git a/go.mod b/go.mod
index 5f939c481..6ae17f4a1 100644
index 5f939c481..f6205aa3c 100644
--- a/go.mod
+++ b/go.mod
@@ -24,32 +24,31 @@ require (
@@ -14,13 +14,13 @@ index 5f939c481..6ae17f4a1 100644
- golang.org/x/term v0.4.0
- golang.org/x/text v0.6.0
- google.golang.org/api v0.106.0
+ golang.org/x/crypto v0.36.0
+ golang.org/x/net v0.38.0
+ golang.org/x/crypto v0.45.0
+ golang.org/x/net v0.47.0
+ golang.org/x/oauth2 v0.28.0
+ golang.org/x/sync v0.12.0
+ golang.org/x/sys v0.31.0
+ golang.org/x/term v0.30.0
+ golang.org/x/text v0.23.0
+ golang.org/x/sync v0.18.0
+ golang.org/x/sys v0.38.0
+ golang.org/x/term v0.37.0
+ golang.org/x/text v0.31.0
+ google.golang.org/api v0.114.0
)
@@ -64,11 +64,11 @@ index 5f939c481..6ae17f4a1 100644
)

-go 1.18
+go 1.23.0
+go 1.24.0
+
+toolchain go1.23.7
+toolchain go1.24.11
diff --git a/go.sum b/go.sum
index 026e1d2fa..805792055 100644
index 026e1d2fa..4a37e7ac7 100644
--- a/go.sum
+++ b/go.sum
@@ -1,23 +1,24 @@
@@ -170,8 +170,8 @@ index 026e1d2fa..805792055 100644
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.5.0 h1:U/0M97KRkSFvyD/3FSmdP5W5swImpNgle/EHFhOsQPE=
-golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU=
+golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
+golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
+golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
+golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -181,8 +181,8 @@ index 026e1d2fa..805792055 100644
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.5.0 h1:GyT4nK/YDHSqa1c4753ouYCDajOYKTja9Xb/OHtgvSw=
-golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
+golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
+golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
+golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
+golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
-golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
-golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
@@ -194,8 +194,8 @@ index 026e1d2fa..805792055 100644
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
-golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
+golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
+golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -205,21 +205,21 @@ index 026e1d2fa..805792055 100644
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18=
-golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
+golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
+golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
+golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.4.0 h1:O7UWfv5+A2qiuulQk30kVinPoMtoIPeVaKLEgLpVkvg=
-golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
+golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
+golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
+golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
+golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.6.0 h1:3XmdazWV+ubf7QgHSTWeykHOci5oeekaGJBLkrkaw4k=
-golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
+golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
+golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
+golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
@@ -134,6 +134,7 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
	pv := new(corev1api.PersistentVolume)
	var err error

+	var pvNotFoundErr error
	if groupResource == kuberesource.PersistentVolumeClaims {
		if err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &pvc); err != nil {
			v.logger.WithError(err).Error("fail to convert unstructured into PVC")
@@ -142,8 +143,10 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group

		pv, err = kubeutil.GetPVForPVC(pvc, v.client)
		if err != nil {
-			v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
-			return false, err
+			// Any error means PV not available - save to return later if no policy matches
+			v.logger.Debugf("PV not found for PVC %s: %v", pvc.Namespace+"/"+pvc.Name, err)
+			pvNotFoundErr = err
+			pv = nil
		}
	}

@@ -158,7 +161,7 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
		vfd := resourcepolicies.NewVolumeFilterData(pv, nil, pvc)
		action, err := v.volumePolicy.GetMatchAction(vfd)
		if err != nil {
-			v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for PV %s", pv.Name)
+			v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for %+v", vfd)
			return false, err
		}

@@ -167,15 +170,21 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
		// If there is no match action, go on to the next check.
		if action != nil {
			if action.Type == resourcepolicies.Snapshot {
-				v.logger.Infof(fmt.Sprintf("performing snapshot action for pv %s", pv.Name))
+				v.logger.Infof("performing snapshot action for %+v", vfd)
				return true, nil
			} else {
-				v.logger.Infof("Skip snapshot action for pv %s as the action type is %s", pv.Name, action.Type)
+				v.logger.Infof("Skip snapshot action for %+v as the action type is %s", vfd, action.Type)
				return false, nil
			}
		}
	}

+	// If resource is PVC, and PV is nil (e.g., Pending/Lost PVC with no matching policy), return the original error
+	if groupResource == kuberesource.PersistentVolumeClaims && pv == nil && pvNotFoundErr != nil {
+		v.logger.WithError(pvNotFoundErr).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
+		return false, pvNotFoundErr
+	}
+
	// If this PV is claimed, see if we've already taken a (pod volume backup)
	// snapshot of the contents of this PV. If so, don't take a snapshot.
	if pv.Spec.ClaimRef != nil {
@@ -209,7 +218,7 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
		return true, nil
	}

-	v.logger.Infof(fmt.Sprintf("skipping snapshot action for pv %s possibly due to no volume policy setting or snapshotVolumes is false", pv.Name))
+	v.logger.Infof("skipping snapshot action for pv %s possibly due to no volume policy setting or snapshotVolumes is false", pv.Name)
	return false, nil
}

@@ -219,6 +228,7 @@ func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod cor
		return false, nil
	}

+	var pvNotFoundErr error
	if v.volumePolicy != nil {
		var resource any
		var err error
@@ -230,10 +240,13 @@ func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod cor
				v.logger.WithError(err).Errorf("fail to get PVC for pod %s", pod.Namespace+"/"+pod.Name)
				return false, err
			}
-			resource, err = kubeutil.GetPVForPVC(pvc, v.client)
+			pvResource, err := kubeutil.GetPVForPVC(pvc, v.client)
			if err != nil {
-				v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
-				return false, err
+				// Any error means PV not available - save to return later if no policy matches
+				v.logger.Debugf("PV not found for PVC %s: %v", pvc.Namespace+"/"+pvc.Name, err)
+				pvNotFoundErr = err
+			} else {
+				resource = pvResource
			}
		}

@@ -260,6 +273,12 @@ func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod cor
				return false, nil
			}
		}

+		// If no policy matched and PV was not found, return the original error
+		if pvNotFoundErr != nil {
+			v.logger.WithError(pvNotFoundErr).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
+			return false, pvNotFoundErr
+		}
	}

	if v.shouldPerformFSBackupLegacy(volume, pod) {

@@ -286,7 +286,7 @@ func TestVolumeHelperImpl_ShouldPerformSnapshot(t *testing.T) {
			expectedErr: false,
		},
		{
-			name:          "PVC not having PV, return false and error case PV not found",
+			name:          "PVC not having PV, return false and error when no matching policy",
			inputObj:      builder.ForPersistentVolumeClaim("default", "example-pvc").StorageClass("gp2-csi").Result(),
			groupResource: kuberesource.PersistentVolumeClaims,
			resourcePolicies: &resourcepolicies.ResourcePolicies{
@@ -1234,3 +1234,312 @@ func TestNewVolumeHelperImplWithCache_UsesCache(t *testing.T) {
	require.NoError(t, err)
	require.False(t, shouldSnapshot, "Expected snapshot to be skipped due to fs-backup selection via cache")
}

// TestVolumeHelperImpl_ShouldPerformSnapshot_UnboundPVC tests that Pending and Lost PVCs with
// phase-based skip policies don't cause errors when GetPVForPVC would fail.
func TestVolumeHelperImpl_ShouldPerformSnapshot_UnboundPVC(t *testing.T) {
	testCases := []struct {
		name             string
		inputPVC         *corev1api.PersistentVolumeClaim
		resourcePolicies *resourcepolicies.ResourcePolicies
		shouldSnapshot   bool
		expectedErr      bool
	}{
		{
			name: "Pending PVC with phase-based skip policy should not error and return false",
			inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-pending").
				StorageClass("non-existent-class").
				Phase(corev1api.ClaimPending).
				Result(),
			resourcePolicies: &resourcepolicies.ResourcePolicies{
				Version: "v1",
				VolumePolicies: []resourcepolicies.VolumePolicy{
					{
						Conditions: map[string]any{
							"pvcPhase": []string{"Pending"},
						},
						Action: resourcepolicies.Action{
							Type: resourcepolicies.Skip,
						},
					},
				},
			},
			shouldSnapshot: false,
			expectedErr:    false,
		},
		{
			name: "Pending PVC without matching skip policy should error (no PV)",
			inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-pending-no-policy").
				StorageClass("non-existent-class").
				Phase(corev1api.ClaimPending).
				Result(),
			resourcePolicies: &resourcepolicies.ResourcePolicies{
				Version: "v1",
				VolumePolicies: []resourcepolicies.VolumePolicy{
					{
						Conditions: map[string]any{
							"storageClass": []string{"gp2-csi"},
						},
						Action: resourcepolicies.Action{
							Type: resourcepolicies.Skip,
						},
					},
				},
			},
			shouldSnapshot: false,
			expectedErr:    true,
		},
		{
			name: "Lost PVC with phase-based skip policy should not error and return false",
			inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
				StorageClass("some-class").
				Phase(corev1api.ClaimLost).
				Result(),
			resourcePolicies: &resourcepolicies.ResourcePolicies{
				Version: "v1",
				VolumePolicies: []resourcepolicies.VolumePolicy{
					{
						Conditions: map[string]any{
							"pvcPhase": []string{"Lost"},
						},
						Action: resourcepolicies.Action{
							Type: resourcepolicies.Skip,
						},
					},
				},
			},
			shouldSnapshot: false,
			expectedErr:    false,
		},
		{
			name: "Lost PVC with policy for Pending and Lost should not error and return false",
			inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
				StorageClass("some-class").
				Phase(corev1api.ClaimLost).
				Result(),
			resourcePolicies: &resourcepolicies.ResourcePolicies{
				Version: "v1",
				VolumePolicies: []resourcepolicies.VolumePolicy{
					{
						Conditions: map[string]any{
							"pvcPhase": []string{"Pending", "Lost"},
						},
						Action: resourcepolicies.Action{
							Type: resourcepolicies.Skip,
						},
					},
				},
			},
			shouldSnapshot: false,
			expectedErr:    false,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			fakeClient := velerotest.NewFakeControllerRuntimeClient(t)

			var p *resourcepolicies.Policies
			if tc.resourcePolicies != nil {
				p = &resourcepolicies.Policies{}
				err := p.BuildPolicy(tc.resourcePolicies)
				require.NoError(t, err)
			}

			vh := NewVolumeHelperImpl(
				p,
				ptr.To(true),
				logrus.StandardLogger(),
				fakeClient,
				false,
				false,
			)

			obj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.inputPVC)
			require.NoError(t, err)

			actualShouldSnapshot, actualError := vh.ShouldPerformSnapshot(&unstructured.Unstructured{Object: obj}, kuberesource.PersistentVolumeClaims)
			if tc.expectedErr {
				require.Error(t, actualError, "Want error; Got nil error")
				return
			}

			require.NoError(t, actualError)
			require.Equalf(t, tc.shouldSnapshot, actualShouldSnapshot, "Want shouldSnapshot as %t; Got shouldSnapshot as %t", tc.shouldSnapshot, actualShouldSnapshot)
		})
	}
}

// TestVolumeHelperImpl_ShouldPerformFSBackup_UnboundPVC tests that Pending and Lost PVCs with
// phase-based skip policies don't cause errors when GetPVForPVC would fail.
func TestVolumeHelperImpl_ShouldPerformFSBackup_UnboundPVC(t *testing.T) {
	testCases := []struct {
		name             string
		pod              *corev1api.Pod
		pvc              *corev1api.PersistentVolumeClaim
		resourcePolicies *resourcepolicies.ResourcePolicies
		shouldFSBackup   bool
		expectedErr      bool
	}{
		{
			name: "Pending PVC with phase-based skip policy should not error and return false",
			pod: builder.ForPod("ns", "pod-1").
				Volumes(
					&corev1api.Volume{
						Name: "vol-pending",
						VolumeSource: corev1api.VolumeSource{
							PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
								ClaimName: "pvc-pending",
							},
						},
					}).Result(),
			pvc: builder.ForPersistentVolumeClaim("ns", "pvc-pending").
				StorageClass("non-existent-class").
				Phase(corev1api.ClaimPending).
				Result(),
			resourcePolicies: &resourcepolicies.ResourcePolicies{
				Version: "v1",
				VolumePolicies: []resourcepolicies.VolumePolicy{
					{
						Conditions: map[string]any{
							"pvcPhase": []string{"Pending"},
						},
						Action: resourcepolicies.Action{
							Type: resourcepolicies.Skip,
						},
					},
				},
			},
			shouldFSBackup: false,
			expectedErr:    false,
		},
		{
			name: "Pending PVC without matching skip policy should error (no PV)",
			pod: builder.ForPod("ns", "pod-1").
				Volumes(
					&corev1api.Volume{
						Name: "vol-pending",
						VolumeSource: corev1api.VolumeSource{
							PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
								ClaimName: "pvc-pending-no-policy",
							},
						},
					}).Result(),
			pvc: builder.ForPersistentVolumeClaim("ns", "pvc-pending-no-policy").
				StorageClass("non-existent-class").
				Phase(corev1api.ClaimPending).
				Result(),
			resourcePolicies: &resourcepolicies.ResourcePolicies{
				Version: "v1",
				VolumePolicies: []resourcepolicies.VolumePolicy{
					{
						Conditions: map[string]any{
							"storageClass": []string{"gp2-csi"},
						},
						Action: resourcepolicies.Action{
							Type: resourcepolicies.Skip,
						},
					},
				},
			},
			shouldFSBackup: false,
			expectedErr:    true,
		},
		{
			name: "Lost PVC with phase-based skip policy should not error and return false",
			pod: builder.ForPod("ns", "pod-1").
				Volumes(
					&corev1api.Volume{
						Name: "vol-lost",
						VolumeSource: corev1api.VolumeSource{
							PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
								ClaimName: "pvc-lost",
							},
						},
					}).Result(),
			pvc: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
				StorageClass("some-class").
				Phase(corev1api.ClaimLost).
				Result(),
			resourcePolicies: &resourcepolicies.ResourcePolicies{
				Version: "v1",
				VolumePolicies: []resourcepolicies.VolumePolicy{
					{
						Conditions: map[string]any{
							"pvcPhase": []string{"Lost"},
						},
						Action: resourcepolicies.Action{
							Type: resourcepolicies.Skip,
						},
					},
				},
			},
			shouldFSBackup: false,
			expectedErr:    false,
		},
		{
			name: "Lost PVC with policy for Pending and Lost should not error and return false",
			pod: builder.ForPod("ns", "pod-1").
				Volumes(
					&corev1api.Volume{
						Name: "vol-lost",
						VolumeSource: corev1api.VolumeSource{
							PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
								ClaimName: "pvc-lost",
							},
						},
					}).Result(),
			pvc: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
				StorageClass("some-class").
				Phase(corev1api.ClaimLost).
				Result(),
			resourcePolicies: &resourcepolicies.ResourcePolicies{
				Version: "v1",
				VolumePolicies: []resourcepolicies.VolumePolicy{
					{
						Conditions: map[string]any{
							"pvcPhase": []string{"Pending", "Lost"},
						},
						Action: resourcepolicies.Action{
							Type: resourcepolicies.Skip,
						},
					},
				},
			},
			shouldFSBackup: false,
			expectedErr:    false,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			fakeClient := velerotest.NewFakeControllerRuntimeClient(t, tc.pvc)
			require.NoError(t, fakeClient.Create(t.Context(), tc.pod))

			var p *resourcepolicies.Policies
			if tc.resourcePolicies != nil {
				p = &resourcepolicies.Policies{}
				err := p.BuildPolicy(tc.resourcePolicies)
				require.NoError(t, err)
			}

			vh := NewVolumeHelperImpl(
				p,
				ptr.To(true),
				logrus.StandardLogger(),
				fakeClient,
				false,
				false,
			)

			actualShouldFSBackup, actualError := vh.ShouldPerformFSBackup(tc.pod.Spec.Volumes[0], *tc.pod)
			if tc.expectedErr {
				require.Error(t, actualError, "Want error; Got nil error")
				return
			}

			require.NoError(t, actualError)
			require.Equalf(t, tc.shouldFSBackup, actualShouldFSBackup, "Want shouldFSBackup as %t; Got shouldFSBackup as %t", tc.shouldFSBackup, actualShouldFSBackup)
		})
	}
}
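The hunks above implement a defer-the-error pattern: when a PVC is unbound (Pending or Lost), the PV lookup failure is stashed in `pvNotFoundErr` rather than returned immediately, and it only surfaces if no volume policy matches the PVC on its own. A simplified, self-contained sketch of that control flow follows; the types and helpers here are invented for illustration and are not Velero APIs:

```go
package main

import (
	"errors"
	"fmt"
)

// errPVNotFound stands in for the lookup failure GetPVForPVC would return.
var errPVNotFound = errors.New("PV not found")

// lookupPV is an invented stand-in for the PV lookup in the diff above.
func lookupPV(bound bool) (string, error) {
	if !bound {
		return "", errPVNotFound
	}
	return "pv-1", nil
}

// shouldSnapshot defers the lookup error: it only surfaces when no policy matched.
func shouldSnapshot(bound bool, policyMatchesSkip bool) (bool, error) {
	var pvNotFoundErr error
	pv, err := lookupPV(bound)
	if err != nil {
		pvNotFoundErr = err // save to return later if no policy matches
		pv = ""
	}
	if policyMatchesSkip {
		return false, nil // a skip policy matched; the missing PV no longer matters
	}
	if pv == "" && pvNotFoundErr != nil {
		return false, pvNotFoundErr // no policy matched; report the original error
	}
	return true, nil
}

func main() {
	fmt.Println(shouldSnapshot(false, true))  // pending PVC, matching skip policy: no error
	fmt.Println(shouldSnapshot(false, false)) // pending PVC, no matching policy: error surfaces
}
```

This mirrors why the "Pending PVC with phase-based skip policy" test cases expect no error while the "without matching skip policy" cases expect one.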
@@ -687,15 +687,14 @@ func (ib *itemBackupper) getMatchAction(obj runtime.Unstructured, groupResource
		return nil, errors.WithStack(err)
	}

-	pvName := pvc.Spec.VolumeName
-	if pvName == "" {
-		return nil, errors.Errorf("PVC has no volume backing this claim")
-	}
-
-	pv := &corev1api.PersistentVolume{}
-	if err := ib.kbClient.Get(context.Background(), kbClient.ObjectKey{Name: pvName}, pv); err != nil {
-		return nil, errors.WithStack(err)
+	var pv *corev1api.PersistentVolume
+	if pvName := pvc.Spec.VolumeName; pvName != "" {
+		pv = &corev1api.PersistentVolume{}
+		if err := ib.kbClient.Get(context.Background(), kbClient.ObjectKey{Name: pvName}, pv); err != nil {
+			return nil, errors.WithStack(err)
+		}
	}
+	// If pv is nil for unbound PVCs - policy matching will use PVC-only conditions
	vfd := resourcepolicies.NewVolumeFilterData(pv, nil, pvc)
	return ib.backupRequest.ResPolicies.GetMatchAction(vfd)
}
@@ -709,7 +708,10 @@ func (ib *itemBackupper) trackSkippedPV(obj runtime.Unstructured, groupResource
	if name, err := getPVName(obj, groupResource); len(name) > 0 && err == nil {
		ib.backupRequest.SkippedPVTracker.Track(name, approach, reason)
	} else if err != nil {
-		log.WithError(err).Warnf("unable to get PV name, skip tracking.")
+		// Log at info level for tracking purposes. This is not an error because
+		// it's expected for some resources (e.g., PVCs in Pending or Lost phase)
+		// to not have a PV name. This occurs when volume policy skips unbound PVCs.
+		log.WithError(err).Infof("unable to get PV name, skip tracking.")
	}
}

@@ -719,6 +721,17 @@ func (ib *itemBackupper) unTrackSkippedPV(obj runtime.Unstructured, groupResourc
	if name, err := getPVName(obj, groupResource); len(name) > 0 && err == nil {
		ib.backupRequest.SkippedPVTracker.Untrack(name)
	} else if err != nil {
+		// For PVCs in Pending or Lost phase, it's expected that there's no PV name.
+		// Log at debug level instead of warning to reduce noise.
+		if groupResource == kuberesource.PersistentVolumeClaims {
+			pvc := new(corev1api.PersistentVolumeClaim)
+			if convErr := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), pvc); convErr == nil {
+				if pvc.Status.Phase == corev1api.ClaimPending || pvc.Status.Phase == corev1api.ClaimLost {
+					log.WithError(err).Debugf("unable to get PV name for %s PVC, skip untracking.", pvc.Status.Phase)
+					return
+				}
+			}
+		}
		log.WithError(err).Warnf("unable to get PV name, skip untracking.")
	}
}
@@ -17,12 +17,15 @@ limitations under the License.
package backup

import (
	"bytes"
	"testing"

	"github.com/sirupsen/logrus"
	"github.com/stretchr/testify/require"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrlfake "sigs.k8s.io/controller-runtime/pkg/client/fake"

	"github.com/vmware-tanzu/velero/internal/resourcepolicies"
	"github.com/vmware-tanzu/velero/pkg/kuberesource"

	"github.com/stretchr/testify/assert"
@@ -269,3 +272,225 @@ func TestAddVolumeInfo(t *testing.T) {
		})
	}
}

func TestGetMatchAction_PendingLostPVC(t *testing.T) {
	scheme := runtime.NewScheme()
	require.NoError(t, corev1api.AddToScheme(scheme))

	// Create resource policies that skip Pending/Lost PVCs
	resPolicies := &resourcepolicies.ResourcePolicies{
		Version: "v1",
		VolumePolicies: []resourcepolicies.VolumePolicy{
			{
				Conditions: map[string]any{
					"pvcPhase": []string{"Pending", "Lost"},
				},
				Action: resourcepolicies.Action{
					Type: resourcepolicies.Skip,
				},
			},
		},
	}
	policies := &resourcepolicies.Policies{}
	err := policies.BuildPolicy(resPolicies)
	require.NoError(t, err)

	testCases := []struct {
		name           string
		pvc            *corev1api.PersistentVolumeClaim
		pv             *corev1api.PersistentVolume
		expectedAction *resourcepolicies.Action
		expectError    bool
	}{
		{
			name: "Pending PVC with no VolumeName should match pvcPhase policy",
			pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
				StorageClass("test-sc").
				Phase(corev1api.ClaimPending).
				Result(),
			pv:             nil,
			expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
			expectError:    false,
		},
		{
			name: "Lost PVC with no VolumeName should match pvcPhase policy",
			pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
				StorageClass("test-sc").
				Phase(corev1api.ClaimLost).
				Result(),
			pv:             nil,
			expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
			expectError:    false,
		},
		{
			name: "Bound PVC with VolumeName and matching PV should not match pvcPhase policy",
			pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
				StorageClass("test-sc").
				VolumeName("test-pv").
				Phase(corev1api.ClaimBound).
				Result(),
			pv:             builder.ForPersistentVolume("test-pv").StorageClass("test-sc").Result(),
			expectedAction: nil,
			expectError:    false,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			// Build fake client with PV if present
			clientBuilder := ctrlfake.NewClientBuilder().WithScheme(scheme)
			if tc.pv != nil {
				clientBuilder = clientBuilder.WithObjects(tc.pv)
			}
			fakeClient := clientBuilder.Build()

			ib := &itemBackupper{
				kbClient: fakeClient,
				backupRequest: &Request{
					ResPolicies: policies,
				},
			}

			// Convert PVC to unstructured
			pvcData, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pvc)
			require.NoError(t, err)
			obj := &unstructured.Unstructured{Object: pvcData}

			action, err := ib.getMatchAction(obj, kuberesource.PersistentVolumeClaims, csiBIAPluginName)
			if tc.expectError {
				require.Error(t, err)
			} else {
				require.NoError(t, err)
			}

			if tc.expectedAction == nil {
				assert.Nil(t, action)
			} else {
				require.NotNil(t, action)
				assert.Equal(t, tc.expectedAction.Type, action.Type)
			}
		})
	}
}

func TestTrackSkippedPV_PendingLostPVC(t *testing.T) {
	testCases := []struct {
		name string
		pvc  *corev1api.PersistentVolumeClaim
	}{
		{
			name: "Pending PVC should log at info level",
			pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
				Phase(corev1api.ClaimPending).
				Result(),
		},
		{
			name: "Lost PVC should log at info level",
			pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
				Phase(corev1api.ClaimLost).
				Result(),
		},
		{
			name: "Bound PVC without VolumeName should log at info level",
			pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
				Phase(corev1api.ClaimBound).
				Result(),
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			ib := &itemBackupper{
				backupRequest: &Request{
					SkippedPVTracker: NewSkipPVTracker(),
				},
			}

			// Set up log capture
			logOutput := &bytes.Buffer{}
			logger := logrus.New()
			logger.SetOutput(logOutput)
			logger.SetLevel(logrus.DebugLevel)

			// Convert PVC to unstructured
			pvcData, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pvc)
			require.NoError(t, err)
			obj := &unstructured.Unstructured{Object: pvcData}

			ib.trackSkippedPV(obj, kuberesource.PersistentVolumeClaims, "", "test reason", logger)

			logStr := logOutput.String()
			assert.Contains(t, logStr, "level=info")
			assert.Contains(t, logStr, "unable to get PV name, skip tracking.")
		})
	}
}

func TestUnTrackSkippedPV_PendingLostPVC(t *testing.T) {
	testCases := []struct {
		name               string
		pvc                *corev1api.PersistentVolumeClaim
		expectWarningLog   bool
		expectDebugMessage string
	}{
		{
			name: "Pending PVC should log at debug level, not warning",
			pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
				Phase(corev1api.ClaimPending).
				Result(),
			expectWarningLog:   false,
			expectDebugMessage: "unable to get PV name for Pending PVC, skip untracking.",
		},
		{
			name: "Lost PVC should log at debug level, not warning",
			pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
				Phase(corev1api.ClaimLost).
				Result(),
			expectWarningLog:   false,
			expectDebugMessage: "unable to get PV name for Lost PVC, skip untracking.",
		},
		{
			name: "Bound PVC without VolumeName should log warning",
			pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
				Phase(corev1api.ClaimBound).
				Result(),
			expectWarningLog:   true,
			expectDebugMessage: "",
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			ib := &itemBackupper{
				backupRequest: &Request{
					SkippedPVTracker: NewSkipPVTracker(),
				},
			}

			// Set up log capture
			logOutput := &bytes.Buffer{}
			logger := logrus.New()
			logger.SetOutput(logOutput)
			logger.SetLevel(logrus.DebugLevel)

			// Convert PVC to unstructured
			pvcData, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pvc)
			require.NoError(t, err)
			obj := &unstructured.Unstructured{Object: pvcData}

			ib.unTrackSkippedPV(obj, kuberesource.PersistentVolumeClaims, logger)

			logStr := logOutput.String()
			if tc.expectWarningLog {
				assert.Contains(t, logStr, "level=warning")
				assert.Contains(t, logStr, "unable to get PV name, skip untracking.")
			} else {
				assert.NotContains(t, logStr, "level=warning")
				if tc.expectDebugMessage != "" {
					assert.Contains(t, logStr, "level=debug")
					assert.Contains(t, logStr, tc.expectDebugMessage)
				}
			}
		})
	}
}
@@ -340,20 +340,16 @@ func (s *nodeAgentServer) run() {
		}
	}

-	var cachePVCConfig *velerotypes.CachePVC
-	if s.dataPathConfigs != nil && s.dataPathConfigs.CachePVCConfig != nil {
-		if err := s.validateCachePVCConfig(*s.dataPathConfigs.CachePVCConfig); err != nil {
-			s.logger.WithError(err).Warnf("Ignore cache config %v", s.dataPathConfigs.CachePVCConfig)
-		} else {
-			cachePVCConfig = s.dataPathConfigs.CachePVCConfig
-			s.logger.Infof("Using cache volume configs %v", s.dataPathConfigs.CachePVCConfig)
-		}
-	}
+	var cachePVCConfig *velerotypes.CachePVC
+	if s.dataPathConfigs != nil && s.dataPathConfigs.CachePVCConfig != nil {
+		cachePVCConfig = s.dataPathConfigs.CachePVCConfig
+		s.logger.Infof("Using customized cachePVC config %v", cachePVCConfig)
+	}

	var podLabels map[string]string
	if s.dataPathConfigs != nil && len(s.dataPathConfigs.PodLabels) > 0 {
		podLabels = s.dataPathConfigs.PodLabels
@@ -368,6 +364,8 @@ func (s *nodeAgentServer) run() {

	if s.backupRepoConfigs != nil {
		s.logger.Infof("Using backup repo config %v", s.backupRepoConfigs)
+	} else if cachePVCConfig != nil {
+		s.logger.Info("Backup repo config is not provided, using default values for cache volume configs")
	}

	pvbReconciler := controller.NewPodVolumeBackupReconciler(
@@ -115,7 +115,11 @@ var (
	"datauploads.velero.io",
	"persistentvolumes",
	"persistentvolumeclaims",
	"clusterroles",
	"roles",
	"serviceaccounts",
	"clusterrolebindings",
	"rolebindings",
	"secrets",
	"configmaps",
	"limitranges",
@@ -307,6 +307,16 @@ func (b *backupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr

	backupScheduleName := request.GetLabels()[velerov1api.ScheduleNameLabel]

+	b.backupTracker.Add(request.Namespace, request.Name)
+	defer func() {
+		switch request.Status.Phase {
+		case velerov1api.BackupPhaseCompleted, velerov1api.BackupPhasePartiallyFailed, velerov1api.BackupPhaseFailed, velerov1api.BackupPhaseFailedValidation:
+			b.backupTracker.Delete(request.Namespace, request.Name)
+		case velerov1api.BackupPhaseWaitingForPluginOperations, velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed, velerov1api.BackupPhaseFinalizing, velerov1api.BackupPhaseFinalizingPartiallyFailed:
+			b.backupTracker.AddPostProcessing(request.Namespace, request.Name)
+		}
+	}()
+
	if request.Status.Phase == velerov1api.BackupPhaseFailedValidation {
		log.Debug("failed to validate backup status")
		b.metrics.RegisterBackupValidationFailure(backupScheduleName)
@@ -318,16 +328,6 @@ func (b *backupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr
	// store ref to just-updated item for creating patch
	original = request.Backup.DeepCopy()

-	b.backupTracker.Add(request.Namespace, request.Name)
-	defer func() {
-		switch request.Status.Phase {
-		case velerov1api.BackupPhaseCompleted, velerov1api.BackupPhasePartiallyFailed, velerov1api.BackupPhaseFailed, velerov1api.BackupPhaseFailedValidation:
-			b.backupTracker.Delete(request.Namespace, request.Name)
-		case velerov1api.BackupPhaseWaitingForPluginOperations, velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed, velerov1api.BackupPhaseFinalizing, velerov1api.BackupPhaseFinalizingPartiallyFailed:
-			b.backupTracker.AddPostProcessing(request.Namespace, request.Name)
-		}
-	}()
-
	log.Debug("Running backup")

	b.metrics.RegisterBackupAttempt(backupScheduleName)

@@ -246,6 +246,7 @@ func TestProcessBackupValidationFailures(t *testing.T) {
		clock:         &clock.RealClock{},
		formatFlag:    formatFlag,
		metrics:       metrics.NewServerMetrics(),
+		backupTracker: NewBackupTracker(),
	}

	require.NotNil(t, test.backup)
@@ -292,8 +292,14 @@ func (r *DataDownloadReconciler) Reconcile(ctx context.Context, req ctrl.Request
		return ctrl.Result{}, nil
	} else if dd.Status.Phase == velerov2alpha1api.DataDownloadPhaseAccepted {
		if peekErr := r.restoreExposer.PeekExposed(ctx, getDataDownloadOwnerObject(dd)); peekErr != nil {
-			r.tryCancelDataDownload(ctx, dd, fmt.Sprintf("found a datadownload %s/%s with expose error: %s. mark it as cancel", dd.Namespace, dd.Name, peekErr))
			log.Errorf("Cancel dd %s/%s because of expose error %s", dd.Namespace, dd.Name, peekErr)
+
+			diags := strings.Split(r.restoreExposer.DiagnoseExpose(ctx, getDataDownloadOwnerObject(dd)), "\n")
+			for _, diag := range diags {
+				log.Warnf("[Diagnose DD expose]%s", diag)
+			}
+
+			r.tryCancelDataDownload(ctx, dd, fmt.Sprintf("found a datadownload %s/%s with expose error: %s. mark it as cancel", dd.Namespace, dd.Name, peekErr))
		} else if dd.Status.AcceptedTimestamp != nil {
			if time.Since(dd.Status.AcceptedTimestamp.Time) >= r.preparingTimeout {
				r.onPrepareTimeout(ctx, dd)
@@ -918,7 +924,7 @@ func (r *DataDownloadReconciler) setupExposeParam(dd *velerov2alpha1api.DataDown
			cacheVolume = &exposer.CacheConfigs{
				Limit:             limit,
				StorageClass:      r.cacheVolumeConfigs.StorageClass,
-				ResidentThreshold: r.cacheVolumeConfigs.ResidentThreshold,
+				ResidentThreshold: r.cacheVolumeConfigs.ResidentThresholdInMB << 20,
			}
		}
	}

@@ -561,6 +561,7 @@ func TestDataDownloadReconcile(t *testing.T) {
				ep.On("GetExposed", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil, nil)
			} else if test.isPeekExposeErr {
				ep.On("PeekExposed", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(errors.New("fake-peek-error"))
+				ep.On("DiagnoseExpose", mock.Anything, mock.Anything).Return("")
			}

			if !test.notMockCleanUp {
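One detail in the setupExposeParam hunk above is the unit conversion: `ResidentThresholdInMB << 20` turns a mebibyte count into bytes, because shifting left by 20 bits multiplies by 2^20 = 1,048,576. A quick standalone check of that arithmetic (the 512 MiB value is just an illustrative input, not a Velero default):

```go
package main

import "fmt"

func main() {
	const thresholdInMB int64 = 512
	// Shifting left by 20 bits multiplies by 2^20 = 1,048,576 (bytes per MiB).
	fmt.Println(thresholdInMB << 20) // 536870912
}
```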
@@ -298,8 +298,14 @@ func (r *DataUploadReconciler) Reconcile(ctx context.Context, req ctrl.Request)
		return ctrl.Result{}, nil
	} else if du.Status.Phase == velerov2alpha1api.DataUploadPhaseAccepted {
		if peekErr := ep.PeekExposed(ctx, getOwnerObject(du)); peekErr != nil {
-			r.tryCancelDataUpload(ctx, du, fmt.Sprintf("found a du %s/%s with expose error: %s. mark it as cancel", du.Namespace, du.Name, peekErr))
			log.Errorf("Cancel du %s/%s because of expose error %s", du.Namespace, du.Name, peekErr)
+
+			diags := strings.Split(ep.DiagnoseExpose(ctx, getOwnerObject(du)), "\n")
+			for _, diag := range diags {
+				log.Warnf("[Diagnose DU expose]%s", diag)
+			}
+
+			r.tryCancelDataUpload(ctx, du, fmt.Sprintf("found a du %s/%s with expose error: %s. mark it as cancel", du.Namespace, du.Name, peekErr))
		} else if du.Status.AcceptedTimestamp != nil {
			if time.Since(du.Status.AcceptedTimestamp.Time) >= r.preparingTimeout {
				r.onPrepareTimeout(ctx, du)
@@ -260,6 +260,12 @@ func (r *PodVolumeBackupReconciler) Reconcile(ctx context.Context, req ctrl.Requ
 	} else if pvb.Status.Phase == velerov1api.PodVolumeBackupPhaseAccepted {
 		if peekErr := r.exposer.PeekExposed(ctx, getPVBOwnerObject(pvb)); peekErr != nil {
 			log.Errorf("Cancel PVB %s/%s because of expose error %s", pvb.Namespace, pvb.Name, peekErr)
+
+			diags := strings.Split(r.exposer.DiagnoseExpose(ctx, getPVBOwnerObject(pvb)), "\n")
+			for _, diag := range diags {
+				log.Warnf("[Diagnose PVB expose]%s", diag)
+			}
+
 			r.tryCancelPodVolumeBackup(ctx, pvb, fmt.Sprintf("found a PVB %s/%s with expose error: %s. mark it as cancel", pvb.Namespace, pvb.Name, peekErr))
 		} else if pvb.Status.AcceptedTimestamp != nil {
 			if time.Since(pvb.Status.AcceptedTimestamp.Time) >= r.preparingTimeout {

@@ -274,6 +274,12 @@ func (r *PodVolumeRestoreReconciler) Reconcile(ctx context.Context, req ctrl.Req
 	} else if pvr.Status.Phase == velerov1api.PodVolumeRestorePhaseAccepted {
 		if peekErr := r.exposer.PeekExposed(ctx, getPVROwnerObject(pvr)); peekErr != nil {
 			log.Errorf("Cancel PVR %s/%s because of expose error %s", pvr.Namespace, pvr.Name, peekErr)
+
+			diags := strings.Split(r.exposer.DiagnoseExpose(ctx, getPVROwnerObject(pvr)), "\n")
+			for _, diag := range diags {
+				log.Warnf("[Diagnose PVR expose]%s", diag)
+			}
+
 			_ = r.tryCancelPodVolumeRestore(ctx, pvr, fmt.Sprintf("found a PVR %s/%s with expose error: %s. mark it as cancel", pvr.Namespace, pvr.Name, peekErr))
 		} else if pvr.Status.AcceptedTimestamp != nil {
 			if time.Since(pvr.Status.AcceptedTimestamp.Time) >= r.preparingTimeout {
@@ -934,7 +940,7 @@ func (r *PodVolumeRestoreReconciler) setupExposeParam(pvr *velerov1api.PodVolume
 		cacheVolume = &exposer.CacheConfigs{
 			Limit:        limit,
 			StorageClass: r.cacheVolumeConfigs.StorageClass,
-			ResidentThreshold: r.cacheVolumeConfigs.ResidentThreshold,
+			ResidentThreshold: r.cacheVolumeConfigs.ResidentThresholdInMB << 20,
 		}
 	}
 }

@@ -1024,6 +1024,7 @@ func TestPodVolumeRestoreReconcile(t *testing.T) {
 				ep.On("GetExposed", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil, nil)
 			} else if test.isPeekExposeErr {
 				ep.On("PeekExposed", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(errors.New("fake-peek-error"))
+				ep.On("DiagnoseExpose", mock.Anything, mock.Anything).Return("")
 			}

 			if !test.notMockCleanUp {

@@ -1307,6 +1307,7 @@ func Test_csiSnapshotExposer_DiagnoseExpose(t *testing.T) {
 				Message: "fake-pod-message",
 			},
 		},
+		Message: "fake-pod-message-1",
 	},
 }

@@ -1501,7 +1502,7 @@ end diagnose CSI exposer`,
 				&backupVSWithoutStatus,
 			},
 			expected: `begin diagnose CSI exposer
-Pod velero/fake-backup, phase Pending, node name 
+Pod velero/fake-backup, phase Pending, node name , message fake-pod-message-1
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-backup, phase Pending, binding to 
 VS velero/fake-backup, bind to , readyToUse false, errMessage 
@@ -1518,7 +1519,7 @@ end diagnose CSI exposer`,
 				&backupVSWithoutVSC,
 			},
 			expected: `begin diagnose CSI exposer
-Pod velero/fake-backup, phase Pending, node name 
+Pod velero/fake-backup, phase Pending, node name , message fake-pod-message-1
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-backup, phase Pending, binding to 
 VS velero/fake-backup, bind to , readyToUse false, errMessage 
@@ -1535,7 +1536,7 @@ end diagnose CSI exposer`,
 				&backupVSWithoutVSC,
 			},
 			expected: `begin diagnose CSI exposer
-Pod velero/fake-backup, phase Pending, node name fake-node
+Pod velero/fake-backup, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 node-agent is not running in node fake-node, err: daemonset pod not found in running state in node fake-node
 PVC velero/fake-backup, phase Pending, binding to 
@@ -1554,7 +1555,7 @@ end diagnose CSI exposer`,
 				&backupVSWithoutVSC,
 			},
 			expected: `begin diagnose CSI exposer
-Pod velero/fake-backup, phase Pending, node name fake-node
+Pod velero/fake-backup, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-backup, phase Pending, binding to 
 VS velero/fake-backup, bind to , readyToUse false, errMessage 
@@ -1572,7 +1573,7 @@ end diagnose CSI exposer`,
 				&backupVSWithoutVSC,
 			},
 			expected: `begin diagnose CSI exposer
-Pod velero/fake-backup, phase Pending, node name fake-node
+Pod velero/fake-backup, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-backup, phase Pending, binding to fake-pv
 error getting backup pv fake-pv, err: persistentvolumes "fake-pv" not found
@@ -1592,7 +1593,7 @@ end diagnose CSI exposer`,
 				&backupVSWithoutVSC,
 			},
 			expected: `begin diagnose CSI exposer
-Pod velero/fake-backup, phase Pending, node name fake-node
+Pod velero/fake-backup, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-backup, phase Pending, binding to fake-pv
 PV fake-pv, phase Pending, reason , message fake-pv-message
@@ -1612,7 +1613,7 @@ end diagnose CSI exposer`,
 				&backupVSWithVSC,
 			},
 			expected: `begin diagnose CSI exposer
-Pod velero/fake-backup, phase Pending, node name fake-node
+Pod velero/fake-backup, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-backup, phase Pending, binding to fake-pv
 PV fake-pv, phase Pending, reason , message fake-pv-message
@@ -1634,7 +1635,7 @@ end diagnose CSI exposer`,
 				&backupVSC,
 			},
 			expected: `begin diagnose CSI exposer
-Pod velero/fake-backup, phase Pending, node name fake-node
+Pod velero/fake-backup, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-backup, phase Pending, binding to fake-pv
 PV fake-pv, phase Pending, reason , message fake-pv-message
@@ -1698,7 +1699,7 @@ end diagnose CSI exposer`,
 				&backupVSC,
 			},
 			expected: `begin diagnose CSI exposer
-Pod velero/fake-backup, phase Pending, node name fake-node
+Pod velero/fake-backup, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 Pod event reason reason-2, message message-2
 Pod event reason reason-6, message message-6

@@ -664,6 +664,7 @@ func Test_ReastoreDiagnoseExpose(t *testing.T) {
 				Message: "fake-pod-message",
 			},
 		},
+		Message: "fake-pod-message-1",
 	},
 }

@@ -815,7 +816,7 @@ end diagnose restore exposer`,
 				&restorePVCWithoutVolumeName,
 			},
 			expected: `begin diagnose restore exposer
-Pod velero/fake-restore, phase Pending, node name 
+Pod velero/fake-restore, phase Pending, node name , message fake-pod-message-1
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-restore, phase Pending, binding to 
 end diagnose restore exposer`,
@@ -828,7 +829,7 @@ end diagnose restore exposer`,
 				&restorePVCWithoutVolumeName,
 			},
 			expected: `begin diagnose restore exposer
-Pod velero/fake-restore, phase Pending, node name 
+Pod velero/fake-restore, phase Pending, node name , message fake-pod-message-1
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-restore, phase Pending, binding to 
 end diagnose restore exposer`,
@@ -841,7 +842,7 @@ end diagnose restore exposer`,
 				&restorePVCWithoutVolumeName,
 			},
 			expected: `begin diagnose restore exposer
-Pod velero/fake-restore, phase Pending, node name fake-node
+Pod velero/fake-restore, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 node-agent is not running in node fake-node, err: daemonset pod not found in running state in node fake-node
 PVC velero/fake-restore, phase Pending, binding to 
@@ -856,7 +857,7 @@ end diagnose restore exposer`,
 				&nodeAgentPod,
 			},
 			expected: `begin diagnose restore exposer
-Pod velero/fake-restore, phase Pending, node name fake-node
+Pod velero/fake-restore, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-restore, phase Pending, binding to 
 end diagnose restore exposer`,
@@ -870,7 +871,7 @@ end diagnose restore exposer`,
 				&nodeAgentPod,
 			},
 			expected: `begin diagnose restore exposer
-Pod velero/fake-restore, phase Pending, node name fake-node
+Pod velero/fake-restore, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-restore, phase Pending, binding to fake-pv
 error getting restore pv fake-pv, err: persistentvolumes "fake-pv" not found
@@ -886,7 +887,7 @@ end diagnose restore exposer`,
 				&nodeAgentPod,
 			},
 			expected: `begin diagnose restore exposer
-Pod velero/fake-restore, phase Pending, node name fake-node
+Pod velero/fake-restore, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-restore, phase Pending, binding to fake-pv
 PV fake-pv, phase Pending, reason , message fake-pv-message
@@ -902,7 +903,7 @@ end diagnose restore exposer`,
 				&nodeAgentPod,
 			},
 			expected: `begin diagnose restore exposer
-Pod velero/fake-restore, phase Pending, node name fake-node
+Pod velero/fake-restore, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-restore, phase Pending, binding to fake-pv
 error getting restore pv fake-pv, err: persistentvolumes "fake-pv" not found
@@ -922,7 +923,7 @@ end diagnose restore exposer`,
 				&nodeAgentPod,
 			},
 			expected: `begin diagnose restore exposer
-Pod velero/fake-restore, phase Pending, node name fake-node
+Pod velero/fake-restore, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-restore, phase Pending, binding to fake-pv
 PV fake-pv, phase Pending, reason , message fake-pv-message
@@ -975,7 +976,7 @@ end diagnose restore exposer`,
 				},
 			},
 			expected: `begin diagnose restore exposer
-Pod velero/fake-restore, phase Pending, node name fake-node
+Pod velero/fake-restore, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 Pod event reason reason-2, message message-2
 Pod event reason reason-5, message message-5

@@ -592,6 +592,7 @@ func TestPodVolumeDiagnoseExpose(t *testing.T) {
 				Message: "fake-pod-message",
 			},
 		},
+		Message: "fake-pod-message-1",
 	},
 }

@@ -691,7 +692,7 @@ end diagnose pod volume exposer`,
 				&backupPodWithoutNodeName,
 			},
 			expected: `begin diagnose pod volume exposer
-Pod velero/fake-backup, phase Pending, node name 
+Pod velero/fake-backup, phase Pending, node name , message fake-pod-message-1
 Pod condition Initialized, status True, reason , message fake-pod-message
 end diagnose pod volume exposer`,
 		},
@@ -702,7 +703,7 @@ end diagnose pod volume exposer`,
 				&backupPodWithoutNodeName,
 			},
 			expected: `begin diagnose pod volume exposer
-Pod velero/fake-backup, phase Pending, node name 
+Pod velero/fake-backup, phase Pending, node name , message fake-pod-message-1
 Pod condition Initialized, status True, reason , message fake-pod-message
 end diagnose pod volume exposer`,
 		},
@@ -713,7 +714,7 @@ end diagnose pod volume exposer`,
 				&backupPodWithNodeName,
 			},
 			expected: `begin diagnose pod volume exposer
-Pod velero/fake-backup, phase Pending, node name fake-node
+Pod velero/fake-backup, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 node-agent is not running in node fake-node, err: daemonset pod not found in running state in node fake-node
 end diagnose pod volume exposer`,
@@ -726,7 +727,7 @@ end diagnose pod volume exposer`,
 				&nodeAgentPod,
 			},
 			expected: `begin diagnose pod volume exposer
-Pod velero/fake-backup, phase Pending, node name fake-node
+Pod velero/fake-backup, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 end diagnose pod volume exposer`,
 		},
@@ -739,7 +740,7 @@ end diagnose pod volume exposer`,
 				&nodeAgentPod,
 			},
 			expected: `begin diagnose pod volume exposer
-Pod velero/fake-backup, phase Pending, node name fake-node
+Pod velero/fake-backup, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-backup-cache, phase Pending, binding to fake-pv-cache
 error getting cache pv fake-pv-cache, err: persistentvolumes "fake-pv-cache" not found
@@ -755,7 +756,7 @@ end diagnose pod volume exposer`,
 				&nodeAgentPod,
 			},
 			expected: `begin diagnose pod volume exposer
-Pod velero/fake-backup, phase Pending, node name fake-node
+Pod velero/fake-backup, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 PVC velero/fake-backup-cache, phase Pending, binding to fake-pv-cache
 PV fake-pv-cache, phase Pending, reason , message fake-pv-message
@@ -797,7 +798,7 @@ end diagnose pod volume exposer`,
 				},
 			},
 			expected: `begin diagnose pod volume exposer
-Pod velero/fake-backup, phase Pending, node name fake-node
+Pod velero/fake-backup, phase Pending, node name fake-node, message 
 Pod condition Initialized, status True, reason , message fake-pod-message
 Pod event reason reason-2, message message-2
 Pod event reason reason-4, message message-4

@@ -18,6 +18,7 @@ limitations under the License.

 import (
 	"context"
+	"io"

 	"github.com/kopia/kopia/repo/logging"
 	"github.com/sirupsen/logrus"
@@ -30,6 +31,10 @@ type kopiaLog struct {
 	logger logrus.FieldLogger
 }

+type repoLog struct {
+	logger logrus.FieldLogger
+}
+
 // SetupKopiaLog sets the Kopia log handler to the specific context, Kopia modules
 // call the logger in the context to write logs
 func SetupKopiaLog(ctx context.Context, logger logrus.FieldLogger) context.Context {
@@ -39,6 +44,10 @@ func SetupKopiaLog(ctx context.Context, logger logrus.FieldLogger) context.Conte
 	})
 }

+func RepositoryLogger(logger logrus.FieldLogger) io.Writer {
+	return &repoLog{logger: logger}
+}
+
 // Enabled decides whether a given logging level is enabled when logging a message
 func (kl *kopiaLog) Enabled(level zapcore.Level) bool {
 	entry := kl.logger.WithField("null", "null")
@@ -160,3 +169,9 @@ func (kl *kopiaLog) logrusFieldsForWrite(ent zapcore.Entry, fields []zapcore.Fie

 	return copied
 }
+
+func (rl *repoLog) Write(p []byte) (int, error) {
+	rl.logger.Debug(string(p))
+
+	return len(p), nil
+}

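The `repoLog` type above is an adapter: it wraps a logrus logger in an `io.Writer` so that Kopia's content log, which only knows how to write bytes, can be routed into Velero's structured logging at debug level. A minimal, self-contained sketch of the same pattern (the `logf` callback stands in for the real logrus logger, which is an assumption for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// logWriter adapts a logging callback to io.Writer, mirroring the
// repoLog type in the diff: every Write forwards the bytes to the
// logger and reports the full length as consumed.
type logWriter struct {
	logf func(string)
}

func (w *logWriter) Write(p []byte) (int, error) {
	// Trim the trailing newline the log subsystem usually appends.
	w.logf(strings.TrimRight(string(p), "\n"))
	return len(p), nil
}

func main() {
	var captured []string
	w := &logWriter{logf: func(s string) { captured = append(captured, s) }}
	// Anything that accepts an io.Writer can now feed the logger.
	fmt.Fprintln(w, "hello from kopia")
	fmt.Println(captured[0])
}
```

Returning `len(p), nil` unconditionally is deliberate: log writers should never propagate errors back into the code path that happens to be logging.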
@@ -210,11 +210,9 @@ func resultsKey(ns, name string) string {

 func (b *backupper) getMatchAction(resPolicies *resourcepolicies.Policies, pvc *corev1api.PersistentVolumeClaim, volume *corev1api.Volume) (*resourcepolicies.Action, error) {
 	if pvc != nil {
-		pv := new(corev1api.PersistentVolume)
-		err := b.crClient.Get(context.TODO(), ctrlclient.ObjectKey{Name: pvc.Spec.VolumeName}, pv)
-		if err != nil {
-			return nil, errors.Wrapf(err, "error getting pv for pvc %s", pvc.Spec.VolumeName)
-		}
+		// Ignore err, if the PV is not available (Pending/Lost PVC or PV fetch failed) - try matching with PVC only
+		// GetPVForPVC returns nil for all error cases
+		pv, _ := kube.GetPVForPVC(pvc, b.crClient)
 		vfd := resourcepolicies.NewVolumeFilterData(pv, nil, pvc)
 		return resPolicies.GetMatchAction(vfd)
 	}

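The hunk above changes the failure mode: a missing or unbound PV no longer aborts policy matching, it just means matching proceeds with the PVC alone. A toy sketch of that fallback, with a map standing in for the cluster and a hypothetical `getPVForPVC` standing in for `kube.GetPVForPVC`:

```go
package main

import (
	"errors"
	"fmt"
)

// getPVForPVC is an illustrative stand-in for kube.GetPVForPVC: it
// fails when the PVC is unbound (no volume name) or the PV is absent.
// The caller deliberately ignores the error and falls back to
// PVC-only matching, as the diff's comment describes.
func getPVForPVC(volumeName string, pvs map[string]string) (string, error) {
	if volumeName == "" {
		return "", errors.New("PVC is not bound")
	}
	sc, ok := pvs[volumeName]
	if !ok {
		return "", fmt.Errorf("PV %s not found", volumeName)
	}
	return sc, nil
}

func main() {
	pvs := map[string]string{"pv-1": "gp2"}

	// Bound PVC: the PV is available, so the policy can match on it.
	if sc, err := getPVForPVC("pv-1", pvs); err == nil {
		fmt.Println("match with PV, storage class:", sc)
	}

	// Pending PVC: the lookup fails, but instead of erroring out the
	// caller matches against the PVC alone.
	if _, err := getPVForPVC("", pvs); err != nil {
		fmt.Println("match with PVC only")
	}
}
```

This is why the new `pvcPhase` test cases below can expect a skip action for Pending and Lost PVCs rather than an error.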
@@ -309,8 +309,8 @@ func createNodeObj() *corev1api.Node {

 func TestBackupPodVolumes(t *testing.T) {
 	scheme := runtime.NewScheme()
-	velerov1api.AddToScheme(scheme)
-	corev1api.AddToScheme(scheme)
+	require.NoError(t, velerov1api.AddToScheme(scheme))
+	require.NoError(t, corev1api.AddToScheme(scheme))
 	log := logrus.New()

 	tests := []struct {
@@ -778,7 +778,7 @@ func TestWaitAllPodVolumesProcessed(t *testing.T) {

 		backuper := newBackupper(c.ctx, log, nil, nil, informer, nil, "", &velerov1api.Backup{})
 		if c.pvb != nil {
-			backuper.pvbIndexer.Add(c.pvb)
+			require.NoError(t, backuper.pvbIndexer.Add(c.pvb))
 			backuper.wg.Add(1)
 		}

@@ -833,3 +833,185 @@ func TestPVCBackupSummary(t *testing.T) {
 	assert.Empty(t, pbs.Skipped)
 	assert.Len(t, pbs.Backedup, 2)
 }
+
+func TestGetMatchAction_PendingPVC(t *testing.T) {
+	// Create resource policies that skip Pending/Lost PVCs
+	resPolicies := &resourcepolicies.ResourcePolicies{
+		Version: "v1",
+		VolumePolicies: []resourcepolicies.VolumePolicy{
+			{
+				Conditions: map[string]any{
+					"pvcPhase": []string{"Pending", "Lost"},
+				},
+				Action: resourcepolicies.Action{
+					Type: resourcepolicies.Skip,
+				},
+			},
+		},
+	}
+	policies := &resourcepolicies.Policies{}
+	err := policies.BuildPolicy(resPolicies)
+	require.NoError(t, err)
+
+	testCases := []struct {
+		name           string
+		pvc            *corev1api.PersistentVolumeClaim
+		volume         *corev1api.Volume
+		pv             *corev1api.PersistentVolume
+		expectedAction *resourcepolicies.Action
+		expectError    bool
+	}{
+		{
+			name: "Pending PVC with pvcPhase skip policy should return skip action",
+			pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
+				StorageClass("test-sc").
+				Phase(corev1api.ClaimPending).
+				Result(),
+			volume: &corev1api.Volume{
+				Name: "test-volume",
+				VolumeSource: corev1api.VolumeSource{
+					PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
+						ClaimName: "pending-pvc",
+					},
+				},
+			},
+			pv:             nil,
+			expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
+			expectError:    false,
+		},
+		{
+			name: "Lost PVC with pvcPhase skip policy should return skip action",
+			pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
+				StorageClass("test-sc").
+				Phase(corev1api.ClaimLost).
+				Result(),
+			volume: &corev1api.Volume{
+				Name: "test-volume",
+				VolumeSource: corev1api.VolumeSource{
+					PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
+						ClaimName: "lost-pvc",
+					},
+				},
+			},
+			pv:             nil,
+			expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
+			expectError:    false,
+		},
+		{
+			name: "Bound PVC with matching PV should not match pvcPhase policy",
+			pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
+				StorageClass("test-sc").
+				VolumeName("test-pv").
+				Phase(corev1api.ClaimBound).
+				Result(),
+			volume: &corev1api.Volume{
+				Name: "test-volume",
+				VolumeSource: corev1api.VolumeSource{
+					PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
+						ClaimName: "bound-pvc",
+					},
+				},
+			},
+			pv:             builder.ForPersistentVolume("test-pv").StorageClass("test-sc").Result(),
+			expectedAction: nil,
+			expectError:    false,
+		},
+		{
+			name: "Pending PVC with no matching policy should return nil action",
+			pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc-no-match").
+				StorageClass("test-sc").
+				Phase(corev1api.ClaimPending).
+				Result(),
+			volume: &corev1api.Volume{
+				Name: "test-volume",
+				VolumeSource: corev1api.VolumeSource{
+					PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
+						ClaimName: "pending-pvc-no-match",
+					},
+				},
+			},
+			pv:             nil,
+			expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip}, // Will match the pvcPhase policy
+			expectError:    false,
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			// Build fake client with PV if present
+			var objs []runtime.Object
+			if tc.pv != nil {
+				objs = append(objs, tc.pv)
+			}
+			fakeClient := velerotest.NewFakeControllerRuntimeClient(t, objs...)
+
+			b := &backupper{
+				crClient: fakeClient,
+			}
+
+			action, err := b.getMatchAction(policies, tc.pvc, tc.volume)
+			if tc.expectError {
+				require.Error(t, err)
+			} else {
+				require.NoError(t, err)
+			}
+
+			if tc.expectedAction == nil {
+				assert.Nil(t, action)
+			} else {
+				require.NotNil(t, action)
+				assert.Equal(t, tc.expectedAction.Type, action.Type)
+			}
+		})
+	}
+}
+
+func TestGetMatchAction_PVCWithoutPVLookupError(t *testing.T) {
+	// Test that when a PVC has a VolumeName but the PV doesn't exist,
+	// the function ignores the error and tries to match with PVC only
+	resPolicies := &resourcepolicies.ResourcePolicies{
+		Version: "v1",
+		VolumePolicies: []resourcepolicies.VolumePolicy{
+			{
+				Conditions: map[string]any{
+					"pvcPhase": []string{"Pending"},
+				},
+				Action: resourcepolicies.Action{
+					Type: resourcepolicies.Skip,
+				},
+			},
+		},
+	}
+	policies := &resourcepolicies.Policies{}
+	err := policies.BuildPolicy(resPolicies)
+	require.NoError(t, err)
+
+	// Pending PVC without a matching PV in the cluster
+	pvc := builder.ForPersistentVolumeClaim("ns", "pending-pvc").
+		StorageClass("test-sc").
+		Phase(corev1api.ClaimPending).
+		Result()
+
+	volume := &corev1api.Volume{
+		Name: "test-volume",
+		VolumeSource: corev1api.VolumeSource{
+			PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
+				ClaimName: "pending-pvc",
+			},
+		},
+	}
+
+	// Empty client - no PV exists
+	fakeClient := velerotest.NewFakeControllerRuntimeClient(t)
+
+	b := &backupper{
+		crClient: fakeClient,
+	}
+
+	// Should succeed even though PV lookup would fail
+	// because the function ignores PV lookup errors and uses PVC-only matching
+	action, err := b.getMatchAction(policies, pvc, volume)
+	require.NoError(t, err)
+	require.NotNil(t, action)
+	assert.Equal(t, resourcepolicies.Skip, action.Type)
+}

@@ -671,7 +671,8 @@ func buildJob(
 	}

 	if config != nil && len(config.LoadAffinities) > 0 {
-		affinity := kube.ToSystemAffinity(config.LoadAffinities)
+		// Maintenance job only takes the first loadAffinity.
+		affinity := kube.ToSystemAffinity([]*kube.LoadAffinity{config.LoadAffinities[0]})
 		job.Spec.Template.Spec.Affinity = affinity
 	}

@@ -19,6 +19,7 @@ package kopialib
 import (
 	"context"
 	"encoding/json"
+	"io"
 	"os"
 	"strings"
 	"sync/atomic"
@@ -74,7 +75,9 @@ type kopiaObjectWriter struct {
 	rawWriter object.Writer
 }

-type openOptions struct{}
+type openOptions struct {
+	repoLogger io.Writer
+}

 const (
 	defaultLogInterval = time.Second * 10
@@ -154,7 +157,7 @@ func (ks *kopiaRepoService) Open(ctx context.Context, repoOption udmrepo.RepoOpt

 	repoCtx := kopia.SetupKopiaLog(ctx, ks.logger)

-	r, err := openKopiaRepo(repoCtx, repoConfig, repoOption.RepoPassword, nil)
+	r, err := openKopiaRepo(repoCtx, repoConfig, repoOption.RepoPassword, &openOptions{repoLogger: kopia.RepositoryLogger(ks.logger)})
 	if err != nil {
 		return nil, err
 	}
@@ -199,7 +202,7 @@ func (ks *kopiaRepoService) Maintain(ctx context.Context, repoOption udmrepo.Rep

 	ks.logger.Info("Start to open repo for maintenance, allow index write on load")

-	r, err := openKopiaRepo(repoCtx, repoConfig, repoOption.RepoPassword, nil)
+	r, err := openKopiaRepo(repoCtx, repoConfig, repoOption.RepoPassword, &openOptions{repoLogger: kopia.RepositoryLogger(ks.logger)})
 	if err != nil {
 		return err
 	}
@@ -625,8 +628,10 @@ func (lt *logThrottle) shouldLog() bool {
 	return false
 }

-func openKopiaRepo(ctx context.Context, configFile string, password string, _ *openOptions) (repo.Repository, error) {
-	r, err := kopiaRepoOpen(ctx, configFile, password, &repo.Options{})
+func openKopiaRepo(ctx context.Context, configFile string, password string, options *openOptions) (repo.Repository, error) {
+	r, err := kopiaRepoOpen(ctx, configFile, password, &repo.Options{
+		ContentLogWriter: options.repoLogger,
+	})
 	if os.IsNotExist(err) {
 		return nil, errors.Wrap(err, "error to open repo, repo doesn't exist")
 	}

@@ -32,6 +32,7 @@ import (
 	"github.com/kopia/kopia/repo/maintenance"
 	"github.com/pkg/errors"

+	"github.com/vmware-tanzu/velero/pkg/kopia"
 	"github.com/vmware-tanzu/velero/pkg/repository/udmrepo"
 	"github.com/vmware-tanzu/velero/pkg/repository/udmrepo/kopialib/backend"
 )
@@ -354,7 +355,7 @@ func (b *byteBufferReader) Seek(offset int64, whence int) (int64, error) {
 var funcGetParam = maintenance.GetParams

 func writeInitParameters(ctx context.Context, repoOption udmrepo.RepoOptions, logger logrus.FieldLogger) error {
-	r, err := openKopiaRepo(ctx, repoOption.ConfigFilePath, repoOption.RepoPassword, nil)
+	r, err := openKopiaRepo(ctx, repoOption.ConfigFilePath, repoOption.RepoPassword, &openOptions{repoLogger: kopia.RepositoryLogger(logger)})
 	if err != nil {
 		return err
 	}

@@ -68,10 +68,10 @@ type RestorePVC struct {

 type CachePVC struct {
 	// StorageClass specifies the storage class for cache PVC
-	StorageClass string
+	StorageClass string `json:"storageClass,omitempty"`

-	// ResidentThreshold specifies the minimum size of the backup data to create cache PVC
-	ResidentThreshold int64
+	// ResidentThresholdInMB specifies the minimum size of the backup data to create cache PVC
+	ResidentThresholdInMB int64 `json:"residentThresholdInMB,omitempty"`
 }

 type NodeAgentConfigs struct {

@@ -666,10 +666,22 @@ func validateNamespaceName(ns string) []error {
 		return nil
 	}

-	// Kubernetes does not allow asterisks in namespaces but Velero uses them as
-	// wildcards. Replace asterisks with an arbitrary letter to pass Kubernetes
-	// validation.
-	tmpNamespace := strings.ReplaceAll(ns, "*", "x")
+	// Validate the namespace name to ensure it is a valid wildcard pattern
+	if err := wildcard.ValidateNamespaceName(ns); err != nil {
+		return []error{err}
+	}
+
+	// Kubernetes does not allow wildcard characters in namespaces but Velero uses them
+	// for glob patterns. Replace wildcard characters with valid characters to pass
+	// Kubernetes validation.
+	tmpNamespace := ns
+
+	// Replace glob wildcard characters with valid alphanumeric characters
+	// Note: Validation of wildcard patterns is handled by the wildcard package.
+	tmpNamespace = strings.ReplaceAll(tmpNamespace, "*", "x") // matches any sequence
+	tmpNamespace = strings.ReplaceAll(tmpNamespace, "?", "x") // matches single character
+	tmpNamespace = strings.ReplaceAll(tmpNamespace, "[", "x") // character class start
+	tmpNamespace = strings.ReplaceAll(tmpNamespace, "]", "x") // character class end

 	if errMsgs := validation.ValidateNamespaceName(tmpNamespace, false); errMsgs != nil {
 		for _, msg := range errMsgs {

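The hunk above validates a namespace pattern in two passes: first the wildcard package rejects malformed glob patterns, then the glob metacharacters (`*`, `?`, `[`, `]`) are replaced with a harmless letter so the result can be run through Kubernetes' DNS-label name validation. A self-contained sketch of the second pass, where the DNS-label regexp is a simplified stand-in for `validation.ValidateNamespaceName` (an assumption, not the real apimachinery rules in full):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// sanitizeGlob replaces glob metacharacters with "x", the same trick
// the diff uses so that a pattern like "kube-*" can be checked
// against namespace-name rules. Characters Velero's glob syntax does
// not use, such as "|" or "(", are left alone and will fail
// validation, which is the desired outcome.
func sanitizeGlob(ns string) string {
	r := strings.NewReplacer("*", "x", "?", "x", "[", "x", "]", "x")
	return r.Replace(ns)
}

// dnsLabel approximates the RFC 1123 label rule: lowercase
// alphanumerics and hyphens, starting and ending alphanumeric.
var dnsLabel = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

func main() {
	for _, ns := range []string{"kube-*", "test-?", "ns-[0-9]", "bad|name"} {
		fmt.Printf("%-10s -> %-10s valid=%v\n", ns, sanitizeGlob(ns), dnsLabel.MatchString(sanitizeGlob(ns)))
	}
}
```

This matches the test table that follows: glob patterns pass, while `|`, `(`, `)`, and `!` still produce validation errors because they survive sanitization.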
@@ -289,6 +289,54 @@ func TestValidateNamespaceIncludesExcludes(t *testing.T) {
 			excludes: []string{"bar"},
 			wantErr:  true,
 		},
+		{
+			name:     "glob characters in includes should not error",
+			includes: []string{"kube-*", "test-?", "ns-[0-9]"},
+			excludes: []string{},
+			wantErr:  false,
+		},
+		{
+			name:     "glob characters in excludes should not error",
+			includes: []string{"default"},
+			excludes: []string{"test-*", "app-?", "ns-[1-5]"},
+			wantErr:  false,
+		},
+		{
+			name:     "character class in includes should not error",
+			includes: []string{"ns-[abc]", "test-[0-9]"},
+			excludes: []string{},
+			wantErr:  false,
+		},
+		{
+			name:     "mixed glob patterns should not error",
+			includes: []string{"kube-*", "test-?"},
+			excludes: []string{"*-test", "debug-[0-9]"},
+			wantErr:  false,
+		},
+		{
+			name:     "pipe character in includes should error",
+			includes: []string{"namespace|other"},
+			excludes: []string{},
+			wantErr:  true,
+		},
+		{
+			name:     "parentheses in includes should error",
+			includes: []string{"namespace(prod)", "test-(dev)"},
+			excludes: []string{},
+			wantErr:  true,
+		},
+		{
+			name:     "exclamation mark in includes should error",
+			includes: []string{"!namespace", "test!"},
+			excludes: []string{},
+			wantErr:  true,
+		},
+		{
+			name:     "unsupported characters in excludes should error",
+			includes: []string{"default"},
+			excludes: []string{"test|prod", "app(staging)"},
+			wantErr:  true,
+		},
 	}

 	for _, tc := range tests {

@@ -1082,16 +1130,6 @@ func TestExpandIncludesExcludes(t *testing.T) {
 			expectedWildcardExpanded: true,
 			expectError:              false,
 		},
-		{
-			name:                     "brace wildcard pattern",
-			includes:                 []string{"app-{prod,dev}"},
-			excludes:                 []string{},
-			activeNamespaces:         []string{"app-prod", "app-dev", "app-test", "default"},
-			expectedIncludes:         []string{"app-prod", "app-dev"},
-			expectedExcludes:         []string{},
-			expectedWildcardExpanded: true,
-			expectError:              false,
-		},
 		{
 			name:     "empty activeNamespaces with wildcards",
 			includes: []string{"kube-*"},
@@ -1233,13 +1271,6 @@ func TestResolveNamespaceList(t *testing.T) {
 			expectedNamespaces: []string{"kube-system", "kube-public"},
 			preExpandWildcards: true,
 		},
-		{
-			name:               "complex wildcard pattern",
-			includes:           []string{"app-{prod,dev}", "kube-*"},
-			excludes:           []string{"*-test"},
-			activeNamespaces:   []string{"app-prod", "app-dev", "app-test", "kube-system", "kube-test", "default"},
-			expectedNamespaces: []string{"app-prod", "app-dev", "kube-system"},
-		},
 		{
 			name:     "question mark wildcard pattern",
 			includes: []string{"ns-?"},

@@ -140,7 +140,13 @@ func EnsureDeletePod(ctx context.Context, podGetter corev1client.CoreV1Interface
func IsPodUnrecoverable(pod *corev1api.Pod, log logrus.FieldLogger) (bool, string) {
    // Check the Phase field
    if pod.Status.Phase == corev1api.PodFailed || pod.Status.Phase == corev1api.PodUnknown {
        message := GetPodTerminateMessage(pod)
        message := ""
        if pod.Status.Message != "" {
            message += pod.Status.Message + "/"
        }

        message += GetPodTerminateMessage(pod)

        log.Warnf("Pod is in abnormal state %s, message [%s]", pod.Status.Phase, message)
        return true, fmt.Sprintf("Pod is in abnormal state [%s], message [%s]", pod.Status.Phase, message)
    }

@@ -269,7 +275,7 @@ func ToSystemAffinity(loadAffinities []*LoadAffinity) *corev1api.Affinity {
}

func DiagnosePod(pod *corev1api.Pod, events *corev1api.EventList) string {
    diag := fmt.Sprintf("Pod %s/%s, phase %s, node name %s\n", pod.Namespace, pod.Name, pod.Status.Phase, pod.Spec.NodeName)
    diag := fmt.Sprintf("Pod %s/%s, phase %s, node name %s, message %s\n", pod.Namespace, pod.Name, pod.Status.Phase, pod.Spec.NodeName, pod.Status.Message)

    for _, condition := range pod.Status.Conditions {
        diag += fmt.Sprintf("Pod condition %s, status %s, reason %s, message %s\n", condition.Type, condition.Status, condition.Reason, condition.Message)
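The change to `IsPodUnrecoverable` above prefixes the pod-level `Status.Message`, separated by `/`, ahead of the aggregated container terminate message. The composition can be sketched in isolation (`buildMessage` is an illustrative stand-in, not the real helper, and the terminate message is passed in rather than derived from a pod):

```go
package main

import "fmt"

// buildMessage mirrors the composition order shown in the diff:
// the pod-level status message (if any) comes first, then "/",
// then the container terminate message.
func buildMessage(statusMessage, terminateMsg string) string {
	message := ""
	if statusMessage != "" {
		message += statusMessage + "/"
	}
	return message + terminateMsg
}

func main() {
	fmt.Println(buildMessage("node lost", "container exited"))
	fmt.Println(buildMessage("", "container exited"))
}
```

When the pod carries no top-level message, the output is unchanged from the previous behavior, so existing log consumers keep working.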
@@ -925,9 +925,10 @@ func TestDiagnosePod(t *testing.T) {
                    Message: "fake-message-2",
                },
            },
            Message: "fake-message-3",
        },
    },
    expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\n",
    expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node, message fake-message-3\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\n",
},
{
    name: "pod with all info and empty event list",

@@ -955,10 +956,11 @@ func TestDiagnosePod(t *testing.T) {
                    Message: "fake-message-2",
                },
            },
            Message: "fake-message-3",
        },
    },
    events:   &corev1api.EventList{},
    expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\n",
    expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node, message fake-message-3\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\n",
},
{
    name: "pod with all info and events",

@@ -987,6 +989,7 @@ func TestDiagnosePod(t *testing.T) {
                    Message: "fake-message-2",
                },
            },
            Message: "fake-message-3",
        },
    },
    events: &corev1api.EventList{Items: []corev1api.Event{

@@ -1027,7 +1030,7 @@ func TestDiagnosePod(t *testing.T) {
            Message: "message-6",
        },
    }},
    expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\nPod event reason reason-3, message message-3\nPod event reason reason-6, message message-6\n",
    expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node, message fake-message-3\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\nPod event reason reason-3, message message-3\nPod event reason reason-6, message message-6\n",
},
}
@@ -417,19 +417,19 @@ func MakePodPVCAttachment(volumeName string, volumeMode *corev1api.PersistentVol
    return volumeMounts, volumeDevices, volumePath
}

// GetPVForPVC returns the PersistentVolume backing a PVC
// returns PV, error.
// PV will be nil on error
func GetPVForPVC(
    pvc *corev1api.PersistentVolumeClaim,
    crClient crclient.Client,
) (*corev1api.PersistentVolume, error) {
    if pvc.Spec.VolumeName == "" {
        return nil, errors.Errorf("PVC %s/%s has no volume backing this claim",
            pvc.Namespace, pvc.Name)
        return nil, errors.Errorf("PVC %s/%s has no volume backing this claim", pvc.Namespace, pvc.Name)
    }
    if pvc.Status.Phase != corev1api.ClaimBound {
        // TODO: confirm if this PVC should be snapshotted if it has no PV bound
        return nil,
            errors.Errorf("PVC %s/%s is in phase %v and is not bound to a volume",
                pvc.Namespace, pvc.Name, pvc.Status.Phase)
        return nil, errors.Errorf("PVC %s/%s is in phase %v and is not bound to a volume",
            pvc.Namespace, pvc.Name, pvc.Status.Phase)
    }

    pv := &corev1api.PersistentVolume{}
@@ -31,70 +31,77 @@ func ShouldExpandWildcards(includes []string, excludes []string) bool {
}

// containsWildcardPattern checks if a pattern contains any wildcard symbols
// Supported patterns: *, ?, [abc], {a,b,c}
// Supported patterns: *, ?, [abc]
// Note: . and + are treated as literal characters (not wildcards)
// Note: ** and consecutive asterisks are NOT supported (will cause validation error)
func containsWildcardPattern(pattern string) bool {
    return strings.ContainsAny(pattern, "*?[{")
    return strings.ContainsAny(pattern, "*?[")
}

func validateWildcardPatterns(patterns []string) error {
    for _, pattern := range patterns {
        // Check for invalid regex-only patterns that we don't support
        if strings.ContainsAny(pattern, "|()") {
            return errors.New("wildcard pattern contains unsupported regex symbols: |, (, )")
        }

        // Check for consecutive asterisks (2 or more)
        if strings.Contains(pattern, "**") {
            return errors.New("wildcard pattern contains consecutive asterisks (only single * allowed)")
        }

        // Check for malformed brace patterns
        if err := validateBracePatterns(pattern); err != nil {
        if err := ValidateNamespaceName(pattern); err != nil {
            return err
        }
    }
    return nil
}

func ValidateNamespaceName(pattern string) error {
    // Check for invalid characters that are not supported in glob patterns
    if strings.ContainsAny(pattern, "|()!{},") {
        return errors.New("wildcard pattern contains unsupported characters: |, (, ), !, {, }, ,")
    }

    // Check for consecutive asterisks (2 or more)
    if strings.Contains(pattern, "**") {
        return errors.New("wildcard pattern contains consecutive asterisks (only single * allowed)")
    }

    // Check for malformed brace patterns
    if err := validateBracePatterns(pattern); err != nil {
        return err
    }

    return nil
}

// validateBracePatterns checks for malformed brace patterns like unclosed braces or empty braces
// Also validates bracket patterns [] for character classes
func validateBracePatterns(pattern string) error {
    depth := 0
    bracketDepth := 0

    for i := 0; i < len(pattern); i++ {
        if pattern[i] == '{' {
            braceStart := i
            depth++
        if pattern[i] == '[' {
            bracketStart := i
            bracketDepth++

            // Scan ahead to find the matching closing brace and validate content
            for j := i + 1; j < len(pattern) && depth > 0; j++ {
                if pattern[j] == '{' {
                    depth++
                } else if pattern[j] == '}' {
                    depth--
                    if depth == 0 {
                        // Found matching closing brace - validate content
                        content := pattern[braceStart+1 : j]
                        if strings.Trim(content, ", \t") == "" {
                            return errors.New("wildcard pattern contains empty brace pattern '{}'")
            // Scan ahead to find the matching closing bracket and validate content
            for j := i + 1; j < len(pattern) && bracketDepth > 0; j++ {
                if pattern[j] == ']' {
                    bracketDepth--
                    if bracketDepth == 0 {
                        // Found matching closing bracket - validate content
                        content := pattern[bracketStart+1 : j]
                        if content == "" {
                            return errors.New("wildcard pattern contains empty bracket pattern '[]'")
                        }
                        // Skip to the closing brace
                        // Skip to the closing bracket
                        i = j
                        break
                    }
                }
            }

            // If we exited the loop without finding a match (depth > 0), brace is unclosed
            if depth > 0 {
                return errors.New("wildcard pattern contains unclosed brace '{'")
            // If we exited the loop without finding a match (bracketDepth > 0), bracket is unclosed
            if bracketDepth > 0 {
                return errors.New("wildcard pattern contains unclosed bracket '['")
            }

            // i is now positioned at the closing brace; the outer loop will increment it
        } else if pattern[i] == '}' {
            // Found a closing brace without a matching opening brace
            return errors.New("wildcard pattern contains unmatched closing brace '}'")
            // i is now positioned at the closing bracket; the outer loop will increment it
        } else if pattern[i] == ']' {
            // Found a closing bracket without a matching opening bracket
            return errors.New("wildcard pattern contains unmatched closing bracket ']'")
        }
    }
@@ -90,7 +90,7 @@ func TestShouldExpandWildcards(t *testing.T) {
        name:     "brace alternatives wildcard",
        includes: []string{"ns{prod,staging}"},
        excludes: []string{},
        expected: true, // brace alternatives are considered wildcard
        expected: false, // brace alternatives are not supported
    },
    {
        name: "dot is literal - not wildcard",

@@ -237,9 +237,9 @@ func TestExpandWildcards(t *testing.T) {
        activeNamespaces: []string{"app-prod", "app-staging", "app-dev", "db-prod"},
        includes:         []string{"app-{prod,staging}"},
        excludes:         []string{},
        expectedIncludes: []string{"app-prod", "app-staging"}, // {prod,staging} matches either
        expectedIncludes: nil,
        expectedExcludes: nil,
        expectError:      false,
        expectError:      true,
    },
    {
        name: "literal dot and plus patterns",

@@ -259,33 +259,6 @@ func TestExpandWildcards(t *testing.T) {
        expectedExcludes: nil,
        expectError:      true, // |, (, ) are not supported
    },
    {
        name:             "unclosed brace patterns should error",
        activeNamespaces: []string{"app-prod"},
        includes:         []string{"app-{prod,staging"},
        excludes:         []string{},
        expectedIncludes: nil,
        expectedExcludes: nil,
        expectError:      true, // unclosed brace
    },
    {
        name:             "empty brace patterns should error",
        activeNamespaces: []string{"app-prod"},
        includes:         []string{"app-{}"},
        excludes:         []string{},
        expectedIncludes: nil,
        expectedExcludes: nil,
        expectError:      true, // empty braces
    },
    {
        name:             "unmatched closing brace should error",
        activeNamespaces: []string{"app-prod"},
        includes:         []string{"app-prod}"},
        excludes:         []string{},
        expectedIncludes: nil,
        expectedExcludes: nil,
        expectError:      true, // unmatched closing brace
    },
}

for _, tt := range tests {

@@ -354,13 +327,6 @@ func TestExpandWildcardsPrivate(t *testing.T) {
        expected:    []string{}, // returns empty slice, not nil
        expectError: false,
    },
    {
        name:             "brace patterns work correctly",
        patterns:         []string{"app-{prod,staging}"},
        activeNamespaces: []string{"app-prod", "app-staging", "app-dev", "app-{prod,staging}"},
        expected:         []string{"app-prod", "app-staging"}, // brace patterns do expand
        expectError:      false,
    },
    {
        name:     "duplicate matches from multiple patterns",
        patterns: []string{"app-*", "*-prod"},

@@ -389,20 +355,6 @@ func TestExpandWildcardsPrivate(t *testing.T) {
        expected:    []string{"nsa", "nsb", "nsc"}, // [a-c] matches a to c
        expectError: false,
    },
    {
        name:             "negated character class",
        patterns:         []string{"ns[!abc]"},
        activeNamespaces: []string{"nsa", "nsb", "nsc", "nsd", "ns1"},
        expected:         []string{"nsd", "ns1"}, // [!abc] matches anything except a, b, c
        expectError:      false,
    },
    {
        name:             "brace alternatives",
        patterns:         []string{"app-{prod,test}"},
        activeNamespaces: []string{"app-prod", "app-test", "app-staging", "db-prod"},
        expected:         []string{"app-prod", "app-test"}, // {prod,test} matches either
        expectError:      false,
    },
    {
        name:     "double asterisk should error",
        patterns: []string{"**"},

@@ -410,13 +362,6 @@ func TestExpandWildcardsPrivate(t *testing.T) {
        expected:    nil,
        expectError: true, // ** is not allowed
    },
    {
        name:             "literal dot and plus",
        patterns:         []string{"app.prod", "service+"},
        activeNamespaces: []string{"app.prod", "appXprod", "service+", "service"},
        expected:         []string{"app.prod", "service+"}, // . and + are literal
        expectError:      false,
    },
    {
        name:     "unsupported regex symbols should error",
        patterns: []string{"ns(1|2)"},

@@ -468,153 +413,101 @@ func TestValidateBracePatterns(t *testing.T) {
        expectError bool
        errorMsg    string
    }{
        // Valid patterns
        // Valid square bracket patterns
        {
            name:        "valid single brace pattern",
            pattern:     "app-{prod,staging}",
            name:        "valid square bracket pattern",
            pattern:     "ns[abc]",
            expectError: false,
        },
        {
            name:        "valid brace with single option",
            pattern:     "app-{prod}",
            name:        "valid square bracket pattern with range",
            pattern:     "ns[a-z]",
            expectError: false,
        },
        {
            name:        "valid brace with three options",
            pattern:     "app-{prod,staging,dev}",
            name:        "valid square bracket pattern with numbers",
            pattern:     "ns[0-9]",
            expectError: false,
        },
        {
            name:        "valid pattern with text before and after brace",
            pattern:     "prefix-{a,b}-suffix",
            name:        "valid square bracket pattern with mixed",
            pattern:     "ns[a-z0-9]",
            expectError: false,
        },
        {
            name:        "valid pattern with no braces",
            pattern:     "app-prod",
            name:        "valid square bracket pattern with single character",
            pattern:     "ns[a]",
            expectError: false,
        },
        {
            name:        "valid pattern with asterisk",
            pattern:     "app-*",
            name:        "valid square bracket pattern with text before and after",
            pattern:     "prefix-[abc]-suffix",
            expectError: false,
        },
        // Unclosed opening brackets
        {
            name:        "valid brace with spaces around content",
            pattern:     "app-{ prod , staging }",
            expectError: false,
            name:        "unclosed opening bracket at end",
            pattern:     "ns[abc",
            expectError: true,
            errorMsg:    "unclosed bracket",
        },
        {
            name:        "valid brace with numbers",
            pattern:     "ns-{1,2,3}",
            expectError: false,
            name:        "unclosed opening bracket at start",
            pattern:     "[abc",
            expectError: true,
            errorMsg:    "unclosed bracket",
        },
        {
            name:        "valid brace with hyphens in options",
            pattern:     "{app-prod,db-staging}",
            expectError: false,
            name:        "unclosed opening bracket in middle",
            pattern:     "ns[abc-test",
            expectError: true,
            errorMsg:    "unclosed bracket",
        },

        // Unclosed opening braces
        // Unmatched closing brackets
        {
            name:        "unclosed opening brace at end",
            pattern:     "app-{prod,staging",
            name:        "unmatched closing bracket at end",
            pattern:     "ns-abc]",
            expectError: true,
            errorMsg:    "unclosed brace",
            errorMsg:    "unmatched closing bracket",
        },
        {
            name:        "unclosed opening brace at start",
            pattern:     "{prod,staging",
            name:        "unmatched closing bracket at start",
            pattern:     "]ns-abc",
            expectError: true,
            errorMsg:    "unclosed brace",
            errorMsg:    "unmatched closing bracket",
        },
        {
            name:        "unclosed opening brace in middle",
            pattern:     "app-{prod-test",
            name:        "unmatched closing bracket in middle",
            pattern:     "ns-]abc",
            expectError: true,
            errorMsg:    "unclosed brace",
            errorMsg:    "unmatched closing bracket",
        },
        {
            name:        "multiple unclosed braces",
            pattern:     "app-{prod-{staging",
            name:        "extra closing bracket after valid pair",
            pattern:     "ns[abc]]",
            expectError: true,
            errorMsg:    "unclosed brace",
            errorMsg:    "unmatched closing bracket",
        },

        // Unmatched closing braces
        // Empty bracket patterns
        {
            name:        "unmatched closing brace at end",
            pattern:     "app-prod}",
            name:        "completely empty brackets",
            pattern:     "ns[]",
            expectError: true,
            errorMsg:    "unmatched closing brace",
            errorMsg:    "empty bracket pattern",
        },
        {
            name:        "unmatched closing brace at start",
            pattern:     "}app-prod",
            name:        "empty brackets at start",
            pattern:     "[]ns",
            expectError: true,
            errorMsg:    "unmatched closing brace",
            errorMsg:    "empty bracket pattern",
        },
        {
            name:        "unmatched closing brace in middle",
            pattern:     "app-}prod",
            name:        "empty brackets standalone",
            pattern:     "[]",
            expectError: true,
            errorMsg:    "unmatched closing brace",
        },
        {
            name:        "extra closing brace after valid pair",
            pattern:     "app-{prod,staging}}",
            expectError: true,
            errorMsg:    "unmatched closing brace",
        },

        // Empty brace patterns
        {
            name:        "completely empty braces",
            pattern:     "app-{}",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "braces with only spaces",
            pattern:     "app-{ }",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "braces with only comma",
            pattern:     "app-{,}",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "braces with only commas",
            pattern:     "app-{,,,}",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "braces with commas and spaces",
            pattern:     "app-{ , , }",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "braces with tabs and commas",
            pattern:     "app-{\t,\t}",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "empty braces at start",
            pattern:     "{}app-prod",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "empty braces standalone",
            pattern:     "{}",
            expectError: true,
            errorMsg:    "empty brace pattern",
            errorMsg:    "empty bracket pattern",
        },

        // Edge cases

@@ -623,58 +516,6 @@ func TestValidateBracePatterns(t *testing.T) {
            pattern:     "",
            expectError: false,
        },
        {
            name:        "pattern with only opening brace",
            pattern:     "{",
            expectError: true,
            errorMsg:    "unclosed brace",
        },
        {
            name:        "pattern with only closing brace",
            pattern:     "}",
            expectError: true,
            errorMsg:    "unmatched closing brace",
        },
        {
            name:        "valid brace with special characters inside",
            pattern:     "app-{prod-1,staging_2,dev.3}",
            expectError: false,
        },
        {
            name:        "brace with asterisk inside option",
            pattern:     "app-{prod*,staging}",
            expectError: false,
        },
        {
            name:        "multiple valid brace patterns",
            pattern:     "{app,db}-{prod,staging}",
            expectError: false,
        },
        {
            name:        "brace with single character",
            pattern:     "app-{a}",
            expectError: false,
        },
        {
            name:        "brace with trailing comma but has content",
            pattern:     "app-{prod,staging,}",
            expectError: false, // Has content, so it's valid
        },
        {
            name:        "brace with leading comma but has content",
            pattern:     "app-{,prod,staging}",
            expectError: false, // Has content, so it's valid
        },
        {
            name:        "brace with leading comma but has content",
            pattern:     "app-{{,prod,staging}",
            expectError: true, // unclosed brace
        },
        {
            name:        "brace with leading comma but has content",
            pattern:     "app-{,prod,staging}}",
            expectError: true, // unmatched closing brace
        },
    }

    for _, tt := range tests {

@@ -723,20 +564,6 @@ func TestExpandWildcardsEdgeCases(t *testing.T) {
        assert.ElementsMatch(t, []string{"ns-1", "ns_2", "ns.3", "ns@4"}, result)
    })

    t.Run("complex glob combinations", func(t *testing.T) {
        activeNamespaces := []string{"app1-prod", "app2-prod", "app1-test", "db-prod", "service"}
        result, err := expandWildcards([]string{"app?-{prod,test}"}, activeNamespaces)
        require.NoError(t, err)
        assert.ElementsMatch(t, []string{"app1-prod", "app2-prod", "app1-test"}, result)
    })

    t.Run("escaped characters", func(t *testing.T) {
        activeNamespaces := []string{"app*", "app-prod", "app?test", "app-test"}
        result, err := expandWildcards([]string{"app\\*"}, activeNamespaces)
        require.NoError(t, err)
        assert.ElementsMatch(t, []string{"app*"}, result)
    })

    t.Run("mixed literal and wildcard patterns", func(t *testing.T) {
        activeNamespaces := []string{"app.prod", "app-prod", "app_prod", "test.ns"}
        result, err := expandWildcards([]string{"app.prod", "app?prod"}, activeNamespaces)

@@ -777,12 +604,8 @@ func TestExpandWildcardsEdgeCases(t *testing.T) {
        shouldError bool
    }{
        {"unclosed bracket", "ns[abc", true},
        {"unclosed brace", "app-{prod,staging", true},
        {"nested unclosed", "ns[a{bc", true},
        {"valid bracket", "ns[abc]", false},
        {"valid brace", "app-{prod,staging}", false},
        {"empty bracket", "ns[]", true}, // empty brackets are invalid
        {"empty brace", "app-{}", true}, // empty braces are invalid
    }

    for _, tt := range tests {
@@ -15,6 +15,7 @@ params:
  latest: v1.17
  versions:
  - main
  - v1.18
  - v1.17
  - v1.16
  - v1.15
@@ -16,6 +16,8 @@ Backup belongs to the API group version `velero.io/v1`.

Here is a sample `Backup` object with each of the fields documented:

**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.

```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
@@ -42,11 +44,12 @@ spec:
  resourcePolicy:
    kind: configmap
    name: resource-policy-configmap
  # Array of namespaces to include in the backup. If unspecified, all namespaces are included.
  # Optional.
  # Array of namespaces to include in the backup. Accepts glob patterns (*, ?, [abc]).
  # Note: '*' alone is reserved for empty fields, which means all namespaces.
  # If unspecified, all namespaces are included. Optional.
  includedNamespaces:
  - '*'
  # Array of namespaces to exclude from the backup. Optional.
  # Array of namespaces to exclude from the backup. Accepts glob patterns (*, ?, [abc]). Optional.
  excludedNamespaces:
  - some-namespace
  # Array of resources to include in the backup. Resources may be shortcuts (for example 'po' for 'pods')
@@ -16,6 +16,8 @@ Restore belongs to the API group version `velero.io/v1`.

Here is a sample `Restore` object with each of the fields documented:

**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.

```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
@@ -45,11 +47,11 @@ spec:
    writeSparseFiles: true
    # ParallelFilesDownload is the concurrency number setting for restore
    parallelFilesDownload: 10
  # Array of namespaces to include in the restore. If unspecified, all namespaces are included.
  # Optional.
  # Array of namespaces to include in the restore. Accepts glob patterns (*, ?, [abc]).
  # If unspecified, all namespaces are included. Optional.
  includedNamespaces:
  - '*'
  # Array of namespaces to exclude from the restore. Optional.
  # Array of namespaces to exclude from the restore. Accepts glob patterns (*, ?, [abc]). Optional.
  excludedNamespaces:
  - some-namespace
  # Array of resources to include in the restore. Resources may be shortcuts (for example 'po' for 'pods')
@@ -17,9 +17,8 @@ Velero supports storage providers for both cloud-provider environments and on-pr

### Velero on Windows

Velero does not officially support Windows. In testing, the Velero team was able to back up stateless Windows applications only. File System Backup and backups of stateful applications or PersistentVolumes were not supported.

If you want to perform your own testing of Velero on Windows, you must deploy Velero as a Windows container. Velero does not provide official Windows images, but it's possible to build your own Velero Windows container image to use. Note that you must build this image on a Windows node.
Velero supports backing up and restoring Windows workloads, both stateless and stateful.
The Velero node-agent and data mover pods can run on Windows nodes. To stay compatible with existing Velero plugins, the Velero server runs on Linux nodes only, so Velero requires at least one Linux node in the cluster. Velero provides Windows images for specific Windows versions. For more information see [Backup Restore Windows Workloads][6].

## Install the CLI

@@ -71,3 +70,4 @@ Please refer to [this part of the documentation][5].
[3]: overview-plugins.md
[4]: customize-installation.md#install-an-additional-volume-snapshot-provider
[5]: customize-installation.md#optional-velero-cli-configurations
[6]: backup-restore-windows.md
@@ -16,7 +16,7 @@ A sample of cache PVC configuration as part of the ConfigMap would look like:
```json
{
    "cachePVC": {
        "thresholdInGB": 1,
        "residentThresholdInMB": 1024,
        "storageClass": "sc-wffc"
    }
}
@@ -29,7 +29,7 @@ kubectl create cm node-agent-config -n velero --from-file=<json file name>

A must-have field in the configuration is `storageClass`, which tells Velero which storage class is used to provision the cache PVC. Velero relies on the Kubernetes dynamic provisioning process to provision the PVC; static provisioning is not supported.

The cache PVC behavior can be further fine-tuned through `thresholdInGB`. Its value is compared to the size of the backup; if the size is smaller than this value, no cache PVC is created when restoring from the backup. This ensures that cache PVCs are not created in vain when the backup is small enough to be accommodated in the data mover pods' root disk.
The cache PVC behavior can be further fine-tuned through `residentThresholdInMB`. Its value is compared to the size of the backup; if the size is smaller than this value, no cache PVC is created when restoring from the backup. This ensures that cache PVCs are not created in vain when the backup is small enough to be accommodated in the data mover pods' root disk.

This configuration decides whether and how to provision cache PVCs, but it doesn't decide their size. Instead, the size is decided by the specific backup repository. Specifically, Velero asks for a cache limit from the backup repository and uses this limit to calculate the cache PVC size.
The cache limit is decided by the backup repository itself; for a Kopia repository, if `cacheLimitMB` is specified in the backup repository configuration, its value is used; otherwise, a default limit (5 GB) is used.
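The threshold decision described above can be sketched in a few lines (illustrative only — `needsCachePVC` is a hypothetical helper, not Velero's code; it only captures the documented size comparison, not the actual provisioning):

```go
package main

import "fmt"

// needsCachePVC reflects the documented rule: a cache PVC is provisioned
// on restore only when the backup size reaches residentThresholdInMB;
// smaller backups fit in the data mover pod's root disk.
func needsCachePVC(backupSizeMB, residentThresholdMB int64) bool {
	return backupSizeMB >= residentThresholdMB
}

func main() {
	fmt.Println(needsCachePVC(512, 1024))  // small backup: no cache PVC
	fmt.Println(needsCachePVC(4096, 1024)) // large backup: cache PVC created
}
```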
site/content/docs/main/namespace-glob-patterns.md (new file, 71 lines)
@@ -0,0 +1,71 @@
---
title: "Namespace Glob Patterns"
layout: docs
---

When using `--include-namespaces` and `--exclude-namespaces` flags with backup and restore commands, you can use glob patterns to match multiple namespaces.

## Supported Patterns

Velero supports the following glob pattern characters:

- `*` - Matches any sequence of characters
  ```bash
  velero backup create my-backup --include-namespaces "app-*"
  # Matches: app-prod, app-staging, app-dev, etc.
  ```

- `?` - Matches exactly one character
  ```bash
  velero backup create my-backup --include-namespaces "ns?"
  # Matches: ns1, ns2, nsa, but NOT ns10
  ```

- `[abc]` - Matches any single character in the brackets
  ```bash
  velero backup create my-backup --include-namespaces "ns[123]"
  # Matches: ns1, ns2, ns3
  ```

- `[a-z]` - Matches any single character in the range
  ```bash
  velero backup create my-backup --include-namespaces "ns[a-c]"
  # Matches: nsa, nsb, nsc
  ```

## Unsupported Patterns

The following patterns are **not supported** and will cause validation errors:

- `**` - Consecutive asterisks
- `|` - Alternation (regex operator)
- `()` - Grouping (regex operators)
- `!` - Negation
- `{}` - Brace expansion
- `,` - Comma (used in brace expansion)

## Special Cases

- `*` alone means "all namespaces" and is not expanded
- Empty brackets `[]` are invalid
- Unmatched or unclosed brackets will cause validation errors

## Examples

Combine patterns with include and exclude flags:

```bash
# Backup all production namespaces except test
velero backup create prod-backup \
  --include-namespaces "*-prod" \
  --exclude-namespaces "test-*"

# Backup specific numbered namespaces
velero backup create numbered-backup \
  --include-namespaces "app-[0-9]"

# Restore namespaces matching multiple patterns
velero restore create my-restore \
  --from-backup my-backup \
  --include-namespaces "frontend-*,backend-*"
```
Some files were not shown because too many files have changed in this diff.