* Remove 'self' from MIGRATE_FROM_VELERO_VERSION.
* Modify the BYOT case to specify MIGRATE_FROM_VELERO_VERSION.
* Remove the CSI plugin installation check for versions no older than v1.14.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
This commit implements VolumePolicy support for PVC Phase conditions, resolving
vmware-tanzu/velero#7233, where backups of Pending PVCs fail with
"PVC has no volume backing this claim".
Changes made:
- Extended VolumePolicy API to support PVC phase conditions
- Added pvcPhaseCondition struct with matching logic
- Modified getMatchAction() to evaluate policies for unbound PVCs before returning errors
- Added case to GetMatchAction() to handle PVC-only scenarios (nil PV)
- Added comprehensive unit tests for PVC phase parsing and matching
Users can now skip Pending PVCs through volume policy configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: volume-policy
  namespace: velero
data:
  policy.yaml: |
    version: v1
    volumePolicies:
    - conditions:
        pvcPhase: [Pending]
      action:
        type: skip
chore: rename changelog file to match PR #9166
Renamed changelogs/unreleased/7233-claude to changelogs/unreleased/9166-claude
to match the opened PR at https://github.com/vmware-tanzu/velero/pull/9166
docs: Add PVC phase condition support to VolumePolicy documentation
- Added pvcPhase field to YAML template example
- Documented pvcPhase as a supported condition in the list
- Added comprehensive examples for using PVC phase conditions
- Included examples for Pending, Bound, and Lost phases
- Demonstrated combining PVC phase with other conditions
Co-Authored-By: Tiger Kaovilai <kaovilai@users.noreply.github.com>
Ensure plugin init container names satisfy DNS-1123 label constraints
(max 63 chars). Long names are truncated with an 8-char hash suffix to
maintain uniqueness.
Fixes: #9444
Signed-off-by: Michal Pryc <mpryc@redhat.com>
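A minimal sketch of the truncation scheme described above, assuming a SHA-256-based suffix (function name and hash choice are illustrative, not necessarily Velero's exact implementation):
```
package main

import (
	"crypto/sha256"
	"fmt"
)

// dns1123LabelMaxLength is the Kubernetes limit for label-style names.
const dns1123LabelMaxLength = 63

// truncateWithHash shortens names that exceed the limit and appends an
// 8-char hash of the full name so distinct long names stay unique.
func truncateWithHash(name string) string {
	if len(name) <= dns1123LabelMaxLength {
		return name
	}
	hash := fmt.Sprintf("%x", sha256.Sum256([]byte(name)))[:8]
	// Reserve 9 characters: one separator plus the 8-char hash suffix.
	return name[:dns1123LabelMaxLength-9] + "-" + hash
}

func main() {
	fmt.Println(truncateWithHash("velero-plugin-for-microsoft-azure-extremely-long-init-container-name"))
}
```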
The nolint:staticcheck directives are not needed in the test file because
it calls NewVolumeHelperImpl within the same package, which doesn't
trigger deprecation warnings. Only cross-package calls need the directive.
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
The ShouldPerformSnapshotWithVolumeHelper function and tests intentionally
use NewVolumeHelperImpl (deprecated) for backwards compatibility with
third-party plugins. Add nolint:staticcheck to suppress the linter
warnings with explanatory comments.
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
Remove deprecated functions that were marked for removal per review:
- Remove GetPodsUsingPVC (replaced by GetPodsUsingPVCWithCache)
- Remove IsPVCDefaultToFSBackup (replaced by IsPVCDefaultToFSBackupWithCache)
- Remove associated tests for deprecated functions
- Add deprecation marker to NewVolumeHelperImpl
- Add deprecation marker to ShouldPerformSnapshotWithBackup
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
This commit addresses reviewer feedback on PR #9441 regarding
concurrent backup caching concerns. Key changes:
1. Added lazy per-namespace caching for the CSI PVC BIA plugin path:
- Added IsNamespaceBuilt() method to check if namespace is cached
- Added BuildCacheForNamespace() for lazy, per-namespace cache building
- Plugin builds cache incrementally as namespaces are encountered
2. Added NewVolumeHelperImplWithCache constructor for plugins:
- Accepts externally-managed PVC-to-Pod cache
- Follows pattern from PR #9226 (Scott Seago's design)
3. Plugin instance lifecycle clarification:
- Plugin instances are unique per backup (created via newPluginManager)
- Cleaned up via CleanupClients at backup completion
- No mutex or backup UID tracking needed
4. Test coverage:
- Added tests for IsNamespaceBuilt and BuildCacheForNamespace
- Added tests for NewVolumeHelperImplWithCache constructor
- Added test verifying cache usage for fs-backup determination
This maintains the O(N+M) complexity improvement from issue #9179
while addressing architectural concerns about concurrent access.
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
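A minimal sketch of the lazy per-namespace cache, using the method names from this commit but an otherwise illustrative shape (the real cache is populated from pods listed via the Kubernetes client):
```
package main

import (
	"fmt"
	"sync"
)

type pvcPodCache struct {
	mu   sync.RWMutex
	byNS map[string]map[string][]string // namespace -> PVC name -> pod names
}

func newPVCPodCache() *pvcPodCache {
	return &pvcPodCache{byNS: map[string]map[string][]string{}}
}

// IsNamespaceBuilt reports whether a namespace's index already exists.
func (c *pvcPodCache) IsNamespaceBuilt(ns string) bool {
	c.mu.RLock()
	defer c.mu.RUnlock()
	_, ok := c.byNS[ns]
	return ok
}

// BuildCacheForNamespace indexes one pre-listed pod set; listing pods is
// left to the caller so the sketch stays client-agnostic.
func (c *pvcPodCache) BuildCacheForNamespace(ns string, podToPVCs map[string][]string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	entry := map[string][]string{}
	for pod, pvcs := range podToPVCs {
		for _, pvc := range pvcs {
			entry[pvc] = append(entry[pvc], pod)
		}
	}
	c.byNS[ns] = entry
}

func main() {
	c := newPVCPodCache()
	if !c.IsNamespaceBuilt("demo") {
		c.BuildCacheForNamespace("demo", map[string][]string{"pod-a": {"pvc-1"}})
	}
	fmt.Println(c.byNS["demo"]["pvc-1"]) // [pod-a]
}
```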
Address review feedback to have a global VolumeHelper instance per
plugin process instead of creating one on each ShouldPerformSnapshot
call.
Changes:
- Add volumeHelper, cachedForBackup, and mu fields to pvcBackupItemAction
struct for caching the VolumeHelper per backup
- Add getOrCreateVolumeHelper() method for thread-safe lazy initialization
- Update Execute() to use cached VolumeHelper via
ShouldPerformSnapshotWithVolumeHelper()
- Update filterPVCsByVolumePolicy() to accept VolumeHelper parameter
- Add ShouldPerformSnapshotWithVolumeHelper() that accepts optional
VolumeHelper for reuse across multiple calls
- Add NewVolumeHelperForBackup() factory function for BIA plugins
- Add comprehensive unit tests for both nil and non-nil VolumeHelper paths
This completes the fix for issue #9179 by ensuring the PVC-to-Pod cache
is built once per backup and reused across all PVC processing, avoiding
O(N*M) complexity.
Fixes #9179
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
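A hedged sketch of the per-backup lazy initialization described above; field and method names mirror the commit message, while the helper type is a stand-in for Velero's VolumeHelper:
```
package main

import (
	"fmt"
	"sync"
)

// volumeHelper stands in for Velero's VolumeHelper.
type volumeHelper struct{ backupName string }

type pvcBackupItemAction struct {
	mu              sync.Mutex
	helper          *volumeHelper
	cachedForBackup string
}

// getOrCreateVolumeHelper lazily builds one helper per backup and reuses
// it across Execute calls; the mutex guards concurrent plugin calls.
func (a *pvcBackupItemAction) getOrCreateVolumeHelper(backupName string) *volumeHelper {
	a.mu.Lock()
	defer a.mu.Unlock()
	if a.helper == nil || a.cachedForBackup != backupName {
		a.helper = &volumeHelper{backupName: backupName}
		a.cachedForBackup = backupName
	}
	return a.helper
}

func main() {
	act := &pvcBackupItemAction{}
	h1 := act.getOrCreateVolumeHelper("backup-1")
	h2 := act.getOrCreateVolumeHelper("backup-1")
	fmt.Println(h1 == h2) // true: same helper reused within a backup
}
```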
- Rename NewVolumeHelperImplWithCache to NewVolumeHelperImplWithNamespaces
- Move cache building logic from backup.go into volumehelper
- Return error from NewVolumeHelperImplWithNamespaces if cache build fails
- Remove fallback in main backup path - backup fails if cache build fails
- Update NewVolumeHelperImpl to call NewVolumeHelperImplWithNamespaces
- Add comments clarifying fallback is only used by plugins
- Update tests for new error return signature
This addresses review comments from @Lyndon-Li and @kaovilai:
- Cache building is now encapsulated in volumehelper
- No fallback in main backup path ensures predictable performance
- Code reuse between constructors
Fixes #9179
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
- Use ResolveNamespaceList() instead of GetIncludes() for more accurate
namespace resolution when building the PVC-to-Pod cache
- Refactor NewVolumeHelperImpl to call NewVolumeHelperImplWithCache with
nil cache parameter to avoid code duplication
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
Add test case to verify that the PVC-to-Pod cache is used even when
no volume policy is configured. When defaultVolumesToFSBackup is true,
the cache is used to find pods using the PVC to determine if fs-backup
should be used instead of snapshot.
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
Add TestVolumeHelperImplWithCache_ShouldPerformFSBackup to verify:
- Volume policy match with cache returns correct fs-backup decision
- Volume policy match with snapshot action skips fs-backup
- Fallback to direct lookup when cache is not built
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
Add TestVolumeHelperImplWithCache_ShouldPerformSnapshot to verify:
- Volume policy match with cache returns correct snapshot decision
- fs-backup via opt-out with cache properly skips snapshot
- Fallback to direct lookup when cache is not built
These tests verify the cache-enabled code path added in the previous
commit for improved volume policy performance.
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
The GetPodsUsingPVC function had O(N*M) complexity - for each PVC,
it listed ALL pods in the namespace and iterated through each pod.
With many PVCs and pods, this caused significant performance
degradation (2+ seconds per PV in some cases).
This change introduces a PVC-to-Pod cache that is built once per
backup and reused for all PVC lookups, reducing complexity from
O(N*M) to O(N+M).
Changes:
- Add PVCPodCache struct with thread-safe caching in podvolume pkg
- Add NewVolumeHelperImplWithCache constructor for cache support
- Build cache before backup item processing in backup.go
- Add comprehensive unit tests for cache functionality
- Graceful fallback to direct lookups if cache fails
Fixes #9179
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
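A hedged illustration of the complexity fix, with simplified stand-in types: the old shape scans every pod per PVC (O(N*M)); the new shape indexes all pods once, after which each PVC lookup is O(1):
```
package main

import "fmt"

type pod struct {
	name string
	pvcs []string
}

// Old shape: one full pod scan per PVC -> O(PVCs * Pods).
func podsUsingPVCSlow(pods []pod, pvc string) []string {
	var out []string
	for _, p := range pods {
		for _, claim := range p.pvcs {
			if claim == pvc {
				out = append(out, p.name)
			}
		}
	}
	return out
}

// New shape: one pass to build the index -> O(Pods + PVCs) overall.
func buildPVCIndex(pods []pod) map[string][]string {
	index := map[string][]string{}
	for _, p := range pods {
		for _, claim := range p.pvcs {
			index[claim] = append(index[claim], p.name)
		}
	}
	return index
}

func main() {
	pods := []pod{{"pod-a", []string{"pvc-1"}}, {"pod-b", []string{"pvc-1", "pvc-2"}}}
	fmt.Println(podsUsingPVCSlow(pods, "pvc-1")) // [pod-a pod-b]
	fmt.Println(buildPVCIndex(pods)["pvc-1"])    // [pod-a pod-b]
}
```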
This change enables BSL validation to work when using caCertRef
(Secret-based CA certificate) by resolving the certificate from
the Secret in velero core before passing it to the object store
plugin as 'caCert' in the config map.
This approach requires no changes to provider plugins since they
already understand the 'caCert' config key.
Changes:
- Add SecretStore to objectBackupStoreGetter struct
- Add NewObjectBackupStoreGetterWithSecretStore constructor
- Update Get method to resolve caCertRef from Secret
- Update server.go to use new constructor with SecretStore
- Add CACertRef builder method and unit tests
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
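A sketch of the Secret resolution under assumed names (resolveCACert is illustrative; the real code lives in the backup store getter): the CA bundle is read from the referenced Secret key and injected under the existing 'caCert' config key, so provider plugins need no changes:
```
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// resolveCACert copies the CA bundle referenced by a Secret key selector
// into the provider config under the "caCert" key.
func resolveCACert(ctx context.Context, c kubernetes.Interface, ns string, ref *corev1.SecretKeySelector, config map[string]string) error {
	if ref == nil {
		return nil
	}
	secret, err := c.CoreV1().Secrets(ns).Get(ctx, ref.Name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	data, ok := secret.Data[ref.Key]
	if !ok {
		return fmt.Errorf("key %q not found in secret %q", ref.Key, ref.Name)
	}
	config["caCert"] = string(data)
	return nil
}

func main() {
	client := fake.NewSimpleClientset(&corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "bsl-ca", Namespace: "velero"},
		Data:       map[string][]byte{"ca.crt": []byte("-----BEGIN CERTIFICATE-----...")},
	})
	config := map[string]string{}
	ref := &corev1.SecretKeySelector{
		LocalObjectReference: corev1.LocalObjectReference{Name: "bsl-ca"},
		Key:                  "ca.crt",
	}
	if err := resolveCACert(context.Background(), client, "velero", ref, config); err != nil {
		panic(err)
	}
	fmt.Println(config["caCert"])
}
```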
- Introduced `CACertRef` field in `ObjectStorageLocation` to reference a Secret containing the CA certificate, replacing the deprecated `CACert` field.
- Implemented validation logic to ensure mutual exclusivity between `CACert` and `CACertRef`.
- Updated BSL controller and repository provider to handle the new certificate resolution logic.
- Enhanced CLI to support automatic certificate discovery from BSL configurations.
- Added unit and integration tests to validate new functionality and ensure backward compatibility.
- Documented migration strategy for users transitioning from inline certificates to Secret-based management.
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Explain that the duration histogram tracks distribution of individual
job durations, not accumulated sums, to address reviewer concerns.
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
This commit addresses three review comments on PR #9321:
1. Keep sanitization in controller (response to @ywk253100)
- Maintaining centralized error handling for easier extension
- Azure-specific patterns detected and others passed through unchanged
2. Sanitize unavailableErrors array (@priyansh17)
- Now using sanitizeStorageError() for both unavailableErrors array
and location.Status.Message for consistency
3. Add SAS token scrubbing (@anshulahuja98)
- Scrubs Azure SAS token parameters to prevent credential leakage
- Redacts: sig, se, st, sp, spr, sv, sr, sip, srt, ss
- Example: ?sig=secret becomes ?sig=***REDACTED***
Added comprehensive test coverage for SAS token scrubbing with 4 new
test cases covering various scenarios.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
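A hedged sketch of the SAS-token scrubbing; the parameter list mirrors the commit message, while the regex shape is illustrative rather than Velero's exact implementation:
```
package main

import (
	"fmt"
	"regexp"
)

// sasParams matches known Azure SAS query parameters and captures the
// "?name=" or "&name=" prefix so only the value is redacted.
var sasParams = regexp.MustCompile(`([?&](?:sig|se|st|sp|spr|sv|sr|sip|srt|ss)=)[^&\s]+`)

func scrubSASTokens(msg string) string {
	return sasParams.ReplaceAllString(msg, "${1}***REDACTED***")
}

func main() {
	fmt.Println(scrubSASTokens("GET https://acct.blob.core.windows.net/c?sv=2021-08-06&sig=secretvalue"))
	// Both sv and sig values are replaced with ***REDACTED***.
}
```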
Azure storage errors include verbose HTTP response details and XML
in error messages, making the BSL status.message field cluttered
and hard to read. This change adds sanitization to extract only
the error code and meaningful message.
Before:
BackupStorageLocation "test" is unavailable: rpc error: code = Unknown
desc = GET https://...
RESPONSE 404: 404 The specified container does not exist.
ERROR CODE: ContainerNotFound
<?xml version="1.0"...>
After:
BackupStorageLocation "test" is unavailable: rpc error: code = Unknown
desc = ContainerNotFound: The specified container does not exist.
AWS and GCP error messages are preserved as-is since they don't
contain verbose HTTP responses.
Fixes #8368
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
- Rename metric constants from maintenance_job_* to repo_maintenance_*
- Update metric help text to clarify these are for repo maintenance
- Rename functions: RegisterMaintenanceJob* → RegisterRepoMaintenance*
- Update all test references to use new names
Addresses review comments from @Lyndon-Li on PR #9414
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
Adds three new Prometheus metrics to track backup repository
maintenance job execution:
- velero_maintenance_job_success_total: Counter for successful jobs
- velero_maintenance_job_failure_total: Counter for failed jobs
- velero_maintenance_job_duration_seconds: Histogram for job duration
Metrics use repository_name label to identify specific BackupRepositories.
Duration is recorded for both successful and failed jobs (when job runs),
but not when job fails to start.
Includes comprehensive unit and integration tests.
Fixes #9225
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
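A sketch of how the three metrics could be declared and registered with client_golang; metric names and the repository_name label follow the commit message, while the registration style is illustrative:
```
package main

import (
	"github.com/prometheus/client_golang/prometheus"
)

var (
	maintenanceSuccessTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "velero_maintenance_job_success_total",
			Help: "Total number of successful repo maintenance jobs",
		},
		[]string{"repository_name"},
	)
	maintenanceFailureTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "velero_maintenance_job_failure_total",
			Help: "Total number of failed repo maintenance jobs",
		},
		[]string{"repository_name"},
	)
	// The histogram tracks the distribution of individual job durations,
	// not accumulated sums.
	maintenanceDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "velero_maintenance_job_duration_seconds",
			Help: "Duration of individual repo maintenance jobs, in seconds",
		},
		[]string{"repository_name"},
	)
)

func init() {
	prometheus.MustRegister(maintenanceSuccessTotal, maintenanceFailureTotal, maintenanceDuration)
}

func main() {
	maintenanceSuccessTotal.WithLabelValues("default-repo").Inc()
	maintenanceDuration.WithLabelValues("default-repo").Observe(42.0)
}
```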
* Add wildcard status fields
Signed-off-by: Joseph <jvaikath@redhat.com>
* Implement wildcard namespace expansion in item collector
- Introduced methods to get active namespaces and expand wildcard includes/excludes in the item collector.
- Updated getNamespacesToList to handle wildcard patterns and return expanded lists.
- Added utility functions for setting includes and excludes in the IncludesExcludes struct.
- Created a new package for wildcard handling, including functions to determine when to expand wildcards and to perform the expansion.
This enhances the backup process by allowing more flexible namespace selection based on wildcard patterns.
Signed-off-by: Joseph <jvaikath@redhat.com>
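A minimal sketch of wildcard expansion, assuming shell-style patterns matched against the cluster's active namespaces with path.Match (Velero's actual pattern syntax and matching may differ):
```
package main

import (
	"fmt"
	"path"
)

// expandWildcards returns the active namespaces matched by any pattern,
// de-duplicated and in first-match order.
func expandWildcards(patterns, activeNamespaces []string) []string {
	var expanded []string
	seen := map[string]bool{}
	for _, p := range patterns {
		for _, ns := range activeNamespaces {
			if ok, _ := path.Match(p, ns); ok && !seen[ns] {
				seen[ns] = true
				expanded = append(expanded, ns)
			}
		}
	}
	return expanded
}

func main() {
	fmt.Println(expandWildcards([]string{"app-*"}, []string{"app-dev", "app-prod", "velero"}))
	// Output: [app-dev app-prod]
}
```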
* Enhance wildcard expansion logic and logging in item collector
- Improved logging to include original includes and excludes when expanding wildcards.
- Updated the ShouldExpandWildcards function to check for wildcard patterns in excludes.
- Added comments for clarity in the expandWildcards function regarding pattern handling.
These changes enhance the clarity and functionality of the wildcard expansion process in the backup system.
Signed-off-by: Joseph <jvaikath@redhat.com>
* Add wildcard namespace fields to Backup CRD and update deepcopy methods
- Introduced `wildcardIncludedNamespaces` and `wildcardExcludedNamespaces` fields to the Backup CRD to support wildcard patterns for namespace inclusion and exclusion.
- Updated deepcopy methods to handle the new fields, ensuring proper copying of data during object manipulation.
These changes enhance the flexibility of namespace selection in backup operations, aligning with recent improvements in wildcard handling.
Signed-off-by: Joseph <jvaikath@redhat.com>
* Refactor Backup CRD to rename wildcard namespace fields
- Updated `BackupStatus` struct to rename `WildcardIncludedNamespaces` to `WildcardExpandedIncludedNamespaces` and `WildcardExcludedNamespaces` to `WildcardExpandedExcludedNamespaces` for clarity.
- Adjusted associated comments to reflect the new naming and ensure consistency in documentation.
- Modified deepcopy methods to accommodate the renamed fields, ensuring proper data handling during object manipulation.
These changes enhance the clarity and maintainability of the Backup CRD, aligning with recent improvements in wildcard handling.
Signed-off-by: Joseph <jvaikath@redhat.com>
* Fix
Signed-off-by: Joseph <jvaikath@redhat.com>
* Refactor where wildcard expansion happens
Signed-off-by: Joseph <jvaikath@redhat.com>
* Refactor Backup CRD and related components for expanded namespace handling
- Updated `BackupStatus` struct to rename fields for clarity: `WildcardExpandedIncludedNamespaces` and `WildcardExpandedExcludedNamespaces` are now `ExpandedIncludedNamespaces` and `ExpandedExcludedNamespaces`, respectively.
- Adjusted associated comments and deepcopy methods to reflect the new naming conventions.
- Removed the `getActiveNamespaces` function from the item collector, streamlining the namespace handling process.
- Enhanced logging during wildcard expansion to provide clearer insights into the process.
These changes improve the clarity and maintainability of the Backup CRD and enhance the functionality of namespace selection in backup operations.
Signed-off-by: Joseph <jvaikath@redhat.com>
* Refactor wildcard expansion logic in item collector and enhance testing
- Moved the wildcard expansion logic into a dedicated method, `expandNamespaceWildcards`, improving code organization and readability.
- Updated logging to provide detailed insights during the wildcard expansion process.
- Introduced comprehensive unit tests for wildcard handling, covering various scenarios and edge cases.
- Enhanced the `ShouldExpandWildcards` function to better identify wildcard patterns and validate inputs.
These changes improve the maintainability and robustness of the wildcard handling in the backup system.
Signed-off-by: Joseph <jvaikath@redhat.com>
* Enhance Restore CRD with expanded namespace fields and update logic
- Added `ExpandedIncludedNamespaces` and `ExpandedExcludedNamespaces` fields to the `RestoreStatus` struct to support expanded wildcard namespace handling.
- Updated the `DeepCopyInto` method to ensure proper copying of the new fields.
- Implemented logic in the restore process to expand wildcard patterns for included and excluded namespaces, improving flexibility in namespace selection during restores.
- Enhanced logging to provide insights into the expanded namespaces.
These changes improve the functionality and maintainability of the restore process, aligning with recent enhancements in wildcard handling.
Signed-off-by: Joseph <jvaikath@redhat.com>
* Refactor Backup and Restore CRDs to enhance wildcard namespace handling
- Renamed fields in `BackupStatus` and `RestoreStatus` from `ExpandedIncludedNamespaces` and `ExpandedExcludedNamespaces` to `IncludeWildcardMatches` and `ExcludeWildcardMatches` for clarity.
- Introduced a new field `WildcardResult` to record the final namespaces after applying wildcard logic.
- Updated the `DeepCopyInto` methods to accommodate the new field names and ensure proper data handling.
- Enhanced comments to reflect the changes and improve documentation clarity.
These updates improve the maintainability and clarity of the CRDs, aligning with recent enhancements in wildcard handling.
Signed-off-by: Joseph <jvaikath@redhat.com>
* Enhance wildcard namespace handling in Backup and Restore processes
- Updated `BackupRequest` and `Restore` status structures to include a new field `WildcardResult`, which captures the final list of namespaces after applying wildcard logic.
- Renamed existing fields to `IncludeWildcardMatches` and `ExcludeWildcardMatches` for improved clarity.
- Enhanced logging to provide detailed insights into the expanded namespaces and final results during backup and restore operations.
- Introduced a new utility function `GetWildcardResult` to streamline the selection of namespaces based on include/exclude criteria.
These changes improve the clarity and functionality of namespace selection in both backup and restore processes, aligning with recent enhancements in wildcard handling.
Signed-off-by: Joseph <jvaikath@redhat.com>
* Refactor namespace wildcard expansion logic in restore process
- Moved the wildcard expansion logic into a dedicated method, `expandNamespaceWildcards`, improving code organization and readability.
- Enhanced error handling and logging to provide detailed insights into the expanded namespaces during the restore operation.
- Updated the restore context with expanded namespace patterns and final results, ensuring clarity in the restore status.
These changes improve the maintainability and clarity of the restore process, aligning with recent enhancements in wildcard handling.
Signed-off-by: Joseph <jvaikath@redhat.com>
* Add checks for "*" in exclude
Signed-off-by: Joseph <jvaikath@redhat.com>
* Rebase
Signed-off-by: Joseph <jvaikath@redhat.com>
* Create NamespaceIncludesExcludes to get full NS listing for backup w/
Signed-off-by: Scott Seago <sseago@redhat.com>
Signed-off-by: Joseph <jvaikath@redhat.com>
* Add new NamespaceIncludesExcludes struct
Signed-off-by: Joseph <jvaikath@redhat.com>
* Move namespace expansion logic
Signed-off-by: Joseph <jvaikath@redhat.com>
* Update backup status with expansion
Signed-off-by: Joseph <jvaikath@redhat.com>
* Wildcard status update
Signed-off-by: Joseph <jvaikath@redhat.com>
* Skip ns check if wildcard expansion
Signed-off-by: Joseph <jvaikath@redhat.com>
* Move wildcard expansion to getResourceItems
Signed-off-by: Joseph <jvaikath@redhat.com>
* lint
Signed-off-by: Joseph <jvaikath@redhat.com>
* Changelog
Signed-off-by: Joseph <jvaikath@redhat.com>
* linting issues
Signed-off-by: Joseph <jvaikath@redhat.com>
* Remove wildcard restore to check if tests pass
Signed-off-by: Joseph <jvaikath@redhat.com>
* Fix namespace mapping test bug from lint fix
The previous commit (0a4aabcf4) attempted to fix linting issues by
using strings.Builder, but incorrectly wrote commas to a separate
builder and concatenated them at the end instead of between namespace
mappings.
This caused the namespace mapping string to be malformed:
Before: ns-1:ns-1-mapped,ns-2:ns-2-mapped
Bug: ns-1:ns-1-mappedns-2:ns-2-mapped,,
The malformed string was parsed as a single mapping with an invalid
namespace name containing a colon, causing Kubernetes to reject it:
"ns-1-mappedns-2:ns-2-mapped" is invalid
Fix by properly using strings.Builder to construct the mapping string
with commas between entries, addressing both the linting concern and
the functional bug.
Fixes the MultiNamespacesMappingResticTest and
MultiNamespacesMappingSnapshotTest failures.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Signed-off-by: Joseph <jvaikath@redhat.com>
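A hedged sketch of the corrected construction: the separator is written between entries rather than accumulated and appended afterwards:
```
package main

import (
	"fmt"
	"strings"
)

// buildNamespaceMapping joins src:dst pairs with commas between entries.
func buildNamespaceMapping(pairs [][2]string) string {
	var b strings.Builder
	for _, p := range pairs {
		if b.Len() > 0 {
			b.WriteString(",") // separator goes between mappings, not after them
		}
		b.WriteString(p[0] + ":" + p[1])
	}
	return b.String()
}

func main() {
	fmt.Println(buildNamespaceMapping([][2]string{{"ns-1", "ns-1-mapped"}, {"ns-2", "ns-2-mapped"}}))
	// Output: ns-1:ns-1-mapped,ns-2:ns-2-mapped
}
```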
* Fix wildcard namespace expansion edge cases
This commit fixes two bugs in the wildcard namespace expansion feature:
1. Empty wildcard results: When a wildcard pattern (e.g., "invalid*")
matched no namespaces, the backup would incorrectly back up ALL
namespaces instead of backing up nothing. This was because the empty
includes list was indistinguishable from "no filter specified".
Fix: Added wildcardExpanded flag to NamespaceIncludesExcludes to
track when wildcard expansion has occurred. When true and the
includes list is empty, ShouldInclude now correctly returns false.
2. Premature namespace filtering: An earlier attempt to fix bug #1
filtered namespaces too early in collectNamespaces, breaking
LabelSelector tests where namespaces should be included based on
resources within them matching the label selector.
Fix: Removed the premature filtering and rely on the existing
filterNamespaces call at the end of getAllItems, which correctly
handles both wildcard expansion and label selector scenarios.
The fixes ensure:
- Wildcard patterns matching nothing result in empty backups
- Label selectors still work correctly (namespace included if any
resource in it matches the selector)
- State is preserved across multiple ResolveNamespaceList calls
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Signed-off-by: Joseph <jvaikath@redhat.com>
* Run wildcard expansion during backup processing
Signed-off-by: Joseph <jvaikath@redhat.com>
* Lint fix
Signed-off-by: Joseph <jvaikath@redhat.com>
* Improve coverage
Signed-off-by: Joseph <jvaikath@redhat.com>
* gofmt fix
Signed-off-by: Joseph <jvaikath@redhat.com>
* Add wildcard details to describe backup status
Signed-off-by: Joseph <jvaikath@redhat.com>
* Revert "Remove wildcard restore to check if tests pass"
This reverts commit 4e22c2af855b71447762cb0a9fab7e7049f38a5f.
Signed-off-by: Joseph <jvaikath@redhat.com>
* Add restore describe for wildcard namespaces; revert restore wildcard removal
Signed-off-by: Joseph <jvaikath@redhat.com>
* Add coverage
Signed-off-by: Joseph <jvaikath@redhat.com>
* Lint
Signed-off-by: Joseph <jvaikath@redhat.com>
* Remove unintentional changes
Signed-off-by: Joseph <jvaikath@redhat.com>
* Remove wildcard status fields and mentions; remove usage of wildcard fields for backup and restore status.
Signed-off-by: Joseph <jvaikath@redhat.com>
* Remove status update changelog line
Signed-off-by: Joseph <jvaikath@redhat.com>
* Rename getNamespaceIncludesExcludes
Signed-off-by: Scott Seago <sseago@redhat.com>
* Rewrite brace pattern validation
Signed-off-by: Joseph <jvaikath@redhat.com>
* Different var for internal loop
Signed-off-by: Joseph <jvaikath@redhat.com>
---------
Signed-off-by: Joseph <jvaikath@redhat.com>
Signed-off-by: Scott Seago <sseago@redhat.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-authored-by: Scott Seago <sseago@redhat.com>
Co-authored-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-authored-by: Claude <noreply@anthropic.com>
Bump golangci-lint to v1.25.0: golangci-lint has supported Go v1.25
since v1.24.0, and v1.26.x is not yet stable.
Align the pr-linter-check action's golangci-lint version to v1.25.0.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
Add documentation explaining how volume policies are applied before
VGS grouping, including examples and troubleshooting guidance for the
multiple CSI drivers scenario.
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
VolumeGroupSnapshots were querying all PVCs with matching labels
directly from the cluster without respecting volume policies. This
caused errors when labeled PVCs included both CSI and non-CSI volumes,
or volumes from different CSI drivers that were excluded by policies.
This change filters PVCs by volume policy before VGS grouping,
ensuring only PVCs that should be snapshotted are included in the
group. A warning is logged when PVCs are excluded from VGS due to
volume policy.
Fixes #9344
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
* Fix the Job build error when the BackupRepository name is longer than 63 characters.
Fix the Job build error.
Consider the name length limitation change in the job list code.
Use a hash to replace the GetValidName function.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
* Use ref_name to replace ref.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
---------
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
Update test expectations to include createdName field for resources
with action 'created'. Also ensure namespaces track their created
names when created via EnsureNamespaceExistsAndIsReady.
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
When restoring resources with GenerateName, Kubernetes assigns the actual name
after creation, but Velero only tracked the original name from the backup in
itemKey. This caused volume information collection to fail when trying to fetch
PVCs using the original name instead of the actual created name.
Example:
- Original PVC name from backup: "test-vm-disk-1"
- Actual created PVC name: "test-vm-backup-2025-10-27-test-vm-disk-1-mdjkd"
- Volume info tried to fetch: "test-vm-disk-1" → Failed with "not found"
This affects any plugin or workflow using GenerateName during restore:
- kubevirt-velero-plugin (VMFR use case with PVC collision avoidance)
- Custom restore item actions using generateName
- Secrets/ConfigMaps restored with generateName
Changes:
1. Add createdName field to restoredItemStatus struct (pkg/restore/request.go)
2. Capture actual name from createdObj.GetName() (pkg/restore/restore.go:1520)
3. Use createdName in RestoredResourceList() when available (pkg/restore/request.go:93-95)
This fix is backwards compatible:
- createdName defaults to empty string
- When empty, falls back to itemKey.name (original behavior)
- Only populated for GenerateName resources where needed
Fixes volume information collection errors like:
"Failed to get PVC" error="persistentvolumeclaims \"<original-name>\" not found"
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
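A minimal sketch of the backwards-compatible fallback described above (function name illustrative):
```
package main

import "fmt"

// restoredResourceName prefers the server-assigned name captured at
// creation time and falls back to the original backup name when it is
// empty, matching the behavior described above.
func restoredResourceName(itemKeyName, createdName string) string {
	if createdName != "" {
		return createdName // GenerateName resources: the actual created name
	}
	return itemKeyName // original behavior for everything else
}

func main() {
	fmt.Println(restoredResourceName("test-vm-disk-1", "test-vm-backup-2025-10-27-test-vm-disk-1-mdjkd"))
	fmt.Println(restoredResourceName("my-config", ""))
}
```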
When restoring resources with GenerateName (where name is empty and K8s
assigns the actual name), the managed fields patch was failing with error
"name is required" because it was using obj.GetName() which returns empty
for GenerateName resources.
The fix uses createdObj.GetName() instead, which contains the actual name
assigned by Kubernetes after resource creation.
This affects any resource using GenerateName for restore, including:
- PersistentVolumeClaims restored by kubevirt-velero-plugin
- Secrets and ConfigMaps created with generateName
- Any custom resources using generateName
Changes:
- Line 1707: Use createdObj.GetName() instead of obj.GetName() in Patch call
- Lines 1702, 1709, 1713, 1716: Use createdObj in error/info messages for accuracy
This is a backwards-compatible fix since:
- For resources WITHOUT generateName: obj.GetName() == createdObj.GetName()
- For resources WITH generateName: createdObj.GetName() has the actual name
The managed fields patch was already correctly using createdObj (lines 1698-1700),
only the Patch() call was incorrectly using obj.
Fixes restore status showing FinalizingPartiallyFailed with "name is required"
error when restoring resources with GenerateName.
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
Add an error message to the velero install CLI output if VerifyJSONConfigs fails.
Only allow one element in node-agent-configmap's Data.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
The main branch reads the Go version from go.mod's go directive and
keeps only the major and minor version, because we want the actions to
use the latest patch version automatically, even when go.mod specifies
a version like 1.24.0.
Release branches can read the Go version from the go.mod file via the
setup-go action's own logic.
Refactor the Go version retrieval into a reusable workflow.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
The Bitnami MinIO image bitnami/minio:2021.6.17-debian-10-r7 is no longer
available on Docker Hub, causing E2E tests to fail.
This change implements a solution to build the MinIO image locally from
Bitnami's public Dockerfile and cache it for subsequent runs:
- Fetches the latest commit hash of the Bitnami MinIO Dockerfile
- Uses GitHub Actions cache to store/retrieve built images
- Only rebuilds when the upstream Dockerfile changes
- Maintains compatibility with existing environment variables
Fixes #9279
🤖 Generated with [Claude Code](https://claude.ai/code)
Update .github/workflows/e2e-test-kind.yaml
Signed-off-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Fix GetResourceWithLabel's bug: labels were not applied.
Add workOS for deployment and pod creation.
Add an OS label for node selection.
Enlarge the context timeout to 10 minutes; 5 minutes is not enough for Windows.
Enlarge the Kibishii test context to 15 minutes for Windows.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
- Expanded the design to include detailed implementation steps for wildcard expansion in both backup and restore operations.
- Added new status fields to the backup and restore CRDs to track expanded wildcard namespaces.
- Clarified the approach to ensure backward compatibility with existing `*` behavior.
- Addressed limitations and provided insights on restore operations handling wildcard-expanded backups.
This update aims to provide a comprehensive and clear framework for implementing wildcard namespace support in Velero.
Signed-off-by: Joseph <jvaikath@redhat.com>
- Clarified the use of the standalone `*` character in namespace specifications.
- Ensured consistent formatting for `*` throughout the document.
- Maintained focus on backward compatibility and limitations regarding wildcard usage.
This update enhances the clarity and consistency of the design document for implementing wildcard namespace support in Velero.
Signed-off-by: Joseph <jvaikath@redhat.com>
- Updated the abstract to clarify the current limitations of namespace specifications in Velero.
- Expanded the goals section to include specific objectives for implementing wildcard patterns in `--include-namespaces` and `--exclude-namespaces`.
- Detailed the high-level design and implementation steps, including the addition of new status fields in the backup CRD and the creation of a utility package for wildcard expansion.
- Addressed backward compatibility and known limitations regarding the use of wildcards alongside the existing "*" character.
This update aims to provide a comprehensive overview of the proposed changes for improved namespace selection flexibility.
Signed-off-by: Joseph <jvaikath@redhat.com>
This commit adds content to cover "includeExcludePolicy" in resource
policies.
It also tweaks the wording to clarify "volume policy" versus "resource
policies".
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
Fixes #4201: Ensure PriorityClasses are restored before pods that
reference them, preventing restoration failures when pods depend on
custom PriorityClasses.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
It failed by fetching the wrong VolumeInfo. Correct it.
Add the AdditionalBSL label for the applied cases.
Remove the unneeded default-BSL Kibishii test for the additional BSL case.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
PVB and PVR used to print the related pod's namespace in their output.
In v1.17 the behavior changed; use the backup or restore name to filter them instead.
Shorten the context timeout from 1h to 5m, because AWS is no longer covered.
Remove an unused if branch for vSphere.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
- Add --server-priority-class-name and --node-agent-priority-class-name flags to velero install command
- Configure data mover pods (PVB/PVR/DataUpload/DataDownload) to use priority class from node-agent-configmap
- Configure maintenance jobs to use priority class from repo-maintenance-job-configmap (global config only)
- Add priority class validation with ValidatePriorityClass and GetDataMoverPriorityClassName utilities
- Update e2e tests to include PriorityClass testing utilities
- Move priority class design document to Implemented folder
- Add comprehensive unit tests for all priority class implementations
- Update documentation for priority class configuration
- Add changelog entry for #8883
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
remove unused test utils
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
feat: add unit test for getting priority class name in maintenance jobs
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
doc update
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
feat: add priority class validation for repository maintenance jobs
- Add ValidatePriorityClassWithClient function to validate priority class existence
- Integrate validation in maintenance.go when creating maintenance jobs
- Update tests to cover the new validation functionality
- Return boolean from ValidatePriorityClass to allow fallback behavior
This ensures maintenance jobs don't fail due to non-existent priority classes,
following the same pattern used for data mover pods.
Addresses feedback from:
https://github.com/vmware-tanzu/velero/pull/8883#discussion_r2238681442
Refs #8869
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
refactor: clean up priority class handling for data mover pods
- Fix comment in node_agent.go to clarify PriorityClassName is only for data mover pods
- Simplify server.go to use dataPathConfigs.PriorityClassName directly
- Remove redundant priority class logging from controllers as it's already logged during server startup
- Keep logging centralized in the node-agent server initialization
This reduces code duplication and clarifies the scope of priority class configuration.
🤖 Generated with [Claude Code](https://claude.ai/code)
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
refactor: remove GetDataMoverPriorityClassName from kube utilities
Remove GetDataMoverPriorityClassName function and its tests as priority
class is now read directly from dataPathConfigs instead of parsing from
ConfigMap. This simplifies the codebase by eliminating the need for
indirect ConfigMap parsing.
Refs #8869
🤖 Generated with [Claude Code](https://claude.ai/code)
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
refactor: remove priority class validation from install command
Remove priority class validation during install as it's redundant
since validation already occurs during server startup. Users cannot
see console logs during install, making the validation warnings
ineffective at this stage.
The validation remains in place during server and node-agent startup
where it's more appropriate and visible to users.
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-Authored-By: Claude <noreply@anthropic.com>
fixes #8610
This commit extends the resource policies so that users can define
resource include/exclude filters in a policy and reuse them across different backups.
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
Addresses #9133 by adding clear documentation about the current limitation
where only the first element in the loadAffinity array is processed.
Changes:
- Added prominent warning at the beginning of loadAffinity section
- Updated misleading examples that showed multiple array elements
- Added warnings to each multi-element example explaining the limitation
- Clarified that the recommended approach is to combine all conditions
into a single loadAffinity element using both matchLabels and matchExpressions
This provides the "bare minimum" documentation clarification requested
in the issue until a code fix can be implemented.
🤖 Generated with [Claude Code](https://claude.ai/code)
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Apply suggestion from @kaovilai
Signed-off-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Apply suggestion from @kaovilai
Signed-off-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Apply suggestion from @kaovilai
Signed-off-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
feat: Add CA cert fallback when caCertFile fails in download requests
- Fallback to BSL cert when caCertFile cannot be opened
- Combine certificate handling blocks to reuse CA pool initialization
- Add comprehensive unit tests for fallback behavior
This improves robustness by allowing downloads to proceed with BSL CA cert
when the provided CA cert file is unavailable or unreadable.
🤖 Generated with [Claude Code](https://claude.ai/code)
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-Authored-By: Claude <noreply@anthropic.com>
If distributed snapshotting is enabled in the external snapshotter, a manager label is added to the VolumeSnapshotContent. When exposing the snapshot, Velero needs to keep this label; otherwise the exposed snapshot will never become ready.
Signed-off-by: Felix Prasse <1330854+flx5@users.noreply.github.com>
* Enhance File System Backup documentation with details on volume snapshot behavior and backup method decision flow
Fixes: Velero AWS snapshots not occurring with the AWS plugin #9090
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
* Clarify conditions for excluding volumes from File System Backup and enhance decision flow for volume snapshots
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
---------
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
The ResticIdentifier field in BackupRepository is only relevant for restic
repositories. For kopia repositories, this field is unused and should be
omitted. This change:
- Adds omitempty tag to ResticIdentifier field in BackupRepository CRD
- Updates controller to only populate ResticIdentifier for restic repos
- Adds tests to verify behavior for both restic and kopia repository types
This ensures backward compatibility while properly handling kopia repositories
that don't require a restic-compatible identifier.
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
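An illustrative reduction of the change (surrounding CRD fields omitted): with omitempty, kopia repositories serialize without the restic-only field:
```
package main

import (
	"encoding/json"
	"fmt"
)

type BackupRepositorySpec struct {
	RepositoryType string `json:"repositoryType"`
	// ResticIdentifier is only meaningful for restic repositories and is
	// dropped from the serialized object when empty.
	ResticIdentifier string `json:"resticIdentifier,omitempty"`
}

func main() {
	kopia, _ := json.Marshal(BackupRepositorySpec{RepositoryType: "kopia"})
	fmt.Println(string(kopia)) // {"repositoryType":"kopia"}
}
```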
The reason is that the toolchain cannot update automatically.
If 1.24 can force CI to use the latest patch version
without forcing users to upgrade their local Go version,
this should be the better approach.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
add changelog file
Show defaultVolumesToFsBackup in describe only when set by the user
minor ut fix
minor fix
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
* Refactor backup volume info retrieval and snapshot checkpoint building in e2e tests
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
log backup volume info retrieval and snapshot checkpoint building
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
Add error handling for volume info retrieval in backup tests
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
Add error handling for volume info retrieval in backup tests
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
* Update snapshot checkpoint building to use DefaultKibishiiWorkerCounts
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
---------
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
This PR fixes issue #8870 where Velero was unnecessarily adding the restore-wait init container when restoring pods with volumes that were backed up using native datamover or CSI.
When restoring pods with volumes, Velero was always adding the restore-wait init container, even when the volumes were backed up using native datamover or CSI and didn't need file system restores. This was causing unnecessary overhead and potential issues.
PVR action to remove restore-wait init container on restore
Changes:
- Remove ALL existing restore-wait init containers before deciding whether to add a new one
- This covers both scenarios: when no file system restore is needed AND when preventing duplicates
- Simplify the add logic since we've already cleaned up existing containers
- Add better logging to show how many containers were removed
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
* Clarify thirdparty label/annotations on the maintenance jobs
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
* Clarify that maintenance jobs do not inherit all labels/annotations
- Address PR review feedback and issue #8974
- Make it explicit that only specific predefined third-party labels and annotations are propagated
- Add Important note to prevent user confusion about label/annotation inheritance behavior
- Currently only azure.workload.identity/use label and iam.amazonaws.com/role annotation are inherited
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
---------
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-authored-by: Xun Jiang/Bruce Jiang <59276555+blackpiglet@users.noreply.github.com>
* Add support for pod labels and service account annotations in Velero configuration
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
* Refactor Velero configuration to use string types for pod labels and service account annotations
Signed-off-by: Priyansh Choudhary <im1706@gmail.com>
* Please note that only the Kibishii workload supports the Windows test,
because the other workloads use the busybox image, which does not support Windows.
* Refactor CreateFileToPod to support Windows.
* Add skip logic for migration test if the version is under 1.16.
* Add main in semver check.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
Migration cases use Kibishii as the workload, and the SC mapping
ConfigMap is needed for all scenarios, because the standby cluster
doesn't have the Kibishii StorageClass after setup.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
In the restore workflow, the name used represented both the backed-up
resource and the resource to be restored, but the restored resource name
may differ from the backed-up one; e.g. PV and VSC are cluster-scoped
resources that must be renamed to avoid conflicts.
Rename the name variable to backupResourceName, and use obj.GetName()
for the restore operation.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
Enables the node-agent to start even if the
/host_pods path does not exist.
If the path is present, the existing logic
remains unchanged, ensuring it is readable.
Signed-off-by: Michal Pryc <mpryc@redhat.com>
Combine existing PVC non-CSI RIAs and move annotation
removal out of the CSI plugin to fix issues with
CSI volumes when using fs-backup
Signed-off-by: Scott Seago <sseago@redhat.com>
This change fixes a minor typo in the Backup Hooks documentation, changing "Defaults is" to "Defaults to".
Signed-off-by: Andreas Lindhé <7773090+lindhe@users.noreply.github.com>
Also bumped to support upgraded k8s.io/ deps.
- controller-gen to v0.16.5
- sigs.k8s.io/controller-runtime v0.19.2
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Run backup post hooks inside ItemBlock synchronously as the ItemBlocks are handled asynchronously
Fixes #8516
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
This commit makes sure that when a backup is deleted, the controller
deletes the CSI snapshot even when the backup tarball is not uploaded.
fixes #8160
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
Check the PVB status via podvolume Backupper rather than calling API server to avoid API server issue
Fixes#8587
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
This commit changes the restore finalizer controller to check the item
operation status of a PVC before patching the PV bound to it. If the
operation is not successful, it skips patching the PV.
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
The issue is caused by a change in controller-runtime: WithEventFilter() doesn't apply to WatchesRawSource(),
so this commit sets a Predicate for WatchesRawSource() separately.
Fixes #8437
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
* issue 8433: add ask label to data mover pods
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
* check existence of the same label from node-agent
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
---------
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
This commit adds SecurityContext that complies with "restricted" level
per Pod Security Standards to "restore-helper" initContainer.
It ensures the restore won't fail when the cluster enforces PSA.
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
* Only install and uninstall SC and VSC once for default cluster.
* Install and uninstall SC and VSC for standby cluster on migration case.
* Refactor the StorageClass and VolumeSnapshotClass YAMLs.
* Prettify the e2e_suite_test.go
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
* Add new flag HAS_VSPHERE_PLUGIN for E2E test.
* Modify the E2E README for the new parameter.
* Add the VolumeSnapshotClass for VKS.
* Modify the plugin install logic.
* Modify the cases to support data mover case in VKS.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
Always test the latest available patch version of each supported k8s version available in kindest/node images.
i.e. this adds v1.31 and v1.30 to the test matrix and upgrades patch versions for the others.
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
This commit implements a new --annotations flag in the backup and restore create commands.
This allows users to specify key-value pairs for annotations directly at the time of backup and restore creation, in the same way as the --labels flag.
Signed-off-by: Alvaro Romero <alromero@redhat.com>
By changing the output type to image,
you can then execute a command like the following to create a multi-arch image:
```
BUILDX_PLATFORMS=linux/amd64,linux/arm64 BUILDX_OUTPUT_TYPE=image make container
```
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
As with other plugin types, the information on how to implement
an IBA plugin will be in the velero-plugin-example repo.
Signed-off-by: Scott Seago <sseago@redhat.com>
Rename backup-repository-config to backup-repository-configmap.
Rename repo-maintenance-job-config to repo-maintenance-job-configmap.
Rename node-agent-config to node-agent-configmap.
Add those three parameters to `velero install` CLI.
Modify the design and the site documents.
Signed-off-by: Xun Jiang <xun.jiang@broadcom.com>
package "k8s.io/api/core/v1" is being imported more than once (ST1019)
pvc_pv_test.go:32:2: other import of "k8s.io/api/core/v1"
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
The code changes are related to the `NewPeriodicalEnqueueSource` function in the `kube/periodical_enqueue_source.go` file. This function is used to create a new instance of the `PeriodicalEnqueueSource` struct, which is responsible for periodically enqueueing objects into a work queue.
The change involves adding a new parameter to this function, `controllerName string`, and modifying the existing `logger` parameter to include additional fields.
Here's what changed:
1. A new `controllerName` parameter was added to the `NewPeriodicalEnqueueSource` function.
These changes add more context and metadata to the logging output, possibly for debugging or monitoring purposes.
The other files (`restore_operations_controller.go`, `schedule_controller.go`, and their respective test files) were modified to use this updated `NewPeriodicalEnqueueSource` function with the new `controllerName` parameter.
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Changed the tests to use a mocked function that will not read actual
secrets from env variables or an AWS config file that may be
on the system running the tests.
As a second guard against exposed secrets, the comparison of values
does not show the actual AWS values. This is to prevent a
situation where a programming error may still allow the test to read
AWS config/env variables instead of using the mocked function.
Signed-off-by: Michal Pryc <mpryc@redhat.com>
* Correcting Openshift on IBM Documentation Error
I have to admit to some significant error and embarrassment regarding the documentation update
about Openshift on IBM Cloud pull request https://github.com/vmware-tanzu/velero/pull/8069.
I will correct my error before it gets any further.
Just exposing /var/data/kubelet/pods is incorrect and host path /var/lib/kubelet/pods should remain unchanged.
The errors with the defaults during csi snapshot data movement were:
data path backup failed: Failed to run kopia backup: unable to get local
block device entry: resolveSymlink: lstat /var/data/: no such
file or directory
I suspected this was the same as RancherOS and Nutanix. It is not.
The original tested changes changed both /var/lib/kubelet/{pods,plugins} to
/var/data/kubelet/{pods,plugins}.
The published changes only result in the error:
```
status:
completionTimestamp: '2024-08-02T17:12:29Z'
message: >-
data path backup failed: Failed to run kopia backup: unable to get local
block device entry: resolveSymlink: lstat /var/data/kubelet/plugins: no such
file or directory
node: 10.240.0.5
phase: Failed
progress: {}
startTimestamp: '2024-08-02T17:12:11Z'
```
After making continued modifications to the daemonset the correct configuration was:
```
volumeMounts:
- name: host-pods
mountPath: /host_pods
mountPropagation: HostToContainer
- name: host-plugins
mountPath: /var/data/kubelet/plugins
mountPropagation: HostToContainer
```
```
volumes:
- name: host-pods
hostPath:
path: /var/lib/kubelet/pods
type: ''
- name: host-plugins
hostPath:
path: /var/data/kubelet/plugins
type: ''
```
Only the changes to the plugin path were required.
The plugin path changes were required to both the mount path and the host path.
Regardless of whether /var/lib/kubelet/pods or /var/data/kubelet/pods host path, backups and restore
succeeded provided the plugin path was modified.
```
volumeMounts:
- name: host-pods
mountPath: /host_pods
mountPropagation: HostToContainer
- name: host-plugins
mountPath: /var/data/kubelet/plugins
mountPropagation: HostToContainer
```
```
volumes:
- name: host-pods
hostPath:
path: /var/data/kubelet/pods
type: ''
- name: host-plugins
hostPath:
path: /var/data/kubelet/plugins
type: ''
```
After getting on-host access, I was able to confirm. Pods are at /var/lib/kubelet/pods.
```
ls /var/lib/kubelet/pods
07c0be63-335d-4cfb-b39f-816bc2fb32cd
51f31b3e-4710-4ef0-8626-5f1a78a624b2
a4802fd3-3b62-45a4-8f21-974880b6f92a
cccb35c9-b4f9-4ca9-a697-736ae64f09ad
0a5d4366-7fa1-4525-9e45-a43a362b8542
558b0643-0661-4d4a-b03e-aac60c6ad710
a4b106fb-5b7b-48e5-828a-ea7b41ba0e59
ce1290e1-4330-4df6-8166-14784bcce930
```
On host the volumes are in /var/data/kubelet/plugins.
```
ls /var/data/kubelet/plugins/kubernetes.io/csi/openshift-storage.cephfs.csi.ceph.com/231e04896c4f528efb95d23a3c153db9fc4a7206b7320f74443f30de7228dba5/globalmount/velero/backups/backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c/
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-csi-volumesnapshotclasses.json.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-resource-list.json.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-csi-volumesnapshotcontents.json.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-results.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-csi-volumesnapshots.json.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-volumesnapshots.json.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-itemoperations.json.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c.tar.gz
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-logs.gz
velero-backup.json
backup-resources-41d84d16-47a7-4ea8-a9cb-6348d01bcb2c-podvolumebackups.json.gz
```
With the volume config changed to expose /var/data/kubelet/plugins for the plugin hostPath, the DataUploads and DataDownloads
succeed for both Filesystem and Block mode PVCs.
```
status:
completionTimestamp: '2024-08-02T17:23:33Z'
node: 10.240.0.5
path: >-
/host_pods/7fcb9d56-7885-437c-acd3-67db6b1ee8ae/volumeDevices/kubernetes.io~csi/pvc-47b91f56-db8c-44bf-9ecc-737170561b4b
phase: Completed
progress:
bytesDone: 5368709120
totalBytes: 5368709120
snapshotID: 8faae36b3592fee4efbfad024f26033e
startTimestamp: '2024-08-02T17:21:22Z'
```
```
status:
completionTimestamp: '2024-08-02T18:42:19Z'
node: 10.240.0.5
phase: Completed
progress:
bytesDone: 5368709120
totalBytes: 5368709120
startTimestamp: '2024-08-02T18:41:00Z'
```
My apologies for the error.
Signed-off-by: Michael Fruchtman <msfrucht@us.ibm.com>
* Add context to plugins mountPath
Signed-off-by: Michael Fruchtman <msfrucht@us.ibm.com>
---------
Signed-off-by: Michael Fruchtman <msfrucht@us.ibm.com>
This commit makes sure the dbr's status is "Processed" when an error
happens before the actual deletion is started
fixes #7812
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
Added the checksumAlgorithm option; this stops 403 errors, as per https://github.com/vmware-tanzu/velero/issues/7543
Added the plugins line, as velero install failed without this option in version 1.14.0
Removed the volumesnapshotlocation, as it does not exist in 1.14.0
Signed-off-by: Gareth Anderson <gareth.anderson03@gmail.com>
Updates the documentation for CSI snapshot data movement for OpenShift
on IBM Cloud.
The default hostpath /var/lib/kubelet/pods cannot find
PersistentVolumeClaims with volumeMode: Block on host.
The correct hostpath for OpenShift on IBM Cloud is
/var/data/kubelet/pods.
Signed-off-by: Michael Fruchtman <msfrucht@us.ibm.com>
Breaking change (can be mitigated if needed in the future): the v1 branch accepted an optional seconds field at the beginning of the cron spec. This is non-standard and has led to a lot of confusion. The new default parser conforms to the standard as described by [the Cron Wikipedia page](https://en.wikipedia.org/wiki/Cron). It is unlikely that this affects us per https://github.com/vmware-tanzu/velero/pull/31
Other notes:
> CRON_TZ is now the recommended way to specify the timezone of a single schedule, which is sanctioned by the specification. The legacy "TZ=" prefix will continue to be supported since it is unambiguous and easy to do so.
References: https://pkg.go.dev/github.com/robfig/cron/v3#readme-upgrading-to-v3-june-2019
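As a hedged illustration of the sanctioned form, a Velero Schedule using `CRON_TZ` with a standard five-field spec (the schedule name and cadence are placeholders):
```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly                # placeholder schedule name
  namespace: velero
spec:
  schedule: "CRON_TZ=America/New_York 0 3 * * *"   # standard five-field spec, no seconds field
  template:
    includedNamespaces:
      - "*"
```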
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
add changelog file
change log level and add more detailed comments
make update
add return for sc get call if error
Signed-off-by: Shubham Pampattiwar <spampatt@redhat.com>
Check whether the namespaces specified in the
backup.Spec.IncludeNamespaces exist during backup resource collection.
If not, log an error to mark the backup as PartiallyFailed.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
This commit fixes #7849.
It uses the PVC instead of the PV to track CSI snapshots when generating restore
volume info metadata, so that the metadata can be populated correctly even when
the PVC is not bound to a PV.
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
When dry-running tag-release.sh, there's an error:
"fatal: detected dubious ownership in repository at
'/github.com/vmware-tanzu/velero'"
This commit works around this issue to make sure "tag-release.sh"
can finish successfully.
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
Tweak the command and remove the sections which include upgrading from
older versions, given v1.13.x is a prerequisite.
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
As per PR #7281, if the repository count is more than 1, snapshot deletion takes a fast path, so we should have more than one FS backup repository per backup.
Signed-off-by: danfengl <danfengl@vmware.com>
1. In the data movement scenario, the volumesnapshotcontent created by the Velero backup is deleted instead of retained as in the CSI scenario, so add
a checkpoint for the data movement scenario to verify that no volumesnapshotcontent is left after the Velero backup;
2. Fix the global context variable issue: the context variable is not effective because it's initialized right at the very beginning of
all tests instead of at the beginning of each test, so if someone scripts a new E2E test and does not overwrite it in the test body, the
test will fail if it is triggered one hour later;
3. Because the CSI plugin is deprecated, migration tests broke, as v1.13 still needs to install the CSI plugin for the test.
Signed-off-by: danfengl <danfengl@vmware.com>
1. Add sleep for native snapshot tests when using the test.go interface;
2. Add --confirm to the velero plugin add CLI, per the newly introduced feature.
Signed-off-by: danfengl <danfengl@vmware.com>
This commit makes change to CLI so `velero restore describe` will
download restore volume info and render the CSI snapshot restores based
on its content.
Signed-off-by: Daniel Jiang <daniel.jiang@broadcom.com>
This commit bumps up the Golang version for building and testing Velero to v1.22.
It also updates controller-gen to v0.14.0 to fix an issue under the new
version of Go.
For more details see https://github.com/golang/go/issues/65637
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
The wait error changed from `timed out waiting for the condition`
to `context deadline exceeded`.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
handleSkippedPVHasRetainPolicy:
according to the comment, calling executePVAction aims to reset the PV's
claimRef, but the reset logic was moved into resetVolumeBindingInfo
in release-1.4.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
Check the existence of the namespaces provided in the "--include-namespaces" option and report a validation error if not found.
Fixes #7431
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
This commit makes sure that when kopia connects to the repository, the
credentials file specified in BSL.spec.config takes priority over
Pod Environment credentials when IRSA is configured.
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
When debugging this error it is currently hard to identify what
CRD is causing the issue. This is particularly difficult when
dealing with over a hundred CRDs.
Signed-off-by: Jose Arevalo <jose.matias.arevalo@gmail.com>
- [ ] [Accepted the DCO](https://velero.io/docs/v1.5/code-standards/#dco-sign-off). Commits without the DCO will delay acceptance.
- [ ] [Created a changelog file](https://velero.io/docs/v1.5/code-standards/#adding-a-changelog) or added `/kind changelog-not-required` as a comment on this pull request.
- [ ] [Created a changelog file (`make new-changelog`)](https://velero.io/docs/main/code-standards/#adding-a-changelog) or comment `/kind changelog-not-required` on this PR.
- [ ] Updated the corresponding documentation in `site/content/docs/main`.
stale-issue-message: "This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days. If a Velero team member has requested log or more information, please provide the output of the shared commands."
@@ -20,4 +20,4 @@ jobs:
days-before-pr-close: -1
# Only issues made after Feb 09 2021.
start-date: "2021-09-02T00:00:00"
exempt-issue-labels: "Epic,Area/CLI,Area/Cloud/AWS,Area/Cloud/Azure,Area/Cloud/GCP,Area/Cloud/vSphere,Area/CSI,Area/Design,Area/Documentation,Area/Plugins,Bug,Enhancement/User,kind/requirement,kind/refactor,kind/tech-debt,limitation,Needs investigation,Needs triage,Needs Product,P0 - Hair on fire,P1 - Important,P2 - Long-term important,P3 - Wouldn't it be nice if...,Product Requirements,Restic - GA,Restic,release-blocker,Security"
exempt-issue-labels: "Epic,Area/CLI,Area/Cloud/AWS,Area/Cloud/Azure,Area/Cloud/GCP,Area/Cloud/vSphere,Area/CSI,Area/Design,Area/Documentation,Area/Plugins,Bug,Enhancement/User,kind/requirement,kind/refactor,kind/tech-debt,limitation,Needs investigation,Needs triage,Needs Product,P0 - Hair on fire,P1 - Important,P2 - Long-term important,P3 - Wouldn't it be nice if...,Product Requirements,Restic - GA,Restic,release-blocker,Security,backlog"
@@ -63,7 +63,7 @@ Okteto integrates Velero in [Okteto Cloud][94] and [Okteto Enterprise][95] to pe
Replicated uses the Velero open source project to enable snapshots in [KOTS][101] to backup Kubernetes manifests & persistent volumes. In addition to the default functionality that Velero provides, [KOTS][101] provides a detailed interface in the [Admin Console][102] that can be used to manage the storage destination and schedule, and to perform and monitor the backup and restore process.<br>
**[CloudCasa][103]**<br>
[Catalogic Software][104] integrates Velero with [CloudCasa][103] - A Smart Home in the Cloud for Backups. CloudCasa is a simple, scalable, cloud-native solution providing data protection and disaster recovery as a service. This solution is built using Kubernetes for protecting Kubernetes clusters.<br>
[Catalogic Software][104] integrates Velero with [CloudCasa][103] - A Smart Home in the Cloud for Backups. CloudCasa is a full-featured, scalable, cloud-native solution providing Kubernetes data protection, disaster recovery, and migration as a service. An option to manage existing Velero instances and an enterprise self-hosted option are also available.<br>
**[Microsoft Azure][105]**<br>
[Azure Backup for AKS][106] is an Azure native, Kubernetes aware, Enterprise ready backup for containerized applications deployed on Azure Kubernetes Service (AKS). AKS Backup utilizes Velero to perform backup and restore operations to protect stateful applications in AKS clusters.<br>
@@ -107,6 +107,29 @@ Lazy consensus does _not_ apply to the process of:
* Removal of maintainers from Velero
## Deprecation Policy
### Deprecation Process
Any contributor may introduce a request to deprecate a feature or an option of a feature by opening a feature request issue in the vmware-tanzu/velero GitHub project. The issue should describe why the feature is no longer needed or has become detrimental to Velero, as well as whether and how it has been superseded. The submitter should give as much detail as possible.
Once the issue is filed, a one-month discussion period begins. Discussions take place within the issue itself as well as in the community meetings. The person who opens the issue, or a maintainer, should add the date and time marking the end of the discussion period in a comment on the issue as soon as possible after it is opened. A decision on the issue needs to be made within this one-month period.
The feature will be deprecated by a supermajority vote of 50% plus one of the project maintainers at the time of the vote tallying, which is 72 hours after the end of the community meeting that closes the comment period. (Maintainers are permitted to vote in advance of the deadline, but should hold their votes until as close to the deadline as possible so as to hear all possible discussion.) Votes will be tallied in comments on the issue.
Non-maintainers may add non-binding votes in comments to the issue as well; these are opinions to be taken into consideration by maintainers, but they do not count as votes.
If the vote passes, the deprecation window takes effect in the subsequent release, and the removal follows the schedule.
### Schedule
If the deprecation proposal passes by a supermajority vote, the feature is deprecated in the next minor release and can be removed completely two minor versions later (or the equivalent if the major version number changes); e.g., if a feature is deprecated in the Nth minor version, it can be removed after the N+2 minor version or its equivalent.
### Deprecation Window
The deprecation window is the period from the release in which the deprecation takes effect through the release in which the feature is removed. During this period, only critical security vulnerabilities and catastrophic bugs should be fixed.
**Note:** If a backup relies on a deprecated feature, then backups made with the last Velero release before this feature is removed must still be restorable in version `n+2`. For instance, something like restic feature support, that might mean that restic is removed from the list of supported uploader types in version `n` but the underlying implementation required to restore from a restic backup won't be removed until release `n+2`.
## Updating Governance
All substantive changes in Governance require a supermajority agreement by all maintainers.
Velero supports IPv4, IPv6, and dual stack environments. Support for this was tested against Velero v1.8.
The Velero maintainers are continuously working to expand testing coverage, but are not able to test every combination of Velero and supported Kubernetes versions for each Velero release. The table above is meant to track the current testing coverage and the expected supported Kubernetes versions for each Velero version. If you have a question about test coverage before v1.9, please reach out in the [#velero-users](https://kubernetes.slack.com/archives/C6VCGP4MT) Slack channel.
The Velero maintainers are continuously working to expand testing coverage, but are not able to test every combination of Velero and supported Kubernetes versions for each Velero release. The table above is meant to track the current testing coverage and the expected supported Kubernetes versions for each Velero version.
If you are interested in using a different version of Kubernetes with a given Velero version, we'd recommend that you perform testing before installing or upgrading your environment. For full information around capabilities within a release, also see the Velero [release notes](https://github.com/vmware-tanzu/velero/releases) or Kubernetes [release notes](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG). See the Velero [support page](https://velero.io/docs/latest/support-process/) for information about supported versions of Velero.
@@ -12,13 +12,13 @@ The Velero project maintains the following [governance document](https://github.
Security is of the highest importance and all security vulnerabilities or suspected security vulnerabilities should be reported to Velero privately, to minimize attacks against current users of Velero before they are fixed. Vulnerabilities will be investigated and patched on the next patch (or minor) release as soon as possible. This information could be kept entirely internal to the project.
If you know of a publicly disclosed security vulnerability for Velero, please **IMMEDIATELY** contact the VMware Security Team (security@vmware.com).
If you know of a publicly disclosed security vulnerability for Velero, please **IMMEDIATELY** contact the Security Team (velero-security.pdl@broadcom.com).
**IMPORTANT: Do not file public issues on GitHub for security vulnerabilities**
To report a vulnerability or a security-related issue, please contact the VMware email address with the details of the vulnerability. The email will be fielded by the VMware Security Team and then shared with the Velero maintainers who have committer and release permissions. Emails will be addressed within 3 business days, including a detailed plan to investigate the issue and any potential workarounds to perform in the meantime. Do not report non-security-impacting bugs through this channel. Use [GitHub issues](https://github.com/vmware-tanzu/velero/issues/new/choose) instead.
To report a vulnerability or a security-related issue, please contact the email address with the details of the vulnerability. The email will be fielded by the Security Team and then shared with the Velero maintainers who have committer and release permissions. Emails will be addressed within 3 business days, including a detailed plan to investigate the issue and any potential workarounds to perform in the meantime. Do not report non-security-impacting bugs through this channel. Use [GitHub issues](https://github.com/vmware-tanzu/velero/issues/new/choose) instead.
## Proposed Email Content
@@ -29,7 +29,7 @@ Provide a descriptive subject line and in the body of the email include the foll
* Basic identity information, such as your name and your affiliation or company.
* Detailed steps to reproduce the vulnerability (POC scripts, screenshots, and logs are all helpful to us).
* Description of the effects of the vulnerability on Velero and the related hardware and software configurations, so that the VMware Security Team can reproduce it.
* Description of the effects of the vulnerability on Velero and the related hardware and software configurations, so that the Security Team can reproduce it.
* How the vulnerability affects Velero usage and an estimation of the attack surface, if there is one.
* List other projects or dependencies that were used in conjunction with Velero to produce the vulnerability.
@@ -49,7 +49,7 @@ Provide a descriptive subject line and in the body of the email include the foll
## Patch, Release, and Disclosure
The VMware Security Team will respond to vulnerability reports as follows:
The Security Team will respond to vulnerability reports as follows:
@@ -62,7 +62,7 @@ The VMware Security Team will respond to vulnerability reports as follows:
5. The Security Team will also create a [CVSS](https://www.first.org/cvss/specification-document) using the [CVSS Calculator](https://www.first.org/cvss/calculator/3.0). The Security Team makes the final call on the calculated CVSS; it is better to move quickly than making the CVSS perfect. Issues may also be reported to [Mitre](https://cve.mitre.org/) using this [scoring calculator](https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator). The CVE will initially be set to private.
6. The Security Team will work on fixing the vulnerability and perform internal testing before preparing to roll out the fix.
7. The Security Team will provide early disclosure of the vulnerability by emailing the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list. Distributors can initially plan for the vulnerability patch ahead of the fix, and later can test the fix and provide feedback to the Velero team. See the section **Early Disclosure to Velero Distributors List** for details about how to join this mailing list.
8. A public disclosure date is negotiated by the VMware Security Team, the bug submitter, and the distributors list. We prefer to fully disclose the bug as soon as possible once a user mitigation or patch is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for distributor coordination. The timeframe for disclosure is from immediate (especially if it’s already publicly known) to a few weeks. For a critical vulnerability with a straightforward mitigation, we expect the report date for the public disclosure date to be on the order of 14 business days. The VMware Security Team holds the final say when setting a public disclosure date.
8. A public disclosure date is negotiated by the Security Team, the bug submitter, and the distributors list. We prefer to fully disclose the bug as soon as possible once a user mitigation or patch is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for distributor coordination. The timeframe for disclosure is from immediate (especially if it’s already publicly known) to a few weeks. For a critical vulnerability with a straightforward mitigation, we expect the report date for the public disclosure date to be on the order of 14 business days. The Security Team holds the final say when setting a public disclosure date.
9. Once the fix is confirmed, the Security Team will patch the vulnerability in the next patch or minor release, and backport a patch release into all earlier supported releases. Upon release of the patched version of Velero, we will follow the **Public Disclosure Process**.
@@ -79,7 +79,7 @@ The Security Team will also publish any mitigating steps users can take until th
* Use security@vmware.com to report security concerns to the VMware Security Team, who uses the list to privately discuss security issues and fixes prior to disclosure.
* Use velero-security.pdl@broadcom.com to report security concerns to the Security Team, who uses the list to privately discuss security issues and fixes prior to disclosure.
* Join the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list for early private information and vulnerability disclosure. Early disclosure may include mitigating steps and additional information on security patch releases. See below for information on how Velero distributors or vendors can apply to join this list.
@@ -107,11 +107,11 @@ To be eligible to join the [Velero Distributors](https://groups.google.com/u/1/g
## Embargo Policy
The information that members receive on the Velero Distributors mailing list must not be made public, shared, or even hinted at anywhere beyond those who need to know within your specific team, unless you receive explicit approval to do so from the VMware Security Team. This remains true until the public disclosure date/time agreed upon by the list. Members of the list and others cannot use the information for any reason other than to get the issue fixed for your respective distribution's users.
The information that members receive on the Velero Distributors mailing list must not be made public, shared, or even hinted at anywhere beyond those who need to know within your specific team, unless you receive explicit approval to do so from the Security Team. This remains true until the public disclosure date/time agreed upon by the list. Members of the list and others cannot use the information for any reason other than to get the issue fixed for your respective distribution's users.
Before you share any information from the list with members of your team who are required to fix the issue, these team members must agree to the same terms, and only be provided with information on a need-to-know basis.
In the unfortunate event that you share information beyond what is permitted by this policy, you must urgently inform the VMware Security Team (security@vmware.com) of exactly what information was leaked and to whom. If you continue to leak information and break the policy outlined here, you will be permanently removed from the list.
In the unfortunate event that you share information beyond what is permitted by this policy, you must urgently inform the Security Team (velero-security.pdl@broadcom.com) of exactly what information was leaked and to whom. If you continue to leak information and break the policy outlined here, you will be permanently removed from the list.
@@ -123,6 +123,6 @@ Send new membership requests to projectvelero-distributors@googlegroups.com. In
## Confidentiality, integrity and availability
We consider vulnerabilities leading to the compromise of data confidentiality, elevation of privilege, or integrity to be our highest priority concerns. Availability, in particular in areas relating to DoS and resource exhaustion, is also a serious security concern. The VMware Security Team takes all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities seriously and will investigate them in an urgent and expeditious manner.
We consider vulnerabilities leading to the compromise of data confidentiality, elevation of privilege, or integrity to be our highest priority concerns. Availability, in particular in areas relating to DoS and resource exhaustion, is also a serious security concern. The Security Team takes all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities seriously and will investigate them in an urgent and expeditious manner.
Note that we do not currently consider the default settings for Velero to be secure-by-default. It is necessary for operators to explicitly configure settings, role based access control, and other resource related features in Velero to provide a hardened Velero environment. We will not act on any security disclosure that relates to a lack of safe defaults. Over time, we will work towards improved safe-by-default configuration, taking into account backwards compatibility.
@@ -64,6 +64,9 @@ To fix CVEs and keep pace with Golang, Velero made changes as follows:
### Limitations/Known issues
* The backup's VolumeInfo metadata doesn't have the information updated in the async operations. This function may be supported in the v1.14 release.
### Note
* Velero introduces the informer cache which is enabled by default. The informer cache improves the restore performance but may cause higher memory consumption. Increase the memory limit of the Velero pod or disable the informer cache by specifying the `--disable-informer-cache` option when installing Velero if you get the OOM error.
### Deprecation announcement
* The generated k8s clients, informers, and listers are deprecated in the Velero v1.13 release. They are located in the Velero repository's pkg/generated directory. According to the n+2 support policy, the deprecated code is kept for two more releases. The pkg/generated directory should be deleted in the v1.15 release.
* After the backup VolumeInfo metadata file is added to the backup, Velero decides how to restore the PV resource according to the VolumeInfo content. To support the backup generated by the older version of Velero, the old logic is also kept. The support for the backup without the VolumeInfo metadata file will be kept for two releases. The support logic will be deleted in the v1.15 release.
#### The maintenance work for kopia/restic backup repositories is run in jobs
Since Velero started using kopia as the approach for filesystem-level backup/restore, we've noticed an issue: when Velero connects to the kopia backup repositories and performs maintenance, it sometimes consumes excessive memory, which can cause the Velero pod to get OOM killed. To mitigate this issue, the maintenance work is moved out of the Velero pod to a separate Kubernetes job, and the user is able to specify the resource requests in "velero install".
#### Volume Policies are extended to support more actions to handle volumes
In an earlier release, a flexible volume policy was introduced to skip certain volumes from a backup. In v1.14 we've enhanced this policy to allow the user to set how the volumes should be backed up. The user is able to set "fs-backup" or "snapshot" as the value of "action" in the policy, and Velero will back up the volumes accordingly. This enhancement allows the user to achieve fine-grained control like "opt-in/out" without having to update the target workload. For more details please refer to https://velero.io/docs/v1.14/resource-filtering/#supported-volumepolicy-actions
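As a hedged sketch of the policy payload (the capacity range and storage class name are placeholders), sending small volumes to fs-backup and everything on a given storage class to snapshot:
```yaml
version: v1
volumePolicies:
  - conditions:
      capacity: "0,10Gi"          # volumes up to 10Gi
    action:
      type: fs-backup
  - conditions:
      storageClass:
        - premium-rwo             # placeholder storage class name
    action:
      type: snapshot
```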
#### Node Selection for Data Movement Backup
In Velero the data movement flow relies on datamover pods, and these pods may take substantial resources and keep running for a long time. In v1.14, the user is able to create a configMap to define the eligible nodes on which the datamover pods are launched. For more details refer to https://velero.io/docs/v1.14/data-movement-backup-node-selection/
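A minimal sketch of such a configMap, assuming the default `node-agent-config` name and a `loadAffinity` node selector (the label key/value and the data key name are placeholders):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-agent-config        # default name assumed
  namespace: velero
data:
  node-agent-config.json: |
    {
      "loadAffinity": [
        {
          "nodeSelector": {
            "matchLabels": {
              "velero.io/datamover": "true"
            }
          }
        }
      ]
    }
```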
#### VolumeInfo metadata for restored volumes
In v1.13, we introduced volumeinfo metadata for backup to help the Velero CLI and downstream adopters understand how Velero handles each volume during backup. In v1.14, similar metadata is persisted for each restore. The Velero CLI is also updated to bring more info into the output of "velero restore describe".
#### "Finalizing" phase is introduced to restores
The "Finalizing" phase is added to the state transition flow to restore, which helps us fix several issues: The labels added to PVs will be restored after the data in the PV is restored via volumesnapshotter. The post restore hook will be executed after datamovement is finished.
#### Certificate-based authentication support for Azure
Besides the service principal with secret (password)-based authentication, Velero introduces new support for service principal with certificate-based authentication in v1.14.0. This approach enables you to adopt phishing-resistant authentication by using conditional access policies, which better protects Azure resources and is the recommended way by Azure.
### Runtime and dependencies
* Golang runtime: v1.22.2
* kopia: v0.17.0
### Limitations/Known issues
* For external BackupItemAction plugins that take snapshots for PVs, such as the vSphere plugin: if the plugin checks the value of the field "snapshotVolumes" in the backup spec as a criterion for snapshotting, the settings in the volume policy will not take effect. For example, if "snapshotVolumes" is set to False in the backup spec but a volume meets the condition in the volume policy for the "snapshot" action, the plugin will not take a snapshot of the volume because it does not check the settings in the volume policy. For more details please refer to #7818
### Breaking changes
* The CSI plugin has been merged into the Velero repo in the v1.14 release. It is installed by default as an internal plugin, and should not be installed via the "--plugins" parameter in the "velero install" command.
* The default resource requests and limitations for node agent are removed in v1.14, to make the node agent pods have the QoS class of "BestEffort", more details please refer to #7391
* There's a change in namespace filtering behavior during backup: in v1.14, when the includedNamespaces/excludedNamespaces fields are not set and the labelSelector/OrLabelSelectors are set in the backup spec, the backup will only include the namespaces which contain the resources that match the label selectors, while in previous releases all namespaces would be included in the backup with such settings. For more details refer to #7105
* Patching the PV in the "Finalizing" state may cause the restore to be in "PartiallyFailed" state when the PV is blocked in "Pending" state, while in the previous release the restore may end up being in "Complete" state. For more details refer to #7866
### All Changes
* Fix backup log to show error string, not index (#7805, @piny940)
* Modify the volume helper logic. (#7794, @blackpiglet)
* Add documentation for extension of volume policy feature (#7779, @shubham-pampattiwar)
* Surface errors when waiting for backupRepository and timeout occurs (#7762, @kaovilai)
* Add existingResourcePolicy restore CR validation to controller (#7757, @kaovilai)
* Fix condition matching in resource modifier when there are multiple rules (#7715, @27149chen)
* Bump up the version of KinD and k8s in github actions (#7702, @reasonerjt)
* Implementation for Extending VolumePolicies to support more actions (#7664, @shubham-pampattiwar)
* Migrate from `github.com/Azure/azure-storage-blob-go` to `github.com/Azure/azure-sdk-for-go/sdk/storage/azblob` (#7598, @mmorel-35)
* When Included/ExcludedNamespaces are omitted, and LabelSelector or OrLabelSelector is used, namespaces without selected items are excluded from backup. (#7697, @blackpiglet)
* Display CSI snapshot restores in restore describe (#7687, @reasonerjt)
* Use specific credential rather than the credential chain for Azure (#7680, @ywk253100)
* Modify hook docs for clarity on displaying hook execution results (#7679, @allenxu404)
* Wait for results of restore exec hook executions in Finalizing phase instead of InProgress phase (#7619, @allenxu404)
* migrating to `sdk/resourcemanager/**/arm**` from `services/**/mgmt/**` (#7596, @mmorel-35)
* Bump up to go1.22 (#7666, @reasonerjt)
* Fix issue #7648. Adjust the exposing logic to avoid exposing failure and snapshot leak when expose fails (#7662, @Lyndon-Li)
* Track and persist restore volume info (#7630, @reasonerjt)
* Check the existence of the namespaces provided in the "--include-namespaces" option (#7569, @ywk253100)
* Add the finalization phase to the restore workflow (#7377, @allenxu404)
* Upgrade the version of go plugin related libs/tools (#7373, @ywk253100)
* Check resource Group Version and Kind is available in cluster before attempting restore to prevent being stuck. (#7322, @kaovilai)
* Merge CSI plugin code into Velero. (#7609, @blackpiglet)
* Fix issue #7391, remove the default constraint for node-agent pods (#7488, @Lyndon-Li)
* Fix DataDownload fails during restore for empty PVC workload (#7521, @qiuming-best)
Data transfer activities for CSI Snapshot Data Movement are moved from node-agent pods to dedicated backupPods or restorePods. This brings many benefits:
- It avoids accessing volume data through the host path; host path access is privileged and may involve security escalations, which concern users.
- It enables users to control resource (i.e., CPU, memory) allocations in a granular manner, e.g., per backup/restore of a volume.
- It enhances resilience: a crash of one data movement activity won't affect others.
- It prevents unnecessary full backups caused by host path changes after workload pods restart.
For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/vgdp-micro-service/vgdp-micro-service.md.
#### Item Block concepts and ItemBlockAction (IBA) plugin
Item Block concepts are introduced for resource backups to help achieve multi-threaded backups. Specifically, correlated resources are categorized into the same item block, and item blocks can be processed concurrently in multiple threads.
The ItemBlockAction plugin is introduced to help Velero categorize resources into item blocks. At present, Velero provides built-in IBAs for pods and PVCs, and Velero also supports customized IBAs for any resources.
In v1.15, Velero doesn't support multi-threaded processing of item blocks, though item block concepts and IBA plugins are fully supported. Multi-thread support will be delivered in future releases.
For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/backup-performance-improvements.md.
#### Node selection for repository maintenance job
Repository maintenance jobs are resource-consuming tasks. Velero now allows you to configure the nodes to run repository maintenance jobs, so that you can run them on idle nodes or avoid running them on nodes hosting critical workloads.
To support the configuration, a new repository maintenance configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/repository-maintenance/.
#### Backup PVC read-only configuration
In 1.15, Velero allows you to configure the data mover backupPods to mount the backupPVCs read-only. In this way, the data mover expose process can be significantly accelerated for some storages (e.g., Ceph).
To support the configuration, a new backup PVC configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
#### Backup PVC storage class configuration
In 1.15, Velero allows you to configure the storage class used by the data mover backupPods. In this way, the provisioning of backupPVCs doesn't need to adhere to the same pattern as workload PVCs; e.g., a backupPVC only needs one replica, whereas a workload PVC may have multiple replicas.
To support the configuration, the same backup PVC configuration configMap is used.
For more information, check the document https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/.
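A hedged sketch covering both backupPVC settings, keyed by the source volume's storage class (all names are placeholders):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-agent-config        # default name assumed
  namespace: velero
data:
  node-agent-config.json: |
    {
      "backupPVC": {
        "workload-sc": {
          "storageClass": "backup-sc",
          "readOnly": true
        }
      }
    }
```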
#### Backup repository data cache configuration
The backup repository may need to cache data on the client side during various repository operations, i.e., read, write, maintenance, etc. The cache consumes the root file system space of the pod where the repository access happens.
In 1.15, Velero allows you to configure the total size of the cache per repository. In this way, if your pod doesn't have enough space in its root file system, the pod won't be evicted due to running out of ephemeral storage.
To support the configuration, a new backup repository configuration configMap is introduced.
For more information, check the document https://velero.io/docs/v1.15/backup-repository-configuration/.
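A minimal sketch of the backup repository configuration configMap, assuming a per-repository-type data key and a `cacheLimitMB` field as the size knob (both assumptions; the configMap name is a placeholder):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: backup-repository-config   # placeholder name
  namespace: velero
data:
  kopia: |                         # per-repository-type key, assumed
    {
      "cacheLimitMB": 2048
    }
```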
#### Performance improvements
In 1.15, several performance-related issues/enhancements are included, which bring significant performance improvements in specific scenarios:
- There was a memory leak in the Velero server after plugin calls; it is now fixed, see issue https://github.com/vmware-tanzu/velero/issues/7925
- The `client-burst/client-qps` parameters are automatically inherited by plugins, so that you can use the same Velero server parameters to accelerate plugin executions when a large number of API server calls happen, see issue https://github.com/vmware-tanzu/velero/issues/7806
- Maintenance of a Kopia repository takes huge memory in scenarios where a huge number of files have been backed up; Velero 1.15 includes the Kopia upstream enhancement to fix the problem, see issue https://github.com/vmware-tanzu/velero/issues/7510
### Runtime and dependencies
Golang runtime: v1.22.8
kopia: v0.17.0
### Limitations/Known issues
#### Read-only backup PVC may not work on SELinux environments
Due to an upstream Kubernetes issue, if a volume is mounted as read-only in SELinux environments, the read privilege is not granted to any user; as a result, the data mover backup will fail. On the other hand, the backupPVC must be mounted as read-only in order to accelerate the data mover expose process.
Therefore, a user option is added in the same backup PVC configuration configMap. Once the option is enabled, the backupPod container runs as a super privileged container with SELinux access control disabled. If you have concerns about this super privileged container, or you have configured [pod security admissions](https://kubernetes.io/docs/concepts/security/pod-security-admission/) and don't allow super privileged containers, you will not be able to use this read-only backupPVC feature and will lose the benefit of accelerating the data mover expose process.
### Breaking changes
#### Deprecation of Restic
The Restic path for fs-backup is in the deprecation process starting from 1.15. According to the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/v1.15/GOVERNANCE.md#deprecation-policy), in 1.15, if the Restic path is used, backups/restores of fs-backup are still created and succeed, but you will see warnings in the scenarios below:
- When `--uploader-type=restic` is used in Velero installation
- When Restic path is used to create backup/restore of fs-backup
#### node-agent configuration name is configurable
Previously, the node-agent configuration configMap was looked up by a fixed name. Now in 1.15, Velero allows you to customize the name of the configMap; on the other hand, the name must be specified via the node-agent server parameter `node-agent-configmap`.
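For illustration, the parameter would be passed on the node-agent server command line roughly as follows (the configMap name is a placeholder):
```yaml
# excerpt from the node-agent DaemonSet spec
containers:
  - name: node-agent
    args:
      - node-agent
      - server
      - --node-agent-configmap=my-node-agent-config
```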
#### Repository maintenance job configurations in Velero server parameter are moved to repository maintenance job configuration configMap
In 1.15, the Velero server parameters below for repository maintenance jobs are moved to the repository maintenance job configuration configMap. For backward compatibility, the same Velero server parameters are preserved as is, but the configMap is recommended, and the values in the configMap take precedence if they exist in both places:
```
--keep-latest-maintenance-jobs
--maintenance-job-cpu-request
--maintenance-job-mem-request
--maintenance-job-cpu-limit
--maintenance-job-mem-limit
```
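A hedged sketch of the configMap counterpart of those flags (the configMap name and the `global`/`podResources` field names are assumptions):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: repo-maintenance-job-config   # placeholder name
  namespace: velero
data:
  repo-maintenance-job-config.json: |
    {
      "global": {
        "keepLatestMaintenanceJobs": 3,
        "podResources": {
          "cpuRequest": "100m",
          "memRequest": "128Mi",
          "cpuLimit": "500m",
          "memLimit": "512Mi"
        }
      }
    }
```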
#### Changing PVC selected-node feature is deprecated
In 1.15, the [Changing PVC selected-node feature](https://velero.io/docs/v1.15/restore-reference/#changing-pvc-selected-node) enters the deprecation process and will be removed in future releases according to the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/v1.15/GOVERNANCE.md#deprecation-policy). Usage of this feature for any purpose is not recommended.
### All Changes
* add no-relabeling option to backupPVC configmap (#8288, @sseago)
* only set spec.volumes readonly if PVC is readonly for datamover (#8284, @sseago)
* Add labels to maintenance job pods (#8256, @shubham-pampattiwar)
* Add the Carvel package related resources to the restore priority list (#8228, @ywk253100)
* Reduces indirect imports for plugin/framework importers (#8208, @kaovilai)
* Add controller name to periodical_enqueue_source. The logger parameter now includes an additional field with the value of reflect.TypeOf(objList).String() and another field with the value of controllerName. (#8198, @kaovilai)
* Update Openshift SCC docs link (#8170, @shubham-pampattiwar)
* Modify E2E and perf test report generated directory (#8129, @blackpiglet)
* Add docs for backup pvc config support (#8119, @shubham-pampattiwar)
* Delete generated k8s client and informer. (#8114, @blackpiglet)
* Add support for backup PVC configuration (#8109, @shubham-pampattiwar)
* ItemBlock model and phase 1 (single-thread) workflow changes (#8102, @sseago)
* Fix issue #8032, make node-agent configMap name configurable (#8097, @Lyndon-Li)
* Fix issue #8072, add the warning messages for restic deprecation (#8096, @Lyndon-Li)
* Fix issue #7620, add backup repository configuration implementation and support cacheLimit configuration for Kopia repo (#8093, @Lyndon-Li)
* Patch dbr's status when error happens (#8086, @reasonerjt)
* According to design #7576, after node-agent restarts, if a DU/DD is in InProgress status, re-capture the data mover ms pod and continue the execution (#8085, @Lyndon-Li)
* Updates to IBM COS documentation to match current version (#8082, @gjanders)
* Data mover micro service DUCR/DDCR controller refactor according to design #7576 (#8074, @Lyndon-Li)
* add retries with timeout to existing patch calls that move a backup/restore from InProgress/Finalizing to a final status phase. (#8068, @kaovilai)
* Data mover micro service restore according to design #7576 (#8061, @Lyndon-Li)
In v1.16, Velero supports running in Windows clusters and backing up/restoring Windows workloads, either stateful or stateless:
* Hybrid build and all-in-one image: the build process is enhanced to build an all-in-one image for hybrid CPU architectures and hybrid platforms. For more information, check the design https://github.com/vmware-tanzu/velero/blob/main/design/multiple-arch-build-with-windows.md
* Deployment in Windows clusters: Velero node-agent, data mover pods and maintenance jobs now support running on both Linux and Windows nodes
* Data mover backup/restore of Windows workloads: the Velero built-in data mover supports Windows workloads throughout the full cycle, i.e., discovery, backup, restore, pre/post hooks, etc. It automatically identifies Windows workloads and schedules data mover pods to the right group of nodes
Check the epic issue https://github.com/vmware-tanzu/velero/issues/8289 for more information.
#### Parallel Item Block backup
v1.16 supports backing up item blocks in parallel. Specifically, during backup, correlated resources are grouped into item blocks, and the Velero backup engine creates a thread pool to back up the item blocks in parallel. This significantly improves the backup throughput, especially when there is a large scale of resources.
Pre/post hooks also belong to item blocks, so they will also run in parallel along with the item blocks.
Users are allowed to configure the parallelism through the `--item-block-worker-count` Velero server parameter. If not configured, the default parallelism is 1.
For more information, check issue https://github.com/vmware-tanzu/velero/issues/8334.
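For illustration, the knob lands on the server command line; a sketch of the Deployment args (the count is a placeholder):
```yaml
# excerpt from the velero Deployment spec
containers:
  - name: velero
    args:
      - server
      - --item-block-worker-count=4
```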
#### Data mover restore enhancement in scalability
In previous releases, for each volume in WaitForFirstConsumer mode, data mover restore is only allowed to happen on the node to which the volume is attached. This severely degrades the parallelism and the balance of node resource (CPU, memory, network bandwidth) consumption for data mover restore (https://github.com/vmware-tanzu/velero/issues/8044).
In v1.16, users are allowed to configure data mover restores to run and spread evenly across all nodes in the cluster. The configuration is done through a new flag `ignoreDelayBinding` in the node-agent configuration (https://github.com/vmware-tanzu/velero/issues/8242).
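A minimal sketch of the node-agent configuration with the flag enabled (the placement under a `restorePVC` key and the configMap/data key names are assumptions):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-agent-config
  namespace: velero
data:
  node-agent-config.json: |
    {
      "restorePVC": {
        "ignoreDelayBinding": true
      }
    }
```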
#### Data mover enhancements in observability
In 1.16, some observability enhancements are added:
* Output various statuses of intermediate objects for failures of data mover backup/restore (https://github.com/vmware-tanzu/velero/issues/8267)
* Output the errors when Velero fails to delete intermediate objects during clean up (https://github.com/vmware-tanzu/velero/issues/8125)
The outputs go to the same node-agent log and are enabled automatically.
#### CSI snapshot backup/restore enhancement in usability
In previous releases, an unnecessary VolumeSnapshotContent object was retained for each backup and synced to other clusters sharing the same backup storage location. During restore, the retained VolumeSnapshotContent was also restored unnecessarily.
In 1.16, the retained VolumeSnapshotContent is removed from the backup, so no unnecessary CSI objects are synced or restored.
For more information, check issue https://github.com/vmware-tanzu/velero/issues/8725.
#### Backup Repository Maintenance enhancement in resiliency and observability
In v1.16, some enhancements of backup repository maintenance are added to improve the observability and resiliency:
* A new backup repository maintenance history section, called `RecentMaintenance`, is added to the BackupRepository CR. Specifically, for each BackupRepository, recent maintenance runs are recorded, including start/completion time, completion status and error message. (https://github.com/vmware-tanzu/velero/issues/7810)
* Running maintenance jobs are now recaptured after Velero server restarts. (https://github.com/vmware-tanzu/velero/issues/7753)
* The maintenance job will not be launched for readOnly BackupStorageLocation. (https://github.com/vmware-tanzu/velero/issues/8238)
* The backup repository will not try to initialize a new repository for readOnly BackupStorageLocation. (https://github.com/vmware-tanzu/velero/issues/8091)
* Users are now allowed to configure the interval of effective maintenance as `normalGC`, `fastGC`, or `eagerGC`, through the `fullMaintenanceInterval` parameter in the backupRepository configuration. (https://github.com/vmware-tanzu/velero/issues/8364)
#### Volume Policy enhancement of filtering volumes by PVC labels
In v1.16, Volume Policy is extended to support filtering volumes by PVC labels. (https://github.com/vmware-tanzu/velero/issues/8256).
#### Resource Status restore per object
In v1.16, users are allowed to define whether to restore resource status per object through an annotation `velero.io/restore-status` set on the object. (https://github.com/vmware-tanzu/velero/issues/8204).
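As a hedged example, opting a single object into status restore (the kind and names are placeholders; consult the linked issue for the exact accepted values):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    velero.io/restore-status: "true"   # per-object opt-in to status restore
```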
#### Velero Restore Helper binary is merged into Velero image
In v1.16, the Velero binaries, i.e., velero, velero-helper and velero-restore-helper, are all included in the single Velero image. (https://github.com/vmware-tanzu/velero/issues/8484).
### Runtime and dependencies
Golang runtime: 1.23.7
kopia: 0.19.0
### Limitations/Known issues
#### Limitations of Windows support
* fs-backup is not supported for Windows workloads, so fs-backup runs only on Linux nodes for Linux workloads
* Backup/restore of NTFS extended attributes/advanced features are not supported, i.e., Security Descriptors, System/Hidden/ReadOnly attributes, Creation Time, NTFS Streams, etc.
### All Changes
* Add third party annotation support for maintenance job, so that the declared third party annotations could be added to the maintenance job pods (#8812, @Lyndon-Li)
* Fix issue #8803, use deterministic name to create backupRepository (#8808, @Lyndon-Li)
* Refactor restoreItem and related functions to differentiate the backup resource name and the restore target resource name. (#8797, @blackpiglet)
* ensure that PV is removed before VS is deleted (#8777, @ix-rzi)
* host_pods should not be mandatory to node-agent (#8774, @mpryc)
* Log doesn't show pv name, but displays %!s(MISSING) instead (#8771, @hu-keyu)
* Fix issue #8754, add third party annotation support for data mover (#8770, @Lyndon-Li)
* Add docs for volume policy with labels as a criteria (#8759, @shubham-pampattiwar)
* Move pvc annotation removal from CSI RIA to regular PVC RIA (#8755, @sseago)
* Add doc for maintenance history (#8747, @Lyndon-Li)
* Fix issue #8733, add doc for restorePVC (#8737, @Lyndon-Li)
* Fix issue #8426, add doc for Windows support (#8736, @Lyndon-Li)
* Return directly if no pod volume backups are tracked (#8728, @ywk253100)
* Fix issue #8706, for immediate volumes, there is no selected-node annotation on PVC, so deduce the attached node from VolumeAttachment CRs (#8715, @Lyndon-Li)
* Add labels as a criteria for volume policy (#8713, @shubham-pampattiwar)
* Copy SecurityContext from Containers[0] if present for PVR (#8712, @sseago)
* Support pushing images to an insecure registry (#8703, @ywk253100)
* Modify golangci configuration to make it work. (#8695, @blackpiglet)
* Run backup post hooks inside ItemBlock synchronously (#8694, @ywk253100)
* Add docs for object level status restore (#8693, @shubham-pampattiwar)
* Clean artifacts generated during CSI B/R. (#8684, @blackpiglet)
* Don't run maintenance on the ReadOnly BackupRepositories. (#8681, @blackpiglet)
* Fix issue #8418, add Windows toleration to data mover pods (#8606, @Lyndon-Li)
* Check the PVB status via podvolume Backupper rather than calling API server to avoid API server issue (#8603, @ywk253100)
* Fix issue #8067, add tmp folder (/tmp for linux, C:\Windows\Temp for Windows) as an alternative of udmrepo's config file location (#8602, @Lyndon-Li)
* Data mover restore for Windows (#8594, @Lyndon-Li)
* Skip patching the PV in finalization for failed operation (#8591, @reasonerjt)
* Fix issue #8579, set event burst to block event broadcaster from filtering events (#8590, @Lyndon-Li)
* Configurable Kopia Maintenance Interval. backup-repository-configmap adds an option for a configurable `fullMaintenanceInterval`, where fastGC (12 hours) and eagerGC (6 hours) allow for faster removal of deleted velero backups from the kopia repo. (#8581, @kaovilai)
* Fix issue #7753, recall repo maintenance history on Velero server restart (#8580, @Lyndon-Li)
* Clear validation errors when schedule is valid (#8575, @ywk253100)
* Merge restore helper image into Velero server image (#8574, @ywk253100)
* Don't include excluded items in ItemBlocks (#8572, @sseago)
* fs uploader and block uploader support Windows nodes (#8569, @Lyndon-Li)
* Fix issue #8418, support data mover backup for Windows nodes (#8555, @Lyndon-Li)
* Fix issue #8044, allow users to ignore delay binding the restorePVC of data mover when it is in WaitForFirstConsumer mode (#8550, @Lyndon-Li)
* Fix issue #8539, validate uploader types when o.CRDsOnly is set to false only since CRD installation doesn't rely on uploader types (#8538, @Lyndon-Li)
* Fix issue #7810, add maintenance history for backupRepository CRs (#8532, @Lyndon-Li)
* Make fs-backup work on linux nodes with the new Velero deployment and disable fs-backup if the source/target pod is running in non-linux node (#8424) (#8518, @Lyndon-Li)
* Fix issue: backup schedule pause/unpause doesn't work (#8512, @ywk253100)
* Fix backup post hook issue #8159 (caused by #7571): always execute backup post hooks after PVBs are handled (#8509, @ywk253100)
* Fix issue #8267, enhance the error message when expose fails (#8508, @Lyndon-Li)
* Fix issue #8416, #8417, deploy Velero server and node-agent in linux/Windows hybrid env (#8504, @Lyndon-Li)
* Design to add label selector as a criteria for volume policy (#8503, @shubham-pampattiwar)
* Related to issue #8485, move the acceptedByNode and acceptedTimestamp to Status of DU/DD CRD (#8498, @Lyndon-Li)
* Add SecurityContext to restore-helper (#8491, @reasonerjt)
* Fix issue #8433, add third party labels to data mover pods when the same labels exist in node-agent pods (#8487, @Lyndon-Li)
* Fix issue #8485, add an accepted time so as to count the prepare timeout (#8486, @Lyndon-Li)
* Fix issue #8125, log diagnostic info for data mover exposers when expose timeout (#8482, @Lyndon-Li)
* Fix issue #8415, implement multi-arch build and Windows build (#8476, @Lyndon-Li)
* Pin kopia to 0.18.2 (#8472, @Lyndon-Li)
* Add nil check for updating DataUpload VolumeInfo in finalizing phase (#8471, @blackpiglet)
* Allowing Object-Level Resource Status Restore (#8464, @shubham-pampattiwar)
* For issue #8429. Add the design for multi-arch build and windows build (#8459, @Lyndon-Li)
* Upgrade k8s.io/* go.mod dependencies to v0.31.3 and implement proper logger configuration for both client-go and controller-runtime libraries. This change ensures that logging format and level settings are properly applied throughout the codebase. The update improves logging consistency and control across the Velero system. (#8450, @kaovilai)
* Add Design for Allowing Object-Level Resource Status Restore (#8403, @shubham-pampattiwar)
* Fix issue #8391, check ErrCancelled from suffix of data mover pod's termination message (#8396, @Lyndon-Li)
* Fix issue #8394, don't call closeDataPath in VGDP callbacks, otherwise, the VGDP cleanup will hang (#8395, @Lyndon-Li)
* Adding support in velero Resource Policies for filtering PVs based on additional VolumeAttributes properties under CSI PVs (#8383, @mayankagg9722)
* Add --item-block-worker-count flag to velero install and server (#8380, @sseago)
* Make BackedUpItems thread safe (#8366, @sseago)
* Include --annotations flag in backup and restore create commands (#8354, @alromeros)
* Use aggregated discovery API to discovery API groups and resources (#8353, @ywk253100)
* Copy "envFrom" from Velero server when creating maintenance jobs (#8343, @evhan)
* Set hinting region to use for GetBucketRegion() in pkg/repository/config/aws.go (#8297, @kaovilai)
* Bump up version of client-go and controller-runtime (#8275, @ywk253100)
* fix(pkg/repository/maintenance): don't panic when there's no container statuses (#8271, @mcluseau)
* Add Backup warning for inclusion of NS managed by ArgoCD (#8257, @shubham-pampattiwar)
* Added tracking for deleted namespace status check in restore flow. (#8233, @sangitaray2021)
In v1.17, Velero fs-backup is modernized to the micro-service architecture, which brings the benefits below:
- Many features that were absent from fs-backup are now available, i.e., load concurrency control, cancel, resume on restart, etc.
- fs-backup is more robust: a running backup/restore can survive a node-agent restart, and resource allocation is done in a more granular manner, so the failure of one backup/restore won't impact others.
- The resource usage of node-agent is steady; in particular, the node-agent pods won't request huge memory and hold it for a long time.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/vgdp-micro-service-for-fs-backup/vgdp-micro-service-for-fs-backup.md for more details.
#### fs-backup support Windows cluster
In v1.17, Velero fs-backup supports backing up/restoring Windows workloads. By leveraging the new micro-service architecture for fs-backup, data mover pods can run on Windows nodes and back up/restore Windows volumes. Together with CSI snapshot data movement for Windows, which was delivered in 1.16, Velero now supports Windows workload backup/restore in full scenarios.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/vgdp-micro-service-for-fs-backup/vgdp-micro-service-for-fs-backup.md for more details.
#### Volume group snapshot support
In v1.17, Velero supports [volume group snapshots](https://kubernetes.io/blog/2024/12/18/kubernetes-1-32-volume-group-snapshot-beta/), a beta feature in upstream Kubernetes, for both CSI snapshot backup and CSI snapshot data movement. This allows a snapshot to be taken from multiple volumes at the same point in time to achieve write-order consistency, which helps achieve better data consistency when multiple correlated volumes are backed up.
Check the document https://velero.io/docs/main/volume-group-snapshots/ for more details.
#### Priority class support
In v1.17, [Kubernetes priority class](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) is supported for all modules across Velero. Specifically, users are allowed to configure the priority class for the Velero server, node-agent, data mover pods, and backup repository maintenance jobs separately.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/priority-class-name-support_design.md for more details.
#### Scalability and Resiliency improvements of data movers
##### Reduce excessive number of data mover pods in Pending state
In v1.17, Velero allows users to set a `PrepareQueueLength` in the node-agent configuration; data mover pods and volumes beyond this number won't be created until data path quota is available, so that an excessive amount of cluster resources won't be taken unnecessarily, which is particularly helpful for large-scale environments. This improvement applies to all kinds of data movements, including fs-backup and CSI snapshot data movement.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/node-agent-load-soothing.md for more details.
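A sketch of what the setting might look like in the node-agent configMap (the field casing and top-level placement are assumptions based on the name above):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-agent-config
  namespace: velero
data:
  node-agent-config.json: |
    {
      "prepareQueueLength": 10
    }
```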
##### Enhancement on node-agent restart handling for data movements
In v1.17, data movements in all phases can survive a node-agent restart and resume themselves; when a data movement gets orphaned in special cases, e.g., a cluster node is absent, it can also be canceled appropriately after the restart. This improvement applies to all kinds of data movements, including fs-backup and CSI snapshot data movement.
Check issue https://github.com/vmware-tanzu/velero/issues/8534 for more details.
##### CSI snapshot data movement restore node-selection and node-selection by storage class
In v1.17, CSI snapshot data movement restore acquires the same node-selection capability as backup; that is, users can now specify which nodes can/cannot run data mover pods for both backup and restore. Users are also allowed to configure the node-selection per storage class, which is particularly helpful in environments where a storage class is not usable by all cluster nodes.
Check issue https://github.com/vmware-tanzu/velero/issues/8186 and https://github.com/vmware-tanzu/velero/issues/8223 for more details.
#### Include/exclude policy support for resource policy
In v1.17, Velero resource policy supports `includeExcludePolicy` besides the existing `volumePolicy`. This allows users to set include/exclude filters for resources in a resource policy configmap, so that these filters are reusable among multiple backups.
Check the document https://velero.io/docs/main/resource-filtering/#creating-resource-policies:~:text=resources%3D%22*%22-,Resource%20policies,-Velero%20provides%20resource for more details.
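A hedged sketch of a resource policy payload combining both sections (the filter field names inside `includeExcludePolicy` are assumptions modeled on the backup spec's include/exclude filters):
```yaml
version: v1
includeExcludePolicy:
  includedNamespaces:
    - app-ns            # placeholder namespace
  excludedResources:
    - events
volumePolicies:
  - conditions:
      capacity: "0,10Gi"
    action:
      type: skip
```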
### Runtime and dependencies
Golang runtime: 1.24.6
kopia: 0.21.1
### Limitations/Known issues
### Breaking changes
#### Deprecation of Restic
According to the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md#deprecation-policy), fs-backup backups under the Restic path are removed in v1.17, so `--uploader-type=restic` is no longer a valid installation configuration. This means you cannot create new backups under the Restic path, but you can still restore from previous Restic-path backups until v1.19.
#### Repository maintenance job configurations are removed from Velero server parameter
Since the repository maintenance job configurations have moved to the repository maintenance job ConfigMap, the following Velero server parameters are removed in v1.17 (a sketch of the replacement ConfigMap follows the list):
- --keep-latest-maintenance-jobs
- --maintenance-job-cpu-request
- --maintenance-job-mem-request
- --maintenance-job-cpu-limit
- --maintenance-job-mem-limit
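A minimal sketch of the replacement ConfigMap, assuming a `global` scope with the key names shown (verify the exact schema against the repository maintenance documentation):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: repo-maintenance-job-config
  namespace: velero
data:
  repo-maintenance-job-config.json: |
    {
      "global": {
        "keepLatestMaintenanceJobs": 3,
        "podResources": {
          "cpuRequest": "100m",
          "cpuLimit": "200m",
          "memoryRequest": "100Mi",
          "memoryLimit": "200Mi"
        }
      }
    }
```

Pass the ConfigMap name to the Velero server via the `--repo-maintenance-job-configmap` flag.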
### All Changes
* Add ConfigMap parameters validation for install CLI and server start. (#9200, @blackpiglet)
* Add priorityclasses to high priority restore list (#9175, @kaovilai)
* Introduced context-based logger for backend implementations (Azure, GCS, S3, and Filesystem) (#9168, @priyansh17)
* Fix issue #9140, add os=windows:NoSchedule toleration for Windows pods (#9165, @Lyndon-Li)
* Remove the repository maintenance job parameters from velero server. (#9147, @blackpiglet)
* Add include/exclude policy to resources policy (#9145, @reasonerjt)
* Add ConfigMap support for keepLatestMaintenanceJobs with CLI parameter fallback (#9135, @shubham-pampattiwar)
* Fix the dd and du's node affinity issue. (#9130, @blackpiglet)
* Remove the WaitUntilVSCHandleIsReady from vs BIA. (#9124, @blackpiglet)
* Add comprehensive Volume Group Snapshots documentation with workflow diagrams and examples (#9123, @shubham-pampattiwar)
* Update CSI Snapshot Data Movement doc for issue #8534, #8185 (#9113, @Lyndon-Li)
* Fix issue #8986, refactor fs-backup doc after VGDP Micro Service for fs-backup (#9112, @Lyndon-Li)
* Return error on timeout when checking server version (#9111, @ywk253100)
* Update "Default Volumes to Fs Backup" to "File System Backup (Default)" (#9105, @shubham-pampattiwar)
* Fix issue #9077, don't block backup deletion on list VS error (#9100, @Lyndon-Li)
* Bump up Kopia to v0.21.1 (#9098, @Lyndon-Li)
* Add imagePullSecrets inheritance for VGDP pod and maintenance job. (#9096, @blackpiglet)
* Avoid checking the VS and VSC status in the backup finalizing phase. (#9092, @blackpiglet)
* Fix issue #9053, Always remove selected-node annotation during PVC restore when no node mapping exists. Breaking change: Previously, the annotation was preserved if the node existed. (#9076, @Lyndon-Li)
* Enable parameterized kubelet mount path during node-agent installation (#9074, @longxiucai)
* Fix issue #8857, support third party tolerations for data mover pods (#9072, @Lyndon-Li)
* Fix issue #8813, remove restic from the valid uploader type (#9069, @Lyndon-Li)
* Fix issue #8185, allow users to disable pod volume host path mount for node-agent (#9068, @Lyndon-Li)
* Fix #8344, add the design for a mechanism to soothe creation of data mover pods for DataUpload, DataDownload, PodVolumeBackup and PodVolumeRestore (#9067, @Lyndon-Li)
* Fix #8344, add a mechanism to soothe creation of data mover pods for DataUpload, DataDownload, PodVolumeBackup and PodVolumeRestore (#9064, @Lyndon-Li)
* Add Gauge metric for BSL availability (#9059, @reasonerjt)
* Fix missing defaultVolumesToFsBackup flag output in Velero describe backup cmd (#9056, @shubham-pampattiwar)
* Allow for proper tracking of multiple hooks per container (#9048, @sseago)
* Make the backup repository controller not invalidate the BSL on restart (#9046, @blackpiglet)
* Removed username/password credential handling from newConfigCredential as azidentity.UsernamePasswordCredentialOptions is reported as deprecated. (#9041, @priyansh17)
* Remove dependency with VolumeSnapshotClass in DataUpload. (#9040, @blackpiglet)
* Fix issue #8961, cancel PVB/PVR on Velero server restart (#9031, @Lyndon-Li)
* fix: update mc command in minio-deployment example (#8982, @vishal-chdhry)
* Fix issue #8957, add design for VGDP MS for fs-backup (#8979, @Lyndon-Li)
* Add BSL status check for backup/restore operations. (#8976, @blackpiglet)
* Mark BackupRepository not ready when BSL changed (#8975, @ywk253100)
* Add support for [distributed snapshotting](https://github.com/kubernetes-csi/external-snapshotter/tree/4cedb3f45790ac593ebfa3324c490abedf739477?tab=readme-ov-file#distributed-snapshotting) (#8969, @flx5)
* Fix issue #8534, refactor dm controllers to tolerate cancel request in more cases, e.g., node restart, node drain (#8952, @Lyndon-Li)
* The backup and restore VGDP affinity enhancement implementation. (#8949, @blackpiglet)
* Remove CSI VS and VSC metadata from backup. (#8946, @blackpiglet)
* Extend PVCAction itemblock plugin to support grouping PVCs under VGS label key (#8944, @shubham-pampattiwar)
* Copy security context from origin pod (#8943, @farodin91)
* Add support for configuring VGS label key (#8938, @shubham-pampattiwar)
* Add VolumeSnapshotContent into the RIA and the mustHave resource list. (#8924, @blackpiglet)
* Mounted cloud credentials should not be world-readable (#8919, @sseago)
* Warn for not found error in patching managed fields (#8902, @sseago)
### CRD schema changes
v1.17 also updates the Velero CRD schemas. Across all CRDs, field descriptions are reflowed into block (`|-`) style; the substantive per-CRD changes are listed below.
#### BackupRepository
- New `spec.repositoryConfig` (nullable map of strings) for repository-specific configuration fields.
- `spec.resticIdentifier` is now documented as used only when `RepositoryType` is `restic`.
- `status.lastMaintenanceTime` now records the last time repository maintenance succeeded, rather than the last time it ran.
- New `status.recentMaintenance` list recording recent maintenance runs, each with `startTimestamp`, `completeTimestamp`, `result` (`Succeeded`/`Failed`), and `message`.
#### Backup
- Descriptions for `csiSnapshotTimeout`, `datamover`, `defaultVolumesToFsBackup`, the deprecated `defaultVolumesToRestic`, the cluster-/namespace-scoped resource filters, and the backup hook fields are reflowed.
- Label selector schemas gain `x-kubernetes-list-type: atomic` on `matchExpressions` and `values`.
#### BackupStorageLocation
- The credential Secret key selector's `name` gains `default: ""` and a description noting the field is effectively required despite being allowed to be empty for backwards compatibility.
- `spec.objectStorage.caCert` is deprecated in favor of the new `spec.objectStorage.caCertRef`, a reference to a Secret, in the same namespace as the BackupStorageLocation, that contains the CA certificate bundle used when verifying TLS connections to the provider.
- The deprecation notes on `status.accessMode` and `status.lastSyncedRevision` (both slated for removal in v2.0) are reflowed.
#### DeleteBackupRequest
- Description reflows only; no schema changes.
#### DownloadRequest
- `spec.target.kind` gains a new enum value, `RestoreVolumeInfo`, alongside the existing download targets such as `CSIBackupVolumeSnapshots`, `CSIBackupVolumeSnapshotContents`, and `BackupVolumeInfos`.
#### PodVolumeBackup
- Printer columns are reworked: the `.status.startTimestamp` column is renamed from `Created` to `Started`, `Node` and uploader columns are adjusted, and an `Incremental Bytes` column (`.status.incrementalBytes`, priority 10) is added.
- New `spec.cancel` (boolean) requests cancellation of a PodVolumeBackup while it is in the `InProgress` phase.
- New `status.acceptedTimestamp` (when the pod volume backup is accepted to be prepared) and `status.incrementalBytes` (bytes new or changed since the last backup).
- The phase enum gains `Accepted`, `Prepared`, `Canceling`, and `Canceled`.
#### PodVolumeRestore
- Printer columns are reworked to `Namespace`, `Pod`, `Started`, `Bytes Done`, `Total Bytes`, `Storage Location`, `Age`, `Node`, `Uploader Type`, `Volume`, and `Status`, replacing the previous `Status`/`TotalBytes`/`BytesDone`/`Age` set.
- New `spec.cancel` (boolean) requests cancellation of a PodVolumeRestore while it is in the `InProgress` phase, and new `spec.snapshotSize` (int64) records the logical size of the snapshot in bytes.
- New `status.acceptedTimestamp` and `status.node` (the node where the restore is processed).
- The phase enum gains `Accepted`, `Prepared`, `Canceling`, and `Canceled`.
description:Restore is a Velero resource that represents the application
of resources from a Velero backup to a target Kubernetes cluster.
description:|-
Restore is a Velero resource that represents the application of
resources from a Velero backup to a target Kubernetes cluster.
properties:
apiVersion:
description:'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
description:|-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type:string
kind:
description:'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
description:|-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type:string
metadata:
type:object
@@ -36,19 +42,22 @@ spec:
description:RestoreSpec defines the specification for a Velero restore.
properties:
backupName:
description:BackupName is the unique name of the Velero backup to
restore from.
description:|-
BackupName is the unique name of the Velero backup to restore
from.
type:string
excludedNamespaces:
description:ExcludedNamespaces contains a list of namespaces that
are not included in the restore.
description:|-
ExcludedNamespaces contains a list of namespaces that are not
included in the restore.
items:
type:string
nullable:true
type:array
excludedResources:
description:ExcludedResources is a slice of resource names that are
not included in the restore.
description:|-
ExcludedResources is a slice of resource names that are not
included in the restore.
items:
type:string
nullable:true
@@ -64,9 +73,9 @@ spec:
properties:
resources:
items:
description:RestoreResourceHookSpec defines one or more RestoreResrouceHooks
that should be executed based on the rules defined for namespaces,
resources, and label selector.
description:|-
RestoreResourceHookSpec defines one or more RestoreResrouceHooks that should be executed based on
the rules defined for namespaces, resources, and label selector.
properties:
excludedNamespaces:
description:ExcludedNamespaces specifies the namespaces
@@ -83,17 +92,17 @@ spec:
nullable:true
type:array
includedNamespaces:
description:IncludedNamespaces specifies the namespaces
to which this hook spec applies. If empty, it applies
description:|-
IncludedNamespaces specifies the namespaces to which this hook spec applies. If empty, it applies
to all namespaces.
items:
type:string
nullable:true
type:array
includedResources:
description:IncludedResources specifies the resources to
which this hook spec applies. If empty, it applies to
all resources.
description:|-
IncludedResources specifies the resources to which this hook spec applies. If empty, it applies
to all resources.
items:
type:string
nullable:true
@@ -107,8 +116,8 @@ spec:
description:matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description:A label selector requirement is a selector
that contains values, a key, and an operator that
description:|-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
@@ -116,33 +125,33 @@ spec:
applies to.
type:string
operator:
description:operator represents a key's relationship
to a set of values. Valid operators are In,
NotIn, Exists and DoesNotExist.
description:|-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type:string
values:
description:values is an array of string values.
If the operator is In or NotIn, the values array
must be non-empty. If the operator is Exists
or DoesNotExist, the values array must be empty.
This array is replaced during a strategic merge
patch.
description:|-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type:string
type:array
x-kubernetes-list-type:atomic
required:
- key
- operator
type:object
type:array
x-kubernetes-list-type:atomic
matchLabels:
additionalProperties:
type:string
description:matchLabels is a map of {key,value} pairs.
A single {key,value} in the matchLabels map is equivalent
to an element of matchExpressions, whose key field
is "key", the operator is "In", and the values array
contains only "value". The requirements are ANDed.
description:|-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type:object
type:object
x-kubernetes-map-type:atomic
@@ -168,15 +177,14 @@ spec:
minItems:1
type:array
container:
description:Container is the container in the
pod where the command should be executed. If
not specified, the pod's first container is
used.
description:|-
Container is the container in the pod where the command should be executed. If not specified,
the pod's first container is used.
type:string
execTimeout:
description:ExecTimeout defines the maximum amount
of time Velero should wait for the hook to complete
before considering the execution a failure.
description:|-
ExecTimeout defines the maximum amount of time Velero should wait for the hook to complete before
considering the execution a failure.
type:string
onError:
description:OnError specifies how Velero should
@@ -193,9 +201,9 @@ spec:
nullable:true
type:boolean
waitTimeout:
description:WaitTimeout defines the maximum amount
of time Velero should wait for the container
to be Ready before attempting to run the command.
description:|-
WaitTimeout defines the maximum amount of time Velero should wait for the container to be Ready
should be included for consideration in the restore. If null, defaults
to true.
nullable:true
type:boolean
includedNamespaces:
description:IncludedNamespaces is a slice of namespace names to include
objects from. If empty, all namespaces are included.
description:|-
IncludedNamespaces is a slice of namespace names to include objects
from. If empty, all namespaces are included.
items:
type:string
nullable:true
type:array
includedResources:
description:IncludedResources is a slice of resource names to include
description:|-
IncludedResources is a slice of resource names to include
in the restore. If empty, all resources in the backup are included.
items:
type:string
nullable:true
type:array
itemOperationTimeout:
description:ItemOperationTimeout specifies the time used to wait
for RestoreItemAction operations The default value is 1 hour.
description:|-
ItemOperationTimeout specifies the time used to wait for RestoreItemAction operations
The default value is 4 hour.
type:string
labelSelector:
description:LabelSelector is a metav1.LabelSelector to filter with
when restoring individual objects from the backup. If empty or nil,
all objects are included. Optional.
description:|-
LabelSelector is a metav1.LabelSelector to filter with
when restoring individual objects from the backup. If empty
or nil, all objects are included. Optional.
nullable:true
properties:
matchExpressions:
description:matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description:A label selector requirement is a selector that
contains values, a key, and an operator that relates the key
and values.
description:|-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description:key is the label key that the selector applies
to.
type:string
operator:
description:operator represents a key's relationship to
a set of values. Valid operators are In, NotIn, Exists
and DoesNotExist.
description:|-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type:string
values:
description:values is an array of string values. If the
operator is In or NotIn, the values array must be non-empty.
If the operator is Exists or DoesNotExist, the values
array must be empty. This array is replaced during a strategic
description:|-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type:string
type:array
x-kubernetes-list-type:atomic
required:
- key
- operator
type:object
type:array
x-kubernetes-list-type:atomic
matchLabels:
additionalProperties:
type:string
description:matchLabels is a map of {key,value} pairs. A single
{key,value} in the matchLabels map is equivalent to an element
of matchExpressions, whose key field is "key", the operator
is "In", and the values array contains only "value". The requirements
are ANDed.
description:|-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type:object
type:object
x-kubernetes-map-type:atomic
namespaceMapping:
additionalProperties:
type:string
description:NamespaceMapping is a map of source namespace names to
target namespace names to restore into. Any source namespaces not
included in the map will be restored into namespaces of the same
name.
description:|-
NamespaceMapping is a map of source namespace names
to target namespace names to restore into. Any source
namespaces not included in the map will be restored into
namespaces of the same name.
type:object
orLabelSelectors:
description:|-
OrLabelSelectors is a list of metav1.LabelSelector to filter with
when restoring individual objects from the backup. If multiple are provided,
they will be joined by the OR operator. LabelSelector and
OrLabelSelectors cannot co-exist in a restore request; only one of them
can be used.
items:
description:|-
A label selector is a label query over a set of resources. The result of matchLabels and
matchExpressions are ANDed. An empty label selector matches all objects. A null
label selector matches no objects.
properties:
matchExpressions:
description:matchExpressions is a list of label selector requirements.
The requirements are ANDed.
items:
description:|-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description:key is the label key that the selector applies
to.
type:string
operator:
description:|-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type:string
values:
description:|-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type:string
type:array
x-kubernetes-list-type:atomic
required:
- key
- operator
type:object
type:array
x-kubernetes-list-type:atomic
matchLabels:
additionalProperties:
type:string
description:|-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type:object
type:object
x-kubernetes-map-type:atomic
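For orientation, here is a minimal sketch of a Restore manifest exercising the selector and namespace-mapping fields described above. The backup name, labels, and namespaces are hypothetical, and since labelSelector and orLabelSelectors cannot co-exist, only one appears:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-frontend            # hypothetical restore name
  namespace: velero
spec:
  backupName: nightly-backup        # hypothetical backup to restore from
  labelSelector:                    # must not be combined with orLabelSelectors
    matchLabels:
      app: frontend
    matchExpressions:
      - key: tier
        operator: In                # valid operators: In, NotIn, Exists, DoesNotExist
        values: ["web", "cache"]
  namespaceMapping:
    prod: prod-restored             # objects from "prod" are restored into "prod-restored"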
@@ -371,10 +388,10 @@ spec:
nullable:true
properties:
apiGroup:
description:|-
APIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in the core API group.
For any other third-party types, APIGroup is required.
type:string
kind:
description:Kind is the type of resource being referenced
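The apiGroup/kind fragment above is a standard TypedLocalObjectReference. Assuming it backs the Restore spec's resourceModifier field (the parent key is elided by the diff hunk), a reference to a ConfigMap of resource modifier rules would look roughly like this; the ConfigMap name is hypothetical:

spec:
  resourceModifier:
    kind: ConfigMap                 # core API group, so apiGroup is omitted
    name: restore-resource-rules    # hypothetical ConfigMap holding modifier rules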
@@ -388,13 +405,15 @@ spec:
type:object
x-kubernetes-map-type:atomic
restorePVs:
description:|-
RestorePVs specifies whether to restore all included
PVs from snapshot
nullable:true
type:boolean
restoreStatus:
description:|-
RestoreStatus specifies which resources we should restore the status
field. If nil, no objects are included. Optional.
nullable:true
properties:
excludedResources:
@@ -405,46 +424,50 @@ spec:
nullable:true
type:array
includedResources:
description:|-
IncludedResources specifies the resources whose status will be restored.
If empty, it applies to all resources.
items:
type:string
nullable:true
type:array
type:object
scheduleName:
description:|-
ScheduleName is the unique name of the Velero schedule to restore
from. If specified, and BackupName is empty, Velero will restore
from the most recent successful backup created from this schedule.
type:string
uploaderConfig:
description:UploaderConfig specifies the configuration for the restore.
nullable:true
properties:
parallelFilesDownload:
description:ParallelFilesDownload is the maximum number of files that
are downloaded concurrently during the restore.
type:integer
writeSparseFiles:
description:WriteSparseFiles is a flag to indicate whether to write
files sparsely or not.
nullable:true
type:boolean
type:object
required:
- backupName
type:object
status:
description:RestoreStatus captures the current status of a Velero restore
properties:
completionTimestamp:
description:|-
CompletionTimestamp records the time the restore operation was completed.
Completion time is recorded even on failed restores.
The server's time is used for CompletionTimestamps
format:date-time
nullable:true
type:string
errors:
description:|-
Errors is a count of all error messages that were generated during
execution of the restore. The actual errors are stored in object storage.
type:integer
failureReason:
description:FailureReason is an error that caused the entire restore
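Pulling the remaining spec fields together, a schedule-driven restore that also tunes the uploader might be sketched as follows (the schedule and resource names are hypothetical):

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-from-schedule
  namespace: velero
spec:
  backupName: ""                    # empty: resolved via scheduleName instead
  scheduleName: daily-backup        # uses the schedule's most recent successful backup
  restorePVs: true                  # restore all included PVs from snapshot
  restoreStatus:
    includedResources:
      - customresources.example.com # hypothetical: restore status for these only
  uploaderConfig:
    parallelFilesDownload: 10       # concurrent file downloads
    writeSparseFiles: true          # write restored files sparsely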
@@ -456,10 +479,10 @@ spec:
nullable:true
properties:
hooksAttempted:
description:|-
HooksAttempted is the total number of attempted hooks.
Specifically, HooksAttempted represents the number of hooks that failed to execute
and the number of hooks that executed successfully.
type:integer
hooksFailed:
description:HooksFailed is the total number of hooks which ended
@@ -477,11 +500,14 @@ spec:
- Completed
- PartiallyFailed
- Failed
- Finalizing
- FinalizingPartiallyFailed
type:string
progress:
description:|-
Progress contains information about the restore's execution progress. Note
that this information is best-effort only -- if Velero fails to update it
during a restore for any reason, it may be inaccurate/stale.
nullable:true
properties:
itemsRestored:
@@ -489,42 +515,46 @@ spec:
been restored so far
type:integer
totalItems:
description:|-
TotalItems is the total number of items to be restored. This number may change
throughout the execution of the restore due to plugins that return additional related
items to restore.
type:integer
type:object
restoreItemOperationsAttempted:
description:|-
RestoreItemOperationsAttempted is the total number of attempted
async RestoreItemAction operations for this restore.
type:integer
restoreItemOperationsCompleted:
description:|-
RestoreItemOperationsCompleted is the total number of successfully completed
async RestoreItemAction operations for this restore.
type:integer
restoreItemOperationsFailed:
description:|-
RestoreItemOperationsFailed is the total number of async
RestoreItemAction operations for this restore which ended with an error.
type:integer
startTimestamp:
description:|-
StartTimestamp records the time the restore operation was started.
The server's time is used for StartTimestamps
format:date-time
nullable:true
type:string
validationErrors:
description:|-
ValidationErrors is a slice of all validation errors (if
applicable)
items:
type:string
nullable:true
type:array
warnings:
description:|-
Warnings is a count of all warning messages that were generated during
execution of the restore. The actual warnings are stored in object storage.
type:integer
type:object
type:object
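Once the restore runs, the controller populates the status block described above. A successfully completed restore might report values like these; all numbers are illustrative, and the hookStatus parent key for the hook counters is assumed, since the diff elides it:

status:
  phase: Completed
  startTimestamp: "2025-01-15T02:00:05Z"
  completionTimestamp: "2025-01-15T02:03:41Z"
  progress:
    itemsRestored: 128
    totalItems: 128                 # may grow if plugins return additional related items
  restoreItemOperationsAttempted: 3
  restoreItemOperationsCompleted: 3
  restoreItemOperationsFailed: 0
  hookStatus:                       # assumed parent key for the hook counters
    hooksAttempted: 4
    hooksFailed: 0
  warnings: 2                       # counts only; full messages live in object storage
  errors: 0
  validationErrors: []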