Mirror of https://github.com/vmware-tanzu/velero.git, synced 2026-03-22 17:45:01 +00:00
Compare: v1.18.0...copilot/de (4 commits: cbc13366c6, 664b25cca1, 3504943019, acd4d5b183)
.github/AI-DETECTION-EXAMPLES.md (197 lines, vendored, new file)
@@ -0,0 +1,197 @@
# AI Issue Detection - Examples

This document provides examples to help understand what triggers AI detection.

## Example 1: High AI Score (Score: 6/8) ❌

**This would be flagged:**

```markdown
## Description
When deploying Velero on an EKS cluster with `hostNetwork: true`, the application fails to start.

## Critical Problem
```
time="2026-01-26T16:40:55Z" level=fatal msg="failed to start metrics server"
```
Status: BLOCKER

## Affected Environment

| Parameter | Value |
|----------|----------|
| Cluster | Amazon EKS |
| Velero Version | 1.8.2 |
| Kubernetes | 1.33 |

## Root Cause Analysis

The controller-runtime metrics uses port 8080 as a hardcoded default...

## Resolution Attempts

### Attempt 1: Use extraArgs
Result: Failed

### Attempt 2: Configure metricsAddress
Result: Failed

## Expected Permanent Solution

Velero should:
1. Auto-detect an available port
2. Accept configuring the controller-runtime port

## Questions for Maintainers
1. Why does controller-runtime use hardcoded 8080?
2. Is there a roadmap to support hostNetwork?

## Labels and Metadata
Severity: CRITICAL
```

**Why flagged (Patterns detected: 6/8):**
- ✓ `futureDates` - References "2026-01-26" and "Kubernetes 1.33"
- ✓ `excessiveHeaders` - 8+ section headers
- ✓ `formalPhrases` - "Root Cause Analysis", "Expected Permanent Solution", "Questions for Maintainers", "Labels and Metadata"
- ✓ `aiSectionHeaders` - "## Description", "## Critical Problem", "## Affected Environment", "## Resolution Attempts"
- ✓ `perfectFormatting` - Perfect table structure
- ✓ `genericSolutions` - Mentions "auto-detect"

---

## Example 2: Medium AI Score (Score: 2/8) ✅

**This would NOT be flagged (below threshold):**

```markdown
**What steps did you take and what happened:**

I'm trying to restore a backup but getting this error:
```
error: backup "my-backup" not found
```

**What did you expect to happen:**
The backup should restore successfully

**Environment:**
- Velero version: 1.13.0
- Kubernetes version: 1.28
- Cloud provider: AWS

**Additional context:**
I can see the backup in S3 but Velero doesn't list it. Running `velero backup get` shows no backups.
```

**Why NOT flagged (Patterns detected: 2/8):**
- ✗ `futureDates` - Uses realistic versions
- ✗ `excessiveHeaders` - Only 3 headers
- ✗ `formalPhrases` - No formal AI phrases
- ✓ `excessiveTables` - Has a table, but only one
- ✗ `perfectFormatting` - Normal formatting
- ✗ `aiSectionHeaders` - Standard issue template headers
- ✓ `excessiveFormatting` - Has code blocks
- ✗ `genericSolutions` - No generic solutions

---

## Example 3: Legitimate Detailed Issue (Score: 3/8) ⚠️

**This would be flagged but is actually legitimate:**

```markdown
## Problem Description

VolumeGroupSnapshot restore fails with Ceph RBD driver.

## Environment

- Velero: 1.14.0
- Kubernetes: 1.28.3
- ODF: 4.14.2 with Ceph RBD CSI driver

## Root Cause

Ceph RBD stores group snapshot metadata in journal as `csi.groupid` omap key. During restore, when creating pre-provisioned VSC, the RBD driver reads this and populates `status.volumeGroupSnapshotHandle`.

The CSI snapshot controller looks for a VGSC with matching handle. Since Velero deletes VGSC after backup, it's not found.

## Reproduction Steps

1. Create backup with VGS
2. Delete namespace
3. Restore backup
4. Observe VS stuck with "cannot find group snapshot"

## Workaround

Create stub VGSC with matching `volumeGroupSnapshotHandle` and patch status.

## Proposed Fix

1. Backup: Capture `volumeGroupSnapshotHandle` in CSISnapshotInfo
2. Restore: Create stub VGSC if handle exists

## Code References

- Ceph RBD: https://github.com/ceph/ceph-csi/blob/devel/internal/rbd/snapshot.go#L167
- Velero deletion: https://github.com/vmware-tanzu/velero/blob/main/pkg/backup/actions/csi/pvc_action.go#L1124
```
**Why flagged (Patterns detected: 3/8):**
- ✗ `futureDates` - Uses current versions
- ✓ `excessiveHeaders` - Has 6 section headers
- ✓ `formalPhrases` - "Root Cause", "Proposed Fix"
- ✗ `excessiveTables` - No tables
- ✗ `perfectFormatting` - Normal formatting
- ✗ `aiSectionHeaders` - Technical, not generic
- ✗ `excessiveFormatting` - Reasonable formatting
- ✓ `genericSolutions` - Structured solution with code refs

**Maintainer Action**: This is a legitimate, well-researched issue. Verify the details with the contributor and remove the `potential-ai-generated` label.

---

## Example 4: Simple Valid Issue (Score: 0/8) ✅

**This would NOT be flagged:**

```markdown
Velero backup fails with error: `rpc error: code = Unavailable desc = connection error`

Running Velero 1.13 on GKE. Backups were working yesterday but now all fail with this error.

Logs show the node-agent pod is crashing. Any ideas?
```

**Why NOT flagged (Patterns detected: 0/8):**
- All patterns: None detected

---

## Key Takeaways

### Will Trigger Detection ❌
- Future dates/versions (2026+, K8s 1.33+)
- More than 4 formal AI phrases
- More than 8 section headers
- Perfect table formatting across multiple tables
- Generic AI section titles
- Auto-detect/generic solution patterns

### Will NOT Trigger ✅
- Realistic version numbers
- Actual error messages from real systems
- Normal issue formatting
- Moderate level of detail
- Standard GitHub issue template

### May Trigger (But Legitimate) ⚠️
- Very detailed technical analysis
- Multiple code references
- Well-structured proposals
- Extensive testing documentation

For these cases, maintainers will verify with the contributor and remove the flag once confirmed.
.github/AI-DETECTION-README.md (80 lines, vendored, new file)
@@ -0,0 +1,80 @@
# AI-Generated Content Detection

This directory contains the AI-generated content detection system for Velero issues.

## Overview

The Velero project has implemented automated detection of potentially AI-generated issues to help maintain quality and ensure that issues describe real, verified problems.

## How It Works

### Detection Workflow

The workflow (`.github/workflows/ai-issue-detector.yml`) runs automatically when:
- A new issue is opened
- An existing issue is edited
### Detection Patterns

The detector analyzes issues for several AI-generation patterns:

1. **Excessive Tables** - More than 5 pipe-delimited table rows
2. **Excessive Headers** - More than 8 section headers
3. **Formal Phrases** - More than 4 formal phrases typical of AI output (e.g., "Root Cause Analysis", "Operational Impact", "Expected Permanent Solution")
4. **Excessive Formatting** - Multiple horizontal rules and heavy separator formatting
5. **Future Dates** - Version numbers or dates that are unrealistic or in the future
6. **Perfect Formatting** - Overly structured tables with perfect alignment
7. **AI Section Headers** - More than 4 generic AI-style headers like "Critical Problem" or "Resolution Attempts"
8. **Generic Solutions** - Auto-generated solution patterns with multiple YAML examples

### Scoring System

Each detected pattern adds one point to the AI score. If the score is 3 or higher (out of 8), the issue is flagged as potentially AI-generated.

### Actions Taken

When an issue is flagged:
1. A `potential-ai-generated` label is added
2. A `needs-triage` label is added
3. An automated comment is posted explaining:
   - Why the issue was flagged
   - What patterns were detected
   - Guidelines for contributors to follow
   - Request for verification

## For Contributors

If your issue is flagged:

1. **Don't panic** - This is not an accusation, just a request for verification
2. **Review the guidelines** in our [Code Standards](../site/content/docs/main/code-standards.md#ai-generated-content)
3. **Verify your content**:
   - Ensure all version numbers are accurate
   - Confirm error messages are from your actual environment
   - Remove any placeholder or example content
   - Simplify overly structured formatting
4. **Update the issue** with corrections if needed
5. **Comment to confirm** that the issue describes a real problem
## For Maintainers

When reviewing flagged issues:

1. Check if the technical details are realistic and verifiable
2. Look for signs of hallucinated content (fake version numbers, non-existent features)
3. Engage with the issue author to verify the problem
4. Remove the `potential-ai-generated` label once verified
5. Close issues that cannot be verified or that describe non-existent problems

## Configuration

The detection patterns can be adjusted in the workflow file if needed. The threshold is currently set at 3 out of 8 patterns to balance false positives against detection accuracy.

## False Positives

The detector may occasionally flag legitimate issues, especially those that are:
- Very detailed and well-structured
- Using formal technical documentation style
- Reporting complex problems with extensive details

This is intentional - we prefer to verify detailed issues rather than miss AI-generated ones.
.github/MAINTAINER-AI-DETECTION-GUIDE.md (186 lines, vendored, new file)
@@ -0,0 +1,186 @@
# Maintainer Guide: AI-Generated Issue Detection

This guide helps Velero maintainers understand and work with the AI-generated issue detection system.

## Overview

The AI detection system automatically analyzes new and edited issues to identify potential AI-generated content. This helps maintain issue quality and ensures contributors verify their submissions.

## How It Works

### Automatic Detection

When an issue is opened or edited, the workflow:

1. **Analyzes** the issue body for 8 different AI patterns
2. **Calculates** an AI confidence score (0-8)
3. **If score ≥ 3**: Adds labels and posts a comment
4. **If score < 3**: Takes no action (issue proceeds normally)
### Detection Patterns

| Pattern | Description | Weight |
|---------|-------------|--------|
| `excessiveTables` | More than 5 pipe-delimited table rows | 1 |
| `excessiveHeaders` | More than 8 section headers | 1 |
| `formalPhrases` | More than 4 AI-typical phrases (e.g., "Root Cause Analysis") | 1 |
| `excessiveFormatting` | Multiple horizontal rules (---) | 1 |
| `futureDates` | Dates/versions in 2026-2029 or the 2030s | 1 |
| `perfectFormatting` | Multiple identical table structures | 1 |
| `aiSectionHeaders` | More than 4 generic AI headers (e.g., "Critical Problem") | 1 |
| `genericSolutions` | Auto-detect patterns with multiple YAML blocks | 1 |

## Working with Flagged Issues

### Step 1: Review the Issue

When you see an issue labeled `potential-ai-generated`:

1. **Read the issue carefully**
2. **Check the detected patterns** (listed in the auto-comment)
3. **Look for red flags**:
   - Future version numbers (e.g., a Kubernetes version that hasn't been released)
   - Future dates (e.g., timestamps later than today)
   - Non-existent features or configurations
   - Perfect table formatting with no actual content
   - Generic solutions that don't match Velero's architecture
### Step 2: Engage with the Contributor

**If the issue seems legitimate but over-formatted:**

```markdown
Thanks for the detailed report! Could you confirm:
1. Are you running Velero version X.Y.Z (you mentioned version A.B.C)?
2. Is the error message exactly as shown?
3. Have you actually tried the workarounds mentioned?

Once verified, we'll remove the AI-generated flag and investigate.
```

**If the issue appears to be unverified AI content:**

```markdown
This issue appears to contain AI-generated content that hasn't been verified.

Please review our [AI contribution guidelines](https://github.com/vmware-tanzu/velero/blob/main/site/content/docs/main/code-standards.md#ai-generated-content) and:
1. Confirm this describes a real problem in your environment
2. Verify all version numbers and error messages
3. Remove any placeholder or example content
4. Test that the issue is reproducible

If you can't verify the issue, please close it. We're happy to help with real problems!
```
### Step 3: Take Action

**For verified issues:**
1. Remove the `potential-ai-generated` label
2. Keep or remove `needs-triage` as appropriate
3. Proceed with normal issue triage

**For unverified/invalid issues:**
1. Request verification (see templates above)
2. If no response after 7 days, consider closing as `stale`
3. If clearly invalid, close with explanation
## Common Patterns

### False Positives (Legitimate Issues)

These may trigger the detector but are usually valid:

- **Very detailed bug reports** with extensive logs and testing
- **Technical design proposals** with multiple sections
- **Well-organized feature requests** with tables and examples

**Action**: Engage with the contributor, ask clarifying questions, and remove the flag if verified.

### True Positives (AI-Generated)

Red flags that indicate unverified AI content:

- **Future version numbers**: references to Kubernetes or Velero versions that don't exist yet
- **Future dates**: timestamps later than the current date
- **Non-existent features**: references to Velero features that don't exist
- **Generic solutions**: "Auto-detect available port" (not how Velero works)
- **Perfect formatting, wrong content**: beautiful tables with incorrect info

**Action**: Request verification, ask for actual environment details, and consider closing if unverified.

### Edge Cases

**Contributor using AI as a writing assistant:**
- Issue content is verified and accurate
- Just used AI to help structure/format the report
- **Action**: This is acceptable! Remove the flag if the content is verified.

**Legitimate issue that happens to match patterns:**
- Real problem with detailed analysis
- Includes proper version numbers and logs
- **Action**: Verify with the contributor, and remove the flag once confirmed.
## Statistics and Monitoring

You can search for flagged issues:

```
is:issue label:potential-ai-generated
```

Monitor trends:
- High detection rate → may need to adjust the thresholds
- Low detection rate → patterns are working well or need refinement
## Adjusting the System

### Modifying Detection Patterns

Edit `.github/workflows/ai-issue-detector.yml`:

```javascript
// Increase the flagging threshold to reduce false positives
if (aiScore >= 4) { // was: aiScore >= 3
  // ...add labels and post the comment...
}

// Relax a pattern's sensitivity
excessiveTables: (issueBody.match(/\|.*\|/g) || []).length > 8, // was: > 5
```

### Adding New Patterns

Add to the `aiPatterns` object:

```javascript
// Example: Detect excessive use of emojis
excessiveEmojis: (issueBody.match(/[\u{1F300}-\u{1F9FF}]/gu) || []).length > 10,
```

### Disabling the Workflow

Rename or delete `.github/workflows/ai-issue-detector.yml`.
## Best Practices

1. **Be courteous**: Contributors may not realize their AI tool generated incorrect info
2. **Verify, don't assume**: Some detailed issues are legitimate
3. **Educate**: Point to the AI guidelines in code-standards.md
4. **Track patterns**: Note common AI-generated patterns for future improvements
5. **Iterate**: Adjust detection thresholds based on false positive rates

## FAQ

**Q: Should we reject all AI-assisted contributions?**
A: No! AI assistance is fine if the contributor verifies accuracy. We only flag unverified AI content.

**Q: What if a contributor is offended by the flag?**
A: Explain it's automated and not personal. We just need verification of technical details.

**Q: Can we automatically close flagged issues?**
A: No. Always engage with the contributor first. Some are legitimate.

**Q: What's an acceptable false positive rate?**
A: Aim for <10%. If higher, increase the threshold from 3 to 4 or 5.

## Support

Questions about the AI detection system? Tag @vmware-tanzu/velero-maintainers in issue #9501.
.github/labels.yaml (1 line changed, vendored)
@@ -41,3 +41,4 @@ kind:
- tech-debt
- usage-error
- voting
- potential-ai-generated
.github/workflows/ai-issue-detector.yml (132 lines, vendored, new file)
@@ -0,0 +1,132 @@
name: "Detect AI-Generated Issues"

on:
  issues:
    types: [opened, edited]

jobs:
  detect-ai-content:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      contents: read
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Analyze issue for AI-generated content
        id: analyze
        uses: actions/github-script@v7
        with:
          script: |
            const issue = context.payload.issue;
            const issueBody = issue.body || '';
            const issueTitle = issue.title || '';

            // AI detection patterns
            const aiPatterns = {
              // Overly structured markdown with extensive tables
              excessiveTables: (issueBody.match(/\|.*\|/g) || []).length > 5,

              // Many section headers with consistent formatting
              excessiveHeaders: (issueBody.match(/^#{1,6}\s+/gm) || []).length > 8,

              // Overly formal language patterns common in AI output
              formalPhrases: [
                'Root Cause Analysis',
                'Operational Impact',
                'Expected Permanent Solution',
                'Questions for Maintainers',
                'Labels and Metadata',
                'Reference Files',
                'Steps to Reproduce'
              ].filter(phrase => issueBody.includes(phrase)).length > 4,

              // Multiple horizontal rules and heavy separator formatting
              excessiveFormatting: issueBody.includes('---\n \n---') ||
                (issueBody.match(/---/g) || []).length > 4,

              // Unrealistic version numbers or dates in the future
              futureDates: /202[6-9]|203\d/.test(issueBody),

              // Overly detailed technical specs with perfect formatting
              perfectFormatting: issueBody.includes('| Parameter | Value |') &&
                issueBody.includes('| Aspect | Status | Impact |'),

              // Generic AI-style section headers
              aiSectionHeaders: [
                '## Description',
                '## Critical Problem',
                '## Affected Environment',
                '## Full Helm Configuration',
                '## Resolution Attempts',
                '## Related Information'
              ].filter(header => issueBody.includes(header)).length > 4,

              // Unusual specificity combined with generic solutions
              genericSolutions: issueBody.includes('auto-detect') &&
                issueBody.includes('configuration:') &&
                (issueBody.match(/```yaml/g) || []).length > 2
            };

            // Calculate the AI score
            let aiScore = 0;
            let detectedPatterns = [];

            for (const [pattern, detected] of Object.entries(aiPatterns)) {
              if (detected) {
                aiScore++;
                detectedPatterns.push(pattern);
              }
            }

            console.log('AI Score: ' + aiScore + '/8');
            console.log('Detected patterns: ' + detectedPatterns.join(', '));

            // If the AI score is high, add labels and a comment
            if (aiScore >= 3) {
              try {
                // Add labels
                await github.rest.issues.addLabels({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  issue_number: issue.number,
                  labels: ['needs-triage', 'potential-ai-generated']
                });

                // Add comment
                const confidence = Math.round(aiScore / 8 * 100);
                const repoPath = context.repo.owner + '/' + context.repo.repo;
                const comment = '👋 Thank you for opening this issue!\n\n' +
                  'This issue has been flagged for review as it may contain AI-generated content (confidence: ' + confidence + '%).\n\n' +
                  '**Detected patterns:** ' + detectedPatterns.join(', ') + '\n\n' +
                  'If this issue was created with AI assistance, please review our [AI contribution guidelines](https://github.com/' + repoPath + '/blob/main/site/content/docs/main/code-standards.md#ai-generated-content).\n\n' +
                  '**Important:**\n' +
                  '- Please verify all technical details are accurate\n' +
                  '- Ensure version numbers, dates, and configurations reflect your actual environment\n' +
                  '- Remove any placeholder or example content\n' +
                  '- Confirm the issue is reproducible in your environment\n\n' +
                  'A maintainer will review this issue shortly. If this was flagged in error, please let us know!';

                await github.rest.issues.createComment({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  issue_number: issue.number,
                  body: comment
                });

                core.setOutput('ai-detected', 'true');
                core.setOutput('ai-score', aiScore);
              } catch (error) {
                console.log('Error adding label or comment:', error);
              }
            } else {
              core.setOutput('ai-detected', 'false');
              core.setOutput('ai-score', aiScore);
            }

            return {
              aiDetected: aiScore >= 3,
              score: aiScore,
              patterns: detectedPatterns
            };
@@ -17,7 +17,6 @@ If you're using Velero and want to add your organization to this list,
<a href="https://www.replicated.com/" border="0" target="_blank"><img alt="replicated.com" src="site/static/img/adopters/replicated-logo-red.svg" height="50"></a>
<a href="https://cloudcasa.io/" border="0" target="_blank"><img alt="cloudcasa.io" src="site/static/img/adopters/cloudcasa.svg" height="50"></a>
<a href="https://azure.microsoft.com/" border="0" target="_blank"><img alt="azure.com" src="site/static/img/adopters/azure.svg" height="50"></a>
<a href="https://www.broadcom.com/" border="0" target="_blank"><img alt="broadcom.com" src="site/static/img/adopters/broadcom.svg" height="50"></a>
## Success Stories

Below is a list of adopters of Velero in **production environments** that have
@@ -69,9 +68,6 @@ Replicated uses the Velero open source project to enable snapshots in [KOTS][101
**[Microsoft Azure][105]**<br>
[Azure Backup for AKS][106] is an Azure native, Kubernetes aware, Enterprise ready backup for containerized applications deployed on Azure Kubernetes Service (AKS). AKS Backup utilizes Velero to perform backup and restore operations to protect stateful applications in AKS clusters.<br>

**[Broadcom][107]**<br>
[VMware Cloud Foundation][108] (VCF) offers the built-in [vSphere Kubernetes Service][109] (VKS), a Kubernetes runtime that includes a CNCF-certified Kubernetes distribution, to deploy and manage containerized workloads. VCF empowers platform engineers with native [Kubernetes multi-cluster management][110] capability for managing Kubernetes (K8s) infrastructure at scale. VCF utilizes Velero for Kubernetes data protection, enabling platform engineers to back up and restore containerized workload manifests & persistent volumes, helping to increase the resiliency of stateful applications in VKS clusters.

## Adding your organization to the list of Velero Adopters

If you are using Velero and would like to be included in the list of `Velero Adopters`, add an SVG version of your logo to the `site/static/img/adopters` directory in this repo and submit a [pull request][3] with your change. Name the image file something that reflects your company (e.g., if your company is called Acme, name the image acme.svg). See this for an example [PR][4].
@@ -129,8 +125,3 @@ If you would like to add your logo to a future `Adopters of Velero` section on [

[105]: https://azure.microsoft.com/
[106]: https://learn.microsoft.com/azure/backup/backup-overview

[107]: https://www.broadcom.com/
[108]: https://www.vmware.com/products/cloud-infrastructure/vmware-cloud-foundation
[109]: https://www.vmware.com/products/cloud-infrastructure/vsphere-kubernetes-service
[110]: https://blogs.vmware.com/cloud-foundation/2025/09/29/empowering-platform-engineers-with-native-kubernetes-multi-cluster-management-in-vmware-cloud-foundation/
@@ -13,7 +13,7 @@
# limitations under the License.

# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder

ARG GOPROXY
ARG BIN
@@ -49,7 +49,7 @@ RUN mkdir -p /output/usr/bin && \
    go clean -modcache -cache

# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS restic-builder
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS restic-builder

ARG GOPROXY
ARG BIN
@@ -73,7 +73,7 @@ RUN mkdir -p /output/usr/bin && \
    go clean -modcache -cache

# Velero image packing section
FROM paketobuildpacks/run-jammy-tiny:0.2.104
FROM paketobuildpacks/run-jammy-tiny:latest

LABEL maintainer="Xun Jiang <jxun@vmware.com>"

@@ -15,7 +15,7 @@
ARG OS_VERSION=1809

# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS velero-builder
FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder

ARG GOPROXY
ARG BIN
@@ -7,11 +7,11 @@
| Maintainer | GitHub ID | Affiliation |
|---------------------|---------------------------------------------------------------|--------------------------------------------------|
| Scott Seago | [sseago](https://github.com/sseago) | [OpenShift](https://github.com/openshift) |
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | Broadcom |
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | Broadcom |
| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | Broadcom |
| Daniel Jiang | [reasonerjt](https://github.com/reasonerjt) | [VMware](https://www.github.com/vmware/) |
| Wenkai Yin | [ywk253100](https://github.com/ywk253100) | [VMware](https://www.github.com/vmware/) |
| Xun Jiang | [blackpiglet](https://github.com/blackpiglet) | [VMware](https://www.github.com/vmware/) |
| Shubham Pampattiwar | [shubham-pampattiwar](https://github.com/shubham-pampattiwar) | [OpenShift](https://github.com/openshift) |
| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | Broadcom |
| Yonghui Li | [Lyndon-Li](https://github.com/Lyndon-Li) | [VMware](https://www.github.com/vmware/) |
| Anshul Ahuja | [anshulahuja98](https://github.com/anshulahuja98) | [Microsoft Azure](https://www.github.com/azure/) |
| Tiger Kaovilai | [kaovilai](https://github.com/kaovilai) | [OpenShift](https://github.com/openshift) |
@@ -27,3 +27,14 @@
* JenTing Hsiao ([jenting](https://github.com/jenting))
* Dave Smith-Uchida ([dsu-igeek](https://github.com/dsu-igeek))
* Ming Qiu ([qiuming-best](https://github.com/qiuming-best))

## Velero Contributors & Stakeholders

| Feature Area | Lead |
|------------------------|:------------------------------------------------------------------------------------:|
| Technical Lead | Daniel Jiang [reasonerjt](https://github.com/reasonerjt) |
| Kubernetes CSI Liaison | |
| Deployment | |
| Community Management | Orlin Vasilev [OrlinVasilev](https://github.com/OrlinVasilev) |
| Product Management | Pradeep Kumar Chaturvedi [pradeepkchaturvedi](https://github.com/pradeepkchaturvedi) |
@@ -42,11 +42,13 @@ The following is a list of the supported Kubernetes versions for each Velero ver

| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|-------------------------------------|
| 1.18 | 1.18-latest | 1.33.7, 1.34.1, and 1.35.0 |
| 1.17 | 1.18-latest | 1.31.7, 1.32.3, 1.33.1, and 1.34.0 |
| 1.16 | 1.18-latest | 1.31.4, 1.32.3, and 1.33.0 |
| 1.15 | 1.18-latest | 1.28.8, 1.29.8, 1.30.4, and 1.31.1 |
| 1.14 | 1.18-latest | 1.27.9, 1.28.9, and 1.29.4 |
| 1.13 | 1.18-latest | 1.26.5, 1.27.3, 1.27.8, and 1.28.3 |
| 1.12 | 1.18-latest | 1.25.7, 1.26.5, 1.26.7, and 1.27.3 |
| 1.11 | 1.18-latest | 1.23.10, 1.24.9, 1.25.5, and 1.26.1 |

Velero supports IPv4, IPv6, and dual-stack environments. Support for this was tested against Velero v1.8.
Tiltfile (2 lines changed)
@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.25.7 as tilt-helper
FROM golang:1.25 as tilt-helper

# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \
@@ -1,109 +0,0 @@
## v1.18

### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.18.0

### Container Image
`velero/velero:v1.18.0`

### Documentation
https://velero.io/docs/v1.18/

### Upgrading
https://velero.io/docs/v1.18/upgrade-to-1.18/

### Highlights
#### Concurrent backup
In v1.18, Velero can process multiple backups concurrently. This is a significant usability improvement, especially for multi-tenant or multi-user setups: backups submitted by different users now run simultaneously without interfering with each other.

See the design document https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/concurrent-backup-processing.md for more details.
#### Cache volume for data movers
In v1.18, Velero allows users to configure cache volumes for data mover pods during restore for CSI snapshot data movement and fs-backup. This brings the following benefits:
- Solves the problem of data mover pods failing to run when the pod's ephemeral disk is limited
- Solves the problem of multiple data mover pods failing to run concurrently on one node when the node's ephemeral disk is limited
- Working together with the backup repository's cache limit configuration, a cache volume of appropriate size helps improve restore throughput

See the design document https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/backup-repo-cache-volume.md for more details.
#### Incremental size for data movers
In v1.18, Velero lets users observe the incremental size of data mover backups for CSI snapshot data movement and fs-backup, so users can see the data reduction achieved by incremental backup.

#### Wildcard support for namespaces
In v1.18, Velero supports glob patterns in namespace filters during backup and restore, so users can filter namespaces in batches.
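A minimal sketch of how such wildcard expansion can work (illustrative names, not Velero's actual implementation): Go's `path.Match` handles `*`, `?`, and `[abc]`, so brace sets like `{a,b,c}` are expanded by hand first, and the resulting patterns are matched against the cluster's active namespaces.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// expandBraces rewrites "web-{prod,stage}" into ["web-prod", "web-stage"].
func expandBraces(p string) []string {
	open := strings.Index(p, "{")
	end := strings.Index(p, "}")
	if open < 0 || end < open {
		return []string{p}
	}
	var out []string
	for _, alt := range strings.Split(p[open+1:end], ",") {
		out = append(out, expandBraces(p[:open]+alt+p[end+1:])...)
	}
	return out
}

// expandNamespacePatterns expands glob patterns against the list of
// active namespaces, preserving order and deduplicating matches.
func expandNamespacePatterns(patterns, activeNamespaces []string) []string {
	matched := []string{}
	seen := map[string]bool{}
	for _, p := range patterns {
		for _, alt := range expandBraces(p) {
			for _, ns := range activeNamespaces {
				if ok, err := path.Match(alt, ns); err == nil && ok && !seen[ns] {
					seen[ns] = true
					matched = append(matched, ns)
				}
			}
		}
	}
	return matched
}

func main() {
	active := []string{"web-prod", "web-stage", "db-prod", "kube-system"}
	fmt.Println(expandNamespacePatterns([]string{"web-{prod,stage}", "db-*"}, active))
	// [web-prod web-stage db-prod]
}
```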
#### VolumePolicy for PVC phase
In v1.18, Velero VolumePolicy supports actions by PVC phase, which helps users apply special handling to PVCs in a specific phase, e.g., skipping PVCs in Pending/Lost status during backup.
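A phase-based skip rule can be sketched as below (hypothetical types, not Velero's actual VolumePolicy API): the condition lists the PVC phases to skip, and the backup path consults it before snapshotting.

```go
package main

import "fmt"

// pvcPhaseCondition is an illustrative stand-in for a VolumePolicy
// condition: PVCs whose phase appears in skipPhases are excluded.
type pvcPhaseCondition struct {
	skipPhases []string
}

func (c pvcPhaseCondition) shouldSkip(phase string) bool {
	for _, p := range c.skipPhases {
		if p == phase {
			return true
		}
	}
	return false
}

func main() {
	cond := pvcPhaseCondition{skipPhases: []string{"Pending", "Lost"}}
	fmt.Println(cond.shouldSkip("Pending"), cond.shouldSkip("Bound")) // true false
}
```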
#### Scalability and resiliency improvements
##### Prevent Velero server OOM kill for large backup repositories
In v1.18, some backup repository operations are deferred and executed outside the Velero server, so the Velero server won't be OOM killed.

#### Performance improvement for VolumePolicy
In v1.18, VolumePolicy is enhanced for large numbers of pods/PVCs, so performance is significantly improved.

#### Events for data mover pod diagnostics
In v1.18, events are recorded in the data mover pod diagnostics, which lets users see more information for troubleshooting when a data mover pod fails.

### Runtime and dependencies
Golang runtime: 1.25.7
kopia: 0.22.3

### Limitations/Known issues

### Breaking changes
#### Deprecation of PVC selected-node feature
In accordance with the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md#deprecation-policy), the PVC selected-node feature is deprecated in v1.18. Velero handles the PVC's selected-node annotation appropriately, so users don't need to do anything in particular.

### All Changes
* Remove backup from running list when backup fails validation (#9498, @sseago)
* Maintenance Job only uses the first element of the LoadAffinity array (#9494, @blackpiglet)
* Fix issue #9478, add diagnose info on expose peek fails (#9481, @Lyndon-Li)
* Add Role, RoleBinding, ClusterRole, and ClusterRoleBinding in restore sequence. (#9474, @blackpiglet)
* Add maintenance job and data mover pod's labels and annotations setting. (#9452, @blackpiglet)
* Fix plugin init container names exceeding DNS-1123 limit (#9445, @mpryc)
* Add PVC-to-Pod cache to improve volume policy performance (#9441, @shubham-pampattiwar)
* Remove VolumeSnapshotClass from CSI B/R process. (#9431, @blackpiglet)
* Use hookIndex for recording multiple restore exec hooks. (#9366, @blackpiglet)
* Sanitize Azure HTTP responses in BSL status messages (#9321, @shubham-pampattiwar)
* Remove labels associated with previous backups (#9206, @Joeavaikath)
* Add VolumePolicy support for PVC Phase conditions to allow skipping Pending PVCs (#9166, @claude)
* feat: Enhance BackupStorageLocation with Secret-based CA certificate support (#9141, @kaovilai)
* Add `--apply` flag to `install` command, allowing usage of Kubernetes apply to make changes to existing installs (#9132, @mjnagel)
* Fix issue #9194, add doc for GOMAXPROCS behavior change (#9420, @Lyndon-Li)
* Apply volume policies to VolumeGroupSnapshot PVC filtering (#9419, @shubham-pampattiwar)
* Fix issue #9276, add doc for cache volume support (#9418, @Lyndon-Li)
* Add Prometheus metrics for maintenance jobs (#9414, @shubham-pampattiwar)
* Fix issue #9400, connect repo first time after creation so that init params could be written (#9407, @Lyndon-Li)
* Cache volume for PVR (#9397, @Lyndon-Li)
* Cache volume support for DataDownload (#9391, @Lyndon-Li)
* Don't copy securityContext from first container if configmap found (#9389, @sseago)
* Refactor repo provider interface for static configuration (#9379, @Lyndon-Li)
* Fix issue #9365, prevent fake completion notification due to multiple updates of a single PVR (#9375, @Lyndon-Li)
* Add cache volume configuration (#9370, @Lyndon-Li)
* Track actual resource names for GenerateName in restore status (#9368, @shubham-pampattiwar)
* Fix managed fields patch for resources using GenerateName (#9367, @shubham-pampattiwar)
* Support cache volume for generic restore exposer and pod volume exposer (#9362, @Lyndon-Li)
* Add incrementalSize to DU/PVB for reporting new/changed size (#9357, @sseago)
* Add snapshotSize for DataDownload, PodVolumeRestore (#9354, @Lyndon-Li)
* Add cache dir configuration for udmrepo (#9353, @Lyndon-Li)
* Fix the Job build error when BackupRepository name is longer than 63 characters. (#9350, @blackpiglet)
* Add cache configuration to VGDP (#9342, @Lyndon-Li)
* Fix issue #9332, add bytesDone for cache files (#9333, @Lyndon-Li)
* Fix typos in documentation (#9329, @T4iFooN-IX)
* Concurrent backup processing (#9307, @sseago)
* VerifyJSONConfigs verifies every element in Data. (#9302, @blackpiglet)
* Fix issue #9267, add events to data mover prepare diagnostic (#9296, @Lyndon-Li)
* Add option for privileged fs-backup pod (#9295, @sseago)
* Fix issue #9193, don't connect repo in repo controller (#9291, @Lyndon-Li)
* Implement concurrency control for cache of native VolumeSnapshotter plugin. (#9281, @0xLeo258)
* Fix issue #7904, remove the code and doc for PVC node selection (#9269, @Lyndon-Li)
* Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases (#9264, @shubham-pampattiwar)
* Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment (#9256, @shubham-pampattiwar)
* Implement wildcard namespace pattern expansion for backup namespace includes/excludes. This change adds support for wildcard patterns (*, ?, [abc], {a,b,c}) in namespace includes and excludes during backup operations (#9255, @Joeavaikath)
* Protect VolumeSnapshot field from race condition during multi-thread backup (#9248, @0xLeo258)
* Update AzureAD Microsoft Authentication Library to v1.5.0 (#9244, @priyansh17)
* Get pod list once per namespace in pvc IBA (#9226, @sseago)
* Fix issue #7725, add design for backup repo cache configuration (#9148, @Lyndon-Li)
* Fix issue #9229, don't attach backupPVC to the source node (#9233, @Lyndon-Li)
* feat: Permit specifying annotations for the BackupPVC (#9173, @clementnuss)
New files under `changelogs/unreleased/` (one line each, named `<PR>-<author>` and containing the matching entry from the All Changes list above): `9132-mjnagel`, `9141-kaovilai`, `9148-Lyndon-Li`, `9166-claude`, `9173-clementnuss`, `9206-Joeavaikath`, `9226-sseago`, `9233-Lyndon-Li`, `9244-priyansh17`, `9248-0xLeo258`, `9255-Joeavaikath`, `9256-shubham-pampattiwar`, `9264-shubham-pampattiwar`, `9269-Lyndon-Li`, `9281-0xLeo258`, `9291-Lyndon-Li`, `9295-sseago`, `9296-Lyndon-Li`, `9302-blackpiglet`, `9307-sseago`, `9321-shubham-pampattiwar`, `9329-T4iFooN-IX`, `9333-Lyndon-Li`, `9342-Lyndon-Li`, `9350-blackpiglet`, `9353-Lyndon-Li`, `9354-Lyndon-Li`, `9357-sseago`, `9362-Lyndon-Li`, `9366-blackpiglet`, `9367-shubham-pampattiwar`, `9368-shubham-pampattiwar`, `9370-Lyndon-Li`, `9375-Lyndon-Li`, `9379-Lyndon-Li`, `9389-sseago`, `9391-Lyndon-Li`, `9397-Lyndon-Li`, `9407-Lyndon-Li`, `9414-shubham-pampattiwar`, `9418-Lyndon-Li`, `9419-shubham-pampattiwar`, `9420-Lyndon-Li`, `9431-blackpiglet`, `9441-shubham-pampattiwar`, `9445-mpryc`, `9452-blackpiglet`, `9474-blackpiglet`, `9481-Lyndon-Li`, `9494-blackpiglet`, `9498-sseago`.

The one multi-line entry, `changelogs/unreleased/9255-Joeavaikath` (10 lines), reads:

> Implement wildcard namespace pattern expansion for backup namespace includes/excludes.
>
> This change adds support for wildcard patterns (*, ?, [abc], {a,b,c}) in namespace includes and excludes during backup operations. When wildcard patterns are detected, they are expanded against the list of active namespaces in the cluster before the backup proceeds.
>
> Key features:
> - Wildcard patterns in namespace includes/excludes are automatically detected and expanded
> - Pattern validation ensures unsupported patterns (regex, consecutive asterisks) are rejected
> - Empty wildcard results (e.g., "invalid*" matching no namespaces) correctly result in empty backups
> - Exact namespace names and "*" continue to work as before (no expansion needed)

Deleted changelog files:

* Fix VolumePolicy PVC phase condition filter for unbound PVCs (#9507)
* Support all glob wildcard characters in namespace validation
go.mod:

```diff
@@ -1,6 +1,6 @@
 module github.com/vmware-tanzu/velero

-go 1.25.7
+go 1.25.0

 require (
 	cloud.google.com/go/storage v1.57.2
@@ -171,11 +171,11 @@ require (
 	go.opentelemetry.io/contrib/detectors/gcp v1.38.0 // indirect
 	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 // indirect
 	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
-	go.opentelemetry.io/otel v1.40.0 // indirect
-	go.opentelemetry.io/otel/metric v1.40.0 // indirect
-	go.opentelemetry.io/otel/sdk v1.40.0 // indirect
-	go.opentelemetry.io/otel/sdk/metric v1.40.0 // indirect
-	go.opentelemetry.io/otel/trace v1.40.0 // indirect
+	go.opentelemetry.io/otel v1.38.0 // indirect
+	go.opentelemetry.io/otel/metric v1.38.0 // indirect
+	go.opentelemetry.io/otel/sdk v1.38.0 // indirect
+	go.opentelemetry.io/otel/sdk/metric v1.38.0 // indirect
+	go.opentelemetry.io/otel/trace v1.38.0 // indirect
 	go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
 	go.uber.org/multierr v1.11.0 // indirect
 	go.yaml.in/yaml/v2 v2.4.3 // indirect
@@ -183,7 +183,7 @@ require (
 	golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
 	golang.org/x/net v0.47.0 // indirect
 	golang.org/x/sync v0.18.0 // indirect
-	golang.org/x/sys v0.40.0 // indirect
+	golang.org/x/sys v0.38.0 // indirect
 	golang.org/x/term v0.37.0 // indirect
 	golang.org/x/time v0.14.0 // indirect
 	golang.org/x/tools v0.38.0 // indirect
```
go.sum:

```diff
@@ -748,18 +748,18 @@ go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.6
 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
 go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
 go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
-go.opentelemetry.io/otel v1.40.0 h1:oA5YeOcpRTXq6NN7frwmwFR0Cn3RhTVZvXsP4duvCms=
-go.opentelemetry.io/otel v1.40.0/go.mod h1:IMb+uXZUKkMXdPddhwAHm6UfOwJyh4ct1ybIlV14J0g=
+go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
+go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
 go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0 h1:rixTyDGXFxRy1xzhKrotaHy3/KXdPhlWARrCgK+eqUY=
 go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0/go.mod h1:dowW6UsM9MKbJq5JTz2AMVp3/5iW5I/TStsk8S+CfHw=
-go.opentelemetry.io/otel/metric v1.40.0 h1:rcZe317KPftE2rstWIBitCdVp89A2HqjkxR3c11+p9g=
-go.opentelemetry.io/otel/metric v1.40.0/go.mod h1:ib/crwQH7N3r5kfiBZQbwrTge743UDc7DTFVZrrXnqc=
-go.opentelemetry.io/otel/sdk v1.40.0 h1:KHW/jUzgo6wsPh9At46+h4upjtccTmuZCFAc9OJ71f8=
-go.opentelemetry.io/otel/sdk v1.40.0/go.mod h1:Ph7EFdYvxq72Y8Li9q8KebuYUr2KoeyHx0DRMKrYBUE=
-go.opentelemetry.io/otel/sdk/metric v1.40.0 h1:mtmdVqgQkeRxHgRv4qhyJduP3fYJRMX4AtAlbuWdCYw=
-go.opentelemetry.io/otel/sdk/metric v1.40.0/go.mod h1:4Z2bGMf0KSK3uRjlczMOeMhKU2rhUqdWNoKcYrtcBPg=
-go.opentelemetry.io/otel/trace v1.40.0 h1:WA4etStDttCSYuhwvEa8OP8I5EWu24lkOzp+ZYblVjw=
-go.opentelemetry.io/otel/trace v1.40.0/go.mod h1:zeAhriXecNGP/s2SEG3+Y8X9ujcJOTqQ5RgdEJcawiA=
+go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
+go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
+go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
+go.opentelemetry.io/otel/sdk v1.38.0/go.mod h1:ghmNdGlVemJI3+ZB5iDEuk4bWA3GkTpW+DOoZMYBVVg=
+go.opentelemetry.io/otel/sdk/metric v1.38.0 h1:aSH66iL0aZqo//xXzQLYozmWrXxyFkBJ6qT5wthqPoM=
+go.opentelemetry.io/otel/sdk/metric v1.38.0/go.mod h1:dg9PBnW9XdQ1Hd6ZnRz689CbtrUp0wMMs9iPcgT9EZA=
+go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
+go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
 go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5/go.mod h1:nmDLcffg48OtT/PSW0Hg7FvpRQsQh5OSqIylirxKC7o=
 go.starlark.net v0.0.0-20201006213952-227f4aabceb5/go.mod h1:f0znQkUKRrkk36XxWbGjMqQM8wGv/xHBVE2qc3B5oFU=
 go.starlark.net v0.0.0-20230525235612-a134d8f9ddca h1:VdD38733bfYv5tUZwEIskMM93VanwNIi5bIKnDrJdEY=
@@ -969,8 +969,8 @@ golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBc
 golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
-golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
+golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
 golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
```
```diff
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-FROM --platform=$TARGETPLATFORM golang:1.25.7-bookworm
+FROM --platform=$TARGETPLATFORM golang:1.25-bookworm

 ARG GOPROXY

@@ -21,11 +21,9 @@ ENV GO111MODULE=on
 ENV GOPROXY=${GOPROXY}

 # kubebuilder test bundle is separated from kubebuilder. Need to setup it for CI test.
-# Using setup-envtest to download envtest binaries
-RUN go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest && \
-    mkdir -p /usr/local/kubebuilder/bin && \
-    ENVTEST_ASSETS_DIR=$(setup-envtest use 1.33.0 --bin-dir /usr/local/kubebuilder/bin -p path) && \
-    cp -r ${ENVTEST_ASSETS_DIR}/* /usr/local/kubebuilder/bin/
+RUN curl -sSLo envtest-bins.tar.gz https://go.kubebuilder.io/test-tools/1.22.1/linux/$(go env GOARCH) && \
+    mkdir /usr/local/kubebuilder && \
+    tar -C /usr/local/kubebuilder --strip-components=1 -zvxf envtest-bins.tar.gz

 RUN wget --quiet https://github.com/kubernetes-sigs/kubebuilder/releases/download/v3.2.0/kubebuilder_linux_$(go env GOARCH) && \
     mv kubebuilder_linux_$(go env GOARCH) /usr/local/kubebuilder/bin/kubebuilder && \
```
```diff
@@ -1,5 +1,5 @@
 diff --git a/go.mod b/go.mod
-index 5f939c481..f6205aa3c 100644
+index 5f939c481..6ae17f4a1 100644
 --- a/go.mod
 +++ b/go.mod
 @@ -24,32 +24,31 @@ require (
@@ -14,13 +14,13 @@ index 5f939c481..f6205aa3c 100644
 - golang.org/x/term v0.4.0
 - golang.org/x/text v0.6.0
 - google.golang.org/api v0.106.0
-+ golang.org/x/crypto v0.45.0
-+ golang.org/x/net v0.47.0
++ golang.org/x/crypto v0.36.0
++ golang.org/x/net v0.38.0
 + golang.org/x/oauth2 v0.28.0
-+ golang.org/x/sync v0.18.0
-+ golang.org/x/sys v0.38.0
-+ golang.org/x/term v0.37.0
-+ golang.org/x/text v0.31.0
++ golang.org/x/sync v0.12.0
++ golang.org/x/sys v0.31.0
++ golang.org/x/term v0.30.0
++ golang.org/x/text v0.23.0
 + google.golang.org/api v0.114.0
 )
@@ -64,11 +64,11 @@ index 5f939c481..f6205aa3c 100644
 )

 -go 1.18
-+go 1.24.0
++go 1.23.0
 +
-+toolchain go1.24.11
++toolchain go1.23.7
 diff --git a/go.sum b/go.sum
-index 026e1d2fa..4a37e7ac7 100644
+index 026e1d2fa..805792055 100644
 --- a/go.sum
 +++ b/go.sum
 @@ -1,23 +1,24 @@
@@ -170,8 +170,8 @@ index 026e1d2fa..4a37e7ac7 100644
 golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
 -golang.org/x/crypto v0.5.0 h1:U/0M97KRkSFvyD/3FSmdP5W5swImpNgle/EHFhOsQPE=
 -golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU=
-+golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
-+golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
++golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
++golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
 golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
 golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -181,8 +181,8 @@ index 026e1d2fa..4a37e7ac7 100644
 golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 -golang.org/x/net v0.5.0 h1:GyT4nK/YDHSqa1c4753ouYCDajOYKTja9Xb/OHtgvSw=
 -golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
-+golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
-+golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
++golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
++golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
 -golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
 -golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
@@ -194,8 +194,8 @@ index 026e1d2fa..4a37e7ac7 100644
 golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 -golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
 -golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-+golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
-+golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
++golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
++golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
 golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -205,21 +205,21 @@ index 026e1d2fa..4a37e7ac7 100644
 golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 -golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18=
 -golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-+golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
-+golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
++golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
++golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 -golang.org/x/term v0.4.0 h1:O7UWfv5+A2qiuulQk30kVinPoMtoIPeVaKLEgLpVkvg=
 -golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
-+golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
-+golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
++golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
++golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
 golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 -golang.org/x/text v0.6.0 h1:3XmdazWV+ubf7QgHSTWeykHOci5oeekaGJBLkrkaw4k=
 -golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
-+golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
-+golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
++golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
++golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
 golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
 golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
 golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
```
@@ -134,7 +134,6 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
pv := new(corev1api.PersistentVolume)
var err error

var pvNotFoundErr error
if groupResource == kuberesource.PersistentVolumeClaims {
if err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &pvc); err != nil {
v.logger.WithError(err).Error("fail to convert unstructured into PVC")
@@ -143,10 +142,8 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group

pv, err = kubeutil.GetPVForPVC(pvc, v.client)
if err != nil {
// Any error means PV not available - save to return later if no policy matches
v.logger.Debugf("PV not found for PVC %s: %v", pvc.Namespace+"/"+pvc.Name, err)
pvNotFoundErr = err
pv = nil
v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
return false, err
}
}

@@ -161,7 +158,7 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
vfd := resourcepolicies.NewVolumeFilterData(pv, nil, pvc)
action, err := v.volumePolicy.GetMatchAction(vfd)
if err != nil {
v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for %+v", vfd)
v.logger.WithError(err).Errorf("fail to get VolumePolicy match action for PV %s", pv.Name)
return false, err
}

@@ -170,21 +167,15 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
// If there is no match action, go on to the next check.
if action != nil {
if action.Type == resourcepolicies.Snapshot {
v.logger.Infof("performing snapshot action for %+v", vfd)
v.logger.Infof(fmt.Sprintf("performing snapshot action for pv %s", pv.Name))
return true, nil
} else {
v.logger.Infof("Skip snapshot action for %+v as the action type is %s", vfd, action.Type)
v.logger.Infof("Skip snapshot action for pv %s as the action type is %s", pv.Name, action.Type)
return false, nil
}
}
}

// If resource is PVC, and PV is nil (e.g., Pending/Lost PVC with no matching policy), return the original error
if groupResource == kuberesource.PersistentVolumeClaims && pv == nil && pvNotFoundErr != nil {
v.logger.WithError(pvNotFoundErr).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
return false, pvNotFoundErr
}

// If this PV is claimed, see if we've already taken a (pod volume backup)
// snapshot of the contents of this PV. If so, don't take a snapshot.
if pv.Spec.ClaimRef != nil {
@@ -218,7 +209,7 @@ func (v *volumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured, group
return true, nil
}

v.logger.Infof("skipping snapshot action for pv %s possibly due to no volume policy setting or snapshotVolumes is false", pv.Name)
v.logger.Infof(fmt.Sprintf("skipping snapshot action for pv %s possibly due to no volume policy setting or snapshotVolumes is false", pv.Name))
return false, nil
}

@@ -228,7 +219,6 @@ func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod cor
return false, nil
}

var pvNotFoundErr error
if v.volumePolicy != nil {
var resource any
var err error
@@ -240,13 +230,10 @@ func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod cor
v.logger.WithError(err).Errorf("fail to get PVC for pod %s", pod.Namespace+"/"+pod.Name)
return false, err
}
pvResource, err := kubeutil.GetPVForPVC(pvc, v.client)
resource, err = kubeutil.GetPVForPVC(pvc, v.client)
if err != nil {
// Any error means PV not available - save to return later if no policy matches
v.logger.Debugf("PV not found for PVC %s: %v", pvc.Namespace+"/"+pvc.Name, err)
pvNotFoundErr = err
} else {
resource = pvResource
v.logger.WithError(err).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
return false, err
}
}

@@ -273,12 +260,6 @@ func (v volumeHelperImpl) ShouldPerformFSBackup(volume corev1api.Volume, pod cor
return false, nil
}
}

// If no policy matched and PV was not found, return the original error
if pvNotFoundErr != nil {
v.logger.WithError(pvNotFoundErr).Errorf("fail to get PV for PVC %s", pvc.Namespace+"/"+pvc.Name)
return false, pvNotFoundErr
}
}

if v.shouldPerformFSBackupLegacy(volume, pod) {

@@ -286,7 +286,7 @@ func TestVolumeHelperImpl_ShouldPerformSnapshot(t *testing.T) {
expectedErr: false,
},
{
name: "PVC not having PV, return false and error when no matching policy",
name: "PVC not having PV, return false and error case PV not found",
inputObj: builder.ForPersistentVolumeClaim("default", "example-pvc").StorageClass("gp2-csi").Result(),
groupResource: kuberesource.PersistentVolumeClaims,
resourcePolicies: &resourcepolicies.ResourcePolicies{
@@ -1234,312 +1234,3 @@ func TestNewVolumeHelperImplWithCache_UsesCache(t *testing.T) {
require.NoError(t, err)
require.False(t, shouldSnapshot, "Expected snapshot to be skipped due to fs-backup selection via cache")
}

// TestVolumeHelperImpl_ShouldPerformSnapshot_UnboundPVC tests that Pending and Lost PVCs with
// phase-based skip policies don't cause errors when GetPVForPVC would fail.
func TestVolumeHelperImpl_ShouldPerformSnapshot_UnboundPVC(t *testing.T) {
testCases := []struct {
name string
inputPVC *corev1api.PersistentVolumeClaim
resourcePolicies *resourcepolicies.ResourcePolicies
shouldSnapshot bool
expectedErr bool
}{
{
name: "Pending PVC with phase-based skip policy should not error and return false",
inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-pending").
StorageClass("non-existent-class").
Phase(corev1api.ClaimPending).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldSnapshot: false,
expectedErr: false,
},
{
name: "Pending PVC without matching skip policy should error (no PV)",
inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-pending-no-policy").
StorageClass("non-existent-class").
Phase(corev1api.ClaimPending).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldSnapshot: false,
expectedErr: true,
},
{
name: "Lost PVC with phase-based skip policy should not error and return false",
inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
StorageClass("some-class").
Phase(corev1api.ClaimLost).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Lost"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldSnapshot: false,
expectedErr: false,
},
{
name: "Lost PVC with policy for Pending and Lost should not error and return false",
inputPVC: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
StorageClass("some-class").
Phase(corev1api.ClaimLost).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending", "Lost"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldSnapshot: false,
expectedErr: false,
},
}

for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
fakeClient := velerotest.NewFakeControllerRuntimeClient(t)

var p *resourcepolicies.Policies
if tc.resourcePolicies != nil {
p = &resourcepolicies.Policies{}
err := p.BuildPolicy(tc.resourcePolicies)
require.NoError(t, err)
}

vh := NewVolumeHelperImpl(
p,
ptr.To(true),
logrus.StandardLogger(),
fakeClient,
false,
false,
)

obj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.inputPVC)
require.NoError(t, err)

actualShouldSnapshot, actualError := vh.ShouldPerformSnapshot(&unstructured.Unstructured{Object: obj}, kuberesource.PersistentVolumeClaims)
if tc.expectedErr {
require.Error(t, actualError, "Want error; Got nil error")
return
}

require.NoError(t, actualError)
require.Equalf(t, tc.shouldSnapshot, actualShouldSnapshot, "Want shouldSnapshot as %t; Got shouldSnapshot as %t", tc.shouldSnapshot, actualShouldSnapshot)
})
}
}

// TestVolumeHelperImpl_ShouldPerformFSBackup_UnboundPVC tests that Pending and Lost PVCs with
// phase-based skip policies don't cause errors when GetPVForPVC would fail.
func TestVolumeHelperImpl_ShouldPerformFSBackup_UnboundPVC(t *testing.T) {
testCases := []struct {
name string
pod *corev1api.Pod
pvc *corev1api.PersistentVolumeClaim
resourcePolicies *resourcepolicies.ResourcePolicies
shouldFSBackup bool
expectedErr bool
}{
{
name: "Pending PVC with phase-based skip policy should not error and return false",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1api.Volume{
Name: "vol-pending",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-pending",
},
},
}).Result(),
pvc: builder.ForPersistentVolumeClaim("ns", "pvc-pending").
StorageClass("non-existent-class").
Phase(corev1api.ClaimPending).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldFSBackup: false,
expectedErr: false,
},
{
name: "Pending PVC without matching skip policy should error (no PV)",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1api.Volume{
Name: "vol-pending",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-pending-no-policy",
},
},
}).Result(),
pvc: builder.ForPersistentVolumeClaim("ns", "pvc-pending-no-policy").
StorageClass("non-existent-class").
Phase(corev1api.ClaimPending).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"storageClass": []string{"gp2-csi"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldFSBackup: false,
expectedErr: true,
},
{
name: "Lost PVC with phase-based skip policy should not error and return false",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1api.Volume{
Name: "vol-lost",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-lost",
},
},
}).Result(),
pvc: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
StorageClass("some-class").
Phase(corev1api.ClaimLost).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Lost"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldFSBackup: false,
expectedErr: false,
},
{
name: "Lost PVC with policy for Pending and Lost should not error and return false",
pod: builder.ForPod("ns", "pod-1").
Volumes(
&corev1api.Volume{
Name: "vol-lost",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pvc-lost",
},
},
}).Result(),
pvc: builder.ForPersistentVolumeClaim("ns", "pvc-lost").
StorageClass("some-class").
Phase(corev1api.ClaimLost).
Result(),
resourcePolicies: &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending", "Lost"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
},
shouldFSBackup: false,
expectedErr: false,
},
}

for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, tc.pvc)
require.NoError(t, fakeClient.Create(t.Context(), tc.pod))

var p *resourcepolicies.Policies
if tc.resourcePolicies != nil {
p = &resourcepolicies.Policies{}
err := p.BuildPolicy(tc.resourcePolicies)
require.NoError(t, err)
}

vh := NewVolumeHelperImpl(
p,
ptr.To(true),
logrus.StandardLogger(),
fakeClient,
false,
false,
)

actualShouldFSBackup, actualError := vh.ShouldPerformFSBackup(tc.pod.Spec.Volumes[0], *tc.pod)
if tc.expectedErr {
require.Error(t, actualError, "Want error; Got nil error")
return
}

require.NoError(t, actualError)
require.Equalf(t, tc.shouldFSBackup, actualShouldFSBackup, "Want shouldFSBackup as %t; Got shouldFSBackup as %t", tc.shouldFSBackup, actualShouldFSBackup)
})
}
}

@@ -687,14 +687,15 @@ func (ib *itemBackupper) getMatchAction(obj runtime.Unstructured, groupResource
return nil, errors.WithStack(err)
}

var pv *corev1api.PersistentVolume
if pvName := pvc.Spec.VolumeName; pvName != "" {
pv = &corev1api.PersistentVolume{}
if err := ib.kbClient.Get(context.Background(), kbClient.ObjectKey{Name: pvName}, pv); err != nil {
return nil, errors.WithStack(err)
}
pvName := pvc.Spec.VolumeName
if pvName == "" {
return nil, errors.Errorf("PVC has no volume backing this claim")
}

pv := &corev1api.PersistentVolume{}
if err := ib.kbClient.Get(context.Background(), kbClient.ObjectKey{Name: pvName}, pv); err != nil {
return nil, errors.WithStack(err)
}
// If pv is nil for unbound PVCs - policy matching will use PVC-only conditions
vfd := resourcepolicies.NewVolumeFilterData(pv, nil, pvc)
return ib.backupRequest.ResPolicies.GetMatchAction(vfd)
}
@@ -708,10 +709,7 @@ func (ib *itemBackupper) trackSkippedPV(obj runtime.Unstructured, groupResource
if name, err := getPVName(obj, groupResource); len(name) > 0 && err == nil {
ib.backupRequest.SkippedPVTracker.Track(name, approach, reason)
} else if err != nil {
// Log at info level for tracking purposes. This is not an error because
// it's expected for some resources (e.g., PVCs in Pending or Lost phase)
// to not have a PV name. This occurs when volume policy skips unbound PVCs.
log.WithError(err).Infof("unable to get PV name, skip tracking.")
log.WithError(err).Warnf("unable to get PV name, skip tracking.")
}
}

@@ -721,17 +719,6 @@ func (ib *itemBackupper) unTrackSkippedPV(obj runtime.Unstructured, groupResourc
if name, err := getPVName(obj, groupResource); len(name) > 0 && err == nil {
ib.backupRequest.SkippedPVTracker.Untrack(name)
} else if err != nil {
// For PVCs in Pending or Lost phase, it's expected that there's no PV name.
// Log at debug level instead of warning to reduce noise.
if groupResource == kuberesource.PersistentVolumeClaims {
pvc := new(corev1api.PersistentVolumeClaim)
if convErr := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), pvc); convErr == nil {
if pvc.Status.Phase == corev1api.ClaimPending || pvc.Status.Phase == corev1api.ClaimLost {
log.WithError(err).Debugf("unable to get PV name for %s PVC, skip untracking.", pvc.Status.Phase)
return
}
}
}
log.WithError(err).Warnf("unable to get PV name, skip untracking.")
}
}

@@ -17,15 +17,12 @@ limitations under the License.
package backup

import (
"bytes"
"testing"

"github.com/sirupsen/logrus"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/runtime/schema"
ctrlfake "sigs.k8s.io/controller-runtime/pkg/client/fake"

"github.com/vmware-tanzu/velero/internal/resourcepolicies"
"github.com/vmware-tanzu/velero/pkg/kuberesource"

"github.com/stretchr/testify/assert"
@@ -272,225 +269,3 @@ func TestAddVolumeInfo(t *testing.T) {
})
}
}

func TestGetMatchAction_PendingLostPVC(t *testing.T) {
scheme := runtime.NewScheme()
require.NoError(t, corev1api.AddToScheme(scheme))

// Create resource policies that skip Pending/Lost PVCs
resPolicies := &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending", "Lost"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
}
policies := &resourcepolicies.Policies{}
err := policies.BuildPolicy(resPolicies)
require.NoError(t, err)

testCases := []struct {
name string
pvc *corev1api.PersistentVolumeClaim
pv *corev1api.PersistentVolume
expectedAction *resourcepolicies.Action
expectError bool
}{
{
name: "Pending PVC with no VolumeName should match pvcPhase policy",
pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
StorageClass("test-sc").
Phase(corev1api.ClaimPending).
Result(),
pv: nil,
expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
expectError: false,
},
{
name: "Lost PVC with no VolumeName should match pvcPhase policy",
pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
StorageClass("test-sc").
Phase(corev1api.ClaimLost).
Result(),
pv: nil,
expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
expectError: false,
},
{
name: "Bound PVC with VolumeName and matching PV should not match pvcPhase policy",
pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
StorageClass("test-sc").
VolumeName("test-pv").
Phase(corev1api.ClaimBound).
Result(),
pv: builder.ForPersistentVolume("test-pv").StorageClass("test-sc").Result(),
expectedAction: nil,
expectError: false,
},
}

for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Build fake client with PV if present
clientBuilder := ctrlfake.NewClientBuilder().WithScheme(scheme)
if tc.pv != nil {
clientBuilder = clientBuilder.WithObjects(tc.pv)
}
fakeClient := clientBuilder.Build()

ib := &itemBackupper{
kbClient: fakeClient,
backupRequest: &Request{
ResPolicies: policies,
},
}

// Convert PVC to unstructured
pvcData, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pvc)
require.NoError(t, err)
obj := &unstructured.Unstructured{Object: pvcData}

action, err := ib.getMatchAction(obj, kuberesource.PersistentVolumeClaims, csiBIAPluginName)
if tc.expectError {
require.Error(t, err)
} else {
require.NoError(t, err)
}

if tc.expectedAction == nil {
assert.Nil(t, action)
} else {
require.NotNil(t, action)
assert.Equal(t, tc.expectedAction.Type, action.Type)
}
})
}
}

func TestTrackSkippedPV_PendingLostPVC(t *testing.T) {
testCases := []struct {
name string
pvc *corev1api.PersistentVolumeClaim
}{
{
name: "Pending PVC should log at info level",
pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
Phase(corev1api.ClaimPending).
Result(),
},
{
name: "Lost PVC should log at info level",
pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
Phase(corev1api.ClaimLost).
Result(),
},
{
name: "Bound PVC without VolumeName should log at info level",
pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
Phase(corev1api.ClaimBound).
Result(),
},
}

for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
ib := &itemBackupper{
backupRequest: &Request{
SkippedPVTracker: NewSkipPVTracker(),
},
}

// Set up log capture
logOutput := &bytes.Buffer{}
logger := logrus.New()
logger.SetOutput(logOutput)
logger.SetLevel(logrus.DebugLevel)

// Convert PVC to unstructured
pvcData, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pvc)
require.NoError(t, err)
obj := &unstructured.Unstructured{Object: pvcData}

ib.trackSkippedPV(obj, kuberesource.PersistentVolumeClaims, "", "test reason", logger)

logStr := logOutput.String()
assert.Contains(t, logStr, "level=info")
assert.Contains(t, logStr, "unable to get PV name, skip tracking.")
})
}
}

func TestUnTrackSkippedPV_PendingLostPVC(t *testing.T) {
testCases := []struct {
name string
pvc *corev1api.PersistentVolumeClaim
expectWarningLog bool
expectDebugMessage string
}{
{
name: "Pending PVC should log at debug level, not warning",
pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
Phase(corev1api.ClaimPending).
Result(),
expectWarningLog: false,
expectDebugMessage: "unable to get PV name for Pending PVC, skip untracking.",
},
{
name: "Lost PVC should log at debug level, not warning",
pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
Phase(corev1api.ClaimLost).
Result(),
expectWarningLog: false,
expectDebugMessage: "unable to get PV name for Lost PVC, skip untracking.",
},
{
name: "Bound PVC without VolumeName should log warning",
pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
Phase(corev1api.ClaimBound).
Result(),
expectWarningLog: true,
expectDebugMessage: "",
},
}

for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
ib := &itemBackupper{
backupRequest: &Request{
SkippedPVTracker: NewSkipPVTracker(),
},
}

// Set up log capture
logOutput := &bytes.Buffer{}
logger := logrus.New()
logger.SetOutput(logOutput)
logger.SetLevel(logrus.DebugLevel)

// Convert PVC to unstructured
pvcData, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tc.pvc)
require.NoError(t, err)
obj := &unstructured.Unstructured{Object: pvcData}

ib.unTrackSkippedPV(obj, kuberesource.PersistentVolumeClaims, logger)

logStr := logOutput.String()
if tc.expectWarningLog {
assert.Contains(t, logStr, "level=warning")
assert.Contains(t, logStr, "unable to get PV name, skip untracking.")
} else {
assert.NotContains(t, logStr, "level=warning")
if tc.expectDebugMessage != "" {
assert.Contains(t, logStr, "level=debug")
assert.Contains(t, logStr, tc.expectDebugMessage)
}
}
})
}
}

@@ -210,9 +210,11 @@ func resultsKey(ns, name string) string {

func (b *backupper) getMatchAction(resPolicies *resourcepolicies.Policies, pvc *corev1api.PersistentVolumeClaim, volume *corev1api.Volume) (*resourcepolicies.Action, error) {
if pvc != nil {
// Ignore err, if the PV is not available (Pending/Lost PVC or PV fetch failed) - try matching with PVC only
// GetPVForPVC returns nil for all error cases
pv, _ := kube.GetPVForPVC(pvc, b.crClient)
pv := new(corev1api.PersistentVolume)
err := b.crClient.Get(context.TODO(), ctrlclient.ObjectKey{Name: pvc.Spec.VolumeName}, pv)
if err != nil {
return nil, errors.Wrapf(err, "error getting pv for pvc %s", pvc.Spec.VolumeName)
}
vfd := resourcepolicies.NewVolumeFilterData(pv, nil, pvc)
return resPolicies.GetMatchAction(vfd)
}

@@ -309,8 +309,8 @@ func createNodeObj() *corev1api.Node {

func TestBackupPodVolumes(t *testing.T) {
scheme := runtime.NewScheme()
require.NoError(t, velerov1api.AddToScheme(scheme))
require.NoError(t, corev1api.AddToScheme(scheme))
velerov1api.AddToScheme(scheme)
corev1api.AddToScheme(scheme)
log := logrus.New()

tests := []struct {
@@ -778,7 +778,7 @@ func TestWaitAllPodVolumesProcessed(t *testing.T) {

backuper := newBackupper(c.ctx, log, nil, nil, informer, nil, "", &velerov1api.Backup{})
if c.pvb != nil {
require.NoError(t, backuper.pvbIndexer.Add(c.pvb))
backuper.pvbIndexer.Add(c.pvb)
backuper.wg.Add(1)
}

@@ -833,185 +833,3 @@ func TestPVCBackupSummary(t *testing.T) {
assert.Empty(t, pbs.Skipped)
assert.Len(t, pbs.Backedup, 2)
}

func TestGetMatchAction_PendingPVC(t *testing.T) {
// Create resource policies that skip Pending/Lost PVCs
resPolicies := &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending", "Lost"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
}
policies := &resourcepolicies.Policies{}
err := policies.BuildPolicy(resPolicies)
require.NoError(t, err)

testCases := []struct {
name string
pvc *corev1api.PersistentVolumeClaim
volume *corev1api.Volume
pv *corev1api.PersistentVolume
expectedAction *resourcepolicies.Action
expectError bool
}{
{
name: "Pending PVC with pvcPhase skip policy should return skip action",
pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc").
StorageClass("test-sc").
Phase(corev1api.ClaimPending).
Result(),
volume: &corev1api.Volume{
Name: "test-volume",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pending-pvc",
},
},
},
pv: nil,
expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
expectError: false,
},
{
name: "Lost PVC with pvcPhase skip policy should return skip action",
pvc: builder.ForPersistentVolumeClaim("ns", "lost-pvc").
StorageClass("test-sc").
Phase(corev1api.ClaimLost).
Result(),
volume: &corev1api.Volume{
Name: "test-volume",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "lost-pvc",
},
},
},
pv: nil,
expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip},
expectError: false,
},
{
name: "Bound PVC with matching PV should not match pvcPhase policy",
pvc: builder.ForPersistentVolumeClaim("ns", "bound-pvc").
StorageClass("test-sc").
VolumeName("test-pv").
Phase(corev1api.ClaimBound).
Result(),
volume: &corev1api.Volume{
Name: "test-volume",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "bound-pvc",
},
},
},
pv: builder.ForPersistentVolume("test-pv").StorageClass("test-sc").Result(),
expectedAction: nil,
expectError: false,
},
{
name: "Pending PVC with no matching policy should return nil action",
pvc: builder.ForPersistentVolumeClaim("ns", "pending-pvc-no-match").
StorageClass("test-sc").
Phase(corev1api.ClaimPending).
Result(),
volume: &corev1api.Volume{
Name: "test-volume",
VolumeSource: corev1api.VolumeSource{
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
ClaimName: "pending-pvc-no-match",
},
},
},
pv: nil,
expectedAction: &resourcepolicies.Action{Type: resourcepolicies.Skip}, // Will match the pvcPhase policy
expectError: false,
},
}

for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Build fake client with PV if present
var objs []runtime.Object
if tc.pv != nil {
objs = append(objs, tc.pv)
}
fakeClient := velerotest.NewFakeControllerRuntimeClient(t, objs...)

b := &backupper{
crClient: fakeClient,
}

action, err := b.getMatchAction(policies, tc.pvc, tc.volume)
if tc.expectError {
require.Error(t, err)
} else {
require.NoError(t, err)
}

if tc.expectedAction == nil {
assert.Nil(t, action)
} else {
require.NotNil(t, action)
assert.Equal(t, tc.expectedAction.Type, action.Type)
}
})
}
}

func TestGetMatchAction_PVCWithoutPVLookupError(t *testing.T) {
// Test that when a PVC has a VolumeName but the PV doesn't exist,
// the function ignores the error and tries to match with PVC only
resPolicies := &resourcepolicies.ResourcePolicies{
Version: "v1",
VolumePolicies: []resourcepolicies.VolumePolicy{
{
Conditions: map[string]any{
"pvcPhase": []string{"Pending"},
},
Action: resourcepolicies.Action{
Type: resourcepolicies.Skip,
},
},
},
}
policies := &resourcepolicies.Policies{}
err := policies.BuildPolicy(resPolicies)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Pending PVC without a matching PV in the cluster
|
||||
pvc := builder.ForPersistentVolumeClaim("ns", "pending-pvc").
|
||||
StorageClass("test-sc").
|
||||
Phase(corev1api.ClaimPending).
|
||||
Result()
|
||||
|
||||
volume := &corev1api.Volume{
|
||||
Name: "test-volume",
|
||||
VolumeSource: corev1api.VolumeSource{
|
||||
PersistentVolumeClaim: &corev1api.PersistentVolumeClaimVolumeSource{
|
||||
ClaimName: "pending-pvc",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
// Empty client - no PV exists
|
||||
fakeClient := velerotest.NewFakeControllerRuntimeClient(t)
|
||||
|
||||
b := &backupper{
|
||||
crClient: fakeClient,
|
||||
}
|
||||
|
||||
// Should succeed even though PV lookup would fail
|
||||
// because the function ignores PV lookup errors and uses PVC-only matching
|
||||
action, err := b.getMatchAction(policies, pvc, volume)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, action)
|
||||
assert.Equal(t, resourcepolicies.Skip, action.Type)
|
||||
}
|
||||
|
@@ -666,22 +666,10 @@ func validateNamespaceName(ns string) []error {
        return nil
    }

    // Validate the namespace name to ensure it is a valid wildcard pattern
    if err := wildcard.ValidateNamespaceName(ns); err != nil {
        return []error{err}
    }

    // Kubernetes does not allow wildcard characters in namespaces but Velero uses them
    // for glob patterns. Replace wildcard characters with valid characters to pass
    // Kubernetes validation.
    tmpNamespace := ns

    // Replace glob wildcard characters with valid alphanumeric characters
    // Note: Validation of wildcard patterns is handled by the wildcard package.
    tmpNamespace = strings.ReplaceAll(tmpNamespace, "*", "x") // matches any sequence
    tmpNamespace = strings.ReplaceAll(tmpNamespace, "?", "x") // matches single character
    tmpNamespace = strings.ReplaceAll(tmpNamespace, "[", "x") // character class start
    tmpNamespace = strings.ReplaceAll(tmpNamespace, "]", "x") // character class end
    // Kubernetes does not allow asterisks in namespaces but Velero uses them as
    // wildcards. Replace asterisks with an arbitrary letter to pass Kubernetes
    // validation.
    tmpNamespace := strings.ReplaceAll(ns, "*", "x")

    if errMsgs := validation.ValidateNamespaceName(tmpNamespace, false); errMsgs != nil {
        for _, msg := range errMsgs {
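The replacement trick in the hunk above can be sketched as a standalone program. This is a minimal illustration under stated assumptions, not Velero's implementation: the `dns1123` regexp stands in for apimachinery's `validation.ValidateNamespaceName`, and `validateGlobNamespace` is a hypothetical helper name.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// dns1123 approximates the Kubernetes namespace-name rule (assumption: the
// real check is apimachinery's validation.ValidateNamespaceName).
var dns1123 = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// validateGlobNamespace swaps each glob character for a harmless letter and
// then applies the standard name check, so "kube-*" validates the same way
// "kube-x" would.
func validateGlobNamespace(ns string) bool {
	tmp := ns
	for _, c := range []string{"*", "?", "[", "]"} {
		tmp = strings.ReplaceAll(tmp, c, "x")
	}
	return dns1123.MatchString(tmp)
}

func main() {
	fmt.Println(validateGlobNamespace("kube-*"))  // true
	fmt.Println(validateGlobNamespace("Bad_NS*")) // false
}
```

The point of the substitution is that glob characters never reach the Kubernetes validator, so only the literal portions of the pattern are checked against the naming rules.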
@@ -289,54 +289,6 @@ func TestValidateNamespaceIncludesExcludes(t *testing.T) {
            excludes: []string{"bar"},
            wantErr:  true,
        },
        {
            name:     "glob characters in includes should not error",
            includes: []string{"kube-*", "test-?", "ns-[0-9]"},
            excludes: []string{},
            wantErr:  false,
        },
        {
            name:     "glob characters in excludes should not error",
            includes: []string{"default"},
            excludes: []string{"test-*", "app-?", "ns-[1-5]"},
            wantErr:  false,
        },
        {
            name:     "character class in includes should not error",
            includes: []string{"ns-[abc]", "test-[0-9]"},
            excludes: []string{},
            wantErr:  false,
        },
        {
            name:     "mixed glob patterns should not error",
            includes: []string{"kube-*", "test-?"},
            excludes: []string{"*-test", "debug-[0-9]"},
            wantErr:  false,
        },
        {
            name:     "pipe character in includes should error",
            includes: []string{"namespace|other"},
            excludes: []string{},
            wantErr:  true,
        },
        {
            name:     "parentheses in includes should error",
            includes: []string{"namespace(prod)", "test-(dev)"},
            excludes: []string{},
            wantErr:  true,
        },
        {
            name:     "exclamation mark in includes should error",
            includes: []string{"!namespace", "test!"},
            excludes: []string{},
            wantErr:  true,
        },
        {
            name:     "unsupported characters in excludes should error",
            includes: []string{"default"},
            excludes: []string{"test|prod", "app(staging)"},
            wantErr:  true,
        },
    }

    for _, tc := range tests {
@@ -1130,6 +1082,16 @@ func TestExpandIncludesExcludes(t *testing.T) {
            expectedWildcardExpanded: true,
            expectError:              false,
        },
        {
            name:                     "brace wildcard pattern",
            includes:                 []string{"app-{prod,dev}"},
            excludes:                 []string{},
            activeNamespaces:         []string{"app-prod", "app-dev", "app-test", "default"},
            expectedIncludes:         []string{"app-prod", "app-dev"},
            expectedExcludes:         []string{},
            expectedWildcardExpanded: true,
            expectError:              false,
        },
        {
            name:     "empty activeNamespaces with wildcards",
            includes: []string{"kube-*"},
@@ -1271,6 +1233,13 @@ func TestResolveNamespaceList(t *testing.T) {
            expectedNamespaces: []string{"kube-system", "kube-public"},
            preExpandWildcards: true,
        },
        {
            name:               "complex wildcard pattern",
            includes:           []string{"app-{prod,dev}", "kube-*"},
            excludes:           []string{"*-test"},
            activeNamespaces:   []string{"app-prod", "app-dev", "app-test", "kube-system", "kube-test", "default"},
            expectedNamespaces: []string{"app-prod", "app-dev", "kube-system"},
        },
        {
            name:     "question mark wildcard pattern",
            includes: []string{"ns-?"},
@@ -417,19 +417,19 @@ func MakePodPVCAttachment(volumeName string, volumeMode *corev1api.PersistentVol
    return volumeMounts, volumeDevices, volumePath
}

// GetPVForPVC returns the PersistentVolume backing a PVC
// returns PV, error.
// PV will be nil on error
func GetPVForPVC(
    pvc *corev1api.PersistentVolumeClaim,
    crClient crclient.Client,
) (*corev1api.PersistentVolume, error) {
    if pvc.Spec.VolumeName == "" {
        return nil, errors.Errorf("PVC %s/%s has no volume backing this claim", pvc.Namespace, pvc.Name)
        return nil, errors.Errorf("PVC %s/%s has no volume backing this claim",
            pvc.Namespace, pvc.Name)
    }
    if pvc.Status.Phase != corev1api.ClaimBound {
        return nil, errors.Errorf("PVC %s/%s is in phase %v and is not bound to a volume",
            pvc.Namespace, pvc.Name, pvc.Status.Phase)
        // TODO: confirm if this PVC should be snapshotted if it has no PV bound
        return nil,
            errors.Errorf("PVC %s/%s is in phase %v and is not bound to a volume",
                pvc.Namespace, pvc.Name, pvc.Status.Phase)
    }

    pv := &corev1api.PersistentVolume{}
@@ -31,77 +31,70 @@ func ShouldExpandWildcards(includes []string, excludes []string) bool {
}

// containsWildcardPattern checks if a pattern contains any wildcard symbols
// Supported patterns: *, ?, [abc]
// Supported patterns: *, ?, [abc], {a,b,c}
// Note: . and + are treated as literal characters (not wildcards)
// Note: ** and consecutive asterisks are NOT supported (will cause validation error)
func containsWildcardPattern(pattern string) bool {
    return strings.ContainsAny(pattern, "*?[")
    return strings.ContainsAny(pattern, "*?[{")
}

func validateWildcardPatterns(patterns []string) error {
    for _, pattern := range patterns {
        if err := ValidateNamespaceName(pattern); err != nil {
        // Check for invalid regex-only patterns that we don't support
        if strings.ContainsAny(pattern, "|()") {
            return errors.New("wildcard pattern contains unsupported regex symbols: |, (, )")
        }

        // Check for consecutive asterisks (2 or more)
        if strings.Contains(pattern, "**") {
            return errors.New("wildcard pattern contains consecutive asterisks (only single * allowed)")
        }

        // Check for malformed brace patterns
        if err := validateBracePatterns(pattern); err != nil {
            return err
        }
    }
    return nil
}

func ValidateNamespaceName(pattern string) error {
    // Check for invalid characters that are not supported in glob patterns
    if strings.ContainsAny(pattern, "|()!{},") {
        return errors.New("wildcard pattern contains unsupported characters: |, (, ), !, {, }, ,")
    }

    // Check for consecutive asterisks (2 or more)
    if strings.Contains(pattern, "**") {
        return errors.New("wildcard pattern contains consecutive asterisks (only single * allowed)")
    }

    // Check for malformed brace patterns
    if err := validateBracePatterns(pattern); err != nil {
        return err
    }

    return nil
}

// validateBracePatterns checks for malformed brace patterns like unclosed braces or empty braces
// Also validates bracket patterns [] for character classes
func validateBracePatterns(pattern string) error {
    bracketDepth := 0
    depth := 0

    for i := 0; i < len(pattern); i++ {
        if pattern[i] == '[' {
            bracketStart := i
            bracketDepth++
        if pattern[i] == '{' {
            braceStart := i
            depth++

            // Scan ahead to find the matching closing bracket and validate content
            for j := i + 1; j < len(pattern) && bracketDepth > 0; j++ {
                if pattern[j] == ']' {
                    bracketDepth--
                    if bracketDepth == 0 {
                        // Found matching closing bracket - validate content
                        content := pattern[bracketStart+1 : j]
                        if content == "" {
                            return errors.New("wildcard pattern contains empty bracket pattern '[]'")
            // Scan ahead to find the matching closing brace and validate content
            for j := i + 1; j < len(pattern) && depth > 0; j++ {
                if pattern[j] == '{' {
                    depth++
                } else if pattern[j] == '}' {
                    depth--
                    if depth == 0 {
                        // Found matching closing brace - validate content
                        content := pattern[braceStart+1 : j]
                        if strings.Trim(content, ", \t") == "" {
                            return errors.New("wildcard pattern contains empty brace pattern '{}'")
                        }
                        // Skip to the closing bracket
                        // Skip to the closing brace
                        i = j
                        break
                    }
                }
            }

            // If we exited the loop without finding a match (bracketDepth > 0), bracket is unclosed
            if bracketDepth > 0 {
                return errors.New("wildcard pattern contains unclosed bracket '['")
            // If we exited the loop without finding a match (depth > 0), brace is unclosed
            if depth > 0 {
                return errors.New("wildcard pattern contains unclosed brace '{'")
            }

            // i is now positioned at the closing bracket; the outer loop will increment it
        } else if pattern[i] == ']' {
            // Found a closing bracket without a matching opening bracket
            return errors.New("wildcard pattern contains unmatched closing bracket ']'")
            // i is now positioned at the closing brace; the outer loop will increment it
        } else if pattern[i] == '}' {
            // Found a closing brace without a matching opening brace
            return errors.New("wildcard pattern contains unmatched closing brace '}'")
        }
    }
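The brace validation above can be exercised with a small standalone sketch. The function below is a simplified restatement of the diff's logic for illustration only; the name is hypothetical, and it omits the bracket handling and other checks that live in the real wildcard package.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// validateBraces is a trimmed-down restatement of the brace checks above:
// reject unclosed '{', unmatched '}', and empty '{}' groups.
func validateBraces(pattern string) error {
	depth := 0
	for i := 0; i < len(pattern); i++ {
		switch pattern[i] {
		case '{':
			start := i
			depth++
			// Scan ahead for the matching closing brace.
			for j := i + 1; j < len(pattern) && depth > 0; j++ {
				if pattern[j] == '{' {
					depth++
				} else if pattern[j] == '}' {
					depth--
					if depth == 0 {
						// Braces holding only commas/whitespace are empty.
						if strings.Trim(pattern[start+1:j], ", \t") == "" {
							return errors.New("empty brace pattern '{}'")
						}
						i = j // resume after the closing brace
					}
				}
			}
			if depth > 0 {
				return errors.New("unclosed brace '{'")
			}
		case '}':
			return errors.New("unmatched closing brace '}'")
		}
	}
	return nil
}

func main() {
	for _, p := range []string{"app-{prod,staging}", "app-{", "app-}", "app-{}"} {
		fmt.Printf("%q -> %v\n", p, validateBraces(p))
	}
}
```

A single forward scan suffices here because a `}` reached by the outer loop can only be one with no matching `{` before it; everything balanced is consumed by the inner scan.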
@@ -90,7 +90,7 @@ func TestShouldExpandWildcards(t *testing.T) {
|
||||
name: "brace alternatives wildcard",
|
||||
includes: []string{"ns{prod,staging}"},
|
||||
excludes: []string{},
|
||||
expected: false, // brace alternatives are not supported
|
||||
expected: true, // brace alternatives are considered wildcard
|
||||
},
|
||||
{
|
||||
name: "dot is literal - not wildcard",
|
||||
@@ -237,9 +237,9 @@ func TestExpandWildcards(t *testing.T) {
|
||||
activeNamespaces: []string{"app-prod", "app-staging", "app-dev", "db-prod"},
|
||||
includes: []string{"app-{prod,staging}"},
|
||||
excludes: []string{},
|
||||
expectedIncludes: nil,
|
||||
expectedIncludes: []string{"app-prod", "app-staging"}, // {prod,staging} matches either
|
||||
expectedExcludes: nil,
|
||||
expectError: true,
|
||||
expectError: false,
|
||||
},
|
||||
{
|
||||
name: "literal dot and plus patterns",
|
||||
@@ -259,6 +259,33 @@ func TestExpandWildcards(t *testing.T) {
            expectedExcludes: nil,
            expectError:      true, // |, (, ) are not supported
        },
        {
            name:             "unclosed brace patterns should error",
            activeNamespaces: []string{"app-prod"},
            includes:         []string{"app-{prod,staging"},
            excludes:         []string{},
            expectedIncludes: nil,
            expectedExcludes: nil,
            expectError:      true, // unclosed brace
        },
        {
            name:             "empty brace patterns should error",
            activeNamespaces: []string{"app-prod"},
            includes:         []string{"app-{}"},
            excludes:         []string{},
            expectedIncludes: nil,
            expectedExcludes: nil,
            expectError:      true, // empty braces
        },
        {
            name:             "unmatched closing brace should error",
            activeNamespaces: []string{"app-prod"},
            includes:         []string{"app-prod}"},
            excludes:         []string{},
            expectedIncludes: nil,
            expectedExcludes: nil,
            expectError:      true, // unmatched closing brace
        },
    }

    for _, tt := range tests {
@@ -327,6 +354,13 @@ func TestExpandWildcardsPrivate(t *testing.T) {
            expected:    []string{}, // returns empty slice, not nil
            expectError: false,
        },
        {
            name:             "brace patterns work correctly",
            patterns:         []string{"app-{prod,staging}"},
            activeNamespaces: []string{"app-prod", "app-staging", "app-dev", "app-{prod,staging}"},
            expected:         []string{"app-prod", "app-staging"}, // brace patterns do expand
            expectError:      false,
        },
        {
            name:     "duplicate matches from multiple patterns",
            patterns: []string{"app-*", "*-prod"},
@@ -355,6 +389,20 @@ func TestExpandWildcardsPrivate(t *testing.T) {
            expected:    []string{"nsa", "nsb", "nsc"}, // [a-c] matches a to c
            expectError: false,
        },
        {
            name:             "negated character class",
            patterns:         []string{"ns[!abc]"},
            activeNamespaces: []string{"nsa", "nsb", "nsc", "nsd", "ns1"},
            expected:         []string{"nsd", "ns1"}, // [!abc] matches anything except a, b, c
            expectError:      false,
        },
        {
            name:             "brace alternatives",
            patterns:         []string{"app-{prod,test}"},
            activeNamespaces: []string{"app-prod", "app-test", "app-staging", "db-prod"},
            expected:         []string{"app-prod", "app-test"}, // {prod,test} matches either
            expectError:      false,
        },
        {
            name:     "double asterisk should error",
            patterns: []string{"**"},
@@ -362,6 +410,13 @@ func TestExpandWildcardsPrivate(t *testing.T) {
            expected:    nil,
            expectError: true, // ** is not allowed
        },
        {
            name:             "literal dot and plus",
            patterns:         []string{"app.prod", "service+"},
            activeNamespaces: []string{"app.prod", "appXprod", "service+", "service"},
            expected:         []string{"app.prod", "service+"}, // . and + are literal
            expectError:      false,
        },
        {
            name:     "unsupported regex symbols should error",
            patterns: []string{"ns(1|2)"},
@@ -413,101 +468,153 @@ func TestValidateBracePatterns(t *testing.T) {
        expectError bool
        errorMsg    string
    }{
        // Valid square bracket patterns
        // Valid patterns
        {
            name:        "valid square bracket pattern",
            pattern:     "ns[abc]",
            name:        "valid single brace pattern",
            pattern:     "app-{prod,staging}",
            expectError: false,
        },
        {
            name:        "valid square bracket pattern with range",
            pattern:     "ns[a-z]",
            name:        "valid brace with single option",
            pattern:     "app-{prod}",
            expectError: false,
        },
        {
            name:        "valid square bracket pattern with numbers",
            pattern:     "ns[0-9]",
            name:        "valid brace with three options",
            pattern:     "app-{prod,staging,dev}",
            expectError: false,
        },
        {
            name:        "valid square bracket pattern with mixed",
            pattern:     "ns[a-z0-9]",
            name:        "valid pattern with text before and after brace",
            pattern:     "prefix-{a,b}-suffix",
            expectError: false,
        },
        {
            name:        "valid square bracket pattern with single character",
            pattern:     "ns[a]",
            name:        "valid pattern with no braces",
            pattern:     "app-prod",
            expectError: false,
        },
        {
            name:        "valid square bracket pattern with text before and after",
            pattern:     "prefix-[abc]-suffix",
            name:        "valid pattern with asterisk",
            pattern:     "app-*",
            expectError: false,
        },
        // Unclosed opening brackets
        {
            name:        "unclosed opening bracket at end",
            pattern:     "ns[abc",
            expectError: true,
            errorMsg:    "unclosed bracket",
            name:        "valid brace with spaces around content",
            pattern:     "app-{ prod , staging }",
            expectError: false,
        },
        {
            name:        "unclosed opening bracket at start",
            pattern:     "[abc",
            expectError: true,
            errorMsg:    "unclosed bracket",
            name:        "valid brace with numbers",
            pattern:     "ns-{1,2,3}",
            expectError: false,
        },
        {
            name:        "unclosed opening bracket in middle",
            pattern:     "ns[abc-test",
            expectError: true,
            errorMsg:    "unclosed bracket",
            name:        "valid brace with hyphens in options",
            pattern:     "{app-prod,db-staging}",
            expectError: false,
        },

        // Unmatched closing brackets
        // Unclosed opening braces
        {
            name:        "unmatched closing bracket at end",
            pattern:     "ns-abc]",
            name:        "unclosed opening brace at end",
            pattern:     "app-{prod,staging",
            expectError: true,
            errorMsg:    "unmatched closing bracket",
            errorMsg:    "unclosed brace",
        },
        {
            name:        "unmatched closing bracket at start",
            pattern:     "]ns-abc",
            name:        "unclosed opening brace at start",
            pattern:     "{prod,staging",
            expectError: true,
            errorMsg:    "unmatched closing bracket",
            errorMsg:    "unclosed brace",
        },
        {
            name:        "unmatched closing bracket in middle",
            pattern:     "ns-]abc",
            name:        "unclosed opening brace in middle",
            pattern:     "app-{prod-test",
            expectError: true,
            errorMsg:    "unmatched closing bracket",
            errorMsg:    "unclosed brace",
        },
        {
            name:        "extra closing bracket after valid pair",
            pattern:     "ns[abc]]",
            name:        "multiple unclosed braces",
            pattern:     "app-{prod-{staging",
            expectError: true,
            errorMsg:    "unmatched closing bracket",
            errorMsg:    "unclosed brace",
        },

        // Empty bracket patterns
        // Unmatched closing braces
        {
            name:        "completely empty brackets",
            pattern:     "ns[]",
            name:        "unmatched closing brace at end",
            pattern:     "app-prod}",
            expectError: true,
            errorMsg:    "empty bracket pattern",
            errorMsg:    "unmatched closing brace",
        },
        {
            name:        "empty brackets at start",
            pattern:     "[]ns",
            name:        "unmatched closing brace at start",
            pattern:     "}app-prod",
            expectError: true,
            errorMsg:    "empty bracket pattern",
            errorMsg:    "unmatched closing brace",
        },
        {
            name:        "empty brackets standalone",
            pattern:     "[]",
            name:        "unmatched closing brace in middle",
            pattern:     "app-}prod",
            expectError: true,
            errorMsg:    "empty bracket pattern",
            errorMsg:    "unmatched closing brace",
        },
        {
            name:        "extra closing brace after valid pair",
            pattern:     "app-{prod,staging}}",
            expectError: true,
            errorMsg:    "unmatched closing brace",
        },

        // Empty brace patterns
        {
            name:        "completely empty braces",
            pattern:     "app-{}",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "braces with only spaces",
            pattern:     "app-{ }",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "braces with only comma",
            pattern:     "app-{,}",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "braces with only commas",
            pattern:     "app-{,,,}",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "braces with commas and spaces",
            pattern:     "app-{ , , }",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "braces with tabs and commas",
            pattern:     "app-{\t,\t}",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "empty braces at start",
            pattern:     "{}app-prod",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },
        {
            name:        "empty braces standalone",
            pattern:     "{}",
            expectError: true,
            errorMsg:    "empty brace pattern",
        },

        // Edge cases
@@ -516,6 +623,58 @@ func TestValidateBracePatterns(t *testing.T) {
            pattern:     "",
            expectError: false,
        },
        {
            name:        "pattern with only opening brace",
            pattern:     "{",
            expectError: true,
            errorMsg:    "unclosed brace",
        },
        {
            name:        "pattern with only closing brace",
            pattern:     "}",
            expectError: true,
            errorMsg:    "unmatched closing brace",
        },
        {
            name:        "valid brace with special characters inside",
            pattern:     "app-{prod-1,staging_2,dev.3}",
            expectError: false,
        },
        {
            name:        "brace with asterisk inside option",
            pattern:     "app-{prod*,staging}",
            expectError: false,
        },
        {
            name:        "multiple valid brace patterns",
            pattern:     "{app,db}-{prod,staging}",
            expectError: false,
        },
        {
            name:        "brace with single character",
            pattern:     "app-{a}",
            expectError: false,
        },
        {
            name:        "brace with trailing comma but has content",
            pattern:     "app-{prod,staging,}",
            expectError: false, // Has content, so it's valid
        },
        {
            name:        "brace with leading comma but has content",
            pattern:     "app-{,prod,staging}",
            expectError: false, // Has content, so it's valid
        },
        {
            name:        "nested opening brace should error",
            pattern:     "app-{{,prod,staging}",
            expectError: true, // unclosed brace
        },
        {
            name:        "extra closing brace with leading comma should error",
            pattern:     "app-{,prod,staging}}",
            expectError: true, // unmatched closing brace
        },
    }

    for _, tt := range tests {
@@ -564,6 +723,20 @@ func TestExpandWildcardsEdgeCases(t *testing.T) {
        assert.ElementsMatch(t, []string{"ns-1", "ns_2", "ns.3", "ns@4"}, result)
    })

    t.Run("complex glob combinations", func(t *testing.T) {
        activeNamespaces := []string{"app1-prod", "app2-prod", "app1-test", "db-prod", "service"}
        result, err := expandWildcards([]string{"app?-{prod,test}"}, activeNamespaces)
        require.NoError(t, err)
        assert.ElementsMatch(t, []string{"app1-prod", "app2-prod", "app1-test"}, result)
    })

    t.Run("escaped characters", func(t *testing.T) {
        activeNamespaces := []string{"app*", "app-prod", "app?test", "app-test"}
        result, err := expandWildcards([]string{"app\\*"}, activeNamespaces)
        require.NoError(t, err)
        assert.ElementsMatch(t, []string{"app*"}, result)
    })

    t.Run("mixed literal and wildcard patterns", func(t *testing.T) {
        activeNamespaces := []string{"app.prod", "app-prod", "app_prod", "test.ns"}
        result, err := expandWildcards([]string{"app.prod", "app?prod"}, activeNamespaces)
@@ -604,8 +777,12 @@ func TestExpandWildcardsEdgeCases(t *testing.T) {
        shouldError bool
    }{
        {"unclosed bracket", "ns[abc", true},
        {"unclosed brace", "app-{prod,staging", true},
        {"nested unclosed", "ns[a{bc", true},
        {"valid bracket", "ns[abc]", false},
        {"valid brace", "app-{prod,staging}", false},
        {"empty bracket", "ns[]", true}, // empty brackets are invalid
        {"empty brace", "app-{}", true}, // empty braces are invalid
    }

    for _, tt := range tests {
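The brace-alternative expansion exercised by the tests above can be sketched with the standard library: expand `{a,b}` groups by hand, then let `path.Match` handle `*`, `?`, and `[...]`. The helper names are hypothetical and this is not Velero's actual `expandWildcards`; it assumes validation has already rejected malformed patterns.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// expandBraces rewrites "app-{prod,dev}" into ["app-prod", "app-dev"],
// recursing so that multiple sequential brace groups are all expanded.
func expandBraces(pattern string) []string {
	open := strings.Index(pattern, "{")
	if open < 0 {
		return []string{pattern}
	}
	end := strings.Index(pattern[open:], "}")
	if end < 0 {
		return []string{pattern} // unclosed; validation would reject this earlier
	}
	end += open
	var out []string
	for _, alt := range strings.Split(pattern[open+1:end], ",") {
		out = append(out, expandBraces(pattern[:open]+alt+pattern[end+1:])...)
	}
	return out
}

// matchNamespaces returns the active namespaces matched by the pattern.
func matchNamespaces(pattern string, active []string) []string {
	var matched []string
	for _, ns := range active {
		for _, p := range expandBraces(pattern) {
			if ok, err := path.Match(p, ns); err == nil && ok {
				matched = append(matched, ns)
				break
			}
		}
	}
	return matched
}

func main() {
	active := []string{"app-prod", "app-dev", "app-test", "kube-system"}
	fmt.Println(matchNamespaces("app-{prod,dev}", active)) // [app-prod app-dev]
	fmt.Println(matchNamespaces("kube-*", active))         // [kube-system]
}
```

Note that `path.Match` has no brace support of its own, which is why the alternatives must be expanded into plain glob patterns first.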
@@ -15,7 +15,6 @@ params:
  latest: v1.17
  versions:
    - main
    - v1.18
    - v1.17
    - v1.16
    - v1.15
@@ -16,8 +16,6 @@ Backup belongs to the API group version `velero.io/v1`.

Here is a sample `Backup` object with each of the fields documented:

**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.

```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
@@ -44,12 +42,11 @@ spec:
  resourcePolicy:
    kind: configmap
    name: resource-policy-configmap
  # Array of namespaces to include in the backup. Accepts glob patterns (*, ?, [abc]).
  # Note: '*' alone is reserved for empty fields, which means all namespaces.
  # If unspecified, all namespaces are included. Optional.
  # Array of namespaces to include in the backup. If unspecified, all namespaces are included.
  # Optional.
  includedNamespaces:
    - '*'
  # Array of namespaces to exclude from the backup. Accepts glob patterns (*, ?, [abc]). Optional.
  # Array of namespaces to exclude from the backup. Optional.
  excludedNamespaces:
    - some-namespace
  # Array of resources to include in the backup. Resources may be shortcuts (for example 'po' for 'pods')
@@ -16,8 +16,6 @@ Restore belongs to the API group version `velero.io/v1`.

Here is a sample `Restore` object with each of the fields documented:

**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.

```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
@@ -47,11 +45,11 @@ spec:
    writeSparseFiles: true
    # ParallelFilesDownload is the concurrency number setting for restore
    parallelFilesDownload: 10
  # Array of namespaces to include in the restore. Accepts glob patterns (*, ?, [abc]).
  # If unspecified, all namespaces are included. Optional.
  # Array of namespaces to include in the restore. If unspecified, all namespaces are included.
  # Optional.
  includedNamespaces:
    - '*'
  # Array of namespaces to exclude from the restore. Accepts glob patterns (*, ?, [abc]). Optional.
  # Array of namespaces to exclude from the restore. Optional.
  excludedNamespaces:
    - some-namespace
  # Array of resources to include in the restore. Resources may be shortcuts (for example 'po' for 'pods')
@@ -42,6 +42,46 @@ A command to do this is `make new-changelog CHANGELOG_BODY="Changes you have mad

If a PR does not warrant a changelog, the CI check for a changelog can be skipped by applying a `changelog-not-required` label on the PR. If you are making a PR on a release branch, you should still make a new file in the `changelogs/unreleased` folder on the release branch for your change.

## AI-Generated Content

We welcome contributions from all developers, including those who use AI tools to assist in their work. However, to maintain code quality and ensure contributions are accurate and appropriate, please follow these guidelines:

### Using AI Assistance

**Acceptable use:**
- Using AI tools (like GitHub Copilot, ChatGPT, Claude, etc.) to generate scaffolding or boilerplate code
- Getting AI assistance for writing unit tests
- Using AI to help understand complex code patterns
- AI-assisted debugging and problem-solving
- Using AI to help with documentation writing

**Requirements when using AI:**
1. **Always review and verify** AI-generated content before submitting
2. **Test thoroughly** - ensure the code works as expected in your environment
3. **Verify technical accuracy** - check that all version numbers, configurations, and technical details are correct
4. **Remove placeholders** - ensure there is no example or placeholder content
5. **Understand the code** - be able to explain and defend your changes during code review
6. **Disclose AI usage** - if a significant portion of your PR was AI-generated, mention it in the PR description

### What to Avoid

**Unacceptable practices:**
- Submitting entirely AI-generated PRs or issues without review or verification
- Including hallucinated information (false version numbers, non-existent APIs, etc.)
- Copying AI-generated content with placeholder or example data
- Submitting AI-generated issues describing problems you haven't actually experienced
- Using AI to generate issues about features or bugs without verifying they exist

### For Issues

When creating issues with AI assistance:
- Ensure the issue describes a **real problem** you have experienced
- Verify all version numbers, error messages, and configurations are from your actual environment
- Remove any AI-generated boilerplate or overly formal structure
- Focus on clarity and accuracy over comprehensive formatting

Issues that appear to be entirely AI-generated without proper verification may be labeled as `potential-ai-generated` and flagged for additional review.

## Copyright header

Whenever a source code file is being modified, the copyright notice should be updated to our standard copyright notice. That is, it should read “Copyright the Velero contributors.”
@@ -1,71 +0,0 @@
---
title: "Namespace Glob Patterns"
layout: docs
---

When using the `--include-namespaces` and `--exclude-namespaces` flags with backup and restore commands, you can use glob patterns to match multiple namespaces.

## Supported Patterns

Velero supports the following glob pattern characters:

- `*` - Matches any sequence of characters

  ```bash
  velero backup create my-backup --include-namespaces "app-*"
  # Matches: app-prod, app-staging, app-dev, etc.
  ```

- `?` - Matches exactly one character

  ```bash
  velero backup create my-backup --include-namespaces "ns?"
  # Matches: ns1, ns2, nsa, but NOT ns10
  ```

- `[abc]` - Matches any single character in the brackets

  ```bash
  velero backup create my-backup --include-namespaces "ns[123]"
  # Matches: ns1, ns2, ns3
  ```

- `[a-z]` - Matches any single character in the range

  ```bash
  velero backup create my-backup --include-namespaces "ns[a-c]"
  # Matches: nsa, nsb, nsc
  ```

## Unsupported Patterns

The following patterns are **not supported** and will cause validation errors:

- `**` - Consecutive asterisks
- `|` - Alternation (a regex operator)
- `()` - Grouping (regex operators)
- `!` - Negation
- `{}` - Brace expansion
- `,` - Comma (used in brace expansion)

## Special Cases

- `*` alone means "all namespaces" and is not expanded
- Empty brackets `[]` are invalid
- Unmatched or unclosed brackets will cause validation errors

## Examples

Combine patterns with the include and exclude flags:

```bash
# Back up all production namespaces except test
velero backup create prod-backup \
    --include-namespaces "*-prod" \
    --exclude-namespaces "test-*"

# Back up specific numbered namespaces
velero backup create numbered-backup \
    --include-namespaces "app-[0-9]"

# Restore namespaces matching multiple patterns
velero restore create my-restore \
    --from-backup my-backup \
    --include-namespaces "frontend-*,backend-*"
```

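The pattern characters documented above correspond closely to Go's standard `path.Match` globbing, which supports exactly `*`, `?`, and `[...]` character classes and rejects malformed patterns such as unclosed brackets. As an illustrative sketch only (not Velero's actual implementation), matching a namespace name against a single pattern might look like this; the `matchNamespace` helper is hypothetical:

```go
package main

import (
	"fmt"
	"path"
)

// matchNamespace reports whether a namespace name matches a single glob
// pattern using *, ?, and [...] classes. A bare "*" means "all namespaces"
// and is not expanded. Illustrative sketch; Velero's real validation and
// matching code may differ.
func matchNamespace(pattern, namespace string) (bool, error) {
	if pattern == "*" {
		return true, nil // "*" alone selects everything
	}
	// path.Match supports the same *, ?, [abc], and [a-c] forms documented
	// above, and returns ErrBadPattern for malformed input such as "ns[".
	return path.Match(pattern, namespace)
}

func main() {
	for _, ns := range []string{"app-prod", "app-staging", "test-1"} {
		ok, err := matchNamespace("app-*", ns)
		fmt.Println(ns, ok, err)
	}
}
```

Note that `path.Match` reports an error for an unclosed bracket, which mirrors the validation-error behavior described under "Special Cases".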
@@ -17,11 +17,7 @@ Wildcard takes precedence when both a wildcard and specific resource are include

### --include-namespaces

Namespaces to include. Accepts glob patterns (`*`, `?`, `[abc]`). Default is `*`, all namespaces.

See [Namespace Glob Patterns](namespace-glob-patterns) for more details on supported patterns.

Note: `*` alone is reserved for empty fields, which means all namespaces.
Namespaces to include. Default is `*`, all namespaces.

* Back up a namespace and its objects.

@@ -162,9 +158,7 @@ Wildcard excludes are ignored.

### --exclude-namespaces

Namespaces to exclude. Accepts glob patterns (`*`, `?`, `[abc]`).

See [Namespace Glob Patterns](namespace-glob-patterns.md) for more details on supported patterns.
Namespaces to exclude.

* Exclude kube-system from the cluster backup.

@@ -1,58 +0,0 @@
---
toc: "false"
cascade:
  version: v1.18
  toc: "true"
---
![100]

[![Build Status][1]][2]

## Overview

Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you:

* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
* Replicate your production cluster to development and testing clusters.

Velero consists of:

* A server that runs on your cluster
* A command-line client that runs locally

## Documentation

This site is our documentation home with installation instructions, plus information about customizing Velero for your needs, architecture, extending Velero, contributing to Velero, and more.

Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.

## Troubleshooting

If you encounter issues, review the [troubleshooting docs][30], [file an issue][4], or talk to us on the [#velero-users channel][25] on the Kubernetes Slack server.

## Contributing

If you are ready to jump in and test, add code, or help with documentation, follow the instructions on our [Start contributing](https://velero.io/docs/v1.18.0/start-contributing/) documentation for guidance on how to set up Velero for development.

## Changelog

See [the list of releases][6] to find out about feature changes.

[1]: https://github.com/vmware-tanzu/velero/workflows/Main%20CI/badge.svg
[2]: https://github.com/vmware-tanzu/velero/actions?query=workflow%3A"Main+CI"

[4]: https://github.com/vmware-tanzu/velero/issues
[6]: https://github.com/vmware-tanzu/velero/releases

[9]: https://kubernetes.io/docs/setup/
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
[12]: https://github.com/kubernetes/kubernetes/blob/main/cluster/addons/dns/README.md
[14]: https://github.com/kubernetes/kubernetes
[24]: https://groups.google.com/forum/#!forum/projectvelero
[25]: https://kubernetes.slack.com/messages/velero-users

[30]: troubleshooting.md

[100]: img/velero.png

@@ -1,21 +0,0 @@
---
title: "Table of Contents"
layout: docs
---

## API types

Here we list the API types that have some functionality you can only configure via json/yaml, rather than the `velero` CLI (for example, hooks):

* [Backup][1]
* [Restore][2]
* [Schedule][3]
* [BackupStorageLocation][4]
* [VolumeSnapshotLocation][5]

[1]: backup.md
[2]: restore.md
[3]: schedule.md
[4]: backupstoragelocation.md
[5]: volumesnapshotlocation.md

@@ -1,19 +0,0 @@
---
layout: docs
title: API types
---

Here's a list of the API types that have some functionality you can only configure via json/yaml, rather than the `velero` CLI (for example, hooks):

* [Backup][1]
* [Restore][2]
* [Schedule][3]
* [BackupStorageLocation][4]
* [VolumeSnapshotLocation][5]

[1]: backup.md
[2]: restore.md
[3]: schedule.md
[4]: backupstoragelocation.md
[5]: volumesnapshotlocation.md

@@ -1,213 +0,0 @@
---
title: "Backup API Type"
layout: docs
---

## Use

Use the `Backup` API type to request the Velero server to perform a backup. Once created, the
Velero server immediately starts the backup process.

## API GroupVersion

Backup belongs to the API group version `velero.io/v1`.

## Definition

Here is a sample `Backup` object with each of the fields documented:

**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.

```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Backup
# Standard Kubernetes metadata. Required.
metadata:
  # Backup name. May be any valid Kubernetes object name. Required.
  name: a
  # Backup namespace. Must be the namespace of the Velero server. Required.
  namespace: velero
# Parameters about the backup. Required.
spec:
  # CSISnapshotTimeout specifies the time used to wait for
  # the CSI VolumeSnapshot status to turn to ReadyToUse during creation, before
  # returning a timeout error. The default value is 10 minutes.
  csiSnapshotTimeout: 10m
  # ItemOperationTimeout specifies the time used to wait for
  # asynchronous BackupItemAction operations.
  # The default value is 4 hours.
  itemOperationTimeout: 4h
  # resourcePolicy specifies the referenced resource policies that the backup should follow.
  # Optional.
  resourcePolicy:
    kind: configmap
    name: resource-policy-configmap
  # Array of namespaces to include in the backup. Accepts glob patterns (*, ?, [abc]).
  # If unspecified, all namespaces are included. Optional.
  includedNamespaces:
  - '*'
  # Array of namespaces to exclude from the backup. Accepts glob patterns (*, ?, [abc]). Optional.
  excludedNamespaces:
  - some-namespace
  # Array of resources to include in the backup. Resources may be shortcuts (for example 'po' for 'pods')
  # or fully-qualified. If unspecified, all resources are included. Optional.
  includedResources:
  - '*'
  # Array of resources to exclude from the backup. Resources may be shortcuts (for example 'po' for 'pods')
  # or fully-qualified. Optional.
  excludedResources:
  - storageclasses.storage.k8s.io
  # Order of the resources to be collected during the backup process. It's a map with the key being the plural resource
  # name, and the value being a list of object names separated by commas. Each resource name has the format "namespace/objectname".
  # For cluster resources, simply use "objectname". Optional.
  orderedResources:
    pods: mysql/mysql-cluster-replica-0,mysql/mysql-cluster-replica-1,mysql/mysql-cluster-source-0
    persistentvolumes: pvc-87ae0832-18fd-4f40-a2a4-5ed4242680c4,pvc-63be1bb0-90f5-4629-a7db-b8ce61ee29b3
  # Whether to include cluster-scoped resources. Valid values are true, false, and
  # null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
  # resources and the label selector). If false, no cluster-scoped resources are included. If unset,
  # all cluster-scoped resources are included if and only if all namespaces are included and there are
  # no excluded namespaces. Otherwise, if there is at least one namespace specified in either
  # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
  # up are those associated with namespace-scoped resources included in the backup. For example, if a
  # PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
  # cluster-scoped) would also be backed up.
  includeClusterResources: null
  # Array of cluster-scoped resources to exclude from the backup. Resources may be shortcuts
  # (for example 'sc' for 'storageclasses'), or fully-qualified. If unspecified,
  # no additional cluster-scoped resources are excluded. Optional.
  # Cannot work with include-resources, exclude-resources and include-cluster-resources.
  excludedClusterScopedResources: {}
  # Array of cluster-scoped resources to include in the backup. Resources may be shortcuts
  # (for example 'sc' for 'storageclasses'), or fully-qualified. If unspecified,
  # no additional cluster-scoped resources are included. Optional.
  # Cannot work with include-resources, exclude-resources and include-cluster-resources.
  includedClusterScopedResources: {}
  # Array of namespace-scoped resources to exclude from the backup. Resources may be shortcuts
  # (for example 'cm' for 'configmaps'), or fully-qualified. If unspecified,
  # no namespace-scoped resources are excluded. Optional.
  # Cannot work with include-resources, exclude-resources and include-cluster-resources.
  excludedNamespaceScopedResources: {}
  # Array of namespace-scoped resources to include in the backup. Resources may be shortcuts
  # (for example 'cm' for 'configmaps'), or fully-qualified. If unspecified,
  # all namespace-scoped resources are included. Optional.
  # Cannot work with include-resources, exclude-resources and include-cluster-resources.
  includedNamespaceScopedResources: {}
  # Individual objects must match this label selector to be included in the backup. Optional.
  labelSelector:
    matchLabels:
      app: velero
      component: server
  # Individual objects matching any of the label selectors specified in the set are included in the backup. Optional.
  # orLabelSelectors and labelSelector cannot co-exist; only one of them can be specified in the backup request.
  orLabelSelectors:
  - matchLabels:
      app: velero
  - matchLabels:
      app: data-protection
  # Whether or not to snapshot volumes. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
  # a persistent volume provider is configured for Velero.
  snapshotVolumes: null
  # Where to store the tarball and logs.
  storageLocation: aws-primary
  # The list of locations in which to store volume snapshots created for this backup.
  volumeSnapshotLocations:
  - aws-primary
  - gcp-primary
  # The amount of time before this backup is eligible for garbage collection. If not specified,
  # a default value of 30 days will be used. The default can be configured on the velero server
  # by passing the flag --default-backup-ttl.
  ttl: 24h0m0s
  # Whether pod volume file system backup should be used for all volumes by default.
  defaultVolumesToFsBackup: true
  # Whether snapshot data should be moved. If set, data movement is launched after the snapshot is created.
  snapshotMoveData: true
  # The data mover to be used by the backup. If the value is "" or "velero", the built-in data mover will be used.
  datamover: velero
  # UploaderConfig specifies the configuration for the uploader.
  uploaderConfig:
    # ParallelFilesUpload is the number of parallel file uploads to perform when using the uploader.
    parallelFilesUpload: 10
  # Actions to perform at different times during a backup. The only hook supported is
  # executing a command in a container in a pod using the pod exec API. Optional.
  hooks:
    # Array of hooks that are applicable to specific resources. Optional.
    resources:
      -
        # Name of the hook. Will be displayed in the backup log.
        name: my-hook
        # Array of namespaces to which this hook applies. If unspecified, the hook applies to all
        # namespaces. Optional.
        includedNamespaces:
        - '*'
        # Array of namespaces to which this hook does not apply. Optional.
        excludedNamespaces:
        - some-namespace
        # Array of resources to which this hook applies. The only resource supported at this time is
        # pods.
        includedResources:
        - pods
        # Array of resources to which this hook does not apply. Optional.
        excludedResources: []
        # This hook only applies to objects matching this label selector. Optional.
        labelSelector:
          matchLabels:
            app: velero
            component: server
        # An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
        pre:
          -
            # The type of hook. This must be "exec".
            exec:
              # The name of the container where the command will be executed. If unspecified, the
              # first container in the pod will be used. Optional.
              container: my-container
              # The command to execute, specified as an array. Required.
              command:
              - /bin/uname
              - -a
              # How to handle an error executing the command. Valid values are Fail and Continue.
              # Defaults to Fail. Optional.
              onError: Fail
              # How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
              timeout: 10s
        # An array of hooks to run after all custom actions and additional items have been
        # processed. Only "exec" hooks are supported.
        post:
          # Same content as pre above.
# Status about the Backup. Users should not set any data here.
status:
  # The version of this Backup. The only version supported is 1.
  version: 1
  # The date and time when the Backup is eligible for garbage collection.
  expiration: null
  # The current phase.
  # Valid values are New, FailedValidation, InProgress, WaitingForPluginOperations,
  # WaitingForPluginOperationsPartiallyFailed, Finalizing,
  # FinalizingPartiallyFailed, Completed, PartiallyFailed, Failed.
  phase: ""
  # An array of any validation errors encountered.
  validationErrors: null
  # Date/time when the backup started being processed.
  startTimestamp: 2019-04-29T15:58:43Z
  # Date/time when the backup finished being processed.
  completionTimestamp: 2019-04-29T15:58:56Z
  # Number of volume snapshots that Velero tried to create for this backup.
  volumeSnapshotsAttempted: 2
  # Number of volume snapshots that Velero successfully created for this backup.
  volumeSnapshotsCompleted: 1
  # Number of attempted BackupItemAction operations for this backup.
  backupItemOperationsAttempted: 2
  # Number of BackupItemAction operations that Velero successfully completed for this backup.
  backupItemOperationsCompleted: 1
  # Number of BackupItemAction operations that ended in failure for this backup.
  backupItemOperationsFailed: 0
  # Number of warnings that were logged by the backup.
  warnings: 2
  # Number of errors that were logged by the backup.
  errors: 0
  # An error that caused the entire backup to fail.
  failureReason: ""
```

@@ -1,112 +0,0 @@
---
title: "Velero Backup Storage Locations"
layout: docs
---

## Backup Storage Location

Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.

Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`; however, the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.

A sample YAML `BackupStorageLocation` looks like the following:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  backupSyncPeriod: 2m0s
  provider: aws
  objectStorage:
    bucket: myBucket
  credential:
    name: secret-name
    key: key-in-secret
  config:
    region: us-west-2
    profile: "default"
```

### Example with self-signed certificate

When using object storage with self-signed certificates, you can specify the CA certificate:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero-backups
    # Base64-encoded CA certificate (deprecated - use caCertRef instead)
    caCert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR1VENDQXFHZ0F3SUJBZ0lVTWRiWkNaYnBhcE9lYThDR0NMQnhhY3dVa213d0RRWUpLb1pJaHZjTkFRRUwKQlFBd2JERUxNQWtHQTFVRUJoTUNWVk14RXpBUkJnTlZCQWdNQ2tOaGJHbG1iM0p1YVdFeEZqQVVCZ05WQkFjTQpEVk5oYmlCR2NtRnVZMmx6WTI4eEdEQVdCZ05WQkFvTUQwVjRZVzF3YkdVZ1EyOXRjR0Z1ZVRFV01CUUdBMVVFCkF3d05aWGhoYlhCc1pTNXNiMk5oYkRBZUZ3MHlNekEzTVRBeE9UVXlNVGhhRncweU5EQTNNRGt4T1RVeU1UaGEKTUd3eEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUNEQXBEWEJ4cG1iM0p1YVdFeEZqQVVCZ05WQkFjTURWTmgKYmlCR2NtRnVZMmx6WTI4eEdEQVdCZ05WQkFvTUQwVjRZVzF3YkdVZ1EyOXRjR0Z1ZVRFV01CUUdBMVVFQXd3TgpaWGhoYlhCc1pTNXNiMk5oYkRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS1dqCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  config:
    region: us-east-1
    s3Url: https://minio.example.com
```

#### Using a CA Certificate with Secret Reference (Recommended)

The recommended approach is to use `caCertRef` to reference a Secret containing the CA certificate:

```yaml
# First, create a Secret containing the CA certificate
apiVersion: v1
kind: Secret
metadata:
  name: storage-ca-cert
  namespace: velero
type: Opaque
data:
  ca-bundle.crt: <base64-encoded-certificate>

---
# Then reference it in the BackupStorageLocation
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: myBucket
    caCertRef:
      name: storage-ca-cert
      key: ca-bundle.crt
  # ... other configuration
```

**Note:** You cannot specify both `caCert` and `caCertRef` in the same BackupStorageLocation. The `caCert` field is deprecated and will be removed in a future version.

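The mutual exclusion of `caCert` and `caCertRef` is the kind of rule a controller or validating webhook can check cheaply before accepting a spec. A hedged sketch follows; the `objectStorage` struct and `validateCA` function are hypothetical illustrations that mirror the YAML fields above, not Velero's actual API types or validation code:

```go
package main

import (
	"errors"
	"fmt"
)

// objectStorage mirrors the two CA-related fields from the YAML above.
// Hypothetical struct for illustration only.
type objectStorage struct {
	CACert    []byte // deprecated inline base64-encoded CA bundle
	CACertRef string // name of a Secret holding the CA bundle
}

// validateCA rejects specs that set both caCert and caCertRef, matching
// the documented rule that the two fields cannot be combined.
func validateCA(o objectStorage) error {
	if len(o.CACert) > 0 && o.CACertRef != "" {
		return errors.New("caCert and caCertRef cannot both be set; caCert is deprecated, prefer caCertRef")
	}
	return nil
}

func main() {
	fmt.Println(validateCA(objectStorage{CACertRef: "storage-ca-cert"}))                       // <nil>
	fmt.Println(validateCA(objectStorage{CACert: []byte("…"), CACertRef: "storage-ca-cert"})) // error
}
```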
### Parameter Reference

The configurable parameters are as follows:

#### Main config parameters

{{< table caption="Main config parameters" >}}
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String | Required Field | The name for whichever object storage provider will be used to store the backups. See [your object storage provider's plugin documentation](../supported-providers) for the appropriate value to use. |
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
| `objectStorage/caCert` | String | Optional Field | **Deprecated**: Use `caCertRef` instead. A base64-encoded CA bundle to be used when verifying TLS connections. |
| `objectStorage/caCertRef` | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core) | Optional Field | Reference to a Secret containing a CA bundle to be used when verifying TLS connections. The Secret must be in the same namespace as the BackupStorageLocation. |
| `objectStorage/caCertRef/name` | String | Required Field (when using caCertRef) | The name of the Secret containing the CA certificate bundle. |
| `objectStorage/caCertRef/key` | String | Required Field (when using caCertRef) | The key within the Secret that contains the CA certificate bundle. |
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the object store plugin. See [your object storage provider's plugin documentation](../supported-providers) for details. |
| `accessMode` | String | `ReadWrite` | How Velero can access the backup storage location. Valid values are `ReadWrite`, `ReadOnly`. |
| `backupSyncPeriod` | metav1.Duration | Optional Field | How frequently Velero should synchronize backups in object storage. Default is Velero's server backup sync period. Set this to `0s` to disable sync. |
| `validationFrequency` | metav1.Duration | Optional Field | How frequently Velero should validate the object storage. Default is Velero's server validation frequency (1 minute). Set this to `0s` to disable validation. |
| `credential` | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core) | Optional Field | The credential information to be used with this location. |
| `credential/name` | String | Optional Field | The name of the secret within the Velero namespace which contains the credential information. |
| `credential/key` | String | Optional Field | The key to use within the secret. |
{{< /table >}}

@@ -1,221 +0,0 @@
---
title: "Restore API Type"
layout: docs
---

## Use

The `Restore` API type is used as a request for the Velero server to perform a restore. Once created, the
Velero server immediately starts the restore process.

## API GroupVersion

Restore belongs to the API group version `velero.io/v1`.

## Definition

Here is a sample `Restore` object with each of the fields documented:

**Note:** Namespace includes/excludes support glob patterns (`*`, `?`, `[abc]`). See [Namespace Glob Patterns](../namespace-glob-patterns) for more details.

```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Restore
# Standard Kubernetes metadata. Required.
metadata:
  # Restore name. May be any valid Kubernetes object name. Required.
  name: a-very-special-backup-0000111122223333
  # Restore namespace. Must be the namespace of the Velero server. Required.
  namespace: velero
# Parameters about the restore. Required.
spec:
  # The unique name of the Velero backup to restore from.
  backupName: a-very-special-backup
  # The unique name of the Velero schedule
  # to restore from. If specified, and BackupName is empty, Velero will
  # restore from the most recent successful backup created from this schedule.
  scheduleName: my-scheduled-backup-name
  # ItemOperationTimeout specifies the time used to wait for
  # asynchronous RestoreItemAction operations.
  # The default value is 4 hours.
  itemOperationTimeout: 4h
  # UploaderConfig specifies the configuration for the restore.
  uploaderConfig:
    # WriteSparseFiles is a flag to indicate whether to write files sparsely or not.
    writeSparseFiles: true
    # ParallelFilesDownload is the concurrency setting for the restore.
    parallelFilesDownload: 10
  # Array of namespaces to include in the restore. Accepts glob patterns (*, ?, [abc]).
  # If unspecified, all namespaces are included. Optional.
  includedNamespaces:
  - '*'
  # Array of namespaces to exclude from the restore. Accepts glob patterns (*, ?, [abc]). Optional.
  excludedNamespaces:
  - some-namespace
  # Array of resources to include in the restore. Resources may be shortcuts (for example 'po' for 'pods')
  # or fully-qualified. If unspecified, all resources are included. Optional.
  includedResources:
  - '*'
  # Array of resources to exclude from the restore. Resources may be shortcuts (for example 'po' for 'pods')
  # or fully-qualified. Optional.
  excludedResources:
  - storageclasses.storage.k8s.io

  # restoreStatus selects resources to restore not only the specification, but
  # also the status of the manifest. This is especially useful for CRDs that maintain
  # external references. By default, it excludes all resources.
  restoreStatus:
    # Array of resources to include in the restore status. Just like above,
    # resources may be shortcuts (for example 'po' for 'pods') or fully-qualified.
    # If unspecified, no resources are included. Optional.
    includedResources:
    - workflows
    # Array of resources to exclude from the restore status. Resources may be
    # shortcuts (for example 'po' for 'pods') or fully-qualified.
    # If unspecified, all resources are excluded. Optional.
    excludedResources: []

  # Whether or not to include cluster-scoped resources. Valid values are true, false, and
  # null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
  # resources and the label selector). If false, no cluster-scoped resources are included. If unset,
  # all cluster-scoped resources are included if and only if all namespaces are included and there are
  # no excluded namespaces. Otherwise, if there is at least one namespace specified in either
  # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
  # up are those associated with namespace-scoped resources included in the restore. For example, if a
  # PersistentVolumeClaim is included in the restore, its associated PersistentVolume (which is
  # cluster-scoped) would also be backed up.
  includeClusterResources: null
  # Individual objects must match this label selector to be included in the restore. Optional.
  labelSelector:
    matchLabels:
      app: velero
      component: server
  # Individual objects matching any of the label selectors specified in the set are included in the restore. Optional.
  # orLabelSelectors and labelSelector cannot co-exist; only one of them can be specified in the restore request.
  orLabelSelectors:
  - matchLabels:
      app: velero
  - matchLabels:
      app: data-protection
  # namespaceMapping is a map of source namespace names to
  # target namespace names to restore into. Any source namespaces not
  # included in the map will be restored into namespaces of the same name.
  namespaceMapping:
    namespace-backup-from: namespace-to-restore-to
  # restorePVs specifies whether to restore all included PVs
  # from snapshot. Optional.
  restorePVs: true
  # preserveNodePorts specifies whether to restore old nodePorts from backup,
  # so that the exposed port numbers on the node will remain the same after restore. Optional.
  preserveNodePorts: true
  # existingResourcePolicy specifies the restore behaviour
  # for the Kubernetes resource to be restored. Optional.
  existingResourcePolicy: none
  # resourceModifier specifies the reference to JSON resource patches
  # that should be applied to resources before restoration. Optional.
  resourceModifier:
    kind: ConfigMap
    name: resource-modifier-configmap
  # Actions to perform during or post restore. The only hooks currently supported are
  # adding an init container to a pod before it can be restored and executing a command in a
  # restored pod's container. Optional.
  hooks:
    # Array of hooks that are applicable to specific resources. Optional.
    resources:
      # Name is the name of this hook.
      - name: restore-hook-1
        # Array of namespaces to which this hook applies. If unspecified, the hook applies to all
        # namespaces. Optional.
        includedNamespaces:
        - ns1
        # Array of namespaces to which this hook does not apply. Optional.
        excludedNamespaces:
        - ns3
        # Array of resources to which this hook applies. If unspecified, the hook applies to all resources in the backup. Optional.
        # The only resource supported at this time is pods.
        includedResources:
        - pods
        # Array of resources to which this hook does not apply. Optional.
        excludedResources: []
        # This hook only applies to objects matching this label selector. Optional.
        labelSelector:
          matchLabels:
            app: velero
            component: server
        # An array of hooks to run during or after restores. Currently only "init" and "exec" hooks
        # are supported.
        postHooks:
        # The type of the hook. This must be "init" or "exec".
        - init:
            # An array of container specs to be added as init containers to pods to which this hook applies.
            initContainers:
            - name: restore-hook-init1
              image: alpine:latest
              # Mounting volumes from the podSpec to which this hook applies.
              volumeMounts:
              - mountPath: /restores/pvc1-vm
                # Volume name from the podSpec
                name: pvc1-vm
              command:
              - /bin/ash
              - -c
              - echo -n "FOOBARBAZ" >> /restores/pvc1-vm/foobarbaz
            - name: restore-hook-init2
              image: alpine:latest
              # Mounting volumes from the podSpec to which this hook applies.
              volumeMounts:
              - mountPath: /restores/pvc2-vm
                # Volume name from the podSpec
                name: pvc2-vm
              command:
              - /bin/ash
              - -c
              - echo -n "DEADFEED" >> /restores/pvc2-vm/deadfeed
        - exec:
            # The container name where the hook will be executed. Defaults to the first container.
            # Optional.
            container: foo
            # The command that will be executed in the container. Required.
            command:
            - /bin/bash
|
||||
- -c
|
||||
- "psql < /backup/backup.sql"
|
||||
# How long to wait for a container to become ready. This should be long enough for the
|
||||
# container to start plus any preceding hooks in the same container to complete. The wait
|
||||
# timeout begins when the container is restored and may require time for the image to pull
|
||||
# and volumes to mount. If not set the restore will wait indefinitely. Optional.
|
||||
waitTimeout: 5m
|
||||
# How long to wait once execution begins. Defaults to 30 seconds. Optional.
|
||||
execTimeout: 1m
|
||||
# How to handle execution failures. Valid values are `Fail` and `Continue`. Defaults to
|
||||
# `Continue`. With `Continue` mode, execution failures are logged only. With `Fail` mode,
|
||||
# no more restore hooks will be executed in any container in any pod and the status of the
|
||||
# Restore will be `PartiallyFailed`. Optional.
|
||||
onError: Continue
|
||||
# RestoreStatus captures the current status of a Velero restore. Users should not set any data here.
|
||||
status:
|
||||
# The current phase.
|
||||
# Valid values are New, FailedValidation, InProgress, WaitingForPluginOperations,
|
||||
# WaitingForPluginOperationsPartiallyFailed, Completed, PartiallyFailed, Failed.
|
||||
phase: ""
|
||||
# An array of any validation errors encountered.
|
||||
validationErrors: null
|
||||
# Number of attempted RestoreItemAction operations for this restore.
|
||||
restoreItemOperationsAttempted: 2
|
||||
# Number of RestoreItemAction operations that Velero successfully completed for this restore.
|
||||
restoreItemOperationsCompleted: 1
|
||||
# Number of RestoreItemAction operations that ended in failure for this restore.
|
||||
restoreItemOperationsFailed: 0
|
||||
# Number of warnings that were logged by the restore.
|
||||
warnings: 2
|
||||
# Errors is a count of all error messages that were generated
|
||||
# during execution of the restore. The actual errors are stored in object
|
||||
# storage.
|
||||
errors: 0
|
||||
# FailureReason is an error that caused the entire restore
|
||||
# to fail.
|
||||
failureReason:
|
||||
|
||||
```
|
||||
@@ -1,216 +0,0 @@
---
title: "Schedule API Type"
layout: docs
---

## Use

The `Schedule` API type is used as a repeatable request for the Velero server to perform a backup on a given cron schedule. Once created, the Velero server starts the backup process, then waits for the next valid point of the given cron expression and executes the backup process on a repeating basis.

### Schedule Control Fields

The Schedule API provides several fields to control backup execution behavior:

- **paused**: When set to `true`, the schedule is paused and no new backups will be created. When set back to `false`, the schedule is unpaused and will resume creating backups according to the cron schedule.

- **skipImmediately**: Controls whether to skip an immediate backup when a schedule is created or unpaused. By default (when `false`), if a backup is due immediately upon creation or unpausing, it will be executed right away. When set to `true`, the controller will:
  1. Skip the immediate backup
  2. Record the current time in the `lastSkipped` status field
  3. Automatically reset `skipImmediately` back to `false` (one-time use)
  4. Schedule the next backup based on the cron expression, using `lastSkipped` as the reference time

- **lastSkipped**: A status field (not directly settable) that records when a backup was last skipped due to `skipImmediately` being `true`. The controller uses this timestamp, if more recent than `lastBackup`, to calculate the next scheduled backup time.

This "consume and reset" pattern for `skipImmediately` ensures that after skipping one immediate backup, the schedule returns to normal behavior for subsequent runs without requiring user intervention.
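For instance, unpausing a schedule without immediately triggering a catch-up backup can be done by setting both fields together. This is a minimal sketch; the schedule name is illustrative:

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup        # illustrative name
  namespace: velero
spec:
  schedule: 0 7 * * *
  paused: false             # unpause the schedule
  skipImmediately: true     # skip the backup that would otherwise run right away;
                            # the controller resets this to false after consuming it
```

After the controller consumes the flag, `status.lastSkipped` records when the backup was skipped, and the next run is computed from that timestamp.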

## API GroupVersion

Schedule belongs to the API group version `velero.io/v1`.

## Definition

Here is a sample `Schedule` object with each of the fields documented:

```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Schedule
# Standard Kubernetes metadata. Required.
metadata:
  # Schedule name. May be any valid Kubernetes object name. Required.
  name: a
  # Schedule namespace. Must be the namespace of the Velero server. Required.
  namespace: velero
# Parameters about the scheduled backup. Required.
spec:
  # Paused specifies whether the schedule is paused or not.
  paused: false
  # SkipImmediately specifies whether to skip backup if the schedule is due immediately when unpaused or created.
  # This is a one-time flag that will be automatically reset to false after being consumed.
  # When true, the controller will skip the immediate backup, set the LastSkipped timestamp, and reset this to false.
  skipImmediately: false
  # Schedule is a Cron expression defining when to run the Backup.
  schedule: 0 7 * * *
  # Specifies whether to use OwnerReferences on backups created by this Schedule.
  # Notice: if set to true, when the schedule is deleted, its backups will be deleted too. Optional.
  useOwnerReferencesInBackup: false
  # Template is the spec that should be used for each backup triggered by this schedule.
  template:
    # CSISnapshotTimeout specifies the time used to wait for
    # the CSI VolumeSnapshot status to turn to ReadyToUse during creation, before
    # returning an error as timeout. The default value is 10 minutes.
    csiSnapshotTimeout: 10m
    # resourcePolicy specifies the referenced resource policies that the backup should follow.
    # Optional.
    resourcePolicy:
      kind: configmap
      name: resource-policy-configmap
    # Array of namespaces to include in the scheduled backup. If unspecified, all namespaces are included.
    # Optional.
    includedNamespaces:
    - '*'
    # Array of namespaces to exclude from the scheduled backup. Optional.
    excludedNamespaces:
    - some-namespace
    # Array of resources to include in the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
    # or fully-qualified. If unspecified, all resources are included. Optional.
    includedResources:
    - '*'
    # Array of resources to exclude from the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
    # or fully-qualified. Optional.
    excludedResources:
    - storageclasses.storage.k8s.io
    orderedResources:
      pods: mysql/mysql-cluster-replica-0,mysql/mysql-cluster-replica-1,mysql/mysql-cluster-source-0
      persistentvolumes: pvc-87ae0832-18fd-4f40-a2a4-5ed4242680c4,pvc-63be1bb0-90f5-4629-a7db-b8ce61ee29b3
    # Whether to include cluster-scoped resources. Valid values are true, false, and
    # null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
    # resources and the label selector). If false, no cluster-scoped resources are included. If unset,
    # all cluster-scoped resources are included if and only if all namespaces are included and there are
    # no excluded namespaces. Otherwise, if there is at least one namespace specified in either
    # includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
    # up are those associated with namespace-scoped resources included in the scheduled backup. For example, if a
    # PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
    # cluster-scoped) would also be backed up.
    includeClusterResources: null
    # Array of cluster-scoped resources to exclude from the backup. Resources may be shortcuts
    # (for example 'sc' for 'storageclasses'), or fully-qualified. If unspecified,
    # no additional cluster-scoped resources are excluded. Optional.
    # Cannot work with include-resources, exclude-resources and include-cluster-resources.
    excludedClusterScopedResources: {}
    # Array of cluster-scoped resources to include in the backup. Resources may be shortcuts
    # (for example 'sc' for 'storageclasses'), or fully-qualified. If unspecified,
    # no additional cluster-scoped resources are included. Optional.
    # Cannot work with include-resources, exclude-resources and include-cluster-resources.
    includedClusterScopedResources: {}
    # Array of namespace-scoped resources to exclude from the backup. Resources may be shortcuts
    # (for example 'cm' for 'configmaps'), or fully-qualified. If unspecified,
    # no namespace-scoped resources are excluded. Optional.
    # Cannot work with include-resources, exclude-resources and include-cluster-resources.
    excludedNamespaceScopedResources: {}
    # Array of namespace-scoped resources to include in the backup. Resources may be shortcuts
    # (for example 'cm' for 'configmaps'), or fully-qualified. If unspecified,
    # all namespace-scoped resources are included. Optional.
    # Cannot work with include-resources, exclude-resources and include-cluster-resources.
    includedNamespaceScopedResources: {}
    # Individual objects must match this label selector to be included in the scheduled backup. Optional.
    labelSelector:
      matchLabels:
        app: velero
        component: server
    # Individual objects are included in the backup when they match any of the label selectors specified in this set. Optional.
    # orLabelSelectors and labelSelector cannot co-exist; only one of them can be specified in the backup request.
    orLabelSelectors:
    - matchLabels:
        app: velero
    - matchLabels:
        app: data-protection
    # Whether to snapshot volumes. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
    # a persistent volume provider is configured for Velero.
    snapshotVolumes: null
    # Where to store the tarball and logs.
    storageLocation: aws-primary
    # The list of locations in which to store volume snapshots created for backups under this schedule.
    volumeSnapshotLocations:
    - aws-primary
    - gcp-primary
    # The amount of time before backups created on this schedule are eligible for garbage collection. If not specified,
    # a default value of 30 days will be used. The default can be configured on the velero server
    # by passing the flag --default-backup-ttl.
    ttl: 24h0m0s
    # Whether pod volume file system backup should be used for all volumes by default.
    defaultVolumesToFsBackup: true
    # Whether snapshot data should be moved. If set, data movement is launched after the snapshot is created.
    snapshotMoveData: true
    # The data mover to be used by the backup. If the value is "" or "velero", the built-in data mover will be used.
    datamover: velero
    # UploaderConfig specifies the configuration for the uploader.
    uploaderConfig:
      # ParallelFilesUpload is the number of files to upload in parallel when using the uploader.
      parallelFilesUpload: 10
    # The labels you want on backup objects created from this schedule (instead of copying the labels you have on the schedule object itself).
    # When this field is set, the labels from the Schedule resource are not copied to the Backup resource.
    metadata:
      labels:
        labelname: somelabelvalue
    # Actions to perform at different times during a backup. The only hook supported is
    # executing a command in a container in a pod using the pod exec API. Optional.
    hooks:
      # Array of hooks that are applicable to specific resources. Optional.
      resources:
      -
        # Name of the hook. Will be displayed in the backup log.
        name: my-hook
        # Array of namespaces to which this hook applies. If unspecified, the hook applies to all
        # namespaces. Optional.
        includedNamespaces:
        - '*'
        # Array of namespaces to which this hook does not apply. Optional.
        excludedNamespaces:
        - some-namespace
        # Array of resources to which this hook applies. The only resource supported at this time is
        # pods.
        includedResources:
        - pods
        # Array of resources to which this hook does not apply. Optional.
        excludedResources: []
        # This hook only applies to objects matching this label selector. Optional.
        labelSelector:
          matchLabels:
            app: velero
            component: server
        # An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
        pre:
        -
          # The type of hook. This must be "exec".
          exec:
            # The name of the container where the command will be executed. If unspecified, the
            # first container in the pod will be used. Optional.
            container: my-container
            # The command to execute, specified as an array. Required.
            command:
            - /bin/uname
            - -a
            # How to handle an error executing the command. Valid values are Fail and Continue.
            # Defaults to Fail. Optional.
            onError: Fail
            # How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
            timeout: 10s
        # An array of hooks to run after all custom actions and additional items have been
        # processed. Only "exec" hooks are supported.
        post:
          # Same content as pre above.
status:
  # The current phase.
  # Valid values are New, Enabled, FailedValidation.
  phase: ""
  # Date/time of the last backup for a given schedule.
  lastBackup:
  # Date/time when a backup was last skipped due to skipImmediately being true.
  lastSkipped:
  # An array of any validation errors encountered.
  validationErrors:
```
@@ -1,46 +0,0 @@
---
title: "Velero Volume Snapshot Location"
layout: docs
---

## Volume Snapshot Location

A volume snapshot location is the location in which to store the volume snapshots created for a backup.

Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple possible `VolumeSnapshotLocation` per provider, although you can select only one location per provider at backup time.

Each `VolumeSnapshotLocation` describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.

A sample YAML `VolumeSnapshotLocation` looks like the following:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: aws-default
  namespace: velero
spec:
  provider: aws
  credential:
    name: secret-name
    key: key-in-secret
  config:
    region: us-west-2
    profile: "default"
```

### Parameter Reference

The configurable parameters are as follows:

#### Main config parameters

{{< table caption="Main config parameters" >}}
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String | Required Field | The name for whichever storage provider will be used to create/store the volume snapshots. See [your volume snapshot provider's plugin documentation](../supported-providers) for the appropriate value to use. |
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the volume snapshotter plugin. See [your volume snapshot provider's plugin documentation](../supported-providers) for details. |
| `credential` | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core) | Optional Field | The credential information to be used with this location. |
| `credential/name` | String | Optional Field | The name of the secret within the Velero namespace which contains the credential information. |
| `credential/key` | String | Optional Field | The key to use within the secret. |
{{< /table >}}
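To illustrate how the `credential` fields are wired up, a matching secret might be created in the Velero namespace like the following. This is a minimal sketch under assumptions: the secret name and key match the `spec.credential` values in the sample above, and the payload format is whatever your provider plugin expects (an AWS-style credentials file is shown here as an example only).

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-name        # matches spec.credential.name in the location above
  namespace: velero        # must live in the Velero namespace
type: Opaque
stringData:
  key-in-secret: |         # matches spec.credential.key
    [default]
    aws_access_key_id = <id>
    aws_secret_access_key = <key>
```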
@@ -1,126 +0,0 @@
---
title: "Backup Hooks"
layout: docs
---

Velero supports executing commands in containers in pods during a backup.

## Backup Hooks

When performing a backup, you can specify one or more commands to execute in a container in a pod
when that pod is being backed up. The commands can be configured to run *before* any custom action
processing ("pre" hooks), or after all custom actions have been completed and any additional items
specified by custom actions have been backed up ("post" hooks). Note that hooks are _not_ executed within a shell
on the containers.

As of Velero 1.15, related items that must be backed up together are grouped into ItemBlocks, and pod hooks run before and after the ItemBlock is backed up.
In particular, this means that if an ItemBlock contains more than one pod (such as in a scenario where an RWX volume is mounted by multiple pods), pre hooks are run for all pods in the ItemBlock, then the items are backed up, then all post hooks are run.

There are two ways to specify hooks: annotations on the pod itself, and in the Backup spec.

### Specifying Hooks As Pod Annotations

You can use the following annotations on a pod to make Velero execute a hook when backing up the pod:

#### Pre hooks

* `pre.hook.backup.velero.io/container`
  * The container where the command should be executed. Defaults to the first container in the pod. Optional.
* `pre.hook.backup.velero.io/command`
  * The command to execute. This command is not executed within a shell by default. If a shell is needed to run your command, include a shell command, like `/bin/sh`, that is supported by the container at the beginning of your command. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`. See [examples of using pre hook commands](#backup-hook-commands-examples). Optional.
* `pre.hook.backup.velero.io/on-error`
  * What to do if the command returns a non-zero exit code. Defaults to `Fail`. Valid values are `Fail` and `Continue`. Optional.
* `pre.hook.backup.velero.io/timeout`
  * How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.

#### Post hooks

* `post.hook.backup.velero.io/container`
  * The container where the command should be executed. Defaults to the first container in the pod. Optional.
* `post.hook.backup.velero.io/command`
  * The command to execute. This command is not executed within a shell by default. If a shell is needed to run your command, include a shell command, like `/bin/sh`, that is supported by the container at the beginning of your command. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`. See [examples of using pre hook commands](#backup-hook-commands-examples). Optional.
* `post.hook.backup.velero.io/on-error`
  * What to do if the command returns a non-zero exit code. Defaults to `Fail`. Valid values are `Fail` and `Continue`. Optional.
* `post.hook.backup.velero.io/timeout`
  * How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.

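Applied declaratively, the annotations above might look like the following in a pod manifest. This is an illustrative sketch: the pod name, container name, and commands are placeholders, not part of any Velero example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                     # illustrative name
  annotations:
    # run "sync" in the my-app container before the pod is backed up;
    # log-and-continue rather than failing the backup on a non-zero exit
    pre.hook.backup.velero.io/container: my-app
    pre.hook.backup.velero.io/command: '["/bin/sh", "-c", "sync"]'
    pre.hook.backup.velero.io/on-error: Continue
    pre.hook.backup.velero.io/timeout: 60s
    # run a command after the pod (and its ItemBlock) has been backed up
    post.hook.backup.velero.io/container: my-app
    post.hook.backup.velero.io/command: '["/bin/sh", "-c", "echo backup done"]'
spec:
  containers:
  - name: my-app
    image: nginx                   # illustrative image
```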
### Specifying Hooks in the Backup Spec

Please see the documentation on the [Backup API Type][1] for how to specify hooks in the Backup
spec.

## Hook Example with fsfreeze

This example walks you through using both pre and post hooks for freezing a file system. Freezing the
file system is useful to ensure that all pending disk I/O operations have completed prior to taking a snapshot.

### Annotations

The Velero [example/nginx-app/with-pv.yaml][2] serves as an example of adding the pre and post hook annotations directly
to your declarative deployment. Below is an example of what updating an object in place might look like.

```shell
kubectl annotate pod -n nginx-example -l app=nginx \
    pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' \
    pre.hook.backup.velero.io/container=fsfreeze \
    post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]' \
    post.hook.backup.velero.io/container=fsfreeze
```

Now test the pre and post hooks by creating a backup. You can use the Velero logs to verify that the pre and post
hooks are running and exiting without error.

```shell
velero backup create nginx-hook-test

velero backup get nginx-hook-test
velero backup logs nginx-hook-test | grep hookCommand
```

## Backup hook commands examples

### Multiple commands

To use multiple commands, wrap your target commands in a shell and separate them with `;`, `&&`, or other shell conditional constructs.

```shell
pre.hook.backup.velero.io/command='["/bin/bash", "-c", "echo hello > hello.txt && echo goodbye > goodbye.txt"]'
```

#### Using environment variables

You can use environment variables from your pods in your pre and post hook commands by including a shell command before referencing the environment variable. For example, `MYSQL_ROOT_PASSWORD` is an environment variable defined in a pod called `mysql`. To use `MYSQL_ROOT_PASSWORD` in your pre hook, include a shell, like `/bin/sh`, before referencing the environment variable:

```
pre:
- exec:
    container: mysql
    command:
    - /bin/sh
    - -c
    - mysql --password=$MYSQL_ROOT_PASSWORD -e "FLUSH TABLES WITH READ LOCK"
    onError: Fail
```

Note that the container must support the shell command you use.

## Backup Hook Execution Results

### Viewing Results

Velero records the execution results of hooks, allowing users to obtain this information by running the following command:

```bash
$ velero backup describe <backup name>
```

The displayed results include the number of hooks that were attempted and the number of hooks that failed. Any detailed failure reasons will be present in the `Errors` section, if applicable.

```bash
HooksAttempted: 1
HooksFailed: 0
```

[1]: api-types/backup.md
[2]: https://github.com/vmware-tanzu/velero/blob/v1.18.0/examples/nginx-app/with-pv.yaml