Compare commits

..

4 Commits

Author SHA1 Message Date
copilot-swe-agent[bot]
cbc13366c6 Add comprehensive documentation and examples for AI detection system
Co-authored-by: blackpiglet <59276555+blackpiglet@users.noreply.github.com>
2026-02-02 03:25:37 +00:00
copilot-swe-agent[bot]
664b25cca1 Fix YAML syntax and validate AI detection workflow
Co-authored-by: blackpiglet <59276555+blackpiglet@users.noreply.github.com>
2026-02-02 03:23:57 +00:00
copilot-swe-agent[bot]
3504943019 Add AI-generated issue detection system with workflow and documentation
Co-authored-by: blackpiglet <59276555+blackpiglet@users.noreply.github.com>
2026-02-02 03:22:43 +00:00
copilot-swe-agent[bot]
acd4d5b183 Initial plan
2026-02-02 03:19:24 +00:00
170 changed files with 755 additions and 12158 deletions

.github/AI-DETECTION-EXAMPLES.md

@@ -0,0 +1,197 @@
# AI Issue Detection - Examples
This document provides examples to help understand what triggers AI detection.
## Example 1: High AI Score (Score: 6/8) ❌
**This would be flagged:**
```markdown
## Description
When deploying Velero on an EKS cluster with `hostNetwork: true`, the application fails to start.
## Critical Problem
```
time="2026-01-26T16:40:55Z" level=fatal msg="failed to start metrics server"
```
Status: BLOCKER
## Affected Environment
| Parameter | Value |
|----------|----------|
| Cluster | Amazon EKS |
| Velero Version | 1.8.2 |
| Kubernetes | 1.33 |
## Root Cause Analysis
The controller-runtime metrics uses port 8080 as a hardcoded default...
## Resolution Attempts
### Attempt 1: Use extraArgs
Result: Failed
### Attempt 2: Configure metricsAddress
Result: Failed
## Expected Permanent Solution
Velero should:
1. Auto-detect an available port
2. Accept configuring the controller-runtime port
## Questions for Maintainers
1. Why does controller-runtime use hardcoded 8080?
2. Is there a roadmap to support hostNetwork?
## Labels and Metadata
Severity: CRITICAL
```
**Why flagged (Patterns detected: 6/8):**
- `futureDates` - References "2026-01-26" and "Kubernetes 1.33"
- `excessiveHeaders` - 8+ section headers
- `formalPhrases` - "Root Cause Analysis", "Expected Permanent Solution", "Questions for Maintainers", "Labels and Metadata"
- `aiSectionHeaders` - "## Description", "## Critical Problem", "## Affected Environment", "## Resolution Attempts"
- `perfectFormatting` - Perfect table structure
- `genericSolutions` - Mentions "auto-detect"
---
## Example 2: Medium AI Score (Score: 2/8) ✅
**This would NOT be flagged (below threshold):**
```markdown
**What steps did you take and what happened:**
I'm trying to restore a backup but getting this error:
```
error: backup "my-backup" not found
```
**What did you expect to happen:**
The backup should restore successfully
**Environment:**
- Velero version: 1.13.0
- Kubernetes version: 1.28
- Cloud provider: AWS
**Additional context:**
I can see the backup in S3 but Velero doesn't list it. Running `velero backup get` shows no backups.
```
**Why NOT flagged (Patterns detected: 2/8):**
- `futureDates` - Uses realistic versions
- `excessiveHeaders` - Only 3 headers
- `formalPhrases` - No formal AI phrases
- `excessiveTables` - Has a table but only 1
- `perfectFormatting` - Normal formatting
- `aiSectionHeaders` - Standard issue template headers
- `excessiveFormatting` - Has code blocks
- `genericSolutions` - No generic solutions
---
## Example 3: Legitimate Detailed Issue (Score: 3/8) ⚠️
**This would be flagged but is actually legitimate:**
```markdown
## Problem Description
VolumeGroupSnapshot restore fails with Ceph RBD driver.
## Environment
- Velero: 1.14.0
- Kubernetes: 1.28.3
- ODF: 4.14.2 with Ceph RBD CSI driver
## Root Cause
Ceph RBD stores group snapshot metadata in journal as `csi.groupid` omap key. During restore, when creating pre-provisioned VSC, the RBD driver reads this and populates `status.volumeGroupSnapshotHandle`.
The CSI snapshot controller looks for a VGSC with matching handle. Since Velero deletes VGSC after backup, it's not found.
## Reproduction Steps
1. Create backup with VGS
2. Delete namespace
3. Restore backup
4. Observe VS stuck with "cannot find group snapshot"
## Workaround
Create stub VGSC with matching `volumeGroupSnapshotHandle` and patch status.
## Proposed Fix
1. Backup: Capture `volumeGroupSnapshotHandle` in CSISnapshotInfo
2. Restore: Create stub VGSC if handle exists
## Code References
- Ceph RBD: https://github.com/ceph/ceph-csi/blob/devel/internal/rbd/snapshot.go#L167
- Velero deletion: https://github.com/vmware-tanzu/velero/blob/main/pkg/backup/actions/csi/pvc_action.go#L1124
```
**Why flagged (Patterns detected: 3/8):**
- `futureDates` - Uses current versions
- `excessiveHeaders` - Has 6 section headers
- `formalPhrases` - "Root Cause", "Proposed Fix"
- `excessiveTables` - No tables
- `perfectFormatting` - Normal formatting
- `aiSectionHeaders` - Technical, not generic
- `excessiveFormatting` - Reasonable formatting
- `genericSolutions` - Structured solution with code refs
**Maintainer Action**: This is a legitimate, well-researched issue. Verify the details with the contributor and remove the `potential-ai-generated` label.
---
## Example 4: Simple Valid Issue (Score: 0/8) ✅
**This would NOT be flagged:**
```markdown
Velero backup fails with error: `rpc error: code = Unavailable desc = connection error`
Running Velero 1.13 on GKE. Backups were working yesterday but now all fail with this error.
Logs show the node-agent pod is crashing. Any ideas?
```
**Why NOT flagged (Patterns detected: 0/8):**
- All patterns: None detected
---
## Key Takeaways
### Will Trigger Detection ❌
- Future dates/versions (2026+, K8s 1.33+)
- 4+ formal AI phrases
- More than 8 section headers
- Perfect table formatting across multiple tables
- Generic AI section titles
- Auto-detect/generic solution patterns
### Will NOT Trigger ✅
- Realistic version numbers
- Actual error messages from real systems
- Normal issue formatting
- Moderate level of detail
- Standard GitHub issue template
### May Trigger (But Legitimate) ⚠️
- Very detailed technical analysis
- Multiple code references
- Well-structured proposals
- Extensive testing documentation
For these cases, maintainers will verify with the contributor and remove the flag once confirmed.

.github/AI-DETECTION-README.md

@@ -0,0 +1,80 @@
# AI-Generated Content Detection
This directory contains the AI-generated content detection system for Velero issues.
## Overview
The Velero project has implemented automated detection of potentially AI-generated issues to help maintain quality and ensure that issues describe real, verified problems.
## How It Works
### Detection Workflow
The workflow (`.github/workflows/ai-issue-detector.yml`) runs automatically when:
- A new issue is opened
- An existing issue is edited
### Detection Patterns
The detector analyzes issues for several AI-generation patterns:
1. **Excessive Tables** - More than 5 markdown tables
2. **Excessive Headers** - More than 8 section headers
3. **Formal Phrases** - Multiple formal section headers typical of AI (e.g., "Root Cause Analysis", "Operational Impact", "Expected Permanent Solution")
4. **Excessive Formatting** - Multiple horizontal rules and perfect formatting
5. **Future Dates** - Version numbers or dates that are unrealistic or in the future
6. **Perfect Formatting** - Overly structured tables with perfect alignment
7. **AI Section Headers** - Generic AI-style headers like "Critical Problem", "Resolution Attempts"
8. **Generic Solutions** - Auto-generated solution patterns with multiple YAML examples
### Scoring System
Each detected pattern adds to the AI score. If the score is 3 or higher (out of 8), the issue is flagged as potentially AI-generated.
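In code terms, the scoring reduces to counting boolean checks. Here is a condensed, self-contained sketch of the logic implemented in the workflow script (two of the eight checks shown; the stand-in issue body is just for illustration):
```javascript
// Condensed sketch of the workflow's scoring: each pattern is a boolean
// check worth one point, and the total is compared to the threshold (3).
const issueBody = '## Description\nDeployed on 2026-01-26 ...'; // stand-in input

const aiPatterns = {
  // two of the eight checks, as implemented in the workflow
  excessiveTables: (issueBody.match(/\|.*\|/g) || []).length > 5,
  futureDates: /202[6-9]|203\d/.test(issueBody),
  // ...the remaining six checks are built the same way
};

const detectedPatterns = Object.entries(aiPatterns)
  .filter(([, detected]) => detected)
  .map(([name]) => name);
const aiScore = detectedPatterns.length; // 0-8

if (aiScore >= 3) {
  // the real workflow adds labels and posts a comment here
  console.log('flag issue:', detectedPatterns.join(', '));
}
```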
### Actions Taken
When an issue is flagged:
1. A `potential-ai-generated` label is added
2. A `needs-triage` label is added
3. An automated comment is posted explaining:
- Why the issue was flagged
- What patterns were detected
- Guidelines for contributors to follow
- Request for verification
## For Contributors
If your issue is flagged:
1. **Don't panic** - This is not an accusation, just a request for verification
2. **Review the guidelines** in our [Code Standards](../site/content/docs/main/code-standards.md#ai-generated-content)
3. **Verify your content**:
- Ensure all version numbers are accurate
- Confirm error messages are from your actual environment
- Remove any placeholder or example content
- Simplify overly structured formatting
4. **Update the issue** with corrections if needed
5. **Comment to confirm** that the issue describes a real problem
## For Maintainers
When reviewing flagged issues:
1. Check if the technical details are realistic and verifiable
2. Look for signs of hallucinated content (fake version numbers, non-existent features)
3. Engage with the issue author to verify the problem
4. Remove the `potential-ai-generated` label once verified
5. Close issues that cannot be verified or describe non-existent problems
## Configuration
The detection patterns can be adjusted in the workflow file if needed. The threshold is currently set at 3 out of 8 patterns to balance false positives with detection accuracy.
## False Positives
The detector may occasionally flag legitimate issues, especially those that are:
- Very detailed and well-structured
- Using formal technical documentation style
- Reporting complex problems with extensive details
This is intentional - we prefer to verify detailed issues rather than miss AI-generated ones.

.github/MAINTAINER-AI-DETECTION-GUIDE.md

@@ -0,0 +1,186 @@
# Maintainer Guide: AI-Generated Issue Detection
This guide helps Velero maintainers understand and work with the AI-generated issue detection system.
## Overview
The AI detection system automatically analyzes new and edited issues to identify potential AI-generated content. This helps maintain issue quality and ensures contributors verify their submissions.
## How It Works
### Automatic Detection
When an issue is opened or edited, the workflow:
1. **Analyzes** the issue body for 8 different AI patterns
2. **Calculates** an AI confidence score (0-8)
3. **If score ≥ 3**: Adds labels and posts a comment
4. **If score < 3**: Takes no action (issue proceeds normally)
### Detection Patterns
| Pattern | Description | Weight |
|---------|-------------|--------|
| `excessiveTables` | More than 5 markdown tables | 1 |
| `excessiveHeaders` | More than 8 section headers | 1 |
| `formalPhrases` | 4+ AI-typical phrases (e.g., "Root Cause Analysis") | 1 |
| `excessiveFormatting` | Multiple horizontal rules (---) | 1 |
| `futureDates` | Dates/versions in the years 2026-2039 | 1 |
| `perfectFormatting` | Multiple identical table structures | 1 |
| `aiSectionHeaders` | 4+ generic AI headers (e.g., "Critical Problem") | 1 |
| `genericSolutions` | Auto-detect patterns with multiple YAML blocks | 1 |
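Each row in the table corresponds to a one-line boolean check in the workflow script; for instance, the date and header checks reduce to simple regular expressions. A small sketch with a sample body, matching the workflow's implementation:
```javascript
const issueBody = 'Running Kubernetes 1.33, observed on 2026-01-27\n## A\n## B';

// futureDates: any year in 2026-2039 anywhere in the body
const futureDates = /202[6-9]|203\d/.test(issueBody); // true for this sample

// excessiveHeaders: counts markdown headers of levels 1-6
const headerCount = (issueBody.match(/^#{1,6}\s+/gm) || []).length;
const excessiveHeaders = headerCount > 8; // false here (only 2 headers)

console.log({ futureDates, excessiveHeaders });
```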
## Working with Flagged Issues
### Step 1: Review the Issue
When you see an issue labeled `potential-ai-generated`:
1. **Read the issue carefully**
2. **Check the detected patterns** (listed in the auto-comment)
3. **Look for red flags**:
- Future version numbers (e.g., "Kubernetes 1.33")
- Future dates (e.g., "2026-01-27")
- Non-existent features or configurations
- Perfect table formatting with no actual content
- Generic solutions that don't match Velero's architecture
### Step 2: Engage with the Contributor
**If the issue seems legitimate but over-formatted:**
```markdown
Thanks for the detailed report! Could you confirm:
1. Are you running Velero version X.Y.Z (you mentioned version A.B.C)?
2. Is the error message exactly as shown?
3. Have you actually tried the workarounds mentioned?
Once verified, we'll remove the AI-generated flag and investigate.
```
**If the issue appears to be unverified AI content:**
```markdown
This issue appears to contain AI-generated content that hasn't been verified.
Please review our [AI contribution guidelines](https://github.com/vmware-tanzu/velero/blob/main/site/content/docs/main/code-standards.md#ai-generated-content) and:
1. Confirm this describes a real problem in your environment
2. Verify all version numbers and error messages
3. Remove any placeholder or example content
4. Test that the issue is reproducible
If you can't verify the issue, please close it. We're happy to help with real problems!
```
### Step 3: Take Action
**For verified issues:**
1. Remove the `potential-ai-generated` label
2. Keep or remove `needs-triage` as appropriate
3. Proceed with normal issue triage
**For unverified/invalid issues:**
1. Request verification (see templates above)
2. If no response after 7 days, consider closing as `stale`
3. If clearly invalid, close with explanation
## Common Patterns
### False Positives (Legitimate Issues)
These may trigger the detector but are usually valid:
- **Very detailed bug reports** with extensive logs and testing
- **Technical design proposals** with multiple sections
- **Well-organized feature requests** with tables and examples
**Action**: Engage with contributor, ask clarifying questions, remove flag if verified.
### True Positives (AI-Generated)
Red flags that indicate unverified AI content:
- **Future version numbers**: "Kubernetes 1.33" (doesn't exist yet)
- **Future dates**: "2026-01-27" (if current date is before)
- **Non-existent features**: References to Velero features that don't exist
- **Generic solutions**: "Auto-detect available port" (not how Velero works)
- **Perfect formatting, wrong content**: Beautiful tables with incorrect info
**Action**: Request verification, ask for actual environment details, consider closing if unverified.
### Edge Cases
**Contributor using AI as a writing assistant:**
- Issue content is verified and accurate
- Just used AI to help structure/format the report
- **Action**: This is acceptable! Remove flag if content is verified.
**Legitimate issue that happens to match patterns:**
- Real problem with detailed analysis
- Includes proper version numbers and logs
- **Action**: Verify with contributor, remove flag once confirmed.
## Statistics and Monitoring
You can search for flagged issues:
```
is:issue label:potential-ai-generated
```
Monitor trends:
- High detection rate → May need to adjust thresholds
- Low detection rate → Patterns working well or need refinement
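The same query can also be run programmatically, for example from a github-script step; a sketch (it assumes the authenticated `github` client and `context` that github-script provides):
```javascript
// Sketch: fetch issues carrying the AI-detection label (open and closed).
// Note the REST issues API also returns pull requests that carry the label.
const { data: flagged } = await github.rest.issues.listForRepo({
  owner: context.repo.owner,
  repo: context.repo.repo,
  labels: 'potential-ai-generated',
  state: 'all',
  per_page: 100,
});
console.log('flagged issues in this page:', flagged.length);
```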
## Adjusting the System
### Modifying Detection Patterns
Edit `.github/workflows/ai-issue-detector.yml`:
```javascript
// Increase the flagging threshold to reduce false positives:
if (aiScore >= 4) { // was 3

// Adjust a pattern's sensitivity:
excessiveTables: (issueBody.match(/\|.*\|/g) || []).length > 8, // was 5
```
### Adding New Patterns
Add to the `aiPatterns` object:
```javascript
// Example: Detect excessive use of emojis
excessiveEmojis: (issueBody.match(/[\u{1F300}-\u{1F9FF}]/gu) || []).length > 10,
```
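Because the pattern checks are plain JavaScript with no GitHub dependency, they can be exercised locally before committing a change. A hypothetical Node snippet for tuning (`sample-issue.md` is a stand-in for an issue body; this harness is not part of the workflow):
```javascript
// test-detector.js - hypothetical local harness for tuning thresholds.
const fs = require('fs');
const issueBody = fs.readFileSync('sample-issue.md', 'utf8');

const aiPatterns = {
  // copy the checks verbatim from the workflow script
  excessiveTables: (issueBody.match(/\|.*\|/g) || []).length > 5,
  excessiveHeaders: (issueBody.match(/^#{1,6}\s+/gm) || []).length > 8,
  futureDates: /202[6-9]|203\d/.test(issueBody),
};

const detected = Object.keys(aiPatterns).filter(k => aiPatterns[k]);
console.log(`AI Score: ${detected.length}, patterns: ${detected.join(', ')}`);
```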
### Disabling the Workflow
Rename or delete `.github/workflows/ai-issue-detector.yml`.
## Best Practices
1. **Be courteous**: Contributors may not realize their AI tool generated incorrect info
2. **Verify, don't assume**: Some detailed issues are legitimate
3. **Educate**: Point to the AI guidelines in code-standards.md
4. **Track patterns**: Note common AI-generated patterns for future improvements
5. **Iterate**: Adjust detection thresholds based on false positive rates
## FAQ
**Q: Should we reject all AI-assisted contributions?**
A: No! AI assistance is fine if the contributor verifies accuracy. We only flag unverified AI content.
**Q: What if a contributor is offended by the flag?**
A: Explain it's automated and not personal. We just need verification of technical details.
**Q: Can we automatically close flagged issues?**
A: No. Always engage with the contributor first. Some are legitimate.
**Q: What's an acceptable false positive rate?**
A: Aim for <10%. If higher, increase the threshold from 3 to 4 or 5.
## Support
Questions about the AI detection system? Tag @vmware-tanzu/velero-maintainers in issue #9501.

.github/labels.yaml

@@ -41,3 +41,4 @@ kind:
   - tech-debt
   - usage-error
   - voting
+  - potential-ai-generated

.github/workflows/ai-issue-detector.yml

@@ -0,0 +1,132 @@
name: "Detect AI-Generated Issues"
on:
issues:
types: [opened, edited]
jobs:
detect-ai-content:
runs-on: ubuntu-latest
permissions:
issues: write
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Analyze issue for AI-generated content
id: analyze
uses: actions/github-script@v7
with:
script: |
const issue = context.payload.issue;
const issueBody = issue.body || '';
const issueTitle = issue.title || '';
// AI detection patterns
const aiPatterns = {
// Overly structured markdown with extensive tables
excessiveTables: (issueBody.match(/\|.*\|/g) || []).length > 5,
// Multiple consecutive headers with consistent formatting
excessiveHeaders: (issueBody.match(/^#{1,6}\s+/gm) || []).length > 8,
// Overly formal language patterns common in AI
formalPhrases: [
'Root Cause Analysis',
'Operational Impact',
'Expected Permanent Solution',
'Questions for Maintainers',
'Labels and Metadata',
'Reference Files',
'Steps to Reproduce'
].filter(phrase => issueBody.includes(phrase)).length > 4,
// Excessive use of emojis or special characters
excessiveFormatting: issueBody.includes('---\n \n---') ||
(issueBody.match(/---/g) || []).length > 4,
// Unrealistic version numbers or dates in the future
futureDates: /202[6-9]|203\d/.test(issueBody),
// Overly detailed technical specs with perfect formatting
perfectFormatting: issueBody.includes('| Parameter | Value |') &&
issueBody.includes('| Aspect | Status | Impact |'),
// Generic AI-style section headers
aiSectionHeaders: [
'## Description',
'## Critical Problem',
'## Affected Environment',
'## Full Helm Configuration',
'## Resolution Attempts',
'## Related Information'
].filter(header => issueBody.includes(header)).length > 4,
// Unusual specificity combined with generic solutions
genericSolutions: issueBody.includes('auto-detect') &&
issueBody.includes('configuration:') &&
(issueBody.match(/```yaml/g) || []).length > 2
};
// Calculate AI score
let aiScore = 0;
let detectedPatterns = [];
for (const [pattern, detected] of Object.entries(aiPatterns)) {
if (detected) {
aiScore++;
detectedPatterns.push(pattern);
}
}
console.log('AI Score: ' + aiScore + '/8');
console.log('Detected patterns: ' + detectedPatterns.join(', '));
// If AI score is high, add label and comment
if (aiScore >= 3) {
// Add label
try {
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
labels: ['needs-triage', 'potential-ai-generated']
});
// Add comment
const confidence = Math.round(aiScore/8 * 100);
const repoPath = context.repo.owner + '/' + context.repo.repo;
const comment = '👋 Thank you for opening this issue!\n\n' +
'This issue has been flagged for review as it may contain AI-generated content (confidence: ' + confidence + '%).\n\n' +
'**Detected patterns:** ' + detectedPatterns.join(', ') + '\n\n' +
'If this issue was created with AI assistance, please review our [AI contribution guidelines](https://github.com/' + repoPath + '/blob/main/site/content/docs/main/code-standards.md#ai-generated-content).\n\n' +
'**Important:**\n' +
'- Please verify all technical details are accurate\n' +
'- Ensure version numbers, dates, and configurations reflect your actual environment\n' +
'- Remove any placeholder or example content\n' +
'- Confirm the issue is reproducible in your environment\n\n' +
'A maintainer will review this issue shortly. If this was flagged in error, please let us know!';
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
body: comment
});
core.setOutput('ai-detected', 'true');
core.setOutput('ai-score', aiScore);
} catch (error) {
console.log('Error adding label or comment:', error);
}
} else {
core.setOutput('ai-detected', 'false');
core.setOutput('ai-score', aiScore);
}
return {
aiDetected: aiScore >= 3,
score: aiScore,
patterns: detectedPatterns
};


@@ -17,7 +17,6 @@ If you're using Velero and want to add your organization to this list,
<a href="https://www.replicated.com/" border="0" target="_blank"><img alt="replicated.com" src="site/static/img/adopters/replicated-logo-red.svg" height="50"></a>
<a href="https://cloudcasa.io/" border="0" target="_blank"><img alt="cloudcasa.io" src="site/static/img/adopters/cloudcasa.svg" height="50"></a>
<a href="https://azure.microsoft.com/" border="0" target="_blank"><img alt="azure.com" src="site/static/img/adopters/azure.svg" height="50"></a>
<a href="https://www.broadcom.com/" border="0" target="_blank"><img alt="broadcom.com" src="site/static/img/adopters/broadcom.svg" height="50"></a>
## Success Stories
Below is a list of adopters of Velero in **production environments** that have
@@ -69,9 +68,6 @@ Replicated uses the Velero open source project to enable snapshots in [KOTS][101
**[Microsoft Azure][105]**<br>
[Azure Backup for AKS][106] is an Azure native, Kubernetes aware, Enterprise ready backup for containerized applications deployed on Azure Kubernetes Service (AKS). AKS Backup utilizes Velero to perform backup and restore operations to protect stateful applications in AKS clusters.<br>
**[Broadcom][107]**<br>
[VMware Cloud Foundation][108] (VCF) offers built-in [vSphere Kubernetes Service][109] (VKS), a Kubernetes runtime that includes a CNCF certified Kubernetes distribution, to deploy and manage containerized workloads. VCF empowers platform engineers with native [Kubernetes multi-cluster management][110] capability for managing Kubernetes (K8s) infrastructure at scale. VCF utilizes Velero for Kubernetes data protection enabling platform engineers to back up and restore containerized workloads manifests & persistent volumes, helping to increase the resiliency of stateful applications in VKS cluster.
## Adding your organization to the list of Velero Adopters
If you are using Velero and would like to be included in the list of `Velero Adopters`, add an SVG version of your logo to the `site/static/img/adopters` directory in this repo and submit a [pull request][3] with your change. Name the image file something that reflects your company (e.g., if your company is called Acme, name the image acme.png). See this for an example [PR][4].
@@ -129,8 +125,3 @@ If you would like to add your logo to a future `Adopters of Velero` section on [
[105]: https://azure.microsoft.com/
[106]: https://learn.microsoft.com/azure/backup/backup-overview
[107]: https://www.broadcom.com/
[108]: https://www.vmware.com/products/cloud-infrastructure/vmware-cloud-foundation
[109]: https://www.vmware.com/products/cloud-infrastructure/vsphere-kubernetes-service
[110]: https://blogs.vmware.com/cloud-foundation/2025/09/29/empowering-platform-engineers-with-native-kubernetes-multi-cluster-management-in-vmware-cloud-foundation/


@@ -13,7 +13,7 @@
# limitations under the License.
# Velero binary build section
-FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS velero-builder
+FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder
ARG GOPROXY
ARG BIN
@@ -49,7 +49,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache
# Restic binary build section
-FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS restic-builder
+FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS restic-builder
ARG GOPROXY
ARG BIN


@@ -15,7 +15,7 @@
ARG OS_VERSION=1809
# Velero binary build section
-FROM --platform=$BUILDPLATFORM golang:1.25.7-bookworm AS velero-builder
+FROM --platform=$BUILDPLATFORM golang:1.25-bookworm AS velero-builder
ARG GOPROXY
ARG BIN


@@ -7,11 +7,11 @@
| Maintainer | GitHub ID | Affiliation |
|---------------------|---------------------------------------------------------------|--------------------------------------------------|
| Scott Seago | [sseago](https://github.com/sseago) | [OpenShift](https://github.com/openshift) |
-| Daniel Jiang        | [reasonerjt](https://github.com/reasonerjt)                    | Broadcom                                          |
-| Wenkai Yin          | [ywk253100](https://github.com/ywk253100)                      | Broadcom                                          |
-| Xun Jiang           | [blackpiglet](https://github.com/blackpiglet)                  | Broadcom                                          |
+| Daniel Jiang        | [reasonerjt](https://github.com/reasonerjt)                    | [VMware](https://www.github.com/vmware/)          |
+| Wenkai Yin          | [ywk253100](https://github.com/ywk253100)                      | [VMware](https://www.github.com/vmware/)          |
+| Xun Jiang           | [blackpiglet](https://github.com/blackpiglet)                  | [VMware](https://www.github.com/vmware/)          |
 | Shubham Pampattiwar | [shubham-pampattiwar](https://github.com/shubham-pampattiwar) | [OpenShift](https://github.com/openshift)          |
-| Yonghui Li          | [Lyndon-Li](https://github.com/Lyndon-Li)                      | Broadcom                                          |
+| Yonghui Li          | [Lyndon-Li](https://github.com/Lyndon-Li)                      | [VMware](https://www.github.com/vmware/)          |
| Anshul Ahuja | [anshulahuja98](https://github.com/anshulahuja98) | [Microsoft Azure](https://www.github.com/azure/) |
| Tiger Kaovilai | [kaovilai](https://github.com/kaovilai) | [OpenShift](https://github.com/openshift) |
@@ -27,3 +27,14 @@
* JenTing Hsiao ([jenting](https://github.com/jenting))
* Dave Smith-Uchida ([dsu-igeek](https://github.com/dsu-igeek))
* Ming Qiu ([qiuming-best](https://github.com/qiuming-best))
## Velero Contributors & Stakeholders
| Feature Area | Lead |
|------------------------|:------------------------------------------------------------------------------------:|
| Technical Lead | Daniel Jiang [reasonerjt](https://github.com/reasonerjt) |
| Kubernetes CSI Liaison | |
| Deployment | |
| Community Management | Orlin Vasilev [OrlinVasilev](https://github.com/OrlinVasilev) |
| Product Management | Pradeep Kumar Chaturvedi [pradeepkchaturvedi](https://github.com/pradeepkchaturvedi) |


@@ -42,11 +42,13 @@ The following is a list of the supported Kubernetes versions for each Velero ver
| Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version |
|----------------|-------------------------------------------|-------------------------------------|
| 1.18 | 1.18-latest | 1.33.7, 1.34.1, and 1.35.0 |
| 1.17 | 1.18-latest | 1.31.7, 1.32.3, 1.33.1, and 1.34.0 |
| 1.17 | 1.18-latest | 1.31.7, 1.32.3, 1.33.1, and 1.34.0 |
| 1.16 | 1.18-latest | 1.31.4, 1.32.3, and 1.33.0 |
| 1.15 | 1.18-latest | 1.28.8, 1.29.8, 1.30.4 and 1.31.1 |
| 1.14 | 1.18-latest | 1.27.9, 1.28.9, and 1.29.4 |
| 1.13 | 1.18-latest | 1.26.5, 1.27.3, 1.27.8, and 1.28.3 |
| 1.12 | 1.18-latest | 1.25.7, 1.26.5, 1.26.7, and 1.27.3 |
| 1.11 | 1.18-latest | 1.23.10, 1.24.9, 1.25.5, and 1.26.1 |
Velero supports IPv4, IPv6, and dual stack environments. Support for this was tested against Velero v1.8.


@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
-FROM golang:1.25.7 as tilt-helper
+FROM golang:1.25 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \


@@ -1,109 +0,0 @@
## v1.18
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.18.0
### Container Image
`velero/velero:v1.18.0`
### Documentation
https://velero.io/docs/v1.18/
### Upgrading
https://velero.io/docs/v1.18/upgrade-to-1.18/
### Highlights
#### Concurrent backup
In v1.18, Velero can process multiple backups concurrently. This is a significant usability improvement, especially for multi-tenant or multi-user scenarios: backups submitted by different users can run simultaneously without interfering with each other.
Check design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/concurrent-backup-processing.md for more details.
#### Cache volume for data movers
In v1.18, Velero allows users to configure cache volumes for data mover pods during restore for CSI snapshot data movement and fs-backup. This brings the following benefits:
- Solves the problem that data mover pods fail to run when the pod's ephemeral disk is limited
- Solves the problem that multiple data mover pods fail to run concurrently on one node when the node's ephemeral disk is limited
- Together with the backup repository's cache limit configuration, a cache volume of appropriate size helps improve restore throughput
Check design https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/backup-repo-cache-volume.md for more details.
#### Incremental size for data movers
In v1.18, Velero allows users to observe the incremental size of data mover backups for CSI snapshot data movement and fs-backup, so they can see the data reduction achieved by incremental backups.
#### Wildcard support for namespaces
In v1.18, Velero allows glob patterns in namespace filters during backup and restore, so users can filter namespaces in batches.
#### VolumePolicy for PVC phase
In v1.18, Velero VolumePolicy supports actions by PVC phase, which helps users apply special handling to PVCs in a specific phase, e.g., skipping PVCs in Pending/Lost status during backup.
#### Scalability and Resiliency improvements
##### Prevent Velero server OOM Kill for large backup repositories
In v1.18, some backup repository operations are deferred and executed outside the Velero server, so the Velero server won't be OOM-killed.
#### Performance improvement for VolumePolicy
In v1.18, VolumePolicy is enhanced for large numbers of pods/PVCs, significantly improving performance.
#### Events for data mover pod diagnostic
In v1.18, events are recorded in the data mover pod diagnostics, giving users more information for troubleshooting when a data mover pod fails.
### Runtime and dependencies
Golang runtime: 1.25.7
kopia: 0.22.3
### Limitations/Known issues
### Breaking changes
#### Deprecation of PVC selected node feature
According to the [Velero deprecation policy](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md#deprecation-policy), the PVC selected-node feature is deprecated in v1.18. Velero handles the PVC's selected-node annotation appropriately, so users don't need to do anything in particular.
### All Changes
* Remove backup from running list when backup fails validation (#9498, @sseago)
* Maintenance Job only uses the first element of the LoadAffinity array (#9494, @blackpiglet)
* Fix issue #9478, add diagnose info on expose peek fails (#9481, @Lyndon-Li)
* Add Role, RoleBinding, ClusterRole, and ClusterRoleBinding in restore sequence. (#9474, @blackpiglet)
* Add maintenance job and data mover pod's labels and annotations setting. (#9452, @blackpiglet)
* Fix plugin init container names exceeding DNS-1123 limit (#9445, @mpryc)
* Add PVC-to-Pod cache to improve volume policy performance (#9441, @shubham-pampattiwar)
* Remove VolumeSnapshotClass from CSI B/R process. (#9431, @blackpiglet)
* Use hookIndex for recording multiple restore exec hooks. (#9366, @blackpiglet)
* Sanitize Azure HTTP responses in BSL status messages (#9321, @shubham-pampattiwar)
* Remove labels associated with previous backups (#9206, @Joeavaikath)
* Add VolumePolicy support for PVC Phase conditions to allow skipping Pending PVCs (#9166, @claude)
* feat: Enhance BackupStorageLocation with Secret-based CA certificate support (#9141, @kaovilai)
* Add `--apply` flag to `install` command, allowing usage of Kubernetes apply to make changes to existing installs (#9132, @mjnagel)
* Fix issue #9194, add doc for GOMAXPROCS behavior change (#9420, @Lyndon-Li)
* Apply volume policies to VolumeGroupSnapshot PVC filtering (#9419, @shubham-pampattiwar)
* Fix issue #9276, add doc for cache volume support (#9418, @Lyndon-Li)
* Add Prometheus metrics for maintenance jobs (#9414, @shubham-pampattiwar)
* Fix issue #9400, connect repo first time after creation so that init params could be written (#9407, @Lyndon-Li)
* Cache volume for PVR (#9397, @Lyndon-Li)
* Cache volume support for DataDownload (#9391, @Lyndon-Li)
* don't copy securitycontext from first container if configmap found (#9389, @sseago)
* Refactor repo provider interface for static configuration (#9379, @Lyndon-Li)
* Fix issue #9365, prevent fake completion notification due to multiple update of single PVR (#9375, @Lyndon-Li)
* Add cache volume configuration (#9370, @Lyndon-Li)
* Track actual resource names for GenerateName in restore status (#9368, @shubham-pampattiwar)
* Fix managed fields patch for resources using GenerateName (#9367, @shubham-pampattiwar)
* Support cache volume for generic restore exposer and pod volume exposer (#9362, @Lyndon-Li)
* Add incrementalSize to DU/PVB for reporting new/changed size (#9357, @sseago)
* Add snapshotSize for DataDownload, PodVolumeRestore (#9354, @Lyndon-Li)
* Add cache dir configuration for udmrepo (#9353, @Lyndon-Li)
* Fix the Job build error when the BackupRepository name is longer than 63 characters. (#9350, @blackpiglet)
* Add cache configuration to VGDP (#9342, @Lyndon-Li)
* Fix issue #9332, add bytesDone for cache files (#9333, @Lyndon-Li)
* Fix typos in documentation (#9329, @T4iFooN-IX)
* Concurrent backup processing (#9307, @sseago)
* VerifyJSONConfigs verifies every element in Data. (#9302, @blackpiglet)
* Fix issue #9267, add events to data mover prepare diagnostic (#9296, @Lyndon-Li)
* Add option for privileged fs-backup pod (#9295, @sseago)
* Fix issue #9193, don't connect repo in repo controller (#9291, @Lyndon-Li)
* Implement concurrency control for cache of native VolumeSnapshotter plugin. (#9281, @0xLeo258)
* Fix issue #7904, remove the code and doc for PVC node selection (#9269, @Lyndon-Li)
* Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases (#9264, @shubham-pampattiwar)
* Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment (#9256, @shubham-pampattiwar)
* Implement wildcard namespace pattern expansion for backup namespace includes/excludes. This change adds support for wildcard patterns (*, ?, [abc], {a,b,c}) in namespace includes and excludes during backup operations (#9255, @Joeavaikath)
* Protect VolumeSnapshot field from race condition during multi-thread backup (#9248, @0xLeo258)
* Update AzureAD Microsoft Authentication Library to v1.5.0 (#9244, @priyansh17)
* Get pod list once per namespace in pvc IBA (#9226, @sseago)
* Fix issue #7725, add design for backup repo cache configuration (#9148, @Lyndon-Li)
* Fix issue #9229, don't attach backupPVC to the source node (#9233, @Lyndon-Li)
* feat: Permit specifying annotations for the BackupPVC (#9173, @clementnuss)


@@ -0,0 +1 @@
Add `--apply` flag to `install` command, allowing usage of Kubernetes apply to make changes to existing installs


@@ -0,0 +1 @@
feat: Enhance BackupStorageLocation with Secret-based CA certificate support


@@ -0,0 +1 @@
Fix issue #7725, add design for backup repo cache configuration


@@ -0,0 +1 @@
Add VolumePolicy support for PVC Phase conditions to allow skipping Pending PVCs


@@ -0,0 +1 @@
feat: Permit specifying annotations for the BackupPVC


@@ -0,0 +1 @@
Remove labels associated with previous backups


@@ -0,0 +1 @@
Get pod list once per namespace in pvc IBA


@@ -0,0 +1 @@
Fix issue #9229, don't attach backupPVC to the source node


@@ -0,0 +1 @@
Update AzureAD Microsoft Authentication Library to v1.5.0


@@ -0,0 +1 @@
Protect VolumeSnapshot field from race condition during multi-thread backup


@@ -0,0 +1,10 @@
Implement wildcard namespace pattern expansion for backup namespace includes/excludes.
This change adds support for wildcard patterns (*, ?, [abc], {a,b,c}) in namespace includes and excludes during backup operations.
When wildcard patterns are detected, they are expanded against the list of active namespaces in the cluster before the backup proceeds.
Key features:
- Wildcard patterns in namespace includes/excludes are automatically detected and expanded
- Pattern validation ensures unsupported patterns (regex, consecutive asterisks) are rejected
- Empty wildcard results (e.g., "invalid*" matching no namespaces) correctly result in empty backups
- Exact namespace names and "*" continue to work as before (no expansion needed)
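For illustration, the expansion semantics described above can be sketched as a pattern-to-regex conversion applied to the cluster's active namespace list (illustrative JavaScript only; Velero's actual implementation is in Go):
```javascript
// Convert a wildcard pattern (*, ?, [abc], {a,b,c}) to an anchored regex.
function globToRegex(pattern) {
  const converted = pattern
    .replace(/[.+^$()|\\]/g, '\\$&')                                    // escape regex metacharacters
    .replace(/\{([^}]+)\}/g, (_, alts) => '(' + alts.split(',').join('|') + ')') // {a,b} -> (a|b)
    .replace(/\*/g, '.*')                                               // * matches any run of characters
    .replace(/\?/g, '.');                                               // ? matches a single character
  return new RegExp('^' + converted + '$');
}

// Expand includes against the active namespaces before the backup proceeds.
const active = ['app-prod', 'app-dev', 'db-prod'];
console.log(active.filter(ns => globToRegex('app-*').test(ns))); // ['app-prod', 'app-dev']
```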


@@ -0,0 +1 @@
Fix repository maintenance jobs to inherit allowlisted tolerations from Velero deployment


@@ -0,0 +1 @@
Fix schedule controller to prevent backup queue accumulation during extended blocking scenarios by properly handling empty backup phases


@@ -0,0 +1 @@
Fix issue #7904, remove the code and doc for PVC node selection


@@ -0,0 +1 @@
Implement concurrency control for cache of native VolumeSnapshotter plugin.


@@ -0,0 +1 @@
Fix issue #9193, don't connect repo in repo controller


@@ -0,0 +1 @@
Add option for privileged fs-backup pod


@@ -0,0 +1 @@
Fix issue #9267, add events to data mover prepare diagnostic


@@ -0,0 +1 @@
VerifyJSONConfigs verifies every element in Data.


@@ -0,0 +1 @@
Concurrent backup processing


@@ -0,0 +1 @@
Sanitize Azure HTTP responses in BSL status messages


@@ -0,0 +1 @@
Fix typos in documentation


@@ -0,0 +1 @@
Fix issue #9332, add bytesDone for cache files


@@ -0,0 +1 @@
Add cache configuration to VGDP


@@ -0,0 +1 @@
Fix the Job build error when the BackupRepository name is longer than 63 characters.


@@ -0,0 +1 @@
Add cache dir configuration for udmrepo


@@ -0,0 +1 @@
Add snapshotSize for DataDownload, PodVolumeRestore


@@ -0,0 +1 @@
Add incrementalSize to DU/PVB for reporting new/changed size


@@ -0,0 +1 @@
Support cache volume for generic restore exposer and pod volume exposer


@@ -0,0 +1 @@
Use hookIndex for recording multiple restore exec hooks.


@@ -0,0 +1 @@
Fix managed fields patch for resources using GenerateName


@@ -0,0 +1 @@
Track actual resource names for GenerateName in restore status


@@ -0,0 +1 @@
Add cache volume configuration


@@ -0,0 +1 @@
Fix issue #9365, prevent fake completion notification due to multiple update of single PVR


@@ -0,0 +1 @@
Refactor repo provider interface for static configuration


@@ -0,0 +1 @@
don't copy securitycontext from first container if configmap found


@@ -0,0 +1 @@
Cache volume support for DataDownload


@@ -0,0 +1 @@
Cache volume for PVR


@@ -0,0 +1 @@
Fix issue #9400, connect repo first time after creation so that init params could be written


@@ -0,0 +1 @@
Add Prometheus metrics for maintenance jobs


@@ -0,0 +1 @@
Fix issue #9276, add doc for cache volume support


@@ -0,0 +1 @@
Apply volume policies to VolumeGroupSnapshot PVC filtering


@@ -0,0 +1 @@
Fix issue #9194, add doc for GOMAXPROCS behavior change


@@ -0,0 +1 @@
Remove VolumeSnapshotClass from CSI B/R process.


@@ -0,0 +1 @@
Add PVC-to-Pod cache to improve volume policy performance


@@ -0,0 +1 @@
Fix plugin init container names exceeding DNS-1123 limit


@@ -0,0 +1 @@
Add maintenance job and data mover pod's labels and annotations setting.


@@ -0,0 +1 @@
Add Role, RoleBinding, ClusterRole, and ClusterRoleBinding in restore sequence.


@@ -0,0 +1 @@
Fix issue #9478, add diagnose info on expose peek fails


@@ -0,0 +1 @@
Maintenance Job only uses the first element of the LoadAffinity array


@@ -0,0 +1 @@
Remove backup from running list when backup fails validation

go.mod

@@ -1,6 +1,6 @@
module github.com/vmware-tanzu/velero
-go 1.25.7
+go 1.25.0
require (
cloud.google.com/go/storage v1.57.2


@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-FROM --platform=$TARGETPLATFORM golang:1.25.7-bookworm
+FROM --platform=$TARGETPLATFORM golang:1.25-bookworm
ARG GOPROXY
@@ -21,11 +21,9 @@ ENV GO111MODULE=on
ENV GOPROXY=${GOPROXY}
# kubebuilder test bundle is separated from kubebuilder. Need to setup it for CI test.
# Using setup-envtest to download envtest binaries
RUN go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest && \
mkdir -p /usr/local/kubebuilder/bin && \
ENVTEST_ASSETS_DIR=$(setup-envtest use 1.33.0 --bin-dir /usr/local/kubebuilder/bin -p path) && \
cp -r ${ENVTEST_ASSETS_DIR}/* /usr/local/kubebuilder/bin/
RUN curl -sSLo envtest-bins.tar.gz https://go.kubebuilder.io/test-tools/1.22.1/linux/$(go env GOARCH) && \
mkdir /usr/local/kubebuilder && \
tar -C /usr/local/kubebuilder --strip-components=1 -zvxf envtest-bins.tar.gz
RUN wget --quiet https://github.com/kubernetes-sigs/kubebuilder/releases/download/v3.2.0/kubebuilder_linux_$(go env GOARCH) && \
mv kubebuilder_linux_$(go env GOARCH) /usr/local/kubebuilder/bin/kubebuilder && \


@@ -1,5 +1,5 @@
diff --git a/go.mod b/go.mod
index 5f939c481..f6205aa3c 100644
index 5f939c481..6ae17f4a1 100644
--- a/go.mod
+++ b/go.mod
@@ -24,32 +24,31 @@ require (
@@ -14,13 +14,13 @@ index 5f939c481..f6205aa3c 100644
- golang.org/x/term v0.4.0
- golang.org/x/text v0.6.0
- google.golang.org/api v0.106.0
+ golang.org/x/crypto v0.45.0
+ golang.org/x/net v0.47.0
+ golang.org/x/crypto v0.36.0
+ golang.org/x/net v0.38.0
+ golang.org/x/oauth2 v0.28.0
+ golang.org/x/sync v0.18.0
+ golang.org/x/sys v0.38.0
+ golang.org/x/term v0.37.0
+ golang.org/x/text v0.31.0
+ golang.org/x/sync v0.12.0
+ golang.org/x/sys v0.31.0
+ golang.org/x/term v0.30.0
+ golang.org/x/text v0.23.0
+ google.golang.org/api v0.114.0
)
@@ -64,11 +64,11 @@ index 5f939c481..f6205aa3c 100644
)
-go 1.18
+go 1.24.0
+go 1.23.0
+
+toolchain go1.24.11
+toolchain go1.23.7
diff --git a/go.sum b/go.sum
index 026e1d2fa..4a37e7ac7 100644
index 026e1d2fa..805792055 100644
--- a/go.sum
+++ b/go.sum
@@ -1,23 +1,24 @@
@@ -170,8 +170,8 @@ index 026e1d2fa..4a37e7ac7 100644
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.5.0 h1:U/0M97KRkSFvyD/3FSmdP5W5swImpNgle/EHFhOsQPE=
-golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU=
+golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
+golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
+golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
+golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -181,8 +181,8 @@ index 026e1d2fa..4a37e7ac7 100644
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.5.0 h1:GyT4nK/YDHSqa1c4753ouYCDajOYKTja9Xb/OHtgvSw=
-golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
+golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
+golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
+golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
+golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
-golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
-golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
@@ -194,8 +194,8 @@ index 026e1d2fa..4a37e7ac7 100644
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
-golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
+golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
+golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
+golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -205,21 +205,21 @@ index 026e1d2fa..4a37e7ac7 100644
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18=
-golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
+golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
+golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.4.0 h1:O7UWfv5+A2qiuulQk30kVinPoMtoIPeVaKLEgLpVkvg=
-golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
+golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
+golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
+golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
+golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.6.0 h1:3XmdazWV+ubf7QgHSTWeykHOci5oeekaGJBLkrkaw4k=
-golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
+golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
+golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
+golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=


@@ -15,7 +15,6 @@ params:
   latest: v1.17
   versions:
   - main
-  - v1.18
   - v1.17
   - v1.16
   - v1.15


@@ -42,6 +42,46 @@ A command to do this is `make new-changelog CHANGELOG_BODY="Changes you have mad
If a PR does not warrant a changelog, the CI check for a changelog can be skipped by applying a `changelog-not-required` label on the PR. If you are making a PR on a release branch, you should still make a new file in the `changelogs/unreleased` folder on the release branch for your change.
## AI-Generated Content
We welcome contributions from all developers, including those who use AI tools to assist in their work. However, to maintain code quality and ensure contributions are accurate and appropriate, please follow these guidelines:
### Using AI Assistance
**Acceptable use:**
- Using AI tools (like GitHub Copilot, ChatGPT, Claude, etc.) to generate scaffolding or boilerplate code
- Getting AI assistance for writing unit tests
- Using AI to help understand complex code patterns
- AI-assisted debugging and problem-solving
- Using AI to help with documentation writing
**Requirements when using AI:**
1. **Always review and verify** AI-generated content before submitting
2. **Test thoroughly** - ensure the code works as expected in your environment
3. **Verify technical accuracy** - check that all version numbers, configurations, and technical details are correct
4. **Remove placeholders** - ensure there is no example or placeholder content
5. **Understand the code** - be able to explain and defend your changes during code review
6. **Disclose AI usage** - if a significant portion of your PR was AI-generated, mention it in the PR description
### What to Avoid
**Unacceptable practices:**
- Submitting entirely AI-generated PRs or issues without review or verification
- Including hallucinated information (false version numbers, non-existent APIs, etc.)
- Copying AI-generated content with placeholder or example data
- Submitting AI-generated issues describing problems you haven't actually experienced
- Using AI to generate issues about features or bugs without verifying they exist
### For Issues
When creating issues with AI assistance:
- Ensure the issue describes a **real problem** you have experienced
- Verify all version numbers, error messages, and configurations are from your actual environment
- Remove any AI-generated boilerplate or overly formal structure
- Focus on clarity and accuracy over comprehensive formatting
Issues that appear to be entirely AI-generated without proper verification may be labeled as `potential-ai-generated` and flagged for additional review.
## Copyright header
Whenever a source code file is being modified, the copyright notice should be updated to our standard copyright notice. That is, it should read “Copyright the Velero contributors.”


@@ -1,58 +0,0 @@
---
toc: "false"
cascade:
version: v1.18
toc: "true"
---
![100]
[![Build Status][1]][2]
## Overview
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. Velero lets you:
* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
* Replicate your production cluster to development and testing clusters.
Velero consists of:
* A server that runs on your cluster
* A command-line client that runs locally
## Documentation
This site is our documentation home with installation instructions, plus information about customizing Velero for your needs, architecture, extending Velero, contributing to Velero and more.
Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.
## Troubleshooting
If you encounter issues, review the [troubleshooting docs][30], [file an issue][4], or talk to us on the [#velero-users channel][25] on the Kubernetes Slack server.
## Contributing
If you are ready to jump in and test, add code, or help with documentation, follow the instructions on our [Start contributing](https://velero.io/docs/v1.18.0/start-contributing/) documentation for guidance on how to setup Velero for development.
## Changelog
See [the list of releases][6] to find out about feature changes.
[1]: https://github.com/vmware-tanzu/velero/workflows/Main%20CI/badge.svg
[2]: https://github.com/vmware-tanzu/velero/actions?query=workflow%3A"Main+CI"
[4]: https://github.com/vmware-tanzu/velero/issues
[6]: https://github.com/vmware-tanzu/velero/releases
[9]: https://kubernetes.io/docs/setup/
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
[12]: https://github.com/kubernetes/kubernetes/blob/main/cluster/addons/dns/README.md
[14]: https://github.com/kubernetes/kubernetes
[24]: https://groups.google.com/forum/#!forum/projectvelero
[25]: https://kubernetes.slack.com/messages/velero-users
[30]: troubleshooting.md
[100]: img/velero.png


@@ -1,21 +0,0 @@
---
title: "Table of Contents"
layout: docs
---
## API types
Here we list the API types that have some functionality that you can only configure via json/yaml vs the `velero` cli
(hooks)
* [Backup][1]
* [Restore][2]
* [Schedule][3]
* [BackupStorageLocation][4]
* [VolumeSnapshotLocation][5]
[1]: backup.md
[2]: restore.md
[3]: schedule.md
[4]: backupstoragelocation.md
[5]: volumesnapshotlocation.md


@@ -1,19 +0,0 @@
---
layout: docs
title: API types
---
Here's a list of the API types that have some functionality that you can only configure via json/yaml vs the `velero` cli
(hooks)
* [Backup][1]
* [Restore][2]
* [Schedule][3]
* [BackupStorageLocation][4]
* [VolumeSnapshotLocation][5]
[1]: backup.md
[2]: restore.md
[3]: schedule.md
[4]: backupstoragelocation.md
[5]: volumesnapshotlocation.md


@@ -1,211 +0,0 @@
---
title: "Backup API Type"
layout: docs
---
## Use
Use the `Backup` API type to request the Velero server to perform a backup. Once created, the
Velero Server immediately starts the backup process.
## API GroupVersion
Backup belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Backup` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Backup
# Standard Kubernetes metadata. Required.
metadata:
# Backup name. May be any valid Kubernetes object name. Required.
name: a
# Backup namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the backup. Required.
spec:
# CSISnapshotTimeout specifies the time used to wait for
# CSI VolumeSnapshot status turns to ReadyToUse during creation, before
# returning error as timeout. The default value is 10 minute.
csiSnapshotTimeout: 10m
# ItemOperationTimeout specifies the time used to wait for
# asynchronous BackupItemAction operations
# The default value is 4 hour.
itemOperationTimeout: 4h
# resourcePolicy specifies the referenced resource policies that backup should follow
# optional
resourcePolicy:
kind: configmap
name: resource-policy-configmap
# Array of namespaces to include in the backup. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the backup. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the backup. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the backup. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# Order of the resources to be collected during the backup process. It's a map with key being the plural resource
# name, and the value being a list of object names separated by comma. Each resource name has format "namespace/objectname".
# For cluster resources, simply use "objectname". Optional
orderedResources:
pods: mysql/mysql-cluster-replica-0,mysql/mysql-cluster-replica-1,mysql/mysql-cluster-source-0
persistentvolumes: pvc-87ae0832-18fd-4f40-a2a4-5ed4242680c4,pvc-63be1bb0-90f5-4629-a7db-b8ce61ee29b3
# Whether to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
# up are those associated with namespace-scoped resources included in the backup. For example, if a
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
# cluster-scoped) would also be backed up.
includeClusterResources: null
# Array of cluster-scoped resources to exclude from the backup. Resources may be shortcuts
# (for example 'sc' for 'storageclasses'), or fully-qualified. If unspecified,
# no additional cluster-scoped resources are excluded. Optional.
# Cannot work with include-resources, exclude-resources and include-cluster-resources.
excludedClusterScopedResources: {}
# Array of cluster-scoped resources to include from the backup. Resources may be shortcuts
# (for example 'sc' for 'storageclasses'), or fully-qualified. If unspecified,
# no additional cluster-scoped resources are included. Optional.
# Cannot work with include-resources, exclude-resources and include-cluster-resources.
includedClusterScopedResources: {}
# Array of namespace-scoped resources to exclude from the backup. Resources may be shortcuts
# (for example 'cm' for 'configmaps'), or fully-qualified. If unspecified,
# no namespace-scoped resources are excluded. Optional.
# Cannot work with include-resources, exclude-resources and include-cluster-resources.
excludedNamespaceScopedResources: {}
# Array of namespace-scoped resources to include from the backup. Resources may be shortcuts
# (for example 'cm' for 'configmaps'), or fully-qualified. If unspecified,
# all namespace-scoped resources are included. Optional.
# Cannot work with include-resources, exclude-resources and include-cluster-resources.
includedNamespaceScopedResources: {}
# Individual objects must match this label selector to be included in the backup. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# Individual object when matched with any of the label selector specified in the set are to be included in the backup. Optional.
# orLabelSelectors as well as labelSelector cannot co-exist, only one of them can be specified in the backup request
orLabelSelectors:
- matchLabels:
app: velero
- matchLabels:
app: data-protection
# Whether or not to snapshot volumes. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
# a persistent volume provider is configured for Velero.
snapshotVolumes: null
# Where to store the tarball and logs.
storageLocation: aws-primary
# The list of locations in which to store volume snapshots created for this backup.
volumeSnapshotLocations:
- aws-primary
- gcp-primary
# The amount of time before this backup is eligible for garbage collection. If not specified,
# a default value of 30 days will be used. The default can be configured on the velero server
# by passing the flag --default-backup-ttl.
ttl: 24h0m0s
# whether pod volume file system backup should be used for all volumes by default.
defaultVolumesToFsBackup: true
# Whether snapshot data should be moved. If set, data movement is launched after the snapshot is created.
snapshotMoveData: true
# The data mover to be used by the backup. If the value is "" or "velero", the built-in data mover will be used.
datamover: velero
# UploaderConfig specifies the configuration for the uploader
uploaderConfig:
    # ParallelFilesUpload is the number of parallel file uploads to perform when using the uploader.
parallelFilesUpload: 10
# Actions to perform at different times during a backup. The only hook supported is
# executing a command in a container in a pod using the pod exec API. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
-
# Name of the hook. Will be displayed in backup log.
name: my-hook
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- '*'
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to which this hook applies. The only resource supported at this time is
# pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
pre:
-
# The type of hook. This must be "exec".
exec:
# The name of the container where the command will be executed. If unspecified, the
# first container in the pod will be used. Optional.
container: my-container
# The command to execute, specified as an array. Required.
command:
- /bin/uname
- -a
# How to handle an error executing the command. Valid values are Fail and Continue.
# Defaults to Fail. Optional.
onError: Fail
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
timeout: 10s
# An array of hooks to run after all custom actions and additional items have been
# processed. Only "exec" hooks are supported.
post:
# Same content as pre above.
# Status about the Backup. Users should not set any data here.
status:
# The version of this Backup. The only version supported is 1.
version: 1
# The date and time when the Backup is eligible for garbage collection.
expiration: null
# The current phase.
# Valid values are New, FailedValidation, InProgress, WaitingForPluginOperations,
  # WaitingForPluginOperationsPartiallyFailed, Finalizing,
# FinalizingPartiallyFailed, Completed, PartiallyFailed, Failed.
phase: ""
# An array of any validation errors encountered.
validationErrors: null
# Date/time when the backup started being processed.
startTimestamp: 2019-04-29T15:58:43Z
# Date/time when the backup finished being processed.
completionTimestamp: 2019-04-29T15:58:56Z
# Number of volume snapshots that Velero tried to create for this backup.
volumeSnapshotsAttempted: 2
# Number of volume snapshots that Velero successfully created for this backup.
volumeSnapshotsCompleted: 1
# Number of attempted BackupItemAction operations for this backup.
backupItemOperationsAttempted: 2
# Number of BackupItemAction operations that Velero successfully completed for this backup.
backupItemOperationsCompleted: 1
# Number of BackupItemAction operations that ended in failure for this backup.
backupItemOperationsFailed: 0
# Number of warnings that were logged by the backup.
warnings: 2
# Number of errors that were logged by the backup.
errors: 0
# An error that caused the entire backup to fail.
failureReason: ""
```
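Several of the spec fields above map directly to `velero backup create` flags. Below is a minimal sketch; the names match the sample above:
```shell
velero backup create a \
    --exclude-namespaces some-namespace \
    --storage-location aws-primary \
    --ttl 24h0m0s
```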


@@ -1,112 +0,0 @@
---
title: "Velero Backup Storage Locations"
layout: docs
---
## Backup Storage Location
Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.
Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`, however the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.
A sample YAML `BackupStorageLocation` looks like the following:
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
name: default
namespace: velero
spec:
backupSyncPeriod: 2m0s
provider: aws
objectStorage:
bucket: myBucket
credential:
name: secret-name
key: key-in-secret
config:
region: us-west-2
profile: "default"
```
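A backup can target this location explicitly by name; for example, a sketch using the sample's `default` location:
```shell
velero backup create my-backup --storage-location default
```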
### Example with self-signed certificate
When using object storage with self-signed certificates, you can specify the CA certificate:
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
name: default
namespace: velero
spec:
provider: aws
objectStorage:
bucket: velero-backups
# Base64 encoded CA certificate (deprecated - use caCertRef instead)
caCert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR1VENDQXFHZ0F3SUJBZ0lVTWRiWkNaYnBhcE9lYThDR0NMQnhhY3dVa213d0RRWUpLb1pJaHZjTkFRRUwKQlFBd2JERUxNQWtHQTFVRUJoTUNWVk14RXpBUkJnTlZCQWdNQ2tOaGJHbG1iM0p1YVdFeEZqQVVCZ05WQkFjTQpEVk5oYmlCR2NtRnVZMmx6WTI4eEdEQVdCZ05WQkFvTUQwVjRZVzF3YkdVZ1EyOXRjR0Z1ZVRFV01CUUdBMVVFCkF3d05aWGhoYlhCc1pTNXNiMk5oYkRBZUZ3MHlNekEzTVRBeE9UVXlNVGhhRncweU5EQTNNRGt4T1RVeU1UaGEKTUd3eEN6QUpCZ05WQkFZVEFsVlRNUk13RVFZRFZRUUNEQXBEWEJ4cG1iM0p1YVdFeEZqQVVCZ05WQkFjTURWTmgKYmlCR2NtRnVZMmx6WTI4eEdEQVdCZ05WQkFvTUQwVjRZVzF3YkdVZ1EyOXRjR0Z1ZVRFV01CUUdBMVVFQXd3TgpaWGhoYlhCc1pTNXNiMk5oYkRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS1dqCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
config:
region: us-east-1
s3Url: https://minio.example.com
```
#### Using a CA Certificate with Secret Reference (Recommended)
The recommended approach is to use `caCertRef` to reference a Secret containing the CA certificate:
```yaml
# First, create a Secret containing the CA certificate
apiVersion: v1
kind: Secret
metadata:
name: storage-ca-cert
namespace: velero
type: Opaque
data:
ca-bundle.crt: <base64-encoded-certificate>
---
# Then reference it in the BackupStorageLocation
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
name: default
namespace: velero
spec:
provider: aws
objectStorage:
bucket: myBucket
caCertRef:
name: storage-ca-cert
key: ca-bundle.crt
# ... other configuration
```
**Note:** You cannot specify both `caCert` and `caCertRef` in the same BackupStorageLocation. The `caCert` field is deprecated and will be removed in a future version.
### Parameter Reference
The configurable parameters are as follows:
#### Main config parameters
{{< table caption="Main config parameters" >}}
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String | Required Field | The name for whichever object storage provider will be used to store the backups. See [your object storage provider's plugin documentation](../supported-providers) for the appropriate value to use. |
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
| `objectStorage/caCert` | String | Optional Field | **Deprecated**: Use `caCertRef` instead. A base64 encoded CA bundle to be used when verifying TLS connections. |
| `objectStorage/caCertRef` | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core) | Optional Field | Reference to a Secret containing a CA bundle to be used when verifying TLS connections. The Secret must be in the same namespace as the BackupStorageLocation. |
| `objectStorage/caCertRef/name` | String | Required Field (when using caCertRef) | The name of the Secret containing the CA certificate bundle |
| `objectStorage/caCertRef/key` | String | Required Field (when using caCertRef) | The key within the Secret that contains the CA certificate bundle |
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the object store plugin. See [your object storage provider's plugin documentation](../supported-providers) for details. |
| `accessMode` | String | `ReadWrite` | How Velero can access the backup storage location. Valid values are `ReadWrite`, `ReadOnly`. |
| `backupSyncPeriod` | metav1.Duration | Optional Field | How frequently Velero should synchronize backups in object storage. Default is Velero's server backup sync period. Set this to `0s` to disable sync. |
| `validationFrequency` | metav1.Duration | Optional Field | How frequently Velero should validate the object storage. Set this to `0s` to disable validation. Defaults to Velero's server validation frequency (1 minute by default). |
| `credential` | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core) | Optional Field | The credential information to be used with this location. |
| `credential/name` | String | Optional Field | The name of the secret within the Velero namespace which contains the credential information. |
| `credential/key` | String | Optional Field | The key to use within the secret. |
{{< /table >}}


@@ -1,219 +0,0 @@
---
title: "Restore API Type"
layout: docs
---
## Use
The `Restore` API type is used as a request for the Velero server to perform a Restore. Once created, the
Velero Server immediately starts the Restore process.
## API GroupVersion
Restore belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Restore` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Restore
# Standard Kubernetes metadata. Required.
metadata:
# Restore name. May be any valid Kubernetes object name. Required.
name: a-very-special-backup-0000111122223333
# Restore namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the restore. Required.
spec:
# The unique name of the Velero backup to restore from.
backupName: a-very-special-backup
# The unique name of the Velero schedule
# to restore from. If specified, and BackupName is empty, Velero will
# restore from the most recent successful backup created from this schedule.
scheduleName: my-scheduled-backup-name
# ItemOperationTimeout specifies the time used to wait for
# asynchronous BackupItemAction operations
  # The default value is 4 hours.
itemOperationTimeout: 4h
# UploaderConfig specifies the configuration for the restore.
uploaderConfig:
# WriteSparseFiles is a flag to indicate whether write files sparsely or not
writeSparseFiles: true
# ParallelFilesDownload is the concurrency number setting for restore
parallelFilesDownload: 10
# Array of namespaces to include in the restore. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the restore. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the restore. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the restore. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
# restoreStatus selects resources to restore not only the specification, but
# the status of the manifest. This is specially useful for CRDs that maintain
# external references. By default, it excludes all resources.
restoreStatus:
# Array of resources to include in the restore status. Just like above,
# resources may be shortcuts (for example 'po' for 'pods') or fully-qualified.
# If unspecified, no resources are included. Optional.
includedResources:
- workflows
# Array of resources to exclude from the restore status. Resources may be
# shortcuts (for example 'po' for 'pods') or fully-qualified.
# If unspecified, all resources are excluded. Optional.
excludedResources: []
# Whether or not to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
# up are those associated with namespace-scoped resources included in the restore. For example, if a
# PersistentVolumeClaim is included in the restore, its associated PersistentVolume (which is
# cluster-scoped) would also be backed up.
includeClusterResources: null
# Individual objects must match this label selector to be included in the restore. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# Individual object when matched with any of the label selector specified in the set are to be included in the restore. Optional.
# orLabelSelectors as well as labelSelector cannot co-exist, only one of them can be specified in the restore request
orLabelSelectors:
- matchLabels:
app: velero
- matchLabels:
app: data-protection
# namespaceMapping is a map of source namespace names to
# target namespace names to restore into. Any source namespaces not
# included in the map will be restored into namespaces of the same name.
namespaceMapping:
namespace-backup-from: namespace-to-restore-to
# restorePVs specifies whether to restore all included PVs
# from snapshot. Optional
restorePVs: true
# preserveNodePorts specifies whether to restore old nodePorts from backup,
# so that the exposed port numbers on the node will remain the same after restore. Optional
preserveNodePorts: true
# existingResourcePolicy specifies the restore behaviour
# for the Kubernetes resource to be restored. Optional
existingResourcePolicy: none
# ResourceModifier specifies the reference to JSON resource patches
# that should be applied to resources before restoration. Optional
resourceModifier:
kind: ConfigMap
name: resource-modifier-configmap
# Actions to perform during or post restore. The only hooks currently supported are
# adding an init container to a pod before it can be restored and executing a command in a
# restored pod's container. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
# Name is the name of this hook.
- name: restore-hook-1
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- ns1
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- ns3
# Array of resources to which this hook applies. If unspecified, the hook applies to all resources in the backup. Optional.
# The only resource supported at this time is pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run during or after restores. Currently only "init" and "exec" hooks
# are supported.
postHooks:
# The type of the hook. This must be "init" or "exec".
- init:
# An array of container specs to be added as init containers to pods to which this hook applies to.
initContainers:
- name: restore-hook-init1
image: alpine:latest
# Mounting volumes from the podSpec to which this hooks applies to.
volumeMounts:
- mountPath: /restores/pvc1-vm
# Volume name from the podSpec
name: pvc1-vm
command:
- /bin/ash
- -c
- echo -n "FOOBARBAZ" >> /restores/pvc1-vm/foobarbaz
- name: restore-hook-init2
image: alpine:latest
# Mounting volumes from the podSpec to which this hooks applies to.
volumeMounts:
- mountPath: /restores/pvc2-vm
# Volume name from the podSpec
name: pvc2-vm
command:
- /bin/ash
- -c
- echo -n "DEADFEED" >> /restores/pvc2-vm/deadfeed
- exec:
# The container name where the hook will be executed. Defaults to the first container.
# Optional.
container: foo
# The command that will be executed in the container. Required.
command:
- /bin/bash
- -c
- "psql < /backup/backup.sql"
# How long to wait for a container to become ready. This should be long enough for the
# container to start plus any preceding hooks in the same container to complete. The wait
# timeout begins when the container is restored and may require time for the image to pull
# and volumes to mount. If not set the restore will wait indefinitely. Optional.
waitTimeout: 5m
# How long to wait once execution begins. Defaults to 30 seconds. Optional.
execTimeout: 1m
# How to handle execution failures. Valid values are `Fail` and `Continue`. Defaults to
# `Continue`. With `Continue` mode, execution failures are logged only. With `Fail` mode,
# no more restore hooks will be executed in any container in any pod and the status of the
# Restore will be `PartiallyFailed`. Optional.
onError: Continue
# RestoreStatus captures the current status of a Velero restore. Users should not set any data here.
status:
# The current phase.
# Valid values are New, FailedValidation, InProgress, WaitingForPluginOperations,
# WaitingForPluginOperationsPartiallyFailed, Completed, PartiallyFailed, Failed.
phase: ""
# An array of any validation errors encountered.
validationErrors: null
# Number of attempted RestoreItemAction operations for this restore.
restoreItemOperationsAttempted: 2
# Number of RestoreItemAction operations that Velero successfully completed for this restore.
restoreItemOperationsCompleted: 1
# Number of RestoreItemAction operations that ended in failure for this restore.
restoreItemOperationsFailed: 0
# Number of warnings that were logged by the restore.
warnings: 2
# Errors is a count of all error messages that were generated
# during execution of the restore. The actual errors are stored in object
# storage.
errors: 0
# FailureReason is an error that caused the entire restore
# to fail.
failureReason:
```
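The same request can be issued from the CLI; a minimal sketch matching the sample's names:
```shell
velero restore create a-very-special-backup-0000111122223333 \
    --from-backup a-very-special-backup
```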


@@ -1,216 +0,0 @@
---
title: "Schedule API Type"
layout: docs
---
## Use
The `Schedule` API type is used as a repeatable request for the Velero server to perform a backup for a given cron notation. Once created, the
Velero Server will start the backup process. It will then wait for the next valid point of the given cron expression and execute the backup
process on a repeating basis.
### Schedule Control Fields
The Schedule API provides several fields to control backup execution behavior:
- **paused**: When set to `true`, the schedule is paused and no new backups will be created. When set back to `false`, the schedule is unpaused and will resume creating backups according to the cron schedule.
- **skipImmediately**: Controls whether to skip an immediate backup when a schedule is created or unpaused. By default (when `false`), if a backup is due immediately upon creation or unpausing, it will be executed right away. When set to `true`, the controller will:
1. Skip the immediate backup
2. Record the current time in the `lastSkipped` status field
3. Automatically reset `skipImmediately` back to `false` (one-time use)
4. Schedule the next backup based on the cron expression, using `lastSkipped` as the reference time
- **lastSkipped**: A status field (not directly settable) that records when a backup was last skipped due to `skipImmediately` being `true`. The controller uses this timestamp, if more recent than `lastBackup`, to calculate the next scheduled backup time.
This "consume and reset" pattern for `skipImmediately` ensures that after skipping one immediate backup, the schedule returns to normal behavior for subsequent runs without requiring user intervention.
## API GroupVersion
Schedule belongs to the API group version `velero.io/v1`.
## Definition
Here is a sample `Schedule` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Schedule
# Standard Kubernetes metadata. Required.
metadata:
# Schedule name. May be any valid Kubernetes object name. Required.
name: a
# Schedule namespace. Must be the namespace of the Velero server. Required.
namespace: velero
# Parameters about the scheduled backup. Required.
spec:
# Paused specifies whether the schedule is paused or not
paused: false
# SkipImmediately specifies whether to skip backup if schedule is due immediately when unpaused or created.
# This is a one-time flag that will be automatically reset to false after being consumed.
# When true, the controller will skip the immediate backup, set LastSkipped timestamp, and reset this to false.
skipImmediately: false
# Schedule is a Cron expression defining when to run the Backup
schedule: 0 7 * * *
# Specifies whether to use OwnerReferences on backups created by this Schedule.
# Notice: if set to true, when schedule is deleted, backups will be deleted too. Optional.
useOwnerReferencesInBackup: false
# Template is the spec that should be used for each backup triggered by this schedule.
template:
    # CSISnapshotTimeout specifies the time to wait for the
    # CSI VolumeSnapshot status to become ReadyToUse during creation before
    # returning a timeout error. The default value is 10 minutes.
csiSnapshotTimeout: 10m
# resourcePolicy specifies the referenced resource policies that backup should follow
# optional
resourcePolicy:
kind: configmap
name: resource-policy-configmap
# Array of namespaces to include in the scheduled backup. If unspecified, all namespaces are included.
# Optional.
includedNamespaces:
- '*'
# Array of namespaces to exclude from the scheduled backup. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to include in the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. If unspecified, all resources are included. Optional.
includedResources:
- '*'
# Array of resources to exclude from the scheduled backup. Resources may be shortcuts (for example 'po' for 'pods')
# or fully-qualified. Optional.
excludedResources:
- storageclasses.storage.k8s.io
orderedResources:
pods: mysql/mysql-cluster-replica-0,mysql/mysql-cluster-replica-1,mysql/mysql-cluster-source-0
persistentvolumes: pvc-87ae0832-18fd-4f40-a2a4-5ed4242680c4,pvc-63be1bb0-90f5-4629-a7db-b8ce61ee29b3
# Whether to include cluster-scoped resources. Valid values are true, false, and
# null/unset. If true, all cluster-scoped resources are included (subject to included/excluded
# resources and the label selector). If false, no cluster-scoped resources are included. If unset,
# all cluster-scoped resources are included if and only if all namespaces are included and there are
# no excluded namespaces. Otherwise, if there is at least one namespace specified in either
# includedNamespaces or excludedNamespaces, then the only cluster-scoped resources that are backed
# up are those associated with namespace-scoped resources included in the scheduled backup. For example, if a
# PersistentVolumeClaim is included in the backup, its associated PersistentVolume (which is
# cluster-scoped) would also be backed up.
includeClusterResources: null
# Array of cluster-scoped resources to exclude from the backup. Resources may be shortcuts
# (for example 'sc' for 'storageclasses'), or fully-qualified. If unspecified,
# no additional cluster-scoped resources are excluded. Optional.
# Cannot work with include-resources, exclude-resources and include-cluster-resources.
excludedClusterScopedResources: {}
# Array of cluster-scoped resources to include from the backup. Resources may be shortcuts
# (for example 'sc' for 'storageclasses'), or fully-qualified. If unspecified,
# no additional cluster-scoped resources are included. Optional.
# Cannot work with include-resources, exclude-resources and include-cluster-resources.
includedClusterScopedResources: {}
# Array of namespace-scoped resources to exclude from the backup. Resources may be shortcuts
# (for example 'cm' for 'configmaps'), or fully-qualified. If unspecified,
# no namespace-scoped resources are excluded. Optional.
# Cannot work with include-resources, exclude-resources and include-cluster-resources.
excludedNamespaceScopedResources: {}
# Array of namespace-scoped resources to include from the backup. Resources may be shortcuts
# (for example 'cm' for 'configmaps'), or fully-qualified. If unspecified,
# all namespace-scoped resources are included. Optional.
# Cannot work with include-resources, exclude-resources and include-cluster-resources.
includedNamespaceScopedResources: {}
# Individual objects must match this label selector to be included in the scheduled backup. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# Individual object when matched with any of the label selector specified in the set are to be included in the backup. Optional.
# orLabelSelectors as well as labelSelector cannot co-exist, only one of them can be specified in the backup request
orLabelSelectors:
- matchLabels:
app: velero
- matchLabels:
app: data-protection
# Whether to snapshot volumes. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
# a persistent volume provider is configured for Velero.
snapshotVolumes: null
# Where to store the tarball and logs.
storageLocation: aws-primary
# The list of locations in which to store volume snapshots created for backups under this schedule.
volumeSnapshotLocations:
- aws-primary
- gcp-primary
# The amount of time before backups created on this schedule are eligible for garbage collection. If not specified,
# a default value of 30 days will be used. The default can be configured on the velero server
# by passing the flag --default-backup-ttl.
ttl: 24h0m0s
# whether pod volume file system backup should be used for all volumes by default.
defaultVolumesToFsBackup: true
# Whether snapshot data should be moved. If set, data movement is launched after the snapshot is created.
snapshotMoveData: true
# The data mover to be used by the backup. If the value is "" or "velero", the built-in data mover will be used.
datamover: velero
# UploaderConfig specifies the configuration for the uploader
uploaderConfig:
      # ParallelFilesUpload is the number of parallel file uploads to perform when using the uploader.
parallelFilesUpload: 10
# The labels you want on backup objects, created from this schedule (instead of copying the labels you have on schedule object itself).
# When this field is set, the labels from the Schedule resource are not copied to the Backup resource.
metadata:
labels:
labelname: somelabelvalue
# Actions to perform at different times during a backup. The only hook supported is
# executing a command in a container in a pod using the pod exec API. Optional.
hooks:
# Array of hooks that are applicable to specific resources. Optional.
resources:
-
# Name of the hook. Will be displayed in backup log.
name: my-hook
# Array of namespaces to which this hook applies. If unspecified, the hook applies to all
# namespaces. Optional.
includedNamespaces:
- '*'
# Array of namespaces to which this hook does not apply. Optional.
excludedNamespaces:
- some-namespace
# Array of resources to which this hook applies. The only resource supported at this time is
# pods.
includedResources:
- pods
# Array of resources to which this hook does not apply. Optional.
excludedResources: []
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: velero
component: server
# An array of hooks to run before executing custom actions. Only "exec" hooks are supported.
pre:
-
# The type of hook. This must be "exec".
exec:
# The name of the container where the command will be executed. If unspecified, the
# first container in the pod will be used. Optional.
container: my-container
# The command to execute, specified as an array. Required.
command:
- /bin/uname
- -a
# How to handle an error executing the command. Valid values are Fail and Continue.
# Defaults to Fail. Optional.
onError: Fail
# How long to wait for the command to finish executing. Defaults to 30 seconds. Optional.
timeout: 10s
# An array of hooks to run after all custom actions and additional items have been
# processed. Only "exec" hooks are supported.
post:
# Same content as pre above.
status:
# The current phase.
# Valid values are New, Enabled, FailedValidation.
phase: ""
# Date/time of the last backup for a given schedule
lastBackup:
# Date/time when a backup was last skipped due to skipImmediately being true
lastSkipped:
# An array of any validation errors encountered.
validationErrors:
```
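A minimal CLI equivalent of the sample's name and cron expression (a sketch):
```shell
velero schedule create a --schedule="0 7 * * *"
```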


@@ -1,46 +0,0 @@
---
title: "Velero Volume Snapshot Location"
layout: docs
---
## Volume Snapshot Location
A volume snapshot location is the location in which to store the volume snapshots created for a backup.
Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple possible `VolumeSnapshotLocation` per provider, although you can only select one location per provider at backup time.
Each VolumeSnapshotLocation describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.
A sample YAML `VolumeSnapshotLocation` looks like the following:
```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
name: aws-default
namespace: velero
spec:
provider: aws
credential:
name: secret-name
key: key-in-secret
config:
region: us-west-2
profile: "default"
```
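To use this location for a backup, pass its name at backup time; for example, a sketch matching the sample above:
```shell
velero backup create my-backup --volume-snapshot-locations aws-default
```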
### Parameter Reference
The configurable parameters are as follows:
#### Main config parameters
{{< table caption="Main config parameters" >}}
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String | Required Field | The name for whichever storage provider will be used to create/store the volume snapshots. See [your volume snapshot provider's plugin documentation](../supported-providers) for the appropriate value to use. |
| `config` | map[string]string | None (Optional) | Provider-specific configuration keys/values to be passed to the volume snapshotter plugin. See [your volume snapshot provider's plugin documentation](../supported-providers) for details. |
| `credential` | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core) | Optional Field | The credential information to be used with this location. |
| `credential/name` | String | Optional Field | The name of the secret within the Velero namespace which contains the credential information. |
| `credential/key` | String | Optional Field | The key to use within the secret. |
{{< /table >}}


@@ -1,126 +0,0 @@
---
title: "Backup Hooks"
layout: docs
---
Velero supports executing commands in containers in pods during a backup.
## Backup Hooks
When performing a backup, you can specify one or more commands to execute in a container in a pod
when that pod is being backed up. The commands can be configured to run *before* any custom action
processing ("pre" hooks), or after all custom actions have been completed and any additional items
specified by custom action have been backed up ("post" hooks). Note that hooks are _not_ executed within a shell
on the containers.
As of Velero 1.15, related items that must be backed up together are grouped into ItemBlocks, and pod hooks run before and after the ItemBlock is backed up.
In particular, this means that if an ItemBlock contains more than one pod (such as in a scenario where an RWX volume is mounted by multiple pods), pre hooks are run for all pods in the ItemBlock, then the items are backed up, then all post hooks are run.
There are two ways to specify hooks: annotations on the pod itself, and in the Backup spec.
### Specifying Hooks As Pod Annotations
You can use the following annotations on a pod to make Velero execute a hook when backing up the pod:
#### Pre hooks
* `pre.hook.backup.velero.io/container`
* The container where the command should be executed. Defaults to the first container in the pod. Optional.
* `pre.hook.backup.velero.io/command`
* The command to execute. This command is not executed within a shell by default. If a shell is needed to run your command, include a shell command, like `/bin/sh`, that is supported by the container at the beginning of your command. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`. See [examples of using pre hook commands](#backup-hook-commands-examples). Optional.
* `pre.hook.backup.velero.io/on-error`
* What to do if the command returns a non-zero exit code. Defaults to `Fail`. Valid values are Fail and Continue. Optional.
* `pre.hook.backup.velero.io/timeout`
* How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.
#### Post hooks
* `post.hook.backup.velero.io/container`
* The container where the command should be executed. Default is the first container in the pod. Optional.
* `post.hook.backup.velero.io/command`
* The command to execute. This command is not executed within a shell by default. If a shell is needed to run your command, include a shell command, like `/bin/sh`, that is supported by the container at the beginning of your command. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]`. See [examples of using pre hook commands](#backup-hook-commands-examples). Optional.
* `post.hook.backup.velero.io/on-error`
* What to do if the command returns a non-zero exit code. Defaults to `Fail`. Valid values are Fail and Continue. Optional.
* `post.hook.backup.velero.io/timeout`
* How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional.
### Specifying Hooks in the Backup Spec
Please see the documentation on the [Backup API Type][1] for how to specify hooks in the Backup
spec.
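For quick orientation, below is a minimal sketch of a pre hook in a Backup spec. The field names follow the [Backup API Type][1] sample, while the backup, hook, and container names are illustrative:
```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-with-hook
  namespace: velero
spec:
  hooks:
    resources:
    - name: my-hook
      includedResources:
      - pods
      # Runs before any custom action processing ("pre" hook)
      pre:
      - exec:
          container: my-container
          command:
          - /bin/uname
          - -a
          onError: Fail
          timeout: 10s
```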
## Hook Example with fsfreeze
This example walks you through using both pre and post hooks for freezing a file system. Freezing the
file system is useful to ensure that all pending disk I/O operations have completed prior to taking a snapshot.
### Annotations
The Velero [example/nginx-app/with-pv.yaml][2] serves as an example of adding the pre and post hook annotations directly
to your declarative deployment. Below is an example of what updating an object in place might look like.
```shell
kubectl annotate pod -n nginx-example -l app=nginx \
pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' \
pre.hook.backup.velero.io/container=fsfreeze \
post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]' \
post.hook.backup.velero.io/container=fsfreeze
```
Now test the pre and post hooks by creating a backup. You can use the Velero logs to verify that the pre and post
hooks are running and exiting without error.
```shell
velero backup create nginx-hook-test
velero backup get nginx-hook-test
velero backup logs nginx-hook-test | grep hookCommand
```
## Backup hook commands examples
### Multiple commands
To use multiple commands, wrap your target command in a shell and separate them with `;`, `&&`, or other shell conditional constructs.
```shell
pre.hook.backup.velero.io/command='["/bin/bash", "-c", "echo hello > hello.txt && echo goodbye > goodbye.txt"]'
```
#### Using environment variables
You can use environment variables from your pods in your pre and post hook commands by including a shell command before using the environment variable. For example, `MYSQL_ROOT_PASSWORD` is an environment variable defined in a pod called `mysql`. To use `MYSQL_ROOT_PASSWORD` in your pre-hook, you'd include a shell, like `/bin/sh`, before calling your environment variable:
```
pre:
- exec:
container: mysql
command:
- /bin/sh
- -c
- mysql --password=$MYSQL_ROOT_PASSWORD -e "FLUSH TABLES WITH READ LOCK"
onError: Fail
```
Note that the container must support the shell command you use.
## Backup Hook Execution Results
### Viewing Results
Velero records the execution results of hooks, allowing users to obtain this information by running the following command:
```bash
$ velero backup describe <backup name>
```
The displayed results include the number of hooks that were attempted and the number of hooks that failed. Any detailed failure reasons will be present in the `Errors` section if applicable.
```bash
HooksAttempted: 1
HooksFailed: 0
```
[1]: api-types/backup.md
[2]: https://github.com/vmware-tanzu/velero/blob/v1.18.0/examples/nginx-app/with-pv.yaml


@@ -1,167 +0,0 @@
---
title: "Backup Reference"
layout: docs
---
## Exclude Specific Items from Backup
It is possible to exclude individual items from being backed up, even if they match the resource/namespace/label selectors defined in the backup spec. To do this, label the item as follows:
```bash
kubectl label -n <ITEM_NAMESPACE> <RESOURCE>/<NAME> velero.io/exclude-from-backup=true
```
## Parallel Files Upload
If you are using fs-backup with the Kopia uploader or CSI snapshot data movement, you can configure parallel file uploads, which can accelerate the backup:
```bash
velero backup create <BACKUP_NAME> --include-namespaces <NAMESPACE> --parallel-files-upload <NUM> --wait
```
## Specify Backup Orders of Resources of Specific Kind
To back up resources of a specific Kind in a specific order, use the `--ordered-resources` option to specify a mapping from Kinds to an ordered list of specific resources of that Kind. Resource names are separated by commas and are in the format 'namespace/resourcename'; for a cluster-scoped resource, simply use the resource name. Key-value pairs in the mapping are separated by semicolons, and Kind names are in plural form.
```bash
velero backup create backupName --include-cluster-resources=true --ordered-resources 'pods=ns1/pod1,ns1/pod2;persistentvolumes=pv4,pv8' --include-namespaces=ns1
velero backup create backupName --ordered-resources 'statefulsets=ns1/sts1,ns1/sts0' --include-namespaces=ns1
```
## Schedule a Backup
The **schedule** operation allows you to create a backup of your data at a specified time, defined by a [Cron expression](https://en.wikipedia.org/wiki/Cron).
```
velero schedule create NAME --schedule="* * * * *" [flags]
```
Cron schedules use the following format.
```
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │ 7 is also Sunday on some systems)
# │ │ │ │ │
# │ │ │ │ │
# * * * * *
```
For example, the command below creates a backup that runs every day at 3am.
```
velero schedule create example-schedule --schedule="0 3 * * *"
```
This command will create the backup, `example-schedule`, within Velero, but the backup will not be taken until the next scheduled time, 3am. Backups created by a schedule are saved with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*. For a full list of available configuration flags use the Velero CLI help command.
```
velero schedule create --help
```
Once you create the scheduled backup, you can then trigger it manually using the `velero backup` command.
```
velero backup create --from-schedule example-schedule
```
This command will immediately trigger a new backup based on your template for `example-schedule`. This will not affect the backup schedule, and another backup will trigger at the scheduled time.
### Time zone specification
Time zone can be specified in the schedule cron. The format is `CRON_TZ=<timezone> <cron>`.
Specifying a time zone avoids ambiguity when daylight saving time changes. For example, if the schedule is set to run at 3am and daylight saving time changes, the schedule will still run at 3am in the specified time zone.
Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run!
For example, the command below creates a backup that runs every day at 3am in the timezone `America/New_York`.
```
velero schedule create example-schedule --schedule="CRON_TZ=America/New_York 0 3 * * *"
```
As another example, the command below creates a backup that runs every day at 3am in the timezone `Asia/Shanghai`.
```
velero schedule create example-schedule --schedule="CRON_TZ=Asia/Shanghai 0 3 * * *"
```
The supported timezone names are listed in the [IANA Time Zone Database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List) under 'TZ identifier'.
<!--
cron's WithLocation functions uses time.Location as parameter, and [time.LoadLocation](https://pkg.go.dev/time#LoadLocation) support names from IANA timezone database in following locations in this order
- the directory or uncompressed zip file named by the ZONEINFO environment variable
- on a Unix system, the system standard installation location
- $GOROOT/lib/time/zoneinfo.zip
- the time/tzdata package, if it was imported
-->
### Limitation
#### Backup's OwnerReference with Schedule
Backups created from a schedule can have an owner reference to the schedule. This can be achieved with the command:
```
velero schedule create --use-owner-references-in-backup <schedule-name>
```
This way, the schedule owns the backups it creates. This is useful for some GitOps scenarios, or when the k8s resource tree is synchronized from elsewhere.
Be aware there is also a side effect that may not be expected. Because the schedule is the owner, when the schedule is deleted, the related backup CRs are deleted by the k8s GC controller too (only the backup CR is deleted; the backup data still exists in the object store and snapshots). But the Velero controller will sync these backups from the object store's metadata back into k8s, so the k8s GC controller and the Velero controller will fight over whether these backups should exist all along.
If there is a possibility the schedule will be disabled so it no longer creates backups, while the already-created backups are still useful, please do not enable this option. For details, please refer to [Backups created by a schedule with useOwnerReferenceInBackup set do not get synced properly](https://github.com/vmware-tanzu/velero/issues/4093).
Some GitOps tools have configurations to avoid pruning the day 2 backups generated from the schedule.
For example, the ArgoCD has two ways to do that:
* Add annotations to the schedule. This method makes ArgoCD exclude the schedule from syncing, so the generated backups are ignored too, but it has a side effect: when the schedule is removed from the GitOps manifest, it is not deleted from the cluster, and the user needs to delete it manually.
``` yaml
annotations:
argocd.argoproj.io/compare-options: IgnoreExtraneous
argocd.argoproj.io/sync-options: Delete=false,Prune=false
```
* If ArgoCD is deployed by ArgoCD-Operator, there is another option: [resourceExclusions](https://argocd-operator.readthedocs.io/en/latest/reference/argocd/#resource-exclusions-example). The following example makes the ArgoCD operator ignore `Backup` and `Restore` in the `velero.io` group in the `velero` namespace for all managed k8s clusters.
``` yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
name: velero-argocd
namespace: velero
spec:
resourceExclusions: |
- apiGroups:
- velero.io
kinds:
- Backup
- Restore
clusters:
- "*"
```
#### Cannot support backup data immutability
Starting from 1.11, Velero's backups may not work as expected when the target object storage has some kind of an "immutability" option configured. These options are known by different names (see links below for some examples). The main reason is that Velero first saves the state of a backup as Finalizing and then checks whether there are any async operations in progress. If there are, it needs to wait for all of them to be finished before moving the backup state to Complete. If there are no async operations, the state is moved to Complete right away. In either case, Velero needs to modify the metadata in object storage and that will not be possible if some kind of immutability is configured on the object storage.
Even with versions prior to 1.11, there was no explicit support in Velero for object storage with an "immutability" configuration. As a result, you may see some problems even though backups seem to work (e.g. version objects not being deleted when a backup is deleted).
Note that backups may still work in some cases depending on specific providers and configurations.
* For AWS S3 service, backups work because S3's object lock only applies to versioned buckets, and the object data can still be updated as the new version. But when backups are deleted, old versions of the objects will not be deleted.
* Azure Storage Blob supports both versioned-level immutability and container-level immutability. For the versioned-level scenario, data immutability can still work in Velero, but the container-level cannot.
* GCP Cloud storage policy only supports bucket-level immutability, so there is no way to make it work in the GCP environment.
The following are the links to cloud providers' documentation in this regard:
* [AWS S3 Using S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html)
* [Azure Storage Blob Containers - Lock Immutability Policy](https://learn.microsoft.com/en-us/azure/storage/blobs/immutable-policy-configure-version-scope?tabs=azure-portal)
* [GCP cloud storage Retention policies and retention policy locks](https://cloud.google.com/storage/docs/bucket-lock)
## Kubernetes API Pagination
By default, Velero will paginate the LIST API call for each resource type in the Kubernetes API when collecting items into a backup. The `--client-page-size` flag for the Velero server configures the size of each page.
Depending on the cluster's scale, tuning the page size can improve backup performance. You can experiment with higher values, noting their impact on the relevant `apiserver_request_duration_seconds_*` metrics from the Kubernetes apiserver.
Pagination can be entirely disabled by setting `--client-page-size` to `0`. This will request all items in a single unpaginated LIST call.
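One way to change the flag on an existing installation is to append it to the server Deployment's arguments. Below is a sketch, assuming the default `velero` Deployment in the `velero` namespace and an illustrative page size:
```shell
# Append --client-page-size to the velero server container's args
kubectl -n velero patch deployment velero --type json \
    -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--client-page-size=700"}]'
```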
## Deleting Backups
Use the following commands to delete Velero backups and data:
* `kubectl delete backup <backupName> -n <veleroNamespace>` will delete the backup custom resource only and will not delete any associated data from object/block storage
* `velero backup delete <backupName>` will delete the backup resource including all data in object/block storage


@@ -1,63 +0,0 @@
---
title: "Backup Repository Configuration"
layout: docs
---
Velero uses selectable backup repositories for various backup/restore methods, e.g., [file-system backup][1], [CSI snapshot data movement][2], etc. To achieve the best performance, backup repositories may need to be configured according to the running environment.
Velero uses a BackupRepository CR to represent the instance of the backup repository. A field named `repositoryConfig` supports passing various configurations to the underlying backup repository.
Velero also allows you to specify configurations through a configMap before the BackupRepository CR is created. The configurations in the configMap will be copied to the BackupRepository CR when it is created at the due time.
The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace, applying only to the Velero instance in that namespace. The name of the configMap should be specified in the Velero server parameter `--backup-repository-configmap`.
Users can specify the ConfigMap name during Velero installation via the CLI:
`velero install --backup-repository-configmap=<ConfigMap-Name>`
In summary, you have two ways to add/change/delete configurations for a backup repository:
- If the BackupRepository CR for the backup repository already exists, modify its `repositoryConfig` field. The changes are applied to the backup repository at the due time; a Velero server restart is not required.
- Otherwise, you can create the backup repository configMap as a template for the BackupRepository CRs that are going to be created.
The backup repository configMap is specific to the repository type (i.e., kopia, restic), so for one repository type you only need to create one set of configurations; they will be applied to all BackupRepository CRs of that type. Changes to the `repositoryConfig` field, by contrast, apply only to that specific BackupRepository CR, so you may need to change every BackupRepository CR of the same type.
Below is an example of the BackupRepository configMap with the configurations:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: <config-name>
namespace: velero
data:
<kopia>: |
{
"cacheLimitMB": 2048,
"fullMaintenanceInterval": "fastGC"
}
<other-repository-type>: |
{
"cacheLimitMB": 1024
}
```
To create the configMap, save something like the above sample to a file and then run the following command:
```shell
kubectl apply -f <yaml-file-name>
```
When and how the configurations are used is decided by the backup repository itself. Though you can put any configuration into the configMap or `repositoryConfig`, the backup repository may or may not use it, and it may be used at an arbitrary time.
Below are the configurations supported by Velero and the specific backup repositories.
***Kopia repository:***
`cacheLimitMB`: specifies the size limit (in MB) for the local data cache. The more data is cached locally, the less needs to be downloaded from the backup storage, so the better the performance. Practically, you can specify any size smaller than the free disk space so that the disk space won't run out. This parameter applies at repository connection time, so you can change it before connecting to the repository, e.g., before a backup/restore/maintenance.
`fullMaintenanceInterval`: The full maintenance interval defaults to Kopia's default of 24 hours. The override options below allow faster removal of deleted Velero backups from the Kopia repo.
- normalGC: 24 hours
- fastGC: 12 hours
- eagerGC: 6 hours
Per Kopia [Maintenance Safety](https://kopia.io/docs/advanced/maintenance/#maintenance-safety), Velero backup deletion is not expected to result in immediate Kopia repository data removal. Reducing the full maintenance interval using the options above should reduce the time taken to remove blobs that are no longer in use.
On the other hand, not-in-use data is deleted permanently after full maintenance, so shorter full maintenance intervals may weaken data safety if used incorrectly.
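For an existing repository, the interval can be set directly on the BackupRepository CR's `repositoryConfig` field. Below is a sketch, where `<repo-name>` is the name of your BackupRepository CR:
```shell
kubectl -n velero patch backuprepository <repo-name> --type merge \
    -p '{"spec":{"repositoryConfig":{"fullMaintenanceInterval":"fastGC"}}}'
```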
[1]: file-system-backup.md
[2]: csi-snapshot-data-movement.md


@@ -1,79 +0,0 @@
---
title: "Backup Restore Windows Workloads"
layout: docs
---
## Prerequisites
Velero supports backing up and restoring Windows workloads, whether stateless or stateful.
To keep compatibility with existing Velero plugins, the Velero server runs on linux nodes only, so Velero requires at least one linux node in the cluster. Since running the Velero server on the control plane is not recommended, a linux worker node is required. For the resource requirements of the linux node running the Velero server, see [Customize resource requests and limits][1].
Velero is built and tested with the `windows/amd64/ltsc2022` container only; older Windows versions, such as Windows Server 2019, are not supported.
For volume backups, CSI and CSI snapshot should be supported by the storage.
## Installation
As mentioned in [Image building][2], a hybrid image is provided for all platforms, so you don't need to set different images for linux and Windows clusters; you can always use the all-in-one image, e.g., `velero/velero:v1.16.0` or `velero/velero:main`.
In order to backup/restore volumes for stateful workloads, Velero node-agent needs to run in the Windows nodes. Velero provides a dedicated daemonset for Windows nodes, called `node-agent-windows`.
Therefore, in a typical cluster with linux and Windows nodes, there are two daemonsets for the Velero node-agent: the existing `node-agent` daemonset for linux nodes, and the `node-agent-windows` daemonset for Windows nodes.
If you want to install the `node-agent` daemonset, specify the `--use-node-agent` parameter in the `velero install` command; if you want to install the `node-agent-windows` daemonset, specify the `--use-node-agent-windows` parameter.
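For example, a mixed linux/Windows cluster might enable both daemonsets at install time. A sketch follows; the provider and bucket are illustrative, and other required install flags (e.g., credentials and plugins) are omitted:
```shell
velero install \
    --provider aws \
    --bucket velero-backups \
    --use-node-agent \
    --use-node-agent-windows
```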
## Resource backup restore
Resource backup/restore for Windows workloads is done by the Velero server in the same way as for linux workloads.
Since the Velero server runs on linux nodes only, all the existing plugins, i.e., BIA, RIA, and BackupStore plugins, can be started by Velero in a cluster with Windows nodes. However, whether and how the plugins work with Windows workloads is decided by the plugins themselves.
It is recommended that plugin providers test thoroughly with Velero in Windows cluster environments, and:
- If they need to support Windows workloads, make the necessary modifications to ensure their plugins work well with Windows workloads
- If they don't want to support Windows workloads, or part of the Windows workloads, ensure the plugins won't cause any failure or crash when they process the undesired Windows workload items
## Volume backup restore
Below is the support status of Windows workload volumes for different backup methods:
- CSI snapshot data movement: block volumes (i.e., vSphere CNS Block Volume, Azure Disk, AWS EBS, GCP Persistent Disk, etc.) are fully supported; file volumes (i.e., vSphere CNS File Volume, Azure File, AWS EFS, GCP Filestore, etc.) are not tested or officially supported. This is the same as for linux workloads
- CSI snapshot backup: block volumes (i.e., vSphere CNS Block Volume, Azure Disk, AWS EBS, GCP Persistent Disk, etc.) are fully supported; file volumes (i.e., vSphere CNS File Volume, Azure File, AWS EFS, GCP Filestore, etc.) are not tested or officially supported. This is the same as for linux workloads
- native snapshot backup: supported, the same as for linux workloads
- file system backup: at present, NOT supported
For volume backups/restores conducted through Velero plugins, the supportive status is decided by the plugin themselves.
### CSI snapshot data movement
During backup, Velero automatically identifies the OS type of the workload and schedules data mover pods to the right nodes. Specifically, Linux nodes in the cluster are used for a Linux workload, and Windows nodes are used for a Windows workload.
You can view the OS type that a data mover pod runs with in the `nodeOS` field of the DataUpload status.
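For example, assuming the default `velero` namespace, you could list the recorded node OS for each DataUpload like this (a sketch; the resource name and field path follow the DataUpload status described above):
```bash
kubectl -n velero get datauploads.velero.io \
    -o custom-columns=NAME:.metadata.name,NODEOS:.status.nodeOS
```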
Velero takes several measures to deduce the OS type for workload volumes, from PVCs, VolumeAttachment CRs, nodes, and storage classes. If Velero fails to deduce the OS type, it falls back to Linux, and the data mover pods are then scheduled to Linux nodes. As a result, the data mover pods may not be able to start, the corresponding DataUploads will be cancelled because of timeout, and the backup will partially fail.
Therefore, it is highly recommended that you provide a dedicated storage class for Windows workload volumes and set `csi.storage.k8s.io/fstype` correctly. E.g., for Linux workload volumes, set `csi.storage.k8s.io/fstype=ext4`; for Windows workload volumes, set `csi.storage.k8s.io/fstype=ntfs`.
Specifically, if you have X storage classes for Linux workloads, you need to create another X storage classes for Windows workloads.
This helps Velero deduce the right OS type reliably, especially when you are backing up the following kinds of volumes belonging to a Windows workload:
- The PVC is with Immediate mode
- There is no pod mounting the PVC at the time of backup
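As a sketch of the dedicated storage class recommended above (the class name and the `ebs.csi.aws.com` provisioner are placeholders; substitute your own CSI driver and parameters):
```bash
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: windows-ntfs          # hypothetical name
provisioner: ebs.csi.aws.com  # placeholder: use your CSI driver
parameters:
  csi.storage.k8s.io/fstype: ntfs
volumeBindingMode: WaitForFirstConsumer
EOF
```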
For restore, Velero automatically inherits the OS type from the backup, so no deduction process is required.
For other information, check [CSI Snapshot Data Movement][3].
## Backup Repository Maintenance job
Backup Repository Maintenance jobs and pods are supported on Windows nodes; that is, you can use the full node resources of a cluster with Windows nodes for Backup Repository Maintenance. For more information, check [Repository Maintenance][4].
## Backup restore hooks
Pre/post backup/restore hooks are supported for Windows workloads; the commands run on the same Windows nodes that host the workload pods. For more information, check [Backup Hooks][5] and [Restore Hooks][6].
## Limitations
NTFS extended attributes/advanced features are not supported, e.g., Security Descriptors, System/Hidden/ReadOnly attributes, Creation Time, NTFS Streams, etc. That is, after backup/restore, this data will be lost.
[1]: customize-installation.md#customize-resource-requests-and-limits
[2]: build-from-source.md#image-building
[3]: csi-snapshot-data-movement.md
[4]: repository-maintenance.md
[5]: backup-hooks.md
[6]: restore-hooks.md

View File

@@ -1,73 +0,0 @@
---
title: "Basic Install"
layout: docs
---
Use this doc to get a basic installation of Velero.
Refer to [this document](customize-installation.md) to customize your installation, including setting priority classes for Velero components.
## Prerequisites
- Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled. For more information on supported Kubernetes versions, see the Velero [compatibility matrix](https://github.com/vmware-tanzu/velero#velero-compatibility-matrix).
- `kubectl` installed locally
Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you'll be using from the list of [compatible providers][0].
Velero supports storage providers for both cloud-provider environments and on-premises environments. For more details on on-premises scenarios, see the [on-premises documentation][2].
### Velero on Windows
Velero supports backing up and restoring Windows workloads, whether stateless or stateful.
The Velero node-agent and data mover pods can run on Windows nodes. To keep compatibility with the existing Velero plugins, the Velero server runs on Linux nodes only, so Velero requires at least one Linux node in the cluster. Velero provides Windows images for specific Windows versions. For more information see [Backup Restore Windows Workloads][6].
## Install the CLI
### Option 1: MacOS - Homebrew
On macOS, you can use [Homebrew](https://brew.sh) to install the `velero` client:
```bash
brew install velero
```
### Option 2: GitHub release
1. Download the [latest release][1]'s tarball for your client platform.
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz
```
1. Move the extracted `velero` binary to somewhere in your `$PATH` (`/usr/local/bin` for most users).
### Option 3: Windows - Chocolatey
On Windows, you can use [Chocolatey](https://chocolatey.org/install) to install the [velero](https://chocolatey.org/packages/velero) client:
```powershell
choco install velero
```
## Install and configure the server components
There are two supported methods for installing the Velero server components:
- the `velero install` CLI command
- the [Helm chart](https://vmware-tanzu.github.io/helm-charts/)
Velero uses storage provider plugins to integrate with a variety of storage systems to support backup and snapshot operations. The steps to install and configure the Velero server components along with the appropriate plugins are specific to your chosen storage provider. To find installation instructions for your chosen storage provider, follow the documentation link for your provider at our [supported storage providers][0] page.
_Note: if your object storage provider is different than your volume snapshot provider, follow the installation instructions for your object storage provider first, then return here and follow the instructions to [add your volume snapshot provider][4]._
## Command line Autocompletion
Please refer to [this part of the documentation][5].
[0]: supported-providers.md
[1]: https://github.com/vmware-tanzu/velero/releases/latest
[2]: on-premises.md
[3]: overview-plugins.md
[4]: customize-installation.md#install-an-additional-volume-snapshot-provider
[5]: customize-installation.md#optional-velero-cli-configurations
[6]: backup-restore-windows.md

View File

@@ -1,198 +0,0 @@
---
title: "Build from source"
layout: docs
---
## Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later.
* A DNS server on the cluster
* `kubectl` installed
* [Go][5] installed (minimum version 1.8)
## Get the source
### Option 1) Get latest (recommended)
```bash
mkdir $HOME/go
export GOPATH=$HOME/go
go get github.com/vmware-tanzu/velero
```
Here `$HOME/go` is your [import path][4] for Go.
For Go development, it is recommended to add the Go import path (`$HOME/go` in this example) to your path.
### Option 2) Release archive
Download the archive named `Source code` from the [release page][22] and extract it in your Go import path as `src/github.com/vmware-tanzu/velero`.
Note that the Makefile targets assume building from a git repository. When building from an archive, you will be limited to the `go build` commands described below.
## Build
There are a number of different ways to build `velero` depending on your needs. This section outlines the main possibilities.
When building with `make`, the binaries are placed under `_output/bin/$GOOS/$GOARCH`. For example, you will find the binary for darwin at `_output/bin/darwin/amd64/velero` and the binary for linux at `_output/bin/linux/amd64/velero`. `make` also splices in version and git commit information so that `velero version` displays proper output.
Note: `velero install` will also use the version information to determine which tagged image to deploy. If you would like to override which image gets deployed, use the `--image` flag (see below for instructions on how to build images).
### Build the binary
To build the `velero` binary on your local machine, compiled for your OS and architecture, run one of these two commands:
```bash
go build ./cmd/velero
```
```bash
make local
```
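As a quick check of the version splicing mentioned above (assuming your host's OS/arch matches the build target; the `--client-only` flag avoids contacting a cluster):
```bash
make local
_output/bin/$(go env GOOS)/$(go env GOARCH)/velero version --client-only
```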
### Cross compiling
To build the velero binary targeting linux/amd64 within a build container on your local machine, run:
```bash
make build
```
For any specific platform, run `make build-<GOOS>-<GOARCH>`.
For example, to build for the Mac, run `make build-darwin-amd64`.
Velero's `Makefile` has a convenience target, `all-build`, that builds the following platforms:
* linux-amd64
* linux-arm
* linux-arm64
* linux-ppc64le
* darwin-amd64
* windows-amd64
## Making images and updating Velero
If after installing Velero you would like to change the image used by its deployment to one that contains your code changes, you may do so by updating the image:
```bash
kubectl -n velero set image deploy/velero velero=myimagerepo/velero:$VERSION
```
To build a Velero container image, you need to configure `buildx` first.
### Buildx
Docker Buildx is a CLI plugin that extends the docker command with full support for the features provided by the Moby BuildKit builder toolkit. It provides the same user experience as `docker build`, with many new features like creating scoped builder instances and building against multiple nodes concurrently.
More information in the [docker docs][23] and in the [buildx github][24] repo.
### Image building
#### Build local image
If you want to build an image with the same OS type and CPU architecture as your local machine, you can keep most of the build parameters at their defaults.
Run the command below to build the local image:
```bash
make container
```
Optionally, set the `$VERSION` environment variable to change the image tag or `$BIN` to change which binary to build a container image for.
Optionally, you can set the `$REGISTRY` environment variable. For example, if you want to build the `gcr.io/my-registry/velero:main` image, set `$REGISTRY` to `gcr.io/my-registry`. If this variable is not set, the default is `velero`.
The image is kept on the local machine; you can run `docker push` to push it to the specified registry or, if none is specified, to Docker Hub by default.
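For example, to build and push an image tagged `gcr.io/my-registry/velero:v1.16.0` (registry and tag are placeholders):
```bash
REGISTRY=gcr.io/my-registry VERSION=v1.16.0 make container
docker push gcr.io/my-registry/velero:v1.16.0
```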
#### Build hybrid image
You can also build a hybrid image that supports multiple OS types or CPU architectures. A hybrid image contains a manifest list with one or more manifests each of which maps to a single `os type/arch/os version` configuration.
Below `os type/arch/os version` configurations are tested and supported:
* `linux/amd64`
* `linux/arm64`
* `windows/amd64/ltsc2022`
The hybrid image must be pushed to a registry, as the local system doesn't support all the manifests in the image, so the `BUILDX_OUTPUT_TYPE` parameter must be set to `registry`.
By default, `$REGISTRY` is set as `velero`, you can change it to your own registry.
To build a hybrid image, the following one-time setup is necessary:
1. If you are building cross-platform container images
```bash
$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
```
2. Create and bootstrap a new docker buildx builder
```bash
$ docker buildx create --use --name builder
builder
$ docker buildx inspect --bootstrap
[+] Building 2.6s (1/1) FINISHED
=> [internal] booting buildkit 2.6s
=> => pulling image moby/buildkit:buildx-stable-1 1.9s
=> => creating container buildx_buildkit_builder0 0.7s
Name: builder
Driver: docker-container
Nodes:
Name: builder0
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
```
NOTE: Without the above setup, the output of `docker buildx inspect --bootstrap` will be:
```bash
$ docker buildx inspect --bootstrap
Name: default
Driver: docker
Nodes:
Name: default
Endpoint: default
Status: running
Platforms: linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
```
And the `REGISTRY=myrepo BUILDX_OUTPUT_TYPE=registry make container` command will fail with the error below:
```bash
$ REGISTRY=ashishamarnath BUILDX_PLATFORMS=linux/arm64 BUILDX_OUTPUT_TYPE=registry make container
auto-push is currently not implemented for docker driver
make: *** [container] Error 1
```
Having completed the above one-time setup, the output of `docker buildx inspect --bootstrap` should now look like:
```bash
$ docker buildx inspect --bootstrap
Name: builder
Driver: docker-container
Nodes:
Name: builder0
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v
```
Now build and push the container image by running the `make container` command with `$BUILDX_OUTPUT_TYPE` set to `registry`.
The command below builds a hybrid image with the single configuration `linux/amd64`:
```bash
$ REGISTRY=myrepo BUILDX_OUTPUT_TYPE=registry make container
```
The command below builds a hybrid image with the configurations `linux/amd64` and `linux/arm64`:
```bash
$ REGISTRY=myrepo BUILDX_OUTPUT_TYPE=registry BUILD_ARCH=amd64,arm64 make container
```
The command below builds a hybrid image with the configurations `linux/amd64`, `linux/arm64`, and `windows/amd64/ltsc2022`:
```bash
$ REGISTRY=myrepo BUILDX_OUTPUT_TYPE=registry BUILD_OS=linux,windows BUILD_ARCH=amd64,arm64 make container
```
Note: if you want to update the image but not change its name, you will have to trigger Kubernetes to pick up the new image. One way of doing so is by deleting the Velero deployment pod and node-agent pods:
```bash
kubectl -n velero delete pods -l deploy=velero
```
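If the node-agent is installed, its pods can be recreated the same way; the label selector below is an assumption and may differ across Velero versions:
```bash
# selector assumed; verify with `kubectl -n velero get pods --show-labels`
kubectl -n velero delete pods -l name=node-agent
```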
[4]: https://blog.golang.org/organizing-go-code
[5]: https://golang.org/doc/install
[22]: https://github.com/vmware-tanzu/velero/releases
[23]: https://docs.docker.com/buildx/working-with-buildx/
[24]: https://github.com/docker/buildx

View File

@@ -1,171 +0,0 @@
---
title: "Code Standards"
layout: docs
toc: "true"
---
## Opening PRs
When opening a pull request, please fill out the checklist supplied in the template. This will help others properly categorize and review your pull request.
### PR title
Make sure that the pull request title summarizes the change made (and not just "fixes issue #xxxx"):
Example PR titles:
- "Check for nil when validating foo"
- "Issue #1234: Check for nil when validating foo"
### Cherry-pick PRs
When a PR to main needs to be cherry-picked to a release branch, please wait until the main PR is merged before creating the CP PR. If the CP PR is made before the main PR is merged, there is a risk that PR modifications made in response to review comments will not make it into the CP PR.
The Cherry-pick PR title should reference the branch it's cherry-picked to and the fact that it's a CP of a commit to main:
- "[release-1.13 CP] Issue #1234: Check for nil when validating foo"
## Adding a changelog
Authors are expected to include a changelog file with their pull requests. The changelog file
should be a new file created in the `changelogs/unreleased` folder. The file should follow the
naming convention of `<pr-number>-<username>`, and the contents of the file should be your text
for the changelog.
    velero/changelogs/unreleased   <- folder
        000-username               <- file
Add that to the PR.
A command to do this is `make new-changelog CHANGELOG_BODY="Changes you have made"`
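Equivalently, you can create the file by hand (the PR number and username below are placeholders):
```bash
mkdir -p changelogs/unreleased
echo "Check for nil when validating foo" > changelogs/unreleased/1234-myuser
```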
If a PR does not warrant a changelog, the CI check for a changelog can be skipped by applying a `changelog-not-required` label on the PR. If you are making a PR on a release branch, you should still make a new file in the `changelogs/unreleased` folder on the release branch for your change.
## Copyright header
Whenever a source code file is being modified, the copyright notice should be updated to our standard copyright notice. That is, it should read “Copyright the Velero contributors.”
For new files, the entire copyright and license header must be added.
Please note that doc files do not need a copyright header.
## Code
- Log messages are capitalized.
- Error messages are kept lower-cased.
- Wrap/add a stack only to errors that are being directly returned from non-velero code, such as an API call to the Kubernetes server.
```go
errors.WithStack(err)
```
- Prefer to use the utilities in the Kubernetes package [`sets`](https://godoc.org/github.com/kubernetes/apimachinery/pkg/util/sets).
```go
k8s.io/apimachinery/pkg/util/sets
```
## Imports
For imports, we use the following convention:
`<group><version><api | client | informer | ...>`
Example:
    import (
        corev1api "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
        corev1listers "k8s.io/client-go/listers/core/v1"
        velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
        velerov1client "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/typed/velero/v1"
    )
## Mocks
We use a package to generate mocks for our interfaces.
Example: if you want to change this mock: https://github.com/vmware-tanzu/velero/blob/v1.18.0/pkg/podvolume/mocks/restorer.go
Run:
```bash
go get github.com/vektra/mockery/.../
cd pkg/podvolume
mockery -name=Restorer
```
You might need to run `make update` to update the imports.
## Kubernetes Labels
When generating label values, be sure to pass them through the `label.GetValidName()` helper function.
This will help ensure that the values are the proper length and format to be stored and queried.
In general, UIDs are safe to persist as label values.
This function is not relevant to annotation values, which do not have restrictions.
## DCO Sign off
All authors to the project retain copyright to their work. However, to ensure
that they are only submitting work that they have rights to, we are requiring
everyone to acknowledge this by signing their work.
Any copyright notices in this repo should specify the authors as "the Velero contributors".
To sign your work, just add a line like this at the end of your commit message:
```
Signed-off-by: Joe Beda <joe@heptio.com>
```
This can easily be done with the `--signoff` option to `git commit`.
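For example:
```bash
git commit --signoff -m "Check for nil when validating foo"
```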
By doing this you state that you can certify the following (from [https://developercertificate.org/](https://developercertificate.org/)):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```

View File

@@ -1,102 +0,0 @@
---
title: "Use IBM Cloud Object Storage as Velero's storage destination."
layout: docs
---
You can deploy Velero on IBM [Public][5] or [Private][4] clouds, or on any other Kubernetes cluster, and use IBM Cloud Object Storage as a destination for Velero's backups.
To set up IBM Cloud Object Storage (COS) as Velero's destination, you:
* Download an official release of Velero
* Create your COS instance
* Create an S3 bucket
* Define a service that can store data in the bucket
* Configure and start the Velero server
## Download Velero
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the main branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
The directory you extracted is called the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
## Create COS instance
If you don't have a COS instance, you can create a new one, according to the detailed instructions in [Creating a new resource instance][1].
## Create an S3 bucket
Velero requires an object storage bucket to store backups in. See instructions in [Create some buckets to store your data][2].
## Define a service that can store data in the bucket
The process of creating service credentials is described in [Service credentials][3].
Several comments:
1. The Velero service will write its backup into the bucket, so it requires the “Writer” access role.
2. Velero uses an AWS S3-compatible API, which means it authenticates using a signature created from a pair of access and secret keys, i.e., a set of HMAC credentials. You can create these HMAC credentials by specifying `{"HMAC":true}` as an optional inline parameter. See the [HMAC credentials][31] guide.
3. After successfully creating a Service credential, you can view the JSON definition of the credential. Under the `cos_hmac_keys` entry there are `access_key_id` and `secret_access_key`. Use them in the next step.
4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
aws_access_key_id=<ACCESS_KEY_ID>
aws_secret_access_key=<SECRET_ACCESS_KEY>
```
Here the access key ID and secret access key are the values that you obtained above.
## Install and start Velero
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it.
```bash
velero install \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --secret-file ./credentials-velero \
    --plugins velero/velero-plugin-for-aws:v1.10.0 \
    --use-volume-snapshots=false \
    --backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT>,checksumAlgorithm=""
```
Velero does not have a volume snapshot plugin for IBM Cloud, so creating volume snapshots is disabled.
Additionally, you can specify `--use-node-agent` to enable [File System Backup][16], and `--wait` to wait for the deployment to be ready.
(Optional) Specify [CPU and memory resource requests and limits][15] for the Velero/node-agent pods.
Once the installation is complete, remove the default `VolumeSnapshotLocation` that was created by `velero install`, since it's specific to AWS and won't work for IBM Cloud:
```bash
kubectl -n velero delete volumesnapshotlocation.velero.io default
```
For more complex installation needs, use either the Helm chart, or add the `--dry-run -o yaml` options to generate the YAML representation of the installation.
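For example, to generate the installation YAML without applying it (same placeholder flags as the install command above):
```bash
velero install \
    --provider aws \
    --bucket <YOUR_BUCKET> \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT>,checksumAlgorithm="" \
    --dry-run -o yaml > velero-install.yaml
```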
## Installing the nginx example (optional)
If you run the nginx example, in the file `examples/nginx-app/with-pv.yaml`:
Uncomment `storageClassName: <YOUR_STORAGE_CLASS_NAME>` and replace it with your `StorageClass` name.
[0]: ../namespace.md
[1]: https://cloud.ibm.com/docs/cloud-object-storage/getting-started.html
[2]: https://cloud.ibm.com/docs/cloud-object-storage/getting-started.html#create-buckets
[3]: https://cloud.ibm.com/docs/cloud-object-storage/iam?topic=cloud-object-storage-service-credentials
[31]: https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main
[4]: https://www.ibm.com/docs/en/cloud-private
[5]: https://cloud.ibm.com/docs/containers/container_index.html#container_index
[14]: http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
[15]: ../customize-installation.md#customize-resource-requests-and-limits
[16]: ../file-system-backup.md

Binary files not shown (10 image files deleted).

View File

@@ -1,301 +0,0 @@
---
title: "Quick start evaluation install with Minio"
layout: docs
---
The following example sets up the Velero server and client, then backs up and restores a sample application.
For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster.
For additional functionality with this setup, see the section below on how to [expose Minio outside your cluster][1].
**NOTE** The example lets you explore basic Velero functionality. Configuring Minio for production is out of scope.
See [Set up Velero on your platform][3] for how to configure Velero for a production environment.
If you encounter issues with installing or configuring, see [Debugging Installation Issues](debugging-install.md).
## Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later. **Note:** File System Backup support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. File System Backup support is not required for this example, but may be of interest later. See [File System Backup][17].
* A DNS server on the cluster
* `kubectl` installed
* Sufficient disk space to store backups in Minio. You will need sufficient disk space available to handle any
backups plus at least 1GB additional. Minio will not operate if less than 1GB of free disk space is available.
## Install the CLI
### Option 1: MacOS - Homebrew
On macOS, you can use [Homebrew](https://brew.sh) to install the `velero` client:
```bash
brew install velero
```
### Option 2: GitHub release
1. Download the [latest official release's](https://github.com/vmware-tanzu/velero/releases) tarball for your client platform.
_We strongly recommend that you use an [official release](https://github.com/vmware-tanzu/velero/releases) of
Velero. The tarballs for each release contain the `velero` command-line client. The code in the main branch
of the Velero repository is under active development and is not guaranteed to be stable!_
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
The directory you extracted is called the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
## Set up server
These instructions start the Velero server and a Minio instance that is accessible from within the cluster only. See [Expose Minio outside your cluster](#expose-minio-outside-your-cluster-with-a-service) for information about configuring your cluster for outside access to Minio. Outside access is required to access logs and run `velero describe` commands.
1. Create a Velero-specific credentials file (`credentials-velero`) in your Velero directory:
```
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
```
1. Start the server and the local storage service. In the Velero directory, run:
```
kubectl apply -f examples/minio/00-minio-deployment.yaml
```
_Note_: The example Minio yaml provided uses `emptyDir`. Your node needs to have enough space available to store the
data being backed up plus 1GB of free space. If the node does not have enough space, you can modify the example yaml to
use a Persistent Volume instead of `emptyDir`.
```
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.2.1 \
--bucket velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
```
* This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`). You may need to update AWS plugin version to one that is [compatible](https://github.com/vmware-tanzu/velero-plugin-for-aws#compatibility) with the version of Velero you are installing.
* Additionally, you can specify `--use-node-agent` to enable File System Backup support, and `--wait` to wait for the deployment to be ready.
* This example also assumes you have named your Minio bucket "velero".
* Please make sure to set the parameter `s3ForcePathStyle=true`. This parameter sets the addressing style used by Velero's integrated AWS SDK. There are two types of addresses: [virtual-host and path-style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html). If `s3ForcePathStyle=true` is not set, the default value is false, and the AWS SDK will query in virtual-host style, but the MinIO server supports only path-style addresses by default. The mismatch means Velero can upload data to MinIO but **cannot download from MinIO**. This [link](https://github.com/vmware-tanzu/velero/issues/7268) is an example of this issue.
It can be resolved in two ways:
* Set `s3ForcePathStyle=true` for the parameter `--backup-location-config` when installing Velero. This is the preferred way.
* Make the MinIO server support virtual-host-style addresses. Adding the [MINIO_DOMAIN environment variable](https://min.io/docs/minio/linux/reference/minio-server/settings/core.html#id5) to the MinIO server will do this.
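A sketch of the second option, assuming the example MinIO deployment is named `minio` in the `velero` namespace (the domain value is a placeholder for however MinIO is reachable in your cluster):
```bash
kubectl -n velero set env deployment/minio MINIO_DOMAIN=minio.velero.svc
```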
1. Deploy the example nginx application:
```bash
kubectl apply -f examples/nginx-app/base.yaml
```
1. Check to see that both the Velero and nginx deployments are successfully created:
```
kubectl get deployments -l component=velero --namespace=velero
kubectl get deployments --namespace=nginx-example
```
## Back up
1. Create a backup for any object that matches the `app=nginx` label selector:
```
velero backup create nginx-backup --selector app=nginx
```
Alternatively, if you want to back up all objects *except* those matching the label `backup=ignore`:
```
velero backup create nginx-backup --selector 'backup notin (ignore)'
```
1. (Optional) Create regularly scheduled backups based on a cron expression using the `app=nginx` label selector:
```
velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx
```
Alternatively, you can use some non-standard shorthand cron expressions:
```
velero schedule create nginx-daily --schedule="@daily" --selector app=nginx
```
See the [cron package's documentation][30] for more usage examples.
1. Simulate a disaster:
```
kubectl delete namespace nginx-example
```
1. To check that the nginx deployment and service are gone, run:
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
kubectl get namespace/nginx-example
```
You should get no results.
NOTE: You might need to wait for a few minutes for the namespace to be fully cleaned up.
## Restore
1. Run:
```
velero restore create --from-backup nginx-backup
```
1. Run:
```
velero restore get
```
After the restore finishes, the output looks like the following:
```
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20170727200524 nginx-backup Completed 0 0 2017-07-27 20:05:24 +0000 UTC <none>
```
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
After a successful restore, the `STATUS` column is `Completed`, and `WARNINGS` and `ERRORS` are 0. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
If there are errors or warnings, you can look at them in detail:
```
velero restore describe <RESTORE_NAME>
```
For more information, see [the debugging information][18].
## Clean up
If you want to delete any backups you created, including data in object storage and persistent
volume snapshots, you can run:
```
velero backup delete BACKUP_NAME
```
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do
this for each backup you want to permanently delete. A future version of Velero will allow you to
delete multiple backups by name or label selector.
Once fully removed, the backup is no longer visible when you run:
```
velero backup get BACKUP_NAME
```
To completely uninstall Velero, minio, and the nginx example app from your Kubernetes cluster:
```
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
kubectl delete -f examples/nginx-app/base.yaml
```
## Expose Minio outside your cluster with a Service
When you run commands to get logs or describe a backup, the Velero server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster -- that is, from your Velero client -- you need to make Minio available outside the cluster. You can:
- Change the Minio Service type from `ClusterIP` to `NodePort`.
- Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`.
You can also specify a `publicUrl` config field for the pre-signed URL in your backup storage location config.
### Expose Minio with Service of type NodePort
The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client.
You must also get the Minio URL, which you can then specify as the value of the `publicUrl` field in your backup storage location config.
1. In `examples/minio/00-minio-deployment.yaml`, change the value of Service `spec.type` from `ClusterIP` to `NodePort`.
1. Get the Minio URL:
- if you're running Minikube:
```shell
minikube service minio --namespace=velero --url
```
- in any other environment:
1. Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Velero client.
1. Append the value of the NodePort to get a complete URL. You can get this value by running:
```shell
kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}'
```
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_FROM_PREVIOUS_STEP>` as a field under `spec.config`. You must include the `http://` or `https://` prefix.
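Putting the two values together (the node address is a placeholder you must fill in):
```bash
NODE_PORT=$(kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}')
kubectl -n velero patch backupstoragelocation default --type merge \
    -p "{\"spec\":{\"config\":{\"publicUrl\":\"http://<NODE_IP>:${NODE_PORT}\"}}}"
```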
## Accessing logs with an HTTPS endpoint
If you're using Minio with HTTPS, you may see unintelligible text in the output of `velero describe`, or `velero logs` commands.
To fix this, you can add a public URL to the `BackupStorageLocation`.
In a terminal, run the following:
```shell
kubectl patch -n velero backupstoragelocation default --type merge -p '{"spec":{"config":{"publicUrl":"https://<a public IP for your Minio instance>:9000"}}}'
```
If your certificate is self-signed, see the [documentation on self-signed certificates][32].
## Expose Minio outside your cluster with Kubernetes in Docker (KinD)
Kubernetes in Docker does not have support for NodePort services (see [this issue](https://github.com/kubernetes-sigs/kind/issues/99)). In this case, you can use a port forward to access the Minio bucket.
In a terminal, run the following:
```shell
MINIO_POD=$(kubectl get pods -n velero -l component=minio -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward $MINIO_POD -n velero 9000:9000
```
Then, in another terminal:
```shell
kubectl edit backupstoragelocation default -n velero
```
Add `publicUrl: http://localhost:9000` under the `spec.config` section.
### Work with Ingress
Configuring Ingress for your cluster is out of scope for the Velero documentation. If you have already set up Ingress, however, it makes sense to continue with it while you run the example Velero configuration with Minio.
In this case:
1. Keep the Service type as `ClusterIP`.
1. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URL_AND_PORT_OF_INGRESS>` as a field under `spec.config`.
[1]: #expose-minio-with-service-of-type-nodeport
[3]: ../customize-installation.md
[17]: ../file-system-backup.md
[18]: ../debugging-restores.md
[26]: https://github.com/vmware-tanzu/velero/releases
[30]: https://godoc.org/github.com/robfig/cron
[32]: ../self-signed-certificates.md

View File

@@ -1,248 +0,0 @@
---
title: "Use Oracle Cloud as a Backup Storage Provider for Velero"
layout: docs
---
## Introduction
[Velero](https://velero.io/) is a tool used to backup and migrate Kubernetes applications. Here are the steps to use [Oracle Cloud Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) as a destination for Velero backups.
1. [Download Velero](#download-velero)
2. [Create A Customer Secret Key](#create-a-customer-secret-key)
3. [Create An Oracle Object Storage Bucket](#create-an-oracle-object-storage-bucket)
4. [Install Velero](#install-velero)
5. [Clean Up](#clean-up)
6. [Examples](#examples)
7. [Additional Reading](#additional-reading)
## Download Velero
1. Download the [latest release](https://github.com/vmware-tanzu/velero/releases/) of Velero to your development environment. This includes the `velero` CLI utility and example Kubernetes manifest files. For example:
```
wget https://github.com/vmware-tanzu/velero/releases/download/v1.0.0/velero-v1.0.0-linux-amd64.tar.gz
```
**NOTE:** It's strongly recommended that you use an official release of Velero. The tarballs for each release contain the velero command-line client. The code in the main branch of the Velero repository is under active development and is not guaranteed to be stable!
2. Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz`
You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero`
3. Add it to your PATH: `export PATH=/usr/bin/velero:$PATH`
4. Run `velero` to confirm the CLI has been installed correctly. You should see an output like this:
```
$ velero
Velero is a tool for managing disaster recovery, specifically for Kubernetes
cluster resources. It provides a simple, configurable, and operationally robust
way to back up your application state and associated data.
If you're familiar with kubectl, Velero supports a similar model, allowing you to
execute commands such as 'velero get backup' and 'velero create schedule'. The same
operations can also be performed as 'velero backup get' and 'velero schedule create'.
Usage:
velero [command]
```
## Create A Customer Secret Key
1. Oracle Object Storage provides an API to enable interoperability with Amazon S3. To use this Amazon S3 Compatibility API, you need to generate the signing key required to authenticate with Amazon S3. This special signing key is an Access Key/Secret Key pair. Follow these steps to [create a Customer Secret Key](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#To4). Refer to this link for more information about [Working with Customer Secret Keys](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm#s3).
2. Create a Velero credentials file with your Customer Secret Key:
```
$ vi credentials-velero
[default]
aws_access_key_id=bae031188893d1eb83719648790ac850b76c9441
aws_secret_access_key=MmY9heKrWiNVCSZQ2Mf5XTJ6Ys93Bw2d2D6NMSTXZlk=
```
## Create An Oracle Object Storage Bucket
Create an Oracle Cloud Object Storage bucket called `velero` in the root compartment of your Oracle Cloud tenancy. Refer to this page for [more information about creating a bucket with Object Storage](https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/managingbuckets.htm#usingconsole).
## Install Velero
You will need the following information to install Velero into your Kubernetes cluster with Oracle Object Storage as the Backup Storage provider:
```
velero install \
--provider [provider name] \
--bucket [bucket name] \
--prefix [tenancy name] \
--use-volume-snapshots=false \
--secret-file [secret file location] \
--backup-location-config region=[region],s3ForcePathStyle="true",s3Url=[storage API endpoint]
```
- `--provider` This example uses the S3-compatible API, so use `aws` as the provider.
- `--bucket` The name of the bucket created in Oracle Object Storage - in our case this is named `velero`.
- `--prefix` The name of your Oracle Cloud tenancy - in our case this is named `oracle-cloudnative`.
- `--use-volume-snapshots=false` Velero does not have a volume snapshot plugin for Oracle Cloud, so creating volume snapshots is disabled.
- `--secret-file` The path to your `credentials-velero` file.
- `--backup-location-config` The path to your Oracle Object Storage bucket. This consists of your `region` which corresponds to your Oracle Cloud region name ([List of Oracle Cloud Regions](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm?Highlight=regions)) and the `s3Url`, the S3-compatible API endpoint for Oracle Object Storage based on your region: `https://oracle-cloudnative.compat.objectstorage.[region name].oraclecloud.com`
For example:
```
velero install \
--provider aws \
--bucket velero \
--prefix oracle-cloudnative \
--use-volume-snapshots=false \
--secret-file /Users/mboxell/bin/velero/credentials-velero \
--backup-location-config region=us-phoenix-1,s3ForcePathStyle="true",s3Url=https://oracle-cloudnative.compat.objectstorage.us-phoenix-1.oraclecloud.com
```
This will create a `velero` namespace in your cluster along with a number of CRDs, a ClusterRoleBinding, ServiceAccount, Secret, and Deployment for Velero. If your pod fails to successfully provision, you can troubleshoot your installation by running: `kubectl logs [velero pod name]`.
## Clean Up
To remove Velero from your environment, delete the namespace, ClusterRoleBinding, ServiceAccount, Secret, and Deployment and delete the CRDs, run:
```
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```
This will remove all resources created by `velero install`.
## Examples
After creating the Velero server in your cluster, try this example:
### Basic example (without PersistentVolumes)
1. Start the sample nginx app: `kubectl apply -f examples/nginx-app/base.yaml`
This will create an `nginx-example` namespace with a `nginx-deployment` deployment, and `my-nginx` service.
```
$ kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example created
deployment.apps/nginx-deployment created
service/my-nginx created
```
You can see the created resources by running `kubectl get all`
```
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-67594d6bf6-4296p 1/1 Running 0 20s
pod/nginx-deployment-67594d6bf6-f9r5s 1/1 Running 0 20s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-nginx LoadBalancer 10.96.69.166 <pending> 80:31859/TCP 21s
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 2 2 2 2 21s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-67594d6bf6 2 2 2 21s
```
2. Create a backup: `velero backup create nginx-backup --include-namespaces nginx-example`
```
$ velero backup create nginx-backup --include-namespaces nginx-example
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
```
At this point you can navigate to the appropriate bucket, called `velero`, in the Oracle Cloud Object Storage console to see the resources backed up using Velero.
3. Simulate a disaster by deleting the `nginx-example` namespace: `kubectl delete namespaces nginx-example`
```
$ kubectl delete namespaces nginx-example
namespace "nginx-example" deleted
```
Wait for the namespace to be deleted. To check that the nginx deployment, service, and namespace are gone, run:
```
kubectl get deployments --namespace=nginx-example
kubectl get services --namespace=nginx-example
kubectl get namespace/nginx-example
```
This should return: `No resources found.`
4. Restore your lost resources: `velero restore create --from-backup nginx-backup`
```
$ velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20190604102710" submitted successfully.
Run `velero restore describe nginx-backup-20190604102710` or `velero restore logs nginx-backup-20190604102710` for more details.
```
Running `kubectl get namespaces` will show that the `nginx-example` namespace has been restored along with its contents.
5. Run: `velero restore get` to view the list of restored resources. After the restore finishes, the output looks like the following:
```
$ velero restore get
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
nginx-backup-20190604104249 nginx-backup Completed 0 0 2019-06-04 10:42:39 -0700 PDT <none>
```
NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`.
After a successful restore, the `STATUS` column shows `Completed`, and `WARNINGS` and `ERRORS` will show `0`. All objects in the `nginx-example` namespace should be just as they were before you deleted them.
If there are errors or warnings, for instance if the `STATUS` column displays `FAILED` instead of `InProgress`, you can look at them in detail with `velero restore describe <RESTORE_NAME>`
6. Clean up the environment with `kubectl delete -f examples/nginx-app/base.yaml`
```
$ kubectl delete -f examples/nginx-app/base.yaml
namespace "nginx-example" deleted
deployment.apps "nginx-deployment" deleted
service "my-nginx" deleted
```
If you want to delete any backups you created, including data in object storage, you can run: `velero backup delete BACKUP_NAME`
```
$ velero backup delete nginx-backup
Are you sure you want to continue (Y/N)? Y
Request to delete backup "nginx-backup" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
```
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do this for each backup you want to permanently delete. A future version of Velero will allow you to delete multiple backups by name or label selector.
Once fully removed, the backup is no longer visible when you run: `velero backup get BACKUP_NAME` or more generally `velero backup get`:
```
$ velero backup get nginx-backup
An error occurred: backups.velero.io "nginx-backup" not found
```
```
$ velero backup get
NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR
```
## Additional Reading
* [Official Velero Documentation](https://velero.io/docs/v1.18.0/)
* [Oracle Cloud Infrastructure Documentation](https://docs.cloud.oracle.com/)

Some files were not shown because too many files have changed in this diff.