Mirror of https://github.com/vmware-tanzu/velero.git (synced 2026-03-22 17:45:01 +00:00)

Compare: 9380_fix...copilot/de (28 commits)
197 .github/AI-DETECTION-EXAMPLES.md vendored Normal file
@@ -0,0 +1,197 @@
# AI Issue Detection - Examples

This document provides examples to help understand what triggers AI detection.

## Example 1: High AI Score (Score: 6/8) ❌

**This would be flagged:**

````markdown
## Description
When deploying Velero on an EKS cluster with `hostNetwork: true`, the application fails to start.

## Critical Problem
```
time="2026-01-26T16:40:55Z" level=fatal msg="failed to start metrics server"
```
Status: BLOCKER

## Affected Environment

| Parameter | Value |
|----------|----------|
| Cluster | Amazon EKS |
| Velero Version | 1.8.2 |
| Kubernetes | 1.33 |

## Root Cause Analysis

The controller-runtime metrics uses port 8080 as a hardcoded default...

## Resolution Attempts

### Attempt 1: Use extraArgs
Result: Failed

### Attempt 2: Configure metricsAddress
Result: Failed

## Expected Permanent Solution

Velero should:
1. Auto-detect an available port
2. Accept configuring the controller-runtime port

## Questions for Maintainers
1. Why does controller-runtime use hardcoded 8080?
2. Is there a roadmap to support hostNetwork?

## Labels and Metadata
Severity: CRITICAL
````

**Why flagged (Patterns detected: 6/8):**
- ✓ `futureDates` - References "2026-01-26" and "Kubernetes 1.33"
- ✓ `excessiveHeaders` - 8+ section headers
- ✓ `formalPhrases` - "Root Cause Analysis", "Expected Permanent Solution", "Questions for Maintainers", "Labels and Metadata"
- ✓ `aiSectionHeaders` - "## Description", "## Critical Problem", "## Affected Environment", "## Resolution Attempts"
- ✓ `perfectFormatting` - Perfect table structure
- ✓ `genericSolutions` - Mentions "auto-detect"

---

## Example 2: Medium AI Score (Score: 2/8) ✅

**This would NOT be flagged (below threshold):**

````markdown
**What steps did you take and what happened:**

I'm trying to restore a backup but getting this error:
```
error: backup "my-backup" not found
```

**What did you expect to happen:**
The backup should restore successfully

**Environment:**
- Velero version: 1.13.0
- Kubernetes version: 1.28
- Cloud provider: AWS

**Additional context:**
I can see the backup in S3 but Velero doesn't list it. Running `velero backup get` shows no backups.
````

**Why NOT flagged (Patterns detected: 2/8):**
- ✗ `futureDates` - Uses realistic versions
- ✗ `excessiveHeaders` - Only 3 headers
- ✗ `formalPhrases` - No formal AI phrases
- ✓ `excessiveTables` - Has a table but only 1
- ✗ `perfectFormatting` - Normal formatting
- ✗ `aiSectionHeaders` - Standard issue template headers
- ✓ `excessiveFormatting` - Has code blocks
- ✗ `genericSolutions` - No generic solutions

---

## Example 3: Legitimate Detailed Issue (Score: 3/8) ⚠️

**This would be flagged but is actually legitimate:**

```markdown
## Problem Description

VolumeGroupSnapshot restore fails with Ceph RBD driver.

## Environment

- Velero: 1.14.0
- Kubernetes: 1.28.3
- ODF: 4.14.2 with Ceph RBD CSI driver

## Root Cause

Ceph RBD stores group snapshot metadata in journal as `csi.groupid` omap key. During restore, when creating pre-provisioned VSC, the RBD driver reads this and populates `status.volumeGroupSnapshotHandle`.

The CSI snapshot controller looks for a VGSC with matching handle. Since Velero deletes VGSC after backup, it's not found.

## Reproduction Steps

1. Create backup with VGS
2. Delete namespace
3. Restore backup
4. Observe VS stuck with "cannot find group snapshot"

## Workaround

Create stub VGSC with matching `volumeGroupSnapshotHandle` and patch status.

## Proposed Fix

1. Backup: Capture `volumeGroupSnapshotHandle` in CSISnapshotInfo
2. Restore: Create stub VGSC if handle exists

## Code References

- Ceph RBD: https://github.com/ceph/ceph-csi/blob/devel/internal/rbd/snapshot.go#L167
- Velero deletion: https://github.com/vmware-tanzu/velero/blob/main/pkg/backup/actions/csi/pvc_action.go#L1124
```

**Why flagged (Patterns detected: 3/8):**
- ✗ `futureDates` - Uses current versions
- ✓ `excessiveHeaders` - Has 6 section headers
- ✓ `formalPhrases` - "Root Cause", "Proposed Fix"
- ✗ `excessiveTables` - No tables
- ✗ `perfectFormatting` - Normal formatting
- ✗ `aiSectionHeaders` - Technical, not generic
- ✗ `excessiveFormatting` - Reasonable formatting
- ✓ `genericSolutions` - Structured solution with code refs

**Maintainer Action**: This is a legitimate, well-researched issue. Verify the details with the contributor and remove the `potential-ai-generated` label.

---

## Example 4: Simple Valid Issue (Score: 0/8) ✅

**This would NOT be flagged:**

```markdown
Velero backup fails with error: `rpc error: code = Unavailable desc = connection error`

Running Velero 1.13 on GKE. Backups were working yesterday but now all fail with this error.

Logs show the node-agent pod is crashing. Any ideas?
```

**Why NOT flagged (Patterns detected: 0/8):**
- All patterns: None detected

---

## Key Takeaways

### Will Trigger Detection ❌
- Future dates/versions (2026+, K8s 1.33+)
- More than 4 formal AI phrases
- More than 8 section headers
- Perfect table formatting across multiple tables
- Generic AI section titles
- Auto-detect/generic solution patterns

### Will NOT Trigger ✅
- Realistic version numbers
- Actual error messages from real systems
- Normal issue formatting
- Moderate level of detail
- Standard GitHub issue template

### May Trigger (But Legitimate) ⚠️
- Very detailed technical analysis
- Multiple code references
- Well-structured proposals
- Extensive testing documentation

For these cases, maintainers will verify with the contributor and remove the flag once confirmed.
80 .github/AI-DETECTION-README.md vendored Normal file
@@ -0,0 +1,80 @@
# AI-Generated Content Detection

This directory contains the AI-generated content detection system for Velero issues.

## Overview

The Velero project has implemented automated detection of potentially AI-generated issues to help maintain quality and ensure that issues describe real, verified problems.

## How It Works

### Detection Workflow

The workflow (`.github/workflows/ai-issue-detector.yml`) runs automatically when:
- A new issue is opened
- An existing issue is edited

### Detection Patterns

The detector analyzes issues for several AI-generation patterns:

1. **Excessive Tables** - More than 5 markdown table rows
2. **Excessive Headers** - More than 8 section headers
3. **Formal Phrases** - More than four formal section phrases typical of AI (e.g., "Root Cause Analysis", "Operational Impact", "Expected Permanent Solution")
4. **Excessive Formatting** - Multiple horizontal rules and perfect formatting
5. **Future Dates** - Version numbers or dates that are unrealistic or in the future
6. **Perfect Formatting** - Overly structured tables with perfect alignment
7. **AI Section Headers** - Generic AI-style headers like "Critical Problem", "Resolution Attempts"
8. **Generic Solutions** - Auto-generated solution patterns with multiple YAML examples

### Scoring System

Each detected pattern adds to the AI score. If the score is 3 or higher (out of 8), the issue is flagged as potentially AI-generated.

### Actions Taken

When an issue is flagged:
1. A `potential-ai-generated` label is added
2. A `needs-triage` label is added
3. An automated comment is posted explaining:
   - Why the issue was flagged
   - What patterns were detected
   - Guidelines for contributors to follow
   - Request for verification

## For Contributors

If your issue is flagged:

1. **Don't panic** - This is not an accusation, just a request for verification
2. **Review the guidelines** in our [Code Standards](../site/content/docs/main/code-standards.md#ai-generated-content)
3. **Verify your content**:
   - Ensure all version numbers are accurate
   - Confirm error messages are from your actual environment
   - Remove any placeholder or example content
   - Simplify overly structured formatting
4. **Update the issue** with corrections if needed
5. **Comment to confirm** that the issue describes a real problem

## For Maintainers

When reviewing flagged issues:

1. Check if the technical details are realistic and verifiable
2. Look for signs of hallucinated content (fake version numbers, non-existent features)
3. Engage with the issue author to verify the problem
4. Remove the `potential-ai-generated` label once verified
5. Close issues that cannot be verified or describe non-existent problems

## Configuration

The detection patterns can be adjusted in the workflow file if needed. The threshold is currently set at 3 out of 8 patterns to balance false positives with detection accuracy.

## False Positives

The detector may occasionally flag legitimate issues, especially those that are:
- Very detailed and well-structured
- Using formal technical documentation style
- Reporting complex problems with extensive details

This is intentional - we prefer to verify detailed issues rather than miss AI-generated ones.
186 .github/MAINTAINER-AI-DETECTION-GUIDE.md vendored Normal file
@@ -0,0 +1,186 @@
# Maintainer Guide: AI-Generated Issue Detection

This guide helps Velero maintainers understand and work with the AI-generated issue detection system.

## Overview

The AI detection system automatically analyzes new and edited issues to identify potential AI-generated content. This helps maintain issue quality and ensures contributors verify their submissions.

## How It Works

### Automatic Detection

When an issue is opened or edited, the workflow:

1. **Analyzes** the issue body for 8 different AI patterns
2. **Calculates** an AI confidence score (0-8)
3. **If score ≥ 3**: Adds labels and posts a comment
4. **If score < 3**: Takes no action (issue proceeds normally)

### Detection Patterns

| Pattern | Description | Weight |
|---------|-------------|--------|
| `excessiveTables` | More than 5 markdown table rows | 1 |
| `excessiveHeaders` | More than 8 section headers | 1 |
| `formalPhrases` | More than 4 AI-typical phrases (e.g., "Root Cause Analysis") | 1 |
| `excessiveFormatting` | Multiple horizontal rules (---) | 1 |
| `futureDates` | Dates/versions in 2026-2039 | 1 |
| `perfectFormatting` | Multiple identical table structures | 1 |
| `aiSectionHeaders` | More than 4 generic AI headers (e.g., "Critical Problem") | 1 |
| `genericSolutions` | Auto-detect patterns with multiple YAML blocks | 1 |

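Since every pattern carries weight 1, the confidence percentage quoted in the bot's automated comment is simply the score over 8; a small sketch:

```javascript
// Confidence figure used in the automated comment:
// the pattern count divided by the 8 possible patterns.
const aiScore = 3; // number of patterns detected
const confidence = Math.round(aiScore / 8 * 100);
console.log(confidence + '%'); // 38%
```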
## Working with Flagged Issues

### Step 1: Review the Issue

When you see an issue labeled `potential-ai-generated`:

1. **Read the issue carefully**
2. **Check the detected patterns** (listed in the auto-comment)
3. **Look for red flags**:
   - Future version numbers (e.g., "Kubernetes 1.33")
   - Future dates (e.g., "2026-01-27")
   - Non-existent features or configurations
   - Perfect table formatting with no actual content
   - Generic solutions that don't match Velero's architecture

### Step 2: Engage with the Contributor

**If the issue seems legitimate but over-formatted:**

```markdown
Thanks for the detailed report! Could you confirm:
1. Are you running Velero version X.Y.Z (you mentioned version A.B.C)?
2. Is the error message exactly as shown?
3. Have you actually tried the workarounds mentioned?

Once verified, we'll remove the AI-generated flag and investigate.
```

**If the issue appears to be unverified AI content:**

```markdown
This issue appears to contain AI-generated content that hasn't been verified.

Please review our [AI contribution guidelines](https://github.com/vmware-tanzu/velero/blob/main/site/content/docs/main/code-standards.md#ai-generated-content) and:
1. Confirm this describes a real problem in your environment
2. Verify all version numbers and error messages
3. Remove any placeholder or example content
4. Test that the issue is reproducible

If you can't verify the issue, please close it. We're happy to help with real problems!
```

### Step 3: Take Action

**For verified issues:**
1. Remove the `potential-ai-generated` label
2. Keep or remove `needs-triage` as appropriate
3. Proceed with normal issue triage

**For unverified/invalid issues:**
1. Request verification (see templates above)
2. If no response after 7 days, consider closing as `stale`
3. If clearly invalid, close with explanation

## Common Patterns

### False Positives (Legitimate Issues)

These may trigger the detector but are usually valid:

- **Very detailed bug reports** with extensive logs and testing
- **Technical design proposals** with multiple sections
- **Well-organized feature requests** with tables and examples

**Action**: Engage with contributor, ask clarifying questions, remove flag if verified.

### True Positives (AI-Generated)

Red flags that indicate unverified AI content:

- **Future version numbers**: "Kubernetes 1.33" (doesn't exist yet)
- **Future dates**: "2026-01-27" (if current date is before)
- **Non-existent features**: References to Velero features that don't exist
- **Generic solutions**: "Auto-detect available port" (not how Velero works)
- **Perfect formatting, wrong content**: Beautiful tables with incorrect info

**Action**: Request verification, ask for actual environment details, consider closing if unverified.

### Edge Cases

**Contributor using AI as a writing assistant:**
- Issue content is verified and accurate
- Just used AI to help structure/format the report
- **Action**: This is acceptable! Remove flag if content is verified.

**Legitimate issue that happens to match patterns:**
- Real problem with detailed analysis
- Includes proper version numbers and logs
- **Action**: Verify with contributor, remove flag once confirmed.

## Statistics and Monitoring

You can search for flagged issues:

```
is:issue label:potential-ai-generated
```

Monitor trends:
- High detection rate → May need to adjust thresholds
- Low detection rate → Patterns working well or need refinement

## Adjusting the System

### Modifying Detection Patterns

Edit `.github/workflows/ai-issue-detector.yml`:

```javascript
// Increase threshold to reduce false positives
if (aiScore >= 4) { // was 3

// Adjust pattern sensitivity
excessiveTables: (issueBody.match(/\|.*\|/g) || []).length > 8, // was 5
```

### Adding New Patterns

Add to the `aiPatterns` object:

```javascript
// Example: Detect excessive use of emojis
excessiveEmojis: (issueBody.match(/[\u{1F300}-\u{1F9FF}]/gu) || []).length > 10,
```

### Disabling the Workflow

Rename or delete `.github/workflows/ai-issue-detector.yml`

## Best Practices

1. **Be courteous**: Contributors may not realize their AI tool generated incorrect info
2. **Verify, don't assume**: Some detailed issues are legitimate
3. **Educate**: Point to the AI guidelines in code-standards.md
4. **Track patterns**: Note common AI-generated patterns for future improvements
5. **Iterate**: Adjust detection thresholds based on false positive rates

## FAQ

**Q: Should we reject all AI-assisted contributions?**
A: No! AI assistance is fine if the contributor verifies accuracy. We only flag unverified AI content.

**Q: What if a contributor is offended by the flag?**
A: Explain it's automated and not personal. We just need verification of technical details.

**Q: Can we automatically close flagged issues?**
A: No. Always engage with the contributor first. Some are legitimate.

**Q: What's an acceptable false positive rate?**
A: Aim for <10%. If higher, increase the threshold from 3 to 4 or 5.

## Support

Questions about the AI detection system? Tag @vmware-tanzu/velero-maintainers in issue #9501.
1 .github/labels.yaml vendored
@@ -41,3 +41,4 @@ kind:
 - tech-debt
 - usage-error
 - voting
+- potential-ai-generated
132 .github/workflows/ai-issue-detector.yml vendored Normal file
@@ -0,0 +1,132 @@
name: "Detect AI-Generated Issues"

on:
  issues:
    types: [opened, edited]

jobs:
  detect-ai-content:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      contents: read
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Analyze issue for AI-generated content
        id: analyze
        uses: actions/github-script@v7
        with:
          script: |
            const issue = context.payload.issue;
            const issueBody = issue.body || '';
            const issueTitle = issue.title || '';

            // AI detection patterns
            const aiPatterns = {
              // Overly structured markdown with extensive tables
              excessiveTables: (issueBody.match(/\|.*\|/g) || []).length > 5,

              // Multiple consecutive headers with consistent formatting
              excessiveHeaders: (issueBody.match(/^#{1,6}\s+/gm) || []).length > 8,

              // Overly formal language patterns common in AI
              formalPhrases: [
                'Root Cause Analysis',
                'Operational Impact',
                'Expected Permanent Solution',
                'Questions for Maintainers',
                'Labels and Metadata',
                'Reference Files',
                'Steps to Reproduce'
              ].filter(phrase => issueBody.includes(phrase)).length > 4,

              // Multiple horizontal rules used as section separators
              excessiveFormatting: issueBody.includes('---\n \n---') ||
                (issueBody.match(/---/g) || []).length > 4,

              // Unrealistic version numbers or dates in the future
              futureDates: /202[6-9]|203\d/.test(issueBody),

              // Overly detailed technical specs with perfect formatting
              perfectFormatting: issueBody.includes('| Parameter | Value |') &&
                issueBody.includes('| Aspect | Status | Impact |'),

              // Generic AI-style section headers
              aiSectionHeaders: [
                '## Description',
                '## Critical Problem',
                '## Affected Environment',
                '## Full Helm Configuration',
                '## Resolution Attempts',
                '## Related Information'
              ].filter(header => issueBody.includes(header)).length > 4,

              // Unusual specificity combined with generic solutions
              genericSolutions: issueBody.includes('auto-detect') &&
                issueBody.includes('configuration:') &&
                (issueBody.match(/```yaml/g) || []).length > 2
            };

            // Calculate AI score
            let aiScore = 0;
            let detectedPatterns = [];

            for (const [pattern, detected] of Object.entries(aiPatterns)) {
              if (detected) {
                aiScore++;
                detectedPatterns.push(pattern);
              }
            }

            console.log('AI Score: ' + aiScore + '/8');
            console.log('Detected patterns: ' + detectedPatterns.join(', '));

            // If AI score is high, add label and comment
            if (aiScore >= 3) {
              // Add label
              try {
                await github.rest.issues.addLabels({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  issue_number: issue.number,
                  labels: ['needs-triage', 'potential-ai-generated']
                });

                // Add comment
                const confidence = Math.round(aiScore/8 * 100);
                const repoPath = context.repo.owner + '/' + context.repo.repo;
                const comment = '👋 Thank you for opening this issue!\n\n' +
                  'This issue has been flagged for review as it may contain AI-generated content (confidence: ' + confidence + '%).\n\n' +
                  '**Detected patterns:** ' + detectedPatterns.join(', ') + '\n\n' +
                  'If this issue was created with AI assistance, please review our [AI contribution guidelines](https://github.com/' + repoPath + '/blob/main/site/content/docs/main/code-standards.md#ai-generated-content).\n\n' +
                  '**Important:**\n' +
                  '- Please verify all technical details are accurate\n' +
                  '- Ensure version numbers, dates, and configurations reflect your actual environment\n' +
                  '- Remove any placeholder or example content\n' +
                  '- Confirm the issue is reproducible in your environment\n\n' +
                  'A maintainer will review this issue shortly. If this was flagged in error, please let us know!';

                await github.rest.issues.createComment({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  issue_number: issue.number,
                  body: comment
                });

                core.setOutput('ai-detected', 'true');
                core.setOutput('ai-score', aiScore);
              } catch (error) {
                console.log('Error adding label or comment:', error);
              }
            } else {
              core.setOutput('ai-detected', 'false');
              core.setOutput('ai-score', aiScore);
            }

            return {
              aiDetected: aiScore >= 3,
              score: aiScore,
              patterns: detectedPatterns
            };
1 changelogs/unreleased/9166-claude Normal file
@@ -0,0 +1 @@
Add VolumePolicy support for PVC Phase conditions to allow skipping Pending PVCs

1 changelogs/unreleased/9474-blackpiglet Normal file
@@ -0,0 +1 @@
Add Role, RoleBinding, ClusterRole, and ClusterRoleBinding in restore sequence.

1 changelogs/unreleased/9481-Lyndon-Li Normal file
@@ -0,0 +1 @@
Fix issue #9478, add diagnose info on expose peek fails

1 changelogs/unreleased/9494-blackpiglet Normal file
@@ -0,0 +1 @@
Maintenance Job only uses the first element of the LoadAffinity array

1 changelogs/unreleased/9498-sseago Normal file
@@ -0,0 +1 @@
Remove backup from running list when backup fails validation
106
go.mod
106
go.mod
@@ -3,12 +3,12 @@ module github.com/vmware-tanzu/velero
|
||||
go 1.25.0
|
||||
|
||||
require (
|
||||
cloud.google.com/go/storage v1.55.0
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.1
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1
|
||||
cloud.google.com/go/storage v1.57.2
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0
|
||||
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.1
|
||||
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v5 v5.6.0
|
||||
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0
|
||||
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1
|
||||
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1
|
||||
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3
|
||||
github.com/aws/aws-sdk-go-v2 v1.24.1
|
||||
github.com/aws/aws-sdk-go-v2/config v1.26.3
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.16.14
|
||||
@@ -31,22 +31,22 @@ require (
|
||||
github.com/onsi/gomega v1.36.1
|
||||
github.com/petar/GoLLRB v0.0.0-20210522233825-ae3b015fd3e9
|
||||
github.com/pkg/errors v0.9.1
|
||||
github.com/prometheus/client_golang v1.22.0
|
||||
github.com/prometheus/client_golang v1.23.2
|
||||
github.com/prometheus/client_model v0.6.2
|
||||
github.com/robfig/cron/v3 v3.0.1
|
||||
github.com/sirupsen/logrus v1.9.3
|
||||
github.com/spf13/afero v1.10.0
|
||||
github.com/spf13/cobra v1.8.1
|
||||
github.com/spf13/pflag v1.0.5
|
||||
github.com/stretchr/testify v1.10.0
|
||||
github.com/stretchr/testify v1.11.1
|
||||
github.com/vmware-tanzu/crash-diagnostics v0.3.7
|
||||
go.uber.org/zap v1.27.0
|
||||
golang.org/x/mod v0.29.0
|
||||
golang.org/x/oauth2 v0.30.0
|
||||
go.uber.org/zap v1.27.1
|
||||
golang.org/x/mod v0.30.0
|
||||
golang.org/x/oauth2 v0.33.0
|
||||
golang.org/x/text v0.31.0
|
||||
google.golang.org/api v0.241.0
|
||||
google.golang.org/grpc v1.73.0
|
||||
google.golang.org/protobuf v1.36.6
|
||||
google.golang.org/api v0.256.0
|
||||
google.golang.org/grpc v1.77.0
|
||||
google.golang.org/protobuf v1.36.10
|
||||
gopkg.in/yaml.v3 v3.0.1
|
||||
k8s.io/api v0.33.3
|
||||
k8s.io/apiextensions-apiserver v0.33.3
|
||||
@@ -63,19 +63,19 @@ require (
|
||||
)
|
||||
|
||||
require (
|
||||
cel.dev/expr v0.23.0 // indirect
|
||||
cloud.google.com/go v0.121.1 // indirect
|
||||
cloud.google.com/go/auth v0.16.2 // indirect
|
||||
cel.dev/expr v0.24.0 // indirect
|
||||
cloud.google.com/go v0.121.6 // indirect
|
||||
cloud.google.com/go/auth v0.17.0 // indirect
|
||||
cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
|
||||
cloud.google.com/go/compute/metadata v0.7.0 // indirect
|
||||
cloud.google.com/go/compute/metadata v0.9.0 // indirect
|
||||
cloud.google.com/go/iam v1.5.2 // indirect
|
||||
cloud.google.com/go/monitoring v1.24.2 // indirect
|
||||
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 // indirect
|
||||
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
|
||||
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 // indirect
|
||||
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 // indirect
|
||||
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0 // indirect
|
||||
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0 // indirect
|
||||
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0 // indirect
|
||||
github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0 // indirect
|
||||
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.30.0 // indirect
|
||||
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 // indirect
|
||||
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.4 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.11 // indirect
|
||||
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.10 // indirect
|
||||
@@ -93,18 +93,18 @@ require (
|
||||
github.com/blang/semver/v4 v4.0.0 // indirect
|
||||
github.com/cespare/xxhash/v2 v2.3.0 // indirect
|
||||
github.com/chmduquesne/rollinghash v4.0.0+incompatible // indirect
|
||||
github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f // indirect
|
||||
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f // indirect
|
||||
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
|
||||
github.com/dustin/go-humanize v1.0.1 // indirect
|
||||
github.com/edsrzf/mmap-go v1.2.0 // indirect
|
||||
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
|
||||
github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
|
||||
github.com/envoyproxy/go-control-plane/envoy v1.35.0 // indirect
|
||||
github.com/envoyproxy/protoc-gen-validate v1.2.1 // indirect
|
||||
github.com/felixge/httpsnoop v1.0.4 // indirect
|
||||
github.com/fsnotify/fsnotify v1.7.0 // indirect
|
||||
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
|
||||
github.com/go-ini/ini v1.67.0 // indirect
|
||||
github.com/go-jose/go-jose/v4 v4.0.5 // indirect
|
||||
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
|
||||
github.com/go-logr/logr v1.4.3 // indirect
|
||||
github.com/go-logr/stdr v1.2.2 // indirect
|
||||
github.com/go-ole/go-ole v1.3.0 // indirect
|
||||
@@ -112,36 +112,36 @@ require (
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/gofrs/flock v0.12.1 // indirect
github.com/gofrs/flock v0.13.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v5 v5.2.2 // indirect
github.com/golang-jwt/jwt/v5 v5.3.0 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/gnostic-models v0.6.9 // indirect
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db // indirect
github.com/google/s2a-go v0.1.9 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect
github.com/googleapis/gax-go/v2 v2.14.2 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.7 // indirect
github.com/googleapis/gax-go/v2 v2.15.0 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/hashicorp/cronexpr v1.1.2 // indirect
github.com/hashicorp/cronexpr v1.1.3 // indirect
github.com/hashicorp/yamux v0.1.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/klauspost/compress v1.18.2 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/klauspost/crc32 v1.3.0 // indirect
github.com/klauspost/pgzip v1.2.6 // indirect
github.com/klauspost/reedsolomon v1.12.4 // indirect
github.com/klauspost/reedsolomon v1.12.6 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/minio/crc64nvme v1.0.1 // indirect
github.com/minio/crc64nvme v1.1.0 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/minio-go/v7 v7.0.94 // indirect
github.com/minio/minio-go/v7 v7.0.97 // indirect
github.com/mitchellh/go-testing-interface v1.0.0 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/term v0.5.0 // indirect
@@ -153,44 +153,44 @@ require (
github.com/natefinch/atomic v1.0.1 // indirect
github.com/nxadm/tail v1.4.8 // indirect
github.com/oklog/run v1.0.0 // indirect
github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c // indirect
github.com/philhofer/fwd v1.2.0 // indirect
github.com/pierrec/lz4 v2.6.1+incompatible // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/common v0.65.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/prometheus/common v0.67.4 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/rs/xid v1.6.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.5.0 // indirect
github.com/spiffe/go-spiffe/v2 v2.6.0 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/tinylib/msgp v1.3.0 // indirect
github.com/vladimirvivien/gexe v0.1.1 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/zeebo/blake3 v0.2.4 // indirect
github.com/zeebo/errs v1.4.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.36.0 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/detectors/gcp v1.38.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
go.opentelemetry.io/otel v1.37.0 // indirect
go.opentelemetry.io/otel/metric v1.37.0 // indirect
go.opentelemetry.io/otel/sdk v1.37.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.36.0 // indirect
go.opentelemetry.io/otel/trace v1.37.0 // indirect
go.opentelemetry.io/otel v1.38.0 // indirect
go.opentelemetry.io/otel/metric v1.38.0 // indirect
go.opentelemetry.io/otel/sdk v1.38.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.38.0 // indirect
go.opentelemetry.io/otel/trace v1.38.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
go.uber.org/multierr v1.11.0 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.47.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.37.0 // indirect
golang.org/x/time v0.12.0 // indirect
golang.org/x/time v0.14.0 // indirect
golang.org/x/tools v0.38.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/genproto v0.0.0-20250505200425-f936aa4a68b2 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect
@@ -198,4 +198,4 @@ require (
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 // indirect
)

replace github.com/kopia/kopia => github.com/project-velero/kopia v0.0.0-20250722052735-3ea24d208777
replace github.com/kopia/kopia => github.com/project-velero/kopia v0.0.0-20251230033609-d946b1e75197
242
go.sum
@@ -1,7 +1,7 @@
al.essio.dev/pkg/shellescape v1.5.1 h1:86HrALUujYS/h+GtqoB26SBEdkWfmMI6FubjXlsXyho=
al.essio.dev/pkg/shellescape v1.5.1/go.mod h1:6sIqp7X2P6mThCQ7twERpZTuigpr6KbZWtls1U8I890=
cel.dev/expr v0.23.0 h1:wUb94w6OYQS4uXraxo9U+wUAs9jT47Xvl4iPgAwM2ss=
cel.dev/expr v0.23.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cel.dev/expr v0.24.0 h1:56OvJKSH3hDGL0ml5uSxZmz3/3Pq4tJ+fb1unVLAFcY=
cel.dev/expr v0.24.0/go.mod h1:hLPLo1W4QUmuYdA72RBX06QTs6MXw941piREPl3Yfiw=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
@@ -24,10 +24,10 @@ cloud.google.com/go v0.75.0/go.mod h1:VGuuCn7PG0dwsd5XPVm2Mm3wlh3EL55/79EKB6hlPT
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
cloud.google.com/go v0.121.1 h1:S3kTQSydxmu1JfLRLpKtxRPA7rSrYPRPEUmL/PavVUw=
cloud.google.com/go v0.121.1/go.mod h1:nRFlrHq39MNVWu+zESP2PosMWA0ryJw8KUBZ2iZpxbw=
cloud.google.com/go/auth v0.16.2 h1:QvBAGFPLrDeoiNjyfVunhQ10HKNYuOwZ5noee0M5df4=
cloud.google.com/go/auth v0.16.2/go.mod h1:sRBas2Y1fB1vZTdurouM0AzuYQBMZinrUYL8EufhtEA=
cloud.google.com/go v0.121.6 h1:waZiuajrI28iAf40cWgycWNgaXPO06dupuS+sgibK6c=
cloud.google.com/go v0.121.6/go.mod h1:coChdst4Ea5vUpiALcYKXEpR1S9ZgXbhEzzMcMR66vI=
cloud.google.com/go/auth v0.17.0 h1:74yCm7hCj2rUyyAocqnFzsAYXgJhrG26XCFimrc/Kz4=
cloud.google.com/go/auth v0.17.0/go.mod h1:6wv/t5/6rOPAX4fJiRjKkJCvswLwdet7G8+UGXt7nCQ=
cloud.google.com/go/auth/oauth2adapt v0.2.8 h1:keo8NaayQZ6wimpNSmW5OPc283g65QNIiLpZnkHRbnc=
cloud.google.com/go/auth/oauth2adapt v0.2.8/go.mod h1:XQ9y31RkqZCcwJWNSx2Xvric3RrU88hAYYbjDWYDL+c=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
@@ -36,8 +36,8 @@ cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvf
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/compute/metadata v0.7.0 h1:PBWF+iiAerVNe8UCHxdOt6eHLVc3ydFeOCw78U8ytSU=
cloud.google.com/go/compute/metadata v0.7.0/go.mod h1:j5MvL9PprKL39t166CoB1uVHfQMs4tFQZZcKwksXUjo=
cloud.google.com/go/compute/metadata v0.9.0 h1:pDUj4QMoPejqq20dK0Pg2N4yG9zIkYGdBtwLoEkH9Zs=
cloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
@@ -45,8 +45,8 @@ cloud.google.com/go/iam v1.5.2 h1:qgFRAGEmd8z6dJ/qyEchAuL9jpswyODjA2lS+w234g8=
cloud.google.com/go/iam v1.5.2/go.mod h1:SE1vg0N81zQqLzQEwxL2WI6yhetBdbNQuTvIKCSkUHE=
cloud.google.com/go/logging v1.13.0 h1:7j0HgAp0B94o1YRDqiqm26w4q1rDMH7XNRU34lJXHYc=
cloud.google.com/go/logging v1.13.0/go.mod h1:36CoKh6KA/M0PbhPKMq6/qety2DCAErbhXT62TuXALA=
cloud.google.com/go/longrunning v0.6.7 h1:IGtfDWHhQCgCjwQjV9iiLnUta9LBCo8R9QmAFsS/PrE=
cloud.google.com/go/longrunning v0.6.7/go.mod h1:EAFV3IZAKmM56TyiE6VAP3VoTzhZzySwI/YI1s/nRsY=
cloud.google.com/go/longrunning v0.7.0 h1:FV0+SYF1RIj59gyoWDRi45GiYUMM3K1qO51qoboQT1E=
cloud.google.com/go/longrunning v0.7.0/go.mod h1:ySn2yXmjbK9Ba0zsQqunhDkYi0+9rlXIwnoAf+h+TPY=
cloud.google.com/go/monitoring v1.24.2 h1:5OTsoJ1dXYIiMiuL+sYscLc9BumrL3CarVLL7dd7lHM=
cloud.google.com/go/monitoring v1.24.2/go.mod h1:x7yzPWcgDRnPEv3sI+jJGBkwl5qINf+6qY4eq0I9B4U=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
@@ -59,19 +59,19 @@ cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohl
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo=
cloud.google.com/go/storage v1.55.0 h1:NESjdAToN9u1tmhVqhXCaCwYBuvEhZLLv0gBr+2znf0=
cloud.google.com/go/storage v1.55.0/go.mod h1:ztSmTTwzsdXe5syLVS0YsbFxXuvEmEyZj7v7zChEmuY=
cloud.google.com/go/storage v1.57.2 h1:sVlym3cHGYhrp6XZKkKb+92I1V42ks2qKKpB0CF5Mb4=
cloud.google.com/go/storage v1.57.2/go.mod h1:n5ijg4yiRXXpCu0sJTD6k+eMf7GRrJmPyr9YxLXGHOk=
cloud.google.com/go/trace v1.11.6 h1:2O2zjPzqPYAHrn3OKl029qlqG6W8ZdYaOWRyr8NgMT4=
cloud.google.com/go/trace v1.11.6/go.mod h1:GA855OeDEBiBMzcckLPE2kDunIpC72N+Pq8WFieFjnI=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.1 h1:Wc1ml6QlJs2BHQ/9Bqu1jiyggbsSjramq2oUmp5WeIo=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.1/go.mod h1:Ot/6aikWnKWi4l9QB7qVSwa8iMphQNqkWALMoNT3rzM=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1 h1:B+blDbyVIG3WaikNxPnhPiJ1MThR03b3vKGtER95TP4=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1/go.mod h1:JdM5psgjfBf5fo2uWOZhflPWyDBZ/O/CNAH9CtsuZE4=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0 h1:JXg2dwJUmPB9JmtVmdEB16APJ7jurfbY5jnfXpJoRMc=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.20.0/go.mod h1:YD5h/ldMsG0XiIw7PdyNhLxaM317eFh5yNLccNfGdyw=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.1 h1:Hk5QBxZQC1jb2Fwj6mpzme37xbCDdNTxU7O9eb5+LB4=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.13.1/go.mod h1:IYus9qsFobWIc2YVwe/WPjcnyCkPKtnHAqUYeebc8z0=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.2 h1:yz1bePFlP5Vws5+8ez6T3HWXPmwOK7Yvq8QxDBD3SKY=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.2/go.mod h1:Pa9ZNPuoNu/GztvBSKk9J1cDJW6vk/n0zLtV4mgd8N8=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 h1:FPKJS1T+clwv+OLGt13a8UjqeRuh0O4SJ3lUriThc+4=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1/go.mod h1:j2chePtV91HrC22tGoRX3sGY42uF13WzmmV80/OdVAA=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 h1:9iefClla7iYpfYWdzPCRDozdmndjTm8DXdpCzPajMgA=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2/go.mod h1:XtLgD3ZD34DAaVIIAyG3objl5DynM3CQ/vMcbBNJZGI=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v5 v5.6.0 h1:ui3YNbxfW7J3tTFIZMH6LIGRjCngp+J+nIFlnizfNTE=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v5 v5.6.0/go.mod h1:gZmgV+qBqygoznvqo2J9oKZAFziqhLZ2xE/WVUmzkHA=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/internal/v2 v2.0.0 h1:PTFGRSlMKCQelWwxUyYVEUqseBJVemLyqWJjvMyt0do=
@@ -80,10 +80,10 @@ github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/internal/v3 v3.1.0 h1:2qsI
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/internal/v3 v3.1.0/go.mod h1:AW8VEadnhw9xox+VaVd9sP7NjzOAnaZBLRH6Tq3cJ38=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources v1.2.0 h1:Dd+RhdJn0OTtVGaeDLZpcumkIVCtA/3/Fo42+eoYvVM=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources v1.2.0/go.mod h1:5kakwfW5CjC9KK+Q4wjXAg+ShuIm2mBMua0ZFj2C8PE=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0 h1:LR0kAX9ykz8G4YgLCaRDVJ3+n43R8MneB5dTy2konZo=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.0/go.mod h1:DWAciXemNf++PQJLeXUB4HHH5OpsAh12HZnu2wXE1jA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1 h1:lhZdRq7TIx0GJQvSyX2Si406vrYsov2FXGp/RnSEtcs=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.1/go.mod h1:8cl44BDmi+effbARHMQjgOKA2AYvcohNm7KEt42mSV8=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1 h1:/Zt+cDPnpC3OVDm/JKLOs7M2DKmLRIIp3XIx9pHHiig=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.8.1/go.mod h1:Ng3urmn6dYe8gnbCMoHHVl5APYz2txho3koEkV2o2HA=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3 h1:ZJJNFaQ86GVKQ9ehwqyAFE6pIfyicpuJ8IkVaPBc6/4=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3/go.mod h1:URuDvhmATVKqHBH9/0nOiNKk0+YcwfQ3WkK5PqHKxc8=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 h1:L/gRVlceqvL25UVaW/CKtUDjefjrs0SPonmDGUVOYP0=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
@@ -95,20 +95,20 @@ github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZ
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1 h1:WJTmL004Abzc5wDB5VtZG2PJk5ndYDgVacGqfirKxjM=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1/go.mod h1:tCcJZ0uHAmvjsVYzEFivsRTN00oz5BEsRgQHu5JZ9WE=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0 h1:XkkQbfMyuH2jTSjQjSoihryI8GINRcs4xp8lNawg0FI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.5.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0 h1:XRzhVemXdgvJqCH0sFfrBUTnUJSBrBf7++ypk+twtRs=
github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/GehirnInc/crypt v0.0.0-20230320061759-8cc1b52080c5 h1:IEjq88XO4PuBDcvmjQJcQGg+w+UaafSy8G5Kcb5tBhI=
github.com/GehirnInc/crypt v0.0.0-20230320061759-8cc1b52080c5/go.mod h1:exZ0C/1emQJAw5tHOaUDyY1ycttqBAPcxuzf7QbY6ec=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0 h1:ErKg/3iS1AKcTkf3yixlZ54f9U1rljCkQyEXWUnIUxc=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.27.0/go.mod h1:yAZHSGnqScoU556rBOVkwLze6WP5N+U11RHuWaGVxwY=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0 h1:fYE9p3esPxA/C0rQ0AHhP0drtPXDRhaWiwg1DPqO7IU=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.51.0/go.mod h1:BnBReJLvVYx2CS/UHOgVz2BXKXD9wsQPxZug20nZhd0=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.51.0 h1:OqVGm6Ei3x5+yZmSJG1Mh2NwHvpVmZ08CB5qJhT9Nuk=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.51.0/go.mod h1:SZiPHWGOOk3bl8tkevxkoiwPgsIl6CwrWcbwjfHZpdM=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0 h1:6/0iUd0xrnX7qt+mLNRwg5c0PGv8wpE8K90ryANQwMI=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.51.0/go.mod h1:otE2jQekW/PqXk1Awf5lmfokJx4uwuqcj1ab5SpGeW0=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.30.0 h1:sBEjpZlNHzK1voKq9695PJSX2o5NEXl7/OL3coiIY0c=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.30.0/go.mod h1:P4WPRUkOhJC13W//jWpyfJNDAIpvRbAUIYLX/4jtlE0=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0 h1:owcC2UnmsZycprQ5RfRgjydWhuoxg71LUfyiQdijZuM=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0/go.mod h1:ZPpqegjbE99EPKsu3iUWV22A04wzGPcAY/ziSIQEEgs=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.53.0 h1:4LP6hvB4I5ouTbGgWtixJhgED6xdf67twf9PoY96Tbg=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/cloudmock v0.53.0/go.mod h1:jUZ5LYlw40WMd07qxcQJD5M40aUxrfwqQX1g7zxYnrQ=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0 h1:Ron4zCA/yk6U7WOBXhTJcDpsUBG9npumK6xw2auFltQ=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.53.0/go.mod h1:cSgYe11MCNYunTnRXrKiR/tHc0eoKjICUuWpNZoVCOo=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
@@ -189,8 +189,8 @@ github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDk
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f h1:C5bqEmzEPLsHm9Mv73lSE9e9bKV23aB1vxOsmZrkl3k=
github.com/cncf/xds/go v0.0.0-20250326154945-ae57f3c0d45f/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f h1:Y8xYupdHxryycyPlc9Y+bSQAYZnetRJ70VMVKm5CKI0=
github.com/cncf/xds/go v0.0.0-20251022180443-0feb69152e9f/go.mod h1:HlzOvOjVBOfTGSRXRyY0OiCS/3J1akRGQQpRO/7zyF4=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
@@ -211,8 +211,6 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
@@ -229,10 +227,10 @@ github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1m
github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.13.4 h1:zEqyPVyku6IvWCFwux4x9RxkLOMUL+1vC9xUFv5l2/M=
github.com/envoyproxy/go-control-plane v0.13.4/go.mod h1:kDfuBlDVsSj2MjrLEtRWtHlsWIFcGyB2RMO44Dc5GZA=
github.com/envoyproxy/go-control-plane/envoy v1.32.4 h1:jb83lalDRZSpPWW2Z7Mck/8kXZ5CQAFYVjQcdVIr83A=
github.com/envoyproxy/go-control-plane/envoy v1.32.4/go.mod h1:Gzjc5k8JcJswLjAx1Zm+wSYE20UrLtt7JZMWiWQXQEw=
github.com/envoyproxy/go-control-plane v0.13.5-0.20251024222203-75eaa193e329 h1:K+fnvUM0VZ7ZFJf0n4L/BRlnsb9pL/GuDG6FqaH+PwM=
github.com/envoyproxy/go-control-plane v0.13.5-0.20251024222203-75eaa193e329/go.mod h1:Alz8LEClvR7xKsrq3qzoc4N0guvVNSS8KmSChGYr9hs=
github.com/envoyproxy/go-control-plane/envoy v1.35.0 h1:ixjkELDE+ru6idPxcHLj8LBVc2bFP7iBytj353BoHUo=
github.com/envoyproxy/go-control-plane/envoy v1.35.0/go.mod h1:09qwbGVuSWWAyN5t/b3iyVfz5+z8QWGrzkoqm/8SbEs=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0 h1:/G9QYbddjL25KvtKTv3an9lx6VBE2cnb8wp1vEGNYGI=
github.com/envoyproxy/go-control-plane/ratelimit v0.1.0/go.mod h1:Wk+tMFAFbCXaJPzVVHnPgRKdUdwW/KdbRt94AzgRee4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
@@ -266,8 +264,8 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-jose/go-jose/v4 v4.0.5 h1:M6T8+mKZl/+fNNuFHvGIzDz7BTLQPIounk/b9dw3AaE=
github.com/go-jose/go-jose/v4 v4.0.5/go.mod h1:s3P1lRrkT8igV8D9OjyL4WRyHvjB6a4JSllnOrmmBOA=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
@@ -301,21 +299,19 @@ github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1v
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gofrs/flock v0.12.1 h1:MTLVXXHf8ekldpJk3AKicLij9MdwOWkZ+a/jHHZby9E=
github.com/gofrs/flock v0.12.1/go.mod h1:9zxTsyu5xtJ9DK+1tFZyibEV7y3uwDxPPfbxeeHCoD0=
github.com/gofrs/flock v0.13.0 h1:95JolYOvGMqeH31+FC7D2+uULf6mG61mEZ/A8dRYMzw=
github.com/gofrs/flock v0.13.0/go.mod h1:jxeyy9R1auM5S6JYDBhDt+E2TCo7DkratH4Pgi8P+Z0=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI=
github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@@ -403,12 +399,12 @@ github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.3.6 h1:GW/XbdyBFQ8Qe+YAmFU9uHLo7OnF5tL52HFAgMmyrf4=
github.com/googleapis/enterprise-certificate-proxy v0.3.6/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
github.com/googleapis/enterprise-certificate-proxy v0.3.7 h1:zrn2Ee/nWmHulBx5sAVrGgAa0f2/R35S4DJwfFaUPFQ=
github.com/googleapis/enterprise-certificate-proxy v0.3.7/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gax-go/v2 v2.14.2 h1:eBLnkZ9635krYIPD+ag1USrOAI0Nr0QYF3+/3GqO0k0=
github.com/googleapis/gax-go/v2 v2.14.2/go.mod h1:ON64QhlJkhVtSqp4v1uaK92VyZ2gmvDQsweuyLV+8+w=
github.com/googleapis/gax-go/v2 v2.15.0 h1:SyjDc1mGgZU5LncH8gimWo9lW1DtIfPibOG81vgd/bo=
github.com/googleapis/gax-go/v2 v2.15.0/go.mod h1:zVVkkxAQHa1RQpg9z2AUCMnKhi0Qld9rcmyfL1OZhoc=
github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU=
github.com/googleapis/gnostic v0.5.5/go.mod h1:7+EbHbldMins07ALC74bsA81Ovc97DwqyJO1AENw9kA=
github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
@@ -424,12 +420,12 @@ github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmg
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hanwen/go-fuse/v2 v2.8.0 h1:wV8rG7rmCz8XHSOwBZhG5YcVqcYjkzivjmbaMafPlAs=
github.com/hanwen/go-fuse/v2 v2.8.0/go.mod h1:yE6D2PqWwm3CbYRxFXV9xUd8Md5d6NG0WBs5spCswmI=
github.com/hanwen/go-fuse/v2 v2.9.0 h1:0AOGUkHtbOVeyGLr0tXupiid1Vg7QB7M6YUcdmVdC58=
github.com/hanwen/go-fuse/v2 v2.9.0/go.mod h1:yE6D2PqWwm3CbYRxFXV9xUd8Md5d6NG0WBs5spCswmI=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/cronexpr v1.1.2 h1:wG/ZYIKT+RT3QkOdgYc+xsKWVRgnxJ1OJtjjy84fJ9A=
github.com/hashicorp/cronexpr v1.1.2/go.mod h1:P4wA0KBl9C5q2hABiMO7cp6jcIg96CDh1Efb3g1PWA4=
github.com/hashicorp/cronexpr v1.1.3 h1:rl5IkxXN2m681EfivTlccqIryzYJSXRGRNa0xeG7NA4=
github.com/hashicorp/cronexpr v1.1.3/go.mod h1:P4wA0KBl9C5q2hABiMO7cp6jcIg96CDh1Efb3g1PWA4=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-hclog v0.14.1 h1:nQcJDQwIAGnmoUWp8ubocEX40cCml/17YkF6csQLReU=
@@ -486,18 +482,20 @@ github.com/keybase/go-keychain v0.0.1/go.mod h1:PdEILRW3i9D8JcdM+FmY6RwkHGnhHxXw
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk=
github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.10 h1:tBs3QSyvjDyFTq3uoc/9xFpCuOsJQFNPiAhYdw2skhE=
github.com/klauspost/cpuid/v2 v2.2.10/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/klauspost/crc32 v1.3.0 h1:sSmTt3gUt81RP655XGZPElI0PelVTZ6YwCRnPSupoFM=
github.com/klauspost/crc32 v1.3.0/go.mod h1:D7kQaZhnkX/Y0tstFGf8VUzv2UofNGqCjnC3zdHB0Hw=
github.com/klauspost/pgzip v1.2.6 h1:8RXeL5crjEUFnR2/Sn6GJNWtSQ3Dk8pq4CL3jvdDyjU=
github.com/klauspost/pgzip v1.2.6/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/klauspost/reedsolomon v1.12.4 h1:5aDr3ZGoJbgu/8+j45KtUJxzYm8k08JGtB9Wx1VQ4OA=
github.com/klauspost/reedsolomon v1.12.4/go.mod h1:d3CzOMOt0JXGIFZm1StgkyF14EYr3xneR2rNWo7NcMU=
github.com/klauspost/reedsolomon v1.12.6 h1:8pqE9aECQG/ZFitiUD1xK/E83zwosBAZtE3UbuZM8TQ=
github.com/klauspost/reedsolomon v1.12.6/go.mod h1:ggJT9lc71Vu+cSOPBlxGvBN6TfAS77qB4fp8vJ05NSA=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kopia/htmluibuild v0.0.1-0.20250607181534-77e0f3f9f557 h1:je1C/xnmKxnaJsIgj45me5qA51TgtK9uMwTxgDw+9H0=
github.com/kopia/htmluibuild v0.0.1-0.20250607181534-77e0f3f9f557/go.mod h1:h53A5JM3t2qiwxqxusBe+PFgGcgZdS+DWCQvG5PTlto=
github.com/kopia/htmluibuild v0.0.1-0.20251125011029-7f1c3f84f29d h1:U3VB/cDMsPW4zB4JRFbVRDzIpPytt889rJUKAG40NPA=
github.com/kopia/htmluibuild v0.0.1-0.20251125011029-7f1c3f84f29d/go.mod h1:h53A5JM3t2qiwxqxusBe+PFgGcgZdS+DWCQvG5PTlto=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
@@ -535,12 +533,12 @@ github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWE
|
||||
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
|
||||
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
|
||||
github.com/minio/crc64nvme v1.0.1 h1:DHQPrYPdqK7jQG/Ls5CTBZWeex/2FMS3G5XGkycuFrY=
|
||||
github.com/minio/crc64nvme v1.0.1/go.mod h1:eVfm2fAzLlxMdUGc0EEBGSMmPwmXD5XiNRpnu9J3bvg=
|
||||
github.com/minio/crc64nvme v1.1.0 h1:e/tAguZ+4cw32D+IO/8GSf5UVr9y+3eJcxZI2WOO/7Q=
|
||||
github.com/minio/crc64nvme v1.1.0/go.mod h1:eVfm2fAzLlxMdUGc0EEBGSMmPwmXD5XiNRpnu9J3bvg=
|
||||
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
|
||||
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
|
||||
github.com/minio/minio-go/v7 v7.0.94 h1:1ZoksIKPyaSt64AVOyaQvhDOgVC3MfZsWM6mZXRUGtM=
|
||||
github.com/minio/minio-go/v7 v7.0.94/go.mod h1:71t2CqDt3ThzESgZUlU1rBN54mksGGlkLcFgguDnnAc=
|
||||
github.com/minio/minio-go/v7 v7.0.97 h1:lqhREPyfgHTB/ciX8k2r8k0D93WaFqxbJX36UZq5occ=
|
||||
github.com/minio/minio-go/v7 v7.0.97/go.mod h1:re5VXuo0pwEtoNLsNuSr0RrLfT/MBtohwdaSmPPSRSk=
|
||||
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
|
||||
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
|
||||
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
|
||||
@@ -599,8 +597,8 @@ github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCko
|
||||
github.com/petar/GoLLRB v0.0.0-20210522233825-ae3b015fd3e9 h1:1/WtZae0yGtPq+TI6+Tv1WTxkukpXeMlviSxvL7SRgk=
|
||||
github.com/petar/GoLLRB v0.0.0-20210522233825-ae3b015fd3e9/go.mod h1:x3N5drFsm2uilKKuuYo6LdyD8vZAW55sH/9w+pbo1sw=
|
||||
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
|
||||
github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c h1:dAMKvw0MlJT1GshSTtih8C2gDs04w8dReiOGXrGLNoY=
|
||||
github.com/philhofer/fwd v1.1.3-0.20240916144458-20a13a1f6b7c/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
|
||||
github.com/philhofer/fwd v1.2.0 h1:e6DnBTl7vGY+Gz322/ASL4Gyp1FspeMvx1RNDoToZuM=
|
||||
github.com/philhofer/fwd v1.2.0/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
|
||||
github.com/pierrec/lz4 v2.6.1+incompatible h1:9UY3+iC23yxF0UfGaYrGplQ+79Rg+h/q9FV9ix19jjM=
|
||||
github.com/pierrec/lz4 v2.6.1+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
|
||||
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
|
||||
@@ -617,12 +615,12 @@ github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZN
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
|
||||
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
|
||||
github.com/project-velero/kopia v0.0.0-20250722052735-3ea24d208777 h1:T7t+u+mnF33qFTDq7bIMSMB51BEA8zkD7aU6tFQNZ6E=
|
||||
github.com/project-velero/kopia v0.0.0-20250722052735-3ea24d208777/go.mod h1:qlSnPHrsV8eEeU4l4zqEw8mJ5CUeXr7PDiJNI4r4Bus=
|
||||
github.com/project-velero/kopia v0.0.0-20251230033609-d946b1e75197 h1:iGkfuELGvFCqW+zcrhf2GsOwNH1nWYBsC69IOc57KJk=
|
||||
github.com/project-velero/kopia v0.0.0-20251230033609-d946b1e75197/go.mod h1:RL4KehCNKEIDNltN7oruSa3ldwBNVPmQbwmN3Schbjc=
|
||||
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
|
||||
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
|
||||
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
|
||||
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
|
||||
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
|
||||
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
|
||||
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
|
||||
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||
@@ -630,22 +628,20 @@ github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNw
|
||||
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
|
||||
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
|
||||
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
|
||||
github.com/prometheus/common v0.65.0 h1:QDwzd+G1twt//Kwj/Ww6E9FQq1iVMmODnILtW1t2VzE=
|
||||
github.com/prometheus/common v0.65.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
|
||||
github.com/prometheus/common v0.67.4 h1:yR3NqWO1/UyO1w2PhUvXlGQs/PtFmoveVO0KZ4+Lvsc=
|
||||
github.com/prometheus/common v0.67.4/go.mod h1:gP0fq6YjjNCLssJCQp0yk4M8W6ikLURwkdd/YKtTbyI=
|
||||
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
|
||||
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
|
||||
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
|
||||
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
|
||||
github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
|
||||
github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
|
||||
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
|
||||
github.com/redis/go-redis/v9 v9.8.0 h1:q3nRvjrlge/6UD7eTu/DSg2uYiU2mCL0G/uzBWqhicI=
|
||||
github.com/redis/go-redis/v9 v9.8.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
|
||||
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
|
||||
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
|
||||
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
|
||||
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
|
||||
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
|
||||
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
|
||||
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
|
||||
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
|
||||
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
|
||||
github.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=
|
||||
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
|
||||
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
|
||||
@@ -683,8 +679,8 @@ github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An
|
||||
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
|
||||
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
|
||||
github.com/spf13/viper v1.8.1/go.mod h1:o0Pch8wJ9BVSWGQMbra6iw0oQ5oktSIBaujf1rJH9Ns=
|
||||
github.com/spiffe/go-spiffe/v2 v2.5.0 h1:N2I01KCUkv1FAjZXJMwh95KK1ZIQLYbPfhaxw8WS0hE=
|
||||
github.com/spiffe/go-spiffe/v2 v2.5.0/go.mod h1:P+NxobPc6wXhVtINNtFjNWGBTreew1GBUCwT2wPmb7g=
|
||||
github.com/spiffe/go-spiffe/v2 v2.6.0 h1:l+DolpxNWYgruGQVV0xsfeya3CsC7m8iBzDnMpsbLuo=
|
||||
github.com/spiffe/go-spiffe/v2 v2.6.0/go.mod h1:gm2SeUoMZEtpnzPNs2Csc0D/gX33k1xIx7lEzqblHEs=
|
||||
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
@@ -702,8 +698,8 @@ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
|
||||
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
|
||||
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
|
||||
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
|
||||
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
|
||||
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
|
||||
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
|
||||
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
|
||||
github.com/tg123/go-htpasswd v1.2.4 h1:HgH8KKCjdmo7jjXWN9k1nefPBd7Be3tFCTjc2jPraPU=
|
||||
github.com/tg123/go-htpasswd v1.2.4/go.mod h1:EKThQok9xHkun6NBMynNv6Jmu24A33XdZzzl4Q7H1+0=
|
||||
@@ -731,8 +727,6 @@ github.com/zeebo/assert v1.1.0 h1:hU1L1vLTHsnO8x8c9KAR5GmM5QscxHg5RNU5z5qbUWY=
|
||||
github.com/zeebo/assert v1.1.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN/wJ0=
|
||||
github.com/zeebo/blake3 v0.2.4 h1:KYQPkhpRtcqh0ssGYcKLG1JYvddkEA8QwCM/yBqhaZI=
|
||||
github.com/zeebo/blake3 v0.2.4/go.mod h1:7eeQ6d2iXWRGF6npfaxl2CU+xy2Fjo2gxeyZGCRUjcE=
|
||||
github.com/zeebo/errs v1.4.0 h1:XNdoD/RRMKP7HD0UhJnIzUy74ISdGGxURlYG8HSWSfM=
|
||||
github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4=
|
||||
github.com/zeebo/pcg v1.0.1 h1:lyqfGeWiv4ahac6ttHs+I5hwtH/+1mrhlCtVNQM2kHo=
|
||||
github.com/zeebo/pcg v1.0.1/go.mod h1:09F0S9iiKrwn9rlI5yjLkmrug154/YRW6KnnXVDM/l4=
|
||||
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
|
||||
@@ -746,26 +740,26 @@ go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
|
||||
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
|
||||
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
|
||||
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
|
||||
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
|
||||
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
|
||||
go.opentelemetry.io/contrib/detectors/gcp v1.36.0 h1:F7q2tNlCaHY9nMKHR6XH9/qkp8FktLnIcy6jJNyOCQw=
|
||||
go.opentelemetry.io/contrib/detectors/gcp v1.36.0/go.mod h1:IbBN8uAIIx734PTonTPxAxnjc2pQTxWNkwfstZ+6H2k=
|
||||
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
|
||||
go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
|
||||
go.opentelemetry.io/contrib/detectors/gcp v1.38.0 h1:ZoYbqX7OaA/TAikspPl3ozPI6iY6LiIY9I8cUfm+pJs=
|
||||
go.opentelemetry.io/contrib/detectors/gcp v1.38.0/go.mod h1:SU+iU7nu5ud4oCb3LQOhIZ3nRLj6FNVrKgtflbaf2ts=
|
||||
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 h1:q4XOmH/0opmeuJtPsbFNivyl7bCt7yRBbeEm2sC/XtQ=
|
||||
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
|
||||
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
|
||||
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
|
||||
go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
|
||||
go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
|
||||
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
|
||||
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
|
||||
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0 h1:rixTyDGXFxRy1xzhKrotaHy3/KXdPhlWARrCgK+eqUY=
|
||||
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.36.0/go.mod h1:dowW6UsM9MKbJq5JTz2AMVp3/5iW5I/TStsk8S+CfHw=
|
||||
go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
|
||||
go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
|
||||
go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
|
||||
go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg=
|
||||
go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
|
||||
go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
|
||||
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
|
||||
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
|
||||
go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
|
||||
go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
|
||||
go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
|
||||
go.opentelemetry.io/otel/sdk v1.38.0/go.mod h1:ghmNdGlVemJI3+ZB5iDEuk4bWA3GkTpW+DOoZMYBVVg=
|
||||
go.opentelemetry.io/otel/sdk/metric v1.38.0 h1:aSH66iL0aZqo//xXzQLYozmWrXxyFkBJ6qT5wthqPoM=
|
||||
go.opentelemetry.io/otel/sdk/metric v1.38.0/go.mod h1:dg9PBnW9XdQ1Hd6ZnRz689CbtrUp0wMMs9iPcgT9EZA=
|
||||
go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
|
||||
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
|
||||
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5/go.mod h1:nmDLcffg48OtT/PSW0Hg7FvpRQsQh5OSqIylirxKC7o=
|
||||
go.starlark.net v0.0.0-20201006213952-227f4aabceb5/go.mod h1:f0znQkUKRrkk36XxWbGjMqQM8wGv/xHBVE2qc3B5oFU=
|
||||
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca h1:VdD38733bfYv5tUZwEIskMM93VanwNIi5bIKnDrJdEY=
|
||||
@@ -780,8 +774,10 @@ go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
|
||||
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
|
||||
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
|
||||
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
|
||||
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
|
||||
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
|
||||
go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
|
||||
go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
|
||||
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
|
||||
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
|
||||
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
@@ -833,8 +829,8 @@ golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
|
||||
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
|
||||
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
|
||||
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
|
||||
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
@@ -895,8 +891,8 @@ golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ
|
||||
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
|
||||
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
|
||||
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
|
||||
golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
|
||||
golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
@@ -996,8 +992,8 @@ golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxb
|
||||
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
|
||||
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
|
||||
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
|
||||
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
|
||||
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
@@ -1059,6 +1055,8 @@ golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8T
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
gomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw=
|
||||
gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY=
|
||||
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
|
||||
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
|
||||
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
|
||||
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
|
||||
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
|
||||
@@ -1081,8 +1079,8 @@ google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjR
|
||||
google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
|
||||
google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
|
||||
google.golang.org/api v0.44.0/go.mod h1:EBOGZqzyhtvMDoxwS97ctnh0zUmYY6CxqXsc1AvkYD8=
|
||||
google.golang.org/api v0.241.0 h1:QKwqWQlkc6O895LchPEDUSYr22Xp3NCxpQRiWTB6avE=
|
||||
google.golang.org/api v0.241.0/go.mod h1:cOVEm2TpdAGHL2z+UwyS+kmlGr3bVWQQ6sYEqkKje50=
|
||||
google.golang.org/api v0.256.0 h1:u6Khm8+F9sxbCTYNoBHg6/Hwv0N/i+V94MvkOSor6oI=
|
||||
google.golang.org/api v0.256.0/go.mod h1:KIgPhksXADEKJlnEoRa9qAII4rXcy40vfI8HRqcU964=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
@@ -1134,12 +1132,12 @@ google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6D
|
||||
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
|
||||
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
|
||||
google.golang.org/genproto v0.0.0-20250505200425-f936aa4a68b2 h1:1tXaIXCracvtsRxSBsYDiSBN0cuJvM7QYW+MrpIRY78=
|
||||
google.golang.org/genproto v0.0.0-20250505200425-f936aa4a68b2/go.mod h1:49MsLSx0oWMOZqcpB3uL8ZOkAh1+TndpJ8ONoCBWiZk=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 h1:oWVWY3NzT7KJppx2UKhKmzPq4SRe0LdCijVRwvGeikY=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822/go.mod h1:h3c4v36UTKzUiuaOKQ6gr3S+0hovBtUrXzTG/i3+XEc=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 h1:fc6jSaCT0vBduLYZHYrBBNY4dsWuvgyff9noRNDdBeE=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
|
||||
google.golang.org/genproto v0.0.0-20250603155806-513f23925822 h1:rHWScKit0gvAPuOnu87KpaYtjK5zBMLcULh7gxkCXu4=
|
||||
google.golang.org/genproto v0.0.0-20250603155806-513f23925822/go.mod h1:HubltRL7rMh0LfnQPkMH4NPDFEWp0jw3vixw7jEM53s=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8 h1:mepRgnBZa07I4TRuomDE4sTIYieg/osKmzIf4USdWS4=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20251022142026-3a174f9686a8/go.mod h1:fDMmzKV90WSg1NbozdqrE64fkuTv6mlq2zxo9ad+3yo=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101 h1:tRPGkdGHuewF4UisLzzHHr1spKw92qLM98nIzxbC0wY=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20251103181224-f26f9409b101/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
|
||||
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
|
||||
@@ -1161,8 +1159,8 @@ google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAG
|
||||
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
|
||||
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
|
||||
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
|
||||
google.golang.org/grpc v1.73.0 h1:VIWSmpI2MegBtTuFt5/JWy2oXxtjJ/e89Z70ImfD2ok=
|
||||
google.golang.org/grpc v1.73.0/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7EVBQc=
|
||||
google.golang.org/grpc v1.77.0 h1:wVVY6/8cGA6vvffn+wWK5ToddbgdU3d8MNENr4evgXM=
|
||||
google.golang.org/grpc v1.77.0/go.mod h1:z0BY1iVj0q8E1uSQCjL9cppRj+gnZjzDnzV0dHhrNig=
|
||||
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
|
||||
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
|
||||
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
|
||||
@@ -1176,8 +1174,8 @@ google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlba
|
||||
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
|
||||
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
|
||||
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
|
||||
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
|
||||
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
|
||||
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
|
||||
google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
|
||||
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
|
||||
@@ -146,6 +146,9 @@ func (p *Policies) BuildPolicy(resPolicies *ResourcePolicies) error {
|
||||
if len(con.PVCLabels) > 0 {
|
||||
volP.conditions = append(volP.conditions, &pvcLabelsCondition{labels: con.PVCLabels})
|
||||
}
|
||||
if len(con.PVCPhase) > 0 {
|
||||
volP.conditions = append(volP.conditions, &pvcPhaseCondition{phases: con.PVCPhase})
|
||||
}
|
||||
p.volumePolicies = append(p.volumePolicies, volP)
|
||||
}
|
||||
|
||||
@@ -191,6 +194,9 @@ func (p *Policies) GetMatchAction(res any) (*Action, error) {
|
||||
if data.PVC != nil {
|
||||
volume.parsePVC(data.PVC)
|
||||
}
|
||||
case data.PVC != nil:
|
||||
// Handle PVC-only scenarios (e.g., unbound PVCs)
|
||||
volume.parsePVC(data.PVC)
|
||||
default:
|
||||
return nil, errors.New("failed to convert object")
|
||||
}
|
||||
|
||||
@@ -983,6 +983,69 @@ volumePolicies:
|
||||
},
|
||||
skip: false,
|
||||
},
|
||||
{
|
||||
name: "PVC phase matching - Pending phase should skip",
|
||||
yamlData: `version: v1
|
||||
volumePolicies:
|
||||
- conditions:
|
||||
pvcPhase: ["Pending"]
|
||||
action:
|
||||
type: skip`,
|
||||
vol: nil,
|
||||
podVol: nil,
|
||||
pvc: &corev1api.PersistentVolumeClaim{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Namespace: "default",
|
||||
Name: "pvc-pending",
|
||||
},
|
||||
Status: corev1api.PersistentVolumeClaimStatus{
|
||||
Phase: corev1api.ClaimPending,
|
||||
},
|
||||
},
|
||||
skip: true,
|
||||
},
|
||||
{
|
||||
name: "PVC phase matching - Bound phase should not skip",
|
||||
yamlData: `version: v1
|
||||
volumePolicies:
|
||||
- conditions:
|
||||
pvcPhase: ["Pending"]
|
||||
action:
|
||||
type: skip`,
|
||||
vol: nil,
|
||||
podVol: nil,
|
||||
pvc: &corev1api.PersistentVolumeClaim{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Namespace: "default",
|
||||
Name: "pvc-bound",
|
||||
},
|
||||
Status: corev1api.PersistentVolumeClaimStatus{
|
||||
Phase: corev1api.ClaimBound,
|
||||
},
|
||||
},
|
||||
skip: false,
|
||||
},
|
||||
{
|
||||
name: "PVC phase matching - Multiple phases (Pending, Lost)",
|
||||
yamlData: `version: v1
|
||||
volumePolicies:
|
||||
- conditions:
|
||||
pvcPhase: ["Pending", "Lost"]
|
||||
action:
|
||||
type: skip`,
|
||||
vol: nil,
|
||||
podVol: nil,
|
||||
pvc: &corev1api.PersistentVolumeClaim{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Namespace: "default",
|
||||
Name: "pvc-lost",
|
||||
},
|
||||
Status: corev1api.PersistentVolumeClaimStatus{
|
||||
Phase: corev1api.ClaimLost,
|
||||
},
|
||||
},
|
||||
skip: true,
|
||||
},
|
||||
}
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
@@ -1059,32 +1122,53 @@ func TestParsePVC(t *testing.T) {
|
||||
name string
|
||||
pvc *corev1api.PersistentVolumeClaim
|
||||
expectedLabels map[string]string
|
||||
expectedPhase string
|
||||
expectErr bool
|
||||
}{
|
||||
{
|
||||
name: "valid PVC with labels",
|
||||
name: "valid PVC with labels and Pending phase",
|
||||
pvc: &corev1api.PersistentVolumeClaim{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Labels: map[string]string{"env": "prod"},
|
||||
},
|
||||
Status: corev1api.PersistentVolumeClaimStatus{
|
||||
Phase: corev1api.ClaimPending,
|
||||
},
|
||||
},
|
||||
expectedLabels: map[string]string{"env": "prod"},
|
||||
expectedPhase: "Pending",
|
||||
expectErr: false,
|
||||
},
|
||||
{
|
||||
name: "valid PVC with empty labels",
|
||||
name: "valid PVC with Bound phase",
|
||||
pvc: &corev1api.PersistentVolumeClaim{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Labels: map[string]string{},
|
||||
},
|
||||
Status: corev1api.PersistentVolumeClaimStatus{
|
||||
Phase: corev1api.ClaimBound,
|
||||
},
|
||||
},
|
||||
expectedLabels: nil,
|
||||
expectedPhase: "Bound",
|
||||
expectErr: false,
|
||||
},
|
||||
{
|
||||
name: "valid PVC with Lost phase",
|
||||
pvc: &corev1api.PersistentVolumeClaim{
|
||||
Status: corev1api.PersistentVolumeClaimStatus{
|
||||
Phase: corev1api.ClaimLost,
|
||||
},
|
||||
},
|
||||
expectedLabels: nil,
|
||||
expectedPhase: "Lost",
|
||||
expectErr: false,
|
||||
},
|
||||
{
|
||||
name: "nil PVC pointer",
|
||||
pvc: (*corev1api.PersistentVolumeClaim)(nil),
|
||||
expectedLabels: nil,
|
||||
expectedPhase: "",
|
||||
expectErr: false,
|
||||
},
|
||||
}
|
||||
@@ -1095,6 +1179,66 @@ func TestParsePVC(t *testing.T) {
|
||||
s.parsePVC(tc.pvc)
|
||||
|
||||
assert.Equal(t, tc.expectedLabels, s.pvcLabels)
|
||||
assert.Equal(t, tc.expectedPhase, s.pvcPhase)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestPVCPhaseMatch(t *testing.T) {
	tests := []struct {
		name          string
		condition     *pvcPhaseCondition
		volume        *structuredVolume
		expectedMatch bool
	}{
		{
			name:          "match Pending phase",
			condition:     &pvcPhaseCondition{phases: []string{"Pending"}},
			volume:        &structuredVolume{pvcPhase: "Pending"},
			expectedMatch: true,
		},
		{
			name:          "match multiple phases - Pending matches",
			condition:     &pvcPhaseCondition{phases: []string{"Pending", "Bound"}},
			volume:        &structuredVolume{pvcPhase: "Pending"},
			expectedMatch: true,
		},
		{
			name:          "match multiple phases - Bound matches",
			condition:     &pvcPhaseCondition{phases: []string{"Pending", "Bound"}},
			volume:        &structuredVolume{pvcPhase: "Bound"},
			expectedMatch: true,
		},
		{
			name:          "no match for different phase",
			condition:     &pvcPhaseCondition{phases: []string{"Pending"}},
			volume:        &structuredVolume{pvcPhase: "Bound"},
			expectedMatch: false,
		},
		{
			name:          "no match for empty phase",
			condition:     &pvcPhaseCondition{phases: []string{"Pending"}},
			volume:        &structuredVolume{pvcPhase: ""},
			expectedMatch: false,
		},
		{
			name:          "match with empty phases list (always match)",
			condition:     &pvcPhaseCondition{phases: []string{}},
			volume:        &structuredVolume{pvcPhase: "Pending"},
			expectedMatch: true,
		},
		{
			name:          "match with nil phases list (always match)",
			condition:     &pvcPhaseCondition{phases: nil},
			volume:        &structuredVolume{pvcPhase: "Pending"},
			expectedMatch: true,
		},
	}

	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			result := tc.condition.match(tc.volume)
			assert.Equal(t, tc.expectedMatch, result)
		})
	}
}

@@ -51,6 +51,7 @@ type structuredVolume struct {
	csi        *csiVolumeSource
	volumeType SupportedVolume
	pvcLabels  map[string]string
	pvcPhase   string
}

func (s *structuredVolume) parsePV(pv *corev1api.PersistentVolume) {
@@ -70,8 +71,11 @@ func (s *structuredVolume) parsePV(pv *corev1api.PersistentVolume) {
}

func (s *structuredVolume) parsePVC(pvc *corev1api.PersistentVolumeClaim) {
	if pvc != nil && len(pvc.GetLabels()) > 0 {
		s.pvcLabels = pvc.Labels
	if pvc != nil {
		if len(pvc.GetLabels()) > 0 {
			s.pvcLabels = pvc.Labels
		}
		s.pvcPhase = string(pvc.Status.Phase)
	}
}

@@ -110,6 +114,31 @@ func (c *pvcLabelsCondition) validate() error {
	return nil
}

// pvcPhaseCondition defines a condition that matches if the PVC's phase matches any of the provided phases.
type pvcPhaseCondition struct {
	phases []string
}

func (c *pvcPhaseCondition) match(v *structuredVolume) bool {
	// No phases specified: always match.
	if len(c.phases) == 0 {
		return true
	}
	if v.pvcPhase == "" {
		return false
	}
	for _, phase := range c.phases {
		if v.pvcPhase == phase {
			return true
		}
	}
	return false
}

func (c *pvcPhaseCondition) validate() error {
	return nil
}
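The new `pvcPhaseCondition` matches a volume when its PVC phase equals any entry in the configured list, with an empty or nil list acting as a wildcard and an empty phase never matching a non-empty list. A minimal standalone sketch of that matching rule (types simplified from the diff; `phaseCondition` is an illustrative name, not the real one):

```go
package main

import "fmt"

// phaseCondition mirrors the matching rule of pvcPhaseCondition:
// an empty phase list always matches; otherwise the PVC phase must
// equal one of the listed phases, and an unknown phase never matches.
type phaseCondition struct {
	phases []string
}

func (c *phaseCondition) match(pvcPhase string) bool {
	if len(c.phases) == 0 {
		return true // no phases specified: always match
	}
	if pvcPhase == "" {
		return false // phase unknown: cannot match a non-empty list
	}
	for _, p := range c.phases {
		if pvcPhase == p {
			return true
		}
	}
	return false
}

func main() {
	c := &phaseCondition{phases: []string{"Pending", "Bound"}}
	fmt.Println(c.match("Bound"))                  // true
	fmt.Println(c.match("Lost"))                   // false
	fmt.Println((&phaseCondition{}).match("Lost")) // true: empty list is a wildcard
}
```

This is the same three-branch logic exercised by the `TestPVCPhaseMatch` table above, just detached from the `structuredVolume` type.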

type capacityCondition struct {
	capacity capacity
}

@@ -46,6 +46,7 @@ type volumeConditions struct {
	CSI         *csiVolumeSource  `yaml:"csi,omitempty"`
	VolumeTypes []SupportedVolume `yaml:"volumeTypes,omitempty"`
	PVCLabels   map[string]string `yaml:"pvcLabels,omitempty"`
	PVCPhase    []string          `yaml:"pvcPhase,omitempty"`
}
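With the new `pvcPhase` field on `volumeConditions`, a volume policy can target PVCs by phase alongside the existing label condition. A hypothetical resource-policy fragment (the surrounding `volumePolicies`/`action` layout follows Velero's resource policy ConfigMap format; the values are illustrative):

```yaml
version: v1
volumePolicies:
  - conditions:
      # matches volumes whose PVC is still Pending or already Bound
      pvcPhase:
        - Pending
        - Bound
      pvcLabels:
        env: prod
    action:
      type: skip
```

Omitting `pvcPhase` (or leaving it empty) keeps the previous behavior, since an empty phase list always matches.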

func (c *capacityCondition) validate() error {

@@ -340,20 +340,16 @@ func (s *nodeAgentServer) run() {
		}
	}

	var cachePVCConfig *velerotypes.CachePVC
	if s.dataPathConfigs != nil && s.dataPathConfigs.CachePVCConfig != nil {
		if err := s.validateCachePVCConfig(*s.dataPathConfigs.CachePVCConfig); err != nil {
			s.logger.WithError(err).Warnf("Ignore cache config %v", s.dataPathConfigs.CachePVCConfig)
		} else {
			cachePVCConfig = s.dataPathConfigs.CachePVCConfig
			s.logger.Infof("Using cache volume configs %v", s.dataPathConfigs.CachePVCConfig)
		}
	}

	var cachePVCConfig *velerotypes.CachePVC
	if s.dataPathConfigs != nil && s.dataPathConfigs.CachePVCConfig != nil {
		cachePVCConfig = s.dataPathConfigs.CachePVCConfig
		s.logger.Infof("Using customized cachePVC config %v", cachePVCConfig)
	}

	var podLabels map[string]string
	if s.dataPathConfigs != nil && len(s.dataPathConfigs.PodLabels) > 0 {
		podLabels = s.dataPathConfigs.PodLabels
@@ -368,6 +364,8 @@ func (s *nodeAgentServer) run() {

	if s.backupRepoConfigs != nil {
		s.logger.Infof("Using backup repo config %v", s.backupRepoConfigs)
	} else if cachePVCConfig != nil {
		s.logger.Info("Backup repo config is not provided, using default values for cache volume configs")
	}

	pvbReconciler := controller.NewPodVolumeBackupReconciler(

@@ -115,7 +115,11 @@ var (
	"datauploads.velero.io",
	"persistentvolumes",
	"persistentvolumeclaims",
	"clusterroles",
	"roles",
	"serviceaccounts",
	"clusterrolebindings",
	"rolebindings",
	"secrets",
	"configmaps",
	"limitranges",

@@ -307,6 +307,16 @@ func (b *backupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr

	backupScheduleName := request.GetLabels()[velerov1api.ScheduleNameLabel]

	b.backupTracker.Add(request.Namespace, request.Name)
	defer func() {
		switch request.Status.Phase {
		case velerov1api.BackupPhaseCompleted, velerov1api.BackupPhasePartiallyFailed, velerov1api.BackupPhaseFailed, velerov1api.BackupPhaseFailedValidation:
			b.backupTracker.Delete(request.Namespace, request.Name)
		case velerov1api.BackupPhaseWaitingForPluginOperations, velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed, velerov1api.BackupPhaseFinalizing, velerov1api.BackupPhaseFinalizingPartiallyFailed:
			b.backupTracker.AddPostProcessing(request.Namespace, request.Name)
		}
	}()

	if request.Status.Phase == velerov1api.BackupPhaseFailedValidation {
		log.Debug("failed to validate backup status")
		b.metrics.RegisterBackupValidationFailure(backupScheduleName)
@@ -318,16 +328,6 @@ func (b *backupReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctr
	// store ref to just-updated item for creating patch
	original = request.Backup.DeepCopy()

	b.backupTracker.Add(request.Namespace, request.Name)
	defer func() {
		switch request.Status.Phase {
		case velerov1api.BackupPhaseCompleted, velerov1api.BackupPhasePartiallyFailed, velerov1api.BackupPhaseFailed, velerov1api.BackupPhaseFailedValidation:
			b.backupTracker.Delete(request.Namespace, request.Name)
		case velerov1api.BackupPhaseWaitingForPluginOperations, velerov1api.BackupPhaseWaitingForPluginOperationsPartiallyFailed, velerov1api.BackupPhaseFinalizing, velerov1api.BackupPhaseFinalizingPartiallyFailed:
			b.backupTracker.AddPostProcessing(request.Namespace, request.Name)
		}
	}()

	log.Debug("Running backup")

	b.metrics.RegisterBackupAttempt(backupScheduleName)

@@ -246,6 +246,7 @@ func TestProcessBackupValidationFailures(t *testing.T) {
	clock:         &clock.RealClock{},
	formatFlag:    formatFlag,
	metrics:       metrics.NewServerMetrics(),
	backupTracker: NewBackupTracker(),
}

require.NotNil(t, test.backup)

@@ -292,8 +292,14 @@ func (r *DataDownloadReconciler) Reconcile(ctx context.Context, req ctrl.Request
	return ctrl.Result{}, nil
} else if dd.Status.Phase == velerov2alpha1api.DataDownloadPhaseAccepted {
	if peekErr := r.restoreExposer.PeekExposed(ctx, getDataDownloadOwnerObject(dd)); peekErr != nil {
		r.tryCancelDataDownload(ctx, dd, fmt.Sprintf("found a datadownload %s/%s with expose error: %s. mark it as cancel", dd.Namespace, dd.Name, peekErr))
		log.Errorf("Cancel dd %s/%s because of expose error %s", dd.Namespace, dd.Name, peekErr)

		diags := strings.Split(r.restoreExposer.DiagnoseExpose(ctx, getDataDownloadOwnerObject(dd)), "\n")
		for _, diag := range diags {
			log.Warnf("[Diagnose DD expose]%s", diag)
		}

		r.tryCancelDataDownload(ctx, dd, fmt.Sprintf("found a datadownload %s/%s with expose error: %s. mark it as cancel", dd.Namespace, dd.Name, peekErr))
	} else if dd.Status.AcceptedTimestamp != nil {
		if time.Since(dd.Status.AcceptedTimestamp.Time) >= r.preparingTimeout {
			r.onPrepareTimeout(ctx, dd)
@@ -918,7 +924,7 @@ func (r *DataDownloadReconciler) setupExposeParam(dd *velerov2alpha1api.DataDown
		cacheVolume = &exposer.CacheConfigs{
			Limit:             limit,
			StorageClass:      r.cacheVolumeConfigs.StorageClass,
			ResidentThreshold: r.cacheVolumeConfigs.ResidentThreshold,
			ResidentThreshold: r.cacheVolumeConfigs.ResidentThresholdInMB << 20,
		}
	}
}
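The cache resident threshold is now configured in MB and converted to bytes at the point of use: shifting an integer left by 20 bits multiplies it by 2^20 (1,048,576), i.e. mebibytes to bytes. A quick illustration of the conversion:

```go
package main

import "fmt"

func main() {
	// x << 20 multiplies x by 2^20 = 1,048,576,
	// converting a size expressed in MB into bytes.
	residentThresholdInMB := int64(512)
	bytes := residentThresholdInMB << 20
	fmt.Println(bytes)                  // 536870912
	fmt.Println(bytes == 512*1024*1024) // true
}
```

Doing the shift once, where the `CacheConfigs` struct is built, keeps the user-facing config in whole MB while the exposer works in bytes.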

@@ -561,6 +561,7 @@ func TestDataDownloadReconcile(t *testing.T) {
	ep.On("GetExposed", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil, nil)
} else if test.isPeekExposeErr {
	ep.On("PeekExposed", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(errors.New("fake-peek-error"))
	ep.On("DiagnoseExpose", mock.Anything, mock.Anything).Return("")
}

if !test.notMockCleanUp {

@@ -298,8 +298,14 @@ func (r *DataUploadReconciler) Reconcile(ctx context.Context, req ctrl.Request)
	return ctrl.Result{}, nil
} else if du.Status.Phase == velerov2alpha1api.DataUploadPhaseAccepted {
	if peekErr := ep.PeekExposed(ctx, getOwnerObject(du)); peekErr != nil {
		r.tryCancelDataUpload(ctx, du, fmt.Sprintf("found a du %s/%s with expose error: %s. mark it as cancel", du.Namespace, du.Name, peekErr))
		log.Errorf("Cancel du %s/%s because of expose error %s", du.Namespace, du.Name, peekErr)

		diags := strings.Split(ep.DiagnoseExpose(ctx, getOwnerObject(du)), "\n")
		for _, diag := range diags {
			log.Warnf("[Diagnose DU expose]%s", diag)
		}

		r.tryCancelDataUpload(ctx, du, fmt.Sprintf("found a du %s/%s with expose error: %s. mark it as cancel", du.Namespace, du.Name, peekErr))
	} else if du.Status.AcceptedTimestamp != nil {
		if time.Since(du.Status.AcceptedTimestamp.Time) >= r.preparingTimeout {
			r.onPrepareTimeout(ctx, du)

@@ -260,6 +260,12 @@ func (r *PodVolumeBackupReconciler) Reconcile(ctx context.Context, req ctrl.Requ
} else if pvb.Status.Phase == velerov1api.PodVolumeBackupPhaseAccepted {
	if peekErr := r.exposer.PeekExposed(ctx, getPVBOwnerObject(pvb)); peekErr != nil {
		log.Errorf("Cancel PVB %s/%s because of expose error %s", pvb.Namespace, pvb.Name, peekErr)

		diags := strings.Split(r.exposer.DiagnoseExpose(ctx, getPVBOwnerObject(pvb)), "\n")
		for _, diag := range diags {
			log.Warnf("[Diagnose PVB expose]%s", diag)
		}

		r.tryCancelPodVolumeBackup(ctx, pvb, fmt.Sprintf("found a PVB %s/%s with expose error: %s. mark it as cancel", pvb.Namespace, pvb.Name, peekErr))
	} else if pvb.Status.AcceptedTimestamp != nil {
		if time.Since(pvb.Status.AcceptedTimestamp.Time) >= r.preparingTimeout {

@@ -274,6 +274,12 @@ func (r *PodVolumeRestoreReconciler) Reconcile(ctx context.Context, req ctrl.Req
} else if pvr.Status.Phase == velerov1api.PodVolumeRestorePhaseAccepted {
	if peekErr := r.exposer.PeekExposed(ctx, getPVROwnerObject(pvr)); peekErr != nil {
		log.Errorf("Cancel PVR %s/%s because of expose error %s", pvr.Namespace, pvr.Name, peekErr)

		diags := strings.Split(r.exposer.DiagnoseExpose(ctx, getPVROwnerObject(pvr)), "\n")
		for _, diag := range diags {
			log.Warnf("[Diagnose PVR expose]%s", diag)
		}

		_ = r.tryCancelPodVolumeRestore(ctx, pvr, fmt.Sprintf("found a PVR %s/%s with expose error: %s. mark it as cancel", pvr.Namespace, pvr.Name, peekErr))
	} else if pvr.Status.AcceptedTimestamp != nil {
		if time.Since(pvr.Status.AcceptedTimestamp.Time) >= r.preparingTimeout {
@@ -934,7 +940,7 @@ func (r *PodVolumeRestoreReconciler) setupExposeParam(pvr *velerov1api.PodVolume
		cacheVolume = &exposer.CacheConfigs{
			Limit:             limit,
			StorageClass:      r.cacheVolumeConfigs.StorageClass,
			ResidentThreshold: r.cacheVolumeConfigs.ResidentThreshold,
			ResidentThreshold: r.cacheVolumeConfigs.ResidentThresholdInMB << 20,
		}
	}
}

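Each reconciler now emits the exposer diagnostics before cancelling, splitting the multi-line `DiagnoseExpose` report and logging every line with a prefix. The pattern, detached from the reconcilers (illustrative names):

```go
package main

import (
	"fmt"
	"strings"
)

// prefixLines mirrors how the reconcilers handle DiagnoseExpose output:
// the multi-line report is split on newlines and each line gets a
// bracketed prefix so related log entries group together.
func prefixLines(report, prefix string) []string {
	var out []string
	for _, diag := range strings.Split(report, "\n") {
		out = append(out, fmt.Sprintf("[%s]%s", prefix, diag))
	}
	return out
}

func main() {
	report := "begin diagnose CSI exposer\nPod velero/fake-backup, phase Pending\nend diagnose CSI exposer"
	for _, line := range prefixLines(report, "Diagnose DU expose") {
		fmt.Println(line)
	}
}
```

Logging line by line keeps each diagnostic searchable on its own instead of burying the whole report in a single log entry.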
@@ -1024,6 +1024,7 @@ func TestPodVolumeRestoreReconcile(t *testing.T) {
	ep.On("GetExposed", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil, nil)
} else if test.isPeekExposeErr {
	ep.On("PeekExposed", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(errors.New("fake-peek-error"))
	ep.On("DiagnoseExpose", mock.Anything, mock.Anything).Return("")
}

if !test.notMockCleanUp {

@@ -259,7 +259,7 @@ func (e *csiSnapshotExposer) Expose(ctx context.Context, ownerObject corev1api.O
	return errors.Wrap(err, "error to create backup pod")
}

curLog.WithField("pod name", backupPod.Name).WithField("affinity", csiExposeParam.Affinity).Info("Backup pod is created")
curLog.WithField("pod name", backupPod.Name).WithField("affinity", affinity).Info("Backup pod is created")

defer func() {
	if err != nil {

@@ -1307,6 +1307,7 @@ func Test_csiSnapshotExposer_DiagnoseExpose(t *testing.T) {
				Message: "fake-pod-message",
			},
		},
		Message: "fake-pod-message-1",
	},
}

@@ -1501,7 +1502,7 @@ end diagnose CSI exposer`,
	&backupVSWithoutStatus,
},
expected: `begin diagnose CSI exposer
Pod velero/fake-backup, phase Pending, node name
Pod velero/fake-backup, phase Pending, node name , message fake-pod-message-1
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-backup, phase Pending, binding to
VS velero/fake-backup, bind to , readyToUse false, errMessage
@@ -1518,7 +1519,7 @@ end diagnose CSI exposer`,
	&backupVSWithoutVSC,
},
expected: `begin diagnose CSI exposer
Pod velero/fake-backup, phase Pending, node name
Pod velero/fake-backup, phase Pending, node name , message fake-pod-message-1
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-backup, phase Pending, binding to
VS velero/fake-backup, bind to , readyToUse false, errMessage
@@ -1535,7 +1536,7 @@ end diagnose CSI exposer`,
	&backupVSWithoutVSC,
},
expected: `begin diagnose CSI exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod velero/fake-backup, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
node-agent is not running in node fake-node, err: daemonset pod not found in running state in node fake-node
PVC velero/fake-backup, phase Pending, binding to
@@ -1554,7 +1555,7 @@ end diagnose CSI exposer`,
	&backupVSWithoutVSC,
},
expected: `begin diagnose CSI exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod velero/fake-backup, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-backup, phase Pending, binding to
VS velero/fake-backup, bind to , readyToUse false, errMessage
@@ -1572,7 +1573,7 @@ end diagnose CSI exposer`,
	&backupVSWithoutVSC,
},
expected: `begin diagnose CSI exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod velero/fake-backup, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-backup, phase Pending, binding to fake-pv
error getting backup pv fake-pv, err: persistentvolumes "fake-pv" not found
@@ -1592,7 +1593,7 @@ end diagnose CSI exposer`,
	&backupVSWithoutVSC,
},
expected: `begin diagnose CSI exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod velero/fake-backup, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-backup, phase Pending, binding to fake-pv
PV fake-pv, phase Pending, reason , message fake-pv-message
@@ -1612,7 +1613,7 @@ end diagnose CSI exposer`,
	&backupVSWithVSC,
},
expected: `begin diagnose CSI exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod velero/fake-backup, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-backup, phase Pending, binding to fake-pv
PV fake-pv, phase Pending, reason , message fake-pv-message
@@ -1634,7 +1635,7 @@ end diagnose CSI exposer`,
	&backupVSC,
},
expected: `begin diagnose CSI exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod velero/fake-backup, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-backup, phase Pending, binding to fake-pv
PV fake-pv, phase Pending, reason , message fake-pv-message
@@ -1698,7 +1699,7 @@ end diagnose CSI exposer`,
	&backupVSC,
},
expected: `begin diagnose CSI exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod velero/fake-backup, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
Pod event reason reason-2, message message-2
Pod event reason reason-6, message message-6

@@ -664,6 +664,7 @@ func Test_ReastoreDiagnoseExpose(t *testing.T) {
				Message: "fake-pod-message",
			},
		},
		Message: "fake-pod-message-1",
	},
}

@@ -815,7 +816,7 @@ end diagnose restore exposer`,
	&restorePVCWithoutVolumeName,
},
expected: `begin diagnose restore exposer
Pod velero/fake-restore, phase Pending, node name
Pod velero/fake-restore, phase Pending, node name , message fake-pod-message-1
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-restore, phase Pending, binding to
end diagnose restore exposer`,
@@ -828,7 +829,7 @@ end diagnose restore exposer`,
	&restorePVCWithoutVolumeName,
},
expected: `begin diagnose restore exposer
Pod velero/fake-restore, phase Pending, node name
Pod velero/fake-restore, phase Pending, node name , message fake-pod-message-1
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-restore, phase Pending, binding to
end diagnose restore exposer`,
@@ -841,7 +842,7 @@ end diagnose restore exposer`,
	&restorePVCWithoutVolumeName,
},
expected: `begin diagnose restore exposer
Pod velero/fake-restore, phase Pending, node name fake-node
Pod velero/fake-restore, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
node-agent is not running in node fake-node, err: daemonset pod not found in running state in node fake-node
PVC velero/fake-restore, phase Pending, binding to
@@ -856,7 +857,7 @@ end diagnose restore exposer`,
	&nodeAgentPod,
},
expected: `begin diagnose restore exposer
Pod velero/fake-restore, phase Pending, node name fake-node
Pod velero/fake-restore, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-restore, phase Pending, binding to
end diagnose restore exposer`,
@@ -870,7 +871,7 @@ end diagnose restore exposer`,
	&nodeAgentPod,
},
expected: `begin diagnose restore exposer
Pod velero/fake-restore, phase Pending, node name fake-node
Pod velero/fake-restore, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-restore, phase Pending, binding to fake-pv
error getting restore pv fake-pv, err: persistentvolumes "fake-pv" not found
@@ -886,7 +887,7 @@ end diagnose restore exposer`,
	&nodeAgentPod,
},
expected: `begin diagnose restore exposer
Pod velero/fake-restore, phase Pending, node name fake-node
Pod velero/fake-restore, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-restore, phase Pending, binding to fake-pv
PV fake-pv, phase Pending, reason , message fake-pv-message
@@ -902,7 +903,7 @@ end diagnose restore exposer`,
	&nodeAgentPod,
},
expected: `begin diagnose restore exposer
Pod velero/fake-restore, phase Pending, node name fake-node
Pod velero/fake-restore, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-restore, phase Pending, binding to fake-pv
error getting restore pv fake-pv, err: persistentvolumes "fake-pv" not found
@@ -922,7 +923,7 @@ end diagnose restore exposer`,
	&nodeAgentPod,
},
expected: `begin diagnose restore exposer
Pod velero/fake-restore, phase Pending, node name fake-node
Pod velero/fake-restore, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-restore, phase Pending, binding to fake-pv
PV fake-pv, phase Pending, reason , message fake-pv-message
@@ -975,7 +976,7 @@ end diagnose restore exposer`,
	},
},
expected: `begin diagnose restore exposer
Pod velero/fake-restore, phase Pending, node name fake-node
Pod velero/fake-restore, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
Pod event reason reason-2, message message-2
Pod event reason reason-5, message message-5

@@ -592,6 +592,7 @@ func TestPodVolumeDiagnoseExpose(t *testing.T) {
				Message: "fake-pod-message",
			},
		},
		Message: "fake-pod-message-1",
	},
}

@@ -691,7 +692,7 @@ end diagnose pod volume exposer`,
	&backupPodWithoutNodeName,
},
expected: `begin diagnose pod volume exposer
Pod velero/fake-backup, phase Pending, node name
Pod velero/fake-backup, phase Pending, node name , message fake-pod-message-1
Pod condition Initialized, status True, reason , message fake-pod-message
end diagnose pod volume exposer`,
},
@@ -702,7 +703,7 @@ end diagnose pod volume exposer`,
	&backupPodWithoutNodeName,
},
expected: `begin diagnose pod volume exposer
Pod velero/fake-backup, phase Pending, node name
Pod velero/fake-backup, phase Pending, node name , message fake-pod-message-1
Pod condition Initialized, status True, reason , message fake-pod-message
end diagnose pod volume exposer`,
},
@@ -713,7 +714,7 @@ end diagnose pod volume exposer`,
	&backupPodWithNodeName,
},
expected: `begin diagnose pod volume exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod velero/fake-backup, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
node-agent is not running in node fake-node, err: daemonset pod not found in running state in node fake-node
end diagnose pod volume exposer`,
@@ -726,7 +727,7 @@ end diagnose pod volume exposer`,
	&nodeAgentPod,
},
expected: `begin diagnose pod volume exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod velero/fake-backup, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
end diagnose pod volume exposer`,
},
@@ -739,7 +740,7 @@ end diagnose pod volume exposer`,
	&nodeAgentPod,
},
expected: `begin diagnose pod volume exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod velero/fake-backup, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-backup-cache, phase Pending, binding to fake-pv-cache
error getting cache pv fake-pv-cache, err: persistentvolumes "fake-pv-cache" not found
@@ -755,7 +756,7 @@ end diagnose pod volume exposer`,
	&nodeAgentPod,
},
expected: `begin diagnose pod volume exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod velero/fake-backup, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
PVC velero/fake-backup-cache, phase Pending, binding to fake-pv-cache
PV fake-pv-cache, phase Pending, reason , message fake-pv-message
@@ -797,7 +798,7 @@ end diagnose pod volume exposer`,
	},
},
expected: `begin diagnose pod volume exposer
Pod velero/fake-backup, phase Pending, node name fake-node
Pod velero/fake-backup, phase Pending, node name fake-node, message
Pod condition Initialized, status True, reason , message fake-pod-message
Pod event reason reason-2, message message-2
Pod event reason reason-4, message message-4

@@ -18,6 +18,7 @@ limitations under the License.

import (
	"context"
	"io"

	"github.com/kopia/kopia/repo/logging"
	"github.com/sirupsen/logrus"
@@ -30,6 +31,10 @@ type kopiaLog struct {
	logger logrus.FieldLogger
}

type repoLog struct {
	logger logrus.FieldLogger
}

// SetupKopiaLog sets the Kopia log handler to the specific context, Kopia modules
// call the logger in the context to write logs
func SetupKopiaLog(ctx context.Context, logger logrus.FieldLogger) context.Context {
@@ -39,6 +44,10 @@ func SetupKopiaLog(ctx context.Context, logger logrus.FieldLogger) context.Conte
	})
}

func RepositoryLogger(logger logrus.FieldLogger) io.Writer {
	return &repoLog{logger: logger}
}

// Enabled decides whether a given logging level is enabled when logging a message
func (kl *kopiaLog) Enabled(level zapcore.Level) bool {
	entry := kl.logger.WithField("null", "null")
@@ -160,3 +169,9 @@ func (kl *kopiaLog) logrusFieldsForWrite(ent zapcore.Entry, fields []zapcore.Fie

	return copied
}

func (rl *repoLog) Write(p []byte) (int, error) {
	rl.logger.Debug(string(p))

	return len(p), nil
}

@@ -671,7 +671,8 @@ func buildJob(
	}

	if config != nil && len(config.LoadAffinities) > 0 {
		affinity := kube.ToSystemAffinity(config.LoadAffinities)
		// Maintenance job only takes the first loadAffinity.
		affinity := kube.ToSystemAffinity([]*kube.LoadAffinity{config.LoadAffinities[0]})
		job.Spec.Template.Spec.Affinity = affinity
	}


@@ -1,669 +0,0 @@
// Code generated by mockery v2.53.2. DO NOT EDIT.

package mocks

import (
	blob "github.com/kopia/kopia/repo/blob"
	content "github.com/kopia/kopia/repo/content"

	context "context"

	format "github.com/kopia/kopia/repo/format"

	index "github.com/kopia/kopia/repo/content/index"

	indexblob "github.com/kopia/kopia/repo/content/indexblob"

	manifest "github.com/kopia/kopia/repo/manifest"

	mock "github.com/stretchr/testify/mock"

	object "github.com/kopia/kopia/repo/object"

	repo "github.com/kopia/kopia/repo"

	throttling "github.com/kopia/kopia/repo/blob/throttling"

	time "time"
)

// DirectRepository is an autogenerated mock type for the DirectRepository type
type DirectRepository struct {
	mock.Mock
}

// AlsoLogToContentLog provides a mock function with given fields: ctx
func (_m *DirectRepository) AlsoLogToContentLog(ctx context.Context) context.Context {
	ret := _m.Called(ctx)

	if len(ret) == 0 {
		panic("no return value specified for AlsoLogToContentLog")
	}

	var r0 context.Context
	if rf, ok := ret.Get(0).(func(context.Context) context.Context); ok {
		r0 = rf(ctx)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(context.Context)
		}
	}

	return r0
}

// BlobReader provides a mock function with no fields
func (_m *DirectRepository) BlobReader() blob.Reader {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for BlobReader")
	}

	var r0 blob.Reader
	if rf, ok := ret.Get(0).(func() blob.Reader); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(blob.Reader)
		}
	}

	return r0
}

// BlobVolume provides a mock function with no fields
func (_m *DirectRepository) BlobVolume() blob.Volume {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for BlobVolume")
	}

	var r0 blob.Volume
	if rf, ok := ret.Get(0).(func() blob.Volume); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(blob.Volume)
		}
	}

	return r0
}

// ClientOptions provides a mock function with no fields
func (_m *DirectRepository) ClientOptions() repo.ClientOptions {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for ClientOptions")
	}

	var r0 repo.ClientOptions
	if rf, ok := ret.Get(0).(func() repo.ClientOptions); ok {
		r0 = rf()
	} else {
		r0 = ret.Get(0).(repo.ClientOptions)
	}

	return r0
}

// Close provides a mock function with given fields: ctx
func (_m *DirectRepository) Close(ctx context.Context) error {
	ret := _m.Called(ctx)

	if len(ret) == 0 {
		panic("no return value specified for Close")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func(context.Context) error); ok {
		r0 = rf(ctx)
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// ConfigFilename provides a mock function with no fields
|
||||
func (_m *DirectRepository) ConfigFilename() string {
|
||||
ret := _m.Called()
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for ConfigFilename")
|
||||
}
|
||||
|
||||
var r0 string
|
||||
if rf, ok := ret.Get(0).(func() string); ok {
|
||||
r0 = rf()
|
||||
} else {
|
||||
r0 = ret.Get(0).(string)
|
||||
}
|
||||
|
||||
return r0
|
||||
}
|
||||
|
||||
// ContentInfo provides a mock function with given fields: ctx, contentID
|
||||
func (_m *DirectRepository) ContentInfo(ctx context.Context, contentID index.ID) (index.Info, error) {
|
||||
ret := _m.Called(ctx, contentID)
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for ContentInfo")
|
||||
}
|
||||
|
||||
var r0 index.Info
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(0).(func(context.Context, index.ID) (index.Info, error)); ok {
|
||||
return rf(ctx, contentID)
|
||||
}
|
||||
if rf, ok := ret.Get(0).(func(context.Context, index.ID) index.Info); ok {
|
||||
r0 = rf(ctx, contentID)
|
||||
} else {
|
||||
r0 = ret.Get(0).(index.Info)
|
||||
}
|
||||
|
||||
if rf, ok := ret.Get(1).(func(context.Context, index.ID) error); ok {
|
||||
r1 = rf(ctx, contentID)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// ContentReader provides a mock function with no fields
|
||||
func (_m *DirectRepository) ContentReader() content.Reader {
|
||||
ret := _m.Called()
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for ContentReader")
|
||||
}
|
||||
|
||||
var r0 content.Reader
|
||||
if rf, ok := ret.Get(0).(func() content.Reader); ok {
|
||||
r0 = rf()
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(content.Reader)
|
||||
}
|
||||
}
|
||||
|
||||
return r0
|
||||
}
|
||||
|
||||
// DeriveKey provides a mock function with given fields: purpose, keyLength
func (_m *DirectRepository) DeriveKey(purpose []byte, keyLength int) ([]byte, error) {
	ret := _m.Called(purpose, keyLength)

	if len(ret) == 0 {
		panic("no return value specified for DeriveKey")
	}

	var r0 []byte
	var r1 error
	if rf, ok := ret.Get(0).(func([]byte, int) ([]byte, error)); ok {
		return rf(purpose, keyLength)
	}
	if rf, ok := ret.Get(0).(func([]byte, int) []byte); ok {
		r0 = rf(purpose, keyLength)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]byte)
		}
	}

	if rf, ok := ret.Get(1).(func([]byte, int) error); ok {
		r1 = rf(purpose, keyLength)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// DisableIndexRefresh provides a mock function with no fields
func (_m *DirectRepository) DisableIndexRefresh() {
	_m.Called()
}

// FindManifests provides a mock function with given fields: ctx, labels
func (_m *DirectRepository) FindManifests(ctx context.Context, labels map[string]string) ([]*manifest.EntryMetadata, error) {
	ret := _m.Called(ctx, labels)

	if len(ret) == 0 {
		panic("no return value specified for FindManifests")
	}

	var r0 []*manifest.EntryMetadata
	var r1 error
	if rf, ok := ret.Get(0).(func(context.Context, map[string]string) ([]*manifest.EntryMetadata, error)); ok {
		return rf(ctx, labels)
	}
	if rf, ok := ret.Get(0).(func(context.Context, map[string]string) []*manifest.EntryMetadata); ok {
		r0 = rf(ctx, labels)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]*manifest.EntryMetadata)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, map[string]string) error); ok {
		r1 = rf(ctx, labels)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// FormatManager provides a mock function with no fields
func (_m *DirectRepository) FormatManager() *format.Manager {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for FormatManager")
	}

	var r0 *format.Manager
	if rf, ok := ret.Get(0).(func() *format.Manager); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(*format.Manager)
		}
	}

	return r0
}

// GetManifest provides a mock function with given fields: ctx, id, data
func (_m *DirectRepository) GetManifest(ctx context.Context, id manifest.ID, data interface{}) (*manifest.EntryMetadata, error) {
	ret := _m.Called(ctx, id, data)

	if len(ret) == 0 {
		panic("no return value specified for GetManifest")
	}

	var r0 *manifest.EntryMetadata
	var r1 error
	if rf, ok := ret.Get(0).(func(context.Context, manifest.ID, interface{}) (*manifest.EntryMetadata, error)); ok {
		return rf(ctx, id, data)
	}
	if rf, ok := ret.Get(0).(func(context.Context, manifest.ID, interface{}) *manifest.EntryMetadata); ok {
		r0 = rf(ctx, id, data)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(*manifest.EntryMetadata)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, manifest.ID, interface{}) error); ok {
		r1 = rf(ctx, id, data)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// IndexBlobs provides a mock function with given fields: ctx, includeInactive
func (_m *DirectRepository) IndexBlobs(ctx context.Context, includeInactive bool) ([]indexblob.Metadata, error) {
	ret := _m.Called(ctx, includeInactive)

	if len(ret) == 0 {
		panic("no return value specified for IndexBlobs")
	}

	var r0 []indexblob.Metadata
	var r1 error
	if rf, ok := ret.Get(0).(func(context.Context, bool) ([]indexblob.Metadata, error)); ok {
		return rf(ctx, includeInactive)
	}
	if rf, ok := ret.Get(0).(func(context.Context, bool) []indexblob.Metadata); ok {
		r0 = rf(ctx, includeInactive)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]indexblob.Metadata)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, bool) error); ok {
		r1 = rf(ctx, includeInactive)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// NewDirectWriter provides a mock function with given fields: ctx, opt
func (_m *DirectRepository) NewDirectWriter(ctx context.Context, opt repo.WriteSessionOptions) (context.Context, repo.DirectRepositoryWriter, error) {
	ret := _m.Called(ctx, opt)

	if len(ret) == 0 {
		panic("no return value specified for NewDirectWriter")
	}

	var r0 context.Context
	var r1 repo.DirectRepositoryWriter
	var r2 error
	if rf, ok := ret.Get(0).(func(context.Context, repo.WriteSessionOptions) (context.Context, repo.DirectRepositoryWriter, error)); ok {
		return rf(ctx, opt)
	}
	if rf, ok := ret.Get(0).(func(context.Context, repo.WriteSessionOptions) context.Context); ok {
		r0 = rf(ctx, opt)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(context.Context)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, repo.WriteSessionOptions) repo.DirectRepositoryWriter); ok {
		r1 = rf(ctx, opt)
	} else {
		if ret.Get(1) != nil {
			r1 = ret.Get(1).(repo.DirectRepositoryWriter)
		}
	}

	if rf, ok := ret.Get(2).(func(context.Context, repo.WriteSessionOptions) error); ok {
		r2 = rf(ctx, opt)
	} else {
		r2 = ret.Error(2)
	}

	return r0, r1, r2
}

// NewWriter provides a mock function with given fields: ctx, opt
func (_m *DirectRepository) NewWriter(ctx context.Context, opt repo.WriteSessionOptions) (context.Context, repo.RepositoryWriter, error) {
	ret := _m.Called(ctx, opt)

	if len(ret) == 0 {
		panic("no return value specified for NewWriter")
	}

	var r0 context.Context
	var r1 repo.RepositoryWriter
	var r2 error
	if rf, ok := ret.Get(0).(func(context.Context, repo.WriteSessionOptions) (context.Context, repo.RepositoryWriter, error)); ok {
		return rf(ctx, opt)
	}
	if rf, ok := ret.Get(0).(func(context.Context, repo.WriteSessionOptions) context.Context); ok {
		r0 = rf(ctx, opt)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(context.Context)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, repo.WriteSessionOptions) repo.RepositoryWriter); ok {
		r1 = rf(ctx, opt)
	} else {
		if ret.Get(1) != nil {
			r1 = ret.Get(1).(repo.RepositoryWriter)
		}
	}

	if rf, ok := ret.Get(2).(func(context.Context, repo.WriteSessionOptions) error); ok {
		r2 = rf(ctx, opt)
	} else {
		r2 = ret.Error(2)
	}

	return r0, r1, r2
}

// ObjectFormat provides a mock function with no fields
func (_m *DirectRepository) ObjectFormat() format.ObjectFormat {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for ObjectFormat")
	}

	var r0 format.ObjectFormat
	if rf, ok := ret.Get(0).(func() format.ObjectFormat); ok {
		r0 = rf()
	} else {
		r0 = ret.Get(0).(format.ObjectFormat)
	}

	return r0
}

// OpenObject provides a mock function with given fields: ctx, id
func (_m *DirectRepository) OpenObject(ctx context.Context, id object.ID) (object.Reader, error) {
	ret := _m.Called(ctx, id)

	if len(ret) == 0 {
		panic("no return value specified for OpenObject")
	}

	var r0 object.Reader
	var r1 error
	if rf, ok := ret.Get(0).(func(context.Context, object.ID) (object.Reader, error)); ok {
		return rf(ctx, id)
	}
	if rf, ok := ret.Get(0).(func(context.Context, object.ID) object.Reader); ok {
		r0 = rf(ctx, id)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(object.Reader)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, object.ID) error); ok {
		r1 = rf(ctx, id)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// PrefetchContents provides a mock function with given fields: ctx, contentIDs, hint
func (_m *DirectRepository) PrefetchContents(ctx context.Context, contentIDs []index.ID, hint string) []index.ID {
	ret := _m.Called(ctx, contentIDs, hint)

	if len(ret) == 0 {
		panic("no return value specified for PrefetchContents")
	}

	var r0 []index.ID
	if rf, ok := ret.Get(0).(func(context.Context, []index.ID, string) []index.ID); ok {
		r0 = rf(ctx, contentIDs, hint)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]index.ID)
		}
	}

	return r0
}

// PrefetchObjects provides a mock function with given fields: ctx, objectIDs, hint
func (_m *DirectRepository) PrefetchObjects(ctx context.Context, objectIDs []object.ID, hint string) ([]index.ID, error) {
	ret := _m.Called(ctx, objectIDs, hint)

	if len(ret) == 0 {
		panic("no return value specified for PrefetchObjects")
	}

	var r0 []index.ID
	var r1 error
	if rf, ok := ret.Get(0).(func(context.Context, []object.ID, string) ([]index.ID, error)); ok {
		return rf(ctx, objectIDs, hint)
	}
	if rf, ok := ret.Get(0).(func(context.Context, []object.ID, string) []index.ID); ok {
		r0 = rf(ctx, objectIDs, hint)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]index.ID)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, []object.ID, string) error); ok {
		r1 = rf(ctx, objectIDs, hint)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// Refresh provides a mock function with given fields: ctx
func (_m *DirectRepository) Refresh(ctx context.Context) error {
	ret := _m.Called(ctx)

	if len(ret) == 0 {
		panic("no return value specified for Refresh")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func(context.Context) error); ok {
		r0 = rf(ctx)
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// Throttler provides a mock function with no fields
func (_m *DirectRepository) Throttler() throttling.SettableThrottler {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for Throttler")
	}

	var r0 throttling.SettableThrottler
	if rf, ok := ret.Get(0).(func() throttling.SettableThrottler); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(throttling.SettableThrottler)
		}
	}

	return r0
}

// Time provides a mock function with no fields
func (_m *DirectRepository) Time() time.Time {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for Time")
	}

	var r0 time.Time
	if rf, ok := ret.Get(0).(func() time.Time); ok {
		r0 = rf()
	} else {
		r0 = ret.Get(0).(time.Time)
	}

	return r0
}

// Token provides a mock function with given fields: password
func (_m *DirectRepository) Token(password string) (string, error) {
	ret := _m.Called(password)

	if len(ret) == 0 {
		panic("no return value specified for Token")
	}

	var r0 string
	var r1 error
	if rf, ok := ret.Get(0).(func(string) (string, error)); ok {
		return rf(password)
	}
	if rf, ok := ret.Get(0).(func(string) string); ok {
		r0 = rf(password)
	} else {
		r0 = ret.Get(0).(string)
	}

	if rf, ok := ret.Get(1).(func(string) error); ok {
		r1 = rf(password)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// UniqueID provides a mock function with no fields
func (_m *DirectRepository) UniqueID() []byte {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for UniqueID")
	}

	var r0 []byte
	if rf, ok := ret.Get(0).(func() []byte); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]byte)
		}
	}

	return r0
}

// UpdateDescription provides a mock function with given fields: d
func (_m *DirectRepository) UpdateDescription(d string) {
	_m.Called(d)
}

// VerifyObject provides a mock function with given fields: ctx, id
func (_m *DirectRepository) VerifyObject(ctx context.Context, id object.ID) ([]index.ID, error) {
	ret := _m.Called(ctx, id)

	if len(ret) == 0 {
		panic("no return value specified for VerifyObject")
	}

	var r0 []index.ID
	var r1 error
	if rf, ok := ret.Get(0).(func(context.Context, object.ID) ([]index.ID, error)); ok {
		return rf(ctx, id)
	}
	if rf, ok := ret.Get(0).(func(context.Context, object.ID) []index.ID); ok {
		r0 = rf(ctx, id)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]index.ID)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, object.ID) error); ok {
		r1 = rf(ctx, id)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// NewDirectRepository creates a new instance of DirectRepository. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewDirectRepository(t interface {
	mock.TestingT
	Cleanup(func())
}) *DirectRepository {
	mock := &DirectRepository{}
	mock.Mock.Test(t)

	t.Cleanup(func() { mock.AssertExpectations(t) })

	return mock
}
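Every generated method above resolves its configured return values the same way: the entry stored by `_m.Called()` is either a concrete value or a function, and the mock type-asserts at call time to decide which. A minimal self-contained sketch of that resolution pattern (the names `ret` and `resolveString` here are illustrative, not part of the generated file or of mockery's API):

```go
package main

import "fmt"

// ret stands in for mockery's return-value slice: each slot holds either a
// static value or a func that computes the value when the mock is called.
type ret []interface{}

// resolveString mirrors the generated pattern:
//	if rf, ok := ret.Get(0).(func() string); ok { r0 = rf() }
//	else { r0 = ret.Get(0).(string) }
func resolveString(r ret) string {
	if rf, ok := r[0].(func() string); ok {
		return rf() // caller registered a function: compute lazily
	}
	return r[0].(string) // caller registered a static value
}

func main() {
	fmt.Println(resolveString(ret{"static"}))
	fmt.Println(resolveString(ret{func() string { return "computed" }}))
}
```

This is why the generated code always tries the function type-assertion first and only then falls back to the plain value (with a nil check for reference types).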
@@ -1,854 +0,0 @@
// Code generated by mockery v2.53.2. DO NOT EDIT.

package mocks

import (
	blob "github.com/kopia/kopia/repo/blob"
	content "github.com/kopia/kopia/repo/content"

	context "context"

	format "github.com/kopia/kopia/repo/format"

	index "github.com/kopia/kopia/repo/content/index"

	indexblob "github.com/kopia/kopia/repo/content/indexblob"

	manifest "github.com/kopia/kopia/repo/manifest"

	mock "github.com/stretchr/testify/mock"

	object "github.com/kopia/kopia/repo/object"

	repo "github.com/kopia/kopia/repo"

	throttling "github.com/kopia/kopia/repo/blob/throttling"

	time "time"
)

// DirectRepositoryWriter is an autogenerated mock type for the DirectRepositoryWriter type
type DirectRepositoryWriter struct {
	mock.Mock
}

// AlsoLogToContentLog provides a mock function with given fields: ctx
func (_m *DirectRepositoryWriter) AlsoLogToContentLog(ctx context.Context) context.Context {
	ret := _m.Called(ctx)

	if len(ret) == 0 {
		panic("no return value specified for AlsoLogToContentLog")
	}

	var r0 context.Context
	if rf, ok := ret.Get(0).(func(context.Context) context.Context); ok {
		r0 = rf(ctx)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(context.Context)
		}
	}

	return r0
}

// BlobReader provides a mock function with no fields
func (_m *DirectRepositoryWriter) BlobReader() blob.Reader {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for BlobReader")
	}

	var r0 blob.Reader
	if rf, ok := ret.Get(0).(func() blob.Reader); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(blob.Reader)
		}
	}

	return r0
}

// BlobStorage provides a mock function with no fields
func (_m *DirectRepositoryWriter) BlobStorage() blob.Storage {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for BlobStorage")
	}

	var r0 blob.Storage
	if rf, ok := ret.Get(0).(func() blob.Storage); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(blob.Storage)
		}
	}

	return r0
}

// BlobVolume provides a mock function with no fields
func (_m *DirectRepositoryWriter) BlobVolume() blob.Volume {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for BlobVolume")
	}

	var r0 blob.Volume
	if rf, ok := ret.Get(0).(func() blob.Volume); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(blob.Volume)
		}
	}

	return r0
}

// ClientOptions provides a mock function with no fields
func (_m *DirectRepositoryWriter) ClientOptions() repo.ClientOptions {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for ClientOptions")
	}

	var r0 repo.ClientOptions
	if rf, ok := ret.Get(0).(func() repo.ClientOptions); ok {
		r0 = rf()
	} else {
		r0 = ret.Get(0).(repo.ClientOptions)
	}

	return r0
}

// Close provides a mock function with given fields: ctx
func (_m *DirectRepositoryWriter) Close(ctx context.Context) error {
	ret := _m.Called(ctx)

	if len(ret) == 0 {
		panic("no return value specified for Close")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func(context.Context) error); ok {
		r0 = rf(ctx)
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// ConcatenateObjects provides a mock function with given fields: ctx, objectIDs, opt
func (_m *DirectRepositoryWriter) ConcatenateObjects(ctx context.Context, objectIDs []object.ID, opt repo.ConcatenateOptions) (object.ID, error) {
	ret := _m.Called(ctx, objectIDs, opt)

	if len(ret) == 0 {
		panic("no return value specified for ConcatenateObjects")
	}

	var r0 object.ID
	var r1 error
	if rf, ok := ret.Get(0).(func(context.Context, []object.ID, repo.ConcatenateOptions) (object.ID, error)); ok {
		return rf(ctx, objectIDs, opt)
	}
	if rf, ok := ret.Get(0).(func(context.Context, []object.ID, repo.ConcatenateOptions) object.ID); ok {
		r0 = rf(ctx, objectIDs, opt)
	} else {
		r0 = ret.Get(0).(object.ID)
	}

	if rf, ok := ret.Get(1).(func(context.Context, []object.ID, repo.ConcatenateOptions) error); ok {
		r1 = rf(ctx, objectIDs, opt)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// ConfigFilename provides a mock function with no fields
func (_m *DirectRepositoryWriter) ConfigFilename() string {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for ConfigFilename")
	}

	var r0 string
	if rf, ok := ret.Get(0).(func() string); ok {
		r0 = rf()
	} else {
		r0 = ret.Get(0).(string)
	}

	return r0
}

// ContentInfo provides a mock function with given fields: ctx, contentID
func (_m *DirectRepositoryWriter) ContentInfo(ctx context.Context, contentID index.ID) (index.Info, error) {
	ret := _m.Called(ctx, contentID)

	if len(ret) == 0 {
		panic("no return value specified for ContentInfo")
	}

	var r0 index.Info
	var r1 error
	if rf, ok := ret.Get(0).(func(context.Context, index.ID) (index.Info, error)); ok {
		return rf(ctx, contentID)
	}
	if rf, ok := ret.Get(0).(func(context.Context, index.ID) index.Info); ok {
		r0 = rf(ctx, contentID)
	} else {
		r0 = ret.Get(0).(index.Info)
	}

	if rf, ok := ret.Get(1).(func(context.Context, index.ID) error); ok {
		r1 = rf(ctx, contentID)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// ContentManager provides a mock function with no fields
func (_m *DirectRepositoryWriter) ContentManager() *content.WriteManager {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for ContentManager")
	}

	var r0 *content.WriteManager
	if rf, ok := ret.Get(0).(func() *content.WriteManager); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(*content.WriteManager)
		}
	}

	return r0
}

// ContentReader provides a mock function with no fields
func (_m *DirectRepositoryWriter) ContentReader() content.Reader {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for ContentReader")
	}

	var r0 content.Reader
	if rf, ok := ret.Get(0).(func() content.Reader); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(content.Reader)
		}
	}

	return r0
}

// DeleteManifest provides a mock function with given fields: ctx, id
func (_m *DirectRepositoryWriter) DeleteManifest(ctx context.Context, id manifest.ID) error {
	ret := _m.Called(ctx, id)

	if len(ret) == 0 {
		panic("no return value specified for DeleteManifest")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func(context.Context, manifest.ID) error); ok {
		r0 = rf(ctx, id)
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// DeriveKey provides a mock function with given fields: purpose, keyLength
func (_m *DirectRepositoryWriter) DeriveKey(purpose []byte, keyLength int) ([]byte, error) {
	ret := _m.Called(purpose, keyLength)

	if len(ret) == 0 {
		panic("no return value specified for DeriveKey")
	}

	var r0 []byte
	var r1 error
	if rf, ok := ret.Get(0).(func([]byte, int) ([]byte, error)); ok {
		return rf(purpose, keyLength)
	}
	if rf, ok := ret.Get(0).(func([]byte, int) []byte); ok {
		r0 = rf(purpose, keyLength)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]byte)
		}
	}

	if rf, ok := ret.Get(1).(func([]byte, int) error); ok {
		r1 = rf(purpose, keyLength)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// DisableIndexRefresh provides a mock function with no fields
func (_m *DirectRepositoryWriter) DisableIndexRefresh() {
	_m.Called()
}

// FindManifests provides a mock function with given fields: ctx, labels
func (_m *DirectRepositoryWriter) FindManifests(ctx context.Context, labels map[string]string) ([]*manifest.EntryMetadata, error) {
	ret := _m.Called(ctx, labels)

	if len(ret) == 0 {
		panic("no return value specified for FindManifests")
	}

	var r0 []*manifest.EntryMetadata
	var r1 error
	if rf, ok := ret.Get(0).(func(context.Context, map[string]string) ([]*manifest.EntryMetadata, error)); ok {
		return rf(ctx, labels)
	}
	if rf, ok := ret.Get(0).(func(context.Context, map[string]string) []*manifest.EntryMetadata); ok {
		r0 = rf(ctx, labels)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]*manifest.EntryMetadata)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, map[string]string) error); ok {
		r1 = rf(ctx, labels)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// Flush provides a mock function with given fields: ctx
func (_m *DirectRepositoryWriter) Flush(ctx context.Context) error {
	ret := _m.Called(ctx)

	if len(ret) == 0 {
		panic("no return value specified for Flush")
	}

	var r0 error
	if rf, ok := ret.Get(0).(func(context.Context) error); ok {
		r0 = rf(ctx)
	} else {
		r0 = ret.Error(0)
	}

	return r0
}

// FormatManager provides a mock function with no fields
func (_m *DirectRepositoryWriter) FormatManager() *format.Manager {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for FormatManager")
	}

	var r0 *format.Manager
	if rf, ok := ret.Get(0).(func() *format.Manager); ok {
		r0 = rf()
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(*format.Manager)
		}
	}

	return r0
}

// GetManifest provides a mock function with given fields: ctx, id, data
func (_m *DirectRepositoryWriter) GetManifest(ctx context.Context, id manifest.ID, data interface{}) (*manifest.EntryMetadata, error) {
	ret := _m.Called(ctx, id, data)

	if len(ret) == 0 {
		panic("no return value specified for GetManifest")
	}

	var r0 *manifest.EntryMetadata
	var r1 error
	if rf, ok := ret.Get(0).(func(context.Context, manifest.ID, interface{}) (*manifest.EntryMetadata, error)); ok {
		return rf(ctx, id, data)
	}
	if rf, ok := ret.Get(0).(func(context.Context, manifest.ID, interface{}) *manifest.EntryMetadata); ok {
		r0 = rf(ctx, id, data)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(*manifest.EntryMetadata)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, manifest.ID, interface{}) error); ok {
		r1 = rf(ctx, id, data)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// IndexBlobs provides a mock function with given fields: ctx, includeInactive
func (_m *DirectRepositoryWriter) IndexBlobs(ctx context.Context, includeInactive bool) ([]indexblob.Metadata, error) {
	ret := _m.Called(ctx, includeInactive)

	if len(ret) == 0 {
		panic("no return value specified for IndexBlobs")
	}

	var r0 []indexblob.Metadata
	var r1 error
	if rf, ok := ret.Get(0).(func(context.Context, bool) ([]indexblob.Metadata, error)); ok {
		return rf(ctx, includeInactive)
	}
	if rf, ok := ret.Get(0).(func(context.Context, bool) []indexblob.Metadata); ok {
		r0 = rf(ctx, includeInactive)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]indexblob.Metadata)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, bool) error); ok {
		r1 = rf(ctx, includeInactive)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// NewDirectWriter provides a mock function with given fields: ctx, opt
func (_m *DirectRepositoryWriter) NewDirectWriter(ctx context.Context, opt repo.WriteSessionOptions) (context.Context, repo.DirectRepositoryWriter, error) {
	ret := _m.Called(ctx, opt)

	if len(ret) == 0 {
		panic("no return value specified for NewDirectWriter")
	}

	var r0 context.Context
	var r1 repo.DirectRepositoryWriter
	var r2 error
	if rf, ok := ret.Get(0).(func(context.Context, repo.WriteSessionOptions) (context.Context, repo.DirectRepositoryWriter, error)); ok {
		return rf(ctx, opt)
	}
	if rf, ok := ret.Get(0).(func(context.Context, repo.WriteSessionOptions) context.Context); ok {
		r0 = rf(ctx, opt)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(context.Context)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, repo.WriteSessionOptions) repo.DirectRepositoryWriter); ok {
		r1 = rf(ctx, opt)
	} else {
		if ret.Get(1) != nil {
			r1 = ret.Get(1).(repo.DirectRepositoryWriter)
		}
	}

	if rf, ok := ret.Get(2).(func(context.Context, repo.WriteSessionOptions) error); ok {
		r2 = rf(ctx, opt)
	} else {
		r2 = ret.Error(2)
	}

	return r0, r1, r2
}

// NewObjectWriter provides a mock function with given fields: ctx, opt
func (_m *DirectRepositoryWriter) NewObjectWriter(ctx context.Context, opt object.WriterOptions) object.Writer {
	ret := _m.Called(ctx, opt)

	if len(ret) == 0 {
		panic("no return value specified for NewObjectWriter")
	}

	var r0 object.Writer
	if rf, ok := ret.Get(0).(func(context.Context, object.WriterOptions) object.Writer); ok {
		r0 = rf(ctx, opt)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(object.Writer)
		}
	}

	return r0
}

// NewWriter provides a mock function with given fields: ctx, opt
func (_m *DirectRepositoryWriter) NewWriter(ctx context.Context, opt repo.WriteSessionOptions) (context.Context, repo.RepositoryWriter, error) {
	ret := _m.Called(ctx, opt)

	if len(ret) == 0 {
		panic("no return value specified for NewWriter")
	}

	var r0 context.Context
	var r1 repo.RepositoryWriter
	var r2 error
	if rf, ok := ret.Get(0).(func(context.Context, repo.WriteSessionOptions) (context.Context, repo.RepositoryWriter, error)); ok {
		return rf(ctx, opt)
	}
	if rf, ok := ret.Get(0).(func(context.Context, repo.WriteSessionOptions) context.Context); ok {
		r0 = rf(ctx, opt)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(context.Context)
		}
	}

	if rf, ok := ret.Get(1).(func(context.Context, repo.WriteSessionOptions) repo.RepositoryWriter); ok {
		r1 = rf(ctx, opt)
	} else {
		if ret.Get(1) != nil {
			r1 = ret.Get(1).(repo.RepositoryWriter)
		}
	}

	if rf, ok := ret.Get(2).(func(context.Context, repo.WriteSessionOptions) error); ok {
		r2 = rf(ctx, opt)
	} else {
		r2 = ret.Error(2)
	}

	return r0, r1, r2
}

// ObjectFormat provides a mock function with no fields
func (_m *DirectRepositoryWriter) ObjectFormat() format.ObjectFormat {
	ret := _m.Called()

	if len(ret) == 0 {
		panic("no return value specified for ObjectFormat")
	}

	var r0 format.ObjectFormat
	if rf, ok := ret.Get(0).(func() format.ObjectFormat); ok {
		r0 = rf()
	} else {
		r0 = ret.Get(0).(format.ObjectFormat)
	}

	return r0
}

// OnSuccessfulFlush provides a mock function with given fields: callback
func (_m *DirectRepositoryWriter) OnSuccessfulFlush(callback repo.RepositoryWriterCallback) {
	_m.Called(callback)
}

// OpenObject provides a mock function with given fields: ctx, id
func (_m *DirectRepositoryWriter) OpenObject(ctx context.Context, id object.ID) (object.Reader, error) {
|
||||
ret := _m.Called(ctx, id)
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for OpenObject")
|
||||
}
|
||||
|
||||
var r0 object.Reader
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(0).(func(context.Context, object.ID) (object.Reader, error)); ok {
|
||||
return rf(ctx, id)
|
||||
}
|
||||
if rf, ok := ret.Get(0).(func(context.Context, object.ID) object.Reader); ok {
|
||||
r0 = rf(ctx, id)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(object.Reader)
|
||||
}
|
||||
}
|
||||
|
||||
if rf, ok := ret.Get(1).(func(context.Context, object.ID) error); ok {
|
||||
r1 = rf(ctx, id)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// PrefetchContents provides a mock function with given fields: ctx, contentIDs, hint
|
||||
func (_m *DirectRepositoryWriter) PrefetchContents(ctx context.Context, contentIDs []index.ID, hint string) []index.ID {
|
||||
ret := _m.Called(ctx, contentIDs, hint)
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for PrefetchContents")
|
||||
}
|
||||
|
||||
var r0 []index.ID
|
||||
if rf, ok := ret.Get(0).(func(context.Context, []index.ID, string) []index.ID); ok {
|
||||
r0 = rf(ctx, contentIDs, hint)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).([]index.ID)
|
||||
}
|
||||
}
|
||||
|
||||
return r0
|
||||
}
|
||||
|
||||
// PrefetchObjects provides a mock function with given fields: ctx, objectIDs, hint
|
||||
func (_m *DirectRepositoryWriter) PrefetchObjects(ctx context.Context, objectIDs []object.ID, hint string) ([]index.ID, error) {
|
||||
ret := _m.Called(ctx, objectIDs, hint)
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for PrefetchObjects")
|
||||
}
|
||||
|
||||
var r0 []index.ID
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(0).(func(context.Context, []object.ID, string) ([]index.ID, error)); ok {
|
||||
return rf(ctx, objectIDs, hint)
|
||||
}
|
||||
if rf, ok := ret.Get(0).(func(context.Context, []object.ID, string) []index.ID); ok {
|
||||
r0 = rf(ctx, objectIDs, hint)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).([]index.ID)
|
||||
}
|
||||
}
|
||||
|
||||
if rf, ok := ret.Get(1).(func(context.Context, []object.ID, string) error); ok {
|
||||
r1 = rf(ctx, objectIDs, hint)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// PutManifest provides a mock function with given fields: ctx, labels, payload
|
||||
func (_m *DirectRepositoryWriter) PutManifest(ctx context.Context, labels map[string]string, payload interface{}) (manifest.ID, error) {
|
||||
ret := _m.Called(ctx, labels, payload)
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for PutManifest")
|
||||
}
|
||||
|
||||
var r0 manifest.ID
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(0).(func(context.Context, map[string]string, interface{}) (manifest.ID, error)); ok {
|
||||
return rf(ctx, labels, payload)
|
||||
}
|
||||
if rf, ok := ret.Get(0).(func(context.Context, map[string]string, interface{}) manifest.ID); ok {
|
||||
r0 = rf(ctx, labels, payload)
|
||||
} else {
|
||||
r0 = ret.Get(0).(manifest.ID)
|
||||
}
|
||||
|
||||
if rf, ok := ret.Get(1).(func(context.Context, map[string]string, interface{}) error); ok {
|
||||
r1 = rf(ctx, labels, payload)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// Refresh provides a mock function with given fields: ctx
|
||||
func (_m *DirectRepositoryWriter) Refresh(ctx context.Context) error {
|
||||
ret := _m.Called(ctx)
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for Refresh")
|
||||
}
|
||||
|
||||
var r0 error
|
||||
if rf, ok := ret.Get(0).(func(context.Context) error); ok {
|
||||
r0 = rf(ctx)
|
||||
} else {
|
||||
r0 = ret.Error(0)
|
||||
}
|
||||
|
||||
return r0
|
||||
}
|
||||
|
||||
// ReplaceManifests provides a mock function with given fields: ctx, labels, payload
|
||||
func (_m *DirectRepositoryWriter) ReplaceManifests(ctx context.Context, labels map[string]string, payload interface{}) (manifest.ID, error) {
|
||||
ret := _m.Called(ctx, labels, payload)
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for ReplaceManifests")
|
||||
}
|
||||
|
||||
var r0 manifest.ID
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(0).(func(context.Context, map[string]string, interface{}) (manifest.ID, error)); ok {
|
||||
return rf(ctx, labels, payload)
|
||||
}
|
||||
if rf, ok := ret.Get(0).(func(context.Context, map[string]string, interface{}) manifest.ID); ok {
|
||||
r0 = rf(ctx, labels, payload)
|
||||
} else {
|
||||
r0 = ret.Get(0).(manifest.ID)
|
||||
}
|
||||
|
||||
if rf, ok := ret.Get(1).(func(context.Context, map[string]string, interface{}) error); ok {
|
||||
r1 = rf(ctx, labels, payload)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// Throttler provides a mock function with no fields
|
||||
func (_m *DirectRepositoryWriter) Throttler() throttling.SettableThrottler {
|
||||
ret := _m.Called()
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for Throttler")
|
||||
}
|
||||
|
||||
var r0 throttling.SettableThrottler
|
||||
if rf, ok := ret.Get(0).(func() throttling.SettableThrottler); ok {
|
||||
r0 = rf()
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(throttling.SettableThrottler)
|
||||
}
|
||||
}
|
||||
|
||||
return r0
|
||||
}
|
||||
|
||||
// Time provides a mock function with no fields
|
||||
func (_m *DirectRepositoryWriter) Time() time.Time {
|
||||
ret := _m.Called()
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for Time")
|
||||
}
|
||||
|
||||
var r0 time.Time
|
||||
if rf, ok := ret.Get(0).(func() time.Time); ok {
|
||||
r0 = rf()
|
||||
} else {
|
||||
r0 = ret.Get(0).(time.Time)
|
||||
}
|
||||
|
||||
return r0
|
||||
}
|
||||
|
||||
// Token provides a mock function with given fields: password
|
||||
func (_m *DirectRepositoryWriter) Token(password string) (string, error) {
|
||||
ret := _m.Called(password)
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for Token")
|
||||
}
|
||||
|
||||
var r0 string
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(0).(func(string) (string, error)); ok {
|
||||
return rf(password)
|
||||
}
|
||||
if rf, ok := ret.Get(0).(func(string) string); ok {
|
||||
r0 = rf(password)
|
||||
} else {
|
||||
r0 = ret.Get(0).(string)
|
||||
}
|
||||
|
||||
if rf, ok := ret.Get(1).(func(string) error); ok {
|
||||
r1 = rf(password)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// UniqueID provides a mock function with no fields
|
||||
func (_m *DirectRepositoryWriter) UniqueID() []byte {
|
||||
ret := _m.Called()
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for UniqueID")
|
||||
}
|
||||
|
||||
var r0 []byte
|
||||
if rf, ok := ret.Get(0).(func() []byte); ok {
|
||||
r0 = rf()
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).([]byte)
|
||||
}
|
||||
}
|
||||
|
||||
return r0
|
||||
}
|
||||
|
||||
// UpdateDescription provides a mock function with given fields: d
|
||||
func (_m *DirectRepositoryWriter) UpdateDescription(d string) {
|
||||
_m.Called(d)
|
||||
}
|
||||
|
||||
// VerifyObject provides a mock function with given fields: ctx, id
|
||||
func (_m *DirectRepositoryWriter) VerifyObject(ctx context.Context, id object.ID) ([]index.ID, error) {
|
||||
ret := _m.Called(ctx, id)
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for VerifyObject")
|
||||
}
|
||||
|
||||
var r0 []index.ID
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(0).(func(context.Context, object.ID) ([]index.ID, error)); ok {
|
||||
return rf(ctx, id)
|
||||
}
|
||||
if rf, ok := ret.Get(0).(func(context.Context, object.ID) []index.ID); ok {
|
||||
r0 = rf(ctx, id)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).([]index.ID)
|
||||
}
|
||||
}
|
||||
|
||||
if rf, ok := ret.Get(1).(func(context.Context, object.ID) error); ok {
|
||||
r1 = rf(ctx, id)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// NewDirectRepositoryWriter creates a new instance of DirectRepositoryWriter. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
|
||||
// The first argument is typically a *testing.T value.
|
||||
func NewDirectRepositoryWriter(t interface {
|
||||
mock.TestingT
|
||||
Cleanup(func())
|
||||
}) *DirectRepositoryWriter {
|
||||
mock := &DirectRepositoryWriter{}
|
||||
mock.Mock.Test(t)
|
||||
|
||||
t.Cleanup(func() { mock.AssertExpectations(t) })
|
||||
|
||||
return mock
|
||||
}
|
||||
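Every generated method above follows one dispatch pattern: each configured return value is either a literal or a function of the method's arguments, distinguished at call time by a type assertion. A stdlib-only sketch of that pattern, mirroring the generated `Token` body (the `call` and `stringResult` names here are illustrative, not part of mockery or testify):

```go
package main

import "fmt"

// call mimics testify's mock.Arguments: a slice of configured return values.
type call struct{ returns []interface{} }

// get returns the i-th configured return value, like ret.Get(i).
func (c call) get(i int) interface{} { return c.returns[i] }

// stringResult mirrors the generated mock body for a method returning string:
// if the caller configured a function, invoke it with the arguments;
// otherwise type-assert the stored value directly.
func stringResult(ret call, password string) string {
	if rf, ok := ret.get(0).(func(string) string); ok {
		return rf(password)
	}
	return ret.get(0).(string)
}

func main() {
	// Static return value, as configured via mock.On(...).Return("tok").
	fmt.Println(stringResult(call{returns: []interface{}{"tok"}}, "pw"))
	// Computed return value, as configured via a function return.
	fmt.Println(stringResult(call{returns: []interface{}{func(p string) string { return p + "-token" }}}, "pw"))
}
```

The two-step assertion (full-signature function first, then per-value fallback) is why each generated method can serve both `Return(...)` and function-valued configurations.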
832
pkg/repository/udmrepo/kopialib/backend/mocks/repository.go
Normal file
@@ -0,0 +1,832 @@
// Code generated by mockery; DO NOT EDIT.
// github.com/vektra/mockery
// template: testify

package mocks

import (
	"context"
	"time"

	"github.com/kopia/kopia/repo"
	"github.com/kopia/kopia/repo/content"
	"github.com/kopia/kopia/repo/manifest"
	"github.com/kopia/kopia/repo/object"
	mock "github.com/stretchr/testify/mock"
)

// NewMockRepository creates a new instance of MockRepository. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewMockRepository(t interface {
	mock.TestingT
	Cleanup(func())
}) *MockRepository {
	mock := &MockRepository{}
	mock.Mock.Test(t)

	t.Cleanup(func() { mock.AssertExpectations(t) })

	return mock
}

// MockRepository is an autogenerated mock type for the Repository type
type MockRepository struct {
	mock.Mock
}

type MockRepository_Expecter struct {
	mock *mock.Mock
}

func (_m *MockRepository) EXPECT() *MockRepository_Expecter {
	return &MockRepository_Expecter{mock: &_m.Mock}
}

// ClientOptions provides a mock function for the type MockRepository
func (_mock *MockRepository) ClientOptions() repo.ClientOptions {
	ret := _mock.Called()

	if len(ret) == 0 {
		panic("no return value specified for ClientOptions")
	}

	var r0 repo.ClientOptions
	if returnFunc, ok := ret.Get(0).(func() repo.ClientOptions); ok {
		r0 = returnFunc()
	} else {
		r0 = ret.Get(0).(repo.ClientOptions)
	}
	return r0
}

// MockRepository_ClientOptions_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'ClientOptions'
type MockRepository_ClientOptions_Call struct {
	*mock.Call
}

// ClientOptions is a helper method to define mock.On call
func (_e *MockRepository_Expecter) ClientOptions() *MockRepository_ClientOptions_Call {
	return &MockRepository_ClientOptions_Call{Call: _e.mock.On("ClientOptions")}
}

func (_c *MockRepository_ClientOptions_Call) Run(run func()) *MockRepository_ClientOptions_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run()
	})
	return _c
}

func (_c *MockRepository_ClientOptions_Call) Return(clientOptions repo.ClientOptions) *MockRepository_ClientOptions_Call {
	_c.Call.Return(clientOptions)
	return _c
}

func (_c *MockRepository_ClientOptions_Call) RunAndReturn(run func() repo.ClientOptions) *MockRepository_ClientOptions_Call {
	_c.Call.Return(run)
	return _c
}

// Close provides a mock function for the type MockRepository
func (_mock *MockRepository) Close(ctx context.Context) error {
	ret := _mock.Called(ctx)

	if len(ret) == 0 {
		panic("no return value specified for Close")
	}

	var r0 error
	if returnFunc, ok := ret.Get(0).(func(context.Context) error); ok {
		r0 = returnFunc(ctx)
	} else {
		r0 = ret.Error(0)
	}
	return r0
}

// MockRepository_Close_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Close'
type MockRepository_Close_Call struct {
	*mock.Call
}

// Close is a helper method to define mock.On call
//   - ctx context.Context
func (_e *MockRepository_Expecter) Close(ctx interface{}) *MockRepository_Close_Call {
	return &MockRepository_Close_Call{Call: _e.mock.On("Close", ctx)}
}

func (_c *MockRepository_Close_Call) Run(run func(ctx context.Context)) *MockRepository_Close_Call {
	_c.Call.Run(func(args mock.Arguments) {
		var arg0 context.Context
		if args[0] != nil {
			arg0 = args[0].(context.Context)
		}
		run(
			arg0,
		)
	})
	return _c
}

func (_c *MockRepository_Close_Call) Return(err error) *MockRepository_Close_Call {
	_c.Call.Return(err)
	return _c
}

func (_c *MockRepository_Close_Call) RunAndReturn(run func(ctx context.Context) error) *MockRepository_Close_Call {
	_c.Call.Return(run)
	return _c
}

// ContentInfo provides a mock function for the type MockRepository
func (_mock *MockRepository) ContentInfo(ctx context.Context, contentID content.ID) (content.Info, error) {
	ret := _mock.Called(ctx, contentID)

	if len(ret) == 0 {
		panic("no return value specified for ContentInfo")
	}

	var r0 content.Info
	var r1 error
	if returnFunc, ok := ret.Get(0).(func(context.Context, content.ID) (content.Info, error)); ok {
		return returnFunc(ctx, contentID)
	}
	if returnFunc, ok := ret.Get(0).(func(context.Context, content.ID) content.Info); ok {
		r0 = returnFunc(ctx, contentID)
	} else {
		r0 = ret.Get(0).(content.Info)
	}
	if returnFunc, ok := ret.Get(1).(func(context.Context, content.ID) error); ok {
		r1 = returnFunc(ctx, contentID)
	} else {
		r1 = ret.Error(1)
	}
	return r0, r1
}

// MockRepository_ContentInfo_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'ContentInfo'
type MockRepository_ContentInfo_Call struct {
	*mock.Call
}

// ContentInfo is a helper method to define mock.On call
//   - ctx context.Context
//   - contentID content.ID
func (_e *MockRepository_Expecter) ContentInfo(ctx interface{}, contentID interface{}) *MockRepository_ContentInfo_Call {
	return &MockRepository_ContentInfo_Call{Call: _e.mock.On("ContentInfo", ctx, contentID)}
}

func (_c *MockRepository_ContentInfo_Call) Run(run func(ctx context.Context, contentID content.ID)) *MockRepository_ContentInfo_Call {
	_c.Call.Run(func(args mock.Arguments) {
		var arg0 context.Context
		if args[0] != nil {
			arg0 = args[0].(context.Context)
		}
		var arg1 content.ID
		if args[1] != nil {
			arg1 = args[1].(content.ID)
		}
		run(
			arg0,
			arg1,
		)
	})
	return _c
}

func (_c *MockRepository_ContentInfo_Call) Return(v content.Info, err error) *MockRepository_ContentInfo_Call {
	_c.Call.Return(v, err)
	return _c
}

func (_c *MockRepository_ContentInfo_Call) RunAndReturn(run func(ctx context.Context, contentID content.ID) (content.Info, error)) *MockRepository_ContentInfo_Call {
	_c.Call.Return(run)
	return _c
}

// FindManifests provides a mock function for the type MockRepository
func (_mock *MockRepository) FindManifests(ctx context.Context, labels map[string]string) ([]*manifest.EntryMetadata, error) {
	ret := _mock.Called(ctx, labels)

	if len(ret) == 0 {
		panic("no return value specified for FindManifests")
	}

	var r0 []*manifest.EntryMetadata
	var r1 error
	if returnFunc, ok := ret.Get(0).(func(context.Context, map[string]string) ([]*manifest.EntryMetadata, error)); ok {
		return returnFunc(ctx, labels)
	}
	if returnFunc, ok := ret.Get(0).(func(context.Context, map[string]string) []*manifest.EntryMetadata); ok {
		r0 = returnFunc(ctx, labels)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]*manifest.EntryMetadata)
		}
	}
	if returnFunc, ok := ret.Get(1).(func(context.Context, map[string]string) error); ok {
		r1 = returnFunc(ctx, labels)
	} else {
		r1 = ret.Error(1)
	}
	return r0, r1
}

// MockRepository_FindManifests_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'FindManifests'
type MockRepository_FindManifests_Call struct {
	*mock.Call
}

// FindManifests is a helper method to define mock.On call
//   - ctx context.Context
//   - labels map[string]string
func (_e *MockRepository_Expecter) FindManifests(ctx interface{}, labels interface{}) *MockRepository_FindManifests_Call {
	return &MockRepository_FindManifests_Call{Call: _e.mock.On("FindManifests", ctx, labels)}
}

func (_c *MockRepository_FindManifests_Call) Run(run func(ctx context.Context, labels map[string]string)) *MockRepository_FindManifests_Call {
	_c.Call.Run(func(args mock.Arguments) {
		var arg0 context.Context
		if args[0] != nil {
			arg0 = args[0].(context.Context)
		}
		var arg1 map[string]string
		if args[1] != nil {
			arg1 = args[1].(map[string]string)
		}
		run(
			arg0,
			arg1,
		)
	})
	return _c
}

func (_c *MockRepository_FindManifests_Call) Return(entryMetadatas []*manifest.EntryMetadata, err error) *MockRepository_FindManifests_Call {
	_c.Call.Return(entryMetadatas, err)
	return _c
}

func (_c *MockRepository_FindManifests_Call) RunAndReturn(run func(ctx context.Context, labels map[string]string) ([]*manifest.EntryMetadata, error)) *MockRepository_FindManifests_Call {
	_c.Call.Return(run)
	return _c
}

// GetManifest provides a mock function for the type MockRepository
func (_mock *MockRepository) GetManifest(ctx context.Context, id manifest.ID, data any) (*manifest.EntryMetadata, error) {
	ret := _mock.Called(ctx, id, data)

	if len(ret) == 0 {
		panic("no return value specified for GetManifest")
	}

	var r0 *manifest.EntryMetadata
	var r1 error
	if returnFunc, ok := ret.Get(0).(func(context.Context, manifest.ID, any) (*manifest.EntryMetadata, error)); ok {
		return returnFunc(ctx, id, data)
	}
	if returnFunc, ok := ret.Get(0).(func(context.Context, manifest.ID, any) *manifest.EntryMetadata); ok {
		r0 = returnFunc(ctx, id, data)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(*manifest.EntryMetadata)
		}
	}
	if returnFunc, ok := ret.Get(1).(func(context.Context, manifest.ID, any) error); ok {
		r1 = returnFunc(ctx, id, data)
	} else {
		r1 = ret.Error(1)
	}
	return r0, r1
}

// MockRepository_GetManifest_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'GetManifest'
type MockRepository_GetManifest_Call struct {
	*mock.Call
}

// GetManifest is a helper method to define mock.On call
//   - ctx context.Context
//   - id manifest.ID
//   - data any
func (_e *MockRepository_Expecter) GetManifest(ctx interface{}, id interface{}, data interface{}) *MockRepository_GetManifest_Call {
	return &MockRepository_GetManifest_Call{Call: _e.mock.On("GetManifest", ctx, id, data)}
}

func (_c *MockRepository_GetManifest_Call) Run(run func(ctx context.Context, id manifest.ID, data any)) *MockRepository_GetManifest_Call {
	_c.Call.Run(func(args mock.Arguments) {
		var arg0 context.Context
		if args[0] != nil {
			arg0 = args[0].(context.Context)
		}
		var arg1 manifest.ID
		if args[1] != nil {
			arg1 = args[1].(manifest.ID)
		}
		var arg2 any
		if args[2] != nil {
			arg2 = args[2].(any)
		}
		run(
			arg0,
			arg1,
			arg2,
		)
	})
	return _c
}

func (_c *MockRepository_GetManifest_Call) Return(entryMetadata *manifest.EntryMetadata, err error) *MockRepository_GetManifest_Call {
	_c.Call.Return(entryMetadata, err)
	return _c
}

func (_c *MockRepository_GetManifest_Call) RunAndReturn(run func(ctx context.Context, id manifest.ID, data any) (*manifest.EntryMetadata, error)) *MockRepository_GetManifest_Call {
	_c.Call.Return(run)
	return _c
}

// NewWriter provides a mock function for the type MockRepository
func (_mock *MockRepository) NewWriter(ctx context.Context, opt repo.WriteSessionOptions) (context.Context, repo.RepositoryWriter, error) {
	ret := _mock.Called(ctx, opt)

	if len(ret) == 0 {
		panic("no return value specified for NewWriter")
	}

	var r0 context.Context
	var r1 repo.RepositoryWriter
	var r2 error
	if returnFunc, ok := ret.Get(0).(func(context.Context, repo.WriteSessionOptions) (context.Context, repo.RepositoryWriter, error)); ok {
		return returnFunc(ctx, opt)
	}
	if returnFunc, ok := ret.Get(0).(func(context.Context, repo.WriteSessionOptions) context.Context); ok {
		r0 = returnFunc(ctx, opt)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(context.Context)
		}
	}
	if returnFunc, ok := ret.Get(1).(func(context.Context, repo.WriteSessionOptions) repo.RepositoryWriter); ok {
		r1 = returnFunc(ctx, opt)
	} else {
		if ret.Get(1) != nil {
			r1 = ret.Get(1).(repo.RepositoryWriter)
		}
	}
	if returnFunc, ok := ret.Get(2).(func(context.Context, repo.WriteSessionOptions) error); ok {
		r2 = returnFunc(ctx, opt)
	} else {
		r2 = ret.Error(2)
	}
	return r0, r1, r2
}

// MockRepository_NewWriter_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'NewWriter'
type MockRepository_NewWriter_Call struct {
	*mock.Call
}

// NewWriter is a helper method to define mock.On call
//   - ctx context.Context
//   - opt repo.WriteSessionOptions
func (_e *MockRepository_Expecter) NewWriter(ctx interface{}, opt interface{}) *MockRepository_NewWriter_Call {
	return &MockRepository_NewWriter_Call{Call: _e.mock.On("NewWriter", ctx, opt)}
}

func (_c *MockRepository_NewWriter_Call) Run(run func(ctx context.Context, opt repo.WriteSessionOptions)) *MockRepository_NewWriter_Call {
	_c.Call.Run(func(args mock.Arguments) {
		var arg0 context.Context
		if args[0] != nil {
			arg0 = args[0].(context.Context)
		}
		var arg1 repo.WriteSessionOptions
		if args[1] != nil {
			arg1 = args[1].(repo.WriteSessionOptions)
		}
		run(
			arg0,
			arg1,
		)
	})
	return _c
}

func (_c *MockRepository_NewWriter_Call) Return(context1 context.Context, repositoryWriter repo.RepositoryWriter, err error) *MockRepository_NewWriter_Call {
	_c.Call.Return(context1, repositoryWriter, err)
	return _c
}

func (_c *MockRepository_NewWriter_Call) RunAndReturn(run func(ctx context.Context, opt repo.WriteSessionOptions) (context.Context, repo.RepositoryWriter, error)) *MockRepository_NewWriter_Call {
	_c.Call.Return(run)
	return _c
}
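The `*_Call` helpers above (`Run`, `Return`, `RunAndReturn`) form a fluent builder over `*mock.Call`: each method records behavior and returns the receiver so configuration can be chained. A stdlib-only sketch of that chaining style (the `expecterCall` type is illustrative, not testify's actual API):

```go
package main

import "fmt"

// expecterCall mimics the shape of a generated *_Call wrapper: it records
// what the mocked method should do when invoked.
type expecterCall struct {
	run func()
	ret error
}

// Run records a hook to fire on invocation and returns the receiver for chaining.
func (c *expecterCall) Run(f func()) *expecterCall { c.run = f; return c }

// Return records the value to hand back and returns the receiver for chaining.
func (c *expecterCall) Return(err error) *expecterCall { c.ret = err; return c }

// invoke plays back the recorded behavior, as the mock does on a real call.
func (c *expecterCall) invoke() error {
	if c.run != nil {
		c.run()
	}
	return c.ret
}

func main() {
	called := false
	// Chained configuration, analogous to EXPECT().Close(ctx).Run(...).Return(nil).
	c := (&expecterCall{}).Run(func() { called = true }).Return(nil)
	fmt.Println(c.invoke(), called)
}
```

The typed wrapper is what the Expecter style buys over plain `mock.On("Close", ctx).Return(nil)`: argument and return types are checked at compile time instead of at playback.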
// OpenObject provides a mock function for the type MockRepository
|
||||
func (_mock *MockRepository) OpenObject(ctx context.Context, id object.ID) (object.Reader, error) {
|
||||
ret := _mock.Called(ctx, id)
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for OpenObject")
|
||||
}
|
||||
|
||||
var r0 object.Reader
|
||||
var r1 error
|
||||
if returnFunc, ok := ret.Get(0).(func(context.Context, object.ID) (object.Reader, error)); ok {
|
||||
return returnFunc(ctx, id)
|
||||
}
|
||||
if returnFunc, ok := ret.Get(0).(func(context.Context, object.ID) object.Reader); ok {
|
||||
r0 = returnFunc(ctx, id)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(object.Reader)
|
||||
}
|
||||
}
|
||||
if returnFunc, ok := ret.Get(1).(func(context.Context, object.ID) error); ok {
|
||||
r1 = returnFunc(ctx, id)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// MockRepository_OpenObject_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'OpenObject'
|
||||
type MockRepository_OpenObject_Call struct {
|
||||
*mock.Call
|
||||
}
|
||||
|
||||
// OpenObject is a helper method to define mock.On call
|
||||
// - ctx context.Context
|
||||
// - id object.ID
|
||||
func (_e *MockRepository_Expecter) OpenObject(ctx interface{}, id interface{}) *MockRepository_OpenObject_Call {
|
||||
return &MockRepository_OpenObject_Call{Call: _e.mock.On("OpenObject", ctx, id)}
|
||||
}
|
||||
|
||||
func (_c *MockRepository_OpenObject_Call) Run(run func(ctx context.Context, id object.ID)) *MockRepository_OpenObject_Call {
|
||||
_c.Call.Run(func(args mock.Arguments) {
|
||||
var arg0 context.Context
|
||||
if args[0] != nil {
|
||||
arg0 = args[0].(context.Context)
|
||||
}
|
||||
var arg1 object.ID
|
||||
if args[1] != nil {
|
||||
arg1 = args[1].(object.ID)
|
||||
}
|
||||
run(
|
||||
arg0,
|
||||
arg1,
|
||||
)
|
||||
})
|
||||
return _c
|
||||
}
|
||||
|
||||
func (_c *MockRepository_OpenObject_Call) Return(reader object.Reader, err error) *MockRepository_OpenObject_Call {
|
||||
_c.Call.Return(reader, err)
|
||||
return _c
|
||||
}
|
||||
|
||||
func (_c *MockRepository_OpenObject_Call) RunAndReturn(run func(ctx context.Context, id object.ID) (object.Reader, error)) *MockRepository_OpenObject_Call {
|
||||
_c.Call.Return(run)
|
||||
return _c
|
||||
}
|
||||
|
||||
// PrefetchContents provides a mock function for the type MockRepository
|
||||
func (_mock *MockRepository) PrefetchContents(ctx context.Context, contentIDs []content.ID, hint string) []content.ID {
|
||||
ret := _mock.Called(ctx, contentIDs, hint)
|
||||
|
||||
if len(ret) == 0 {
|
||||
panic("no return value specified for PrefetchContents")
|
||||
}
|
||||
|
||||
var r0 []content.ID
|
||||
if returnFunc, ok := ret.Get(0).(func(context.Context, []content.ID, string) []content.ID); ok {
|
||||
r0 = returnFunc(ctx, contentIDs, hint)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).([]content.ID)
|
||||
}
|
||||
}
|
||||
return r0
|
||||
}
|
||||
|
||||
// MockRepository_PrefetchContents_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'PrefetchContents'
|
||||
type MockRepository_PrefetchContents_Call struct {
|
||||
*mock.Call
|
||||
}
|
||||
|
||||
// PrefetchContents is a helper method to define mock.On call
|
||||
// - ctx context.Context
|
||||
// - contentIDs []content.ID
|
||||
// - hint string
|
||||
func (_e *MockRepository_Expecter) PrefetchContents(ctx interface{}, contentIDs interface{}, hint interface{}) *MockRepository_PrefetchContents_Call {
|
||||
return &MockRepository_PrefetchContents_Call{Call: _e.mock.On("PrefetchContents", ctx, contentIDs, hint)}
|
||||
}
|
||||
|
||||
func (_c *MockRepository_PrefetchContents_Call) Run(run func(ctx context.Context, contentIDs []content.ID, hint string)) *MockRepository_PrefetchContents_Call {
|
||||
_c.Call.Run(func(args mock.Arguments) {
|
||||
var arg0 context.Context
|
||||
if args[0] != nil {
|
||||
arg0 = args[0].(context.Context)
|
||||
}
|
||||
var arg1 []content.ID
|
||||
if args[1] != nil {
|
||||
arg1 = args[1].([]content.ID)
|
||||
}
|
||||
var arg2 string
|
||||
if args[2] != nil {
|
||||
arg2 = args[2].(string)
|
||||
}
|
||||
run(
|
||||
arg0,
|
||||
arg1,
|
||||
arg2,
|
||||
)
|
||||
})
|
||||
return _c
|
||||
}
|
||||
|
||||
func (_c *MockRepository_PrefetchContents_Call) Return(vs []content.ID) *MockRepository_PrefetchContents_Call {
|
||||
_c.Call.Return(vs)
|
||||
return _c
|
||||
}
|
||||
|
||||
func (_c *MockRepository_PrefetchContents_Call) RunAndReturn(run func(ctx context.Context, contentIDs []content.ID, hint string) []content.ID) *MockRepository_PrefetchContents_Call {
|
||||
_c.Call.Return(run)
|
||||
return _c
|
||||
}
|
||||
|
||||
// PrefetchObjects provides a mock function for the type MockRepository
func (_mock *MockRepository) PrefetchObjects(ctx context.Context, objectIDs []object.ID, hint string) ([]content.ID, error) {
	ret := _mock.Called(ctx, objectIDs, hint)

	if len(ret) == 0 {
		panic("no return value specified for PrefetchObjects")
	}

	var r0 []content.ID
	var r1 error
	if returnFunc, ok := ret.Get(0).(func(context.Context, []object.ID, string) ([]content.ID, error)); ok {
		return returnFunc(ctx, objectIDs, hint)
	}
	if returnFunc, ok := ret.Get(0).(func(context.Context, []object.ID, string) []content.ID); ok {
		r0 = returnFunc(ctx, objectIDs, hint)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]content.ID)
		}
	}
	if returnFunc, ok := ret.Get(1).(func(context.Context, []object.ID, string) error); ok {
		r1 = returnFunc(ctx, objectIDs, hint)
	} else {
		r1 = ret.Error(1)
	}
	return r0, r1
}

// MockRepository_PrefetchObjects_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'PrefetchObjects'
type MockRepository_PrefetchObjects_Call struct {
	*mock.Call
}

// PrefetchObjects is a helper method to define mock.On call
// - ctx context.Context
// - objectIDs []object.ID
// - hint string
func (_e *MockRepository_Expecter) PrefetchObjects(ctx interface{}, objectIDs interface{}, hint interface{}) *MockRepository_PrefetchObjects_Call {
	return &MockRepository_PrefetchObjects_Call{Call: _e.mock.On("PrefetchObjects", ctx, objectIDs, hint)}
}

func (_c *MockRepository_PrefetchObjects_Call) Run(run func(ctx context.Context, objectIDs []object.ID, hint string)) *MockRepository_PrefetchObjects_Call {
	_c.Call.Run(func(args mock.Arguments) {
		var arg0 context.Context
		if args[0] != nil {
			arg0 = args[0].(context.Context)
		}
		var arg1 []object.ID
		if args[1] != nil {
			arg1 = args[1].([]object.ID)
		}
		var arg2 string
		if args[2] != nil {
			arg2 = args[2].(string)
		}
		run(
			arg0,
			arg1,
			arg2,
		)
	})
	return _c
}

func (_c *MockRepository_PrefetchObjects_Call) Return(vs []content.ID, err error) *MockRepository_PrefetchObjects_Call {
	_c.Call.Return(vs, err)
	return _c
}

func (_c *MockRepository_PrefetchObjects_Call) RunAndReturn(run func(ctx context.Context, objectIDs []object.ID, hint string) ([]content.ID, error)) *MockRepository_PrefetchObjects_Call {
	_c.Call.Return(run)
	return _c
}
// Refresh provides a mock function for the type MockRepository
func (_mock *MockRepository) Refresh(ctx context.Context) error {
	ret := _mock.Called(ctx)

	if len(ret) == 0 {
		panic("no return value specified for Refresh")
	}

	var r0 error
	if returnFunc, ok := ret.Get(0).(func(context.Context) error); ok {
		r0 = returnFunc(ctx)
	} else {
		r0 = ret.Error(0)
	}
	return r0
}

// MockRepository_Refresh_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Refresh'
type MockRepository_Refresh_Call struct {
	*mock.Call
}

// Refresh is a helper method to define mock.On call
// - ctx context.Context
func (_e *MockRepository_Expecter) Refresh(ctx interface{}) *MockRepository_Refresh_Call {
	return &MockRepository_Refresh_Call{Call: _e.mock.On("Refresh", ctx)}
}

func (_c *MockRepository_Refresh_Call) Run(run func(ctx context.Context)) *MockRepository_Refresh_Call {
	_c.Call.Run(func(args mock.Arguments) {
		var arg0 context.Context
		if args[0] != nil {
			arg0 = args[0].(context.Context)
		}
		run(
			arg0,
		)
	})
	return _c
}

func (_c *MockRepository_Refresh_Call) Return(err error) *MockRepository_Refresh_Call {
	_c.Call.Return(err)
	return _c
}

func (_c *MockRepository_Refresh_Call) RunAndReturn(run func(ctx context.Context) error) *MockRepository_Refresh_Call {
	_c.Call.Return(run)
	return _c
}
// Time provides a mock function for the type MockRepository
func (_mock *MockRepository) Time() time.Time {
	ret := _mock.Called()

	if len(ret) == 0 {
		panic("no return value specified for Time")
	}

	var r0 time.Time
	if returnFunc, ok := ret.Get(0).(func() time.Time); ok {
		r0 = returnFunc()
	} else {
		r0 = ret.Get(0).(time.Time)
	}
	return r0
}

// MockRepository_Time_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'Time'
type MockRepository_Time_Call struct {
	*mock.Call
}

// Time is a helper method to define mock.On call
func (_e *MockRepository_Expecter) Time() *MockRepository_Time_Call {
	return &MockRepository_Time_Call{Call: _e.mock.On("Time")}
}

func (_c *MockRepository_Time_Call) Run(run func()) *MockRepository_Time_Call {
	_c.Call.Run(func(args mock.Arguments) {
		run()
	})
	return _c
}

func (_c *MockRepository_Time_Call) Return(time1 time.Time) *MockRepository_Time_Call {
	_c.Call.Return(time1)
	return _c
}

func (_c *MockRepository_Time_Call) RunAndReturn(run func() time.Time) *MockRepository_Time_Call {
	_c.Call.Return(run)
	return _c
}
// UpdateDescription provides a mock function for the type MockRepository
func (_mock *MockRepository) UpdateDescription(d string) {
	_mock.Called(d)
	return
}

// MockRepository_UpdateDescription_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'UpdateDescription'
type MockRepository_UpdateDescription_Call struct {
	*mock.Call
}

// UpdateDescription is a helper method to define mock.On call
// - d string
func (_e *MockRepository_Expecter) UpdateDescription(d interface{}) *MockRepository_UpdateDescription_Call {
	return &MockRepository_UpdateDescription_Call{Call: _e.mock.On("UpdateDescription", d)}
}

func (_c *MockRepository_UpdateDescription_Call) Run(run func(d string)) *MockRepository_UpdateDescription_Call {
	_c.Call.Run(func(args mock.Arguments) {
		var arg0 string
		if args[0] != nil {
			arg0 = args[0].(string)
		}
		run(
			arg0,
		)
	})
	return _c
}

func (_c *MockRepository_UpdateDescription_Call) Return() *MockRepository_UpdateDescription_Call {
	_c.Call.Return()
	return _c
}

func (_c *MockRepository_UpdateDescription_Call) RunAndReturn(run func(d string)) *MockRepository_UpdateDescription_Call {
	_c.Run(run)
	return _c
}
// VerifyObject provides a mock function for the type MockRepository
func (_mock *MockRepository) VerifyObject(ctx context.Context, id object.ID) ([]content.ID, error) {
	ret := _mock.Called(ctx, id)

	if len(ret) == 0 {
		panic("no return value specified for VerifyObject")
	}

	var r0 []content.ID
	var r1 error
	if returnFunc, ok := ret.Get(0).(func(context.Context, object.ID) ([]content.ID, error)); ok {
		return returnFunc(ctx, id)
	}
	if returnFunc, ok := ret.Get(0).(func(context.Context, object.ID) []content.ID); ok {
		r0 = returnFunc(ctx, id)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]content.ID)
		}
	}
	if returnFunc, ok := ret.Get(1).(func(context.Context, object.ID) error); ok {
		r1 = returnFunc(ctx, id)
	} else {
		r1 = ret.Error(1)
	}
	return r0, r1
}

// MockRepository_VerifyObject_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'VerifyObject'
type MockRepository_VerifyObject_Call struct {
	*mock.Call
}

// VerifyObject is a helper method to define mock.On call
// - ctx context.Context
// - id object.ID
func (_e *MockRepository_Expecter) VerifyObject(ctx interface{}, id interface{}) *MockRepository_VerifyObject_Call {
	return &MockRepository_VerifyObject_Call{Call: _e.mock.On("VerifyObject", ctx, id)}
}

func (_c *MockRepository_VerifyObject_Call) Run(run func(ctx context.Context, id object.ID)) *MockRepository_VerifyObject_Call {
	_c.Call.Run(func(args mock.Arguments) {
		var arg0 context.Context
		if args[0] != nil {
			arg0 = args[0].(context.Context)
		}
		var arg1 object.ID
		if args[1] != nil {
			arg1 = args[1].(object.ID)
		}
		run(
			arg0,
			arg1,
		)
	})
	return _c
}

func (_c *MockRepository_VerifyObject_Call) Return(vs []content.ID, err error) *MockRepository_VerifyObject_Call {
	_c.Call.Return(vs, err)
	return _c
}

func (_c *MockRepository_VerifyObject_Call) RunAndReturn(run func(ctx context.Context, id object.ID) ([]content.ID, error)) *MockRepository_VerifyObject_Call {
	_c.Call.Return(run)
	return _c
}
1255  pkg/repository/udmrepo/kopialib/backend/mocks/repository_writer.go  Normal file
File diff suppressed because it is too large
@@ -19,6 +19,7 @@ package kopialib
 import (
 	"context"
 	"encoding/json"
+	"io"
 	"os"
 	"strings"
 	"sync/atomic"
@@ -74,7 +75,9 @@ type kopiaObjectWriter struct {
 	rawWriter object.Writer
 }
 
-type openOptions struct{}
+type openOptions struct {
+	repoLogger io.Writer
+}
 
 const (
 	defaultLogInterval = time.Second * 10
@@ -154,7 +157,7 @@ func (ks *kopiaRepoService) Open(ctx context.Context, repoOption udmrepo.RepoOpt
 
 	repoCtx := kopia.SetupKopiaLog(ctx, ks.logger)
 
-	r, err := openKopiaRepo(repoCtx, repoConfig, repoOption.RepoPassword, nil)
+	r, err := openKopiaRepo(repoCtx, repoConfig, repoOption.RepoPassword, &openOptions{repoLogger: kopia.RepositoryLogger(ks.logger)})
 	if err != nil {
 		return nil, err
 	}
@@ -199,7 +202,7 @@ func (ks *kopiaRepoService) Maintain(ctx context.Context, repoOption udmrepo.Rep
 
 	ks.logger.Info("Start to open repo for maintenance, allow index write on load")
 
-	r, err := openKopiaRepo(repoCtx, repoConfig, repoOption.RepoPassword, nil)
+	r, err := openKopiaRepo(repoCtx, repoConfig, repoOption.RepoPassword, &openOptions{repoLogger: kopia.RepositoryLogger(ks.logger)})
 	if err != nil {
 		return err
 	}
@@ -625,8 +628,10 @@ func (lt *logThrottle) shouldLog() bool {
 	return false
 }
 
-func openKopiaRepo(ctx context.Context, configFile string, password string, _ *openOptions) (repo.Repository, error) {
-	r, err := kopiaRepoOpen(ctx, configFile, password, &repo.Options{})
+func openKopiaRepo(ctx context.Context, configFile string, password string, options *openOptions) (repo.Repository, error) {
+	r, err := kopiaRepoOpen(ctx, configFile, password, &repo.Options{
+		ContentLogWriter: options.repoLogger,
+	})
 	if os.IsNotExist(err) {
 		return nil, errors.Wrap(err, "error to open repo, repo doesn't exist")
 	}
@@ -38,13 +38,14 @@ import (
 )
 
 func TestOpen(t *testing.T) {
-	var directRpo *repomocks.DirectRepository
+	var directRpo *repomocks.MockRepository
 	testCases := []struct {
 		name           string
 		repoOptions    udmrepo.RepoOptions
-		returnRepo     *repomocks.DirectRepository
+		returnRepo     *repomocks.MockRepository
 		repoOpen       func(context.Context, string, string, *repo.Options) (repo.Repository, error)
 		newWriterError error
+		mockClose      bool
 		expectedErr    string
 		expected       *kopiaRepository
 	}{
@@ -87,7 +88,8 @@
 			repoOpen: func(context.Context, string, string, *repo.Options) (repo.Repository, error) {
 				return directRpo, nil
 			},
-			returnRepo:     new(repomocks.DirectRepository),
+			returnRepo:     repomocks.NewMockRepository(t),
+			mockClose:      true,
 			newWriterError: errors.New("fake-new-writer-error"),
 			expectedErr:    "error to create repo writer: fake-new-writer-error",
 		},
@@ -100,7 +102,7 @@
 			repoOpen: func(context.Context, string, string, *repo.Options) (repo.Repository, error) {
 				return directRpo, nil
 			},
-			returnRepo: new(repomocks.DirectRepository),
+			returnRepo: repomocks.NewMockRepository(t),
 			expected: &kopiaRepository{
 				description: "fake-description",
 				throttle: logThrottle{
@@ -128,7 +130,10 @@
 
 		if tc.returnRepo != nil {
 			tc.returnRepo.On("NewWriter", mock.Anything, mock.Anything).Return(nil, nil, tc.newWriterError)
-			tc.returnRepo.On("Close", mock.Anything).Return(nil)
+
+			if tc.mockClose {
+				tc.returnRepo.On("Close", mock.Anything).Return(nil)
+			}
 		}
 
 		repo, err := service.Open(t.Context(), tc.repoOptions)
@@ -149,12 +154,11 @@
 }
 
 func TestMaintain(t *testing.T) {
-	var directRpo *repomocks.DirectRepository
 	testCases := []struct {
 		name               string
 		repoOptions        udmrepo.RepoOptions
-		returnRepo         *repomocks.DirectRepository
-		returnRepoWriter   *repomocks.DirectRepositoryWriter
+		returnRepo         *repomocks.MockRepository
+		returnRepoWriter   *repomocks.MockRepositoryWriter
 		repoOpen           func(context.Context, string, string, *repo.Options) (repo.Repository, error)
 		newRepoWriterError error
 		findManifestError  error
@@ -193,33 +197,6 @@
 			},
 			expectedErr: "error to open repo: fake-repo-open-error",
 		},
-		{
-			name: "write session fail",
-			repoOptions: udmrepo.RepoOptions{
-				ConfigFilePath: "/tmp",
-				GeneralOptions: map[string]string{},
-			},
-			repoOpen: func(context.Context, string, string, *repo.Options) (repo.Repository, error) {
-				return directRpo, nil
-			},
-			returnRepo:         new(repomocks.DirectRepository),
-			newRepoWriterError: errors.New("fake-new-direct-writer-error"),
-			expectedErr:        "error to maintain repo: unable to create direct writer: fake-new-direct-writer-error",
-		},
-		{
-			name: "maintain fail",
-			repoOptions: udmrepo.RepoOptions{
-				ConfigFilePath: "/tmp",
-				GeneralOptions: map[string]string{},
-			},
-			repoOpen: func(context.Context, string, string, *repo.Options) (repo.Repository, error) {
-				return directRpo, nil
-			},
-			returnRepo:        new(repomocks.DirectRepository),
-			returnRepoWriter:  new(repomocks.DirectRepositoryWriter),
-			findManifestError: errors.New("fake-find-manifest-error"),
-			expectedErr:       "error to maintain repo: error to run maintenance under mode auto: unable to get maintenance params: error looking for maintenance manifest: fake-find-manifest-error",
-		},
 	}
 
 	for _, tc := range testCases {
@@ -236,22 +213,9 @@
 		}
 
-		if tc.returnRepo != nil {
-			directRpo = tc.returnRepo
-		}
-
-		if tc.returnRepo != nil {
-			tc.returnRepo.On("NewDirectWriter", mock.Anything, mock.Anything).Return(ctx, tc.returnRepoWriter, tc.newRepoWriterError)
-			tc.returnRepo.On("Close", mock.Anything).Return(nil)
-		}
-
-		if tc.returnRepoWriter != nil {
-			tc.returnRepoWriter.On("DisableIndexRefresh").Return()
-			tc.returnRepoWriter.On("AlsoLogToContentLog", mock.Anything).Return(nil)
-			tc.returnRepoWriter.On("Close", mock.Anything).Return(nil)
-			tc.returnRepoWriter.On("FindManifests", mock.Anything, mock.Anything).Return(nil, tc.findManifestError)
-			tc.returnRepoWriter.On("ClientOptions").Return(repo.ClientOptions{})
-		}
-
 		err := service.Maintain(ctx, tc.repoOptions)
 
 		if tc.expectedErr == "" {
@@ -314,7 +278,7 @@ func TestShouldLog(t *testing.T) {
 func TestOpenObject(t *testing.T) {
 	testCases := []struct {
 		name        string
-		rawRepo     *repomocks.DirectRepository
+		rawRepo     *repomocks.MockRepository
 		objectID    string
 		retErr      error
 		expectedErr string
@@ -325,13 +289,13 @@
 		},
 		{
 			name:        "objectID is invalid",
-			rawRepo:     repomocks.NewDirectRepository(t),
+			rawRepo:     repomocks.NewMockRepository(t),
 			objectID:    "fake-id",
 			expectedErr: "error to parse object ID from fake-id: malformed content ID: \"fake-id\": invalid content prefix",
 		},
 		{
 			name:        "raw open fail",
-			rawRepo:     repomocks.NewDirectRepository(t),
+			rawRepo:     repomocks.NewMockRepository(t),
 			retErr:      errors.New("fake-open-error"),
 			expectedErr: "error to open object: fake-open-error",
 		},
@@ -363,7 +327,7 @@
 func TestGetManifest(t *testing.T) {
 	testCases := []struct {
 		name        string
-		rawRepo     *repomocks.DirectRepository
+		rawRepo     *repomocks.MockRepository
 		retErr      error
 		expectedErr string
 	}{
@@ -373,7 +337,7 @@
 		},
 		{
 			name:        "raw get fail",
-			rawRepo:     repomocks.NewDirectRepository(t),
+			rawRepo:     repomocks.NewMockRepository(t),
 			retErr:      errors.New("fake-get-error"),
 			expectedErr: "error to get manifest: fake-get-error",
 		},
@@ -405,7 +369,7 @@
 func TestFindManifests(t *testing.T) {
 	testCases := []struct {
 		name        string
-		rawRepo     *repomocks.DirectRepository
+		rawRepo     *repomocks.MockRepository
 		retErr      error
 		expectedErr string
 	}{
@@ -415,7 +379,7 @@
 		},
 		{
 			name:        "raw find fail",
-			rawRepo:     repomocks.NewDirectRepository(t),
+			rawRepo:     repomocks.NewMockRepository(t),
 			retErr:      errors.New("fake-find-error"),
 			expectedErr: "error to find manifests: fake-find-error",
 		},
@@ -444,8 +408,8 @@
 func TestClose(t *testing.T) {
 	testCases := []struct {
 		name            string
-		rawRepo         *repomocks.DirectRepository
-		rawWriter       *repomocks.DirectRepositoryWriter
+		rawRepo         *repomocks.MockRepository
+		rawWriter       *repomocks.MockRepositoryWriter
 		rawRepoRetErr   error
 		rawWriterRetErr error
 		expectedErr     string
@@ -455,21 +419,21 @@
 		},
 		{
 			name:      "writer is not nil",
-			rawWriter: repomocks.NewDirectRepositoryWriter(t),
+			rawWriter: repomocks.NewMockRepositoryWriter(t),
 		},
 		{
 			name:    "repo is not nil",
-			rawRepo: repomocks.NewDirectRepository(t),
+			rawRepo: repomocks.NewMockRepository(t),
 		},
 		{
 			name:            "writer close error",
-			rawWriter:       repomocks.NewDirectRepositoryWriter(t),
+			rawWriter:       repomocks.NewMockRepositoryWriter(t),
 			rawWriterRetErr: errors.New("fake-writer-close-error"),
 			expectedErr:     "error to close repo writer: fake-writer-close-error",
 		},
 		{
 			name:          "repo is not nil",
-			rawRepo:       repomocks.NewDirectRepository(t),
+			rawRepo:       repomocks.NewMockRepository(t),
 			rawRepoRetErr: errors.New("fake-repo-close-error"),
 			expectedErr:   "error to close repo: fake-repo-close-error",
 		},
@@ -503,7 +467,7 @@
 func TestPutManifest(t *testing.T) {
 	testCases := []struct {
 		name            string
-		rawWriter       *repomocks.DirectRepositoryWriter
+		rawWriter       *repomocks.MockRepositoryWriter
 		rawWriterRetErr error
 		expectedErr     string
 	}{
@@ -513,7 +477,7 @@
 		},
 		{
 			name:            "raw put fail",
-			rawWriter:       repomocks.NewDirectRepositoryWriter(t),
+			rawWriter:       repomocks.NewMockRepositoryWriter(t),
 			rawWriterRetErr: errors.New("fake-writer-put-error"),
 			expectedErr:     "error to put manifest: fake-writer-put-error",
 		},
@@ -544,7 +508,7 @@
 func TestDeleteManifest(t *testing.T) {
 	testCases := []struct {
 		name            string
-		rawWriter       *repomocks.DirectRepositoryWriter
+		rawWriter       *repomocks.MockRepositoryWriter
 		rawWriterRetErr error
 		expectedErr     string
 	}{
@@ -554,7 +518,7 @@
 		},
 		{
 			name:            "raw delete fail",
-			rawWriter:       repomocks.NewDirectRepositoryWriter(t),
+			rawWriter:       repomocks.NewMockRepositoryWriter(t),
 			rawWriterRetErr: errors.New("fake-writer-delete-error"),
 			expectedErr:     "error to delete manifest: fake-writer-delete-error",
 		},
@@ -583,7 +547,7 @@
 func TestFlush(t *testing.T) {
 	testCases := []struct {
 		name            string
-		rawWriter       *repomocks.DirectRepositoryWriter
+		rawWriter       *repomocks.MockRepositoryWriter
 		rawWriterRetErr error
 		expectedErr     string
 	}{
@@ -593,7 +557,7 @@
 		},
 		{
 			name:            "raw flush fail",
-			rawWriter:       repomocks.NewDirectRepositoryWriter(t),
+			rawWriter:       repomocks.NewMockRepositoryWriter(t),
 			rawWriterRetErr: errors.New("fake-writer-flush-error"),
 			expectedErr:     "error to flush repo: fake-writer-flush-error",
 		},
@@ -623,7 +587,7 @@ func TestConcatenateObjects(t *testing.T) {
 	testCases := []struct {
 		name            string
 		setWriter       bool
-		rawWriter       *repomocks.DirectRepositoryWriter
+		rawWriter       *repomocks.MockRepositoryWriter
 		rawWriterRetErr error
 		objectIDs       []udmrepo.ID
 		expectedErr     string
@@ -649,7 +613,7 @@
 		},
 		{
 			name:            "concatenate error",
-			rawWriter:       repomocks.NewDirectRepositoryWriter(t),
+			rawWriter:       repomocks.NewMockRepositoryWriter(t),
 			rawWriterRetErr: errors.New("fake-concatenate-error"),
 			objectIDs: []udmrepo.ID{
 				"I123456",
@@ -659,7 +623,7 @@
 		},
 		{
 			name:      "succeed",
-			rawWriter: repomocks.NewDirectRepositoryWriter(t),
+			rawWriter: repomocks.NewMockRepositoryWriter(t),
 			objectIDs: []udmrepo.ID{
 				"I123456",
 			},
@@ -695,7 +659,7 @@ func TestNewObjectWriter(t *testing.T) {
 	rawObjWriter := repomocks.NewWriter(t)
 	testCases := []struct {
 		name         string
-		rawWriter    *repomocks.DirectRepositoryWriter
+		rawWriter    *repomocks.MockRepositoryWriter
 		rawWriterRet object.Writer
 		expectedRet  udmrepo.ObjectWriter
 	}{
@@ -704,11 +668,11 @@
 		},
 		{
 			name:      "new object writer fail",
-			rawWriter: repomocks.NewDirectRepositoryWriter(t),
+			rawWriter: repomocks.NewMockRepositoryWriter(t),
 		},
 		{
 			name:         "succeed",
-			rawWriter:    repomocks.NewDirectRepositoryWriter(t),
+			rawWriter:    repomocks.NewMockRepositoryWriter(t),
 			rawWriterRet: rawObjWriter,
 			expectedRet:  &kopiaObjectWriter{rawWriter: rawObjWriter},
 		},
@@ -32,6 +32,7 @@ import (
 	"github.com/kopia/kopia/repo/maintenance"
 	"github.com/pkg/errors"
 
+	"github.com/vmware-tanzu/velero/pkg/kopia"
 	"github.com/vmware-tanzu/velero/pkg/repository/udmrepo"
 	"github.com/vmware-tanzu/velero/pkg/repository/udmrepo/kopialib/backend"
 )
@@ -354,7 +355,7 @@ func (b *byteBufferReader) Seek(offset int64, whence int) (int64, error) {
 var funcGetParam = maintenance.GetParams
 
 func writeInitParameters(ctx context.Context, repoOption udmrepo.RepoOptions, logger logrus.FieldLogger) error {
-	r, err := openKopiaRepo(ctx, repoOption.ConfigFilePath, repoOption.RepoPassword, nil)
+	r, err := openKopiaRepo(ctx, repoOption.ConfigFilePath, repoOption.RepoPassword, &openOptions{repoLogger: kopia.RepositoryLogger(logger)})
 	if err != nil {
 		return err
 	}
@@ -471,15 +471,15 @@ func TestGetRepositoryStatus(t *testing.T) {
 }
 
 func TestWriteInitParameters(t *testing.T) {
-	var directRpo *repomocks.DirectRepository
+	var directRpo *repomocks.MockRepository
 	assertFullMaintIntervalEqual := func(expected, actual *maintenance.Params) bool {
 		return assert.Equal(t, expected.FullCycle.Interval, actual.FullCycle.Interval)
 	}
 	testCases := []struct {
 		name                 string
 		repoOptions          udmrepo.RepoOptions
-		returnRepo           *repomocks.DirectRepository
-		returnRepoWriter     *repomocks.DirectRepositoryWriter
+		returnRepo           *repomocks.MockRepository
+		returnRepoWriter     *repomocks.MockRepositoryWriter
 		repoOpen             func(context.Context, string, string, *repo.Options) (repo.Repository, error)
 		newRepoWriterError   error
 		replaceManifestError error
@@ -488,6 +488,8 @@
 		expectedReplaceManifestsParams *maintenance.Params
 		// allows for asserting only certain fields are set as expected
 		assertReplaceManifestsParams func(*maintenance.Params, *maintenance.Params) bool
+		mockNewWriter                bool
+		mockClientOptions            bool
 		expectedErr                  string
 	}{
 		{
@@ -524,7 +526,7 @@
 			getParam: func(context.Context, repo.Repository) (*maintenance.Params, error) {
 				return nil, errors.New("fake-get-param-error")
 			},
-			returnRepo:  new(repomocks.DirectRepository),
+			returnRepo:  repomocks.NewMockRepository(t),
 			expectedErr: "error getting existing maintenance params: fake-get-param-error",
 		},
 		{
@@ -541,6 +543,7 @@
 					Owner: "default@default",
 				}, nil
 			},
+			returnRepo: repomocks.NewMockRepository(t),
 		},
 		{
 			name: "existing param with different owner",
@@ -556,13 +559,15 @@
 					Owner: "fake-owner",
 				}, nil
 			},
-			returnRepo:       new(repomocks.DirectRepository),
-			returnRepoWriter: new(repomocks.DirectRepositoryWriter),
+			returnRepo:       repomocks.NewMockRepository(t),
+			returnRepoWriter: repomocks.NewMockRepositoryWriter(t),
 			expectedReplaceManifestsParams: &maintenance.Params{
 				FullCycle: maintenance.CycleParams{
 					Interval: udmrepo.NormalGCInterval,
 				},
 			},
+			mockNewWriter:                true,
+			mockClientOptions:            true,
 			assertReplaceManifestsParams: assertFullMaintIntervalEqual,
 		},
 		{
@@ -574,7 +579,8 @@
 			repoOpen: func(context.Context, string, string, *repo.Options) (repo.Repository, error) {
 				return directRpo, nil
 			},
-			returnRepo:         new(repomocks.DirectRepository),
+			returnRepo:         repomocks.NewMockRepository(t),
+			mockNewWriter:      true,
 			newRepoWriterError: errors.New("fake-new-writer-error"),
 			expectedErr:        "error to init write repo parameters: unable to create writer: fake-new-writer-error",
 		},
@@ -587,8 +593,10 @@
 			repoOpen: func(context.Context, string, string, *repo.Options) (repo.Repository, error) {
 				return directRpo, nil
 			},
-			returnRepo:           new(repomocks.DirectRepository),
-			returnRepoWriter:     new(repomocks.DirectRepositoryWriter),
+			returnRepo:           repomocks.NewMockRepository(t),
+			returnRepoWriter:     repomocks.NewMockRepositoryWriter(t),
+			mockNewWriter:        true,
+			mockClientOptions:    true,
 			replaceManifestError: errors.New("fake-replace-manifest-error"),
 			expectedErr:          "error to init write repo parameters: error to set maintenance params: put manifest: fake-replace-manifest-error",
 		},
@@ -603,8 +611,10 @@
 			repoOpen: func(context.Context, string, string, *repo.Options) (repo.Repository, error) {
 				return directRpo, nil
 			},
-			returnRepo:        new(repomocks.DirectRepository),
-			returnRepoWriter:  new(repomocks.DirectRepositoryWriter),
+			returnRepo:        repomocks.NewMockRepository(t),
+			returnRepoWriter:  repomocks.NewMockRepositoryWriter(t),
+			mockNewWriter:     true,
+			mockClientOptions: true,
 			expectedReplaceManifestsParams: &maintenance.Params{
 				FullCycle: maintenance.CycleParams{
 					Interval: udmrepo.FastGCInterval,
@@ -623,8 +633,10 @@
 			repoOpen: func(context.Context, string, string, *repo.Options) (repo.Repository, error) {
 				return directRpo, nil
 			},
-			returnRepo:        new(repomocks.DirectRepository),
-			returnRepoWriter:  new(repomocks.DirectRepositoryWriter),
+			returnRepo:        repomocks.NewMockRepository(t),
+			returnRepoWriter:  repomocks.NewMockRepositoryWriter(t),
+			mockNewWriter:     true,
+			mockClientOptions: true,
 			expectedReplaceManifestsParams: &maintenance.Params{
 				FullCycle: maintenance.CycleParams{
 					Interval: udmrepo.NormalGCInterval,
@@ -643,8 +655,9 @@
 			repoOpen: func(context.Context, string, string, *repo.Options) (repo.Repository, error) {
 				return directRpo, nil
 			},
-			returnRepo:       new(repomocks.DirectRepository),
-			returnRepoWriter: new(repomocks.DirectRepositoryWriter),
+			returnRepo:       repomocks.NewMockRepository(t),
+			returnRepoWriter: repomocks.NewMockRepositoryWriter(t),
+			mockNewWriter:    true,
 			expectedErr:      "error to init write repo parameters: invalid full maintenance interval option foo",
 		},
 	}
@@ -663,8 +676,14 @@
 		}
 
 		if tc.returnRepo != nil {
-			tc.returnRepo.On("NewWriter", mock.Anything, mock.Anything).Return(ctx, tc.returnRepoWriter, tc.newRepoWriterError)
-			tc.returnRepo.On("ClientOptions").Return(repo.ClientOptions{})
+			if tc.mockNewWriter {
+				tc.returnRepo.On("NewWriter", mock.Anything, mock.Anything).Return(ctx, tc.returnRepoWriter, tc.newRepoWriterError)
+			}
+
+			if tc.mockClientOptions {
+				tc.returnRepo.On("ClientOptions").Return(repo.ClientOptions{})
+			}
+
 			tc.returnRepo.On("Close", mock.Anything).Return(nil)
 		}
@@ -68,10 +68,10 @@ type RestorePVC struct {
 
 type CachePVC struct {
 	// StorageClass specifies the storage class for cache PVC
-	StorageClass string
+	StorageClass string `json:"storageClass,omitempty"`
 
-	// ResidentThreshold specifies the minimum size of the backup data to create cache PVC
-	ResidentThreshold int64
+	// ResidentThresholdInMB specifies the minimum size of the backup data to create cache PVC
+	ResidentThresholdInMB int64 `json:"residentThresholdInMB,omitempty"`
 }
 
 type NodeAgentConfigs struct {
@@ -140,7 +143,13 @@ func EnsureDeletePod(ctx context.Context, podGetter corev1client.CoreV1Interface
 func IsPodUnrecoverable(pod *corev1api.Pod, log logrus.FieldLogger) (bool, string) {
 	// Check the Phase field
 	if pod.Status.Phase == corev1api.PodFailed || pod.Status.Phase == corev1api.PodUnknown {
-		message := GetPodTerminateMessage(pod)
+		message := ""
+		if pod.Status.Message != "" {
+			message += pod.Status.Message + "/"
+		}
+
+		message += GetPodTerminateMessage(pod)
+
 		log.Warnf("Pod is in abnormal state %s, message [%s]", pod.Status.Phase, message)
 		return true, fmt.Sprintf("Pod is in abnormal state [%s], message [%s]", pod.Status.Phase, message)
 	}
@@ -269,7 +275,7 @@ func ToSystemAffinity(loadAffinities []*LoadAffinity) *corev1api.Affinity {
 }

 func DiagnosePod(pod *corev1api.Pod, events *corev1api.EventList) string {
-	diag := fmt.Sprintf("Pod %s/%s, phase %s, node name %s\n", pod.Namespace, pod.Name, pod.Status.Phase, pod.Spec.NodeName)
+	diag := fmt.Sprintf("Pod %s/%s, phase %s, node name %s, message %s\n", pod.Namespace, pod.Name, pod.Status.Phase, pod.Spec.NodeName, pod.Status.Message)

 	for _, condition := range pod.Status.Conditions {
 		diag += fmt.Sprintf("Pod condition %s, status %s, reason %s, message %s\n", condition.Type, condition.Status, condition.Reason, condition.Message)

@@ -925,9 +925,10 @@ func TestDiagnosePod(t *testing.T) {
 						Message: "fake-message-2",
 					},
 				},
+				Message: "fake-message-3",
 			},
 		},
-		expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\n",
+		expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node, message fake-message-3\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\n",
 	},
 	{
 		name: "pod with all info and empty event list",
@@ -955,10 +956,11 @@ func TestDiagnosePod(t *testing.T) {
 						Message: "fake-message-2",
 					},
 				},
+				Message: "fake-message-3",
 			},
 		},
 		events: &corev1api.EventList{},
-		expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\n",
+		expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node, message fake-message-3\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\n",
 	},
 	{
 		name: "pod with all info and events",
@@ -987,6 +989,7 @@ func TestDiagnosePod(t *testing.T) {
 						Message: "fake-message-2",
 					},
 				},
+				Message: "fake-message-3",
 			},
 		},
 		events: &corev1api.EventList{Items: []corev1api.Event{
@@ -1027,7 +1030,7 @@ func TestDiagnosePod(t *testing.T) {
 				Message: "message-6",
 			},
 		}},
-		expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\nPod event reason reason-3, message message-3\nPod event reason reason-6, message message-6\n",
+		expected: "Pod fake-ns/fake-pod, phase Pending, node name fake-node, message fake-message-3\nPod condition Initialized, status True, reason fake-reason-1, message fake-message-1\nPod condition PodScheduled, status False, reason fake-reason-2, message fake-message-2\nPod event reason reason-3, message message-3\nPod event reason reason-6, message message-6\n",
 	},
 }

@@ -17,9 +17,8 @@ Velero supports storage providers for both cloud-provider environments and on-pr

 ### Velero on Windows

-Velero does not officially support Windows. In testing, the Velero team was able to backup stateless Windows applications only. The File System Backup and backups of stateful applications or PersistentVolumes were not supported.
-
-If you want to perform your own testing of Velero on Windows, you must deploy Velero as a Windows container. Velero does not provide official Windows images, but its possible for you to build your own Velero Windows container image to use. Note that you must build this image on a Windows node.
+Velero supports backup and restore of Windows workloads, both stateless and stateful.
+Velero node-agent and data mover pods can run on Windows nodes. To keep compatibility with existing Velero plugins, the Velero server runs on Linux nodes only, so Velero requires at least one Linux node in the cluster. Velero provides Windows images for specific Windows versions. For more information, see [Backup Restore Windows Workloads][6].

 ## Install the CLI

@@ -71,3 +70,4 @@ Please refer to [this part of the documentation][5].
 [3]: overview-plugins.md
 [4]: customize-installation.md#install-an-additional-volume-snapshot-provider
 [5]: customize-installation.md#optional-velero-cli-configurations
+[6]: backup-restore-windows.md

@@ -42,6 +42,46 @@ A command to do this is `make new-changelog CHANGELOG_BODY="Changes you have mad

 If a PR does not warrant a changelog, the CI check for a changelog can be skipped by applying a `changelog-not-required` label on the PR. If you are making a PR on a release branch, you should still make a new file in the `changelogs/unreleased` folder on the release branch for your change.

+## AI-Generated Content
+
+We welcome contributions from all developers, including those who use AI tools to assist in their work. However, to maintain code quality and ensure contributions are accurate and appropriate, please follow these guidelines:
+
+### Using AI Assistance
+
+**Acceptable use:**
+- Using AI tools (like GitHub Copilot, ChatGPT, Claude, etc.) to generate scaffolding or boilerplate code
+- Getting AI assistance for writing unit tests
+- Using AI to help understand complex code patterns
+- AI-assisted debugging and problem-solving
+- Using AI to help with documentation writing
+
+**Requirements when using AI:**
+1. **Always review and verify** AI-generated content before submitting
+2. **Test thoroughly** - ensure the code works as expected in your environment
+3. **Verify technical accuracy** - check that all version numbers, configurations, and technical details are correct
+4. **Remove placeholders** - ensure there is no example or placeholder content
+5. **Understand the code** - be able to explain and defend your changes during code review
+6. **Disclose AI usage** - if a significant portion of your PR was AI-generated, mention it in the PR description
+
+### What to Avoid
+
+**Unacceptable practices:**
+- Submitting entirely AI-generated PRs or issues without review or verification
+- Including hallucinated information (false version numbers, non-existent APIs, etc.)
+- Copying AI-generated content with placeholder or example data
+- Submitting AI-generated issues describing problems you haven't actually experienced
+- Using AI to generate issues about features or bugs without verifying they exist
+
+### For Issues
+
+When creating issues with AI assistance:
+- Ensure the issue describes a **real problem** you have experienced
+- Verify all version numbers, error messages, and configurations are from your actual environment
+- Remove any AI-generated boilerplate or overly formal structure
+- Focus on clarity and accuracy over comprehensive formatting
+
+Issues that appear to be entirely AI-generated without proper verification may be labeled as `potential-ai-generated` and flagged for additional review.
+
 ## Copyright header

 Whenever a source code file is being modified, the copyright notice should be updated to our standard copyright notice. That is, it should read “Copyright the Velero contributors.”

@@ -16,7 +16,7 @@ A sample of cache PVC configuration as part of the ConfigMap would look like:
 ```json
 {
     "cachePVC": {
-        "thresholdInGB": 1,
+        "residentThresholdInMB": 1024,
         "storageClass": "sc-wffc"
     }
 }
@@ -29,7 +29,7 @@ kubectl create cm node-agent-config -n velero --from-file=<json file name>

 A must-have field in the configuration is `storageClass`, which tells Velero which storage class is used to provision the cache PVC. Velero relies on the Kubernetes dynamic provisioning process to provision the PVC; static provisioning is not supported.

-The cache PVC behavior could be further fine tuned through `thresholdInGB`. Its value is compared to the size of the backup, if the size is smaller than this value, no cache PVC would be created when restoring from the backup. This ensures that cache PVCs are not created in vain when the backup size is too small and can be accommodated in the data mover pods' root disk.
+The cache PVC behavior can be further fine-tuned through `residentThresholdInMB`. Its value is compared to the size of the backup; if the size is smaller than this value, no cache PVC is created when restoring from the backup. This ensures that cache PVCs are not created in vain when the backup is small enough to be accommodated in the data mover pods' root disk.

 This configuration decides whether and how to provision cache PVCs, but it doesn't decide their size. Instead, the size is decided by the specific backup repository. Specifically, Velero requests a cache limit from the backup repository and uses this limit to calculate the cache PVC size.
 The cache limit is decided by the backup repository itself. For the Kopia repository, if `cacheLimitMB` is specified in the backup repository configuration, its value is used; otherwise, a default limit (5 GB) applies.

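The threshold decision described above can be sketched as a simple size comparison. This is a hypothetical helper written only to illustrate the documented behavior — the function name and signature are not Velero's actual implementation:

```go
package main

import "fmt"

// needsCachePVC sketches the documented decision: a cache PVC is provisioned
// only when a storage class is configured (the must-have field) and the backup
// is at least residentThresholdInMB in size.
func needsCachePVC(backupSizeBytes, residentThresholdInMB int64, storageClass string) bool {
	if storageClass == "" {
		// Without a storage class Velero cannot dynamically provision the PVC.
		return false
	}
	return backupSizeBytes >= residentThresholdInMB*1024*1024
}

func main() {
	// A 512 MB backup stays below the 1024 MB threshold, so no cache PVC.
	fmt.Println(needsCachePVC(512*1024*1024, 1024, "sc-wffc")) // false
	// A 2 GB backup meets the threshold, so a cache PVC would be created.
	fmt.Println(needsCachePVC(2048*1024*1024, 1024, "sc-wffc")) // true
}
```

The actual PVC size is then derived separately from the repository's cache limit, as the paragraph above explains.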
@@ -77,19 +77,6 @@ data:
 				},
 				"keepLatestMaintenanceJobs": 1,
 				"loadAffinity": [
-					{
-						"nodeSelector": {
-							"matchExpressions": [
-								{
-									"key": "cloud.google.com/machine-family",
-									"operator": "In",
-									"values": [
-										"e2"
-									]
-								}
-							]
-						}
-					},
 					{
 						"nodeSelector": {
 							"matchExpressions": [
@@ -119,10 +106,10 @@ data:
 	}
 	EOF
 ```
-This sample showcases two affinity configurations:
-- matchLabels: maintenance job runs on nodes with label key `cloud.google.com/machine-family` and value `e2`.
-Notice: although loadAffinity is an array, Velero only takes the first element of the array.
+This sample showcases how to use the affinity configuration:
+- matchLabels: the maintenance job runs on nodes located in `us-central1-a`, `us-central1-b` and `us-central1-c`.
 The nodes matching one of the two conditions are selected.

 To create the configMap, users need to save something like the above sample to a json file and then run below command:
 ```

@@ -278,6 +278,9 @@ The policies YAML config file would look like this:
       storageClass:
         - gp2
         - standard
+      # pvc matches specific phase(s)
+      pvcPhase:
+        - Pending
     action:
       type: skip
   - conditions:
@@ -370,6 +373,7 @@ Currently, Velero supports the volume attributes listed below:
   - "5Gi" which is not supported and will fail validation of the configuration
 - storageClass: matching volumes with the specified `storageClass`, such as `gp2`, `ebs-sc` in EKS
 - volume sources: matching volumes that use the specified volume sources. Currently we support nfs or csi backend volume sources
+- pvcPhase: matching volumes based on the phase of their associated PVCs (Pending, Bound, Lost)

 Velero supported conditions and format listed below:
 - capacity

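The `pvcPhase` condition added above is a membership test: a volume matches when its PVC's phase appears in the configured list. A minimal sketch of that check, with hypothetical names (not the actual Velero code):

```go
package main

import "fmt"

// matchesPVCPhase sketches the pvcPhase condition: the volume matches if the
// PVC's phase equals any of the phases listed in the policy condition.
func matchesPVCPhase(pvcPhase string, condition []string) bool {
	for _, p := range condition {
		if p == pvcPhase {
			return true
		}
	}
	return false
}

func main() {
	// A policy listing Pending and Lost matches a Pending PVC but not a Bound one.
	fmt.Println(matchesPVCPhase("Pending", []string{"Pending", "Lost"})) // true
	fmt.Println(matchesPVCPhase("Bound", []string{"Pending", "Lost"}))   // false
}
```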
@@ -462,9 +466,60 @@ Velero supported conditions and format listed below:
         type: skip
 ```

+#### VolumePolicies rules
+- Velero already has many include or exclude filters. The volume policies are the final filters, applied after all other include or exclude filters in a backup workflow. So if you use a similar filter, such as the opt-in approach, to back up one pod volume but skip backup of the same pod volume in the volume policies, the volume will not be backed up, because the volume policies are the final filters that are applied.
+- If volume policies conflict with each other, the first matched policy is respected when many policies are defined.
+- pvc Phase
+
+  This condition filters volumes based on the phase of their associated PVCs. The condition is specified as a list of phases to match. The volume matches this condition if the PVC's phase matches any of the phases in the list. Supported phases are: `Pending`, `Bound`, and `Lost`.
+  ```yaml
+  pvcPhase:
+    - Pending
+  ```
+
+  Some examples:
+  - Skip Pending PVCs: Skip backup of volumes whose associated PVC is in the `Pending` phase (useful for PVCs that haven't been bound to a PV yet).
+    ```yaml
+    volumePolicies:
+      - conditions:
+          pvcPhase:
+            - Pending
+        action:
+          type: skip
+    ```
+  - Skip multiple phases: Skip backup of volumes whose associated PVC is either in the `Pending` or `Lost` phase.
+    ```yaml
+    volumePolicies:
+      - conditions:
+          pvcPhase:
+            - Pending
+            - Lost
+        action:
+          type: skip
+    ```
+  - Backup only Bound PVCs: Only back up volumes whose associated PVC is in the `Bound` phase.
+    ```yaml
+    volumePolicies:
+      - conditions:
+          pvcPhase:
+            - Bound
+        action:
+          type: snapshot
+    ```
+  - Combine with other conditions: You can combine PVC phase conditions with other conditions like storage class or labels.
+    ```yaml
+    volumePolicies:
+      - conditions:
+          pvcPhase:
+            - Pending
+          storageClass:
+            - gp2
+        action:
+          type: skip
+    ```
+
 ### Resource policies rules
 - Velero already has lots of include or exclude filters. the resource policies are the final filters after others include or exclude filters in one backup processing workflow. So if use a defined similar filter like the opt-in approach to backup one pod volume but skip backup of the same pod volume in resource policies, as resource policies are the final filters that are applied, the volume will not be backed up.
 - If volume resource policies conflict with themselves the first matched policy will be respected when many policies are defined.

 #### VolumePolicy priority with existing filters
 * [Includes filters](#includes) and [Excludes filters](#excludes) have the highest priority. The filtered-out resources by them cannot reach to the VolumePolicy.

@@ -76,7 +76,7 @@ toc:
     - page: Data Movement Node Selection Configuration
       url: /data-movement-node-selection
     - page: Data Movement Cache PVC Configuration
-      url: /data-movement-cache-volume.md
+      url: /data-movement-cache-volume
     - page: Node-agent Concurrency
      url: /node-agent-concurrency
 - title: Plugins

@@ -316,12 +316,14 @@ func cleanVSpherePluginConfig(c clientset.Interface, ns, secretName, configMapNa
 // version can be in the format of 'main', 'release-x.y(-dev)', or 'vX.Y(.Z)'
 func ValidateVeleroVersion(version string) error {
 	mainRe := regexp.MustCompile(`^main$`)
-	releaseRe := regexp.MustCompile(`^release-(\d)\.(\d)(-dev)?$`)
+	releaseRe := regexp.MustCompile(`^release-(\d+)\.(\d+)(-dev)?$`)
 	tagRe := regexp.MustCompile(`^v(\d+)\.(\d+)(\.\d+)?$`)

 	if mainRe.MatchString(version) || releaseRe.MatchString(version) || tagRe.MatchString(version) {
 		return nil
 	}

+	fmt.Println("Invalid Velero version:", version)
 	return fmt.Errorf("invalid Velero version: %s, Velero version must be 'main', 'release-x.y(-dev)', or 'vX.Y.Z'", version)
 }

@@ -331,13 +333,14 @@ func ValidateVeleroVersion(version string) error {
 // return true if version is no older than targetVersion
 func VersionNoOlderThan(version string, targetVersion string) (bool, error) {
 	mainRe := regexp.MustCompile(`^main$`)
-	releaseRe := regexp.MustCompile(`^release-(\d)\.(\d)(-dev)?$`)
-	tagRe := regexp.MustCompile(`^v(\d)\.(\d)(\.\d+)?$`)
+	releaseRe := regexp.MustCompile(`^release-(\d+)\.(\d+)(-dev)?$`)
+	tagRe := regexp.MustCompile(`^v(\d+)\.(\d+)(\.\d+)?$`)

 	if err := ValidateVeleroVersion(version); err != nil {
 		return false, err
 	}
 	if !tagRe.MatchString(targetVersion) && !mainRe.MatchString(targetVersion) {
+		fmt.Printf("targetVersion %s is invalid. it must be in the format of 'main', or 'vX.Y.(Z)'.\n", targetVersion)
 		return false, fmt.Errorf("targetVersion is invalid. it must be in the format of 'main', or 'vX.Y.(Z)'.")
 	}

@@ -383,7 +386,8 @@ func VersionNoOlderThan(version string, targetVersion string) (bool, error) {
 		}
 	}

-	return false, fmt.Errorf("unknown error in VersionNoOlderThan")
+	fmt.Printf("Unknown version %s in VersionNoOlderThan\n", version)
+	return false, fmt.Errorf("unknown version in VersionNoOlderThan: %s", version)
 }

 func installVeleroServer(
