Compare commits


11 Commits

Author SHA1 Message Date
Daniel Jiang
4729274d07 Merge pull request #4385 from ywk253100/211122_rc
Add change log for 1.7.1
2021-11-22 17:30:00 +08:00
Wenkai Yin(尹文开)
cdf3acab5a Add change log for 1.7.1
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2021-11-22 15:36:14 +08:00
Daniel Jiang
80b43f8f40 Merge pull request #4358 from ywk253100/211117_pager
[cherry-pick]fix buggy pager func
2021-11-17 16:05:28 +08:00
Alay Patel
bf10709f98 add 4358 changelog
Signed-off-by: Alay Patel <alay1431@gmail.com>
2021-11-17 15:00:40 +08:00
Alay Patel
8c6ed31528 - fix buggy pager func
Fix the paging function to use the list options passed by the pager

The client-go pager sets the Limit option on the list call
to paginate the request[1]. This PR fixes the paging function
to use the options passed by the pager instead of shadowed options.
This is required for the pagination to work correctly.

- simplify the pager list implementation by using pager.List()
The List() function already implements a lot of the logic that was
needed for paging here, using it simplifies the code.

1. 3f40906dd8/staging/src/k8s.io/client-go/tools/pager/pager.go (L219)

Signed-off-by: Alay Patel <alay1431@gmail.com>
2021-11-17 14:58:13 +08:00
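The shadowing bug described in the commit message above can be illustrated with a self-contained Go sketch (simplified, hypothetical types rather than the real client-go `pager`/`metav1.ListOptions` API): a page function that overwrites the pager's per-page options with ones captured from an outer scope never sees the Limit or Continue values the pager sets.

```go
package main

import "fmt"

// ListOptions mimics the two fields the real client-go pager adjusts per page.
type ListOptions struct {
	Limit    int64
	Continue string
}

// runPager imitates the pager loop: it sets Limit and a Continue token and
// expects pageFn to honor them on every call.
func runPager(pageFn func(opts ListOptions) string) []string {
	var pages []string
	opts := ListOptions{Limit: 2}
	for i := 0; i < 3; i++ {
		pages = append(pages, pageFn(opts))
		opts.Continue = fmt.Sprintf("page-%d", i+1)
	}
	return pages
}

// outerOpts stands in for options captured from an enclosing scope.
var outerOpts = ListOptions{}

// buggyPage discards the pager's options in favor of the captured ones,
// so Limit/Continue never take effect and every call lists from the start.
func buggyPage(opts ListOptions) string {
	opts = outerOpts // shadowing: pagination settings are lost here
	return fmt.Sprintf("limit=%d cont=%q", opts.Limit, opts.Continue)
}

// fixedPage uses the options the pager passed in, as the fix does.
func fixedPage(opts ListOptions) string {
	return fmt.Sprintf("limit=%d cont=%q", opts.Limit, opts.Continue)
}

func main() {
	fmt.Println(runPager(buggyPage))
	fmt.Println(runPager(fixedPage))
}
```

Run it and the buggy variant reports `limit=0` with an empty continue token on every page, while the fixed variant sees the pager's `limit=2` and the advancing continue tokens.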
Wenkai Yin(尹文开)
37a712ef2f Fix CVE-2020-29652 and CVE-2020-26160 (#4315)
Bump up restic to v0.12.1 to fix CVE-2020-26160.
Bump up module "github.com/vmware-tanzu/crash-diagnostics" to v0.3.7 to fix CVE-2020-29652.
The "github.com/vmware-tanzu/crash-diagnostics" update bumps client-go to v0.22.2, which introduces several breaking changes; this commit updates the related code as well

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2021-11-09 17:04:25 -08:00
Frangipani Gold
1da212b0e3 Namespace validation now allows asterisks and empty string (#4316)
Validation allows empty string namespace

Signed-off-by: F. Gold <fgold@vmware.com>
2021-11-08 09:34:05 -08:00
Daniel Jiang
9996dc5ce9 Comment in Dockerfile to explain the digest of base image (#4224)
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2021-10-08 08:57:29 -04:00
Wenkai Yin(尹文开)
9e52260568 Merge pull request #4182 from ywk253100/210922_snapshot_cherrypick
Specify the "--snapshot-volumes=false" option explicitly when running backup with Restic
2021-09-22 22:00:31 +08:00
Wenkai Yin(尹文开)
4863ff4119 Specify the "--snapshot-volumes=false" option explicitly when running backup with Restic
If "--snapshot-volumes=false" isn't specified explicitly, the vSphere plugin will always take snapshots of the volumes even if "--default-volumes-to-restic" is specified
This can be removed if the logic of vSphere plugin changes

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2021-09-22 21:50:54 +08:00
Daniel Jiang
3327d209f7 Pin the base image for v1.7 (#4180)
To improve the reproducibility of the images of velero, this commit pins
the golang and distroless images to specific tag and digest.

Signed-off-by: Daniel Jiang <jiangd@vmware.com>
2021-09-22 07:50:07 -04:00
84 changed files with 642 additions and 2669 deletions

View File

@@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM --platform=$BUILDPLATFORM golang:1.16 as builder-env
FROM --platform=$BUILDPLATFORM golang:1.16.8 as builder-env
ARG GOPROXY
ARG PKG
@@ -50,7 +50,8 @@ RUN mkdir -p /output/usr/bin && \
go build -o /output/${BIN} \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN}
FROM gcr.io/distroless/base-debian10:nonroot
# The digest of tag "nonroot" at the time of v1.7.0
FROM gcr.io/distroless/base-debian10@sha256:a74f307185001c69bc362a40dbab7b67d410a872678132b187774fa21718fa13
LABEL maintainer="Nolan Brubaker <brubakern@vmware.com>"

View File

@@ -26,8 +26,7 @@
| Feature Area | Lead |
| ----------------------------- | :---------------------: |
| Architect | Dave Smith-Uchida (dsu-igeek) |
| Technical Lead | Daniel Jiang (reasonerjt) |
| Technical Lead | Dave Smith-Uchida (dsu-igeek) |
| Kubernetes CSI Liaison | |
| Deployment | JenTing Hsiao (jenting) |
| Community Management | Jonas Rosland (jonasrosland) |

View File

@@ -15,28 +15,33 @@ We work with and rely on community feedback to focus our efforts to improve Vele
The following table includes the current roadmap for Velero. If you have any questions or would like to contribute to Velero, please attend a [community meeting](https://velero.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors that will help us reduce technical, automation, and documentation debt.
Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Velero.
`Last Updated: October 2021`
`Last Updated: July 2021`
#### 1.8.0 Roadmap (to be delivered January/February 2022)
#### 1.7.0 Roadmap (to be delivered early fall)
The release roadmap is split into Core items that are required for the release and desired items that may slip the release.
|Issue|Description|Timeline|Notes|
|---|---|---|---|
|[4108](https://github.com/vmware-tanzu/velero/issues/4108), [4109](https://github.com/vmware-tanzu/velero/issues/4109)|Solution for CSI - Azure and AWS|2022 H1|Currently, Velero plugins for AWS and Azure cannot back up persistent volumes that were provisioned using the CSI driver. This will fix that.|
|[3229](https://github.com/vmware-tanzu/velero/issues/3229),[4112](https://github.com/vmware-tanzu/velero/issues/4112)|Moving data mover functionality from the Velero Plugin for vSphere into Velero proper|2022 H1|This work is a precursor to decoupling the Astrolabe snapshotting infrastructure.|
|[3533](https://github.com/vmware-tanzu/velero/issues/3533)|Upload Progress Monitoring|2022 H1|Finishing up the work done in the 1.7 timeframe. The data mover work depends on this.|
|[1975](https://github.com/vmware-tanzu/velero/issues/1975)|Test dual stack mode|2022 H1|We already tested IPv6, but we want to confirm that dual stack mode works as well.|
|[2082](https://github.com/vmware-tanzu/velero/issues/2082)|Delete Backup CRs on removing target location. |2022 H1||
|[3516](https://github.com/vmware-tanzu/velero/issues/3516)|Restore issue with MutatingWebhookConfiguration v1beta1 API version|2022 H1||
|[2308](https://github.com/vmware-tanzu/velero/issues/2308)|Restoring nodePort service that has nodePort preservation always fails if service already exists in the namespace|2022 H1||
|[4115](https://github.com/vmware-tanzu/velero/issues/4115)|Support for multiple set of credentials for VolumeSnapshotLocations|2022 H1||
|[1980](https://github.com/vmware-tanzu/velero/issues/1980)|Velero triggers backup immediately for scheduled backups|2022 H1||
|[4067](https://github.com/vmware-tanzu/velero/issues/4067)|Pre and post backup and restore hooks|2022 H1||
|[3742](https://github.com/vmware-tanzu/velero/issues/3742)|Carvel packaging for Velero for vSphere|2022 H1|AWS and Azure have been completed already.|
|[3285](https://github.com/vmware-tanzu/velero/issues/3285)|Design doc for Velero plugin versioning|2022 H1||
|[4231](https://github.com/vmware-tanzu/velero/issues/4231)|Technical health (prioritizing giving developers confidence and saving developers time)|2022 H1|More automated tests (especially the pre-release manual tests) and more automation of the running of tests.|
|[4110](https://github.com/vmware-tanzu/velero/issues/4110)|Solution for CSI - GCP|2022 H1|Currently, the Velero plugin for GCP cannot back up persistent volumes that were provisioned using the CSI driver. This will fix that.|
|[3742](https://github.com/vmware-tanzu/velero/issues/3742)|Carvel packaging for Velero for restic|2022 H1|AWS and Azure have been completed already.|
|[3454](https://github.com/vmware-tanzu/velero/issues/3454),[4134](https://github.com/vmware-tanzu/velero/issues/4134),[4135](https://github.com/vmware-tanzu/velero/issues/4135)|Kubebuilder tech debt|2022 H1||
|[4111](https://github.com/vmware-tanzu/velero/issues/4111)|Ignore items returned by ItemSnapshotter.AlsoHandles during backup|2022 H1|This will enable backup of complex objects, because we can then tell Velero to ignore things that were already backed up when Velero was previously called recursively.|
##### Core items
The top priority of 1.7 is to increase the technical health of Velero and be more efficient with Velero developer time by streamlining the release process and automating and expanding the E2E test suite.
Other work may make it into the 1.8 release, but this is the work that will be prioritized first.
|Issue|Description|
|---|---|
||Streamline release process|
||Automate the running of the E2E tests|
||Convert pre-release manual tests to automated E2E tests|
|[3493](https://github.com/vmware-tanzu/velero/issues/3493)|[Carvel](https://github.com/vmware-tanzu/velero/issues/3493) based installation (in addition to the existing *velero install* CLI).|
|[675](https://github.com/vmware-tanzu/velero/issues/675)|Velero command to generate debugging information. Will integrate with [Crashd - Crash Diagnostics](https://github.com/vmware-tanzu/velero/issues/675)|
|[3285](https://github.com/vmware-tanzu/velero/issues/3285)|Design doc for Velero plugin versioning|
|[1975](https://github.com/vmware-tanzu/velero/issues/1975)|IPV6 support|
|[3533](https://github.com/vmware-tanzu/velero/issues/3533)|Upload Progress Monitoring|
|[3500](https://github.com/vmware-tanzu/velero/issues/3500)|Use distroless containers as a base|
##### Items formerly in 1.7 that will slip due to staffing changes
|Issue|Description|
|---|---|
|[3536](https://github.com/vmware-tanzu/velero/issues/3536)|Manifest for backup/restore|
|[2066](https://github.com/vmware-tanzu/velero/issues/2066)|CSI Snapshots GA|
|[3535](https://github.com/vmware-tanzu/velero/issues/3535)|Design doc for multiple cluster support|
|[2922](https://github.com/vmware-tanzu/velero/issues/2922)|Plugin timeouts|
|[3531](https://github.com/vmware-tanzu/velero/issues/3531)|Test plan for Velero|

View File

@@ -16,7 +16,7 @@ k8s_yaml([
# default values
settings = {
"default_registry": "docker.io/velero",
"default_registry": "",
"enable_restic": False,
"enable_debug": False,
"debug_continue_on_start": True, # Continue the velero process by default when in debug mode
@@ -90,14 +90,14 @@ def get_debug_flag():
# Set up a local_resource build of the Velero binary. The binary is written to _tiltbuild/velero.
local_resource(
"velero_server_binary",
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild;PKG=. BIN=velero GOOS=linux GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild ' + get_debug_flag() + ' REGISTRY=' + settings.get("default_registry") + ' ./hack/build.sh',
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild;PKG=. BIN=velero GOOS=linux GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild ' + get_debug_flag() + ' ./hack/build.sh',
deps = ["cmd", "internal", "pkg"],
ignore = ["pkg/cmd"],
)
local_resource(
"velero_local_binary",
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/local;PKG=. BIN=velero GOOS=' + local_goos + ' GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild/local ' + get_debug_flag() + ' REGISTRY=' + settings.get("default_registry") + ' ./hack/build.sh',
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/local;PKG=. BIN=velero GOOS=' + local_goos + ' GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild/local ' + get_debug_flag() + ' ./hack/build.sh',
deps = ["internal", "pkg/cmd"],
)

View File

@@ -1,3 +1,23 @@
## v1.7.1
### 2021-11-22
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.7.1
### Container Image
`velero/velero:v1.7.1`
### Documentation
https://velero.io/docs/v1.7/
### Upgrading
https://velero.io/docs/v1.7/upgrade-to-1.7/
### All changes
* fix buggy pager func (#4358, @alaypatel07)
* Fix CVE-2020-29652 and CVE-2020-26160 (#4315, @ywk253100)
## v1.7.0
### 2021-09-07

View File

@@ -1 +0,0 @@
Add upgrade test in E2E test

View File

@@ -1 +0,0 @@
Verify group before treating resource as cohabitating

View File

@@ -1 +0,0 @@
Fix plugins incompatible issue in upgrade test

View File

@@ -1 +0,0 @@
Refine tag-release.sh to align with change in release process

View File

@@ -1 +0,0 @@
Fix CVE-2020-29652 and CVE-2020-26160

View File

@@ -1 +0,0 @@
Don't create a backup immediately after creating a schedule

View File

@@ -1,122 +0,0 @@
# `velero debug` command for gathering troubleshooting information
## Abstract
To simplify the communication between velero users and developers, this document proposes the `velero debug` command to generate a tarball including the logs needed for debugging.
Github issue: https://github.com/vmware-tanzu/velero/issues/675
## Background
Gathering information to troubleshoot a Velero deployment is currently spread across multiple commands, and is not very efficient. Logs for the Velero server itself are accessed via a kubectl logs command, while information on specific backups or restores are accessed via a Velero subcommand. Restic logs are even more complicated to retrieve, since one must gather logs for every instance of the daemonset, and there's currently no good mechanism to locate which node a particular restic backup ran against.
A dedicated subcommand can lower this effort and reduce back-and-forth between user and developer for collecting the logs.
## Goals
- Enable efficient log collection for Velero and associated components, like plugins and restic.
## Non Goals
- Collecting logs for components that do not belong to Velero, such as the storage service.
- Automated log analysis.
## High-Level Design
With the introduction of the new command `velero debug`, the command would download all of the following information:
- velero deployment logs
- restic DaemonSet logs
- plugin logs
- All the resources in the group `velero.io` that are created such as:
- Backup
- Restore
- BackupStorageLocation
- PodVolumeBackup
- PodVolumeRestore
- *etc ...*
- Log of the backup and restore, if specified in the param
A project called `crash-diagnostics` (or `crashd`) (https://github.com/vmware-tanzu/crash-diagnostics) implements the Kubernetes API queries and provides a Starlark scripting language to abstract the details and collect the information into a local copy. It can be used as a standalone CLI executing a Starlark script file.
With the capability of embedding files in Go 1.16, we can define a Starlark script gathering the necessary information, embed the script at build time, and have the `velero debug` command invoke `crashd`, passing in the script's text contents.
## Detailed Design
### Triggering the script
The Starlark script to be called by crashd:
```python
def capture_backup_logs(cmd, namespace):
if args.backup:
log("Collecting log and information for backup: {}".format(args.backup))
backupDescCmd = "{} --namespace={} backup describe {} --details".format(cmd, namespace, args.backup)
capture_local(cmd=backupDescCmd, file_name="backup_describe_{}.txt".format(args.backup))
backupLogsCmd = "{} --namespace={} backup logs {}".format(cmd, namespace, args.backup)
capture_local(cmd=backupLogsCmd, file_name="backup_{}.log".format(args.backup))
def capture_restore_logs(cmd, namespace):
if args.restore:
log("Collecting log and information for restore: {}".format(args.restore))
restoreDescCmd = "{} --namespace={} restore describe {} --details".format(cmd, namespace, args.restore)
capture_local(cmd=restoreDescCmd, file_name="restore_describe_{}.txt".format(args.restore))
restoreLogsCmd = "{} --namespace={} restore logs {}".format(cmd, namespace, args.restore)
capture_local(cmd=restoreLogsCmd, file_name="restore_{}.log".format(args.restore))
ns = args.namespace if args.namespace else "velero"
output = args.output if args.output else "bundle.tar.gz"
cmd = args.cmd if args.cmd else "velero"
# Working dir for writing during script execution
crshd = crashd_config(workdir="./velero-bundle")
set_defaults(kube_config(path=args.kubeconfig, cluster_context=args.kubecontext))
log("Collecting velero resources in namespace: {}". format(ns))
kube_capture(what="objects", namespaces=[ns], groups=['velero.io'])
capture_local(cmd="{} version -n {}".format(cmd, ns), file_name="version.txt")
log("Collecting velero deployment logs in namespace: {}". format(ns))
kube_capture(what="logs", namespaces=[ns])
capture_backup_logs(cmd, ns)
capture_restore_logs(cmd, ns)
archive(output_file=output, source_paths=[crshd.workdir])
log("Generated debug information bundle: {}".format(output))
```
The sample command to trigger the script via crashd:
```shell
./crashd run ./velero.cshd --args
'backup=harbor-backup-2nd,namespace=velero,basedir=,restore=,kubeconfig=/home/.kube/minikube-250-224/config,output='
```
To trigger the script in `velero debug`, in the package `pkg/cmd/cli/debug` a struct `option` will be introduced
```go
type option struct {
// currCmd the velero command
currCmd string
// workdir for crashd will be $baseDir/velero-debug
baseDir string
// the namespace where velero server is installed
namespace string
// the absolute path for the log bundle to be generated
outputPath string
// the absolute path for the kubeconfig file that will be read by crashd for calling K8S API
kubeconfigPath string
// the kubecontext to be used for calling K8S API
kubeContext string
// optional, the name of the backup resource whose log will be packaged into the debug bundle
backup string
// optional, the name of the restore resource whose log will be packaged into the debug bundle
restore string
// optional, it controls whether to print the debug log messages when calling crashd
verbose bool
}
```
The code will consolidate the input parameters and execution context of the `velero` CLI to form the option struct, which can be transformed into the `argsMap` that can be used when calling the func `exec.Execute` in `crashd`:
https://github.com/vmware-tanzu/crash-diagnostics/blob/v0.3.4/exec/executor.go#L17
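For illustration, a minimal sketch of that consolidation step, assuming the argument names match the script's `args.*` lookups above (the real crashd argument-map type and `exec.Execute` signature may differ):

```go
package main

import "fmt"

// option mirrors the struct proposed above.
type option struct {
	currCmd        string
	baseDir        string
	namespace      string
	outputPath     string
	kubeconfigPath string
	kubeContext    string
	backup         string
	restore        string
	verbose        bool
}

// argsMap flattens the option struct into the string map that would be handed
// to crashd. Empty values are still included so the Starlark script's
// `args.x if args.x else default` fallbacks can fire; verbose is consumed by
// the CLI itself, not the script (an assumption of this sketch).
func (o option) argsMap() map[string]string {
	return map[string]string{
		"cmd":         o.currCmd,
		"basedir":     o.baseDir,
		"namespace":   o.namespace,
		"output":      o.outputPath,
		"kubeconfig":  o.kubeconfigPath,
		"kubecontext": o.kubeContext,
		"backup":      o.backup,
		"restore":     o.restore,
	}
}

func main() {
	o := option{currCmd: "velero", namespace: "velero", backup: "harbor-backup-2nd"}
	fmt.Println(o.argsMap()["backup"], o.argsMap()["namespace"])
}
```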
## Alternatives Considered
The collection could be done via the Kubernetes client-go API, but such an integration is not trivial to implement; therefore, `crashd` is the preferred approach.
## Security Considerations
- The Starlark script will be embedded into the velero binary, and the byte slice will be passed to the `exec.Execute` func directly, so there's little risk that the script will be modified before being executed.
## Compatibility
As the `crashd` project evolves, the behavior of the internal functions used in the Starlark script may change. We'll ensure the correctness of the script via regular E2E tests.
## Implementation
1. Bump up to use Go v1.16 to compile velero
2. Embed the starlark script
3. Implement the `velero debug` sub-command to call the script
4. Add E2E test case
## Open Questions
- **Command dependencies:** In the Starlark script, for collecting version info and backup logs, it calls the `velero backup logs` and `velero version`, which makes the call stack like velero debug -> crashd -> velero xxx. We need to make sure this works under different PATH settings.
- **Progress and error handling:** The log collection may take a relatively long time, so log messages should be printed to indicate the progress when different items are being downloaded and packaged. Additionally, when an error happens, `crashd` may omit some errors, so before the script is executed we'll do some validation and make sure the `debug` command fails early if some parameters are incorrect.

View File

@@ -1,219 +0,0 @@
# Object Graph Manifest for Velero
## Abstract
Currently, Velero does not have a complete manifest of everything in the backup, aside from the backup tarball itself.
This change introduces a new data structure to be stored with a backup in object storage which will allow for more efficient operations in reporting of what a backup contains.
Additionally, this manifest should enable advancements in Velero's features and architecture, enabling dry-run support, concurrent backup and restore operations, and reliable restoration of complex applications.
## Background
Right now, Velero backs up items one at a time, sorted by API Group and namespace.
It also restores items one at a time, using the restoreResourcePriorities flag to indicate which order API Groups should have their objects restored first.
While this does work currently, it presents challenges for more complex applications that have their dependencies in the form of a graph rather than strictly linear.
For example, Cluster API clusters are a set of complex Kubernetes objects that require that the "root" objects are restored first, before their "leaf" objects.
If a Cluster that a ClusterResourceSetBinding refers to does not exist, then a restore of the CAPI cluster will fail.
Additionally, Velero does not have a reliable way to communicate what objects will be affected in a backup or restore operation without actually performing the operation.
This complicates dry-run tasks, because a user must simply perform the action without knowing what will be touched.
It also complicates allowing backups and restores to run in parallel, because there is currently no way to know if a single Kubernetes object is included in multiple backups or restores, which can lead to unreliability, deadlocking, and race conditions were Velero made to be more concurrent today.
## Goals
- Introduce a manifest data structure that defines the contents of a backup.
- Store the manifest data into object storage alongside existing backup data.
## Non Goals
This proposal seeks to enable, but not define, the following.
- Implementing concurrency beyond what already exists in Velero.
- Implementing a dry-run feature.
- Implementing a new restore ordering procedure.
While the data structure should take these scenarios into account, they will not be implemented alongside it.
## High-Level Design
To uniquely identify a Kubernetes object within a cluster or backup, the following fields are sufficient:
- API Group and Version (example: backup.velero.io/v1)
- Namespace
- Name
- Labels
These criteria cover the majority of Velero's inclusion and exclusion logic.
However, some additional fields enable further use cases.
- Owners, which are other Kubernetes objects that have some relationship to this object. They may be strict or soft dependencies.
- Annotations, which provide extra metadata about the object that might be useful for other programs to consume.
- UUID generated by Kubernetes. This is useful in defining Owner relationships, providing a single, immutable key to find an object. This is _not_ considered at restore time, only internally for defining links.
All of this information already exists within a Velero backup's tarball of resources, but extracting such data is inefficient.
The entire tarball must be downloaded and extracted, and the JSON within it parsed to read labels, owners, annotations, and a UUID.
The rest of the information is encoded in the file system structure within the Velero backup tarball.
While doable, this is heavyweight in terms of time and potentially memory.
Instead, this proposal suggests adding a new manifest structure that is kept alongside the backup tarball.
This structure would contain the above fields only, and could be used to perform inclusion/exclusion logic on a backup, select a resource from within a backup, and do set operations over backup or restore contents to identify overlapping resources.
Here are some use cases that this data structure should enable, that have been difficult to implement prior to its existence:
- A dry-run operation on backup, informing the user what would be selected if they were to perform the operation.
A manifest could be created and saved, allowing for a user to do a dry-run, then accept it to perform the backup.
Restore operations can be treated similarly.
- Efficient, non-overlapping parallelization of backup and restore operations.
By building or reading a manifest before performing a backup or restore, Velero can determine if there are overlapping resources.
If there are no overlaps, the operations can proceed in parallel.
If there are overlaps, the operations can proceed serially.
- Graph-based restores for non-linear dependencies.
Not all resources in a Kubernetes cluster can be defined in a strict, linear way.
They may have multiple owners, and writing BackupItemActions or RestoreItemActions to simply return a chain of owners is not an efficient way to support the many Kubernetes operators/controllers being written.
Instead, by having a manifest with enough information, Velero can build a discrete list that ensures dependencies are restored before their dependents, with less input from plugin authors.
## Detailed Design
The Manifest data structure would look like this, in Go type structure:
```golang
// NamespacedItems maps a given namespace to all of its contained items.
type NamespacedItems map[string]*Item
// KindNamespaces maps an API group/version to a map of namespaces and their items.
type KindNamespaces map[string]NamespacedItems
type Manifest struct {
// Kinds holds the top level map of all resources in a manifest.
Kinds KindNamespaces
// Index is used to look up an individual item quickly based on UUID.
// This enables fetching owners out of the maps more efficiently at the cost of memory space.
Index map[string]*Item
}
// Item represents a Kubernetes resource within a backup based on its selectable criteria.
// It is not the whole Kubernetes resource as retrieved from the API server, but rather a collection of important fields needed for filtering.
type Item struct {
// Kubernetes API group which this Item belongs to.
// Could be a core resource, or a CustomResourceDefinition.
APIGroup string
// Version of the APIGroup that the Item belongs to.
APIVersion string
// Kubernetes namespace which contains this item.
// Empty string for cluster-level resource.
Namespace string
// Item's given name.
Name string
// Map of labels that the Item had at backup time.
Labels map[string]string
// Map of annotations that the Item had at Backup time.
// Useful for plugins that may decide to process only Items with specific annotations.
Annotations map[string]string
// Owners is a list of UUIDs to other items that own or refer to this item.
Owners []string
// Manifest is a pointer to the Manifest in which this object is contained.
// Useful for getting access to things like the Manifest.Index map.
Manifest *Manifest
}
```
In addition to the new types, the following Go interfaces would be provided for convenience.
```golang
type Itemer interface {
// Returns the Item as a string, following the current Velero backup version 1.1.0 tarball structure format.
// <APIGroup>/<Namespace>/<APIVersion>/<name>.json
String() string
// Owners returns a slice of realized Items that own or refer to the current Item.
// Useful for building out a full graph of Items to restore.
// Will use the UUIDs in Item.Owners to look up the owner Items in the Manifest.
Owners() []*Item
// Kind returns the Kind of an object, which is a combination of the APIGroup and APIVersion.
// Useful for verifying the needed CustomResourceDefinition exists before actually restoring this Item.
Kind() *Item
// Children returns a slice of all Items that refer to this item as an Owner.
Children() []*Item
}
// This error type is being created in order to make reliable sentinel errors.
// See https://dave.cheney.net/2019/06/10/constant-time for more details.
type ManifestError string
func (e ManifestError) Error() string {
return string(e)
}
const ItemAlreadyExists = ManifestError("item already exists in manifest")
type Manifester interface {
// Set returns the entire list of resources as a set of strings (using Itemer.String).
// This is useful for comparing two manifests and determining if they have any overlapping resources.
// In the future, when implementing concurrent operations, this can be used as a sanity check to ensure resources aren't being backed up or restored by two operations at once.
Set() sets.String
// Adds an item to the appropriate APIGroup and Namespace within a Manifest
// Returns (true, nil) if the Item is successfully added to the Manifest,
// Returns (false, ItemAlreadyExists) if the Item is already in the Manifest.
Add(*Item) (bool, error)
}
```
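A self-contained sketch of how `Itemer.String` and the set-based overlap check could fit together (a plain `map[string]bool` stands in for `sets.String`, and the nested `KindNamespaces` maps are collapsed to a flat slice for brevity):

```go
package main

import "fmt"

// Item carries just the fields needed to build the backup-path key.
type Item struct {
	APIGroup, APIVersion, Namespace, Name string
}

// String follows the tarball layout described above:
// <APIGroup>/<Namespace>/<APIVersion>/<name>.json
func (i Item) String() string {
	return fmt.Sprintf("%s/%s/%s/%s.json", i.APIGroup, i.Namespace, i.APIVersion, i.Name)
}

// Manifest here is just a flat list; the proposal's nested maps are elided.
type Manifest struct{ Items []Item }

// Set returns the manifest's contents as a set of path strings.
func (m Manifest) Set() map[string]bool {
	s := make(map[string]bool, len(m.Items))
	for _, it := range m.Items {
		s[it.String()] = true
	}
	return s
}

// Overlaps reports whether two manifests share any item -- the check that
// would gate running two backup/restore operations in parallel.
func Overlaps(a, b Manifest) bool {
	bs := b.Set()
	for k := range a.Set() {
		if bs[k] {
			return true
		}
	}
	return false
}

func main() {
	a := Manifest{Items: []Item{{APIGroup: "velero.io", APIVersion: "v1", Namespace: "velero", Name: "backup-1"}}}
	b := Manifest{Items: []Item{{APIGroup: "velero.io", APIVersion: "v1", Namespace: "velero", Name: "backup-1"}}}
	fmt.Println(Overlaps(a, b))
}
```

With no shared keys, `Overlaps` returns false and the two operations could run concurrently; with any shared key, they would be serialized.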
### Serialization
The entire `Manifest` should be serialized into the `manifest.json` file within the object storage for a single backup.
It is possible that this file could also be compressed for space efficiency.
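A minimal sketch of that serialization step, including the optional gzip compression (the field set is abbreviated; the real `Manifest` would carry the full `Kinds`/`Index` structure):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/json"
	"fmt"
)

// Item is trimmed to a few fields for brevity.
type Item struct {
	Namespace string            `json:"namespace"`
	Name      string            `json:"name"`
	Labels    map[string]string `json:"labels,omitempty"`
}

// Manifest is reduced to its Index map for this sketch.
type Manifest struct {
	Index map[string]*Item `json:"index"`
}

// marshalManifest encodes the manifest as gzipped JSON, the form that could
// be uploaded as manifest.json(.gz) next to the backup tarball.
func marshalManifest(m Manifest) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if err := json.NewEncoder(zw).Encode(m); err != nil {
		return nil, err
	}
	// Close flushes the remaining compressed data into the buffer.
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	m := Manifest{Index: map[string]*Item{
		"uuid-1": {Namespace: "velero", Name: "backup-1"},
	}}
	data, err := marshalManifest(m)
	fmt.Println(len(data) > 0, err)
}
```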
### Memory Concerns
Because the `Manifest` is holding a minimal amount of data, memory sizes should not be a concern for most clusters.
TODO: Document the character limits on API group, resource, and kind names.
## Security Considerations
Introducing this manifest does not increase the attack surface of Velero, as this data is already present in the existing backups.
Storing the manifest.json file next to the existing backup data in the object storage does not change access patterns.
## Compatibility
The introduction of this file should trigger Velero backup version 1.2.0, but it will not interfere with Velero versions that do not support the `Manifest` as the file will be additive.
In time, this file will replace the `<backupname>-resource-list.json.gz` file, but for compatibility the two will appear side by side.
When first implemented, Velero should simply build the `Manifest` as it backs up items, and serialize it at the end.
Any logic changes that rely on the `Manifest` file must be introduced with their own design document, with their own compatibility concerns.
## Implementation
The `Manifest` object will _not_ be implemented as a Kubernetes CustomResourceDefinition, but rather one of Velero's own internal constructs.
Implementation for the data structure alone should be minimal - the types will need to be defined in a `manifest` package.
Then, the backup process should create a `Manifest`, passing it to the various `*Backuppers` in the `backup` package.
These methods will insert individual `Items` into the `Manifest`.
Finally, logic should be added to the `persistence` package to ensure that the new `manifest.json` file is uploadable and allowed.
## Alternatives Considered
None so far.
## Open Issues
- When should compatibility with the `<backupname>-resource-list.json.gz` file be dropped?
- What are some good test case Kubernetes resources and controllers to try this out with?
Cluster API seems like an obvious choice, but are there others?
- Since it is not implemented as a CustomResourceDefinition, how can a `Manifest` be retained so that users could issue a dry-run command, then perform their actual desired operation?
Could it be stored in Velero's temp directories?
Note that this is making Velero itself more stateful.

design/velero-debug.md (new file, 120 lines)

View File

@@ -0,0 +1,120 @@
# `velero debug` command for gathering troubleshooting information
## Abstract
To simplify the communication between velero users and developers, this document proposes the `velero debug` command to generate a tarball including the logs needed for debugging.
Github issue: https://github.com/vmware-tanzu/velero/issues/675
## Background
Gathering information to troubleshoot a Velero deployment is currently spread across multiple commands, and is not very efficient. Logs for the Velero server itself are accessed via a kubectl logs command, while information on specific backups or restores are accessed via a Velero subcommand. Restic logs are even more complicated to retrieve, since one must gather logs for every instance of the daemonset, and there's currently no good mechanism to locate which node a particular restic backup ran against.
A dedicated subcommand can lower this effort and reduce back-and-forth between user and developer for collecting the logs.
## Goals
- Enable efficient log collection for Velero and associated components, like plugins and restic.
## Non Goals
- Collecting logs for components that do not belong to Velero, such as the storage service.
- Automated log analysis.
## High-Level Design
The new `velero debug` command would download all of the following information:
- velero deployment logs
- restic DaemonSet logs
- Plugin logs - needs clarification for the vSphere plugin; see open questions
- Resources and logs of the backup and restore, if specified in the parameters
- Resources:
- BackupStorageLocation
- PodVolumeBackups
- PodVolumeRestores
A project called `crash-diagnostics` (or `crashd`) (https://github.com/vmware-tanzu/crash-diagnostics) implements the Kubernetes API queries and provides a Starlark scripting language to abstract the details and collect the information into a local copy. It can be used as a standalone CLI executing a Starlark script file.
With the file-embedding capability introduced in Go 1.16, we can define a Starlark script gathering the necessary information, embed the script at build time, and then have the `velero debug` command invoke `crashd`, passing in the script's text contents.
## Detailed Design
### Triggering the script
The Starlark script to be called by crashd:
```python
def capture_backup_logs():
    if args.backup:
        kube_capture(what="objects", kinds=['backups'], names=[args.backup])
        backupLogsCmd = "velero backup logs {}".format(args.backup)
        capture_local(cmd=backupLogsCmd)

def capture_restore_logs():
    if args.restore:
        kube_capture(what="objects", kinds=['restores'], names=[args.restore])
        restoreLogsCmd = "velero restore logs {}".format(args.restore)
        capture_local(cmd=restoreLogsCmd)

ns = args.namespace if args.namespace else "velero"
basedir = args.basedir if args.basedir else os.home
output = args.output if args.output else "bundle.tar.gz"

# Working dir for writing during script execution
crshd = crashd_config(workdir="{0}/velero-bundle".format(basedir))
set_defaults(kube_config(path=args.kubeconfig))

capture_local(cmd="velero version -n {}".format(ns))
capture_backup_logs()
capture_restore_logs()
kube_capture(what="logs", namespaces=[ns])
kube_capture(what="objects", namespaces=[ns], kinds=['backupstoragelocations', 'podvolumebackups', 'podvolumerestores'])
archive(output_file=output, source_paths=[crshd.workdir])
```
The sample command to trigger the script via crashd:
```shell
./crashd run ./velero.cshd --args
'backup=harbor-backup-2nd,namespace=velero,basedir=,restore=,kubeconfig=/home/.kube/minikube-250-224/config,output='
```
To trigger the script in `velero debug`, a struct `option` will be introduced in the package `pkg/cmd/cli/debug`:
```go
type option struct {
	// workdir for crashd will be $baseDir/tmp/crashd
	baseDir string
	// the namespace where the velero server is installed
	namespace string
	// the absolute path for the log bundle to be generated
	outputPath string
	// the absolute path for the kubeconfig file that will be read by crashd for calling the K8s API
	kubeconfigPath string
	// optional, the name of the backup resource whose logs will be packaged into the debug bundle
	backup string
	// optional, the name of the restore resource whose logs will be packaged into the debug bundle
	restore string
}
```
The code will consolidate the input parameters and execution context of the `velero` CLI into the `option` struct, which can then be transformed into the `args` string for `crashd`.
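That transformation could be sketched as follows (the `buildArgs` helper is hypothetical; the mapping of fields to keys mirrors the sample `--args` string shown earlier):

```go
package main

import (
	"fmt"
	"strings"
)

// option mirrors the struct above; unexported fields, same names.
type option struct {
	baseDir, namespace, outputPath, kubeconfigPath, backup, restore string
}

// buildArgs renders the option as the comma-separated key=value string
// passed to crashd via --args. Empty values are kept so the script's own
// defaulting logic (e.g. ns = args.namespace if ... else "velero") applies.
func buildArgs(o option) string {
	pairs := []string{
		"backup=" + o.backup,
		"namespace=" + o.namespace,
		"basedir=" + o.baseDir,
		"restore=" + o.restore,
		"kubeconfig=" + o.kubeconfigPath,
		"output=" + o.outputPath,
	}
	return strings.Join(pairs, ",")
}

func main() {
	o := option{backup: "harbor-backup-2nd", namespace: "velero", kubeconfigPath: "/home/.kube/config"}
	fmt.Println(buildArgs(o))
	// prints: backup=harbor-backup-2nd,namespace=velero,basedir=,restore=,kubeconfig=/home/.kube/config,output=
}
```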
### kubeconfig
When it comes to accessing the Kubernetes API, `crashd` has a limitation: it only accepts the path of a kubeconfig file, without allowing the `context` to be customized, and it does not honor environment variables such as `KUBECONFIG`. `velero`, by contrast, honors these environment variables and allows users to customize both the path to the kubeconfig and the `context`.
There are two ways to make crashd behave consistently with velero in terms of reading the kube configuration:
1. Modify crashd to make it honor the environment variables and allow users to set the context when calling the k8s APIs. This is the preferred approach and it does make `crashd` better, but it may take longer because we need to convince the maintainers of `crashd` and double-check that the change will not break their current use cases.
There are two issues open:
https://github.com/vmware-tanzu/crash-diagnostics/issues/208
https://github.com/vmware-tanzu/crash-diagnostics/issues/122
I'll contact the maintainers of `crashd` to see whether this is feasible for velero v1.7.
2. Before calling the `crashd` script, the velero CLI will use `client-go` to generate a temp `kubeconfig` file honoring the environment variables and global flags, and pass it to crashd. Although there's no permission elevation and the temp file will be removed, there is still a security concern: the temp file is accessible to other programs before it's deleted, and it may not be deleted at all if an error occurs.
Therefore, we should consider option 1 the better choice, and treat option 2 as the fallback.
## Alternatives Considered
The collection could be done via the Kubernetes client-go API, but such an integration is not trivial to implement; therefore `crashd` is the preferred approach.
## Security Considerations
- The currently released version of `crashd` depends on `client-go v0.19.0`, which has a known CVE. We need to make sure that when it's compiled into velero it uses a version with the CVE fixed; we should submit a PR or push the crashd maintainers to fix CVE-2021-3121 in 0.19.0.
- The Starlark script will be embedded into the velero binary, so there's little risk that the script will be modified before being called.
- There may be minor security issues if we choose to create a temp `kubeconfig` file for `crashd` and remove it afterwards. If we have to choose this option, we need to review it with security experts to better understand the risks.
## Compatibility
As the `crashd` project evolves, the behavior of the internal functions used in the Starlark script may change. We'll ensure the correctness of the script via regular E2E tests.
## Implementation
1. Bump up to use Go v1.16 to compile velero
2. Embed the starlark script
3. Implement the `velero debug` sub-command to call the script
4. Add E2E test case
## Open Questions
- **Log collection for vSphere plugin:** Per the design of the vSphere plugin (https://github.com/vmware-tanzu/velero-plugin-for-vsphere#architecture), when a user backs up resources on a guest cluster, code in components on the supervisor cluster may be called. Per discussion, in v1.7 we will only support collecting logs of processes running in one k8s cluster. In terms of implementation, we will investigate the possibility of calling an extra script in crashd and ask the vSphere plugin developers to provide a script for their log collection, but the details remain TBD.
- **Command dependencies:** To collect version info and backup logs, the Starlark script calls `velero version` and `velero backup logs`, which makes the call stack velero debug -> crashd -> velero xxx. We need to make sure this works under different PATH settings.
- **Progress and error handling:** The log collection may take a relatively long time, so log messages should be printed to indicate progress as different items are downloaded and packaged. Additionally, when an error happens, we need to double-check whether it is swallowed by crashd.


@@ -89,31 +89,18 @@ fi
# Since we're past the validation of the VELERO_VERSION, parse the version's individual components.
eval $(go run $DIR/chk_version.go)
printf "To clarify, you've provided a version string of $VELERO_VERSION.\n"
printf "Based on this, the following assumptions have been made: \n"
# $VELERO_PATCH gets populated by the chk_version.go script that parses and verifies the given version format
# If we've got a patch release, we assume the tag is on release branch.
if [[ "$VELERO_PATCH" != 0 ]]; then
printf "*\t This is a patch release.\n"
ON_RELEASE_BRANCH=TRUE
fi
[[ "$VELERO_PATCH" != 0 ]] && printf "*\t This is a patch release.\n"
# $VELERO_PRERELEASE gets populated by the chk_version.go script that parses and verifies the given version format
# If we've got a GA release, we assume the tag is on release branch.
# $VELERO_PRERELEASE gets populated by the chk_version.go script that parses and verifies the given version format
# -n is "string is non-empty"
[[ -n $VELERO_PRERELEASE ]] && printf "*\t This is a pre-release.\n"
# -z is "string is empty"
if [[ -z $VELERO_PRERELEASE ]]; then
printf "*\t This is a GA release.\n"
ON_RELEASE_BRANCH=TRUE
fi
if [[ "$ON_RELEASE_BRANCH" == "TRUE" ]]; then
release_branch_name=release-$VELERO_MAJOR.$VELERO_MINOR
printf "*\t The commit to tag is on branch: %s. Please make sure this branch has been created.\n" $release_branch_name
fi
[[ -z $VELERO_PRERELEASE ]] && printf "*\t This is a GA release.\n"
if [[ $publish == "TRUE" ]]; then
echo "If this is all correct, press enter/return to proceed to TAG THE RELEASE and UPLOAD THE TAG TO GITHUB."
@@ -130,29 +117,55 @@ echo "Alright, let's go."
echo "Pulling down all git tags and branches before doing any work."
git fetch "$remote" --tags
if [[ -n $release_branch_name ]]; then
# Tag on release branch
# $VELERO_PATCH gets populated by the chk_version.go script that parses and verifies the given version format
# If we've got a patch release, we'll need to create a release branch for it.
if [[ "$VELERO_PATCH" > 0 ]]; then
release_branch_name=release-$VELERO_MAJOR.$VELERO_MINOR
remote_release_branch_name="$remote/$release_branch_name"
# Determine whether the local and remote release branches already exist
local_branch=$(git branch | grep "$release_branch_name")
remote_branch=$(git branch -r | grep "$remote_release_branch_name")
if [[ -z $remote_branch ]]; then
echo "The branch $remote_release_branch_name must be created before you tag the release."
exit 1
fi
if [[ -z $local_branch ]]; then
if [[ -n $remote_branch ]]; then
if [[ -z $local_branch ]]; then
# Remote branch exists, but does not exist locally. Checkout and track the remote branch.
git checkout --track "$remote_release_branch_name"
else
else
# Checkout the local release branch and ensure it is up to date with the remote
git checkout "$release_branch_name"
git pull --set-upstream "$remote" "$release_branch_name"
fi
else
if [[ -z $local_branch ]]; then
# Neither the remote nor the local release branch exists, create it
git checkout -b $release_branch_name
else
# The local branch exists so check it out.
git checkout $release_branch_name
fi
fi
echo "Now you'll need to cherry-pick any relevant git commits into this release branch."
echo "Either pause this script with ctrl-z, or open a new terminal window and do the cherry-picking."
if [[ $publish == "TRUE" ]]; then
read -p "Press enter when you're done cherry-picking. THIS WILL MAKE A TAG PUSH THE BRANCH TO $remote"
else
read -p "Press enter when you're done cherry-picking."
fi
# TODO can/should we add a way to review the cherry-picked commits before the push?
if [[ $publish == "TRUE" ]]; then
echo "Pushing $release_branch_name to \"$remote\" remote"
git push --set-upstream "$remote" $release_branch_name
fi
tag_and_push
else
echo "Checking out $remote/main."
git checkout "$remote"/main
tag_and_push
fi


@@ -29,7 +29,6 @@ import (
"github.com/vmware-tanzu/velero/pkg/archive"
"github.com/vmware-tanzu/velero/pkg/discovery"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
deleteactionitemv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
"github.com/vmware-tanzu/velero/pkg/util/collections"
"github.com/vmware-tanzu/velero/pkg/util/filesystem"
)
@@ -38,7 +37,7 @@ import (
type Context struct {
Backup *velerov1api.Backup
BackupReader io.Reader
Actions []deleteactionitemv2.DeleteItemAction
Actions []velero.DeleteItemAction
Filesystem filesystem.Interface
Log logrus.FieldLogger
DiscoveryHelper discovery.Helper
@@ -164,7 +163,7 @@ func (ctx *Context) getApplicableActions(groupResource schema.GroupResource, nam
// resolvedActions are DeleteItemActions decorated with resource/namespace include/exclude collections, as well as label selectors for easy comparison.
type resolvedAction struct {
deleteactionitemv2.DeleteItemAction
velero.DeleteItemAction
resourceIncludesExcludes *collections.IncludesExcludes
namespaceIncludesExcludes *collections.IncludesExcludes
@@ -172,7 +171,7 @@ type resolvedAction struct {
}
// resolveActions resolves the AppliesTo ResourceSelectors of DeleteItemActions plugins against the Kubernetes discovery API for fully-qualified names.
func resolveActions(actions []deleteactionitemv2.DeleteItemAction, helper discovery.Helper) ([]resolvedAction, error) {
func resolveActions(actions []velero.DeleteItemAction, helper discovery.Helper) ([]resolvedAction, error) {
var resolved []resolvedAction
for _, action := range actions {


@@ -44,8 +44,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/discovery"
velerov1client "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/typed/velero/v1"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/podexec"
"github.com/vmware-tanzu/velero/pkg/restic"
"github.com/vmware-tanzu/velero/pkg/util/collections"
@@ -62,8 +61,7 @@ const BackupFormatVersion = "1.1.0"
type Backupper interface {
// Backup takes a backup using the specification in the velerov1api.Backup and writes backup and log data
// to the given writers.
Backup(logger logrus.FieldLogger, backup *Request, backupFile io.Writer,
actions []backupitemactionv2.BackupItemAction, volumeSnapshotterGetter VolumeSnapshotterGetter) error
Backup(logger logrus.FieldLogger, backup *Request, backupFile io.Writer, actions []velero.BackupItemAction, volumeSnapshotterGetter VolumeSnapshotterGetter) error
}
// kubernetesBackupper implements Backupper.
@@ -79,7 +77,7 @@ type kubernetesBackupper struct {
}
type resolvedAction struct {
backupitemactionv2.BackupItemAction
velero.BackupItemAction
resourceIncludesExcludes *collections.IncludesExcludes
namespaceIncludesExcludes *collections.IncludesExcludes
@@ -123,7 +121,7 @@ func NewKubernetesBackupper(
}, nil
}
func resolveActions(actions []backupitemactionv2.BackupItemAction, helper discovery.Helper) ([]resolvedAction, error) {
func resolveActions(actions []velero.BackupItemAction, helper discovery.Helper) ([]resolvedAction, error) {
var resolved []resolvedAction
for _, action := range actions {
@@ -199,7 +197,7 @@ func getResourceHook(hookSpec velerov1api.BackupResourceHookSpec, discoveryHelpe
}
type VolumeSnapshotterGetter interface {
GetVolumeSnapshotter(name string) (volumesnapshotterv2.VolumeSnapshotter, error)
GetVolumeSnapshotter(name string) (velero.VolumeSnapshotter, error)
}
// Backup backs up the items specified in the Backup, placing them in a gzip-compressed tar file
@@ -207,8 +205,7 @@ type VolumeSnapshotterGetter interface {
// a complete backup failure is returned. Errors that constitute partial failures (i.e. failures to
// back up individual resources that don't prevent the backup from continuing to be processed) are logged
// to the backup log.
func (kb *kubernetesBackupper) Backup(log logrus.FieldLogger, backupRequest *Request, backupFile io.Writer,
actions []backupitemactionv2.BackupItemAction, volumeSnapshotterGetter VolumeSnapshotterGetter) error {
func (kb *kubernetesBackupper) Backup(log logrus.FieldLogger, backupRequest *Request, backupFile io.Writer, actions []velero.BackupItemAction, volumeSnapshotterGetter VolumeSnapshotterGetter) error {
gzippedData := gzip.NewWriter(backupFile)
defer gzippedData.Close()


@@ -47,7 +47,6 @@ import (
"github.com/vmware-tanzu/velero/pkg/discovery"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
"github.com/vmware-tanzu/velero/pkg/restic"
"github.com/vmware-tanzu/velero/pkg/test"
testutil "github.com/vmware-tanzu/velero/pkg/test"
@@ -971,30 +970,6 @@ func TestBackupResourceCohabitation(t *testing.T) {
"resources/deployments.apps/v1-preferredversion/namespaces/zoo/raz.json",
},
},
{
name: "when deployments exist that are not in the cohabitating groups those are backed up along with apps/deployments",
backup: defaultBackup().Result(),
apiResources: []*test.APIResource{
test.VeleroDeployments(
builder.ForTestCR("Deployment", "foo", "bar").Result(),
builder.ForTestCR("Deployment", "zoo", "raz").Result(),
),
test.Deployments(
builder.ForDeployment("foo", "bar").Result(),
builder.ForDeployment("zoo", "raz").Result(),
),
},
want: []string{
"resources/deployments.apps/namespaces/foo/bar.json",
"resources/deployments.apps/namespaces/zoo/raz.json",
"resources/deployments.apps/v1-preferredversion/namespaces/foo/bar.json",
"resources/deployments.apps/v1-preferredversion/namespaces/zoo/raz.json",
"resources/deployments.velero.io/namespaces/foo/bar.json",
"resources/deployments.velero.io/namespaces/zoo/raz.json",
"resources/deployments.velero.io/v1-preferredversion/namespaces/foo/bar.json",
"resources/deployments.velero.io/v1-preferredversion/namespaces/zoo/raz.json",
},
},
}
for _, tc := range tests {
@@ -1332,7 +1307,7 @@ func TestBackupActionsRunForCorrectItems(t *testing.T) {
h.addItems(t, resource)
}
actions := []backupitemactionv2.BackupItemAction{}
actions := []velero.BackupItemAction{}
for action := range tc.actions {
actions = append(actions, action)
}
@@ -1358,7 +1333,7 @@ func TestBackupWithInvalidActions(t *testing.T) {
name string
backup *velerov1.Backup
apiResources []*test.APIResource
actions []backupitemactionv2.BackupItemAction
actions []velero.BackupItemAction
}{
{
name: "action with invalid label selector results in an error",
@@ -1374,7 +1349,7 @@ func TestBackupWithInvalidActions(t *testing.T) {
builder.ForPersistentVolume("baz").Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
new(recordResourcesAction).ForLabelSelector("=invalid-selector"),
},
},
@@ -1392,7 +1367,7 @@ func TestBackupWithInvalidActions(t *testing.T) {
builder.ForPersistentVolume("baz").Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
&appliesToErrorAction{},
},
},
@@ -1454,7 +1429,7 @@ func TestBackupActionModifications(t *testing.T) {
name string
backup *velerov1.Backup
apiResources []*test.APIResource
actions []backupitemactionv2.BackupItemAction
actions []velero.BackupItemAction
want map[string]unstructuredObject
}{
{
@@ -1465,7 +1440,7 @@ func TestBackupActionModifications(t *testing.T) {
builder.ForPod("ns-1", "pod-1").Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
modifyingActionGetter(func(item *unstructured.Unstructured) {
item.SetLabels(map[string]string{"updated": "true"})
}),
@@ -1482,7 +1457,7 @@ func TestBackupActionModifications(t *testing.T) {
builder.ForPod("ns-1", "pod-1").ObjectMeta(builder.WithLabels("should-be-removed", "true")).Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
modifyingActionGetter(func(item *unstructured.Unstructured) {
item.SetLabels(nil)
}),
@@ -1499,7 +1474,7 @@ func TestBackupActionModifications(t *testing.T) {
builder.ForPod("ns-1", "pod-1").Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
modifyingActionGetter(func(item *unstructured.Unstructured) {
item.Object["spec"].(map[string]interface{})["nodeName"] = "foo"
}),
@@ -1517,7 +1492,7 @@ func TestBackupActionModifications(t *testing.T) {
builder.ForPod("ns-1", "pod-1").Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
modifyingActionGetter(func(item *unstructured.Unstructured) {
item.SetName(item.GetName() + "-updated")
item.SetNamespace(item.GetNamespace() + "-updated")
@@ -1558,7 +1533,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
name string
backup *velerov1.Backup
apiResources []*test.APIResource
actions []backupitemactionv2.BackupItemAction
actions []velero.BackupItemAction
want []string
}{
{
@@ -1571,7 +1546,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
builder.ForPod("ns-3", "pod-3").Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
&pluggableAction{
selector: velero.ResourceSelector{IncludedNamespaces: []string{"ns-1"}},
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
@@ -1603,7 +1578,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
builder.ForPod("ns-3", "pod-3").Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
&pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
additionalItems := []velero.ResourceIdentifier{
@@ -1633,7 +1608,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
builder.ForPersistentVolume("pv-2").Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
&pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
additionalItems := []velero.ResourceIdentifier{
@@ -1666,7 +1641,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
builder.ForPersistentVolume("pv-2").Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
&pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
additionalItems := []velero.ResourceIdentifier{
@@ -1696,7 +1671,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
builder.ForPersistentVolume("pv-2").Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
&pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
additionalItems := []velero.ResourceIdentifier{
@@ -1727,7 +1702,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
builder.ForPersistentVolume("pv-2").Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
&pluggableAction{
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
additionalItems := []velero.ResourceIdentifier{
@@ -1757,7 +1732,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
builder.ForPod("ns-3", "pod-3").Result(),
),
},
actions: []backupitemactionv2.BackupItemAction{
actions: []velero.BackupItemAction{
&pluggableAction{
selector: velero.ResourceSelector{IncludedNamespaces: []string{"ns-1"}},
executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {


@@ -39,7 +39,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/client"
"github.com/vmware-tanzu/velero/pkg/discovery"
"github.com/vmware-tanzu/velero/pkg/kuberesource"
volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/restic"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/volume"
@@ -56,7 +56,7 @@ type itemBackupper struct {
volumeSnapshotterGetter VolumeSnapshotterGetter
itemHookHandler hook.ItemHookHandler
snapshotLocationVolumeSnapshotters map[string]volumesnapshotterv2.VolumeSnapshotter
snapshotLocationVolumeSnapshotters map[string]velero.VolumeSnapshotter
}
// backupItem backs up an individual item to tarWriter. The item may be excluded based on the
@@ -367,8 +367,7 @@ func (ib *itemBackupper) executeActions(
// volumeSnapshotter instantiates and initializes a VolumeSnapshotter given a VolumeSnapshotLocation,
// or returns an existing one if one's already been initialized for the location.
func (ib *itemBackupper) volumeSnapshotter(snapshotLocation *velerov1api.VolumeSnapshotLocation) (
volumesnapshotterv2.VolumeSnapshotter, error) {
func (ib *itemBackupper) volumeSnapshotter(snapshotLocation *velerov1api.VolumeSnapshotLocation) (velero.VolumeSnapshotter, error) {
if bs, ok := ib.snapshotLocationVolumeSnapshotters[snapshotLocation.Name]; ok {
return bs, nil
}
@@ -383,7 +382,7 @@ func (ib *itemBackupper) volumeSnapshotter(snapshotLocation *velerov1api.VolumeS
}
if ib.snapshotLocationVolumeSnapshotters == nil {
ib.snapshotLocationVolumeSnapshotters = make(map[string]volumesnapshotterv2.VolumeSnapshotter)
ib.snapshotLocationVolumeSnapshotters = make(map[string]velero.VolumeSnapshotter)
}
ib.snapshotLocationVolumeSnapshotters[snapshotLocation.Name] = bs
@@ -439,7 +438,7 @@ func (ib *itemBackupper) takePVSnapshot(obj runtime.Unstructured, log logrus.Fie
var (
volumeID, location string
volumeSnapshotter volumesnapshotterv2.VolumeSnapshotter
volumeSnapshotter velero.VolumeSnapshotter
)
for _, snapshotLocation := range ib.backupRequest.SnapshotLocations {


@@ -26,7 +26,7 @@ import (
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/labels"
@@ -209,18 +209,16 @@ func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.Group
}
if cohabitator, found := r.cohabitatingResources[resource.Name]; found {
if gv.Group == cohabitator.groupResource1.Group || gv.Group == cohabitator.groupResource2.Group {
if cohabitator.seen {
log.WithFields(
logrus.Fields{
"cohabitatingResource1": cohabitator.groupResource1.String(),
"cohabitatingResource2": cohabitator.groupResource2.String(),
},
).Infof("Skipping resource because it cohabitates and we've already processed it")
return nil, nil
}
cohabitator.seen = true
if cohabitator.seen {
log.WithFields(
logrus.Fields{
"cohabitatingResource1": cohabitator.groupResource1.String(),
"cohabitatingResource2": cohabitator.groupResource2.String(),
},
).Infof("Skipping resource because it cohabitates and we've already processed it")
return nil, nil
}
cohabitator.seen = true
}
namespacesToList := getNamespacesToList(r.backupRequest.NamespaceIncludesExcludes)
@@ -295,7 +293,6 @@ func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.Group
if selector := r.backupRequest.Spec.LabelSelector; selector != nil {
labelSelector = metav1.FormatLabelSelector(selector)
}
listOptions := metav1.ListOptions{LabelSelector: labelSelector}
log.Info("Listing items")
unstructuredItems := make([]unstructured.Unstructured, 0)
@@ -303,50 +300,42 @@ func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.Group
if r.pageSize > 0 {
// If limit is positive, use a pager to split list over multiple requests
// Use Velero's dynamic list function instead of the default
listFunc := pager.SimplePageFunc(func(opts metav1.ListOptions) (runtime.Object, error) {
list, err := resourceClient.List(listOptions)
if err != nil {
return nil, err
}
return list, nil
})
listPager := pager.New(listFunc)
listPager := pager.New(pager.SimplePageFunc(func(opts metav1.ListOptions) (runtime.Object, error) {
return resourceClient.List(opts)
}))
// Use the page size defined in the server config
// TODO allow configuration of page buffer size
listPager.PageSize = int64(r.pageSize)
// Add each item to temporary slice
var items []unstructured.Unstructured
err := listPager.EachListItem(context.Background(), listOptions, func(object runtime.Object) error {
item, isUnstructured := object.(*unstructured.Unstructured)
if !isUnstructured {
// We should never hit this
log.Error("Got type other than Unstructured from pager func")
return nil
}
items = append(items, *item)
return nil
})
if statusError, isStatusError := err.(*apierrors.StatusError); isStatusError && statusError.Status().Reason == metav1.StatusReasonExpired {
log.WithError(errors.WithStack(err)).Error("Error paging item list. Falling back on unpaginated list")
unstructuredList, err := resourceClient.List(listOptions)
if err != nil {
log.WithError(errors.WithStack(err)).Error("Error listing items")
continue
}
items = unstructuredList.Items
} else if err != nil {
log.WithError(errors.WithStack(err)).Error("Error paging item list")
list, paginated, err := listPager.List(context.Background(), metav1.ListOptions{LabelSelector: labelSelector})
if err != nil {
log.WithError(errors.WithStack(err)).Error("Error listing resources")
continue
}
if !paginated {
log.Infof("list for groupResource %s was not paginated", gr)
}
err = meta.EachListItem(list, func(object runtime.Object) error {
u, ok := object.(*unstructured.Unstructured)
if !ok {
log.WithError(errors.WithStack(fmt.Errorf("expected *unstructured.Unstructured but got %T", u))).Error("unable to understand entry in the list")
return fmt.Errorf("expected *unstructured.Unstructured but got %T", u)
}
unstructuredItems = append(unstructuredItems, *u)
return nil
})
if err != nil {
log.WithError(errors.WithStack(err)).Error("unable to understand paginated list")
continue
}
unstructuredItems = append(unstructuredItems, items...)
} else {
// If limit is not positive, do not use paging. Instead, request all items at once
unstructuredList, err := resourceClient.List(metav1.ListOptions{LabelSelector: labelSelector})
unstructuredItems = append(unstructuredItems, unstructuredList.Items...)
if err != nil {
log.WithError(errors.WithStack(err)).Error("Error listing items")
continue
}
unstructuredItems = append(unstructuredItems, unstructuredList.Items...)
}
log.Infof("Retrieved %d items", len(unstructuredItems))


@@ -1,77 +0,0 @@
/*
Copyright the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package builder
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)
// CustomResourceBuilder builds objects based on velero APIVersion CRDs.
type TestCRBuilder struct {
object *TestCR
}
// ForTestCR is the constructor for a TestCRBuilder.
func ForTestCR(crdKind, ns, name string) *TestCRBuilder {
return &TestCRBuilder{
object: &TestCR{
TypeMeta: metav1.TypeMeta{
APIVersion: velerov1api.SchemeGroupVersion.String(),
Kind: crdKind,
},
ObjectMeta: metav1.ObjectMeta{
Namespace: ns,
Name: name,
},
},
}
}
// Result returns the built TestCR.
func (b *TestCRBuilder) Result() *TestCR {
return b.object
}
// ObjectMeta applies functional options to the TestCR's ObjectMeta.
func (b *TestCRBuilder) ObjectMeta(opts ...ObjectMetaOpt) *TestCRBuilder {
for _, opt := range opts {
opt(b.object)
}
return b
}
type TestCR struct {
metav1.TypeMeta `json:",inline"`
// +optional
metav1.ObjectMeta `json:"metadata,omitempty"`
// +optional
Spec TestCRSpec `json:"spec,omitempty"`
// +optional
Status TestCRStatus `json:"status,omitempty"`
}
type TestCRSpec struct {
}
type TestCRStatus struct {
}


@@ -303,8 +303,7 @@ func newServer(f client.Factory, config serverConfig, logger *logrus.Logger) (*s
corev1api.AddToScheme(scheme)
mgr, err := ctrl.NewManager(clientConfig, ctrl.Options{
- Scheme: scheme,
Namespace: f.Namespace(),
Scheme: scheme,
})
if err != nil {
cancelFunc()


@@ -48,7 +48,7 @@ import (
persistencemocks "github.com/vmware-tanzu/velero/pkg/persistence/mocks"
"github.com/vmware-tanzu/velero/pkg/plugin/clientmgmt"
pluginmocks "github.com/vmware-tanzu/velero/pkg/plugin/mocks"
- backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
+ "github.com/vmware-tanzu/velero/pkg/plugin/velero"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
"github.com/vmware-tanzu/velero/pkg/util/logging"
@@ -58,7 +58,7 @@ type fakeBackupper struct {
mock.Mock
}
- func (b *fakeBackupper) Backup(logger logrus.FieldLogger, backup *pkgbackup.Request, backupFile io.Writer, actions []backupitemactionv2.BackupItemAction, volumeSnapshotterGetter pkgbackup.VolumeSnapshotterGetter) error {
+ func (b *fakeBackupper) Backup(logger logrus.FieldLogger, backup *pkgbackup.Request, backupFile io.Writer, actions []velero.BackupItemAction, volumeSnapshotterGetter pkgbackup.VolumeSnapshotterGetter) error {
args := b.Called(logger, backup, backupFile, actions, volumeSnapshotterGetter)
return args.Error(0)
}
@@ -825,7 +825,7 @@ func TestProcessBackupCompletions(t *testing.T) {
pluginManager.On("GetBackupItemActions").Return(nil, nil)
pluginManager.On("CleanupClients").Return(nil)
- backupper.On("Backup", mock.Anything, mock.Anything, mock.Anything, []backupitemactionv2.BackupItemAction(nil), pluginManager).Return(nil)
+ backupper.On("Backup", mock.Anything, mock.Anything, mock.Anything, []velero.BackupItemAction(nil), pluginManager).Return(nil)
backupStore.On("BackupExists", test.backupLocation.Spec.StorageType.ObjectStorage.Bucket, test.backup.Name).Return(test.backupExists, test.existenceCheckError)
// Ensure we have a CompletionTimestamp when uploading and that the backup name matches the backup in the object store.


@@ -46,7 +46,7 @@ import (
"github.com/vmware-tanzu/velero/pkg/metrics"
"github.com/vmware-tanzu/velero/pkg/persistence"
"github.com/vmware-tanzu/velero/pkg/plugin/clientmgmt"
- volumesnapshotter "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
+ "github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/restic"
"github.com/vmware-tanzu/velero/pkg/util/filesystem"
"github.com/vmware-tanzu/velero/pkg/util/kube"
@@ -333,7 +333,7 @@ func (c *backupDeletionController) processRequest(req *velerov1api.DeleteBackupR
if snapshots, err := backupStore.GetBackupVolumeSnapshots(backup.Name); err != nil {
errs = append(errs, errors.Wrap(err, "error getting backup's volume snapshots").Error())
} else {
- volumeSnapshotters := make(map[string]volumesnapshotter.VolumeSnapshotter)
+ volumeSnapshotters := make(map[string]velero.VolumeSnapshotter)
for _, snapshot := range snapshots {
log.WithField("providerSnapshotID", snapshot.Status.ProviderSnapshotID).Info("Removing snapshot associated with backup")
@@ -433,7 +433,7 @@ func volumeSnapshotterForSnapshotLocation(
namespace, snapshotLocationName string,
snapshotLocationLister velerov1listers.VolumeSnapshotLocationLister,
pluginManager clientmgmt.Manager,
- ) (volumesnapshotter.VolumeSnapshotter, error) {
+ ) (velero.VolumeSnapshotter, error) {
snapshotLocation, err := snapshotLocationLister.VolumeSnapshotLocations(namespace).Get(snapshotLocationName)
if err != nil {
return nil, errors.Wrapf(err, "error getting volume snapshot location %s", snapshotLocationName)


@@ -45,7 +45,7 @@ import (
persistencemocks "github.com/vmware-tanzu/velero/pkg/persistence/mocks"
"github.com/vmware-tanzu/velero/pkg/plugin/clientmgmt"
pluginmocks "github.com/vmware-tanzu/velero/pkg/plugin/mocks"
- deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
+ "github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/plugin/velero/mocks"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
"github.com/vmware-tanzu/velero/pkg/volume"
@@ -802,7 +802,7 @@ func TestBackupDeletionControllerProcessRequest(t *testing.T) {
pluginManager := &pluginmocks.Manager{}
pluginManager.On("GetVolumeSnapshotter", "provider-1").Return(td.volumeSnapshotter, nil)
- pluginManager.On("GetDeleteItemActions").Return([]deleteitemactionv2.DeleteItemAction{}, nil)
+ pluginManager.On("GetDeleteItemActions").Return([]velero.DeleteItemAction{}, nil)
pluginManager.On("CleanupClients")
td.controller.newPluginManager = func(logrus.FieldLogger) clientmgmt.Manager { return pluginManager }
@@ -932,7 +932,7 @@ func TestBackupDeletionControllerProcessRequest(t *testing.T) {
pluginManager := &pluginmocks.Manager{}
pluginManager.On("GetVolumeSnapshotter", "provider-1").Return(td.volumeSnapshotter, nil)
- pluginManager.On("GetDeleteItemActions").Return([]deleteitemactionv2.DeleteItemAction{new(mocks.DeleteItemAction)}, nil)
+ pluginManager.On("GetDeleteItemActions").Return([]velero.DeleteItemAction{new(mocks.DeleteItemAction)}, nil)
pluginManager.On("CleanupClients")
td.controller.newPluginManager = func(logrus.FieldLogger) clientmgmt.Manager { return pluginManager }


@@ -276,11 +276,11 @@ func (c *scheduleController) submitBackupIfDue(item *api.Schedule, cronSchedule
}
func getNextRunTime(schedule *api.Schedule, cronSchedule cron.Schedule, asOf time.Time) (bool, time.Time) {
+ // get the latest run time (if the schedule hasn't run yet, this will be the zero value which will trigger
+ // an immediate backup)
var lastBackupTime time.Time
if schedule.Status.LastBackup != nil {
lastBackupTime = schedule.Status.LastBackup.Time
- } else {
- lastBackupTime = schedule.CreationTimestamp.Time
}
nextRunTime := cronSchedule.Next(lastBackupTime)


@@ -274,7 +274,7 @@ func TestGetNextRunTime(t *testing.T) {
{
name: "first run",
schedule: defaultSchedule(),
- expectedDue: false,
+ expectedDue: true,
expectedNextRunTimeOffset: "5m",
},
{
@@ -319,9 +319,6 @@ func TestGetNextRunTime(t *testing.T) {
require.NoError(t, err, "unable to parse test.lastRanOffset: %v", err)
test.schedule.Status.LastBackup = &metav1.Time{Time: testClock.Now().Add(-offsetDuration)}
- test.schedule.CreationTimestamp = *test.schedule.Status.LastBackup
- } else {
- test.schedule.CreationTimestamp = metav1.Time{Time: testClock.Now()}
}
nextRunTimeOffset, err := time.ParseDuration(test.expectedNextRunTimeOffset)
@@ -329,11 +326,11 @@ func TestGetNextRunTime(t *testing.T) {
panic(err)
}
+ // calculate expected next run time (if the schedule hasn't run yet, this
+ // will be the zero value which will trigger an immediate backup)
var baseTime time.Time
if test.lastRanOffset != "" {
baseTime = test.schedule.Status.LastBackup.Time
- } else {
- baseTime = test.schedule.CreationTimestamp.Time
}
expectedNextRunTime := baseTime.Add(nextRunTimeOffset)


@@ -33,7 +33,7 @@ import (
"github.com/vmware-tanzu/velero/internal/credentials"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/scheme"
- objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
+ "github.com/vmware-tanzu/velero/pkg/plugin/velero"
"github.com/vmware-tanzu/velero/pkg/volume"
)
@@ -80,16 +80,16 @@ type BackupStore interface {
const DownloadURLTTL = 10 * time.Minute
type objectBackupStore struct {
- objectStore objectstorev2.ObjectStore
+ objectStore velero.ObjectStore
bucket string
layout *ObjectStoreLayout
logger logrus.FieldLogger
}
- // ObjectStoreGetter is a type that can get a objectstorev2.ObjectStore
+ // ObjectStoreGetter is a type that can get a velero.ObjectStore
// from a provider name.
type ObjectStoreGetter interface {
- GetObjectStore(provider string) (objectstorev2.ObjectStore, error)
+ GetObjectStore(provider string) (velero.ObjectStore, error)
}
// ObjectBackupStoreGetter is a type that can get a velero.BackupStore for a
@@ -326,7 +326,7 @@ func (s *objectBackupStore) GetBackupVolumeSnapshots(name string) ([]*volume.Sna
// tryGet returns the object with the given key if it exists, nil if it does not exist,
// or an error if it was unable to check existence or get the object.
- func tryGet(objectStore objectstorev2.ObjectStore, bucket, key string) (io.ReadCloser, error) {
+ func tryGet(objectStore velero.ObjectStore, bucket, key string) (io.ReadCloser, error) {
exists, err := objectStore.ObjectExists(bucket, key)
if err != nil {
return nil, errors.WithStack(err)
@@ -494,7 +494,7 @@ func seekToBeginning(r io.Reader) error {
return err
}
- func seekAndPutObject(objectStore objectstorev2.ObjectStore, bucket, key string, file io.Reader) error {
+ func seekAndPutObject(objectStore velero.ObjectStore, bucket, key string, file io.Reader) error {
if file == nil {
return nil
}


@@ -36,8 +36,8 @@ import (
"github.com/vmware-tanzu/velero/internal/credentials"
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/builder"
+ "github.com/vmware-tanzu/velero/pkg/plugin/velero"
providermocks "github.com/vmware-tanzu/velero/pkg/plugin/velero/mocks"
- objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
velerotest "github.com/vmware-tanzu/velero/pkg/test"
"github.com/vmware-tanzu/velero/pkg/util/encode"
"github.com/vmware-tanzu/velero/pkg/volume"
@@ -595,9 +595,9 @@ func TestGetDownloadURL(t *testing.T) {
}
}
- type objectStoreGetter map[string]objectstorev2.ObjectStore
+ type objectStoreGetter map[string]velero.ObjectStore
- func (osg objectStoreGetter) GetObjectStore(provider string) (objectstorev2.ObjectStore, error) {
+ func (osg objectStoreGetter) GetObjectStore(provider string) (velero.ObjectStore, error) {
res, ok := osg[provider]
if !ok {
return nil, errors.New("object store not found")


@@ -1,5 +1,5 @@
/*
- Copyright 2021 the Velero contributors.
+ Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -73,12 +73,6 @@ func (b *clientBuilder) clientConfig() *hcplugin.ClientConfig {
string(framework.PluginKindPluginLister): &framework.PluginListerPlugin{},
string(framework.PluginKindRestoreItemAction): framework.NewRestoreItemActionPlugin(framework.ClientLogger(b.clientLogger)),
string(framework.PluginKindDeleteItemAction): framework.NewDeleteItemActionPlugin(framework.ClientLogger(b.clientLogger)),
- // Version 2
- string(framework.PluginKindBackupItemActionV2): framework.NewBackupItemActionPlugin(framework.ClientLogger(b.clientLogger)),
- string(framework.PluginKindVolumeSnapshotterV2): framework.NewVolumeSnapshotterPlugin(framework.ClientLogger(b.clientLogger)),
- string(framework.PluginKindObjectStoreV2): framework.NewObjectStorePlugin(framework.ClientLogger(b.clientLogger)),
- string(framework.PluginKindRestoreItemActionV2): framework.NewRestoreItemActionPlugin(framework.ClientLogger(b.clientLogger)),
- string(framework.PluginKindDeleteItemActionV2): framework.NewDeleteItemActionPlugin(framework.ClientLogger(b.clientLogger)),
},
Logger: b.pluginLogger,
Cmd: exec.Command(b.commandName, b.commandArgs...),


@@ -1,5 +1,5 @@
/*
- Copyright 2021 the Velero contributors.
+ Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -17,46 +17,40 @@ limitations under the License.
package clientmgmt
import (
- "errors"
- "fmt"
"strings"
"sync"
"github.com/sirupsen/logrus"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
- backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
- deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
- objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
- restoreitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
- volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
+ "github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
// Manager manages the lifecycles of plugins.
type Manager interface {
// GetObjectStore returns the ObjectStore plugin for name.
- GetObjectStore(name string) (objectstorev2.ObjectStore, error)
+ GetObjectStore(name string) (velero.ObjectStore, error)
// GetVolumeSnapshotter returns the VolumeSnapshotter plugin for name.
- GetVolumeSnapshotter(name string) (volumesnapshotterv2.VolumeSnapshotter, error)
+ GetVolumeSnapshotter(name string) (velero.VolumeSnapshotter, error)
// GetBackupItemActions returns all backup item action plugins.
- GetBackupItemActions() ([]backupitemactionv2.BackupItemAction, error)
+ GetBackupItemActions() ([]velero.BackupItemAction, error)
// GetBackupItemAction returns the backup item action plugin for name.
- GetBackupItemAction(name string) (backupitemactionv2.BackupItemAction, error)
+ GetBackupItemAction(name string) (velero.BackupItemAction, error)
// GetRestoreItemActions returns all restore item action plugins.
- GetRestoreItemActions() ([]restoreitemactionv2.RestoreItemAction, error)
+ GetRestoreItemActions() ([]velero.RestoreItemAction, error)
// GetRestoreItemAction returns the restore item action plugin for name.
- GetRestoreItemAction(name string) (restoreitemactionv2.RestoreItemAction, error)
+ GetRestoreItemAction(name string) (velero.RestoreItemAction, error)
// GetDeleteItemActions returns all delete item action plugins.
- GetDeleteItemActions() ([]deleteitemactionv2.DeleteItemAction, error)
+ GetDeleteItemActions() ([]velero.DeleteItemAction, error)
// GetDeleteItemAction returns the delete item action plugin for name.
- GetDeleteItemAction(name string) (deleteitemactionv2.DeleteItemAction, error)
+ GetDeleteItemAction(name string) (velero.DeleteItemAction, error)
// CleanupClients terminates all of the Manager's running plugin processes.
CleanupClients()
@@ -135,82 +129,39 @@ func (m *manager) getRestartableProcess(kind framework.PluginKind, name string)
return restartableProcess, nil
}
- type RestartableObjectStore struct {
- kind framework.PluginKind
- // Get returns a restartable ObjectStore for the given name and process, wrapping if necessary
- Get func(name string, restartableProcess RestartableProcess) objectstorev2.ObjectStore
- }
- func (m *manager) restartableObjectStores() []RestartableObjectStore {
- return []RestartableObjectStore{
- {
- kind: framework.PluginKindObjectStoreV2,
- Get: newRestartableObjectStoreV2,
- },
- {
- kind: framework.PluginKindObjectStore,
- Get: newAdaptedV1ObjectStore, // Adapt v1 plugin to v2
- },
- }
- }
// GetObjectStore returns a restartableObjectStore for name.
- func (m *manager) GetObjectStore(name string) (objectstorev2.ObjectStore, error) {
+ func (m *manager) GetObjectStore(name string) (velero.ObjectStore, error) {
name = sanitizeName(name)
- for _, restartableObjStore := range m.restartableObjectStores() {
- restartableProcess, err := m.getRestartableProcess(restartableObjStore.kind, name)
- if err != nil {
- // Check if plugin was not found
- if errors.Is(err, &pluginNotFoundError{}) {
- continue
- }
- return nil, err
- }
- return restartableObjStore.Get(name, restartableProcess), nil
- }
- return nil, fmt.Errorf("unable to get valid ObjectStore for %q", name)
}
- type RestartableVolumeSnapshotter struct {
- kind framework.PluginKind
- // Get returns a restartable VolumeSnapshotter for the given name and process, wrapping if necessary
- Get func(name string, restartableProcess RestartableProcess) volumesnapshotterv2.VolumeSnapshotter
- }
- func (m *manager) restartableVolumeSnapshotters() []RestartableVolumeSnapshotter {
- return []RestartableVolumeSnapshotter{
- {
- kind: framework.PluginKindVolumeSnapshotterV2,
- Get: newRestartableVolumeSnapshotterV2,
- },
- {
- kind: framework.PluginKindVolumeSnapshotter,
- Get: newAdaptedV1VolumeSnapshotter, // Adapt v1 plugin to v2
- },
+ restartableProcess, err := m.getRestartableProcess(framework.PluginKindObjectStore, name)
+ if err != nil {
+ return nil, err
+ }
+ r := newRestartableObjectStore(name, restartableProcess)
+ return r, nil
}
// GetVolumeSnapshotter returns a restartableVolumeSnapshotter for name.
- func (m *manager) GetVolumeSnapshotter(name string) (volumesnapshotterv2.VolumeSnapshotter, error) {
+ func (m *manager) GetVolumeSnapshotter(name string) (velero.VolumeSnapshotter, error) {
name = sanitizeName(name)
- for _, restartableVolumeSnapshotter := range m.restartableVolumeSnapshotters() {
- restartableProcess, err := m.getRestartableProcess(restartableVolumeSnapshotter.kind, name)
- if err != nil {
- // Check if plugin was not found
- if errors.Is(err, &pluginNotFoundError{}) {
- continue
- }
- return nil, err
- }
- return restartableVolumeSnapshotter.Get(name, restartableProcess), nil
+ restartableProcess, err := m.getRestartableProcess(framework.PluginKindVolumeSnapshotter, name)
+ if err != nil {
+ return nil, err
+ }
- return nil, fmt.Errorf("unable to get valid VolumeSnapshotter for %q", name)
+ r := newRestartableVolumeSnapshotter(name, restartableProcess)
+ return r, nil
}
// GetBackupItemActions returns all backup item actions as restartableBackupItemActions.
- func (m *manager) GetBackupItemActions() ([]backupitemactionv2.BackupItemAction, error) {
- list := m.registry.ListForKinds(framework.BackupItemActionKinds())
- actions := make([]backupitemactionv2.BackupItemAction, 0, len(list))
+ func (m *manager) GetBackupItemActions() ([]velero.BackupItemAction, error) {
+ list := m.registry.List(framework.PluginKindBackupItemAction)
+ actions := make([]velero.BackupItemAction, 0, len(list))
for i := range list {
id := list[i]
@@ -226,47 +177,24 @@ func (m *manager) GetBackupItemActions() ([]backupitemactionv2.BackupItemAction,
return actions, nil
}
- type RestartableBackupItemAction struct {
- kind framework.PluginKind
- // Get returns a restartable BackupItemAction for the given name and process, wrapping if necessary
- Get func(name string, restartableProcess RestartableProcess) backupitemactionv2.BackupItemAction
- }
- func (m *manager) restartableBackupItemActions() []RestartableBackupItemAction {
- return []RestartableBackupItemAction{
- {
- kind: framework.PluginKindBackupItemActionV2,
- Get: newRestartableBackupItemActionV2,
- },
- {
- kind: framework.PluginKindBackupItemAction,
- Get: newAdaptedV1BackupItemAction, // Adapt v1 plugin to v2
- },
- }
- }
// GetBackupItemAction returns a restartableBackupItemAction for name.
- func (m *manager) GetBackupItemAction(name string) (backupitemactionv2.BackupItemAction, error) {
+ func (m *manager) GetBackupItemAction(name string) (velero.BackupItemAction, error) {
name = sanitizeName(name)
- for _, restartableBackupItemAction := range m.restartableBackupItemActions() {
- restartableProcess, err := m.getRestartableProcess(restartableBackupItemAction.kind, name)
- if err != nil {
- // Check if plugin was not found
- if errors.Is(err, &pluginNotFoundError{}) {
- continue
- }
- return nil, err
- }
- return restartableBackupItemAction.Get(name, restartableProcess), nil
+ restartableProcess, err := m.getRestartableProcess(framework.PluginKindBackupItemAction, name)
+ if err != nil {
+ return nil, err
+ }
- return nil, fmt.Errorf("unable to get valid BackupItemAction for %q", name)
+ r := newRestartableBackupItemAction(name, restartableProcess)
+ return r, nil
}
// GetRestoreItemActions returns all restore item actions as restartableRestoreItemActions.
- func (m *manager) GetRestoreItemActions() ([]restoreitemactionv2.RestoreItemAction, error) {
- list := m.registry.ListForKinds(framework.RestoreItemActionKinds())
+ func (m *manager) GetRestoreItemActions() ([]velero.RestoreItemAction, error) {
+ list := m.registry.List(framework.PluginKindRestoreItemAction)
- actions := make([]restoreitemactionv2.RestoreItemAction, 0, len(list))
+ actions := make([]velero.RestoreItemAction, 0, len(list))
for i := range list {
id := list[i]
@@ -282,47 +210,24 @@ func (m *manager) GetRestoreItemActions() ([]restoreitemactionv2.RestoreItemActi
return actions, nil
}
- type RestartableRestoreItemAction struct {
- kind framework.PluginKind
- // Get returns a restartable RestoreItemAction for the given name and process, wrapping if necessary
- Get func(name string, restartableProcess RestartableProcess) restoreitemactionv2.RestoreItemAction
- }
- func (m *manager) restartableRestoreItemActions() []RestartableRestoreItemAction {
- return []RestartableRestoreItemAction{
- {
- kind: framework.PluginKindRestoreItemActionV2,
- Get: newRestartableRestoreItemActionV2,
- },
- {
- kind: framework.PluginKindRestoreItemAction,
- Get: newAdaptedV1RestoreItemAction, // Adapt v1 plugin to v2
- },
- }
- }
// GetRestoreItemAction returns a restartableRestoreItemAction for name.
- func (m *manager) GetRestoreItemAction(name string) (restoreitemactionv2.RestoreItemAction, error) {
+ func (m *manager) GetRestoreItemAction(name string) (velero.RestoreItemAction, error) {
name = sanitizeName(name)
- for _, restartableRestoreItemAction := range m.restartableRestoreItemActions() {
- restartableProcess, err := m.getRestartableProcess(restartableRestoreItemAction.kind, name)
- if err != nil {
- // Check if plugin was not found
- if errors.Is(err, &pluginNotFoundError{}) {
- continue
- }
- return nil, err
- }
- return restartableRestoreItemAction.Get(name, restartableProcess), nil
+ restartableProcess, err := m.getRestartableProcess(framework.PluginKindRestoreItemAction, name)
+ if err != nil {
+ return nil, err
+ }
- return nil, fmt.Errorf("unable to get valid RestoreItemAction for %q", name)
+ r := newRestartableRestoreItemAction(name, restartableProcess)
+ return r, nil
}
// GetDeleteItemActions returns all delete item actions as restartableDeleteItemActions.
- func (m *manager) GetDeleteItemActions() ([]deleteitemactionv2.DeleteItemAction, error) {
- list := m.registry.ListForKinds(framework.DeleteItemActionKinds())
+ func (m *manager) GetDeleteItemActions() ([]velero.DeleteItemAction, error) {
+ list := m.registry.List(framework.PluginKindDeleteItemAction)
- actions := make([]deleteitemactionv2.DeleteItemAction, 0, len(list))
+ actions := make([]velero.DeleteItemAction, 0, len(list))
for i := range list {
id := list[i]
@@ -338,40 +243,17 @@ func (m *manager) GetDeleteItemActions() ([]deleteitemactionv2.DeleteItemAction,
return actions, nil
}
- type RestartableDeleteItemAction struct {
- kind framework.PluginKind
- // Get returns a restartable DeleteItemAction for the given name and process, wrapping if necessary
- Get func(name string, restartableProcess RestartableProcess) deleteitemactionv2.DeleteItemAction
- }
- func (m *manager) restartableDeleteItemActions() []RestartableDeleteItemAction {
- return []RestartableDeleteItemAction{
- {
- kind: framework.PluginKindDeleteItemActionV2,
- Get: newRestartableDeleteItemActionV2,
- },
- {
- kind: framework.PluginKindDeleteItemAction,
- Get: newAdaptedV1DeleteItemAction, // Adapt v1 plugin to v2
- },
- }
- }
// GetDeleteItemAction returns a restartableDeleteItemAction for name.
- func (m *manager) GetDeleteItemAction(name string) (deleteitemactionv2.DeleteItemAction, error) {
+ func (m *manager) GetDeleteItemAction(name string) (velero.DeleteItemAction, error) {
name = sanitizeName(name)
- for _, restartableDeleteItemAction := range m.restartableDeleteItemActions() {
- restartableProcess, err := m.getRestartableProcess(restartableDeleteItemAction.kind, name)
- if err != nil {
- // Check if plugin was not found
- if errors.Is(err, &pluginNotFoundError{}) {
- continue
- }
- return nil, err
- }
- return restartableDeleteItemAction.Get(name, restartableProcess), nil
+ restartableProcess, err := m.getRestartableProcess(framework.PluginKindDeleteItemAction, name)
+ if err != nil {
+ return nil, err
+ }
- return nil, fmt.Errorf("unable to get valid DeleteItemAction for %q", name)
+ r := newRestartableDeleteItemAction(name, restartableProcess)
+ return r, nil
}
// sanitizeName adds "velero.io" to legacy plugins that weren't namespaced.
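The removed lookups above tried the v2 plugin kind first and fell back to the v1 kind, skipping "plugin not found" errors; the shape of that fallback in miniature, with hypothetical stand-in types rather than Velero's real registry:

```go
package main

import (
	"errors"
	"fmt"
)

// pluginKind and the registry map are hypothetical stand-ins for
// framework.PluginKind and the manager's plugin registry.
type pluginKind string

var errNotFound = errors.New("plugin not found")

// getProcess simulates m.getRestartableProcess: only kinds present in
// the registry resolve to a process.
func getProcess(registry map[pluginKind]string, kind pluginKind, name string) (string, error) {
	if p, ok := registry[kind]; ok {
		return p, nil
	}
	return "", errNotFound
}

// lookupWithFallback mirrors the removed v2-then-v1 loop: try each kind
// in preference order, skip not-found, and fail only when no kind matches.
func lookupWithFallback(registry map[pluginKind]string, kinds []pluginKind, name string) (string, error) {
	for _, kind := range kinds {
		p, err := getProcess(registry, kind, name)
		if err != nil {
			if errors.Is(err, errNotFound) {
				continue // this kind isn't registered; try the next one
			}
			return "", err
		}
		return p, nil
	}
	return "", fmt.Errorf("unable to get valid plugin for %q", name)
}

func main() {
	registry := map[pluginKind]string{"ObjectStore": "v1-process"}
	p, _ := lookupWithFallback(registry, []pluginKind{"ObjectStoreV2", "ObjectStore"}, "velero.io/aws")
	fmt.Println(p) // v1-process
}
```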


@@ -34,8 +34,6 @@ type Registry interface {
DiscoverPlugins() error
// List returns all PluginIdentifiers for kind.
List(kind framework.PluginKind) []framework.PluginIdentifier
- // List returns all PluginIdentifiers for a list of kinds.
- ListForKinds(kinds []framework.PluginKind) (list []framework.PluginIdentifier)
// Get returns the PluginIdentifier for kind and name.
Get(kind framework.PluginKind, name string) (framework.PluginIdentifier, error)
}
@@ -110,13 +108,6 @@ func (r *registry) discoverPlugins(commands []string) error {
return nil
}
- func (r *registry) ListForKinds(kinds []framework.PluginKind) (list []framework.PluginIdentifier) {
- for _, kind := range kinds {
- list = append(list, r.pluginsByKind[kind]...)
- }
- return
- }
// List returns info about all plugin binaries that implement the given
// PluginKind.
func (r *registry) List(kind framework.PluginKind) []framework.PluginIdentifier {


@@ -1,105 +0,0 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package clientmgmt
import (
"context"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/runtime"
api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
backupitemactionv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v1"
backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
)
type restartableAdaptedV1BackupItemAction struct {
key kindAndName
sharedPluginProcess RestartableProcess
}
// newAdaptedV1BackupItemAction returns a new restartableAdaptedV1BackupItemAction.
func newAdaptedV1BackupItemAction(
name string, sharedPluginProcess RestartableProcess) backupitemactionv2.BackupItemAction {
r := &restartableAdaptedV1BackupItemAction{
key: kindAndName{kind: framework.PluginKindBackupItemAction, name: name},
sharedPluginProcess: sharedPluginProcess,
}
return r
}
// getBackupItemAction returns the backup item action for this restartableAdaptedV1BackupItemAction.
// It does *not* restart the plugin process.
func (r *restartableAdaptedV1BackupItemAction) getBackupItemAction() (backupitemactionv1.BackupItemAction, error) {
plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
if err != nil {
return nil, err
}
backupItemAction, ok := plugin.(backupitemactionv1.BackupItemAction)
if !ok {
return nil, errors.Errorf("%T is not a BackupItemAction!", plugin)
}
return backupItemAction, nil
}
// getDelegate restarts the plugin process (if needed) and returns the backup item
// action for this restartableAdaptedV1BackupItemAction.
func (r *restartableAdaptedV1BackupItemAction) getDelegate() (backupitemactionv1.BackupItemAction, error) {
if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
return nil, err
}
return r.getBackupItemAction()
}
// AppliesTo restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1BackupItemAction) AppliesTo() (velero.ResourceSelector, error) {
delegate, err := r.getDelegate()
if err != nil {
return velero.ResourceSelector{}, err
}
return delegate.AppliesTo()
}
// Execute restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1BackupItemAction) Execute(
item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, nil, err
}
return delegate.Execute(item, backup)
}
// Version 2: simply discard ctx and call version 1 function.
// ExecuteV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1BackupItemAction) ExecuteV2(
ctx context.Context, item runtime.Unstructured, backup *api.Backup) (
runtime.Unstructured, []velero.ResourceIdentifier, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, nil, err
}
return delegate.Execute(item, backup)
}


@@ -1,100 +0,0 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package clientmgmt
import (
"context"
"github.com/pkg/errors"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
deleteitemactionv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v1"
deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
)
type restartableAdaptedV1DeleteItemAction struct {
key kindAndName
sharedPluginProcess RestartableProcess
config map[string]string
}
// newAdaptedV1DeleteItemAction returns a new restartableAdaptedV1DeleteItemAction.
func newAdaptedV1DeleteItemAction(
name string, sharedPluginProcess RestartableProcess) deleteitemactionv2.DeleteItemAction {
r := &restartableAdaptedV1DeleteItemAction{
key: kindAndName{kind: framework.PluginKindDeleteItemAction, name: name},
sharedPluginProcess: sharedPluginProcess,
}
return r
}
// getDeleteItemAction returns the delete item action for this restartableDeleteItemAction.
// It does *not* restart the plugin process.
func (r *restartableAdaptedV1DeleteItemAction) getDeleteItemAction() (deleteitemactionv1.DeleteItemAction, error) {
plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
if err != nil {
return nil, err
}
deleteItemAction, ok := plugin.(deleteitemactionv1.DeleteItemAction)
if !ok {
return nil, errors.Errorf("%T is not a DeleteItemAction!", plugin)
}
return deleteItemAction, nil
}
// getDelegate restarts the plugin process (if needed) and returns the delete item action for this restartableDeleteItemAction.
func (r *restartableAdaptedV1DeleteItemAction) getDelegate() (deleteitemactionv1.DeleteItemAction, error) {
if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
return nil, err
}
return r.getDeleteItemAction()
}
// AppliesTo restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1DeleteItemAction) AppliesTo() (velero.ResourceSelector, error) {
delegate, err := r.getDelegate()
if err != nil {
return velero.ResourceSelector{}, err
}
return delegate.AppliesTo()
}
// Execute restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1DeleteItemAction) Execute(input *velero.DeleteItemActionExecuteInput) error {
delegate, err := r.getDelegate()
if err != nil {
return err
}
return delegate.Execute(input)
}
// ExecuteV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1DeleteItemAction) ExecuteV2(
ctx context.Context, input *velero.DeleteItemActionExecuteInput) error {
delegate, err := r.getDelegate()
if err != nil {
return err
}
return delegate.Execute(input)
}


@@ -1,246 +0,0 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package clientmgmt
import (
"context"
"io"
"time"
"github.com/pkg/errors"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
objectstorev1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v1"
objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
)
// restartableAdaptedV1ObjectStore adapts a version 1 object store plugin to the version 2 interface.
type restartableAdaptedV1ObjectStore struct {
restartableObjectStore
}
// newAdaptedV1ObjectStore returns a new restartableAdaptedV1ObjectStore.
func newAdaptedV1ObjectStore(name string, sharedPluginProcess RestartableProcess) objectstorev2.ObjectStore {
key := kindAndName{kind: framework.PluginKindObjectStore, name: name}
r := &restartableAdaptedV1ObjectStore{
restartableObjectStore: restartableObjectStore{
key: key,
sharedPluginProcess: sharedPluginProcess,
},
}
// Register our reinitializer so we can reinitialize after a restart with r.config.
sharedPluginProcess.addReinitializer(key, r)
return r
}
// reinitialize reinitializes a re-dispensed plugin using the initial data passed to Init().
func (r *restartableAdaptedV1ObjectStore) reinitialize(dispensed interface{}) error {
objectStore, ok := dispensed.(objectstorev1.ObjectStore)
if !ok {
return errors.Errorf("%T is not a ObjectStore!", dispensed)
}
return r.init(objectStore, r.config)
}
// getObjectStore returns the object store for this restartableObjectStore.
// It does *not* restart the plugin process.
func (r *restartableAdaptedV1ObjectStore) getObjectStore() (objectstorev1.ObjectStore, error) {
plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
if err != nil {
return nil, err
}
objectStore, ok := plugin.(objectstorev1.ObjectStore)
if !ok {
return nil, errors.Errorf("%T is not a ObjectStore!", plugin)
}
return objectStore, nil
}
// getDelegate restarts the plugin process (if needed) and returns the object store for this restartableObjectStore.
func (r *restartableAdaptedV1ObjectStore) getDelegate() (objectstorev1.ObjectStore, error) {
if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
return nil, err
}
return r.getObjectStore()
}
// Init initializes the object store instance using config. If this is the first invocation, r stores config for future
// reinitialization needs. Init does NOT restart the shared plugin process. Init may only be called once.
func (r *restartableAdaptedV1ObjectStore) Init(config map[string]string) error {
if r.config != nil {
return errors.Errorf("already initialized")
}
// Not using getDelegate() to avoid possible infinite recursion
delegate, err := r.getObjectStore()
if err != nil {
return err
}
r.config = config
return r.init(delegate, config)
}
func (r *restartableAdaptedV1ObjectStore) InitV2(ctx context.Context, config map[string]string) error {
return r.Init(config)
}
// init calls Init on objectStore with config. This is split out from Init() so that both Init() and reinitialize() may
// call it using a specific ObjectStore.
func (r *restartableAdaptedV1ObjectStore) init(objectStore objectstorev1.ObjectStore, config map[string]string) error {
return objectStore.Init(config)
}
// PutObject restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) PutObject(bucket string, key string, body io.Reader) error {
delegate, err := r.getDelegate()
if err != nil {
return err
}
return delegate.PutObject(bucket, key, body)
}
// ObjectExists restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) ObjectExists(bucket, key string) (bool, error) {
delegate, err := r.getDelegate()
if err != nil {
return false, err
}
return delegate.ObjectExists(bucket, key)
}
// GetObject restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) GetObject(bucket string, key string) (io.ReadCloser, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.GetObject(bucket, key)
}
// ListCommonPrefixes restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) ListCommonPrefixes(
bucket string, prefix string, delimiter string) ([]string, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.ListCommonPrefixes(bucket, prefix, delimiter)
}
// ListObjects restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) ListObjects(bucket string, prefix string) ([]string, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.ListObjects(bucket, prefix)
}
// DeleteObject restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) DeleteObject(bucket string, key string) error {
delegate, err := r.getDelegate()
if err != nil {
return err
}
return delegate.DeleteObject(bucket, key)
}
// CreateSignedURL restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) CreateSignedURL(
bucket string, key string, ttl time.Duration) (string, error) {
delegate, err := r.getDelegate()
if err != nil {
return "", err
}
return delegate.CreateSignedURL(bucket, key, ttl)
}
// Version 2 methods simply discard ctx and delegate to the version 1 implementation.
// PutObjectV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) PutObjectV2(
ctx context.Context, bucket string, key string, body io.Reader) error {
delegate, err := r.getDelegate()
if err != nil {
return err
}
return delegate.PutObject(bucket, key, body)
}
// ObjectExistsV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) ObjectExistsV2(ctx context.Context, bucket, key string) (bool, error) {
delegate, err := r.getDelegate()
if err != nil {
return false, err
}
return delegate.ObjectExists(bucket, key)
}
// GetObjectV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) GetObjectV2(
ctx context.Context, bucket string, key string) (io.ReadCloser, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.GetObject(bucket, key)
}
// ListCommonPrefixesV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) ListCommonPrefixesV2(
ctx context.Context, bucket string, prefix string, delimiter string) ([]string, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.ListCommonPrefixes(bucket, prefix, delimiter)
}
// ListObjectsV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) ListObjectsV2(
ctx context.Context, bucket string, prefix string) ([]string, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.ListObjects(bucket, prefix)
}
// DeleteObjectV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) DeleteObjectV2(ctx context.Context, bucket string, key string) error {
delegate, err := r.getDelegate()
if err != nil {
return err
}
return delegate.DeleteObject(bucket, key)
}
// CreateSignedURLV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) CreateSignedURLV2(
ctx context.Context, bucket string, key string, ttl time.Duration) (string, error) {
delegate, err := r.getDelegate()
if err != nil {
return "", err
}
return delegate.CreateSignedURL(bucket, key, ttl)
}


@@ -1,100 +0,0 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package clientmgmt
import (
"context"
"github.com/pkg/errors"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
restoreitemactionv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v1"
restoreitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
)
// restartableAdaptedV1RestoreItemAction adapts a version 1 restore item action plugin to the version 2 interface.
type restartableAdaptedV1RestoreItemAction struct {
key kindAndName
sharedPluginProcess RestartableProcess
config map[string]string
}
// newAdaptedV1RestoreItemAction returns a new restartableAdaptedV1RestoreItemAction.
func newAdaptedV1RestoreItemAction(
name string, sharedPluginProcess RestartableProcess) restoreitemactionv2.RestoreItemAction {
r := &restartableAdaptedV1RestoreItemAction{
key: kindAndName{kind: framework.PluginKindRestoreItemAction, name: name},
sharedPluginProcess: sharedPluginProcess,
}
return r
}
// getRestoreItemAction returns the restore item action for this restartableRestoreItemAction.
// It does *not* restart the plugin process.
func (r *restartableAdaptedV1RestoreItemAction) getRestoreItemAction() (restoreitemactionv1.RestoreItemAction, error) {
plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
if err != nil {
return nil, err
}
restoreItemAction, ok := plugin.(restoreitemactionv1.RestoreItemAction)
if !ok {
return nil, errors.Errorf("%T is not a RestoreItemAction!", plugin)
}
return restoreItemAction, nil
}
// getDelegate restarts the plugin process (if needed) and returns the restore item action for this restartableRestoreItemAction.
func (r *restartableAdaptedV1RestoreItemAction) getDelegate() (restoreitemactionv1.RestoreItemAction, error) {
if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
return nil, err
}
return r.getRestoreItemAction()
}
// AppliesTo restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1RestoreItemAction) AppliesTo() (velero.ResourceSelector, error) {
delegate, err := r.getDelegate()
if err != nil {
return velero.ResourceSelector{}, err
}
return delegate.AppliesTo()
}
// Execute restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1RestoreItemAction) Execute(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.Execute(input)
}
// ExecuteV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1RestoreItemAction) ExecuteV2(
ctx context.Context, input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.Execute(input)
}


@@ -1,233 +0,0 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package clientmgmt
import (
"context"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/runtime"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
volumesnapshotterv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v1"
volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
)
// restartableAdaptedV1VolumeSnapshotter adapts a version 1 volume snapshotter plugin to the version 2 interface.
type restartableAdaptedV1VolumeSnapshotter struct {
key kindAndName
sharedPluginProcess RestartableProcess
config map[string]string
}
// newAdaptedV1VolumeSnapshotter returns a new restartableAdaptedV1VolumeSnapshotter.
func newAdaptedV1VolumeSnapshotter(
name string, sharedPluginProcess RestartableProcess) volumesnapshotterv2.VolumeSnapshotter {
key := kindAndName{kind: framework.PluginKindVolumeSnapshotter, name: name}
r := &restartableAdaptedV1VolumeSnapshotter{
key: key,
sharedPluginProcess: sharedPluginProcess,
}
// Register our reinitializer so we can reinitialize after a restart with r.config.
sharedPluginProcess.addReinitializer(key, r)
return r
}
// reinitialize reinitializes a re-dispensed plugin using the initial data passed to Init().
func (r *restartableAdaptedV1VolumeSnapshotter) reinitialize(dispensed interface{}) error {
volumeSnapshotter, ok := dispensed.(volumesnapshotterv1.VolumeSnapshotter)
if !ok {
return errors.Errorf("%T is not a VolumeSnapshotter!", dispensed)
}
return r.init(volumeSnapshotter, r.config)
}
// getVolumeSnapshotter returns the volume snapshotter for this restartableVolumeSnapshotter.
// It does *not* restart the plugin process.
func (r *restartableAdaptedV1VolumeSnapshotter) getVolumeSnapshotter() (volumesnapshotterv1.VolumeSnapshotter, error) {
plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
if err != nil {
return nil, err
}
volumeSnapshotter, ok := plugin.(volumesnapshotterv1.VolumeSnapshotter)
if !ok {
return nil, errors.Errorf("%T is not a VolumeSnapshotter!", plugin)
}
return volumeSnapshotter, nil
}
// getDelegate restarts the plugin process (if needed) and returns the volume snapshotter
// for this restartableVolumeSnapshotter.
func (r *restartableAdaptedV1VolumeSnapshotter) getDelegate() (volumesnapshotterv1.VolumeSnapshotter, error) {
if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
return nil, err
}
return r.getVolumeSnapshotter()
}
// Init initializes the volume snapshotter instance using config. If this is the first invocation,
// r stores config for future reinitialization needs. Init does NOT restart the shared plugin process.
// Init may only be called once.
func (r *restartableAdaptedV1VolumeSnapshotter) Init(config map[string]string) error {
if r.config != nil {
return errors.Errorf("already initialized")
}
// Not using getDelegate() to avoid possible infinite recursion
delegate, err := r.getVolumeSnapshotter()
if err != nil {
return err
}
r.config = config
return r.init(delegate, config)
}
// init calls Init on volumeSnapshotter with config. This is split out from Init() so that both Init()
// and reinitialize() may call it using a specific VolumeSnapshotter.
func (r *restartableAdaptedV1VolumeSnapshotter) init(
volumeSnapshotter volumesnapshotterv1.VolumeSnapshotter, config map[string]string) error {
return volumeSnapshotter.Init(config)
}
// CreateVolumeFromSnapshot restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) CreateVolumeFromSnapshot(
snapshotID string, volumeType string, volumeAZ string, iops *int64) (volumeID string, err error) {
delegate, err := r.getDelegate()
if err != nil {
return "", err
}
return delegate.CreateVolumeFromSnapshot(snapshotID, volumeType, volumeAZ, iops)
}
// GetVolumeID restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) GetVolumeID(pv runtime.Unstructured) (string, error) {
delegate, err := r.getDelegate()
if err != nil {
return "", err
}
return delegate.GetVolumeID(pv)
}
// SetVolumeID restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) SetVolumeID(
pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.SetVolumeID(pv, volumeID)
}
// GetVolumeInfo restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) GetVolumeInfo(
volumeID string, volumeAZ string) (string, *int64, error) {
delegate, err := r.getDelegate()
if err != nil {
return "", nil, err
}
return delegate.GetVolumeInfo(volumeID, volumeAZ)
}
// CreateSnapshot restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) CreateSnapshot(
volumeID string, volumeAZ string, tags map[string]string) (snapshotID string, err error) {
delegate, err := r.getDelegate()
if err != nil {
return "", err
}
return delegate.CreateSnapshot(volumeID, volumeAZ, tags)
}
// DeleteSnapshot restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) DeleteSnapshot(snapshotID string) error {
delegate, err := r.getDelegate()
if err != nil {
return err
}
return delegate.DeleteSnapshot(snapshotID)
}
// Version 2 methods simply discard ctx and call the corresponding version 1 function.
func (r *restartableAdaptedV1VolumeSnapshotter) InitV2(ctx context.Context, config map[string]string) error {
return r.Init(config)
}
// CreateVolumeFromSnapshotV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) CreateVolumeFromSnapshotV2(
ctx context.Context, snapshotID string, volumeType string, volumeAZ string, iops *int64) (volumeID string, err error) {
delegate, err := r.getDelegate()
if err != nil {
return "", err
}
return delegate.CreateVolumeFromSnapshot(snapshotID, volumeType, volumeAZ, iops)
}
// GetVolumeIDV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) GetVolumeIDV2(
ctx context.Context, pv runtime.Unstructured) (string, error) {
delegate, err := r.getDelegate()
if err != nil {
return "", err
}
return delegate.GetVolumeID(pv)
}
// SetVolumeIDV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) SetVolumeIDV2(
ctx context.Context, pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.SetVolumeID(pv, volumeID)
}
// GetVolumeInfoV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) GetVolumeInfoV2(
ctx context.Context, volumeID string, volumeAZ string) (string, *int64, error) {
delegate, err := r.getDelegate()
if err != nil {
return "", nil, err
}
return delegate.GetVolumeInfo(volumeID, volumeAZ)
}
// CreateSnapshotV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) CreateSnapshotV2(
ctx context.Context, volumeID string, volumeAZ string, tags map[string]string) (snapshotID string, err error) {
delegate, err := r.getDelegate()
if err != nil {
return "", err
}
return delegate.CreateSnapshot(volumeID, volumeAZ, tags)
}
// DeleteSnapshotV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) DeleteSnapshotV2(ctx context.Context, snapshotID string) error {
delegate, err := r.getDelegate()
if err != nil {
return err
}
return delegate.DeleteSnapshot(snapshotID)
}


@@ -1,5 +1,5 @@
/*
Copyright 2018, 2021 the Velero contributors.
Copyright 2018 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -17,15 +17,12 @@ limitations under the License.
package clientmgmt
import (
"context"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/runtime"
api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
)
// restartableBackupItemAction is a backup item action for a given implementation (such as "pod"). It is associated with
@@ -37,11 +34,10 @@ type restartableBackupItemAction struct {
sharedPluginProcess RestartableProcess
}
// newRestartableBackupItemActionV2 returns a new restartableBackupItemAction.
func newRestartableBackupItemActionV2(
name string, sharedPluginProcess RestartableProcess) backupitemactionv2.BackupItemAction {
// newRestartableBackupItemAction returns a new restartableBackupItemAction.
func newRestartableBackupItemAction(name string, sharedPluginProcess RestartableProcess) *restartableBackupItemAction {
r := &restartableBackupItemAction{
key: kindAndName{kind: framework.PluginKindBackupItemActionV2, name: name},
key: kindAndName{kind: framework.PluginKindBackupItemAction, name: name},
sharedPluginProcess: sharedPluginProcess,
}
return r
@@ -49,13 +45,13 @@ func newRestartableBackupItemActionV2(
// getBackupItemAction returns the backup item action for this restartableBackupItemAction. It does *not* restart the
// plugin process.
func (r *restartableBackupItemAction) getBackupItemAction() (backupitemactionv2.BackupItemAction, error) {
func (r *restartableBackupItemAction) getBackupItemAction() (velero.BackupItemAction, error) {
plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
if err != nil {
return nil, err
}
backupItemAction, ok := plugin.(backupitemactionv2.BackupItemAction)
backupItemAction, ok := plugin.(velero.BackupItemAction)
if !ok {
return nil, errors.Errorf("%T is not a BackupItemAction!", plugin)
}
@@ -64,7 +60,7 @@ func (r *restartableBackupItemAction) getBackupItemAction() (backupitemactionv2.
}
// getDelegate restarts the plugin process (if needed) and returns the backup item action for this restartableBackupItemAction.
func (r *restartableBackupItemAction) getDelegate() (backupitemactionv2.BackupItemAction, error) {
func (r *restartableBackupItemAction) getDelegate() (velero.BackupItemAction, error) {
if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
return nil, err
}
@@ -91,13 +87,3 @@ func (r *restartableBackupItemAction) Execute(item runtime.Unstructured, backup
return delegate.Execute(item, backup)
}
// ExecuteV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableBackupItemAction) ExecuteV2(ctx context.Context, item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, nil, err
}
return delegate.ExecuteV2(ctx, item, backup)
}


@@ -17,13 +17,10 @@ limitations under the License.
package clientmgmt
import (
"context"
"github.com/pkg/errors"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
)
// restartableDeleteItemAction is a delete item action for a given implementation (such as "pod"). It is associated with
@@ -37,8 +34,7 @@ type restartableDeleteItemAction struct {
}
// newRestartableDeleteItemAction returns a new restartableDeleteItemAction.
func newRestartableDeleteItemActionV2(
name string, sharedPluginProcess RestartableProcess) deleteitemactionv2.DeleteItemAction {
func newRestartableDeleteItemAction(name string, sharedPluginProcess RestartableProcess) *restartableDeleteItemAction {
r := &restartableDeleteItemAction{
key: kindAndName{kind: framework.PluginKindDeleteItemAction, name: name},
sharedPluginProcess: sharedPluginProcess,
@@ -48,13 +44,13 @@ func newRestartableDeleteItemActionV2(
// getDeleteItemAction returns the delete item action for this restartableDeleteItemAction. It does *not* restart the
// plugin process.
func (r *restartableDeleteItemAction) getDeleteItemAction() (deleteitemactionv2.DeleteItemAction, error) {
func (r *restartableDeleteItemAction) getDeleteItemAction() (velero.DeleteItemAction, error) {
plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
if err != nil {
return nil, err
}
deleteItemAction, ok := plugin.(deleteitemactionv2.DeleteItemAction)
deleteItemAction, ok := plugin.(velero.DeleteItemAction)
if !ok {
return nil, errors.Errorf("%T is not a DeleteItemAction!", plugin)
}
@@ -63,7 +59,7 @@ func (r *restartableDeleteItemAction) getDeleteItemAction() (deleteitemactionv2.
}
// getDelegate restarts the plugin process (if needed) and returns the delete item action for this restartableDeleteItemAction.
func (r *restartableDeleteItemAction) getDelegate() (deleteitemactionv2.DeleteItemAction, error) {
func (r *restartableDeleteItemAction) getDelegate() (velero.DeleteItemAction, error) {
if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
return nil, err
}
@@ -90,13 +86,3 @@ func (r *restartableDeleteItemAction) Execute(input *velero.DeleteItemActionExec
return delegate.Execute(input)
}
// ExecuteV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableDeleteItemAction) ExecuteV2(ctx context.Context, input *velero.DeleteItemActionExecuteInput) error {
delegate, err := r.getDelegate()
if err != nil {
return err
}
return delegate.ExecuteV2(ctx, input)
}


@@ -1,5 +1,5 @@
/*
Copyright 2021 the Velero contributors.
Copyright 2018 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -17,14 +17,13 @@ limitations under the License.
package clientmgmt
import (
"context"
"io"
"time"
"github.com/pkg/errors"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
// restartableObjectStore is an object store for a given implementation (such as "aws"). It is associated with
@@ -39,9 +38,9 @@ type restartableObjectStore struct {
config map[string]string
}
// newRestartableObjectStoreV2 returns a new objectstorev2.ObjectStore for PluginKindObjectStoreV2
func newRestartableObjectStoreV2(name string, sharedPluginProcess RestartableProcess) objectstorev2.ObjectStore {
key := kindAndName{kind: framework.PluginKindObjectStoreV2, name: name}
// newRestartableObjectStore returns a new restartableObjectStore.
func newRestartableObjectStore(name string, sharedPluginProcess RestartableProcess) *restartableObjectStore {
key := kindAndName{kind: framework.PluginKindObjectStore, name: name}
r := &restartableObjectStore{
key: key,
sharedPluginProcess: sharedPluginProcess,
@@ -55,7 +54,7 @@ func newRestartableObjectStoreV2(name string, sharedPluginProcess RestartablePro
// reinitialize reinitializes a re-dispensed plugin using the initial data passed to Init().
func (r *restartableObjectStore) reinitialize(dispensed interface{}) error {
objectStore, ok := dispensed.(objectstorev2.ObjectStore)
objectStore, ok := dispensed.(velero.ObjectStore)
if !ok {
return errors.Errorf("%T is not a ObjectStore!", dispensed)
}
@@ -65,13 +64,13 @@ func (r *restartableObjectStore) reinitialize(dispensed interface{}) error {
// getObjectStore returns the object store for this restartableObjectStore. It does *not* restart the
// plugin process.
func (r *restartableObjectStore) getObjectStore() (objectstorev2.ObjectStore, error) {
func (r *restartableObjectStore) getObjectStore() (velero.ObjectStore, error) {
plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
if err != nil {
return nil, err
}
objectStore, ok := plugin.(objectstorev2.ObjectStore)
objectStore, ok := plugin.(velero.ObjectStore)
if !ok {
return nil, errors.Errorf("%T is not a ObjectStore!", plugin)
}
@@ -80,7 +79,7 @@ func (r *restartableObjectStore) getObjectStore() (objectstorev2.ObjectStore, er
}
// getDelegate restarts the plugin process (if needed) and returns the object store for this restartableObjectStore.
func (r *restartableObjectStore) getDelegate() (objectstorev2.ObjectStore, error) {
func (r *restartableObjectStore) getDelegate() (velero.ObjectStore, error) {
if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
return nil, err
}
@@ -106,15 +105,9 @@ func (r *restartableObjectStore) Init(config map[string]string) error {
return r.init(delegate, config)
}
// InitV2 initializes the object store instance using config. If this is the first invocation, r stores config for future
// reinitialization needs. Init does NOT restart the shared plugin process. Init may only be called once.
func (r *restartableObjectStore) InitV2(ctx context.Context, config map[string]string) error {
return r.Init(config)
}
// init calls Init on objectStore with config. This is split out from Init() so that both Init() and reinitialize() may
// call it using a specific ObjectStore.
func (r *restartableObjectStore) init(objectStore objectstorev2.ObjectStore, config map[string]string) error {
func (r *restartableObjectStore) init(objectStore velero.ObjectStore, config map[string]string) error {
return objectStore.Init(config)
}
@@ -180,68 +173,3 @@ func (r *restartableObjectStore) CreateSignedURL(bucket string, key string, ttl
}
return delegate.CreateSignedURL(bucket, key, ttl)
}
// Version 2
// PutObjectV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableObjectStore) PutObjectV2(ctx context.Context, bucket string, key string, body io.Reader) error {
delegate, err := r.getDelegate()
if err != nil {
return err
}
return delegate.PutObjectV2(ctx, bucket, key, body)
}
// ObjectExistsV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableObjectStore) ObjectExistsV2(ctx context.Context, bucket, key string) (bool, error) {
delegate, err := r.getDelegate()
if err != nil {
return false, err
}
return delegate.ObjectExistsV2(ctx, bucket, key)
}
// GetObjectV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableObjectStore) GetObjectV2(ctx context.Context, bucket string, key string) (io.ReadCloser, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.GetObjectV2(ctx, bucket, key)
}
// ListCommonPrefixesV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableObjectStore) ListCommonPrefixesV2(
ctx context.Context, bucket string, prefix string, delimiter string) ([]string, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.ListCommonPrefixesV2(ctx, bucket, prefix, delimiter)
}
// ListObjectsV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableObjectStore) ListObjectsV2(ctx context.Context, bucket string, prefix string) ([]string, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.ListObjectsV2(ctx, bucket, prefix)
}
// DeleteObjectV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableObjectStore) DeleteObjectV2(ctx context.Context, bucket string, key string) error {
delegate, err := r.getDelegate()
if err != nil {
return err
}
return delegate.DeleteObjectV2(ctx, bucket, key)
}
// CreateSignedURLV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableObjectStore) CreateSignedURLV2(ctx context.Context, bucket string, key string, ttl time.Duration) (string, error) {
delegate, err := r.getDelegate()
if err != nil {
return "", err
}
return delegate.CreateSignedURLV2(ctx, bucket, key, ttl)
}


@@ -1,5 +1,5 @@
/*
Copyright 2021 the Velero contributors.
Copyright 2018 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -17,13 +17,10 @@ limitations under the License.
package clientmgmt
import (
"context"
"github.com/pkg/errors"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
restoreitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
)
// restartableRestoreItemAction is a restore item action for a given implementation (such as "pod"). It is associated with
@@ -36,11 +33,10 @@ type restartableRestoreItemAction struct {
config map[string]string
}
// newRestartableRestoreItemActionV2 returns a new restartableRestoreItemAction.
func newRestartableRestoreItemActionV2(
name string, sharedPluginProcess RestartableProcess) restoreitemactionv2.RestoreItemAction {
// newRestartableRestoreItemAction returns a new restartableRestoreItemAction.
func newRestartableRestoreItemAction(name string, sharedPluginProcess RestartableProcess) *restartableRestoreItemAction {
r := &restartableRestoreItemAction{
key: kindAndName{kind: framework.PluginKindRestoreItemActionV2, name: name},
key: kindAndName{kind: framework.PluginKindRestoreItemAction, name: name},
sharedPluginProcess: sharedPluginProcess,
}
return r
@@ -48,13 +44,13 @@ func newRestartableRestoreItemActionV2(
// getRestoreItemAction returns the restore item action for this restartableRestoreItemAction. It does *not* restart the
// plugin process.
func (r *restartableRestoreItemAction) getRestoreItemAction() (restoreitemactionv2.RestoreItemAction, error) {
func (r *restartableRestoreItemAction) getRestoreItemAction() (velero.RestoreItemAction, error) {
plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
if err != nil {
return nil, err
}
restoreItemAction, ok := plugin.(restoreitemactionv2.RestoreItemAction)
restoreItemAction, ok := plugin.(velero.RestoreItemAction)
if !ok {
return nil, errors.Errorf("%T is not a RestoreItemAction!", plugin)
}
@@ -63,7 +59,7 @@ func (r *restartableRestoreItemAction) getRestoreItemAction() (restoreitemaction
}
// getDelegate restarts the plugin process (if needed) and returns the restore item action for this restartableRestoreItemAction.
func (r *restartableRestoreItemAction) getDelegate() (restoreitemactionv2.RestoreItemAction, error) {
func (r *restartableRestoreItemAction) getDelegate() (velero.RestoreItemAction, error) {
if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
return nil, err
}
@@ -90,14 +86,3 @@ func (r *restartableRestoreItemAction) Execute(input *velero.RestoreItemActionEx
return delegate.Execute(input)
}
// ExecuteV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableRestoreItemAction) ExecuteV2(
ctx context.Context, input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.ExecuteV2(ctx, input)
}
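`getRestoreItemAction` above retrieves the dispensed plugin as an `interface{}` and must downcast it with a checked type assertion, returning an error rather than panicking when the kind does not match. A small sketch of that comma-ok pattern, with `Action` and `podAction` as hypothetical stand-ins for the real interfaces:

```go
package main

import "fmt"

// Action is a stand-in for the minimal surface of a restore item action.
type Action interface {
	Execute(item string) (string, error)
}

type podAction struct{}

func (podAction) Execute(item string) (string, error) { return "restored:" + item, nil }

// asAction performs the checked downcast: a dispensed plugin arrives as
// interface{} and must be asserted to the expected interface, with the
// failure reported as an error instead of a runtime panic.
func asAction(plugin interface{}) (Action, error) {
	a, ok := plugin.(Action)
	if !ok {
		return nil, fmt.Errorf("%T is not a RestoreItemAction", plugin)
	}
	return a, nil
}

func main() {
	a, err := asAction(podAction{})
	fmt.Println(err == nil)
	out, _ := a.Execute("pod/nginx")
	fmt.Println(out)
	_, err = asAction(42) // an int never satisfies Action
	fmt.Println(err != nil)
}
```

The comma-ok form is what keeps a mismatched plugin registration from crashing the whole server process.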

View File

@@ -1,5 +1,5 @@
/*
Copyright 2021 the Velero contributors.
Copyright 2018 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -17,13 +17,11 @@ limitations under the License.
package clientmgmt
import (
"context"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/runtime"
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
// restartableVolumeSnapshotter is a volume snapshotter for a given implementation (such as "aws"). It is associated with
@@ -36,10 +34,9 @@ type restartableVolumeSnapshotter struct {
config map[string]string
}
// newRestartableVolumeSnapshotterV2 returns a new restartableVolumeSnapshotter.
func newRestartableVolumeSnapshotterV2(
name string, sharedPluginProcess RestartableProcess) volumesnapshotterv2.VolumeSnapshotter {
key := kindAndName{kind: framework.PluginKindVolumeSnapshotterV2, name: name}
// newRestartableVolumeSnapshotter returns a new restartableVolumeSnapshotter.
func newRestartableVolumeSnapshotter(name string, sharedPluginProcess RestartableProcess) *restartableVolumeSnapshotter {
key := kindAndName{kind: framework.PluginKindVolumeSnapshotter, name: name}
r := &restartableVolumeSnapshotter{
key: key,
sharedPluginProcess: sharedPluginProcess,
@@ -53,7 +50,7 @@ func newRestartableVolumeSnapshotterV2(
// reinitialize reinitializes a re-dispensed plugin using the initial data passed to Init().
func (r *restartableVolumeSnapshotter) reinitialize(dispensed interface{}) error {
volumeSnapshotter, ok := dispensed.(volumesnapshotterv2.VolumeSnapshotter)
volumeSnapshotter, ok := dispensed.(velero.VolumeSnapshotter)
if !ok {
return errors.Errorf("%T is not a VolumeSnapshotter!", dispensed)
}
@@ -62,13 +59,13 @@ func (r *restartableVolumeSnapshotter) reinitialize(dispensed interface{}) error
// getVolumeSnapshotter returns the volume snapshotter for this restartableVolumeSnapshotter. It does *not* restart the
// plugin process.
func (r *restartableVolumeSnapshotter) getVolumeSnapshotter() (volumesnapshotterv2.VolumeSnapshotter, error) {
func (r *restartableVolumeSnapshotter) getVolumeSnapshotter() (velero.VolumeSnapshotter, error) {
plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
if err != nil {
return nil, err
}
volumeSnapshotter, ok := plugin.(volumesnapshotterv2.VolumeSnapshotter)
volumeSnapshotter, ok := plugin.(velero.VolumeSnapshotter)
if !ok {
return nil, errors.Errorf("%T is not a VolumeSnapshotter!", plugin)
}
@@ -77,7 +74,7 @@ func (r *restartableVolumeSnapshotter) getVolumeSnapshotter() (volumesnapshotter
}
// getDelegate restarts the plugin process (if needed) and returns the volume snapshotter for this restartableVolumeSnapshotter.
func (r *restartableVolumeSnapshotter) getDelegate() (volumesnapshotterv2.VolumeSnapshotter, error) {
func (r *restartableVolumeSnapshotter) getDelegate() (velero.VolumeSnapshotter, error) {
if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
return nil, err
}
@@ -105,7 +102,7 @@ func (r *restartableVolumeSnapshotter) Init(config map[string]string) error {
// init calls Init on volumeSnapshotter with config. This is split out from Init() so that both Init() and reinitialize() may
// call it using a specific VolumeSnapshotter.
func (r *restartableVolumeSnapshotter) init(volumeSnapshotter volumesnapshotterv2.VolumeSnapshotter, config map[string]string) error {
func (r *restartableVolumeSnapshotter) init(volumeSnapshotter velero.VolumeSnapshotter, config map[string]string) error {
return volumeSnapshotter.Init(config)
}
@@ -162,67 +159,3 @@ func (r *restartableVolumeSnapshotter) DeleteSnapshot(snapshotID string) error {
}
return delegate.DeleteSnapshot(snapshotID)
}
// Version 2
func (r *restartableVolumeSnapshotter) InitV2(ctx context.Context, config map[string]string) error {
return r.Init(config)
}
// CreateVolumeFromSnapshotV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableVolumeSnapshotter) CreateVolumeFromSnapshotV2(
ctx context.Context, snapshotID string, volumeType string, volumeAZ string, iops *int64) (volumeID string, err error) {
delegate, err := r.getDelegate()
if err != nil {
return "", err
}
return delegate.CreateVolumeFromSnapshotV2(ctx, snapshotID, volumeType, volumeAZ, iops)
}
// GetVolumeIDV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableVolumeSnapshotter) GetVolumeIDV2(
ctx context.Context, pv runtime.Unstructured) (string, error) {
delegate, err := r.getDelegate()
if err != nil {
return "", err
}
return delegate.GetVolumeIDV2(ctx, pv)
}
// SetVolumeIDV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableVolumeSnapshotter) SetVolumeIDV2(
ctx context.Context, pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error) {
delegate, err := r.getDelegate()
if err != nil {
return nil, err
}
return delegate.SetVolumeIDV2(ctx, pv, volumeID)
}
// GetVolumeInfoV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableVolumeSnapshotter) GetVolumeInfoV2(
ctx context.Context, volumeID string, volumeAZ string) (string, *int64, error) {
delegate, err := r.getDelegate()
if err != nil {
return "", nil, err
}
return delegate.GetVolumeInfoV2(ctx, volumeID, volumeAZ)
}
// CreateSnapshotV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableVolumeSnapshotter) CreateSnapshotV2(
ctx context.Context, volumeID string, volumeAZ string, tags map[string]string) (snapshotID string, err error) {
delegate, err := r.getDelegate()
if err != nil {
return "", err
}
return delegate.CreateSnapshotV2(ctx, volumeID, volumeAZ, tags)
}
// DeleteSnapshotV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableVolumeSnapshotter) DeleteSnapshotV2(ctx context.Context, snapshotID string) error {
delegate, err := r.getDelegate()
if err != nil {
return err
}
return delegate.DeleteSnapshotV2(ctx, snapshotID)
}
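The snapshotter wrapper is the one restartable type with extra state: `Init` stores its config so that `reinitialize` can replay it on the fresh instance dispensed after a crash. A sketch of that replay, assuming simplified stand-in types (`snapshotter`, `restartableSnapshotter`):

```go
package main

import "fmt"

// snapshotter is a stand-in for a volume snapshotter's Init surface.
type snapshotter struct{ region string }

func (s *snapshotter) Init(config map[string]string) error {
	s.region = config["region"]
	return nil
}

// restartableSnapshotter remembers the config passed to Init so that
// reinitialize can replay it on a freshly dispensed instance, the pattern
// behind restartableVolumeSnapshotter.reinitialize in the hunk above.
type restartableSnapshotter struct {
	config   map[string]string
	delegate *snapshotter
}

func (r *restartableSnapshotter) Init(config map[string]string) error {
	r.config = config // saved for replay after restarts
	return r.delegate.Init(config)
}

func (r *restartableSnapshotter) reinitialize(dispensed interface{}) error {
	vs, ok := dispensed.(*snapshotter)
	if !ok {
		return fmt.Errorf("%T is not a snapshotter", dispensed)
	}
	r.delegate = vs
	return vs.Init(r.config) // replay the original config
}

func main() {
	r := &restartableSnapshotter{delegate: &snapshotter{}}
	_ = r.Init(map[string]string{"region": "us-east-1"})
	// Simulate a plugin restart: a brand-new instance is dispensed.
	fresh := &snapshotter{}
	_ = r.reinitialize(fresh)
	fmt.Println(fresh.region)
}
```

Without the replay, a restarted snapshotter would come back with no credentials or region and every subsequent call would fail.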

View File

@@ -1,5 +1,5 @@
/*
Copyright 2021 the Velero contributors.
Copyright 2019 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -76,13 +76,6 @@ func (c *BackupItemActionGRPCClient) AppliesTo() (velero.ResourceSelector, error
}
func (c *BackupItemActionGRPCClient) Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
return c.ExecuteV2(context.Background(), item, backup)
}
func (c *BackupItemActionGRPCClient) ExecuteV2(
ctx context.Context, item runtime.Unstructured, backup *api.Backup) (
runtime.Unstructured, []velero.ResourceIdentifier, error) {
itemJSON, err := json.Marshal(item.UnstructuredContent())
if err != nil {
return nil, nil, errors.WithStack(err)
@@ -99,7 +92,7 @@ func (c *BackupItemActionGRPCClient) ExecuteV2(
Backup: backupJSON,
}
res, err := c.grpcClient.Execute(ctx, req)
res, err := c.grpcClient.Execute(context.Background(), req)
if err != nil {
return nil, nil, fromGRPCError(err)
}

View File

@@ -26,7 +26,6 @@ import (
api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
)
// BackupItemActionGRPCServer implements the proto-generated BackupItemAction interface, and accepts
@@ -35,13 +34,13 @@ type BackupItemActionGRPCServer struct {
mux *serverMux
}
func (s *BackupItemActionGRPCServer) getImpl(name string) (backupitemactionv2.BackupItemAction, error) {
func (s *BackupItemActionGRPCServer) getImpl(name string) (velero.BackupItemAction, error) {
impl, err := s.mux.getHandler(name)
if err != nil {
return nil, err
}
itemAction, ok := impl.(backupitemactionv2.BackupItemAction)
itemAction, ok := impl.(velero.BackupItemAction)
if !ok {
return nil, errors.Errorf("%T is not a backup item action", impl)
}
@@ -99,7 +98,7 @@ func (s *BackupItemActionGRPCServer) Execute(ctx context.Context, req *proto.Exe
return nil, newGRPCError(errors.WithStack(err))
}
updatedItem, additionalItems, err := impl.ExecuteV2(ctx, &item, &backup)
updatedItem, additionalItems, err := impl.Execute(&item, &backup)
if err != nil {
return nil, newGRPCError(err)
}

View File

@@ -25,10 +25,9 @@ import (
proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
)
var _ deleteitemactionv2.DeleteItemAction = &DeleteItemActionGRPCClient{}
var _ velero.DeleteItemAction = &DeleteItemActionGRPCClient{}
// NewDeleteItemActionPlugin constructs a DeleteItemActionPlugin.
func NewDeleteItemActionPlugin(options ...PluginOption) *DeleteItemActionPlugin {
@@ -71,10 +70,6 @@ func (c *DeleteItemActionGRPCClient) AppliesTo() (velero.ResourceSelector, error
}
func (c *DeleteItemActionGRPCClient) Execute(input *velero.DeleteItemActionExecuteInput) error {
return c.ExecuteV2(context.Background(), input)
}
func (c *DeleteItemActionGRPCClient) ExecuteV2(ctx context.Context, input *velero.DeleteItemActionExecuteInput) error {
itemJSON, err := json.Marshal(input.Item.UnstructuredContent())
if err != nil {
return errors.WithStack(err)
@@ -92,7 +87,7 @@ func (c *DeleteItemActionGRPCClient) ExecuteV2(ctx context.Context, input *veler
}
// First return item is just an empty struct no matter what.
if _, err = c.grpcClient.Execute(ctx, req); err != nil {
if _, err = c.grpcClient.Execute(context.Background(), req); err != nil {
return fromGRPCError(err)
}

View File

@@ -26,7 +26,6 @@ import (
api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
)
// DeleteItemActionGRPCServer implements the proto-generated DeleteItemActionServer interface, and accepts
@@ -35,13 +34,13 @@ type DeleteItemActionGRPCServer struct {
mux *serverMux
}
func (s *DeleteItemActionGRPCServer) getImpl(name string) (deleteitemactionv2.DeleteItemAction, error) {
func (s *DeleteItemActionGRPCServer) getImpl(name string) (velero.DeleteItemAction, error) {
impl, err := s.mux.getHandler(name)
if err != nil {
return nil, err
}
itemAction, ok := impl.(deleteitemactionv2.DeleteItemAction)
itemAction, ok := impl.(velero.DeleteItemAction)
if !ok {
return nil, errors.Errorf("%T is not a delete item action", impl)
}
@@ -77,8 +76,7 @@ func (s *DeleteItemActionGRPCServer) AppliesTo(ctx context.Context, req *proto.D
}, nil
}
func (s *DeleteItemActionGRPCServer) Execute(
ctx context.Context, req *proto.DeleteItemActionExecuteRequest) (_ *proto.Empty, err error) {
func (s *DeleteItemActionGRPCServer) Execute(ctx context.Context, req *proto.DeleteItemActionExecuteRequest) (_ *proto.Empty, err error) {
defer func() {
if recoveredErr := handlePanic(recover()); recoveredErr != nil {
err = recoveredErr
@@ -103,7 +101,7 @@ func (s *DeleteItemActionGRPCServer) Execute(
return nil, newGRPCError(errors.WithStack(err))
}
if err := impl.ExecuteV2(ctx, &velero.DeleteItemActionExecuteInput{
if err := impl.Execute(&velero.DeleteItemActionExecuteInput{
Item: &item,
Backup: &backup,
}); err != nil {
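`Execute` above uses a named error return plus `defer`/`recover` so that a panic inside the plugin implementation is converted into a gRPC error instead of killing the server. The shape can be sketched in isolation (the `execute` helper here is illustrative, not the real `handlePanic`):

```go
package main

import (
	"errors"
	"fmt"
)

// execute converts a panic in the wrapped implementation into an ordinary
// error via the named return, the same defer/recover shape as the gRPC
// server's Execute methods in the diff above.
func execute(impl func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			// The deferred closure can still assign to the named return.
			err = errors.New(fmt.Sprint("plugin panicked: ", r))
		}
	}()
	impl()
	return nil
}

func main() {
	fmt.Println(execute(func() {}))              // nil: nothing panicked
	fmt.Println(execute(func() { panic("boom") })) // converted to an error
}
```

The named return is essential: a plain `return err` pattern could not be rewritten from inside the deferred closure.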

View File

@@ -69,13 +69,7 @@ func (c *ObjectStoreGRPCClient) Init(config map[string]string) error {
// PutObject creates a new object using the data in body within the specified
// object storage bucket with the given key.
func (c *ObjectStoreGRPCClient) PutObject(bucket, key string, body io.Reader) error {
return c.PutObjectV2(context.Background(), bucket, key, body)
}
// PutObjectV2 creates a new object using the data in body within the specified
// object storage bucket with the given key.
func (c *ObjectStoreGRPCClient) PutObjectV2(ctx context.Context, bucket, key string, body io.Reader) error {
stream, err := c.grpcClient.PutObject(ctx)
stream, err := c.grpcClient.PutObject(context.Background())
if err != nil {
return fromGRPCError(err)
}
@@ -104,18 +98,13 @@ func (c *ObjectStoreGRPCClient) PutObjectV2(ctx context.Context, bucket, key str
// ObjectExists checks if there is an object with the given key in the object storage bucket.
func (c *ObjectStoreGRPCClient) ObjectExists(bucket, key string) (bool, error) {
return c.ObjectExistsV2(context.Background(), bucket, key)
}
// ObjectExistsV2 checks if there is an object with the given key in the object storage bucket.
func (c *ObjectStoreGRPCClient) ObjectExistsV2(ctx context.Context, bucket, key string) (bool, error) {
req := &proto.ObjectExistsRequest{
Plugin: c.plugin,
Bucket: bucket,
Key: key,
}
res, err := c.grpcClient.ObjectExists(ctx, req)
res, err := c.grpcClient.ObjectExists(context.Background(), req)
if err != nil {
return false, err
}
@@ -126,19 +115,13 @@ func (c *ObjectStoreGRPCClient) ObjectExistsV2(ctx context.Context, bucket, key
// GetObject retrieves the object with the given key from the specified
// bucket in object storage.
func (c *ObjectStoreGRPCClient) GetObject(bucket, key string) (io.ReadCloser, error) {
return c.GetObjectV2(context.Background(), bucket, key)
}
// GetObjectV2 retrieves the object with the given key from the specified
// bucket in object storage.
func (c *ObjectStoreGRPCClient) GetObjectV2(ctx context.Context, bucket, key string) (io.ReadCloser, error) {
req := &proto.GetObjectRequest{
Plugin: c.plugin,
Bucket: bucket,
Key: key,
}
stream, err := c.grpcClient.GetObject(ctx, req)
stream, err := c.grpcClient.GetObject(context.Background(), req)
if err != nil {
return nil, fromGRPCError(err)
}
@@ -172,14 +155,6 @@ func (c *ObjectStoreGRPCClient) GetObjectV2(ctx context.Context, bucket, key str
// after the provided prefix and before the provided delimiter (this is
// often used to simulate a directory hierarchy in object storage).
func (c *ObjectStoreGRPCClient) ListCommonPrefixes(bucket, prefix, delimiter string) ([]string, error) {
return c.ListCommonPrefixesV2(context.Background(), bucket, prefix, delimiter)
}
// ListCommonPrefixesV2 gets a list of all object key prefixes that come
// after the provided prefix and before the provided delimiter (this is
// often used to simulate a directory hierarchy in object storage).
func (c *ObjectStoreGRPCClient) ListCommonPrefixesV2(
ctx context.Context, bucket, prefix, delimiter string) ([]string, error) {
req := &proto.ListCommonPrefixesRequest{
Plugin: c.plugin,
Bucket: bucket,
@@ -187,7 +162,7 @@ func (c *ObjectStoreGRPCClient) ListCommonPrefixesV2(
Delimiter: delimiter,
}
res, err := c.grpcClient.ListCommonPrefixes(ctx, req)
res, err := c.grpcClient.ListCommonPrefixes(context.Background(), req)
if err != nil {
return nil, fromGRPCError(err)
}
@@ -197,19 +172,13 @@ func (c *ObjectStoreGRPCClient) ListCommonPrefixesV2(
// ListObjects gets a list of all objects in bucket that have the same prefix.
func (c *ObjectStoreGRPCClient) ListObjects(bucket, prefix string) ([]string, error) {
return c.ListObjectsV2(context.Background(), bucket, prefix)
}
// ListObjectsV2 gets a list of all objects in bucket that have the same prefix.
func (c *ObjectStoreGRPCClient) ListObjectsV2(
ctx context.Context, bucket, prefix string) ([]string, error) {
req := &proto.ListObjectsRequest{
Plugin: c.plugin,
Bucket: bucket,
Prefix: prefix,
}
res, err := c.grpcClient.ListObjects(ctx, req)
res, err := c.grpcClient.ListObjects(context.Background(), req)
if err != nil {
return nil, fromGRPCError(err)
}
@@ -220,19 +189,13 @@ func (c *ObjectStoreGRPCClient) ListObjectsV2(
// DeleteObject removes object with the specified key from the given
// bucket.
func (c *ObjectStoreGRPCClient) DeleteObject(bucket, key string) error {
return c.DeleteObjectV2(context.Background(), bucket, key)
}
// DeleteObjectV2 removes object with the specified key from the given bucket.
func (c *ObjectStoreGRPCClient) DeleteObjectV2(
ctx context.Context, bucket, key string) error {
req := &proto.DeleteObjectRequest{
Plugin: c.plugin,
Bucket: bucket,
Key: key,
}
if _, err := c.grpcClient.DeleteObject(ctx, req); err != nil {
if _, err := c.grpcClient.DeleteObject(context.Background(), req); err != nil {
return fromGRPCError(err)
}
@@ -241,12 +204,6 @@ func (c *ObjectStoreGRPCClient) DeleteObjectV2(
// CreateSignedURL creates a pre-signed URL for the given bucket and key that expires after ttl.
func (c *ObjectStoreGRPCClient) CreateSignedURL(bucket, key string, ttl time.Duration) (string, error) {
return c.CreateSignedURLV2(context.Background(), bucket, key, ttl)
}
// CreateSignedURLV2 creates a pre-signed URL for the given bucket and key that expires after ttl.
func (c *ObjectStoreGRPCClient) CreateSignedURLV2(
ctx context.Context, bucket, key string, ttl time.Duration) (string, error) {
req := &proto.CreateSignedURLRequest{
Plugin: c.plugin,
Bucket: bucket,
@@ -254,7 +211,7 @@ func (c *ObjectStoreGRPCClient) CreateSignedURLV2(
Ttl: int64(ttl),
}
res, err := c.grpcClient.CreateSignedURL(ctx, req)
res, err := c.grpcClient.CreateSignedURL(context.Background(), req)
if err != nil {
return "", fromGRPCError(err)
}

View File

@@ -24,7 +24,7 @@ import (
"golang.org/x/net/context"
proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
// ObjectStoreGRPCServer implements the proto-generated ObjectStoreServer interface, and accepts
@@ -33,13 +33,13 @@ type ObjectStoreGRPCServer struct {
mux *serverMux
}
func (s *ObjectStoreGRPCServer) getImpl(name string) (objectstorev2.ObjectStore, error) {
func (s *ObjectStoreGRPCServer) getImpl(name string) (velero.ObjectStore, error) {
impl, err := s.mux.getHandler(name)
if err != nil {
return nil, err
}
itemAction, ok := impl.(objectstorev2.ObjectStore)
itemAction, ok := impl.(velero.ObjectStore)
if !ok {
return nil, errors.Errorf("%T is not an object store", impl)
}
@@ -62,7 +62,7 @@ func (s *ObjectStoreGRPCServer) Init(ctx context.Context, req *proto.ObjectStore
return nil, newGRPCError(err)
}
if err := impl.InitV2(ctx, req.Config); err != nil {
if err := impl.Init(req.Config); err != nil {
return nil, newGRPCError(err)
}
@@ -141,7 +141,7 @@ func (s *ObjectStoreGRPCServer) ObjectExists(ctx context.Context, req *proto.Obj
return nil, newGRPCError(err)
}
exists, err := impl.ObjectExistsV2(ctx, req.Bucket, req.Key)
exists, err := impl.ObjectExists(req.Bucket, req.Key)
if err != nil {
return nil, newGRPCError(err)
}
@@ -200,7 +200,7 @@ func (s *ObjectStoreGRPCServer) ListCommonPrefixes(ctx context.Context, req *pro
return nil, newGRPCError(err)
}
prefixes, err := impl.ListCommonPrefixesV2(ctx, req.Bucket, req.Prefix, req.Delimiter)
prefixes, err := impl.ListCommonPrefixes(req.Bucket, req.Prefix, req.Delimiter)
if err != nil {
return nil, newGRPCError(err)
}
@@ -221,7 +221,7 @@ func (s *ObjectStoreGRPCServer) ListObjects(ctx context.Context, req *proto.List
return nil, newGRPCError(err)
}
keys, err := impl.ListObjectsV2(ctx, req.Bucket, req.Prefix)
keys, err := impl.ListObjects(req.Bucket, req.Prefix)
if err != nil {
return nil, newGRPCError(err)
}
@@ -243,7 +243,7 @@ func (s *ObjectStoreGRPCServer) DeleteObject(ctx context.Context, req *proto.Del
return nil, newGRPCError(err)
}
if err := impl.DeleteObjectV2(ctx, req.Bucket, req.Key); err != nil {
if err := impl.DeleteObject(req.Bucket, req.Key); err != nil {
return nil, newGRPCError(err)
}
@@ -263,7 +263,7 @@ func (s *ObjectStoreGRPCServer) CreateSignedURL(ctx context.Context, req *proto.
return nil, newGRPCError(err)
}
url, err := impl.CreateSignedURLV2(ctx, req.Bucket, req.Key, time.Duration(req.Ttl))
url, err := impl.CreateSignedURL(req.Bucket, req.Key, time.Duration(req.Ttl))
if err != nil {
return nil, newGRPCError(err)
}

View File

@@ -45,58 +45,6 @@ const (
PluginKindPluginLister PluginKind = "PluginLister"
)
const (
// PluginKindObjectStoreV2 represents an object store plugin version 2.
PluginKindObjectStoreV2 PluginKind = "ObjectStoreV2"
// PluginKindVolumeSnapshotterV2 represents a volume snapshotter plugin version 2.
PluginKindVolumeSnapshotterV2 PluginKind = "VolumeSnapshotterV2"
// PluginKindBackupItemActionV2 represents a backup item action plugin version 2.
PluginKindBackupItemActionV2 PluginKind = "BackupItemActionV2"
// PluginKindRestoreItemActionV2 represents a restore item action plugin version 2.
PluginKindRestoreItemActionV2 PluginKind = "RestoreItemActionV2"
// PluginKindDeleteItemActionV2 represents a delete item action plugin version 2.
PluginKindDeleteItemActionV2 PluginKind = "DeleteItemActionV2"
)
func ObjectStoreKinds() []PluginKind {
return []PluginKind{
PluginKindObjectStoreV2,
PluginKindObjectStore,
}
}
func VolumeSnapshotterKinds() []PluginKind {
return []PluginKind{
PluginKindVolumeSnapshotterV2,
PluginKindVolumeSnapshotter,
}
}
func BackupItemActionKinds() []PluginKind {
return []PluginKind{
PluginKindBackupItemActionV2,
PluginKindBackupItemAction,
}
}
func RestoreItemActionKinds() []PluginKind {
return []PluginKind{
PluginKindRestoreItemActionV2,
PluginKindRestoreItemAction,
}
}
func DeleteItemActionKinds() []PluginKind {
return []PluginKind{
PluginKindDeleteItemActionV2,
PluginKindDeleteItemAction,
}
}
// AllPluginKinds contains all the valid plugin kinds that Velero supports, excluding PluginLister because that is not a
// kind that a developer would ever need to implement (it's handled by Velero and the Velero plugin library code).
func AllPluginKinds() map[string]PluginKind {
@@ -106,11 +54,5 @@ func AllPluginKinds() map[string]PluginKind {
allPluginKinds[PluginKindBackupItemAction.String()] = PluginKindBackupItemAction
allPluginKinds[PluginKindRestoreItemAction.String()] = PluginKindRestoreItemAction
allPluginKinds[PluginKindDeleteItemAction.String()] = PluginKindDeleteItemAction
// Version 2
allPluginKinds[PluginKindObjectStoreV2.String()] = PluginKindObjectStoreV2
allPluginKinds[PluginKindVolumeSnapshotterV2.String()] = PluginKindVolumeSnapshotterV2
allPluginKinds[PluginKindBackupItemActionV2.String()] = PluginKindBackupItemActionV2
allPluginKinds[PluginKindRestoreItemActionV2.String()] = PluginKindRestoreItemActionV2
allPluginKinds[PluginKindDeleteItemActionV2.String()] = PluginKindDeleteItemActionV2
return allPluginKinds
}
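After this revert, `AllPluginKinds` is back to registering only the five V1 kinds. The map-building shape can be sketched as follows (constant values match the diff; the loop body is an illustrative condensation):

```go
package main

import "fmt"

type PluginKind string

func (k PluginKind) String() string { return string(k) }

const (
	PluginKindObjectStore       PluginKind = "ObjectStore"
	PluginKindVolumeSnapshotter PluginKind = "VolumeSnapshotter"
	PluginKindBackupItemAction  PluginKind = "BackupItemAction"
	PluginKindRestoreItemAction PluginKind = "RestoreItemAction"
	PluginKindDeleteItemAction  PluginKind = "DeleteItemAction"
)

// allPluginKinds rebuilds the reverted registry: only the five v1 kinds
// remain, keyed by their string form for lookup from CLI/config input.
func allPluginKinds() map[string]PluginKind {
	m := map[string]PluginKind{}
	for _, k := range []PluginKind{
		PluginKindObjectStore, PluginKindVolumeSnapshotter,
		PluginKindBackupItemAction, PluginKindRestoreItemAction,
		PluginKindDeleteItemAction,
	} {
		m[k.String()] = k
	}
	return m
}

func main() {
	fmt.Println(len(allPluginKinds()))
}
```

Keying by `String()` lets user-supplied kind names (e.g. from flags) be validated with a single map lookup.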

View File

@@ -27,10 +27,9 @@ import (
proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
restoreitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
)
var _ restoreitemactionv2.RestoreItemAction = &RestoreItemActionGRPCClient{}
var _ velero.RestoreItemAction = &RestoreItemActionGRPCClient{}
// NewRestoreItemActionPlugin constructs a RestoreItemActionPlugin.
func NewRestoreItemActionPlugin(options ...PluginOption) *RestoreItemActionPlugin {
@@ -72,14 +71,7 @@ func (c *RestoreItemActionGRPCClient) AppliesTo() (velero.ResourceSelector, erro
}, nil
}
func (c *RestoreItemActionGRPCClient) Execute(
input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
return c.ExecuteV2(context.Background(), input)
}
func (c *RestoreItemActionGRPCClient) ExecuteV2(
ctx context.Context, input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
func (c *RestoreItemActionGRPCClient) Execute(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
itemJSON, err := json.Marshal(input.Item.UnstructuredContent())
if err != nil {
return nil, errors.WithStack(err)
@@ -102,7 +94,7 @@ func (c *RestoreItemActionGRPCClient) ExecuteV2(
Restore: restoreJSON,
}
res, err := c.grpcClient.Execute(ctx, req)
res, err := c.grpcClient.Execute(context.Background(), req)
if err != nil {
return nil, fromGRPCError(err)
}

View File

@@ -26,7 +26,6 @@ import (
api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
restoreitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
)
// RestoreItemActionGRPCServer implements the proto-generated RestoreItemActionServer interface, and accepts
@@ -35,13 +34,13 @@ type RestoreItemActionGRPCServer struct {
mux *serverMux
}
func (s *RestoreItemActionGRPCServer) getImpl(name string) (restoreitemactionv2.RestoreItemAction, error) {
func (s *RestoreItemActionGRPCServer) getImpl(name string) (velero.RestoreItemAction, error) {
impl, err := s.mux.getHandler(name)
if err != nil {
return nil, err
}
itemAction, ok := impl.(restoreitemactionv2.RestoreItemAction)
itemAction, ok := impl.(velero.RestoreItemAction)
if !ok {
return nil, errors.Errorf("%T is not a restore item action", impl)
}
@@ -77,9 +76,7 @@ func (s *RestoreItemActionGRPCServer) AppliesTo(ctx context.Context, req *proto.
}, nil
}
func (s *RestoreItemActionGRPCServer) Execute(
ctx context.Context, req *proto.RestoreItemActionExecuteRequest) (response *proto.RestoreItemActionExecuteResponse, err error) {
func (s *RestoreItemActionGRPCServer) Execute(ctx context.Context, req *proto.RestoreItemActionExecuteRequest) (response *proto.RestoreItemActionExecuteResponse, err error) {
defer func() {
if recoveredErr := handlePanic(recover()); recoveredErr != nil {
err = recoveredErr
@@ -109,12 +106,11 @@ func (s *RestoreItemActionGRPCServer) Execute(
return nil, newGRPCError(errors.WithStack(err))
}
executeOutput, err := impl.ExecuteV2(ctx,
&velero.RestoreItemActionExecuteInput{
Item: &item,
ItemFromBackup: &itemFromBackup,
Restore: &restoreObj,
})
executeOutput, err := impl.Execute(&velero.RestoreItemActionExecuteInput{
Item: &item,
ItemFromBackup: &itemFromBackup,
Restore: &restoreObj,
})
if err != nil {
return nil, newGRPCError(err)
}

View File

@@ -74,65 +74,21 @@ type Server interface {
// RegisterDeleteItemActions registers multiple Delete item actions.
RegisterDeleteItemActions(map[string]HandlerInitializer) Server
// Version 2
// RegisterVolumeSnapshottersV2 registers multiple volume snapshotters.
RegisterVolumeSnapshottersV2(map[string]HandlerInitializer) Server
// RegisterObjectStoreV2 registers an object store. Accepted format
// for the plugin name is <DNS subdomain>/<non-empty name>.
RegisterObjectStoreV2(pluginName string, initializer HandlerInitializer) Server
// RegisterBackupItemActionV2 registers a backup item action. Accepted format
// for the plugin name is <DNS subdomain>/<non-empty name>.
RegisterBackupItemActionV2(pluginName string, initializer HandlerInitializer) Server
// RegisterBackupItemActionsV2 registers multiple backup item actions.
RegisterBackupItemActionsV2(map[string]HandlerInitializer) Server
// RegisterVolumeSnapshotterV2 registers a volume snapshotter. Accepted format
// for the plugin name is <DNS subdomain>/<non-empty name>.
RegisterVolumeSnapshotterV2(pluginName string, initializer HandlerInitializer) Server
// RegisterObjectStoresV2 registers multiple object stores.
RegisterObjectStoresV2(map[string]HandlerInitializer) Server
// RegisterRestoreItemActionV2 registers a restore item action. Accepted format
// for the plugin name is <DNS subdomain>/<non-empty name>.
RegisterRestoreItemActionV2(pluginName string, initializer HandlerInitializer) Server
// RegisterRestoreItemActionsV2 registers multiple restore item actions.
RegisterRestoreItemActionsV2(map[string]HandlerInitializer) Server
// RegisterDeleteItemActionV2 registers a delete item action. Accepted format
// for the plugin name is <DNS subdomain>/<non-empty name>.
RegisterDeleteItemActionV2(pluginName string, initializer HandlerInitializer) Server
// RegisterDeleteItemActionsV2 registers multiple Delete item actions.
RegisterDeleteItemActionsV2(map[string]HandlerInitializer) Server
// Serve runs the plugin server.
Serve()
}
// server implements Server.
type server struct {
log *logrus.Logger
logLevelFlag *logging.LevelFlag
flagSet *pflag.FlagSet
featureSet *veleroflag.StringArray
// Version 1
log *logrus.Logger
logLevelFlag *logging.LevelFlag
flagSet *pflag.FlagSet
featureSet *veleroflag.StringArray
backupItemAction *BackupItemActionPlugin
volumeSnapshotter *VolumeSnapshotterPlugin
objectStore *ObjectStorePlugin
restoreItemAction *RestoreItemActionPlugin
deleteItemAction *DeleteItemActionPlugin
// Version 2
backupItemActionV2 *BackupItemActionPlugin
volumeSnapshotterV2 *VolumeSnapshotterPlugin
objectStoreV2 *ObjectStorePlugin
restoreItemActionV2 *RestoreItemActionPlugin
deleteItemActionV2 *DeleteItemActionPlugin
}
// NewServer returns a new Server
@@ -141,19 +97,14 @@ func NewServer() Server {
features := veleroflag.NewStringArray()
return &server{
log: log,
logLevelFlag: logging.LogLevelFlag(log.Level),
featureSet: &features,
backupItemAction: NewBackupItemActionPlugin(serverLogger(log)),
volumeSnapshotter: NewVolumeSnapshotterPlugin(serverLogger(log)),
objectStore: NewObjectStorePlugin(serverLogger(log)),
restoreItemAction: NewRestoreItemActionPlugin(serverLogger(log)),
deleteItemAction: NewDeleteItemActionPlugin(serverLogger(log)),
backupItemActionV2: NewBackupItemActionPlugin(serverLogger(log)),
volumeSnapshotterV2: NewVolumeSnapshotterPlugin(serverLogger(log)),
objectStoreV2: NewObjectStorePlugin(serverLogger(log)),
restoreItemActionV2: NewRestoreItemActionPlugin(serverLogger(log)),
deleteItemActionV2: NewDeleteItemActionPlugin(serverLogger(log)),
log: log,
logLevelFlag: logging.LogLevelFlag(log.Level),
featureSet: &features,
backupItemAction: NewBackupItemActionPlugin(serverLogger(log)),
volumeSnapshotter: NewVolumeSnapshotterPlugin(serverLogger(log)),
objectStore: NewObjectStorePlugin(serverLogger(log)),
restoreItemAction: NewRestoreItemActionPlugin(serverLogger(log)),
deleteItemAction: NewDeleteItemActionPlugin(serverLogger(log)),
}
}
@@ -226,67 +177,6 @@ func (s *server) RegisterDeleteItemActions(m map[string]HandlerInitializer) Serv
return s
}
// Version 2
func (s *server) RegisterBackupItemActionV2(name string, initializer HandlerInitializer) Server {
s.backupItemActionV2.register(name, initializer)
return s
}
func (s *server) RegisterBackupItemActionsV2(m map[string]HandlerInitializer) Server {
for name := range m {
s.RegisterBackupItemActionV2(name, m[name])
}
return s
}
func (s *server) RegisterVolumeSnapshotterV2(name string, initializer HandlerInitializer) Server {
s.volumeSnapshotterV2.register(name, initializer)
return s
}
func (s *server) RegisterVolumeSnapshottersV2(m map[string]HandlerInitializer) Server {
for name := range m {
s.RegisterVolumeSnapshotterV2(name, m[name])
}
return s
}
func (s *server) RegisterObjectStoreV2(name string, initializer HandlerInitializer) Server {
s.objectStoreV2.register(name, initializer)
return s
}
func (s *server) RegisterObjectStoresV2(m map[string]HandlerInitializer) Server {
for name := range m {
s.RegisterObjectStoreV2(name, m[name])
}
return s
}
func (s *server) RegisterRestoreItemActionV2(name string, initializer HandlerInitializer) Server {
s.restoreItemActionV2.register(name, initializer)
return s
}
func (s *server) RegisterRestoreItemActionsV2(m map[string]HandlerInitializer) Server {
for name := range m {
s.RegisterRestoreItemActionV2(name, m[name])
}
return s
}
func (s *server) RegisterDeleteItemActionV2(name string, initializer HandlerInitializer) Server {
s.deleteItemActionV2.register(name, initializer)
return s
}
func (s *server) RegisterDeleteItemActionsV2(m map[string]HandlerInitializer) Server {
for name := range m {
s.RegisterDeleteItemActionV2(name, m[name])
}
return s
}
// getNames returns a list of PluginIdentifiers registered with plugin.
func getNames(command string, kind PluginKind, plugin Interface) []PluginIdentifier {
var pluginIdentifiers []PluginIdentifier
@@ -316,12 +206,6 @@ func (s *server) Serve() {
pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindObjectStore, s.objectStore)...)
pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindRestoreItemAction, s.restoreItemAction)...)
pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindDeleteItemAction, s.deleteItemAction)...)
// Version 2
pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindBackupItemActionV2, s.backupItemActionV2)...)
pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindVolumeSnapshotterV2, s.volumeSnapshotterV2)...)
pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindObjectStoreV2, s.objectStoreV2)...)
pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindRestoreItemActionV2, s.restoreItemActionV2)...)
pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindDeleteItemActionV2, s.deleteItemActionV2)...)
pluginLister := NewPluginLister(pluginIdentifiers...)
@@ -334,12 +218,6 @@ func (s *server) Serve() {
string(PluginKindPluginLister): NewPluginListerPlugin(pluginLister),
string(PluginKindRestoreItemAction): s.restoreItemAction,
string(PluginKindDeleteItemAction): s.deleteItemAction,
// Version 2
string(PluginKindBackupItemActionV2): s.backupItemActionV2,
string(PluginKindVolumeSnapshotterV2): s.volumeSnapshotterV2,
string(PluginKindObjectStoreV2): s.objectStoreV2,
string(PluginKindRestoreItemActionV2): s.restoreItemActionV2,
string(PluginKindDeleteItemActionV2): s.deleteItemActionV2,
},
GRPCServer: plugin.DefaultGRPCServer,
})


@@ -53,19 +53,12 @@ func newVolumeSnapshotterGRPCClient(base *clientBase, clientConn *grpc.ClientCon
// configuration key-value pairs. It returns an error if the VolumeSnapshotter
// cannot be initialized from the provided config.
func (c *VolumeSnapshotterGRPCClient) Init(config map[string]string) error {
return c.InitV2(context.Background(), config)
}
// InitV2 prepares the VolumeSnapshotter for usage using the provided map of
// configuration key-value pairs. It returns an error if the VolumeSnapshotter
// cannot be initialized from the provided config.
func (c *VolumeSnapshotterGRPCClient) InitV2(ctx context.Context, config map[string]string) error {
req := &proto.VolumeSnapshotterInitRequest{
Plugin: c.plugin,
Config: config,
}
if _, err := c.grpcClient.Init(ctx, req); err != nil {
if _, err := c.grpcClient.Init(context.Background(), req); err != nil {
return fromGRPCError(err)
}
@@ -74,14 +67,7 @@ func (c *VolumeSnapshotterGRPCClient) InitV2(ctx context.Context, config map[str
// CreateVolumeFromSnapshot creates a new block volume, initialized from the provided snapshot,
// and with the specified type and IOPS (if using provisioned IOPS).
func (c *VolumeSnapshotterGRPCClient) CreateVolumeFromSnapshot(
snapshotID, volumeType, volumeAZ string, iops *int64) (string, error) {
return c.CreateVolumeFromSnapshotV2(context.Background(), snapshotID, volumeType, volumeAZ, iops)
}
func (c *VolumeSnapshotterGRPCClient) CreateVolumeFromSnapshotV2(
ctx context.Context, snapshotID, volumeType, volumeAZ string, iops *int64) (string, error) {
func (c *VolumeSnapshotterGRPCClient) CreateVolumeFromSnapshot(snapshotID, volumeType, volumeAZ string, iops *int64) (string, error) {
req := &proto.CreateVolumeRequest{
Plugin: c.plugin,
SnapshotID: snapshotID,
@@ -95,7 +81,7 @@ func (c *VolumeSnapshotterGRPCClient) CreateVolumeFromSnapshotV2(
req.Iops = *iops
}
res, err := c.grpcClient.CreateVolumeFromSnapshot(ctx, req)
res, err := c.grpcClient.CreateVolumeFromSnapshot(context.Background(), req)
if err != nil {
return "", fromGRPCError(err)
}
@@ -106,19 +92,13 @@ func (c *VolumeSnapshotterGRPCClient) CreateVolumeFromSnapshotV2(
// GetVolumeInfo returns the type and IOPS (if using provisioned IOPS) for a specified block
// volume.
func (c *VolumeSnapshotterGRPCClient) GetVolumeInfo(volumeID, volumeAZ string) (string, *int64, error) {
return c.GetVolumeInfoV2(context.Background(), volumeID, volumeAZ)
}
func (c *VolumeSnapshotterGRPCClient) GetVolumeInfoV2(
ctx context.Context, volumeID, volumeAZ string) (string, *int64, error) {
req := &proto.GetVolumeInfoRequest{
Plugin: c.plugin,
VolumeID: volumeID,
VolumeAZ: volumeAZ,
}
res, err := c.grpcClient.GetVolumeInfo(ctx, req)
res, err := c.grpcClient.GetVolumeInfo(context.Background(), req)
if err != nil {
return "", nil, fromGRPCError(err)
}
@@ -134,11 +114,6 @@ func (c *VolumeSnapshotterGRPCClient) GetVolumeInfoV2(
// CreateSnapshot creates a snapshot of the specified block volume, and applies the provided
// set of tags to the snapshot.
func (c *VolumeSnapshotterGRPCClient) CreateSnapshot(volumeID, volumeAZ string, tags map[string]string) (string, error) {
return c.CreateSnapshotV2(context.Background(), volumeID, volumeAZ, tags)
}
func (c *VolumeSnapshotterGRPCClient) CreateSnapshotV2(
ctx context.Context, volumeID, volumeAZ string, tags map[string]string) (string, error) {
req := &proto.CreateSnapshotRequest{
Plugin: c.plugin,
VolumeID: volumeID,
@@ -146,7 +121,7 @@ func (c *VolumeSnapshotterGRPCClient) CreateSnapshotV2(
Tags: tags,
}
res, err := c.grpcClient.CreateSnapshot(ctx, req)
res, err := c.grpcClient.CreateSnapshot(context.Background(), req)
if err != nil {
return "", fromGRPCError(err)
}
@@ -156,17 +131,12 @@ func (c *VolumeSnapshotterGRPCClient) CreateSnapshotV2(
// DeleteSnapshot deletes the specified volume snapshot.
func (c *VolumeSnapshotterGRPCClient) DeleteSnapshot(snapshotID string) error {
return c.DeleteSnapshotV2(context.Background(), snapshotID)
}
func (c *VolumeSnapshotterGRPCClient) DeleteSnapshotV2(
ctx context.Context, snapshotID string) error {
req := &proto.DeleteSnapshotRequest{
Plugin: c.plugin,
SnapshotID: snapshotID,
}
if _, err := c.grpcClient.DeleteSnapshot(ctx, req); err != nil {
if _, err := c.grpcClient.DeleteSnapshot(context.Background(), req); err != nil {
return fromGRPCError(err)
}
@@ -174,11 +144,6 @@ func (c *VolumeSnapshotterGRPCClient) DeleteSnapshotV2(
}
func (c *VolumeSnapshotterGRPCClient) GetVolumeID(pv runtime.Unstructured) (string, error) {
return c.GetVolumeIDV2(context.Background(), pv)
}
func (c *VolumeSnapshotterGRPCClient) GetVolumeIDV2(
ctx context.Context, pv runtime.Unstructured) (string, error) {
encodedPV, err := json.Marshal(pv.UnstructuredContent())
if err != nil {
return "", errors.WithStack(err)
@@ -189,7 +154,7 @@ func (c *VolumeSnapshotterGRPCClient) GetVolumeIDV2(
PersistentVolume: encodedPV,
}
resp, err := c.grpcClient.GetVolumeID(ctx, req)
resp, err := c.grpcClient.GetVolumeID(context.Background(), req)
if err != nil {
return "", fromGRPCError(err)
}
@@ -198,11 +163,6 @@ func (c *VolumeSnapshotterGRPCClient) GetVolumeIDV2(
}
func (c *VolumeSnapshotterGRPCClient) SetVolumeID(pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error) {
return c.SetVolumeIDV2(context.Background(), pv, volumeID)
}
func (c *VolumeSnapshotterGRPCClient) SetVolumeIDV2(
ctx context.Context, pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error) {
encodedPV, err := json.Marshal(pv.UnstructuredContent())
if err != nil {
return nil, errors.WithStack(err)
@@ -214,7 +174,7 @@ func (c *VolumeSnapshotterGRPCClient) SetVolumeIDV2(
VolumeID: volumeID,
}
resp, err := c.grpcClient.SetVolumeID(ctx, req)
resp, err := c.grpcClient.SetVolumeID(context.Background(), req)
if err != nil {
return nil, fromGRPCError(err)
}


@@ -24,7 +24,7 @@ import (
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
// VolumeSnapshotterGRPCServer implements the proto-generated VolumeSnapshotterServer interface, and accepts
@@ -33,13 +33,13 @@ type VolumeSnapshotterGRPCServer struct {
mux *serverMux
}
func (s *VolumeSnapshotterGRPCServer) getImpl(name string) (volumesnapshotterv2.VolumeSnapshotter, error) {
func (s *VolumeSnapshotterGRPCServer) getImpl(name string) (velero.VolumeSnapshotter, error) {
impl, err := s.mux.getHandler(name)
if err != nil {
return nil, err
}
volumeSnapshotter, ok := impl.(volumesnapshotterv2.VolumeSnapshotter)
volumeSnapshotter, ok := impl.(velero.VolumeSnapshotter)
if !ok {
return nil, errors.Errorf("%T is not a volume snapshotter", impl)
}
@@ -62,7 +62,7 @@ func (s *VolumeSnapshotterGRPCServer) Init(ctx context.Context, req *proto.Volum
return nil, newGRPCError(err)
}
if err := impl.InitV2(ctx, req.Config); err != nil {
if err := impl.Init(req.Config); err != nil {
return nil, newGRPCError(err)
}
@@ -92,7 +92,7 @@ func (s *VolumeSnapshotterGRPCServer) CreateVolumeFromSnapshot(ctx context.Conte
iops = &req.Iops
}
volumeID, err := impl.CreateVolumeFromSnapshotV2(ctx, snapshotID, volumeType, volumeAZ, iops)
volumeID, err := impl.CreateVolumeFromSnapshot(snapshotID, volumeType, volumeAZ, iops)
if err != nil {
return nil, newGRPCError(err)
}
@@ -114,7 +114,7 @@ func (s *VolumeSnapshotterGRPCServer) GetVolumeInfo(ctx context.Context, req *pr
return nil, newGRPCError(err)
}
volumeType, iops, err := impl.GetVolumeInfoV2(ctx, req.VolumeID, req.VolumeAZ)
volumeType, iops, err := impl.GetVolumeInfo(req.VolumeID, req.VolumeAZ)
if err != nil {
return nil, newGRPCError(err)
}
@@ -144,7 +144,7 @@ func (s *VolumeSnapshotterGRPCServer) CreateSnapshot(ctx context.Context, req *p
return nil, newGRPCError(err)
}
snapshotID, err := impl.CreateSnapshotV2(ctx, req.VolumeID, req.VolumeAZ, req.Tags)
snapshotID, err := impl.CreateSnapshot(req.VolumeID, req.VolumeAZ, req.Tags)
if err != nil {
return nil, newGRPCError(err)
}
@@ -165,7 +165,7 @@ func (s *VolumeSnapshotterGRPCServer) DeleteSnapshot(ctx context.Context, req *p
return nil, newGRPCError(err)
}
if err := impl.DeleteSnapshotV2(ctx, req.SnapshotID); err != nil {
if err := impl.DeleteSnapshot(req.SnapshotID); err != nil {
return nil, newGRPCError(err)
}
@@ -190,7 +190,7 @@ func (s *VolumeSnapshotterGRPCServer) GetVolumeID(ctx context.Context, req *prot
return nil, newGRPCError(errors.WithStack(err))
}
volumeID, err := impl.GetVolumeIDV2(ctx, &pv)
volumeID, err := impl.GetVolumeID(&pv)
if err != nil {
return nil, newGRPCError(err)
}
@@ -215,7 +215,7 @@ func (s *VolumeSnapshotterGRPCServer) SetVolumeID(ctx context.Context, req *prot
return nil, newGRPCError(errors.WithStack(err))
}
updatedPV, err := impl.SetVolumeIDV2(ctx, &pv, req.VolumeID)
updatedPV, err := impl.SetVolumeID(&pv, req.VolumeID)
if err != nil {
return nil, newGRPCError(err)
}


@@ -4,12 +4,7 @@ package mocks
import (
mock "github.com/stretchr/testify/mock"
backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
restoreitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
velero "github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
// Manager is an autogenerated mock type for the Manager type
@@ -23,15 +18,15 @@ func (_m *Manager) CleanupClients() {
}
// GetBackupItemAction provides a mock function with given fields: name
func (_m *Manager) GetBackupItemAction(name string) (backupitemactionv2.BackupItemAction, error) {
func (_m *Manager) GetBackupItemAction(name string) (velero.BackupItemAction, error) {
ret := _m.Called(name)
var r0 backupitemactionv2.BackupItemAction
if rf, ok := ret.Get(0).(func(string) backupitemactionv2.BackupItemAction); ok {
var r0 velero.BackupItemAction
if rf, ok := ret.Get(0).(func(string) velero.BackupItemAction); ok {
r0 = rf(name)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(backupitemactionv2.BackupItemAction)
r0 = ret.Get(0).(velero.BackupItemAction)
}
}
@@ -46,15 +41,15 @@ func (_m *Manager) GetBackupItemAction(name string) (backupitemactionv2.BackupIt
}
// GetBackupItemActions provides a mock function with given fields:
func (_m *Manager) GetBackupItemActions() ([]backupitemactionv2.BackupItemAction, error) {
func (_m *Manager) GetBackupItemActions() ([]velero.BackupItemAction, error) {
ret := _m.Called()
var r0 []backupitemactionv2.BackupItemAction
if rf, ok := ret.Get(0).(func() []backupitemactionv2.BackupItemAction); ok {
var r0 []velero.BackupItemAction
if rf, ok := ret.Get(0).(func() []velero.BackupItemAction); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]backupitemactionv2.BackupItemAction)
r0 = ret.Get(0).([]velero.BackupItemAction)
}
}
@@ -69,15 +64,15 @@ func (_m *Manager) GetBackupItemActions() ([]backupitemactionv2.BackupItemAction
}
// GetDeleteItemAction provides a mock function with given fields: name
func (_m *Manager) GetDeleteItemAction(name string) (deleteitemactionv2.DeleteItemAction, error) {
func (_m *Manager) GetDeleteItemAction(name string) (velero.DeleteItemAction, error) {
ret := _m.Called(name)
var r0 deleteitemactionv2.DeleteItemAction
if rf, ok := ret.Get(0).(func(string) deleteitemactionv2.DeleteItemAction); ok {
var r0 velero.DeleteItemAction
if rf, ok := ret.Get(0).(func(string) velero.DeleteItemAction); ok {
r0 = rf(name)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(deleteitemactionv2.DeleteItemAction)
r0 = ret.Get(0).(velero.DeleteItemAction)
}
}
@@ -92,15 +87,15 @@ func (_m *Manager) GetDeleteItemAction(name string) (deleteitemactionv2.DeleteIt
}
// GetDeleteItemActions provides a mock function with given fields:
func (_m *Manager) GetDeleteItemActions() ([]deleteitemactionv2.DeleteItemAction, error) {
func (_m *Manager) GetDeleteItemActions() ([]velero.DeleteItemAction, error) {
ret := _m.Called()
var r0 []deleteitemactionv2.DeleteItemAction
if rf, ok := ret.Get(0).(func() []deleteitemactionv2.DeleteItemAction); ok {
var r0 []velero.DeleteItemAction
if rf, ok := ret.Get(0).(func() []velero.DeleteItemAction); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]deleteitemactionv2.DeleteItemAction)
r0 = ret.Get(0).([]velero.DeleteItemAction)
}
}
@@ -115,15 +110,15 @@ func (_m *Manager) GetDeleteItemActions() ([]deleteitemactionv2.DeleteItemAction
}
// GetObjectStore provides a mock function with given fields: name
func (_m *Manager) GetObjectStore(name string) (objectstorev2.ObjectStore, error) {
func (_m *Manager) GetObjectStore(name string) (velero.ObjectStore, error) {
ret := _m.Called(name)
var r0 objectstorev2.ObjectStore
if rf, ok := ret.Get(0).(func(string) objectstorev2.ObjectStore); ok {
var r0 velero.ObjectStore
if rf, ok := ret.Get(0).(func(string) velero.ObjectStore); ok {
r0 = rf(name)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(objectstorev2.ObjectStore)
r0 = ret.Get(0).(velero.ObjectStore)
}
}
@@ -138,15 +133,15 @@ func (_m *Manager) GetObjectStore(name string) (objectstorev2.ObjectStore, error
}
// GetRestoreItemAction provides a mock function with given fields: name
func (_m *Manager) GetRestoreItemAction(name string) (restoreitemactionv2.RestoreItemAction, error) {
func (_m *Manager) GetRestoreItemAction(name string) (velero.RestoreItemAction, error) {
ret := _m.Called(name)
var r0 restoreitemactionv2.RestoreItemAction
if rf, ok := ret.Get(0).(func(string) restoreitemactionv2.RestoreItemAction); ok {
var r0 velero.RestoreItemAction
if rf, ok := ret.Get(0).(func(string) velero.RestoreItemAction); ok {
r0 = rf(name)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(restoreitemactionv2.RestoreItemAction)
r0 = ret.Get(0).(velero.RestoreItemAction)
}
}
@@ -161,15 +156,15 @@ func (_m *Manager) GetRestoreItemAction(name string) (restoreitemactionv2.Restor
}
// GetRestoreItemActions provides a mock function with given fields:
func (_m *Manager) GetRestoreItemActions() ([]restoreitemactionv2.RestoreItemAction, error) {
func (_m *Manager) GetRestoreItemActions() ([]velero.RestoreItemAction, error) {
ret := _m.Called()
var r0 []restoreitemactionv2.RestoreItemAction
if rf, ok := ret.Get(0).(func() []restoreitemactionv2.RestoreItemAction); ok {
var r0 []velero.RestoreItemAction
if rf, ok := ret.Get(0).(func() []velero.RestoreItemAction); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).([]restoreitemactionv2.RestoreItemAction)
r0 = ret.Get(0).([]velero.RestoreItemAction)
}
}
@@ -184,15 +179,15 @@ func (_m *Manager) GetRestoreItemActions() ([]restoreitemactionv2.RestoreItemAct
}
// GetVolumeSnapshotter provides a mock function with given fields: name
func (_m *Manager) GetVolumeSnapshotter(name string) (volumesnapshotterv2.VolumeSnapshotter, error) {
func (_m *Manager) GetVolumeSnapshotter(name string) (velero.VolumeSnapshotter, error) {
ret := _m.Called(name)
var r0 volumesnapshotterv2.VolumeSnapshotter
if rf, ok := ret.Get(0).(func(string) volumesnapshotterv2.VolumeSnapshotter); ok {
var r0 velero.VolumeSnapshotter
if rf, ok := ret.Get(0).(func(string) velero.VolumeSnapshotter); ok {
r0 = rf(name)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(volumesnapshotterv2.VolumeSnapshotter)
r0 = ret.Get(0).(velero.VolumeSnapshotter)
}
}


@@ -1,5 +1,5 @@
/*
Copyright 2021 the Velero contributors.
Copyright 2017 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -14,13 +14,13 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
package velero
import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
// BackupItemAction is an actor that performs an operation on an individual item being backed up.
@@ -28,12 +28,18 @@ type BackupItemAction interface {
// AppliesTo returns information about which resources this action should be invoked for.
// A BackupItemAction's Execute function will only be invoked on items that match the returned
// selector. A zero-valued ResourceSelector matches all resources.
AppliesTo() (velero.ResourceSelector, error)
AppliesTo() (ResourceSelector, error)
// Execute allows the ItemAction to perform arbitrary logic with the item being backed up,
// including mutating the item itself prior to backup. The item (unmodified or modified)
// should be returned, along with an optional slice of ResourceIdentifiers specifying
// additional related items that should be backed up.
Execute(item runtime.Unstructured, backup *api.Backup) (
runtime.Unstructured, []velero.ResourceIdentifier, error)
Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []ResourceIdentifier, error)
}
// ResourceIdentifier describes a single item by its group, resource, namespace, and name.
type ResourceIdentifier struct {
schema.GroupResource
Namespace string
Name string
}


@@ -1,38 +0,0 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v2
import (
"k8s.io/apimachinery/pkg/runtime"
api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
v1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v1"
"context"
)
type BackupItemAction interface {
v1.BackupItemAction
// ExecuteV2 allows the ItemAction to perform arbitrary logic with the item being backed up,
// including mutating the item itself prior to backup. The item (unmodified or modified)
// should be returned, along with an optional slice of ResourceIdentifiers specifying
// additional related items that should be backed up.
ExecuteV2(ctx context.Context, item runtime.Unstructured,
backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error)
}


@@ -1,5 +1,5 @@
/*
Copyright 2020, 2021 the Velero contributors.
Copyright 2020 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -22,6 +22,20 @@ import (
velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)
// DeleteItemAction is an actor that performs an operation on an individual item being restored.
type DeleteItemAction interface {
// AppliesTo returns information about which resources this action should be invoked for.
// A DeleteItemAction's Execute function will only be invoked on items that match the returned
// selector. A zero-valued ResourceSelector matches all resources.
AppliesTo() (ResourceSelector, error)
// Execute allows the ItemAction to perform arbitrary logic with the item being deleted.
// An error should be returned if there were problems with the deletion process, but the
// overall deletion process cannot be stopped.
// Returned errors are logged.
Execute(input *DeleteItemActionExecuteInput) error
}
// DeleteItemActionExecuteInput contains the input parameters for the ItemAction's Execute function.
type DeleteItemActionExecuteInput struct {
// Item is the item taken from the pristine backed up version of resource.


@@ -1,35 +0,0 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
// DeleteItemAction is an actor that performs an operation on an individual item being restored.
type DeleteItemAction interface {
// AppliesTo returns information about which resources this action should be invoked for.
// A DeleteItemAction's Execute function will only be invoked on items that match the returned
// selector. A zero-valued ResourceSelector matches all resources.
AppliesTo() (velero.ResourceSelector, error)
// Execute allows the ItemAction to perform arbitrary logic with the item being deleted.
// An error should be returned if there were problems with the deletion process, but the
// overall deletion process cannot be stopped.
// Returned errors are logged.
Execute(input *velero.DeleteItemActionExecuteInput) error
}


@@ -1,34 +0,0 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v2
import (
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
v1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v1"
"context"
)
type DeleteItemAction interface {
v1.DeleteItemAction
// ExecuteV2 allows the ItemAction to perform arbitrary logic with the item being deleted.
// An error should be returned if there were problems with the deletion process, but the
// overall deletion process cannot be stopped.
// Returned errors are logged.
ExecuteV2(ctx context.Context, input *velero.DeleteItemActionExecuteInput) error
}


@@ -4,7 +4,7 @@ package mocks
import (
mock "github.com/stretchr/testify/mock"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
velero "github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
// DeleteItemAction is an autogenerated mock type for the DeleteItemAction type
@@ -46,8 +46,3 @@ func (_m *DeleteItemAction) Execute(input *velero.DeleteItemActionExecuteInput)
return r0
}
// ExecuteV2 provides a mock function with given fields: ctx, input
func (_m *DeleteItemAction) ExecuteV2(ctx context.Context, input *velero.DeleteItemActionExecuteInput) error {
return _m.Execute(input)
}


@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
package velero
import (
"io"


@@ -1,68 +0,0 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v2
import (
v1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v1"
"context"
"io"
"time"
)
type ObjectStore interface {
v1.ObjectStore
// InitV2 prepares the ObjectStore for usage using the provided map of
// configuration key-value pairs. It returns an error if the ObjectStore
// cannot be initialized from the provided config.
InitV2(ctx context.Context, config map[string]string) error
// PutObjectV2 creates a new object using the data in body within the specified
// object storage bucket with the given key.
PutObjectV2(ctx context.Context, bucket, key string, body io.Reader) error
// ObjectExistsV2 checks if there is an object with the given key in the object storage bucket.
ObjectExistsV2(ctx context.Context, bucket, key string) (bool, error)
// GetObjectV2 retrieves the object with the given key from the specified
// bucket in object storage.
GetObjectV2(ctx context.Context, bucket, key string) (io.ReadCloser, error)
// ListCommonPrefixesV2 gets a list of all object key prefixes that start with
// the specified prefix and stop at the next instance of the provided delimiter.
//
// For example, if the bucket contains the following keys:
// a-prefix/foo-1/bar
// a-prefix/foo-1/baz
// a-prefix/foo-2/baz
// some-other-prefix/foo-3/bar
// and the provided prefix arg is "a-prefix/", and the delimiter is "/",
// this will return the slice {"a-prefix/foo-1/", "a-prefix/foo-2/"}.
ListCommonPrefixesV2(ctx context.Context, bucket, prefix, delimiter string) ([]string, error)
// ListObjectsV2 gets a list of all keys in the specified bucket
// that have the given prefix.
ListObjectsV2(ctx context.Context, bucket, prefix string) ([]string, error)
// DeleteObjectV2 removes the object with the specified key from the given
// bucket.
DeleteObjectV2(ctx context.Context, bucket, key string) error
// CreateSignedURLV2 creates a pre-signed URL for the given bucket and key that expires after ttl.
CreateSignedURLV2(ctx context.Context, bucket, key string, ttl time.Duration) (string, error)
}


@@ -1,5 +1,5 @@
/*
Copyright 2017, 2019, 2021 the Velero contributors.
Copyright 2017, 2019 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -22,6 +22,22 @@ import (
api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)
// RestoreItemAction is an actor that performs an operation on an individual item being restored.
type RestoreItemAction interface {
// AppliesTo returns information about which resources this action should be invoked for.
// A RestoreItemAction's Execute function will only be invoked on items that match the returned
// selector. A zero-valued ResourceSelector matches all resources.
AppliesTo() (ResourceSelector, error)
// Execute allows the ItemAction to perform arbitrary logic with the item being restored,
// including mutating the item itself prior to restore. The item (unmodified or modified)
// should be returned, along with an optional slice of ResourceIdentifiers specifying additional
// related items that should be restored, a warning (which will be logged but will not prevent
// the item from being restored) or error (which will be logged and will prevent the item
// from being restored) if applicable.
Execute(input *RestoreItemActionExecuteInput) (*RestoreItemActionExecuteOutput, error)
}
// RestoreItemActionExecuteInput contains the input parameters for the ItemAction's Execute function.
type RestoreItemActionExecuteInput struct {
// Item is the item being restored. It is likely different from the pristine backed up version


@@ -1,37 +0,0 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
import (
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)
// RestoreItemAction is an actor that performs an operation on an individual item being restored.
type RestoreItemAction interface {
// AppliesTo returns information about which resources this action should be invoked for.
// A RestoreItemAction's Execute function will only be invoked on items that match the returned
// selector. A zero-valued ResourceSelector matches all resources.
AppliesTo() (velero.ResourceSelector, error)
// Execute allows the ItemAction to perform arbitrary logic with the item being restored,
// including mutating the item itself prior to restore. The item (unmodified or modified)
// should be returned, along with an optional slice of ResourceIdentifiers specifying additional
// related items that should be restored, a warning (which will be logged but will not prevent
// the item from being restored) or error (which will be logged and will prevent the item
// from being restored) if applicable.
Execute(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error)
}

View File

@@ -1,37 +0,0 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v2
import (
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
v1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v1"
"context"
)
type RestoreItemAction interface {
v1.RestoreItemAction
// ExecuteV2 allows the ItemAction to perform arbitrary logic with the item being restored,
// including mutating the item itself prior to restore. The item (unmodified or modified)
// should be returned, along with an optional slice of ResourceIdentifiers specifying additional
// related items that should be restored, a warning (which will be logged but will not prevent
// the item from being restored) or error (which will be logged and will prevent the item
// from being restored) if applicable.
ExecuteV2(ctx context.Context,
input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error)
}

View File

@@ -1,5 +1,5 @@
/*
Copyright 2019, 2021 the Velero contributors.
Copyright 2019 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -20,10 +20,6 @@ limitations under the License.
// plugins of any type can be implemented.
package velero
import (
"k8s.io/apimachinery/pkg/runtime/schema"
)
// ResourceSelector is a collection of included/excluded namespaces,
// included/excluded resources, and a label-selector that can be used
// to match a set of items from a cluster.
@@ -52,10 +48,3 @@ type ResourceSelector struct {
// for details on syntax.
LabelSelector string
}
// ResourceIdentifier describes a single item by its group, resource, namespace, and name.
type ResourceIdentifier struct {
schema.GroupResource
Namespace string
Name string
}

View File

@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package v1
package velero
import (
"k8s.io/apimachinery/pkg/runtime"

View File

@@ -1,59 +0,0 @@
/*
Copyright 2021 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v2
import (
"k8s.io/apimachinery/pkg/runtime"
v1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v1"
"context"
)
type VolumeSnapshotter interface {
v1.VolumeSnapshotter
// InitV2 prepares the VolumeSnapshotter for usage using the provided map of
// configuration key-value pairs. It returns an error if the VolumeSnapshotter
// cannot be initialized from the provided config.
InitV2(ctx context.Context, config map[string]string) error
// CreateVolumeFromSnapshotV2 creates a new volume in the specified
// availability zone, initialized from the provided snapshot,
// and with the specified type and IOPS (if using provisioned IOPS).
CreateVolumeFromSnapshotV2(ctx context.Context,
snapshotID, volumeType, volumeAZ string, iops *int64) (volumeID string, err error)
// GetVolumeIDV2 returns the cloud provider specific identifier for the PersistentVolume.
GetVolumeIDV2(ctx context.Context, pv runtime.Unstructured) (string, error)
// SetVolumeIDV2 sets the cloud provider specific identifier for the PersistentVolume.
SetVolumeIDV2(ctx context.Context,
pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error)
// GetVolumeInfoV2 returns the type and IOPS (if using provisioned IOPS) for
// the specified volume in the given availability zone.
GetVolumeInfoV2(ctx context.Context, volumeID, volumeAZ string) (string, *int64, error)
// CreateSnapshotV2 creates a snapshot of the specified volume, and applies the provided
// set of tags to the snapshot.
CreateSnapshotV2(ctx context.Context,
volumeID, volumeAZ string, tags map[string]string) (snapshotID string, err error)
// DeleteSnapshotV2 deletes the specified volume snapshot.
DeleteSnapshotV2(ctx context.Context, snapshotID string) error
}

View File

@@ -57,8 +57,6 @@ import (
"github.com/vmware-tanzu/velero/pkg/kuberesource"
"github.com/vmware-tanzu/velero/pkg/label"
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
restoreitemaction "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
volumesnapshotter "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
"github.com/vmware-tanzu/velero/pkg/podexec"
"github.com/vmware-tanzu/velero/pkg/restic"
"github.com/vmware-tanzu/velero/pkg/util/boolptr"
@@ -77,7 +75,7 @@ const KubeAnnBoundByController = "pv.kubernetes.io/bound-by-controller"
const KubeAnnDynamicallyProvisioned = "pv.kubernetes.io/provisioned-by"
type VolumeSnapshotterGetter interface {
GetVolumeSnapshotter(name string) (volumesnapshotter.VolumeSnapshotter, error)
GetVolumeSnapshotter(name string) (velero.VolumeSnapshotter, error)
}
type Request struct {
@@ -94,7 +92,7 @@ type Request struct {
type Restorer interface {
// Restore restores the backup data from backupReader, returning warnings and errors.
Restore(req Request,
actions []restoreitemaction.RestoreItemAction,
actions []velero.RestoreItemAction,
snapshotLocationLister listers.VolumeSnapshotLocationLister,
volumeSnapshotterGetter VolumeSnapshotterGetter,
) (Result, Result)
@@ -160,7 +158,7 @@ func NewKubernetesRestorer(
// respectively, summarizing info about the restore.
func (kr *kubernetesRestorer) Restore(
req Request,
actions []restoreitemaction.RestoreItemAction,
actions []velero.RestoreItemAction,
snapshotLocationLister listers.VolumeSnapshotLocationLister,
volumeSnapshotterGetter VolumeSnapshotterGetter,
) (Result, Result) {
@@ -280,14 +278,14 @@ func (kr *kubernetesRestorer) Restore(
}
type resolvedAction struct {
restoreitemaction.RestoreItemAction
velero.RestoreItemAction
resourceIncludesExcludes *collections.IncludesExcludes
namespaceIncludesExcludes *collections.IncludesExcludes
selector labels.Selector
}
func resolveActions(actions []restoreitemaction.RestoreItemAction, helper discovery.Helper) ([]resolvedAction, error) {
func resolveActions(actions []velero.RestoreItemAction, helper discovery.Helper) ([]resolvedAction, error) {
var resolved []resolvedAction
for _, action := range actions {

View File

@@ -57,7 +57,6 @@ func NewAPIServer(t *testing.T) *APIServer {
{Group: "apiextensions.k8s.io", Version: "v1beta1", Resource: "customresourcedefinitions"}: "CRDList",
{Group: "velero.io", Version: "v1", Resource: "volumesnapshotlocations"}: "VSLList",
{Group: "extensions", Version: "v1", Resource: "deployments"}: "ExtDeploymentsList",
{Group: "velero.io", Version: "v1", Resource: "deployments"}: "VeleroDeploymentsList",
})
discoveryClient = &DiscoveryClient{FakeDiscovery: kubeClient.Discovery().(*discoveryfake.FakeDiscovery)}
)

View File

@@ -108,18 +108,6 @@ func ExtensionsDeployments(items ...metav1.Object) *APIResource {
}
}
// test CRD
func VeleroDeployments(items ...metav1.Object) *APIResource {
return &APIResource{
Group: "velero.io",
Version: "v1",
Name: "deployments",
ShortName: "deploy",
Namespaced: true,
Items: items,
}
}
func Namespaces(items ...metav1.Object) *APIResource {
return &APIResource{
Group: "",

View File

@@ -12,7 +12,7 @@ params:
hero:
backgroundColor: med-blue
versioning: true
latest: v1.7
latest: v1.6
versions:
- main
- v1.7

View File

@@ -1,7 +0,0 @@
---
first_name: Daniel
last_name: Jiang
image: /img/contributors/daniel-jiang.png
github_handle: reasonerjt
---
Technical Lead

View File

@@ -4,5 +4,5 @@ last_name: Smith-Uchida
image: /img/contributors/dave.png
github_handle: dsu-igeek
---
Architect
Technical Lead

View File

@@ -1,7 +0,0 @@
---
first_name: Wenkai
last_name: Yin
image: /img/contributors/wenkai-yin.png
github_handle: ywk253100
---
Engineer

View File

@@ -82,12 +82,6 @@ For each major or minor release, create and publish a blog post to let folks kno
- Do a review of the diffs, and/or run `make serve-docs` and review the site.
- Submit a PR containing the changelog and the version-tagged docs.
### Pin the base image
The Velero image is built on the [Distroless docker image](https://github.com/GoogleContainerTools/distroless).
For the reproducibility of the release, before the release candidate is tagged, make sure that in the Dockerfile
on the release branch the base image is referenced by digest, such as
https://github.com/vmware-tanzu/velero/blob/release-1.7/Dockerfile#L53-L54
## Velero release
### Notes
- Pre-requisite: PR with the changelog and docs is merged, so that it's included in the release tag.

View File

@@ -10,41 +10,41 @@ the supported cloud providers block storage offerings (Amazon EBS Volumes, Az
It also provides a plugin model that enables anyone to implement additional object and block storage backends, outside the
main Velero repository.
Velero's Restic integration was added to give you an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume. This integration is an addition to Velero's capabilities, not a replacement for existing functionality. If you're running on AWS, and taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using Restic. However, if you need a volume snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
local, or any other volume type that doesn't have a native snapshot concept, Restic might be for you.
The restic integration was added to give you an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume. This integration is an addition to Velero's capabilities, not a replacement for existing functionality. If you're running on AWS, and taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using restic. However, if you need a volume snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
local, or any other volume type that doesn't have a native snapshot concept, restic might be for you.
Restic is not tied to a specific storage platform, which means that this integration also paves the way for future work to enable
cross-volume-type data migrations.
**NOTE:** hostPath volumes are not supported, but the [local volume type][4] is supported.
## Setup Restic
## Setup restic
### Prerequisites
- Understand how Velero performs [backups with the Restic integration](#how-backup-and-restore-work-with-restic).
- Understand how Velero performs [backups with the restic integration](#how-backup-and-restore-work-with-restic).
- [Download][3] the latest Velero release.
- Kubernetes v1.12.0 and later. Velero's Restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.12.0 and later.
- Kubernetes v1.12.0 and later. Velero's restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.12.0 and later.
### Install Restic
### Install restic
To install Restic, use the `--use-restic` flag in the `velero install` command. See the [install overview][2] for more details on other flags for the install command.
To install restic, use the `--use-restic` flag in the `velero install` command. See the [install overview][2] for more details on other flags for the install command.
```
velero install --use-restic
```
When using Restic on a storage provider that doesn't have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.
When using restic on a storage provider that doesn't have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.
### Configure Restic DaemonSet spec
### Configure restic DaemonSet spec
After installation, some PaaS/CaaS platforms based on Kubernetes also require modifications the Restic DaemonSet spec. The steps in this section are only needed if you are installing on RancherOS, OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS), or Microsoft Azure.
After installation, some PaaS/CaaS platforms based on Kubernetes also require modifications to the restic DaemonSet spec. The steps in this section are only needed if you are installing on RancherOS, OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS), or Microsoft Azure.
**RancherOS**
Update the host path for volumes in the Restic DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/opt/rke/var/lib/kubelet/pods`.
Update the host path for volumes in the restic DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/opt/rke/var/lib/kubelet/pods`.
```yaml
hostPath:
@@ -62,7 +62,7 @@ hostPath:
**OpenShift**
To mount the correct hostpath to pods volumes, run the Restic pod in `privileged` mode.
To mount the correct hostpath to pods volumes, run the restic pod in `privileged` mode.
1. Add the `velero` ServiceAccount to the `privileged` SCC:
@@ -125,7 +125,7 @@ To mount the correct hostpath to pods volumes, run the Restic pod in `privileged
```
If Restic is not running in a privileged mode, it will not be able to access pods volumes within the mounted hostpath directory because of the default enforced SELinux mode configured in the host system level. You can [create a custom SCC](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) to relax the security in your cluster so that Restic pods are allowed to use the hostPath volume plug-in without granting them access to the `privileged` SCC.
If restic is not running in a privileged mode, it will not be able to access pods volumes within the mounted hostpath directory because of the default enforced SELinux mode configured in the host system level. You can [create a custom SCC](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) to relax the security in your cluster so that restic pods are allowed to use the hostPath volume plug-in without granting them access to the `privileged` SCC.
By default a userland openshift namespace will not schedule pods on all nodes in the cluster.
@@ -147,7 +147,7 @@ oc create -n <velero namespace> -f ds.yaml
**VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS)**
You need to enable the `Allow Privileged` option in your plan configuration so that Restic is able to mount the hostpath.
You need to enable the `Allow Privileged` option in your plan configuration so that restic is able to mount the hostpath.
The hostPath should be changed from `/var/lib/kubelet/pods` to `/var/vcap/data/kubelet/pods`
@@ -172,16 +172,16 @@ kubectl patch storageclass/<YOUR_AZURE_FILE_STORAGE_CLASS_NAME> \
## To back up
Velero supports two approaches of discovering pod volumes that need to be backed up using Restic:
Velero supports two approaches of discovering pod volumes that need to be backed up using restic:
- Opt-in approach: Where every pod containing a volume to be backed up using Restic must be annotated with the volume's name.
- Opt-out approach: Where all pod volumes are backed up using Restic, with the ability to opt-out any volumes that should not be backed up.
- Opt-in approach: Where every pod containing a volume to be backed up using restic must be annotated with the volume's name.
- Opt-out approach: Where all pod volumes are backed up using restic, with the ability to opt-out any volumes that should not be backed up.
The following sections provide more details on the two approaches.
### Using the opt-out approach
In this approach, Velero will back up all pod volumes using Restic with the exception of:
In this approach, Velero will back up all pod volumes using restic with the exception of:
- Volumes mounting the default service account token, kubernetes secrets, and config maps
- Hostpath volumes
@@ -190,7 +190,7 @@ It is possible to exclude volumes from being backed up using the `backup.velero.
Instructions to back up using this approach are as follows:
1. Run the following command on each pod that contains volumes that should **not** be backed up using Restic
1. Run the following command on each pod that contains volumes that should **not** be backed up using restic
```bash
kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes-excludes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
@@ -221,7 +221,7 @@ Instructions to back up using this approach are as follows:
- name: pvc2-vm
claimName: pvc2
```
to exclude Restic backup of volume `pvc1-vm`, you would run:
to exclude restic backup of volume `pvc1-vm`, you would run:
```bash
kubectl -n sample annotate pod/app1 backup.velero.io/backup-volumes-excludes=pvc1-vm
@@ -248,7 +248,7 @@ Instructions to back up using this approach are as follows:
### Using opt-in pod volume backup
Velero, by default, uses this approach to discover pod volumes that need to be backed up using Restic, where every pod containing a volume to be backed up using Restic must be annotated with the volume's name.
Velero, by default, uses this approach to discover pod volumes that need to be backed up using restic, where every pod containing a volume to be backed up using restic must be annotated with the volume's name.
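As a hedged illustration of the opt-in annotation described above, it can also be declared directly in a pod manifest rather than applied with `kubectl annotate` (the pod, claim, and volume names here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app1
  namespace: sample
  annotations:
    # Opts this volume in to restic backup (opt-in approach)
    backup.velero.io/backup-volumes: my-data
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: my-data
      mountPath: /data
  volumes:
  - name: my-data
    persistentVolumeClaim:
      claimName: pvc1
```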
Instructions to back up using this approach are as follows:
@@ -310,7 +310,7 @@ Instructions to back up using this approach are as follows:
## To restore
Regardless of how volumes are discovered for backup using Restic, the process of restoring remains the same.
Regardless of how volumes are discovered for backup using restic, the process of restoring remains the same.
1. Restore from your Velero backup:
@@ -331,20 +331,20 @@ Regardless of how volumes are discovered for backup using Restic, the process of
- `hostPath` volumes are not supported. [Local persistent volumes][4] are supported.
- Those of you familiar with [restic][1] may know that it encrypts all of its data. Velero uses a static,
common encryption key for all Restic repositories it creates. **This means that anyone who has access to your
bucket can decrypt your Restic backup data**. Make sure that you limit access to the Restic bucket
common encryption key for all restic repositories it creates. **This means that anyone who has access to your
bucket can decrypt your restic backup data**. Make sure that you limit access to the restic bucket
appropriately.
- An incremental backup chain will be maintained across pod reschedules for PVCs. However, for pod volumes that are *not*
PVCs, such as `emptyDir` volumes, when a pod is deleted/recreated (for example, by a ReplicaSet/Deployment), the next backup of those
volumes will be full rather than incremental, because the pod volume's lifecycle is assumed to be defined by its pod.
- Restic scans each file in a single thread. This means that large files (such as ones storing a database) will take a long time to scan for data deduplication, even if the actual
difference is small.
- If you plan to use Velero's Restic integration to backup 100GB of data or more, you may need to [customize the resource limits](/docs/main/customize-installation/#customize-resource-requests-and-limits) to make sure backups complete successfully.
- Velero's Restic integration backs up data from volumes by accessing the node's filesystem, on which the pod is running. For this reason, Velero's Restic integration can only backup volumes that are mounted by a pod and not directly from the PVC. For orphan PVC/PV pairs (without running pods), some Velero users overcame this limitation running a staging pod (i.e. a busybox or alpine container with an infinite sleep) to mount these PVC/PV pairs prior taking a Velero backup.
- If you plan to use the Velero restic integration to backup 100GB of data or more, you may need to [customize the resource limits](/docs/main/customize-installation/#customize-resource-requests-and-limits) to make sure backups complete successfully.
- Velero's restic integration backs up data from volumes by accessing the node's filesystem, on which the pod is running. For this reason, restic integration can only backup volumes that are mounted by a pod and not directly from the PVC.
## Customize Restore Helper Container
Velero uses a helper init container when performing a Restic restore. By default, the image for this container is `velero/velero-restic-restore-helper:<VERSION>`,
Velero uses a helper init container when performing a restic restore. By default, the image for this container is `velero/velero-restic-restore-helper:<VERSION>`,
where `VERSION` matches the version/tag of the main Velero image. You can customize the image that is used for this helper by creating a ConfigMap in the Velero namespace with
the alternate image.
@@ -410,7 +410,7 @@ Are your Velero server and daemonset pods running?
kubectl get pods -n velero
```
Does your Restic repository exist, and is it ready?
Does your restic repository exist, and is it ready?
```bash
velero restic repo get
@@ -446,31 +446,31 @@ kubectl -n velero logs DAEMON_POD_NAME
**NOTE**: You can increase the verbosity of the pod logs by adding `--log-level=debug` as an argument
to the container command in the deployment/daemonset pod template spec.
## How backup and restore work with Restic
## How backup and restore work with restic
Velero has three custom resource definitions and associated controllers:
- `ResticRepository` - represents/manages the lifecycle of Velero's [restic repositories][5]. Velero creates
a Restic repository per namespace when the first Restic backup for a namespace is requested. The controller
for this custom resource executes Restic repository lifecycle commands -- `restic init`, `restic check`,
a restic repository per namespace when the first restic backup for a namespace is requested. The controller
for this custom resource executes restic repository lifecycle commands -- `restic init`, `restic check`,
and `restic prune`.
You can see information about your Velero's Restic repositories by running `velero restic repo get`.
You can see information about your Velero restic repositories by running `velero restic repo get`.
- `PodVolumeBackup` - represents a Restic backup of a volume in a pod. The main Velero backup process creates
- `PodVolumeBackup` - represents a restic backup of a volume in a pod. The main Velero backup process creates
one or more of these when it finds an annotated pod. Each node in the cluster runs a controller for this
resource (in a daemonset) that handles the `PodVolumeBackups` for pods on that node. The controller executes
`restic backup` commands to backup pod volume data.
- `PodVolumeRestore` - represents a Restic restore of a pod volume. The main Velero restore process creates one
or more of these when it encounters a pod that has associated Restic backups. Each node in the cluster runs a
- `PodVolumeRestore` - represents a restic restore of a pod volume. The main Velero restore process creates one
or more of these when it encounters a pod that has associated restic backups. Each node in the cluster runs a
controller for this resource (in the same daemonset as above) that handles the `PodVolumeRestores` for pods
on that node. The controller executes `restic restore` commands to restore pod volume data.
### Backup
1. Based on configuration, the main Velero backup process uses the opt-in or opt-out approach to check each pod that it's backing up for the volumes to be backed up using Restic.
1. When found, Velero first ensures a Restic repository exists for the pod's namespace, by:
1. Based on configuration, the main Velero backup process uses the opt-in or opt-out approach to check each pod that it's backing up for the volumes to be backed up using restic.
1. When found, Velero first ensures a restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it
1. Velero then creates a `PodVolumeBackup` custom resource per volume listed in the pod annotation
@@ -485,14 +485,14 @@ on that node. The controller executes `restic restore` commands to restore pod v
### Restore
1. The main Velero restore process checks each existing `PodVolumeBackup` custom resource in the cluster to backup from.
1. For each `PodVolumeBackup` found, Velero first ensures a Restic repository exists for the pod's namespace, by:
1. For each `PodVolumeBackup` found, Velero first ensures a restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it (note that
in this case, the actual repository should already exist in object storage, so the Velero controller will simply
check it for integrity)
1. Velero adds an init container to the pod, whose job is to wait for all Restic restores for the pod to complete (more
1. Velero adds an init container to the pod, whose job is to wait for all restic restores for the pod to complete (more
on this shortly)
1. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API. Then, the Kubernetes scheduler schedules this pod to a worker node, and the pod must be in a running state. If the pod fails to start for some reason (i.e. lack of cluster resources), the Restic restore will not be done.
1. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API
1. Velero creates a `PodVolumeRestore` custom resource for each volume to be restored in the pod
1. The main Velero process now waits for each `PodVolumeRestore` resource to complete or fail
1. Meanwhile, each `PodVolumeRestore` is handled by the controller on the appropriate node, which:
@@ -512,7 +512,7 @@ on to running other init containers/the main containers.
### Monitor backup annotation
Velero does not provide a mechanism to detect persistent volume claims that are missing the Restic backup annotation.
Velero does not provide a mechanism to detect persistent volume claims that are missing the restic backup annotation.
To solve this, a controller was written by Thomann Bits&Beats: [velero-pvc-watcher][7]
@@ -526,3 +526,4 @@ To solve this, a controller was written by Thomann Bits&Beats: [velero-pvc-watch
[8]: https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
[9]: https://github.com/restic/restic/issues/1800
[11]: customize-installation.md#default-pod-volume-backup-to-restic

View File

@@ -10,41 +10,41 @@ the supported cloud providers block storage offerings (Amazon EBS Volumes, Az
It also provides a plugin model that enables anyone to implement additional object and block storage backends, outside the
main Velero repository.
Velero's Restic integration was added to give you an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume. This integration is an addition to Velero's capabilities, not a replacement for existing functionality. If you're running on AWS, and taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using Restic. However, if you need a volume snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
local, or any other volume type that doesn't have a native snapshot concept, Restic might be for you.
The restic integration was added to give you an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume. This integration is an addition to Velero's capabilities, not a replacement for existing functionality. If you're running on AWS, and taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using restic. However, if you need a volume snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
local, or any other volume type that doesn't have a native snapshot concept, restic might be for you.
Restic is not tied to a specific storage platform, which means that this integration also paves the way for future work to enable
cross-volume-type data migrations.
**NOTE:** hostPath volumes are not supported, but the [local volume type][4] is supported.
## Setup Restic
## Setup restic
### Prerequisites
- Understand how Velero performs [backups with the Restic integration](#how-backup-and-restore-work-with-restic).
- Understand how Velero performs [backups with the restic integration](#how-backup-and-restore-work-with-restic).
- [Download][3] the latest Velero release.
- Kubernetes v1.12.0 and later. Velero's Restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.12.0 and later.
- Kubernetes v1.12.0 and later. Velero's restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.12.0 and later.
### Install Restic
### Install restic
To install Restic, use the `--use-restic` flag in the `velero install` command. See the [install overview][2] for more details on other flags for the install command.
To install restic, use the `--use-restic` flag in the `velero install` command. See the [install overview][2] for more details on other flags for the install command.
```
velero install --use-restic
```
When using Restic on a storage provider that doesn't have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.
When using restic on a storage provider that doesn't have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.
### Configure Restic DaemonSet spec
### Configure restic DaemonSet spec
After installation, some PaaS/CaaS platforms based on Kubernetes also require modifications the Restic DaemonSet spec. The steps in this section are only needed if you are installing on RancherOS, OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS), or Microsoft Azure.
After installation, some PaaS/CaaS platforms based on Kubernetes also require modifications to the restic DaemonSet spec. The steps in this section are only needed if you are installing on RancherOS, OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS), or Microsoft Azure.
**RancherOS**
Update the host path for volumes in the restic DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/opt/rke/var/lib/kubelet/pods`.
```yaml
hostPath:
  path: /opt/rke/var/lib/kubelet/pods
```
**OpenShift**
To mount the correct hostpath to pods volumes, run the restic pod in `privileged` mode.
1. Add the `velero` ServiceAccount to the `privileged` SCC:
```bash
oc adm policy add-scc-to-user privileged -z velero -n velero
```
If restic is not running in a privileged mode, it will not be able to access pods volumes within the mounted hostpath directory because of the default enforced SELinux mode configured in the host system level. You can [create a custom SCC](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) to relax the security in your cluster so that restic pods are allowed to use the hostPath volume plug-in without granting them access to the `privileged` SCC.
By default, a userland OpenShift namespace will not schedule pods on all nodes in the cluster. After adjusting the namespace's node selector, recreate the daemonset:

```bash
oc create -n <velero namespace> -f ds.yaml
```
**VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS)**
You need to enable the `Allow Privileged` option in your plan configuration so that restic is able to mount the hostpath.
The hostPath should be changed from `/var/lib/kubelet/pods` to `/var/vcap/data/kubelet/pods`.
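Following the same pattern as the RancherOS change above, the daemonset volume definition would then look like this (a sketch of the relevant fragment only):

```yaml
hostPath:
  path: /var/vcap/data/kubelet/pods
```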
## To back up
Velero supports two approaches of discovering pod volumes that need to be backed up using restic:
- Opt-in approach: Where every pod containing a volume to be backed up using restic must be annotated with the volume's name.
- Opt-out approach: Where all pod volumes are backed up using restic, with the ability to opt out of any volumes that should not be backed up.
The following sections provide more details on the two approaches.
### Using the opt-out approach
In this approach, Velero will back up all pod volumes using restic with the exception of:
- Volumes mounting the default service account token, Kubernetes secrets, and config maps
- Hostpath volumes
It is possible to exclude volumes from being backed up using the `backup.velero.io/backup-volumes-excludes` annotation on the pod.
Instructions to back up using this approach are as follows:
1. Run the following command on each pod that contains volumes that should **not** be backed up using restic
```bash
kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes-excludes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
```
where the volume names are the names of the volumes in the pod spec.
For example, in the following pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app1
  namespace: sample
spec:
  containers:
  - image: nginx   # illustrative container; the volume definitions are what matter here
    name: app1
    volumeMounts:
    - name: pvc1-vm
      mountPath: /volume-1
    - name: pvc2-vm
      mountPath: /volume-2
  volumes:
  - name: pvc1-vm
    persistentVolumeClaim:
      claimName: pvc1
  - name: pvc2-vm
    persistentVolumeClaim:
      claimName: pvc2
```
to exclude restic backup of volume `pvc1-vm`, you would run:
```bash
kubectl -n sample annotate pod/app1 backup.velero.io/backup-volumes-excludes=pvc1-vm
### Using opt-in pod volume backup
Velero, by default, uses this approach to discover pod volumes that need to be backed up using restic, where every pod containing a volume to be backed up using restic must be annotated with the volume's name.
Instructions to back up using this approach are as follows:
1. Run the following command on each pod that contains volumes to be backed up:

```bash
kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
```

where the volume names are the names of the volumes in the pod spec.
## To restore
Regardless of how volumes are discovered for backup using restic, the process of restoring remains the same.
1. Restore from your Velero backup:
```bash
velero restore create --from-backup BACKUP_NAME
```

## Limitations
- `hostPath` volumes are not supported. [Local persistent volumes][4] are supported.
- Those of you familiar with [restic][1] may know that it encrypts all of its data. Velero uses a static,
common encryption key for all restic repositories it creates. **This means that anyone who has access to your
bucket can decrypt your restic backup data**. Make sure that you limit access to the restic bucket
appropriately.
- An incremental backup chain will be maintained across pod reschedules for PVCs. However, for pod volumes that are *not*
PVCs, such as `emptyDir` volumes, when a pod is deleted/recreated (for example, by a ReplicaSet/Deployment), the next backup of those
volumes will be full rather than incremental, because the pod volume's lifecycle is assumed to be defined by its pod.
- Restic scans each file in a single thread. This means that large files (such as ones storing a database) will take a long time to scan for data deduplication, even if the actual
difference is small.
- If you plan to use the Velero restic integration to back up 100GB of data or more, you may need to [customize the resource limits](/docs/main/customize-installation/#customize-resource-requests-and-limits) to make sure backups complete successfully.
- Velero's restic integration backs up data from volumes by accessing the node's filesystem, on which the pod is running. For this reason, the restic integration can only back up volumes that are mounted by a pod and not directly from the PVC. For orphan PVC/PV pairs (without running pods), some Velero users overcame this limitation by running a staging pod (i.e. a busybox or alpine container with an infinite sleep) to mount these PVC/PV pairs prior to taking a Velero backup.
## Customize Restore Helper Container
Velero uses a helper init container when performing a restic restore. By default, the image for this container is `velero/velero-restic-restore-helper:<VERSION>`,
where `VERSION` matches the version/tag of the main Velero image. You can customize the image that is used for this helper by creating a ConfigMap in the Velero namespace with
the alternate image.
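One possible shape for that ConfigMap, assuming Velero's plugin-config label convention (the ConfigMap name and the image below are placeholders; point the image at your own registry):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # The name is arbitrary; Velero locates the ConfigMap by its labels.
  name: restic-restore-action-config
  namespace: velero
  labels:
    velero.io/plugin-config: ""
    velero.io/restic: RestoreItemAction
data:
  # Placeholder image for the restore helper init container.
  image: myregistry.example.com/velero-restic-restore-helper:latest
```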
## Troubleshooting

Are your Velero server and daemonset pods running?

```bash
kubectl get pods -n velero
```
Does your restic repository exist, and is it ready?
```bash
velero restic repo get
```

Are there any errors in the restic daemonset pod logs?

```bash
kubectl -n velero logs DAEMON_POD_NAME
```
**NOTE**: You can increase the verbosity of the pod logs by adding `--log-level=debug` as an argument
to the container command in the deployment/daemonset pod template spec.
## How backup and restore work with restic
Velero has three custom resource definitions and associated controllers:
- `ResticRepository` - represents/manages the lifecycle of Velero's [restic repositories][5]. Velero creates
a restic repository per namespace when the first restic backup for a namespace is requested. The controller
for this custom resource executes restic repository lifecycle commands -- `restic init`, `restic check`,
and `restic prune`.
You can see information about your Velero restic repositories by running `velero restic repo get`.
- `PodVolumeBackup` - represents a restic backup of a volume in a pod. The main Velero backup process creates
one or more of these when it finds an annotated pod. Each node in the cluster runs a controller for this
resource (in a daemonset) that handles the `PodVolumeBackups` for pods on that node. The controller executes
`restic backup` commands to backup pod volume data.
- `PodVolumeRestore` - represents a restic restore of a pod volume. The main Velero restore process creates one
or more of these when it encounters a pod that has associated restic backups. Each node in the cluster runs a
controller for this resource (in the same daemonset as above) that handles the `PodVolumeRestores` for pods
on that node. The controller executes `restic restore` commands to restore pod volume data.
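The three custom resources described above can be inspected with standard kubectl listing (assuming resource names follow the CRDs listed here, and that the resources live in the Velero namespace):

```bash
# Inspect the restic-related custom resources managed by Velero.
kubectl -n velero get resticrepositories
kubectl -n velero get podvolumebackups
kubectl -n velero get podvolumerestores
```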
### Backup
1. Based on configuration, the main Velero backup process uses the opt-in or opt-out approach to check each pod that it's backing up for the volumes to be backed up using restic.
1. When found, Velero first ensures a restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it
1. Velero then creates a `PodVolumeBackup` custom resource per volume listed in the pod annotation
1. The main Velero process then waits for each `PodVolumeBackup` resource to complete or fail
1. Meanwhile, each `PodVolumeBackup` is handled by the controller on the appropriate node, which executes `restic backup` commands to back up the pod volume data and updates the status of the custom resource to `Completed` or `Failed`
### Restore
1. The main Velero restore process checks each existing `PodVolumeBackup` custom resource in the cluster to restore from.
1. For each `PodVolumeBackup` found, Velero first ensures a restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it (note that
in this case, the actual repository should already exist in object storage, so the Velero controller will simply
check it for integrity)
1. Velero adds an init container to the pod, whose job is to wait for all restic restores for the pod to complete (more
on this shortly)
1. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API. Then, the Kubernetes scheduler schedules this pod to a worker node, and the pod must be in a running state. If the pod fails to start for some reason (i.e. lack of cluster resources), the restic restore will not be done.
1. Velero creates a `PodVolumeRestore` custom resource for each volume to be restored in the pod
1. The main Velero process now waits for each `PodVolumeRestore` resource to complete or fail
1. Meanwhile, each `PodVolumeRestore` is handled by the controller on the appropriate node, which:
   - executes `restic restore` commands to restore the pod volume data, and updates the status of the custom resource to `Completed` or `Failed`
1. Once all restores for the pod complete successfully, the init container added to the pod exits, and Kubernetes moves
on to running other init containers/the main containers.
### Monitor backup annotation
Velero does not provide a mechanism to detect persistent volume claims that are missing the restic backup annotation.
To solve this, a controller was written by Thomann Bits&Beats: [velero-pvc-watcher][7]
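For a quick ad-hoc check (a sketch, assuming `jq` is available; not a substitute for the controller above), you can list pods that lack the opt-in annotation:

```bash
# Print namespace/name of pods missing the backup.velero.io/backup-volumes annotation.
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
      | select(.metadata.annotations["backup.velero.io/backup-volumes"] == null)
      | .metadata.namespace + "/" + .metadata.name'
```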

Excerpt from the credentials `env` entries in the example manifests changed elsewhere in this compare:

```yaml
        - name: AWS_SHARED_CREDENTIALS_FILE
          value: /credentials/cloud
        - name: AZURE_SHARED_CREDENTIALS_FILE
          value: /credentials/cloud
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /credentials/cloud
```