mirror of https://github.com/vmware-tanzu/velero.git (synced 2026-01-14 08:42:51 +00:00)

Compare commits: 35 commits, `release-1.` ... `plugin-int`
| SHA1 |
|---|
| 80828f727e |
| de360a4b31 |
| 3cbd7976bd |
| 79d1616ecb |
| f40f0d4e5b |
| 5c707d20c1 |
| b059030666 |
| 9f54451e58 |
| 550efddd88 |
| 9f0ea22c60 |
| 4a792c71ef |
| 51307130a2 |
| de0fe7ff67 |
| 163e96b62d |
| b3c3d2351d |
| 48cac824b2 |
| 430410c763 |
| 211e490c2c |
| afe43b2c9d |
| 7afac2a05c |
| 9f06a1b451 |
| 54fa63939a |
| 033dc06475 |
| e1e6332e07 |
| 90adb5602f |
| f67dd4cbde |
| b5e6ba455d |
| 4c670fb46b |
| 5c77847f02 |
| f4171413c4 |
| 4c8318cb7c |
| a6fca1da87 |
| c7c94ef891 |
| eb332e6a77 |
| d08c4bae4d |
.github/workflows/e2e-test-kind.yaml (vendored) — 5 changes

@@ -71,6 +71,11 @@ jobs:
           - 1.22.0
     fail-fast: false
     steps:
+    - name: Set up Go
+      uses: actions/setup-go@v2
+      with:
+        go-version: 1.16
+      id: go
     - name: Check out the code
       uses: actions/checkout@v2
     - name: Install MinIO
@@ -26,7 +26,8 @@

 | Feature Area | Lead |
 | ----------------------------- | :---------------------: |
-| Technical Lead | Dave Smith-Uchida (dsu-igeek) |
+| Architect | Dave Smith-Uchida (dsu-igeek) |
+| Technical Lead | Daniel Jiang (reasonerjt) |
 | Kubernetes CSI Liaison | |
 | Deployment | JenTing Hsiao (jenting) |
 | Community Management | Jonas Rosland (jonasrosland) |
Makefile — 4 changes

@@ -81,8 +81,8 @@ buildx not enabled, refusing to run this recipe
 see: https://velero.io/docs/main/build-from-source/#making-images-and-updating-velero for more info
 endef

-# The version of restic binary to be downloaded for power architecture
-RESTIC_VERSION ?= 0.12.0
+# The version of restic binary to be downloaded
+RESTIC_VERSION ?= 0.12.1

 CLI_PLATFORMS ?= linux-amd64 linux-arm linux-arm64 darwin-amd64 windows-amd64 linux-ppc64le
 BUILDX_PLATFORMS ?= $(subst -,/,$(ARCH))
ROADMAP.md — 49 changes

@@ -15,33 +15,28 @@ We work with and rely on community feedback to focus our efforts to improve Velero
 The following table includes the current roadmap for Velero. If you have any questions or would like to contribute to Velero, please attend a [community meeting](https://velero.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors that will help us reduce technical, automation, and documentation debt.
 Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Velero.

-`Last Updated: July 2021`
+`Last Updated: October 2021`

-#### 1.7.0 Roadmap (to be delivered early fall)
-The release roadmap is split into Core items that are required for the release and desired items that may slip the release.
+#### 1.8.0 Roadmap (to be delivered January/February 2022)

-##### Core items
-The top priority of 1.7 is to increase the technical health of Velero and be more efficient with Velero developer time by streamlining the release process and automating and expanding the E2E test suite.
+|Issue|Description|Timeline|Notes|
+|---|---|---|---|
+|[4108](https://github.com/vmware-tanzu/velero/issues/4108), [4109](https://github.com/vmware-tanzu/velero/issues/4109)|Solution for CSI - Azure and AWS|2022 H1|Currently, Velero plugins for AWS and Azure cannot back up persistent volumes that were provisioned using the CSI driver. This will fix that.|
+|[3229](https://github.com/vmware-tanzu/velero/issues/3229),[4112](https://github.com/vmware-tanzu/velero/issues/4112)|Moving data mover functionality from the Velero Plugin for vSphere into Velero proper|2022 H1|This work is a precursor to decoupling the Astrolabe snapshotting infrastructure.|
+|[3533](https://github.com/vmware-tanzu/velero/issues/3533)|Upload Progress Monitoring|2022 H1|Finishing up the work done in the 1.7 timeframe. The data mover work depends on this.|
+|[1975](https://github.com/vmware-tanzu/velero/issues/1975)|Test dual stack mode|2022 H1|We already tested IPv6, but we want to confirm that dual stack mode works as well.|
+|[2082](https://github.com/vmware-tanzu/velero/issues/2082)|Delete Backup CRs on removing target location.|2022 H1||
+|[3516](https://github.com/vmware-tanzu/velero/issues/3516)|Restore issue with MutatingWebhookConfiguration v1beta1 API version|2022 H1||
+|[2308](https://github.com/vmware-tanzu/velero/issues/2308)|Restoring nodePort service that has nodePort preservation always fails if service already exists in the namespace|2022 H1||
+|[4115](https://github.com/vmware-tanzu/velero/issues/4115)|Support for multiple set of credentials for VolumeSnapshotLocations|2022 H1||
+|[1980](https://github.com/vmware-tanzu/velero/issues/1980)|Velero triggers backup immediately for scheduled backups|2022 H1||
+|[4067](https://github.com/vmware-tanzu/velero/issues/4067)|Pre and post backup and restore hooks|2022 H1||
+|[3742](https://github.com/vmware-tanzu/velero/issues/3742)|Carvel packaging for Velero for vSphere|2022 H1|AWS and Azure have been completed already.|
+|[3285](https://github.com/vmware-tanzu/velero/issues/3285)|Design doc for Velero plugin versioning|2022 H1||
+|[4231](https://github.com/vmware-tanzu/velero/issues/4231)|Technical health (prioritizing giving developers confidence and saving developers time)|2022 H1|More automated tests (especially the pre-release manual tests) and more automation of the running of tests.|
+|[4110](https://github.com/vmware-tanzu/velero/issues/4110)|Solution for CSI - GCP|2022 H1|Currently, the Velero plugin for GCP cannot back up persistent volumes that were provisioned using the CSI driver. This will fix that.|
+|[3742](https://github.com/vmware-tanzu/velero/issues/3742)|Carvel packaging for Velero for restic|2022 H1|AWS and Azure have been completed already.|
+|[3454](https://github.com/vmware-tanzu/velero/issues/3454),[4134](https://github.com/vmware-tanzu/velero/issues/4134),[4135](https://github.com/vmware-tanzu/velero/issues/4135)|Kubebuilder tech debt|2022 H1||
+|[4111](https://github.com/vmware-tanzu/velero/issues/4111)|Ignore items returned by ItemSnapshotter.AlsoHandles during backup|2022 H1|This will enable backup of complex objects, because we can then tell Velero to ignore things that were already backed up when Velero was previously called recursively.|

-|Issue|Description|
-|---|---|
-||Streamline release process|
-||Automate the running of the E2E tests|
-||Convert pre-release manual tests to automated E2E tests|
-|[3493](https://github.com/vmware-tanzu/velero/issues/3493)|[Carvel](https://github.com/vmware-tanzu/velero/issues/3493) based installation (in addition to the existing *velero install* CLI).|
-|[675](https://github.com/vmware-tanzu/velero/issues/675)|Velero command to generate debugging information. Will integrate with [Crashd - Crash Diagnostics](https://github.com/vmware-tanzu/velero/issues/675)|
-|[3285](https://github.com/vmware-tanzu/velero/issues/3285)|Design doc for Velero plugin versioning|
-|[1975](https://github.com/vmware-tanzu/velero/issues/1975)|IPV6 support|
-|[3533](https://github.com/vmware-tanzu/velero/issues/3533)|Upload Progress Monitoring|
-|[3500](https://github.com/vmware-tanzu/velero/issues/3500)|Use distroless containers as a base|

-##### Items formerly in 1.7 that will slip due to staffing changes
-|Issue|Description|
-|---|---|
-|[3536](https://github.com/vmware-tanzu/velero/issues/3536)|Manifest for backup/restore|
-|[2066](https://github.com/vmware-tanzu/velero/issues/2066)|CSI Snapshots GA|
-|[3535](https://github.com/vmware-tanzu/velero/issues/3535)|Design doc for multiple cluster support|
-|[2922](https://github.com/vmware-tanzu/velero/issues/2922)|Plugin timeouts|
-|[3531](https://github.com/vmware-tanzu/velero/issues/3531)|Test plan for Velero|
+Other work may make it into the 1.8 release, but this is the work that will be prioritized first.
Tiltfile — 6 changes

@@ -16,7 +16,7 @@ k8s_yaml([

 # default values
 settings = {
-    "default_registry": "",
+    "default_registry": "docker.io/velero",
     "enable_restic": False,
     "enable_debug": False,
     "debug_continue_on_start": True, # Continue the velero process by default when in debug mode

@@ -90,14 +90,14 @@ def get_debug_flag():
 # Set up a local_resource build of the Velero binary. The binary is written to _tiltbuild/velero.
 local_resource(
     "velero_server_binary",
-    cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild;PKG=. BIN=velero GOOS=linux GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild ' + get_debug_flag() + ' ./hack/build.sh',
+    cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild;PKG=. BIN=velero GOOS=linux GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild ' + get_debug_flag() + ' REGISTRY=' + settings.get("default_registry") + ' ./hack/build.sh',
     deps = ["cmd", "internal", "pkg"],
     ignore = ["pkg/cmd"],
 )

 local_resource(
     "velero_local_binary",
-    cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/local;PKG=. BIN=velero GOOS=' + local_goos + ' GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild/local ' + get_debug_flag() + ' ./hack/build.sh',
+    cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/local;PKG=. BIN=velero GOOS=' + local_goos + ' GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild/local ' + get_debug_flag() + ' REGISTRY=' + settings.get("default_registry") + ' ./hack/build.sh',
     deps = ["internal", "pkg/cmd"],
 )
changelogs/unreleased/4126-sseago — new file

@@ -0,0 +1 @@
+Verify group before treating resource as cohabitating

changelogs/unreleased/4185-reasonerjt — new file

@@ -0,0 +1 @@
+Refine tag-release.sh to align with change in release process

changelogs/unreleased/4274-ywk253100 — new file

@@ -0,0 +1 @@
+Fix CVE-2020-29652 and CVE-2020-26160

changelogs/unreleased/4281-ywk253100 — new file

@@ -0,0 +1 @@
+Don't create a backup immediately after creating a schedule
@@ -205,8 +205,10 @@ spec:
 are expanded using the container''s environment.
 If a variable cannot be resolved, the
 reference in the input string will be
-unchanged. The $(VAR_NAME) syntax can
-be escaped with a double $$, ie: $$(VAR_NAME).
+unchanged. Double $$ are reduced to a
+single $, which allows for escaping the
+$(VAR_NAME) syntax: i.e. "$$(VAR_NAME)"
+will produce the string literal "$(VAR_NAME)".
 Escaped references will never be expanded,
 regardless of whether the variable exists
 or not. Cannot be updated. More info:

@@ -221,12 +223,14 @@ spec:
 references $(VAR_NAME) are expanded using
 the container''s environment. If a variable
 cannot be resolved, the reference in the
-input string will be unchanged. The $(VAR_NAME)
-syntax can be escaped with a double $$,
-ie: $$(VAR_NAME). Escaped references will
-never be expanded, regardless of whether
-the variable exists or not. Cannot be
-updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
+input string will be unchanged. Double
+$$ are reduced to a single $, which allows
+for escaping the $(VAR_NAME) syntax: i.e.
+"$$(VAR_NAME)" will produce the string
+literal "$(VAR_NAME)". Escaped references
+will never be expanded, regardless of
+whether the variable exists or not. Cannot
+be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
 items:
 type: string
 type: array

@@ -244,17 +248,19 @@ spec:
 value:
 description: 'Variable references
 $(VAR_NAME) are expanded using the
-previous defined environment variables
+previously defined environment variables
 in the container and any service
 environment variables. If a variable
 cannot be resolved, the reference
 in the input string will be unchanged.
-The $(VAR_NAME) syntax can be escaped
-with a double $$, ie: $$(VAR_NAME).
-Escaped references will never be
-expanded, regardless of whether
-the variable exists or not. Defaults
-to "".'
+Double $$ are reduced to a single
+$, which allows for escaping the
+$(VAR_NAME) syntax: i.e. "$$(VAR_NAME)"
+will produce the string literal
+"$(VAR_NAME)". Escaped references
+will never be expanded, regardless
+of whether the variable exists or
+not. Defaults to "".'
 type: string
 valueFrom:
 description: Source for the environment

@@ -804,6 +810,30 @@ spec:
 required:
 - port
 type: object
+terminationGracePeriodSeconds:
+description: Optional duration in seconds
+the pod needs to terminate gracefully
+upon probe failure. The grace period
+is the duration in seconds after the
+processes running in the pod are sent
+a termination signal and the time
+when the processes are forcibly halted
+with a kill signal. Set this value
+longer than the expected cleanup time
+for your process. If this value is
+nil, the pod's terminationGracePeriodSeconds
+will be used. Otherwise, this value
+overrides the value provided by the
+pod spec. Value must be non-negative
+integer. The value zero indicates
+stop immediately via the kill signal
+(no opportunity to shut down). This
+is a beta field and requires enabling
+ProbeTerminationGracePeriod feature
+gate. Minimum value is 1. spec.terminationGracePeriodSeconds
+is used if unset.
+format: int64
+type: integer
 timeoutSeconds:
 description: 'Number of seconds after
 which the probe times out. Defaults

@@ -1006,6 +1036,30 @@ spec:
 required:
 - port
 type: object
+terminationGracePeriodSeconds:
+description: Optional duration in seconds
+the pod needs to terminate gracefully
+upon probe failure. The grace period
+is the duration in seconds after the
+processes running in the pod are sent
+a termination signal and the time
+when the processes are forcibly halted
+with a kill signal. Set this value
+longer than the expected cleanup time
+for your process. If this value is
+nil, the pod's terminationGracePeriodSeconds
+will be used. Otherwise, this value
+overrides the value provided by the
+pod spec. Value must be non-negative
+integer. The value zero indicates
+stop immediately via the kill signal
+(no opportunity to shut down). This
+is a beta field and requires enabling
+ProbeTerminationGracePeriod feature
+gate. Minimum value is 1. spec.terminationGracePeriodSeconds
+is used if unset.
+format: int64
+type: integer
 timeoutSeconds:
 description: 'Number of seconds after
 which the probe times out. Defaults

@@ -1017,7 +1071,7 @@ spec:
 resources:
 description: 'Compute Resources required
 by this container. Cannot be updated.
-More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
 properties:
 limits:
 additionalProperties:

@@ -1028,7 +1082,7 @@ spec:
 x-kubernetes-int-or-string: true
 description: 'Limits describes the maximum
 amount of compute resources allowed.
-More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
 type: object
 requests:
 additionalProperties:

@@ -1043,12 +1097,14 @@ spec:
 a container, it defaults to Limits
 if that is explicitly specified, otherwise
 to an implementation-defined value.
-More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
 type: object
 type: object
 securityContext:
-description: 'Security options the pod should
-run with. More info: https://kubernetes.io/docs/concepts/policy/security-context/
+description: 'SecurityContext defines the
+security options the container should
+be run with. If set, the fields of SecurityContext
+override the equivalent fields of PodSecurityContext.
 More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/'
 properties:
 allowPrivilegeEscalation:

@@ -1217,6 +1273,25 @@ spec:
 is the name of the GMSA credential
 spec to use.
 type: string
+hostProcess:
+description: HostProcess determines
+if a container should be run as
+a 'Host Process' container. This
+field is alpha-level and will
+only be honored by components
+that enable the WindowsHostProcessContainers
+feature flag. Setting this field
+without the feature flag will
+result in errors when validating
+the Pod. All of a Pod's containers
+must have the same effective HostProcess
+value (it is not allowed to have
+a mix of HostProcess containers
+and non-HostProcess containers). In
+addition, if HostProcess is true
+then HostNetwork must also be
+set to true.
+type: boolean
 runAsUserName:
 description: The UserName in Windows
 to run the entrypoint of the container

@@ -1369,6 +1444,30 @@ spec:
 required:
 - port
 type: object
+terminationGracePeriodSeconds:
+description: Optional duration in seconds
+the pod needs to terminate gracefully
+upon probe failure. The grace period
+is the duration in seconds after the
+processes running in the pod are sent
+a termination signal and the time
+when the processes are forcibly halted
+with a kill signal. Set this value
+longer than the expected cleanup time
+for your process. If this value is
+nil, the pod's terminationGracePeriodSeconds
+will be used. Otherwise, this value
+overrides the value provided by the
+pod spec. Value must be non-negative
+integer. The value zero indicates
+stop immediately via the kill signal
+(no opportunity to shut down). This
+is a beta field and requires enabling
+ProbeTerminationGracePeriod feature
+gate. Minimum value is 1. spec.terminationGracePeriodSeconds
+is used if unset.
+format: int64
+type: integer
 timeoutSeconds:
 description: 'Number of seconds after
 which the probe times out. Defaults

File diff suppressed because one or more lines are too long
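The description text changed in the hunks above comes from upstream Kubernetes API documentation for environment-variable expansion. As an illustration only (this is not Velero or Kubernetes source), the rule it documents — `$$` collapses to a single `$`, so `$$(VAR_NAME)` yields the literal `$(VAR_NAME)` and is never expanded, while an unresolvable `$(NAME)` is left unchanged — can be sketched as:

```go
package main

import (
	"fmt"
	"strings"
)

// expand is a minimal stand-in for the $(VAR_NAME) expansion rule the CRD
// descriptions above document; the real logic lives in Kubernetes' own
// expansion package, not here.
func expand(input string, env map[string]string) string {
	var out strings.Builder
	for i := 0; i < len(input); i++ {
		if input[i] == '$' && i+1 < len(input) {
			// "$$" is reduced to a single "$", so "$$(VAR_NAME)" becomes
			// the literal "$(VAR_NAME)" and is never expanded.
			if input[i+1] == '$' {
				out.WriteByte('$')
				i++
				continue
			}
			// "$(NAME)": substitute if NAME resolves, else leave unchanged.
			if input[i+1] == '(' {
				if j := strings.IndexByte(input[i+2:], ')'); j >= 0 {
					name := input[i+2 : i+2+j]
					if v, ok := env[name]; ok {
						out.WriteString(v)
					} else {
						out.WriteString(input[i : i+3+j])
					}
					i += 2 + j
					continue
				}
			}
		}
		out.WriteByte(input[i])
	}
	return out.String()
}

func main() {
	env := map[string]string{"VAR_NAME": "value"}
	fmt.Println(expand("$(VAR_NAME)", env))  // value
	fmt.Println(expand("$$(VAR_NAME)", env)) // $(VAR_NAME)
	fmt.Println(expand("$(MISSING)", env))   // $(MISSING)
}
```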
@@ -202,12 +202,14 @@ spec:
 is not provided. Variable references $(VAR_NAME)
 are expanded using the container''s environment.
 If a variable cannot be resolved, the reference
-in the input string will be unchanged. The
-$(VAR_NAME) syntax can be escaped with a
-double $$, ie: $$(VAR_NAME). Escaped references
-will never be expanded, regardless of whether
-the variable exists or not. Cannot be updated.
-More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
+in the input string will be unchanged. Double
+$$ are reduced to a single $, which allows
+for escaping the $(VAR_NAME) syntax: i.e.
+"$$(VAR_NAME)" will produce the string literal
+"$(VAR_NAME)". Escaped references will never
+be expanded, regardless of whether the variable
+exists or not. Cannot be updated. More info:
+https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
 items:
 type: string
 type: array

@@ -218,12 +220,14 @@ spec:
 references $(VAR_NAME) are expanded using
 the container''s environment. If a variable
 cannot be resolved, the reference in the
-input string will be unchanged. The $(VAR_NAME)
-syntax can be escaped with a double $$,
-ie: $$(VAR_NAME). Escaped references will
-never be expanded, regardless of whether
-the variable exists or not. Cannot be updated.
-More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
+input string will be unchanged. Double $$
+are reduced to a single $, which allows
+for escaping the $(VAR_NAME) syntax: i.e.
+"$$(VAR_NAME)" will produce the string literal
+"$(VAR_NAME)". Escaped references will never
+be expanded, regardless of whether the variable
+exists or not. Cannot be updated. More info:
+https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
 items:
 type: string
 type: array

@@ -240,17 +244,19 @@ spec:
 type: string
 value:
 description: 'Variable references $(VAR_NAME)
-are expanded using the previous defined
-environment variables in the container
-and any service environment variables.
-If a variable cannot be resolved,
-the reference in the input string
-will be unchanged. The $(VAR_NAME)
-syntax can be escaped with a double
-$$, ie: $$(VAR_NAME). Escaped references
-will never be expanded, regardless
-of whether the variable exists or
-not. Defaults to "".'
+are expanded using the previously
+defined environment variables in the
+container and any service environment
+variables. If a variable cannot be
+resolved, the reference in the input
+string will be unchanged. Double $$
+are reduced to a single $, which allows
+for escaping the $(VAR_NAME) syntax:
+i.e. "$$(VAR_NAME)" will produce the
+string literal "$(VAR_NAME)". Escaped
+references will never be expanded,
+regardless of whether the variable
+exists or not. Defaults to "".'
 type: string
 valueFrom:
 description: Source for the environment

@@ -792,6 +798,29 @@ spec:
 required:
 - port
 type: object
+terminationGracePeriodSeconds:
+description: Optional duration in seconds
+the pod needs to terminate gracefully
+upon probe failure. The grace period
+is the duration in seconds after the
+processes running in the pod are sent
+a termination signal and the time when
+the processes are forcibly halted with
+a kill signal. Set this value longer
+than the expected cleanup time for your
+process. If this value is nil, the pod's
+terminationGracePeriodSeconds will be
+used. Otherwise, this value overrides
+the value provided by the pod spec.
+Value must be non-negative integer.
+The value zero indicates stop immediately
+via the kill signal (no opportunity
+to shut down). This is a beta field
+and requires enabling ProbeTerminationGracePeriod
+feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds
+is used if unset.
+format: int64
+type: integer
 timeoutSeconds:
 description: 'Number of seconds after
 which the probe times out. Defaults

@@ -991,6 +1020,29 @@ spec:
 required:
 - port
 type: object
+terminationGracePeriodSeconds:
+description: Optional duration in seconds
+the pod needs to terminate gracefully
+upon probe failure. The grace period
+is the duration in seconds after the
+processes running in the pod are sent
+a termination signal and the time when
+the processes are forcibly halted with
+a kill signal. Set this value longer
+than the expected cleanup time for your
+process. If this value is nil, the pod's
+terminationGracePeriodSeconds will be
+used. Otherwise, this value overrides
+the value provided by the pod spec.
+Value must be non-negative integer.
+The value zero indicates stop immediately
+via the kill signal (no opportunity
+to shut down). This is a beta field
+and requires enabling ProbeTerminationGracePeriod
+feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds
+is used if unset.
+format: int64
+type: integer
 timeoutSeconds:
 description: 'Number of seconds after
 which the probe times out. Defaults

@@ -1002,7 +1054,7 @@ spec:
 resources:
 description: 'Compute Resources required by
 this container. Cannot be updated. More
-info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
+info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
 properties:
 limits:
 additionalProperties:

@@ -1013,7 +1065,7 @@ spec:
 x-kubernetes-int-or-string: true
 description: 'Limits describes the maximum
 amount of compute resources allowed.
-More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
+More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
 type: object
 requests:
 additionalProperties:

@@ -1027,12 +1079,14 @@ spec:
 If Requests is omitted for a container,
 it defaults to Limits if that is explicitly
 specified, otherwise to an implementation-defined
-value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
+value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
 type: object
 type: object
 securityContext:
-description: 'Security options the pod should
-run with. More info: https://kubernetes.io/docs/concepts/policy/security-context/
+description: 'SecurityContext defines the
+security options the container should be
+run with. If set, the fields of SecurityContext
+override the equivalent fields of PodSecurityContext.
 More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/'
 properties:
 allowPrivilegeEscalation:

@@ -1197,6 +1251,24 @@ spec:
 is the name of the GMSA credential
 spec to use.
 type: string
+hostProcess:
+description: HostProcess determines
+if a container should be run as
+a 'Host Process' container. This
+field is alpha-level and will only
+be honored by components that enable
+the WindowsHostProcessContainers
+feature flag. Setting this field
+without the feature flag will result
+in errors when validating the Pod.
+All of a Pod's containers must have
+the same effective HostProcess value
+(it is not allowed to have a mix
+of HostProcess containers and non-HostProcess
+containers). In addition, if HostProcess
+is true then HostNetwork must also
+be set to true.
+type: boolean
 runAsUserName:
 description: The UserName in Windows
 to run the entrypoint of the container

@@ -1346,6 +1418,29 @@ spec:
 required:
 - port
 type: object
+terminationGracePeriodSeconds:
+description: Optional duration in seconds
+the pod needs to terminate gracefully
+upon probe failure. The grace period
+is the duration in seconds after the
+processes running in the pod are sent
+a termination signal and the time when
+the processes are forcibly halted with
+a kill signal. Set this value longer
+than the expected cleanup time for your
+process. If this value is nil, the pod's
+terminationGracePeriodSeconds will be
+used. Otherwise, this value overrides
+the value provided by the pod spec.
+Value must be non-negative integer.
+The value zero indicates stop immediately
+via the kill signal (no opportunity
+to shut down). This is a beta field
+and requires enabling ProbeTerminationGracePeriod
+feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds
+is used if unset.
+format: int64
+type: integer
 timeoutSeconds:
 description: 'Number of seconds after
 which the probe times out. Defaults

File diff suppressed because one or more lines are too long
design/Implemented/velero-debug.md — new file (122 lines)

@@ -0,0 +1,122 @@
# `velero debug` command for gathering troubleshooting information

## Abstract
To simplify the communication between Velero users and developers, this document proposes the `velero debug` command to generate a tarball including the logs needed for debugging.

Github issue: https://github.com/vmware-tanzu/velero/issues/675

## Background
Gathering information to troubleshoot a Velero deployment is currently spread across multiple commands, and is not very efficient. Logs for the Velero server itself are accessed via a `kubectl logs` command, while information on specific backups or restores is accessed via a Velero subcommand. Restic logs are even more complicated to retrieve, since one must gather logs for every instance of the daemonset, and there’s currently no good mechanism to locate which node a particular restic backup ran against.
A dedicated subcommand can lower this effort and reduce the back-and-forth between user and developer when collecting logs.

## Goals
- Enable efficient log collection for Velero and associated components, like plugins and restic.

## Non Goals
- Collecting logs for components that do not belong to Velero, such as the storage service.
- Automated log analysis.

## High-Level Design
With the introduction of the new command `velero debug`, the command would download all of the following information:
- velero deployment logs
- restic DaemonSet logs
- plugin logs
- all the resources in the group `velero.io` that have been created, such as:
  - Backup
  - Restore
  - BackupStorageLocation
  - PodVolumeBackup
  - PodVolumeRestore
  - *etc ...*
- the log of the backup and restore, if specified in the params

A project called `crash-diagnostics` (or `crashd`, https://github.com/vmware-tanzu/crash-diagnostics) implements the Kubernetes API queries and provides a Starlark scripting language to abstract the details, collecting the information into a local copy. It can be used as a standalone CLI executing a Starlark script file.
With the file-embedding capability introduced in Go 1.16, we can define a Starlark script that gathers the necessary information, embed the script at build time, and then have the `velero debug` command invoke `crashd`, passing in the script’s text contents.
## Detailed Design
|
||||
### Triggering the script
|
||||
The Starlark script to be called by crashd:
|
||||
|
||||
```python
|
||||
def capture_backup_logs(cmd, namespace):
|
||||
if args.backup:
|
||||
log("Collecting log and information for backup: {}".format(args.backup))
|
||||
backupDescCmd = "{} --namespace={} backup describe {} --details".format(cmd, namespace, args.backup)
|
||||
capture_local(cmd=backupDescCmd, file_name="backup_describe_{}.txt".format(args.backup))
|
||||
backupLogsCmd = "{} --namespace={} backup logs {}".format(cmd, namespace, args.backup)
|
||||
capture_local(cmd=backupLogsCmd, file_name="backup_{}.log".format(args.backup))
|
||||
def capture_restore_logs(cmd, namespace):
|
||||
if args.restore:
|
||||
log("Collecting log and information for restore: {}".format(args.restore))
|
||||
restoreDescCmd = "{} --namespace={} restore describe {} --details".format(cmd, namespace, args.restore)
|
||||
capture_local(cmd=restoreDescCmd, file_name="restore_describe_{}.txt".format(args.restore))
|
||||
restoreLogsCmd = "{} --namespace={} restore logs {}".format(cmd, namespace, args.restore)
|
||||
capture_local(cmd=restoreLogsCmd, file_name="restore_{}.log".format(args.restore))
|
||||
|
||||
ns = args.namespace if args.namespace else "velero"
|
||||
output = args.output if args.output else "bundle.tar.gz"
|
||||
cmd = args.cmd if args.cmd else "velero"
|
||||
# Working dir for writing during script execution
|
||||
crshd = crashd_config(workdir="./velero-bundle")
|
||||
set_defaults(kube_config(path=args.kubeconfig, cluster_context=args.kubecontext))
|
||||
log("Collecting velero resources in namespace: {}". format(ns))
|
||||
kube_capture(what="objects", namespaces=[ns], groups=['velero.io'])
|
||||
capture_local(cmd="{} version -n {}".format(cmd, ns), file_name="version.txt")
|
||||
log("Collecting velero deployment logs in namespace: {}". format(ns))
|
||||
kube_capture(what="logs", namespaces=[ns])
|
||||
capture_backup_logs(cmd, ns)
|
||||
capture_restore_logs(cmd, ns)
|
||||
archive(output_file=output, source_paths=[crshd.workdir])
|
||||
log("Generated debug information bundle: {}".format(output))
|
||||
```
|
||||
The sample command to trigger the script via crashd:
|
||||
```shell
|
||||
./crashd run ./velero.cshd --args
|
||||
'backup=harbor-backup-2nd,namespace=velero,basedir=,restore=,kubeconfig=/home/.kube/minikube-250-224/config,output='
|
||||
```
|
||||
To trigger the script in `velero debug`, in the package `pkg/cmd/cli/debug` a struct `option` will be introduced
|
||||
```go
|
||||
type option struct {
|
||||
// currCmd the velero command
|
||||
currCmd string
|
||||
// workdir for crashd will be $baseDir/velero-debug
|
||||
baseDir string
|
||||
// the namespace where velero server is installed
|
||||
namespace string
|
||||
// the absolute path for the log bundle to be generated
|
||||
outputPath string
|
||||
// the absolute path for the kubeconfig file that will be read by crashd for calling K8S API
|
||||
kubeconfigPath string
|
||||
// the kubecontext to be used for calling K8S API
|
||||
kubeContext string
|
||||
// optional, the name of the backup resource whose log will be packaged into the debug bundle
|
||||
backup string
|
||||
// optional, the name of the restore resource whose log will be packaged into the debug bundle
|
||||
restore string
|
||||
// optional, it controls whether to print the debug log messages when calling crashd
|
||||
verbose bool
|
||||
}
|
||||
```
|
||||
The code will consolidate the input parameters and execution context of the `velero` CLI to form the option struct, which can be transformed into the `argsMap` that can be used when calling the func `exec.Execute` in `crashd`:
|
||||
https://github.com/vmware-tanzu/crash-diagnostics/blob/v0.3.4/exec/executor.go#L17
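The option-to-args translation above can be sketched as follows. This is a minimal illustration, not crashd's actual `ArgMap` type: the field subset and the `asArgsMap` helper are hypothetical, chosen to mirror the `option` struct and the `args.X` names the Starlark script reads.

```go
package main

import "fmt"

// option mirrors a subset of the struct described above.
type option struct {
	namespace  string
	backup     string
	restore    string
	outputPath string
}

// asArgsMap flattens the CLI options into a string->string map of the kind a
// crashd script reads through its global `args` object. Empty values are still
// included so the script's `args.X if args.X else ...` defaults can fire.
func (o option) asArgsMap() map[string]string {
	return map[string]string{
		"namespace": o.namespace,
		"backup":    o.backup,
		"restore":   o.restore,
		"output":    o.outputPath,
	}
}

func main() {
	o := option{namespace: "velero", backup: "harbor-backup-2nd"}
	fmt.Println(o.asArgsMap()["namespace"])
}
```

The empty-string entries are deliberate: the script, not the Go caller, decides the defaults, which keeps the CLI and the embedded script loosely coupled.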

## Alternatives Considered

The collection could be done via the Kubernetes client-go API, but such an integration is not trivial to implement; therefore, `crashd` is the preferred approach.

## Security Considerations

- The Starlark script will be embedded into the velero binary, and the byte slice will be passed to the `exec.Execute` func directly, so there's little risk that the script will be modified before being executed.

## Compatibility

As the `crashd` project evolves, the behavior of the internal functions used in the Starlark script may change. We'll ensure the correctness of the script via regular E2E tests.

## Implementation

1. Bump up to Go v1.16 to compile velero
2. Embed the Starlark script
3. Implement the `velero debug` sub-command to call the script
4. Add E2E test cases

## Open Questions

- **Command dependencies:** In the Starlark script, for collecting version info and backup logs, it calls `velero backup logs` and `velero version`, which makes the call stack velero debug -> crashd -> velero xxx. We need to make sure this works under different PATH settings.
- **Progress and error handling:** The log collection may take a relatively long time, so log messages should be printed to indicate progress as different items are downloaded and packaged. Additionally, `crashd` may omit some errors when they happen, so before the script is executed we'll do some validation and make sure the `debug` command fails early if some parameters are incorrect.
219	design/graph-manifest.md	Normal file
@@ -0,0 +1,219 @@
# Object Graph Manifest for Velero

## Abstract

Currently, Velero does not have a complete manifest of everything in a backup, aside from the backup tarball itself.
This change introduces a new data structure, stored with a backup in object storage, which will allow more efficient reporting of what a backup contains.
Additionally, this manifest should enable advancements in Velero's features and architecture, enabling dry-run support, concurrent backup and restore operations, and reliable restoration of complex applications.
## Background

Right now, Velero backs up items one at a time, sorted by API group and namespace.
It also restores items one at a time, using the `restoreResourcePriorities` flag to indicate the order in which API groups should have their objects restored.
While this works today, it presents challenges for more complex applications whose dependencies form a graph rather than a strictly linear chain.

For example, Cluster API clusters are a set of complex Kubernetes objects that require that the "root" objects are restored first, before their "leaf" objects.
If a Cluster that a ClusterResourceSetBinding refers to does not exist, then a restore of the CAPI cluster will fail.

Additionally, Velero does not have a reliable way to communicate what objects will be affected by a backup or restore operation without actually performing the operation.
This complicates dry-run tasks, because a user must simply perform the action without knowing what will be touched.
It also complicates allowing backups and restores to run in parallel, because there is currently no way to know whether a single Kubernetes object is included in multiple backups or restores, which could lead to unreliability, deadlocking, and race conditions were Velero made more concurrent today.
## Goals

- Introduce a manifest data structure that defines the contents of a backup.
- Store the manifest data in object storage alongside existing backup data.

## Non Goals

This proposal seeks to enable, but not define, the following.

- Implementing concurrency beyond what already exists in Velero.
- Implementing a dry-run feature.
- Implementing a new restore ordering procedure.

While the data structure should take these scenarios into account, they will not be implemented alongside it.

## High-Level Design

To uniquely identify a Kubernetes object within a cluster or backup, the following fields are sufficient:

- API group and version (example: backup.velero.io/v1)
- Namespace
- Name
- Labels

These criteria cover the majority of Velero's inclusion and exclusion logic.
However, some additional fields enable further use cases.

- Owners, which are other Kubernetes objects that have some relationship to this object. They may be strict or soft dependencies.
- Annotations, which provide extra metadata about the object that might be useful for other programs to consume.
- The UUID generated by Kubernetes. This is useful in defining Owner relationships, providing a single, immutable key to find an object. It is _not_ considered at restore time, only internally for defining links.

All of this information already exists within a Velero backup's tarball of resources, but extracting it is inefficient.
The entire tarball must be downloaded and extracted, and the JSON within parsed to read labels, owners, annotations, and a UUID.
The rest of the information is encoded in the file system structure within the Velero backup tarball.
While doable, this is heavyweight in terms of time and potentially memory.

Instead, this proposal suggests adding a new manifest structure that is kept alongside the backup tarball.
This structure would contain the above fields only, and could be used to perform inclusion/exclusion logic on a backup, select a resource from within a backup, and do set operations over backup or restore contents to identify overlapping resources.
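As a rough illustration of running inclusion/exclusion checks against the lightweight manifest instead of the full tarball, here is a minimal Go sketch. The trimmed `item` type and the `filter` helper are hypothetical simplifications, not the proposed API.

```go
package main

import "fmt"

// item carries only the selectable fields from the manifest (illustrative subset).
type item struct {
	Namespace string
	Name      string
	Labels    map[string]string
}

// filter returns the items that are in one of the included namespaces and match
// every key/value in the label selector — the same kind of checks Velero's
// include/exclude logic performs, but over the small manifest entries.
func filter(items []item, namespaces map[string]bool, selector map[string]string) []item {
	var out []item
	for _, it := range items {
		if len(namespaces) > 0 && !namespaces[it.Namespace] {
			continue
		}
		matched := true
		for k, v := range selector {
			if it.Labels[k] != v {
				matched = false
				break
			}
		}
		if matched {
			out = append(out, it)
		}
	}
	return out
}

func main() {
	items := []item{
		{Namespace: "app", Name: "web", Labels: map[string]string{"tier": "frontend"}},
		{Namespace: "db", Name: "pg", Labels: map[string]string{"tier": "backend"}},
	}
	got := filter(items, map[string]bool{"app": true}, map[string]string{"tier": "frontend"})
	fmt.Println(len(got), got[0].Name)
}
```

Because only the selectable fields are loaded, this kind of pass never has to download or untar the backup itself.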

Here are some use cases that this data structure should enable, which have been difficult to implement prior to its existence:

- A dry-run operation on backup, informing the user what would be selected if they were to perform the operation.
  A manifest could be created and saved, allowing a user to do a dry run, then accept it to perform the backup.
  Restore operations can be treated similarly.
- Efficient, non-overlapping parallelization of backup and restore operations.
  By building or reading a manifest before performing a backup or restore, Velero can determine if there are overlapping resources.
  If there are no overlaps, the operations can proceed in parallel.
  If there are overlaps, the operations can proceed serially.
- Graph-based restores for non-linear dependencies.
  Not all resources in a Kubernetes cluster can be defined in a strict, linear way.
  They may have multiple owners, and writing BackupItemActions or RestoreItemActions to simply return a chain of owners is not an efficient way to support the many Kubernetes operators/controllers being written.
  Instead, by having a manifest with enough information, Velero can build a discrete list that ensures dependencies are restored before their dependents, with less input from plugin authors.
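The owner-before-dependent ordering described above amounts to a topological sort over the Owners links. A minimal sketch, with a hypothetical `node` type standing in for manifest items (cycle detection omitted for brevity):

```go
package main

import "fmt"

type node struct {
	UID    string
	Owners []string // UUIDs of items that must be restored first
}

// restoreOrder returns the UIDs in an order where every item appears after all
// of its owners, via a depth-first topological sort over the owner links.
func restoreOrder(items map[string]node) []string {
	var order []string
	visited := map[string]bool{}
	var visit func(uid string)
	visit = func(uid string) {
		if visited[uid] {
			return
		}
		visited[uid] = true
		for _, o := range items[uid].Owners {
			visit(o) // restore owners before this item
		}
		order = append(order, uid)
	}
	for uid := range items {
		visit(uid)
	}
	return order
}

func main() {
	items := map[string]node{
		"cluster": {UID: "cluster"},
		"binding": {UID: "binding", Owners: []string{"cluster"}},
	}
	fmt.Println(restoreOrder(items))
}
```

A real implementation would also have to handle missing owners and cyclic references, but the core ordering is this simple once the UUID links exist in the manifest.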

## Detailed Design

The Manifest data structure would look like this, in Go:

```golang
// NamespacedItems maps a given namespace to all of its contained items.
type NamespacedItems map[string][]*Item

// KindNamespaces maps an API group/version to a map of namespaces and their items.
type KindNamespaces map[string]NamespacedItems

type Manifest struct {
	// Kinds holds the top-level map of all resources in a manifest.
	Kinds KindNamespaces

	// Index is used to look up an individual item quickly based on UUID.
	// This enables fetching owners out of the maps more efficiently at the cost of memory space.
	Index map[string]*Item
}

// Item represents a Kubernetes resource within a backup based on its selectable criteria.
// It is not the whole Kubernetes resource as retrieved from the API server, but rather a collection of important fields needed for filtering.
type Item struct {
	// Kubernetes API group which this Item belongs to.
	// Could be a core resource, or a CustomResourceDefinition.
	APIGroup string

	// Version of the APIGroup that the Item belongs to.
	APIVersion string

	// Kubernetes namespace which contains this Item.
	// Empty string for cluster-level resources.
	Namespace string

	// Item's given name.
	Name string

	// Map of labels that the Item had at backup time.
	Labels map[string]string

	// Map of annotations that the Item had at backup time.
	// Useful for plugins that may decide to process only Items with specific annotations.
	Annotations map[string]string

	// Owners is a list of UUIDs of other Items that own or refer to this Item.
	Owners []string

	// Manifest is a pointer to the Manifest in which this Item is contained.
	// Useful for getting access to things like the Manifest.Index map.
	Manifest *Manifest
}
```

In addition to the new types, the following Go interfaces would be provided for convenience.

```golang
type Itemer interface {
	// String returns the Item as a string, following the current Velero backup version 1.1.0 tarball structure format:
	// <APIGroup>/<Namespace>/<APIVersion>/<name>.json
	String() string

	// Owners returns a slice of realized Items that own or refer to the current Item.
	// Useful for building out a full graph of Items to restore.
	// Will use the UUIDs in Item.Owners to look up the owner Items in the Manifest.
	Owners() []*Item

	// Kind returns the Kind of an object, which is a combination of the APIGroup and APIVersion.
	// Useful for verifying the needed CustomResourceDefinition exists before actually restoring this Item.
	Kind() string

	// Children returns a slice of all Items that refer to this Item as an Owner.
	Children() []*Item
}

// This error type is being created in order to make reliable sentinel errors.
// See https://dave.cheney.net/2019/06/10/constant-time for more details.
type ManifestError string

func (e ManifestError) Error() string {
	return string(e)
}

const ItemAlreadyExists = ManifestError("item already exists in manifest")

type Manifester interface {
	// Set returns the entire list of resources as a set of strings (using Itemer.String).
	// This is useful for comparing two manifests and determining if they have any overlapping resources.
	// In the future, when implementing concurrent operations, this can be used as a sanity check to ensure resources aren't being backed up or restored by two operations at once.
	Set() sets.String

	// Add adds an Item to the appropriate APIGroup and Namespace within a Manifest.
	// Returns (true, nil) if the Item is successfully added to the Manifest.
	// Returns (false, ItemAlreadyExists) if the Item is already in the Manifest.
	Add(*Item) (bool, error)
}
```
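A minimal sketch of the overlap check that `Set()` is meant to enable, using a plain map keyed by the `Itemer.String()`-style path instead of `sets.String`. The names here are illustrative, not the proposed API.

```go
package main

import "fmt"

// manifest is a toy stand-in keyed by the tarball-style item path.
type manifest struct {
	items map[string]bool
}

// add reports false for a duplicate, mirroring the ItemAlreadyExists sentinel.
func (m *manifest) add(path string) bool {
	if m.items[path] {
		return false
	}
	m.items[path] = true
	return true
}

// overlaps reports whether two manifests share any resource — the check that
// would gate running two backup/restore operations concurrently.
func overlaps(a, b *manifest) bool {
	for p := range a.items {
		if b.items[p] {
			return true
		}
	}
	return false
}

func main() {
	backup := &manifest{items: map[string]bool{}}
	restore := &manifest{items: map[string]bool{}}
	backup.add("apps/ns1/v1/web.json")
	restore.add("apps/ns2/v1/db.json")
	fmt.Println(overlaps(backup, restore)) // disjoint manifests
}
```

If the intersection is empty the operations can run in parallel; otherwise they fall back to serial execution, exactly as described in the use cases above.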

### Serialization

The entire `Manifest` should be serialized into the `manifest.json` file within the object storage for a single backup.
It is possible that this file could also be compressed for space efficiency.
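A sketch of that serialization step under the assumption that gzip-compressed JSON is chosen; the trimmed `item` type and the helper names are illustrative, not the final API.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/json"
	"fmt"
	"io"
)

type item struct {
	APIGroup, Namespace, Name string
}

// writeManifest JSON-encodes the manifest items and gzips the result,
// producing the bytes that would be uploaded alongside the backup tarball.
func writeManifest(items []item) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if err := json.NewEncoder(zw).Encode(items); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil { // flushes the gzip footer
		return nil, err
	}
	return buf.Bytes(), nil
}

// readManifest reverses writeManifest.
func readManifest(data []byte) ([]item, error) {
	zr, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	raw, err := io.ReadAll(zr)
	if err != nil {
		return nil, err
	}
	var items []item
	err = json.Unmarshal(raw, &items)
	return items, err
}

func main() {
	data, _ := writeManifest([]item{{"velero.io", "ns1", "backup-1"}})
	items, _ := readManifest(data)
	fmt.Println(items[0].Name)
}
```

This mirrors how the existing `<backupname>-resource-list.json.gz` file is stored, so the upload path in the `persistence` package should need little new machinery.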

### Memory Concerns

Because the `Manifest` holds a minimal amount of data, memory size should not be a concern for most clusters.
TODO: Document known limits on API group name, resource name, and kind name character limits.

## Security Considerations

Introducing this manifest does not increase the attack surface of Velero, as this data is already present in existing backups.
Storing the manifest.json file next to the existing backup data in the object storage does not change access patterns.

## Compatibility

The introduction of this file should trigger Velero backup version 1.2.0, but it will not interfere with Velero versions that do not support the `Manifest`, as the file is additive.
In time, this file will replace the `<backupname>-resource-list.json.gz` file, but for compatibility the two will appear side by side.

When first implemented, Velero should simply build the `Manifest` as it backs up items, and serialize it at the end.
Any logic changes that rely on the `Manifest` file must be introduced with their own design document, with their own compatibility concerns.

## Implementation

The `Manifest` object will _not_ be implemented as a Kubernetes CustomResourceDefinition, but rather as one of Velero's own internal constructs.

Implementation for the data structure alone should be minimal - the types will need to be defined in a `manifest` package.
Then, the backup process should create a `Manifest`, passing it to the various `*Backuppers` in the `backup` package.
These methods will insert individual `Items` into the `Manifest`.
Finally, logic should be added to the `persistence` package to ensure that the new `manifest.json` file is uploadable and allowed.

## Alternatives Considered

None so far.

## Open Issues

- When should compatibility with the `<backupname>-resource-list.json.gz` file be dropped?
- What are some good test-case Kubernetes resources and controllers to try this out with?
  Cluster API seems like an obvious choice, but are there others?
- Since it is not implemented as a CustomResourceDefinition, how can a `Manifest` be retained so that users could issue a dry-run command, then perform their actual desired operation?
  Could it be stored in Velero's temp directories?
  Note that this makes Velero itself more stateful.
@@ -1,120 +0,0 @@
# `velero debug` command for gathering troubleshooting information

## Abstract

To simplify the communication between Velero users and developers, this document proposes the `velero debug` command to generate a tarball including the logs needed for debugging.

GitHub issue: https://github.com/vmware-tanzu/velero/issues/675

## Background

Gathering information to troubleshoot a Velero deployment is currently spread across multiple commands, and is not very efficient. Logs for the Velero server itself are accessed via a `kubectl logs` command, while information on specific backups or restores is accessed via a Velero subcommand. Restic logs are even more complicated to retrieve, since one must gather logs for every instance of the daemonset, and there's currently no good mechanism to locate which node a particular restic backup ran against.
A dedicated subcommand can lower this effort and reduce back-and-forth between user and developer for collecting the logs.

## Goals

- Enable efficient log collection for Velero and associated components, like plugins and restic.

## Non Goals

- Collecting logs for components that do not belong to Velero, such as storage services.
- Automated log analysis.

## High-Level Design

With the introduction of the new command `velero debug`, the command would download all of the following information:
- velero deployment logs
- restic DaemonSet logs
- Plugin logs - need clarification for the vSphere plugin; see open questions
- Resources and logs of the backup and restore, if specified in the params
  - Resources:
    - BackupStorageLocation
    - PodVolumeBackups
    - PodVolumeRestores

A project called `crash-diagnostics` (or `crashd`) (https://github.com/vmware-tanzu/crash-diagnostics) implements the Kubernetes API queries and provides the Starlark scripting language to abstract details, collecting the information into a local copy. It can be used as a standalone CLI executing a Starlark script file.
With the file-embedding capability introduced in Go 1.16, we can define a Starlark script gathering the necessary information, embed the script at build time, and have the `velero debug` command invoke `crashd`, passing in the script's text contents.

## Detailed Design

### Triggering the script

The Starlark script to be called by crashd:

```python
def capture_backup_logs():
    if args.backup:
        kube_capture(what="objects", kinds=['backups'], names=[args.backup])
        backupLogsCmd = "velero backup logs {}".format(args.backup)
        capture_local(cmd=backupLogsCmd)

def capture_restore_logs():
    if args.restore:
        kube_capture(what="objects", kinds=['restores'], names=[args.restore])
        restoreLogsCmd = "velero restore logs {}".format(args.restore)
        capture_local(cmd=restoreLogsCmd)

ns = args.namespace if args.namespace else "velero"
basedir = args.basedir if args.basedir else os.home
output = args.output if args.output else "bundle.tar.gz"

# Working dir for writing during script execution
crshd = crashd_config(workdir="{0}/velero-bundle".format(basedir))
set_defaults(kube_config(path=args.kubeconfig))

capture_local(cmd="velero version -n {}".format(ns))
capture_backup_logs()
capture_restore_logs()
kube_capture(what="logs", namespaces=[ns])
kube_capture(what="objects", namespaces=[ns], kinds=['backupstoragelocations', 'podvolumebackups', 'podvolumerestores'])
archive(output_file=output, source_paths=[crshd.workdir])
```

The sample command to trigger the script via crashd:
```shell
./crashd run ./velero.cshd --args 'backup=harbor-backup-2nd,namespace=velero,basedir=,restore=,kubeconfig=/home/.kube/minikube-250-224/config,output='
```

To trigger the script in `velero debug`, a struct `option` will be introduced in the package `pkg/cmd/cli/debug`:

```go
type option struct {
	// workdir for crashd will be $baseDir/tmp/crashd
	baseDir string
	// the namespace where the velero server is installed
	namespace string
	// the absolute path for the log bundle to be generated
	outputPath string
	// the absolute path for the kubeconfig file that will be read by crashd for calling the K8s API
	kubeconfigPath string
	// optional, the name of the backup resource whose log will be packaged into the debug bundle
	backup string
	// optional, the name of the restore resource whose log will be packaged into the debug bundle
	restore string
}
```

The code will consolidate the input parameters and execution context of the `velero` CLI to form the option struct, which can be transformed into the `args` string for `crashd`.

### kubeconfig

When it comes to accessing the K8s API, `crashd` has a limitation: it only accepts a path to a kubeconfig file, without allowing the `context` to be customized, and it does not honor environment variables such as `KUBECONFIG`. `velero` does honor the environment variables and allows the user to customize both the path to the kubeconfig and the `context`.
There are two ways to make crashd behave consistently with velero in terms of getting the kube configuration:
1. Modify crashd to make it honor the environment variable and allow the user to set the context while calling K8s APIs. This is the preferred approach and it does make `crashd` better, but it may take a longer time because we need to convince the maintainers of `crashd`, and double-check that the change will not break their current use cases.
   There are 2 issues opened:
   https://github.com/vmware-tanzu/crash-diagnostics/issues/208
   https://github.com/vmware-tanzu/crash-diagnostics/issues/122
   I'll try to contact the maintainers of `crashd` to see the feasibility for velero v1.7.
2. Before calling the `crashd` script, the velero CLI will use `client-go` to generate a temp `kubeconfig` file honoring the environment variables and global flags, and pass it to crashd. Although there's no permission elevation and the temp file will be removed, there's still some security concern because the temp file is accessible by other programs before it's deleted, or it may not be deleted if an error happens.

Therefore, we should consider option 1 the better choice, and see option 2 as the backup.
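Option 2 above could be sketched as follows. This is a minimal illustration under stated assumptions: the kubeconfig bytes would really come from client-go's merged loading rules (not shown), and the helper name is hypothetical. It writes the file with owner-only permissions and returns a cleanup func so the caller can defer removal even on error paths.

```go
package main

import (
	"fmt"
	"os"
)

// writeTempKubeconfig writes the resolved kubeconfig bytes to a file that only
// the current user can read (0600), returning the path and a cleanup func.
func writeTempKubeconfig(content []byte) (string, func(), error) {
	f, err := os.CreateTemp("", "velero-debug-kubeconfig-*")
	if err != nil {
		return "", nil, err
	}
	cleanup := func() { os.Remove(f.Name()) }
	// Tighten permissions before writing any secret material.
	if err := os.Chmod(f.Name(), 0o600); err != nil {
		f.Close()
		cleanup()
		return "", nil, err
	}
	if _, err := f.Write(content); err != nil {
		f.Close()
		cleanup()
		return "", nil, err
	}
	if err := f.Close(); err != nil {
		cleanup()
		return "", nil, err
	}
	return f.Name(), cleanup, nil
}

func main() {
	path, cleanup, err := writeTempKubeconfig([]byte("apiVersion: v1\nkind: Config\n"))
	if err != nil {
		panic(err)
	}
	defer cleanup()
	fmt.Println(path != "")
}
```

Even with 0600 permissions and deferred removal, the window in which the file exists on disk is the security concern noted above, which is why option 1 remains preferred.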

## Alternatives Considered

The collection could be done via the Kubernetes client-go API, but such an integration is not trivial to implement; therefore, `crashd` is the preferred approach.

## Security Considerations

- The current released version of `crashd` depends on `client-go v0.19.0`, which has a known CVE; we need to make sure that when it's compiled into velero it uses the version that has the CVE fixed. We should write a PR or push the crashd maintainers to fix CVE-2021-3121 in 0.19.0.
- The Starlark script will be embedded into the velero binary, so there's little risk that the script will be modified before being called.
- There may be minor security issues if we choose to create a temp `kubeconfig` file for `crashd` and remove it afterwards. If we have to choose this option, we need to review it with security experts to better understand the risks.

## Compatibility

As the `crashd` project evolves, the behavior of the internal functions used in the Starlark script may change. We'll ensure the correctness of the script via regular E2E tests.

## Implementation

1. Bump up to Go v1.16 to compile velero
2. Embed the Starlark script
3. Implement the `velero debug` sub-command to call the script
4. Add E2E test cases

## Open Questions

- **Log collection for the vSphere plugin:** Per the design of the vSphere plugin (https://github.com/vmware-tanzu/velero-plugin-for-vsphere#architecture), when a user backs up a resource on a guest cluster, code in a component on the supervisor cluster may be called. Per discussion, in v1.7 we will only support collecting logs of processes running in one K8s cluster. In terms of implementation, we will investigate the possibility of calling an extra script in crashd and ask the vSphere plugin developers to provide a script to do the log collection, but the details remain TBD.
- **Command dependencies:** In the Starlark script, for collecting version info and backup logs, it calls `velero backup logs` and `velero version`, which makes the call stack velero debug -> crashd -> velero xxx. We need to make sure this works under different PATH settings.
- **Progress and error handling:** The log collection may take a relatively long time, so log messages should be printed to indicate progress as different items are downloaded and packaged. Additionally, when an error happens, we need to double-check that it isn't omitted by crashd.
47	go.mod
@@ -4,45 +4,44 @@ go 1.16
 require (
 	github.com/Azure/azure-sdk-for-go v42.0.0+incompatible
-	github.com/Azure/go-autorest/autorest v0.11.1
-	github.com/Azure/go-autorest/autorest/azure/auth v0.4.2
+	github.com/Azure/go-autorest/autorest v0.11.21
+	github.com/Azure/go-autorest/autorest/azure/auth v0.5.8
 	github.com/Azure/go-autorest/autorest/to v0.3.0
 	github.com/Azure/go-autorest/autorest/validation v0.2.0 // indirect
 	github.com/aws/aws-sdk-go v1.28.2
 	github.com/docker/spdystream v0.0.0-20170912183627-bc6354cbbc29 // indirect
-	github.com/evanphx/json-patch v4.9.0+incompatible
-	github.com/fatih/color v1.10.0
+	github.com/evanphx/json-patch v4.11.0+incompatible
+	github.com/fatih/color v1.13.0
 	github.com/gobwas/glob v0.2.3
 	github.com/gofrs/uuid v3.2.0+incompatible
-	github.com/golang/protobuf v1.4.3
+	github.com/golang/protobuf v1.5.2
 	github.com/google/uuid v1.1.2
-	github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd
+	github.com/hashicorp/go-hclog v0.12.0
 	github.com/hashicorp/go-plugin v0.0.0-20190610192547-a1bc61569a26
 	github.com/joho/godotenv v1.3.0
 	github.com/kubernetes-csi/external-snapshotter/client/v4 v4.0.0
 	github.com/onsi/ginkgo v1.16.4
-	github.com/onsi/gomega v1.10.2
+	github.com/onsi/gomega v1.16.0
 	github.com/pkg/errors v0.9.1
-	github.com/prometheus/client_golang v1.7.1
+	github.com/prometheus/client_golang v1.11.0
 	github.com/robfig/cron v1.1.0
-	github.com/sirupsen/logrus v1.7.0
-	github.com/spf13/afero v1.2.2
-	github.com/spf13/cobra v1.1.1
+	github.com/sirupsen/logrus v1.8.1
+	github.com/spf13/afero v1.6.0
+	github.com/spf13/cobra v1.2.1
 	github.com/spf13/pflag v1.0.5
-	github.com/stretchr/testify v1.6.1
-	github.com/vmware-tanzu/crash-diagnostics v0.3.4
-	golang.org/x/mod v0.3.0
-	golang.org/x/net v0.0.0-20201110031124-69a78807bb2b
-	google.golang.org/grpc v1.31.0
-	k8s.io/api v0.20.9
-	k8s.io/apiextensions-apiserver v0.19.12
-	k8s.io/apimachinery v0.20.9
-	k8s.io/cli-runtime v0.20.9
-	k8s.io/client-go v0.20.9
+	github.com/stretchr/testify v1.7.0
+	github.com/vmware-tanzu/crash-diagnostics v0.3.7
+	golang.org/x/mod v0.4.2
+	golang.org/x/net v0.0.0-20210520170846-37e1c6afe023
+	google.golang.org/grpc v1.40.0
+	k8s.io/api v0.22.2
+	k8s.io/apiextensions-apiserver v0.22.2
+	k8s.io/apimachinery v0.22.2
+	k8s.io/cli-runtime v0.22.2
+	k8s.io/client-go v0.22.2
 	k8s.io/klog v1.0.0
 	k8s.io/kube-aggregator v0.19.12
-	sigs.k8s.io/cluster-api v0.3.11-0.20210106212952-b6c1b5b3db3d
-	sigs.k8s.io/controller-runtime v0.7.1-0.20201215171748-096b2e07c091
+	sigs.k8s.io/cluster-api v1.0.0
+	sigs.k8s.io/controller-runtime v0.10.2
 )

 replace github.com/gogo/protobuf => github.com/gogo/protobuf v1.3.2
|
||||
@@ -89,18 +89,31 @@ fi

# Since we're past the validation of the VELERO_VERSION, parse the version's individual components.
eval $(go run $DIR/chk_version.go)

printf "To clarify, you've provided a version string of $VELERO_VERSION.\n"
printf "Based on this, the following assumptions have been made: \n"

-[[ "$VELERO_PATCH" != 0 ]] && printf "*\t This is a patch release.\n"
+# $VELERO_PATCH gets populated by the chk_version.go script that parses and verifies the given version format
+# If we've got a patch release, we assume the tag is on a release branch.
+if [[ "$VELERO_PATCH" != 0 ]]; then
+    printf "*\t This is a patch release.\n"
+    ON_RELEASE_BRANCH=TRUE
+fi

# $VELERO_PRERELEASE gets populated by the chk_version.go script that parses and verifies the given version format
+# If we've got a GA release, we assume the tag is on a release branch.
# -n is "string is non-empty"
[[ -n $VELERO_PRERELEASE ]] && printf "*\t This is a pre-release.\n"

# -z is "string is empty"
-[[ -z $VELERO_PRERELEASE ]] && printf "*\t This is a GA release.\n"
+if [[ -z $VELERO_PRERELEASE ]]; then
+    printf "*\t This is a GA release.\n"
+    ON_RELEASE_BRANCH=TRUE
+fi
+
+if [[ "$ON_RELEASE_BRANCH" == "TRUE" ]]; then
+    release_branch_name=release-$VELERO_MAJOR.$VELERO_MINOR
+    printf "*\t The commit to tag is on branch: %s. Please make sure this branch has been created.\n" $release_branch_name
+fi

if [[ $publish == "TRUE" ]]; then
    echo "If this is all correct, press enter/return to proceed to TAG THE RELEASE and UPLOAD THE TAG TO GITHUB."
@@ -117,55 +130,29 @@ echo "Alright, let's go."

echo "Pulling down all git tags and branches before doing any work."
git fetch "$remote" --tags

-# $VELERO_PATCH gets populated by the chk_version.go script that parses and verifies the given version format
-# If we've got a patch release, we'll need to create a release branch for it.
-if [[ "$VELERO_PATCH" > 0 ]]; then
-    release_branch_name=release-$VELERO_MAJOR.$VELERO_MINOR
+if [[ -n $release_branch_name ]]; then
+    # Tag on release branch
    remote_release_branch_name="$remote/$release_branch_name"

    # Determine whether the local and remote release branches already exist
    local_branch=$(git branch | grep "$release_branch_name")
    remote_branch=$(git branch -r | grep "$remote_release_branch_name")

-    if [[ -n $remote_branch ]]; then
-        if [[ -z $local_branch ]]; then
+    if [[ -z $remote_branch ]]; then
+        echo "The branch $remote_release_branch_name must be created before you tag the release."
+        exit 1
+    fi
+    if [[ -z $local_branch ]]; then
        # Remote branch exists, but does not exist locally. Checkout and track the remote branch.
        git checkout --track "$remote_release_branch_name"
-        else
+    else
        # Checkout the local release branch and ensure it is up to date with the remote
        git checkout "$release_branch_name"
        git pull --set-upstream "$remote" "$release_branch_name"
-        fi
-    else
-        if [[ -z $local_branch ]]; then
-            # Neither the remote nor local release branch exists, create it
-            git checkout -b $release_branch_name
-        else
-            # The local branch exists so check it out.
-            git checkout $release_branch_name
-        fi
    fi

    echo "Now you'll need to cherry-pick any relevant git commits into this release branch."
    echo "Either pause this script with ctrl-z, or open a new terminal window and do the cherry-picking."
    if [[ $publish == "TRUE" ]]; then
        read -p "Press enter when you're done cherry-picking. THIS WILL MAKE A TAG AND PUSH THE BRANCH TO $remote"
    else
        read -p "Press enter when you're done cherry-picking."
    fi

    # TODO can/should we add a way to review the cherry-picked commits before the push?

    if [[ $publish == "TRUE" ]]; then
        echo "Pushing $release_branch_name to \"$remote\" remote"
        git push --set-upstream "$remote" $release_branch_name
    fi

    tag_and_push
else
    echo "Checking out $remote/main."
    git checkout "$remote"/main

    tag_and_push
fi
@@ -38,5 +38,10 @@ if [[ -n "${GOFLAGS:-}" ]]; then
    echo "GOFLAGS: ${GOFLAGS}"
fi

-go test -installsuffix "static" -short -timeout 60s "${TARGETS[@]}"
+# After bumping "sigs.k8s.io/controller-runtime" to v0.10.2, running this script via "make test" fails with
+# "panic: mkdir /.cache/kubebuilder-envtest: permission denied". "make test" runs inside a container as a user
+# and group that don't exist inside the container, so when the code (https://github.com/kubernetes-sigs/controller-runtime/blob/v0.10.2/pkg/internal/testing/addr/manager.go#L44)
+# tries to resolve the cache directory it gets "/" and then hits a permission error creating a directory under it.
+# Setting the cache directory via the environment variable "XDG_CACHE_HOME" works around this.
+XDG_CACHE_HOME=/tmp/ go test -installsuffix "static" -short -timeout 60s "${TARGETS[@]}"
echo "Success!"
@@ -29,6 +29,7 @@ import (
	"github.com/vmware-tanzu/velero/pkg/archive"
	"github.com/vmware-tanzu/velero/pkg/discovery"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	deleteactionitemv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
	"github.com/vmware-tanzu/velero/pkg/util/collections"
	"github.com/vmware-tanzu/velero/pkg/util/filesystem"
)

@@ -37,7 +38,7 @@ import (
type Context struct {
	Backup          *velerov1api.Backup
	BackupReader    io.Reader
-	Actions         []velero.DeleteItemAction
+	Actions         []deleteactionitemv2.DeleteItemAction
	Filesystem      filesystem.Interface
	Log             logrus.FieldLogger
	DiscoveryHelper discovery.Helper

@@ -163,7 +164,7 @@ func (ctx *Context) getApplicableActions(groupResource schema.GroupResource, nam

// resolvedAction is a DeleteItemAction decorated with resource/namespace include/exclude collections, as well as label selectors for easy comparison.
type resolvedAction struct {
-	velero.DeleteItemAction
+	deleteactionitemv2.DeleteItemAction

	resourceIncludesExcludes  *collections.IncludesExcludes
	namespaceIncludesExcludes *collections.IncludesExcludes

@@ -171,7 +172,7 @@ type resolvedAction struct {
}

// resolveActions resolves the AppliesTo ResourceSelectors of DeleteItemAction plugins against the Kubernetes discovery API for fully-qualified names.
-func resolveActions(actions []velero.DeleteItemAction, helper discovery.Helper) ([]resolvedAction, error) {
+func resolveActions(actions []deleteactionitemv2.DeleteItemAction, helper discovery.Helper) ([]resolvedAction, error) {
	var resolved []resolvedAction

	for _, action := range actions {
@@ -44,7 +44,8 @@ import (
	"github.com/vmware-tanzu/velero/pkg/discovery"
	velerov1client "github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/typed/velero/v1"
	"github.com/vmware-tanzu/velero/pkg/kuberesource"
-	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
+	volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
	"github.com/vmware-tanzu/velero/pkg/podexec"
	"github.com/vmware-tanzu/velero/pkg/restic"
	"github.com/vmware-tanzu/velero/pkg/util/collections"

@@ -61,7 +62,8 @@ const BackupFormatVersion = "1.1.0"
type Backupper interface {
	// Backup takes a backup using the specification in the velerov1api.Backup and writes backup and log data
	// to the given writers.
-	Backup(logger logrus.FieldLogger, backup *Request, backupFile io.Writer, actions []velero.BackupItemAction, volumeSnapshotterGetter VolumeSnapshotterGetter) error
+	Backup(logger logrus.FieldLogger, backup *Request, backupFile io.Writer,
+		actions []backupitemactionv2.BackupItemAction, volumeSnapshotterGetter VolumeSnapshotterGetter) error
}

// kubernetesBackupper implements Backupper.

@@ -77,7 +79,7 @@ type kubernetesBackupper struct {
}

type resolvedAction struct {
-	velero.BackupItemAction
+	backupitemactionv2.BackupItemAction

	resourceIncludesExcludes  *collections.IncludesExcludes
	namespaceIncludesExcludes *collections.IncludesExcludes

@@ -121,7 +123,7 @@ func NewKubernetesBackupper(
	}, nil
}

-func resolveActions(actions []velero.BackupItemAction, helper discovery.Helper) ([]resolvedAction, error) {
+func resolveActions(actions []backupitemactionv2.BackupItemAction, helper discovery.Helper) ([]resolvedAction, error) {
	var resolved []resolvedAction

	for _, action := range actions {

@@ -197,7 +199,7 @@ func getResourceHook(hookSpec velerov1api.BackupResourceHookSpec, discoveryHelpe
}

type VolumeSnapshotterGetter interface {
-	GetVolumeSnapshotter(name string) (velero.VolumeSnapshotter, error)
+	GetVolumeSnapshotter(name string) (volumesnapshotterv2.VolumeSnapshotter, error)
}

// Backup backs up the items specified in the Backup, placing them in a gzip-compressed tar file

@@ -205,7 +207,8 @@ type VolumeSnapshotterGetter interface {
// a complete backup failure is returned. Errors that constitute partial failures (i.e. failures to
// back up individual resources that don't prevent the backup from continuing to be processed) are logged
// to the backup log.
-func (kb *kubernetesBackupper) Backup(log logrus.FieldLogger, backupRequest *Request, backupFile io.Writer, actions []velero.BackupItemAction, volumeSnapshotterGetter VolumeSnapshotterGetter) error {
+func (kb *kubernetesBackupper) Backup(log logrus.FieldLogger, backupRequest *Request, backupFile io.Writer,
+	actions []backupitemactionv2.BackupItemAction, volumeSnapshotterGetter VolumeSnapshotterGetter) error {
	gzippedData := gzip.NewWriter(backupFile)
	defer gzippedData.Close()
@@ -47,6 +47,7 @@ import (
	"github.com/vmware-tanzu/velero/pkg/discovery"
	"github.com/vmware-tanzu/velero/pkg/kuberesource"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
	"github.com/vmware-tanzu/velero/pkg/restic"
	"github.com/vmware-tanzu/velero/pkg/test"
	testutil "github.com/vmware-tanzu/velero/pkg/test"

@@ -970,6 +971,30 @@ func TestBackupResourceCohabitation(t *testing.T) {
				"resources/deployments.apps/v1-preferredversion/namespaces/zoo/raz.json",
			},
		},
+		{
+			name:   "when deployments exist that are not in the cohabitating groups those are backed up along with apps/deployments",
+			backup: defaultBackup().Result(),
+			apiResources: []*test.APIResource{
+				test.VeleroDeployments(
+					builder.ForTestCR("Deployment", "foo", "bar").Result(),
+					builder.ForTestCR("Deployment", "zoo", "raz").Result(),
+				),
+				test.Deployments(
+					builder.ForDeployment("foo", "bar").Result(),
+					builder.ForDeployment("zoo", "raz").Result(),
+				),
+			},
+			want: []string{
+				"resources/deployments.apps/namespaces/foo/bar.json",
+				"resources/deployments.apps/namespaces/zoo/raz.json",
+				"resources/deployments.apps/v1-preferredversion/namespaces/foo/bar.json",
+				"resources/deployments.apps/v1-preferredversion/namespaces/zoo/raz.json",
+				"resources/deployments.velero.io/namespaces/foo/bar.json",
+				"resources/deployments.velero.io/namespaces/zoo/raz.json",
+				"resources/deployments.velero.io/v1-preferredversion/namespaces/foo/bar.json",
+				"resources/deployments.velero.io/v1-preferredversion/namespaces/zoo/raz.json",
+			},
+		},
	}

	for _, tc := range tests {

@@ -1307,7 +1332,7 @@ func TestBackupActionsRunForCorrectItems(t *testing.T) {
		h.addItems(t, resource)
	}

-	actions := []velero.BackupItemAction{}
+	actions := []backupitemactionv2.BackupItemAction{}
	for action := range tc.actions {
		actions = append(actions, action)
	}

@@ -1333,7 +1358,7 @@ func TestBackupWithInvalidActions(t *testing.T) {
		name         string
		backup       *velerov1.Backup
		apiResources []*test.APIResource
-		actions      []velero.BackupItemAction
+		actions      []backupitemactionv2.BackupItemAction
	}{
		{
			name: "action with invalid label selector results in an error",

@@ -1349,7 +1374,7 @@ func TestBackupWithInvalidActions(t *testing.T) {
				builder.ForPersistentVolume("baz").Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			new(recordResourcesAction).ForLabelSelector("=invalid-selector"),
		},
	},

@@ -1367,7 +1392,7 @@ func TestBackupWithInvalidActions(t *testing.T) {
				builder.ForPersistentVolume("baz").Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			&appliesToErrorAction{},
		},
	},

@@ -1429,7 +1454,7 @@ func TestBackupActionModifications(t *testing.T) {
		name         string
		backup       *velerov1.Backup
		apiResources []*test.APIResource
-		actions      []velero.BackupItemAction
+		actions      []backupitemactionv2.BackupItemAction
		want         map[string]unstructuredObject
	}{
		{

@@ -1440,7 +1465,7 @@ func TestBackupActionModifications(t *testing.T) {
				builder.ForPod("ns-1", "pod-1").Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			modifyingActionGetter(func(item *unstructured.Unstructured) {
				item.SetLabels(map[string]string{"updated": "true"})
			}),

@@ -1457,7 +1482,7 @@ func TestBackupActionModifications(t *testing.T) {
				builder.ForPod("ns-1", "pod-1").ObjectMeta(builder.WithLabels("should-be-removed", "true")).Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			modifyingActionGetter(func(item *unstructured.Unstructured) {
				item.SetLabels(nil)
			}),

@@ -1474,7 +1499,7 @@ func TestBackupActionModifications(t *testing.T) {
				builder.ForPod("ns-1", "pod-1").Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			modifyingActionGetter(func(item *unstructured.Unstructured) {
				item.Object["spec"].(map[string]interface{})["nodeName"] = "foo"
			}),

@@ -1492,7 +1517,7 @@ func TestBackupActionModifications(t *testing.T) {
				builder.ForPod("ns-1", "pod-1").Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			modifyingActionGetter(func(item *unstructured.Unstructured) {
				item.SetName(item.GetName() + "-updated")
				item.SetNamespace(item.GetNamespace() + "-updated")

@@ -1533,7 +1558,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
		name         string
		backup       *velerov1.Backup
		apiResources []*test.APIResource
-		actions      []velero.BackupItemAction
+		actions      []backupitemactionv2.BackupItemAction
		want         []string
	}{
		{

@@ -1546,7 +1571,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
				builder.ForPod("ns-3", "pod-3").Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			&pluggableAction{
				selector: velero.ResourceSelector{IncludedNamespaces: []string{"ns-1"}},
				executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {

@@ -1578,7 +1603,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
				builder.ForPod("ns-3", "pod-3").Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			&pluggableAction{
				executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
					additionalItems := []velero.ResourceIdentifier{

@@ -1608,7 +1633,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
				builder.ForPersistentVolume("pv-2").Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			&pluggableAction{
				executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
					additionalItems := []velero.ResourceIdentifier{

@@ -1641,7 +1666,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
				builder.ForPersistentVolume("pv-2").Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			&pluggableAction{
				executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
					additionalItems := []velero.ResourceIdentifier{

@@ -1671,7 +1696,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
				builder.ForPersistentVolume("pv-2").Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			&pluggableAction{
				executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
					additionalItems := []velero.ResourceIdentifier{

@@ -1702,7 +1727,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
				builder.ForPersistentVolume("pv-2").Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			&pluggableAction{
				executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
					additionalItems := []velero.ResourceIdentifier{

@@ -1732,7 +1757,7 @@ func TestBackupActionAdditionalItems(t *testing.T) {
				builder.ForPod("ns-3", "pod-3").Result(),
			),
		},
-		actions: []velero.BackupItemAction{
+		actions: []backupitemactionv2.BackupItemAction{
			&pluggableAction{
				selector: velero.ResourceSelector{IncludedNamespaces: []string{"ns-1"}},
				executeFunc: func(item runtime.Unstructured, backup *velerov1.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
@@ -39,7 +39,7 @@ import (
	"github.com/vmware-tanzu/velero/pkg/client"
	"github.com/vmware-tanzu/velero/pkg/discovery"
	"github.com/vmware-tanzu/velero/pkg/kuberesource"
-	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
	"github.com/vmware-tanzu/velero/pkg/restic"
	"github.com/vmware-tanzu/velero/pkg/util/boolptr"
	"github.com/vmware-tanzu/velero/pkg/volume"

@@ -56,7 +56,7 @@ type itemBackupper struct {
	volumeSnapshotterGetter VolumeSnapshotterGetter

	itemHookHandler                    hook.ItemHookHandler
-	snapshotLocationVolumeSnapshotters map[string]velero.VolumeSnapshotter
+	snapshotLocationVolumeSnapshotters map[string]volumesnapshotterv2.VolumeSnapshotter
}

// backupItem backs up an individual item to tarWriter. The item may be excluded based on the

@@ -367,7 +367,8 @@ func (ib *itemBackupper) executeActions(

// volumeSnapshotter instantiates and initializes a VolumeSnapshotter given a VolumeSnapshotLocation,
// or returns an existing one if one's already been initialized for the location.
-func (ib *itemBackupper) volumeSnapshotter(snapshotLocation *velerov1api.VolumeSnapshotLocation) (velero.VolumeSnapshotter, error) {
+func (ib *itemBackupper) volumeSnapshotter(snapshotLocation *velerov1api.VolumeSnapshotLocation) (
+	volumesnapshotterv2.VolumeSnapshotter, error) {
	if bs, ok := ib.snapshotLocationVolumeSnapshotters[snapshotLocation.Name]; ok {
		return bs, nil
	}

@@ -382,7 +383,7 @@ func (ib *itemBackupper) volumeSnapshotter(snapshotLocation *velerov1api.VolumeS
	}

	if ib.snapshotLocationVolumeSnapshotters == nil {
-		ib.snapshotLocationVolumeSnapshotters = make(map[string]velero.VolumeSnapshotter)
+		ib.snapshotLocationVolumeSnapshotters = make(map[string]volumesnapshotterv2.VolumeSnapshotter)
	}
	ib.snapshotLocationVolumeSnapshotters[snapshotLocation.Name] = bs

@@ -438,7 +439,7 @@ func (ib *itemBackupper) takePVSnapshot(obj runtime.Unstructured, log logrus.Fie

	var (
		volumeID, location string
-		volumeSnapshotter  velero.VolumeSnapshotter
+		volumeSnapshotter  volumesnapshotterv2.VolumeSnapshotter
	)

	for _, snapshotLocation := range ib.backupRequest.SnapshotLocations {
@@ -209,16 +209,18 @@ func (r *itemCollector) getResourceItems(log logrus.FieldLogger, gv schema.Group
	}

	if cohabitator, found := r.cohabitatingResources[resource.Name]; found {
-		if cohabitator.seen {
-			log.WithFields(
-				logrus.Fields{
-					"cohabitatingResource1": cohabitator.groupResource1.String(),
-					"cohabitatingResource2": cohabitator.groupResource2.String(),
-				},
-			).Infof("Skipping resource because it cohabitates and we've already processed it")
-			return nil, nil
+		if gv.Group == cohabitator.groupResource1.Group || gv.Group == cohabitator.groupResource2.Group {
+			if cohabitator.seen {
+				log.WithFields(
+					logrus.Fields{
+						"cohabitatingResource1": cohabitator.groupResource1.String(),
+						"cohabitatingResource2": cohabitator.groupResource2.String(),
+					},
+				).Infof("Skipping resource because it cohabitates and we've already processed it")
+				return nil, nil
+			}
+			cohabitator.seen = true
		}
-		cohabitator.seen = true
	}

	namespacesToList := getNamespacesToList(r.backupRequest.NamespaceIncludesExcludes)
77	pkg/builder/testcr_builder.go (new file)
@@ -0,0 +1,77 @@
/*
Copyright the Velero contributors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package builder

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)

// TestCRBuilder builds objects based on velero APIVersion CRDs.
type TestCRBuilder struct {
	object *TestCR
}

// ForTestCR is the constructor for a TestCRBuilder.
func ForTestCR(crdKind, ns, name string) *TestCRBuilder {
	return &TestCRBuilder{
		object: &TestCR{
			TypeMeta: metav1.TypeMeta{
				APIVersion: velerov1api.SchemeGroupVersion.String(),
				Kind:       crdKind,
			},
			ObjectMeta: metav1.ObjectMeta{
				Namespace: ns,
				Name:      name,
			},
		},
	}
}

// Result returns the built TestCR.
func (b *TestCRBuilder) Result() *TestCR {
	return b.object
}

// ObjectMeta applies functional options to the TestCR's ObjectMeta.
func (b *TestCRBuilder) ObjectMeta(opts ...ObjectMetaOpt) *TestCRBuilder {
	for _, opt := range opts {
		opt(b.object)
	}

	return b
}

type TestCR struct {
	metav1.TypeMeta `json:",inline"`

	// +optional
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// +optional
	Spec TestCRSpec `json:"spec,omitempty"`

	// +optional
	Status TestCRStatus `json:"status,omitempty"`
}

type TestCRSpec struct {
}

type TestCRStatus struct {
}
@@ -303,7 +303,8 @@ func newServer(f client.Factory, config serverConfig, logger *logrus.Logger) (*s
	corev1api.AddToScheme(scheme)

	mgr, err := ctrl.NewManager(clientConfig, ctrl.Options{
-		Scheme: scheme,
+		Scheme:    scheme,
+		Namespace: f.Namespace(),
	})
	if err != nil {
		cancelFunc()
@@ -48,7 +48,7 @@ import (
	persistencemocks "github.com/vmware-tanzu/velero/pkg/persistence/mocks"
	"github.com/vmware-tanzu/velero/pkg/plugin/clientmgmt"
	pluginmocks "github.com/vmware-tanzu/velero/pkg/plugin/mocks"
-	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
	velerotest "github.com/vmware-tanzu/velero/pkg/test"
	"github.com/vmware-tanzu/velero/pkg/util/boolptr"
	"github.com/vmware-tanzu/velero/pkg/util/logging"

@@ -58,7 +58,7 @@ type fakeBackupper struct {
	mock.Mock
}

-func (b *fakeBackupper) Backup(logger logrus.FieldLogger, backup *pkgbackup.Request, backupFile io.Writer, actions []velero.BackupItemAction, volumeSnapshotterGetter pkgbackup.VolumeSnapshotterGetter) error {
+func (b *fakeBackupper) Backup(logger logrus.FieldLogger, backup *pkgbackup.Request, backupFile io.Writer, actions []backupitemactionv2.BackupItemAction, volumeSnapshotterGetter pkgbackup.VolumeSnapshotterGetter) error {
	args := b.Called(logger, backup, backupFile, actions, volumeSnapshotterGetter)
	return args.Error(0)
}

@@ -825,7 +825,7 @@ func TestProcessBackupCompletions(t *testing.T) {

	pluginManager.On("GetBackupItemActions").Return(nil, nil)
	pluginManager.On("CleanupClients").Return(nil)
-	backupper.On("Backup", mock.Anything, mock.Anything, mock.Anything, []velero.BackupItemAction(nil), pluginManager).Return(nil)
+	backupper.On("Backup", mock.Anything, mock.Anything, mock.Anything, []backupitemactionv2.BackupItemAction(nil), pluginManager).Return(nil)
	backupStore.On("BackupExists", test.backupLocation.Spec.StorageType.ObjectStorage.Bucket, test.backup.Name).Return(test.backupExists, test.existenceCheckError)

	// Ensure we have a CompletionTimestamp when uploading and that the backup name matches the backup in the object store.
@@ -46,7 +46,7 @@ import (
	"github.com/vmware-tanzu/velero/pkg/metrics"
	"github.com/vmware-tanzu/velero/pkg/persistence"
	"github.com/vmware-tanzu/velero/pkg/plugin/clientmgmt"
-	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	volumesnapshotter "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
	"github.com/vmware-tanzu/velero/pkg/restic"
	"github.com/vmware-tanzu/velero/pkg/util/filesystem"
	"github.com/vmware-tanzu/velero/pkg/util/kube"

@@ -333,7 +333,7 @@ func (c *backupDeletionController) processRequest(req *velerov1api.DeleteBackupR
	if snapshots, err := backupStore.GetBackupVolumeSnapshots(backup.Name); err != nil {
		errs = append(errs, errors.Wrap(err, "error getting backup's volume snapshots").Error())
	} else {
-		volumeSnapshotters := make(map[string]velero.VolumeSnapshotter)
+		volumeSnapshotters := make(map[string]volumesnapshotter.VolumeSnapshotter)

		for _, snapshot := range snapshots {
			log.WithField("providerSnapshotID", snapshot.Status.ProviderSnapshotID).Info("Removing snapshot associated with backup")

@@ -433,7 +433,7 @@ func volumeSnapshotterForSnapshotLocation(
	namespace, snapshotLocationName string,
	snapshotLocationLister velerov1listers.VolumeSnapshotLocationLister,
	pluginManager clientmgmt.Manager,
-) (velero.VolumeSnapshotter, error) {
+) (volumesnapshotter.VolumeSnapshotter, error) {
	snapshotLocation, err := snapshotLocationLister.VolumeSnapshotLocations(namespace).Get(snapshotLocationName)
	if err != nil {
		return nil, errors.Wrapf(err, "error getting volume snapshot location %s", snapshotLocationName)
@@ -45,7 +45,7 @@ import (
	persistencemocks "github.com/vmware-tanzu/velero/pkg/persistence/mocks"
	"github.com/vmware-tanzu/velero/pkg/plugin/clientmgmt"
	pluginmocks "github.com/vmware-tanzu/velero/pkg/plugin/mocks"
-	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero/mocks"
	velerotest "github.com/vmware-tanzu/velero/pkg/test"
	"github.com/vmware-tanzu/velero/pkg/volume"

@@ -802,7 +802,7 @@ func TestBackupDeletionControllerProcessRequest(t *testing.T) {

	pluginManager := &pluginmocks.Manager{}
	pluginManager.On("GetVolumeSnapshotter", "provider-1").Return(td.volumeSnapshotter, nil)
-	pluginManager.On("GetDeleteItemActions").Return([]velero.DeleteItemAction{}, nil)
+	pluginManager.On("GetDeleteItemActions").Return([]deleteitemactionv2.DeleteItemAction{}, nil)
	pluginManager.On("CleanupClients")
	td.controller.newPluginManager = func(logrus.FieldLogger) clientmgmt.Manager { return pluginManager }

@@ -932,7 +932,7 @@ func TestBackupDeletionControllerProcessRequest(t *testing.T) {

	pluginManager := &pluginmocks.Manager{}
	pluginManager.On("GetVolumeSnapshotter", "provider-1").Return(td.volumeSnapshotter, nil)
-	pluginManager.On("GetDeleteItemActions").Return([]velero.DeleteItemAction{new(mocks.DeleteItemAction)}, nil)
+	pluginManager.On("GetDeleteItemActions").Return([]deleteitemactionv2.DeleteItemAction{new(mocks.DeleteItemAction)}, nil)
	pluginManager.On("CleanupClients")
	td.controller.newPluginManager = func(logrus.FieldLogger) clientmgmt.Manager { return pluginManager }
@@ -312,7 +312,7 @@ func (c *backupSyncController) run() {
	c.deleteOrphanedBackups(location.Name, backupStoreBackups, log)

	// update the location's last-synced time field
-	statusPatch := client.MergeFrom(location.DeepCopyObject())
+	statusPatch := client.MergeFrom(location.DeepCopy())
	location.Status.LastSyncedTime = &metav1.Time{Time: time.Now().UTC()}
	if err := c.kbClient.Status().Patch(context.Background(), &location, statusPatch); err != nil {
		log.WithError(errors.WithStack(err)).Error("Error patching backup location's last-synced time")
@@ -276,11 +276,11 @@ func (c *scheduleController) submitBackupIfDue(item *api.Schedule, cronSchedule
}

func getNextRunTime(schedule *api.Schedule, cronSchedule cron.Schedule, asOf time.Time) (bool, time.Time) {
	// get the latest run time (if the schedule hasn't run yet, this will be the zero value which will trigger
	// an immediate backup)
	var lastBackupTime time.Time
	if schedule.Status.LastBackup != nil {
		lastBackupTime = schedule.Status.LastBackup.Time
	} else {
		lastBackupTime = schedule.CreationTimestamp.Time
	}

	nextRunTime := cronSchedule.Next(lastBackupTime)

@@ -274,7 +274,7 @@ func TestGetNextRunTime(t *testing.T) {
	{
		name:                      "first run",
		schedule:                  defaultSchedule(),
		expectedDue:               true,
		expectedDue:               false,
		expectedNextRunTimeOffset: "5m",
	},
	{
@@ -319,6 +319,9 @@ func TestGetNextRunTime(t *testing.T) {
	require.NoError(t, err, "unable to parse test.lastRanOffset: %v", err)

	test.schedule.Status.LastBackup = &metav1.Time{Time: testClock.Now().Add(-offsetDuration)}
	test.schedule.CreationTimestamp = *test.schedule.Status.LastBackup
} else {
	test.schedule.CreationTimestamp = metav1.Time{Time: testClock.Now()}
}

	nextRunTimeOffset, err := time.ParseDuration(test.expectedNextRunTimeOffset)
@@ -326,11 +329,11 @@ func TestGetNextRunTime(t *testing.T) {
	panic(err)
}

// calculate expected next run time (if the schedule hasn't run yet, this
// will be the zero value which will trigger an immediate backup)
var baseTime time.Time
if test.lastRanOffset != "" {
	baseTime = test.schedule.Status.LastBackup.Time
} else {
	baseTime = test.schedule.CreationTimestamp.Time
}
expectedNextRunTime := baseTime.Add(nextRunTimeOffset)
@@ -33,7 +33,7 @@ import (
	"github.com/vmware-tanzu/velero/internal/credentials"
	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	"github.com/vmware-tanzu/velero/pkg/generated/clientset/versioned/scheme"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
	objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
	"github.com/vmware-tanzu/velero/pkg/volume"
)

@@ -80,16 +80,16 @@ type BackupStore interface {
const DownloadURLTTL = 10 * time.Minute

type objectBackupStore struct {
	objectStore velero.ObjectStore
	objectStore objectstorev2.ObjectStore
	bucket      string
	layout      *ObjectStoreLayout
	logger      logrus.FieldLogger
}

// ObjectStoreGetter is a type that can get a velero.ObjectStore
// ObjectStoreGetter is a type that can get a objectstorev2.ObjectStore
// from a provider name.
type ObjectStoreGetter interface {
	GetObjectStore(provider string) (velero.ObjectStore, error)
	GetObjectStore(provider string) (objectstorev2.ObjectStore, error)
}

// ObjectBackupStoreGetter is a type that can get a velero.BackupStore for a
@@ -326,7 +326,7 @@ func (s *objectBackupStore) GetBackupVolumeSnapshots(name string) ([]*volume.Sna

// tryGet returns the object with the given key if it exists, nil if it does not exist,
// or an error if it was unable to check existence or get the object.
func tryGet(objectStore velero.ObjectStore, bucket, key string) (io.ReadCloser, error) {
func tryGet(objectStore objectstorev2.ObjectStore, bucket, key string) (io.ReadCloser, error) {
	exists, err := objectStore.ObjectExists(bucket, key)
	if err != nil {
		return nil, errors.WithStack(err)
@@ -494,7 +494,7 @@ func seekToBeginning(r io.Reader) error {
	return err
}

func seekAndPutObject(objectStore velero.ObjectStore, bucket, key string, file io.Reader) error {
func seekAndPutObject(objectStore objectstorev2.ObjectStore, bucket, key string, file io.Reader) error {
	if file == nil {
		return nil
	}

@@ -36,8 +36,8 @@ import (
	"github.com/vmware-tanzu/velero/internal/credentials"
	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	"github.com/vmware-tanzu/velero/pkg/builder"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
	providermocks "github.com/vmware-tanzu/velero/pkg/plugin/velero/mocks"
	objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
	velerotest "github.com/vmware-tanzu/velero/pkg/test"
	"github.com/vmware-tanzu/velero/pkg/util/encode"
	"github.com/vmware-tanzu/velero/pkg/volume"
@@ -595,9 +595,9 @@ func TestGetDownloadURL(t *testing.T) {
	}
}

type objectStoreGetter map[string]velero.ObjectStore
type objectStoreGetter map[string]objectstorev2.ObjectStore

func (osg objectStoreGetter) GetObjectStore(provider string) (velero.ObjectStore, error) {
func (osg objectStoreGetter) GetObjectStore(provider string) (objectstorev2.ObjectStore, error) {
	res, ok := osg[provider]
	if !ok {
		return nil, errors.New("object store not found")
@@ -1,5 +1,5 @@
/*
Copyright 2020 the Velero contributors.
Copyright 2021 the Velero contributors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -73,6 +73,12 @@ func (b *clientBuilder) clientConfig() *hcplugin.ClientConfig {
	string(framework.PluginKindPluginLister):      &framework.PluginListerPlugin{},
	string(framework.PluginKindRestoreItemAction): framework.NewRestoreItemActionPlugin(framework.ClientLogger(b.clientLogger)),
	string(framework.PluginKindDeleteItemAction):  framework.NewDeleteItemActionPlugin(framework.ClientLogger(b.clientLogger)),
	// Version 2
	string(framework.PluginKindBackupItemActionV2):  framework.NewBackupItemActionPlugin(framework.ClientLogger(b.clientLogger)),
	string(framework.PluginKindVolumeSnapshotterV2): framework.NewVolumeSnapshotterPlugin(framework.ClientLogger(b.clientLogger)),
	string(framework.PluginKindObjectStoreV2):       framework.NewObjectStorePlugin(framework.ClientLogger(b.clientLogger)),
	string(framework.PluginKindRestoreItemActionV2): framework.NewRestoreItemActionPlugin(framework.ClientLogger(b.clientLogger)),
	string(framework.PluginKindDeleteItemActionV2):  framework.NewDeleteItemActionPlugin(framework.ClientLogger(b.clientLogger)),
},
Logger: b.pluginLogger,
Cmd:    exec.Command(b.commandName, b.commandArgs...),
@@ -18,6 +18,7 @@ package clientmgmt

import (
	"fmt"
	"io"
	"log"

	hclog "github.com/hashicorp/go-hclog"
@@ -162,3 +163,37 @@ func (l *logrusAdapter) StandardLogger(opts *hclog.StandardLoggerOptions) *log.L
func (l *logrusAdapter) SetLevel(_ hclog.Level) {
	return
}

// ImpliedArgs returns With key/value pairs
func (l *logrusAdapter) ImpliedArgs() []interface{} {
	panic("not implemented")
}

// Args are alternating key, val pairs
// keys must be strings
// vals can be any type, but display is implementation specific
// Emit a message and key/value pairs at a provided log level
func (l *logrusAdapter) Log(level hclog.Level, msg string, args ...interface{}) {
	switch level {
	case hclog.Trace:
		l.Trace(msg, args...)
	case hclog.Debug:
		l.Debug(msg, args...)
	case hclog.Info:
		l.Info(msg, args...)
	case hclog.Warn:
		l.Warn(msg, args...)
	case hclog.Error:
		l.Error(msg, args...)
	}
}

// Returns the Name of the logger
func (l *logrusAdapter) Name() string {
	return l.name
}

// Return a value that conforms to io.Writer, which can be passed into log.SetOutput()
func (l *logrusAdapter) StandardWriter(opts *hclog.StandardLoggerOptions) io.Writer {
	panic("not implemented")
}
@@ -1,5 +1,5 @@
/*
Copyright 2020 the Velero contributors.
Copyright 2021 the Velero contributors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -17,40 +17,46 @@ limitations under the License.
package clientmgmt

import (
	"errors"
	"fmt"
	"strings"
	"sync"

	"github.com/sirupsen/logrus"

	"github.com/vmware-tanzu/velero/pkg/plugin/framework"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
	backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
	deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
	objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
	restoreitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
	volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
)

// Manager manages the lifecycles of plugins.
type Manager interface {
	// GetObjectStore returns the ObjectStore plugin for name.
	GetObjectStore(name string) (velero.ObjectStore, error)
	GetObjectStore(name string) (objectstorev2.ObjectStore, error)

	// GetVolumeSnapshotter returns the VolumeSnapshotter plugin for name.
	GetVolumeSnapshotter(name string) (velero.VolumeSnapshotter, error)
	GetVolumeSnapshotter(name string) (volumesnapshotterv2.VolumeSnapshotter, error)

	// GetBackupItemActions returns all backup item action plugins.
	GetBackupItemActions() ([]velero.BackupItemAction, error)
	GetBackupItemActions() ([]backupitemactionv2.BackupItemAction, error)

	// GetBackupItemAction returns the backup item action plugin for name.
	GetBackupItemAction(name string) (velero.BackupItemAction, error)
	GetBackupItemAction(name string) (backupitemactionv2.BackupItemAction, error)

	// GetRestoreItemActions returns all restore item action plugins.
	GetRestoreItemActions() ([]velero.RestoreItemAction, error)
	GetRestoreItemActions() ([]restoreitemactionv2.RestoreItemAction, error)

	// GetRestoreItemAction returns the restore item action plugin for name.
	GetRestoreItemAction(name string) (velero.RestoreItemAction, error)
	GetRestoreItemAction(name string) (restoreitemactionv2.RestoreItemAction, error)

	// GetDeleteItemActions returns all delete item action plugins.
	GetDeleteItemActions() ([]velero.DeleteItemAction, error)
	GetDeleteItemActions() ([]deleteitemactionv2.DeleteItemAction, error)

	// GetDeleteItemAction returns the delete item action plugin for name.
	GetDeleteItemAction(name string) (velero.DeleteItemAction, error)
	GetDeleteItemAction(name string) (deleteitemactionv2.DeleteItemAction, error)

	// CleanupClients terminates all of the Manager's running plugin processes.
	CleanupClients()
@@ -129,39 +135,82 @@ func (m *manager) getRestartableProcess(kind framework.PluginKind, name string)
	return restartableProcess, nil
}

// GetObjectStore returns a restartableObjectStore for name.
func (m *manager) GetObjectStore(name string) (velero.ObjectStore, error) {
	name = sanitizeName(name)
type RestartableObjectStore struct {
	kind framework.PluginKind
	// Get returns a restartable ObjectStore for the given name and process, wrapping if necessary
	Get func(name string, restartableProcess RestartableProcess) objectstorev2.ObjectStore
}

	restartableProcess, err := m.getRestartableProcess(framework.PluginKindObjectStore, name)
	if err != nil {
		return nil, err
func (m *manager) restartableObjectStores() []RestartableObjectStore {
	return []RestartableObjectStore{
		{
			kind: framework.PluginKindObjectStoreV2,
			Get:  newRestartableObjectStoreV2,
		},
		{
			kind: framework.PluginKindObjectStore,
			Get:  newAdaptedV1ObjectStore, // Adapt v1 plugin to v2
		},
	}
}

	r := newRestartableObjectStore(name, restartableProcess)
// GetObjectStore returns a restartableObjectStore for name.
func (m *manager) GetObjectStore(name string) (objectstorev2.ObjectStore, error) {
	name = sanitizeName(name)
	for _, restartableObjStore := range m.restartableObjectStores() {
		restartableProcess, err := m.getRestartableProcess(restartableObjStore.kind, name)
		if err != nil {
			// Check if plugin was not found
			if errors.Is(err, &pluginNotFoundError{}) {
				continue
			}
			return nil, err
		}
		return restartableObjStore.Get(name, restartableProcess), nil
	}
	return nil, fmt.Errorf("unable to get valid ObjectStore for %q", name)
}

	return r, nil
type RestartableVolumeSnapshotter struct {
	kind framework.PluginKind
	// Get returns a restartable VolumeSnapshotter for the given name and process, wrapping if necessary
	Get func(name string, restartableProcess RestartableProcess) volumesnapshotterv2.VolumeSnapshotter
}

func (m *manager) restartableVolumeSnapshotters() []RestartableVolumeSnapshotter {
	return []RestartableVolumeSnapshotter{
		{
			kind: framework.PluginKindVolumeSnapshotterV2,
			Get:  newRestartableVolumeSnapshotterV2,
		},
		{
			kind: framework.PluginKindVolumeSnapshotter,
			Get:  newAdaptedV1VolumeSnapshotter, // Adapt v1 plugin to v2
		},
	}
}

// GetVolumeSnapshotter returns a restartableVolumeSnapshotter for name.
func (m *manager) GetVolumeSnapshotter(name string) (velero.VolumeSnapshotter, error) {
func (m *manager) GetVolumeSnapshotter(name string) (volumesnapshotterv2.VolumeSnapshotter, error) {
	name = sanitizeName(name)

	restartableProcess, err := m.getRestartableProcess(framework.PluginKindVolumeSnapshotter, name)
	if err != nil {
		return nil, err
	for _, restartableVolumeSnapshotter := range m.restartableVolumeSnapshotters() {
		restartableProcess, err := m.getRestartableProcess(restartableVolumeSnapshotter.kind, name)
		if err != nil {
			// Check if plugin was not found
			if errors.Is(err, &pluginNotFoundError{}) {
				continue
			}
			return nil, err
		}
		return restartableVolumeSnapshotter.Get(name, restartableProcess), nil
	}

	r := newRestartableVolumeSnapshotter(name, restartableProcess)

	return r, nil
	return nil, fmt.Errorf("unable to get valid VolumeSnapshotter for %q", name)
}
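The lookup pattern introduced above (try the v2 plugin kind first, fall back to a v1 registration wrapped in an adapter, and only fail when neither exists) can be sketched generically. The types below are illustrative stand-ins, not the actual Velero framework API: `candidate` plays the role of `RestartableObjectStore`, and the `registered` map stands in for the plugin registry.

```go
package main

import "fmt"

// candidate pairs a plugin kind with a constructor, mirroring the
// RestartableObjectStore slice above: v2 is listed first, then the
// adapted v1 fallback.
type candidate struct {
	kind string
	get  func(name string) string // stand-in for the restartable wrapper
}

// lookup walks the candidates in order, skipping kinds that are not
// registered, analogous to the errors.Is(err, &pluginNotFoundError{})
// check in GetObjectStore above.
func lookup(registered map[string]bool, candidates []candidate, name string) (string, error) {
	for _, c := range candidates {
		if !registered[c.kind] {
			continue // this kind is not registered for name; try the next
		}
		return c.get(name), nil
	}
	return "", fmt.Errorf("unable to get valid plugin for %q", name)
}

func main() {
	candidates := []candidate{
		{kind: "ObjectStoreV2", get: func(n string) string { return "v2:" + n }},
		{kind: "ObjectStore", get: func(n string) string { return "adapted-v1:" + n }},
	}
	// Only a v1 registration exists, so the adapted wrapper is returned.
	got, _ := lookup(map[string]bool{"ObjectStore": true}, candidates, "velero.io/aws")
	fmt.Println(got) // adapted-v1:velero.io/aws
}
```

Listing the v2 kind first means a plugin binary that registers both versions is always served through its native v2 implementation, and the adapter is used only for older binaries.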
// GetBackupItemActions returns all backup item actions as restartableBackupItemActions.
func (m *manager) GetBackupItemActions() ([]velero.BackupItemAction, error) {
	list := m.registry.List(framework.PluginKindBackupItemAction)

	actions := make([]velero.BackupItemAction, 0, len(list))
func (m *manager) GetBackupItemActions() ([]backupitemactionv2.BackupItemAction, error) {
	list := m.registry.ListForKinds(framework.BackupItemActionKinds())
	actions := make([]backupitemactionv2.BackupItemAction, 0, len(list))

	for i := range list {
		id := list[i]
@@ -177,24 +226,47 @@ func (m *manager) GetBackupItemActions() ([]velero.BackupItemAction, error) {
	return actions, nil
}

// GetBackupItemAction returns a restartableBackupItemAction for name.
func (m *manager) GetBackupItemAction(name string) (velero.BackupItemAction, error) {
	name = sanitizeName(name)
type RestartableBackupItemAction struct {
	kind framework.PluginKind
	// Get returns a restartable BackupItemAction for the given name and process, wrapping if necessary
	Get func(name string, restartableProcess RestartableProcess) backupitemactionv2.BackupItemAction
}

	restartableProcess, err := m.getRestartableProcess(framework.PluginKindBackupItemAction, name)
	if err != nil {
		return nil, err
func (m *manager) restartableBackupItemActions() []RestartableBackupItemAction {
	return []RestartableBackupItemAction{
		{
			kind: framework.PluginKindBackupItemActionV2,
			Get:  newRestartableBackupItemActionV2,
		},
		{
			kind: framework.PluginKindBackupItemAction,
			Get:  newAdaptedV1BackupItemAction, // Adapt v1 plugin to v2
		},
	}
}

	r := newRestartableBackupItemAction(name, restartableProcess)
	return r, nil
// GetBackupItemAction returns a restartableBackupItemAction for name.
func (m *manager) GetBackupItemAction(name string) (backupitemactionv2.BackupItemAction, error) {
	name = sanitizeName(name)
	for _, restartableBackupItemAction := range m.restartableBackupItemActions() {
		restartableProcess, err := m.getRestartableProcess(restartableBackupItemAction.kind, name)
		if err != nil {
			// Check if plugin was not found
			if errors.Is(err, &pluginNotFoundError{}) {
				continue
			}
			return nil, err
		}
		return restartableBackupItemAction.Get(name, restartableProcess), nil
	}
	return nil, fmt.Errorf("unable to get valid BackupItemAction for %q", name)
}
// GetRestoreItemActions returns all restore item actions as restartableRestoreItemActions.
func (m *manager) GetRestoreItemActions() ([]velero.RestoreItemAction, error) {
	list := m.registry.List(framework.PluginKindRestoreItemAction)
func (m *manager) GetRestoreItemActions() ([]restoreitemactionv2.RestoreItemAction, error) {
	list := m.registry.ListForKinds(framework.RestoreItemActionKinds())

	actions := make([]velero.RestoreItemAction, 0, len(list))
	actions := make([]restoreitemactionv2.RestoreItemAction, 0, len(list))

	for i := range list {
		id := list[i]
@@ -210,24 +282,47 @@ func (m *manager) GetRestoreItemActions() ([]velero.RestoreItemAction, error) {
	return actions, nil
}

// GetRestoreItemAction returns a restartableRestoreItemAction for name.
func (m *manager) GetRestoreItemAction(name string) (velero.RestoreItemAction, error) {
	name = sanitizeName(name)
type RestartableRestoreItemAction struct {
	kind framework.PluginKind
	// Get returns a restartable RestoreItemAction for the given name and process, wrapping if necessary
	Get func(name string, restartableProcess RestartableProcess) restoreitemactionv2.RestoreItemAction
}

	restartableProcess, err := m.getRestartableProcess(framework.PluginKindRestoreItemAction, name)
	if err != nil {
		return nil, err
func (m *manager) restartableRestoreItemActions() []RestartableRestoreItemAction {
	return []RestartableRestoreItemAction{
		{
			kind: framework.PluginKindRestoreItemActionV2,
			Get:  newRestartableRestoreItemActionV2,
		},
		{
			kind: framework.PluginKindRestoreItemAction,
			Get:  newAdaptedV1RestoreItemAction, // Adapt v1 plugin to v2
		},
	}
}

	r := newRestartableRestoreItemAction(name, restartableProcess)
	return r, nil
// GetRestoreItemAction returns a restartableRestoreItemAction for name.
func (m *manager) GetRestoreItemAction(name string) (restoreitemactionv2.RestoreItemAction, error) {
	name = sanitizeName(name)
	for _, restartableRestoreItemAction := range m.restartableRestoreItemActions() {
		restartableProcess, err := m.getRestartableProcess(restartableRestoreItemAction.kind, name)
		if err != nil {
			// Check if plugin was not found
			if errors.Is(err, &pluginNotFoundError{}) {
				continue
			}
			return nil, err
		}
		return restartableRestoreItemAction.Get(name, restartableProcess), nil
	}
	return nil, fmt.Errorf("unable to get valid RestoreItemAction for %q", name)
}
// GetDeleteItemActions returns all delete item actions as restartableDeleteItemActions.
func (m *manager) GetDeleteItemActions() ([]velero.DeleteItemAction, error) {
	list := m.registry.List(framework.PluginKindDeleteItemAction)
func (m *manager) GetDeleteItemActions() ([]deleteitemactionv2.DeleteItemAction, error) {
	list := m.registry.ListForKinds(framework.DeleteItemActionKinds())

	actions := make([]velero.DeleteItemAction, 0, len(list))
	actions := make([]deleteitemactionv2.DeleteItemAction, 0, len(list))

	for i := range list {
		id := list[i]
@@ -243,17 +338,40 @@ func (m *manager) GetDeleteItemActions() ([]velero.DeleteItemAction, error) {
	return actions, nil
}

// GetDeleteItemAction returns a restartableDeleteItemAction for name.
func (m *manager) GetDeleteItemAction(name string) (velero.DeleteItemAction, error) {
	name = sanitizeName(name)
type RestartableDeleteItemAction struct {
	kind framework.PluginKind
	// Get returns a restartable DeleteItemAction for the given name and process, wrapping if necessary
	Get func(name string, restartableProcess RestartableProcess) deleteitemactionv2.DeleteItemAction
}

	restartableProcess, err := m.getRestartableProcess(framework.PluginKindDeleteItemAction, name)
	if err != nil {
		return nil, err
func (m *manager) restartableDeleteItemActions() []RestartableDeleteItemAction {
	return []RestartableDeleteItemAction{
		{
			kind: framework.PluginKindDeleteItemActionV2,
			Get:  newRestartableDeleteItemActionV2,
		},
		{
			kind: framework.PluginKindDeleteItemAction,
			Get:  newAdaptedV1DeleteItemAction, // Adapt v1 plugin to v2
		},
	}
}

	r := newRestartableDeleteItemAction(name, restartableProcess)
	return r, nil
// GetDeleteItemAction returns a restartableDeleteItemAction for name.
func (m *manager) GetDeleteItemAction(name string) (deleteitemactionv2.DeleteItemAction, error) {
	name = sanitizeName(name)
	for _, restartableDeleteItemAction := range m.restartableDeleteItemActions() {
		restartableProcess, err := m.getRestartableProcess(restartableDeleteItemAction.kind, name)
		if err != nil {
			// Check if plugin was not found
			if errors.Is(err, &pluginNotFoundError{}) {
				continue
			}
			return nil, err
		}
		return restartableDeleteItemAction.Get(name, restartableProcess), nil
	}
	return nil, fmt.Errorf("unable to get valid DeleteItemAction for %q", name)
}

// sanitizeName adds "velero.io" to legacy plugins that weren't namespaced.
@@ -34,6 +34,8 @@ type Registry interface {
	DiscoverPlugins() error
	// List returns all PluginIdentifiers for kind.
	List(kind framework.PluginKind) []framework.PluginIdentifier
	// ListForKinds returns all PluginIdentifiers for a list of kinds.
	ListForKinds(kinds []framework.PluginKind) (list []framework.PluginIdentifier)
	// Get returns the PluginIdentifier for kind and name.
	Get(kind framework.PluginKind, name string) (framework.PluginIdentifier, error)
}
@@ -108,6 +110,13 @@ func (r *registry) discoverPlugins(commands []string) error {
	return nil
}

// ListForKinds returns all PluginIdentifiers for a list of kinds.
func (r *registry) ListForKinds(kinds []framework.PluginKind) (list []framework.PluginIdentifier) {
	for _, kind := range kinds {
		list = append(list, r.pluginsByKind[kind]...)
	}
	return
}

// List returns info about all plugin binaries that implement the given
// PluginKind.
func (r *registry) List(kind framework.PluginKind) []framework.PluginIdentifier {
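`ListForKinds` above is a plain concatenation of the per-kind registrations, preserving the order of the kinds slice; that ordering is what lets callers like `GetBackupItemActions` see v2 registrations ahead of v1 ones. A minimal standalone sketch (the map and the struct here are illustrative, not the registry's real fields):

```go
package main

import "fmt"

type PluginKind string

type PluginIdentifier struct {
	Kind PluginKind
	Name string
}

// listForKinds mirrors the registry method above: append the identifiers
// registered under each kind, in the order the kinds are given.
func listForKinds(byKind map[PluginKind][]PluginIdentifier, kinds []PluginKind) (list []PluginIdentifier) {
	for _, kind := range kinds {
		list = append(list, byKind[kind]...)
	}
	return
}

func main() {
	byKind := map[PluginKind][]PluginIdentifier{
		"BackupItemAction":   {{Kind: "BackupItemAction", Name: "velero.io/pv"}},
		"BackupItemActionV2": {{Kind: "BackupItemActionV2", Name: "example.io/bia2"}},
	}
	// Asking for V2 first yields the v2 identifier ahead of the v1 one.
	list := listForKinds(byKind, []PluginKind{"BackupItemActionV2", "BackupItemAction"})
	fmt.Println(len(list), list[0].Name) // 2 example.io/bia2
}
```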
@@ -0,0 +1,105 @@
/*
Copyright 2021 the Velero contributors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

	http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package clientmgmt

import (
	"context"

	"github.com/pkg/errors"
	"k8s.io/apimachinery/pkg/runtime"

	api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	"github.com/vmware-tanzu/velero/pkg/plugin/framework"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
	backupitemactionv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v1"
	backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
)

type restartableAdaptedV1BackupItemAction struct {
	key                 kindAndName
	sharedPluginProcess RestartableProcess
}

// newAdaptedV1BackupItemAction returns a new restartableAdaptedV1BackupItemAction.
func newAdaptedV1BackupItemAction(
	name string, sharedPluginProcess RestartableProcess) backupitemactionv2.BackupItemAction {
	r := &restartableAdaptedV1BackupItemAction{
		key:                 kindAndName{kind: framework.PluginKindBackupItemAction, name: name},
		sharedPluginProcess: sharedPluginProcess,
	}
	return r
}

// getBackupItemAction returns the backup item action for this restartableAdaptedV1BackupItemAction.
// It does *not* restart the plugin process.
func (r *restartableAdaptedV1BackupItemAction) getBackupItemAction() (backupitemactionv1.BackupItemAction, error) {
	plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
	if err != nil {
		return nil, err
	}

	backupItemAction, ok := plugin.(backupitemactionv1.BackupItemAction)
	if !ok {
		return nil, errors.Errorf("%T is not a BackupItemAction!", plugin)
	}

	return backupItemAction, nil
}

// getDelegate restarts the plugin process (if needed) and returns the backup item
// action for this restartableAdaptedV1BackupItemAction.
func (r *restartableAdaptedV1BackupItemAction) getDelegate() (backupitemactionv1.BackupItemAction, error) {
	if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
		return nil, err
	}

	return r.getBackupItemAction()
}

// AppliesTo restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1BackupItemAction) AppliesTo() (velero.ResourceSelector, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return velero.ResourceSelector{}, err
	}

	return delegate.AppliesTo()
}

// Execute restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1BackupItemAction) Execute(
	item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return nil, nil, err
	}

	return delegate.Execute(item, backup)
}

// Version 2: simply discard ctx and call the version 1 function.
// ExecuteV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1BackupItemAction) ExecuteV2(
	ctx context.Context, item runtime.Unstructured, backup *api.Backup) (
	runtime.Unstructured, []velero.ResourceIdentifier, error) {

	delegate, err := r.getDelegate()
	if err != nil {
		return nil, nil, err
	}
	return delegate.Execute(item, backup)
}
@@ -0,0 +1,100 @@
|
||||
/*
|
||||
Copyright 2021 the Velero contributors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package clientmgmt
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
|
||||
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
|
||||
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
|
||||
deleteitemactionv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v1"
|
||||
deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
|
||||
)
|
||||
|
||||
type restartableAdaptedV1DeleteItemAction struct {
	key                 kindAndName
	sharedPluginProcess RestartableProcess
	config              map[string]string
}

// newAdaptedV1DeleteItemAction returns a new restartableAdaptedV1DeleteItemAction.
func newAdaptedV1DeleteItemAction(
	name string, sharedPluginProcess RestartableProcess) deleteitemactionv2.DeleteItemAction {
	r := &restartableAdaptedV1DeleteItemAction{
		key:                 kindAndName{kind: framework.PluginKindDeleteItemAction, name: name},
		sharedPluginProcess: sharedPluginProcess,
	}
	return r
}

// getDeleteItemAction returns the delete item action for this restartableAdaptedV1DeleteItemAction.
// It does *not* restart the plugin process.
func (r *restartableAdaptedV1DeleteItemAction) getDeleteItemAction() (deleteitemactionv1.DeleteItemAction, error) {
	plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
	if err != nil {
		return nil, err
	}

	deleteItemAction, ok := plugin.(deleteitemactionv1.DeleteItemAction)
	if !ok {
		return nil, errors.Errorf("%T is not a DeleteItemAction!", plugin)
	}

	return deleteItemAction, nil
}

// getDelegate restarts the plugin process (if needed) and returns the delete item action for this
// restartableAdaptedV1DeleteItemAction.
func (r *restartableAdaptedV1DeleteItemAction) getDelegate() (deleteitemactionv1.DeleteItemAction, error) {
	if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
		return nil, err
	}

	return r.getDeleteItemAction()
}

// AppliesTo restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1DeleteItemAction) AppliesTo() (velero.ResourceSelector, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return velero.ResourceSelector{}, err
	}

	return delegate.AppliesTo()
}

// Execute restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1DeleteItemAction) Execute(input *velero.DeleteItemActionExecuteInput) error {
	delegate, err := r.getDelegate()
	if err != nil {
		return err
	}

	return delegate.Execute(input)
}

// ExecuteV2 restarts the plugin's process if needed, then delegates the call to the v1 Execute, discarding ctx.
func (r *restartableAdaptedV1DeleteItemAction) ExecuteV2(
	ctx context.Context, input *velero.DeleteItemActionExecuteInput) error {
	delegate, err := r.getDelegate()
	if err != nil {
		return err
	}

	return delegate.Execute(input)
}
246
pkg/plugin/clientmgmt/restartable_adapted_v1_object_store.go
Normal file
@@ -0,0 +1,246 @@
/*
Copyright 2021 the Velero contributors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package clientmgmt

import (
	"context"
	"io"
	"time"

	"github.com/pkg/errors"

	"github.com/vmware-tanzu/velero/pkg/plugin/framework"
	objectstorev1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v1"
	objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
)

// restartableAdaptedV1ObjectStore adapts a version 1 ObjectStore plugin to the version 2 interface.
type restartableAdaptedV1ObjectStore struct {
	restartableObjectStore
}

// newAdaptedV1ObjectStore returns a new restartableAdaptedV1ObjectStore.
func newAdaptedV1ObjectStore(name string, sharedPluginProcess RestartableProcess) objectstorev2.ObjectStore {
	key := kindAndName{kind: framework.PluginKindObjectStore, name: name}
	r := &restartableAdaptedV1ObjectStore{
		restartableObjectStore: restartableObjectStore{
			key:                 key,
			sharedPluginProcess: sharedPluginProcess,
		},
	}

	// Register our reinitializer so we can reinitialize after a restart with r.config.
	sharedPluginProcess.addReinitializer(key, r)
	return r
}

// reinitialize reinitializes a re-dispensed plugin using the initial data passed to Init().
func (r *restartableAdaptedV1ObjectStore) reinitialize(dispensed interface{}) error {
	objectStore, ok := dispensed.(objectstorev1.ObjectStore)
	if !ok {
		return errors.Errorf("%T is not a ObjectStore!", dispensed)
	}

	return r.init(objectStore, r.config)
}

// getObjectStore returns the object store for this restartableAdaptedV1ObjectStore.
// It does *not* restart the plugin process.
func (r *restartableAdaptedV1ObjectStore) getObjectStore() (objectstorev1.ObjectStore, error) {
	plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
	if err != nil {
		return nil, err
	}

	objectStore, ok := plugin.(objectstorev1.ObjectStore)
	if !ok {
		return nil, errors.Errorf("%T is not a ObjectStore!", plugin)
	}

	return objectStore, nil
}

// getDelegate restarts the plugin process (if needed) and returns the object store for this
// restartableAdaptedV1ObjectStore.
func (r *restartableAdaptedV1ObjectStore) getDelegate() (objectstorev1.ObjectStore, error) {
	if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
		return nil, err
	}

	return r.getObjectStore()
}

// Init initializes the object store instance using config. If this is the first invocation, r stores config for future
// reinitialization needs. Init does NOT restart the shared plugin process. Init may only be called once.
func (r *restartableAdaptedV1ObjectStore) Init(config map[string]string) error {
	if r.config != nil {
		return errors.Errorf("already initialized")
	}

	// Not using getDelegate() to avoid possible infinite recursion
	delegate, err := r.getObjectStore()
	if err != nil {
		return err
	}

	r.config = config

	return r.init(delegate, config)
}

// InitV2 discards ctx and delegates to Init.
func (r *restartableAdaptedV1ObjectStore) InitV2(ctx context.Context, config map[string]string) error {
	return r.Init(config)
}

// init calls Init on objectStore with config. This is split out from Init() so that both Init() and reinitialize() may
// call it using a specific ObjectStore.
func (r *restartableAdaptedV1ObjectStore) init(objectStore objectstorev1.ObjectStore, config map[string]string) error {
	return objectStore.Init(config)
}

// PutObject restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) PutObject(bucket string, key string, body io.Reader) error {
	delegate, err := r.getDelegate()
	if err != nil {
		return err
	}
	return delegate.PutObject(bucket, key, body)
}

// ObjectExists restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) ObjectExists(bucket, key string) (bool, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return false, err
	}
	return delegate.ObjectExists(bucket, key)
}

// GetObject restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) GetObject(bucket string, key string) (io.ReadCloser, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return nil, err
	}
	return delegate.GetObject(bucket, key)
}

// ListCommonPrefixes restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) ListCommonPrefixes(
	bucket string, prefix string, delimiter string) ([]string, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return nil, err
	}
	return delegate.ListCommonPrefixes(bucket, prefix, delimiter)
}

// ListObjects restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) ListObjects(bucket string, prefix string) ([]string, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return nil, err
	}
	return delegate.ListObjects(bucket, prefix)
}

// DeleteObject restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) DeleteObject(bucket string, key string) error {
	delegate, err := r.getDelegate()
	if err != nil {
		return err
	}
	return delegate.DeleteObject(bucket, key)
}

// CreateSignedURL restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) CreateSignedURL(
	bucket string, key string, ttl time.Duration) (string, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return "", err
	}
	return delegate.CreateSignedURL(bucket, key, ttl)
}

// The version 2 methods below simply discard ctx and delegate to the version 1 implementations.

// PutObjectV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) PutObjectV2(
	ctx context.Context, bucket string, key string, body io.Reader) error {
	delegate, err := r.getDelegate()
	if err != nil {
		return err
	}
	return delegate.PutObject(bucket, key, body)
}

// ObjectExistsV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) ObjectExistsV2(ctx context.Context, bucket, key string) (bool, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return false, err
	}
	return delegate.ObjectExists(bucket, key)
}

// GetObjectV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) GetObjectV2(
	ctx context.Context, bucket string, key string) (io.ReadCloser, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return nil, err
	}
	return delegate.GetObject(bucket, key)
}

// ListCommonPrefixesV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) ListCommonPrefixesV2(
	ctx context.Context, bucket string, prefix string, delimiter string) ([]string, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return nil, err
	}
	return delegate.ListCommonPrefixes(bucket, prefix, delimiter)
}

// ListObjectsV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) ListObjectsV2(
	ctx context.Context, bucket string, prefix string) ([]string, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return nil, err
	}
	return delegate.ListObjects(bucket, prefix)
}

// DeleteObjectV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) DeleteObjectV2(ctx context.Context, bucket string, key string) error {
	delegate, err := r.getDelegate()
	if err != nil {
		return err
	}
	return delegate.DeleteObject(bucket, key)
}

// CreateSignedURLV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1ObjectStore) CreateSignedURLV2(
	ctx context.Context, bucket string, key string, ttl time.Duration) (string, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return "", err
	}
	return delegate.CreateSignedURL(bucket, key, ttl)
}
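The Init/reinitialize pairing above is what makes these wrappers restartable: the first Init stores config, and when the shared process is re-dispensed after a crash, reinitialize replays that stored config into the fresh instance. A minimal, self-contained sketch of that lifecycle; all names here (`store`, `restartableStore`) are illustrative, not Velero's:

```go
package main

import (
	"errors"
	"fmt"
)

// store is a hypothetical v1-style plugin with a one-shot Init.
type store struct{ initialized bool }

func (s *store) Init(config map[string]string) error {
	s.initialized = true
	return nil
}

// restartableStore remembers config so a re-dispensed plugin can be re-initialized.
type restartableStore struct {
	current *store
	config  map[string]string
}

// Init may only be called once; it records config for later reinitialization.
func (r *restartableStore) Init(config map[string]string) error {
	if r.config != nil {
		return errors.New("already initialized")
	}
	r.config = config
	return r.current.Init(config)
}

// reinitialize replays the stored config into a freshly dispensed plugin,
// mirroring restartableAdaptedV1ObjectStore.reinitialize.
func (r *restartableStore) reinitialize(dispensed interface{}) error {
	s, ok := dispensed.(*store)
	if !ok {
		return fmt.Errorf("%T is not a store", dispensed)
	}
	r.current = s
	return s.Init(r.config)
}

func main() {
	r := &restartableStore{current: &store{}}
	_ = r.Init(map[string]string{"bucket": "b"})
	fmt.Println(r.Init(nil)) // prints "already initialized"

	// Simulate a plugin-process restart: a new instance is dispensed.
	fresh := &store{}
	_ = r.reinitialize(fresh)
	fmt.Println(fresh.initialized) // prints "true"
}
```

Storing config on the wrapper rather than in the plugin process is the key design choice: the process can die at any time, so the only durable copy of the initialization data lives on the client side.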
@@ -0,0 +1,100 @@
/*
Copyright 2021 the Velero contributors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package clientmgmt

import (
	"context"

	"github.com/pkg/errors"

	"github.com/vmware-tanzu/velero/pkg/plugin/framework"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
	restoreitemactionv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v1"
	restoreitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
)

type restartableAdaptedV1RestoreItemAction struct {
	key                 kindAndName
	sharedPluginProcess RestartableProcess
	config              map[string]string
}

// newAdaptedV1RestoreItemAction returns a new restartableAdaptedV1RestoreItemAction.
func newAdaptedV1RestoreItemAction(
	name string, sharedPluginProcess RestartableProcess) restoreitemactionv2.RestoreItemAction {
	r := &restartableAdaptedV1RestoreItemAction{
		key:                 kindAndName{kind: framework.PluginKindRestoreItemAction, name: name},
		sharedPluginProcess: sharedPluginProcess,
	}
	return r
}

// getRestoreItemAction returns the restore item action for this restartableAdaptedV1RestoreItemAction.
// It does *not* restart the plugin process.
func (r *restartableAdaptedV1RestoreItemAction) getRestoreItemAction() (restoreitemactionv1.RestoreItemAction, error) {
	plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
	if err != nil {
		return nil, err
	}

	restoreItemAction, ok := plugin.(restoreitemactionv1.RestoreItemAction)
	if !ok {
		return nil, errors.Errorf("%T is not a RestoreItemAction!", plugin)
	}

	return restoreItemAction, nil
}

// getDelegate restarts the plugin process (if needed) and returns the restore item action for this
// restartableAdaptedV1RestoreItemAction.
func (r *restartableAdaptedV1RestoreItemAction) getDelegate() (restoreitemactionv1.RestoreItemAction, error) {
	if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
		return nil, err
	}

	return r.getRestoreItemAction()
}

// AppliesTo restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1RestoreItemAction) AppliesTo() (velero.ResourceSelector, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return velero.ResourceSelector{}, err
	}

	return delegate.AppliesTo()
}

// Execute restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1RestoreItemAction) Execute(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return nil, err
	}

	return delegate.Execute(input)
}

// ExecuteV2 restarts the plugin's process if needed, then delegates the call to the v1 Execute, discarding ctx.
func (r *restartableAdaptedV1RestoreItemAction) ExecuteV2(
	ctx context.Context, input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return nil, err
	}

	return delegate.Execute(input)
}
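Every restartable wrapper locates its concrete delegate the same way: look the plugin up in the shared process by a (kind, name) key, then type-assert it to the expected interface, returning an error if the assertion fails. A tiny self-contained sketch of that dispatch; the types here (`process`, `RestoreItemAction`, `echoAction`) are illustrative stand-ins for Velero's:

```go
package main

import "fmt"

// kindAndName identifies a plugin within a shared process.
type kindAndName struct {
	kind string
	name string
}

// process is a hypothetical shared plugin process holding dispensed plugins.
type process struct {
	plugins map[kindAndName]interface{}
}

func (p *process) getByKindAndName(key kindAndName) (interface{}, error) {
	plugin, ok := p.plugins[key]
	if !ok {
		return nil, fmt.Errorf("plugin %v not found", key)
	}
	return plugin, nil
}

// RestoreItemAction is an illustrative stand-in for restoreitemactionv1.RestoreItemAction.
type RestoreItemAction interface {
	Execute(input string) (string, error)
}

type echoAction struct{}

func (echoAction) Execute(input string) (string, error) { return input, nil }

// getRestoreItemAction mirrors the lookup-then-assert pattern used above.
func getRestoreItemAction(p *process, name string) (RestoreItemAction, error) {
	plugin, err := p.getByKindAndName(kindAndName{kind: "RestoreItemAction", name: name})
	if err != nil {
		return nil, err
	}
	action, ok := plugin.(RestoreItemAction)
	if !ok {
		return nil, fmt.Errorf("%T is not a RestoreItemAction", plugin)
	}
	return action, nil
}

func main() {
	p := &process{plugins: map[kindAndName]interface{}{
		{kind: "RestoreItemAction", name: "example"}: echoAction{},
	}}
	action, err := getRestoreItemAction(p, "example")
	if err != nil {
		panic(err)
	}
	out, _ := action.Execute("item")
	fmt.Println(out) // prints "item"
}
```

The type assertion is what enforces the interface version: a plugin registered under the right key but implementing the wrong interface fails fast with a descriptive error rather than panicking at call time.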
@@ -0,0 +1,233 @@
/*
Copyright 2021 the Velero contributors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package clientmgmt

import (
	"context"

	"github.com/pkg/errors"
	"k8s.io/apimachinery/pkg/runtime"

	"github.com/vmware-tanzu/velero/pkg/plugin/framework"
	volumesnapshotterv1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v1"
	volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
)

// restartableAdaptedV1VolumeSnapshotter adapts a version 1 VolumeSnapshotter plugin to the version 2 interface.
type restartableAdaptedV1VolumeSnapshotter struct {
	key                 kindAndName
	sharedPluginProcess RestartableProcess
	config              map[string]string
}

// newAdaptedV1VolumeSnapshotter returns a new restartableAdaptedV1VolumeSnapshotter.
func newAdaptedV1VolumeSnapshotter(
	name string, sharedPluginProcess RestartableProcess) volumesnapshotterv2.VolumeSnapshotter {
	key := kindAndName{kind: framework.PluginKindVolumeSnapshotter, name: name}
	r := &restartableAdaptedV1VolumeSnapshotter{
		key:                 key,
		sharedPluginProcess: sharedPluginProcess,
	}

	// Register our reinitializer so we can reinitialize after a restart with r.config.
	sharedPluginProcess.addReinitializer(key, r)

	return r
}

// reinitialize reinitializes a re-dispensed plugin using the initial data passed to Init().
func (r *restartableAdaptedV1VolumeSnapshotter) reinitialize(dispensed interface{}) error {
	volumeSnapshotter, ok := dispensed.(volumesnapshotterv1.VolumeSnapshotter)
	if !ok {
		return errors.Errorf("%T is not a VolumeSnapshotter!", dispensed)
	}
	return r.init(volumeSnapshotter, r.config)
}

// getVolumeSnapshotter returns the volume snapshotter for this restartableAdaptedV1VolumeSnapshotter.
// It does *not* restart the plugin process.
func (r *restartableAdaptedV1VolumeSnapshotter) getVolumeSnapshotter() (volumesnapshotterv1.VolumeSnapshotter, error) {
	plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
	if err != nil {
		return nil, err
	}

	volumeSnapshotter, ok := plugin.(volumesnapshotterv1.VolumeSnapshotter)
	if !ok {
		return nil, errors.Errorf("%T is not a VolumeSnapshotter!", plugin)
	}

	return volumeSnapshotter, nil
}

// getDelegate restarts the plugin process (if needed) and returns the volume snapshotter
// for this restartableAdaptedV1VolumeSnapshotter.
func (r *restartableAdaptedV1VolumeSnapshotter) getDelegate() (volumesnapshotterv1.VolumeSnapshotter, error) {
	if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
		return nil, err
	}

	return r.getVolumeSnapshotter()
}

// Init initializes the volume snapshotter instance using config. If this is the first invocation,
// r stores config for future reinitialization needs. Init does NOT restart the shared plugin process.
// Init may only be called once.
func (r *restartableAdaptedV1VolumeSnapshotter) Init(config map[string]string) error {
	if r.config != nil {
		return errors.Errorf("already initialized")
	}

	// Not using getDelegate() to avoid possible infinite recursion
	delegate, err := r.getVolumeSnapshotter()
	if err != nil {
		return err
	}

	r.config = config

	return r.init(delegate, config)
}

// init calls Init on volumeSnapshotter with config. This is split out from Init() so that both Init()
// and reinitialize() may call it using a specific VolumeSnapshotter.
func (r *restartableAdaptedV1VolumeSnapshotter) init(
	volumeSnapshotter volumesnapshotterv1.VolumeSnapshotter, config map[string]string) error {
	return volumeSnapshotter.Init(config)
}

// CreateVolumeFromSnapshot restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) CreateVolumeFromSnapshot(
	snapshotID string, volumeType string, volumeAZ string, iops *int64) (volumeID string, err error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return "", err
	}
	return delegate.CreateVolumeFromSnapshot(snapshotID, volumeType, volumeAZ, iops)
}

// GetVolumeID restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) GetVolumeID(pv runtime.Unstructured) (string, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return "", err
	}
	return delegate.GetVolumeID(pv)
}

// SetVolumeID restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) SetVolumeID(
	pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return nil, err
	}
	return delegate.SetVolumeID(pv, volumeID)
}

// GetVolumeInfo restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) GetVolumeInfo(
	volumeID string, volumeAZ string) (string, *int64, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return "", nil, err
	}
	return delegate.GetVolumeInfo(volumeID, volumeAZ)
}

// CreateSnapshot restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) CreateSnapshot(
	volumeID string, volumeAZ string, tags map[string]string) (snapshotID string, err error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return "", err
	}
	return delegate.CreateSnapshot(volumeID, volumeAZ, tags)
}

// DeleteSnapshot restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) DeleteSnapshot(snapshotID string) error {
	delegate, err := r.getDelegate()
	if err != nil {
		return err
	}
	return delegate.DeleteSnapshot(snapshotID)
}

// The version 2 methods below simply discard ctx and call the version 1 functions.

// InitV2 discards ctx and delegates to Init.
func (r *restartableAdaptedV1VolumeSnapshotter) InitV2(ctx context.Context, config map[string]string) error {
	return r.Init(config)
}

// CreateVolumeFromSnapshotV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) CreateVolumeFromSnapshotV2(
	ctx context.Context, snapshotID string, volumeType string, volumeAZ string, iops *int64) (volumeID string, err error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return "", err
	}
	return delegate.CreateVolumeFromSnapshot(snapshotID, volumeType, volumeAZ, iops)
}

// GetVolumeIDV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) GetVolumeIDV2(
	ctx context.Context, pv runtime.Unstructured) (string, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return "", err
	}
	return delegate.GetVolumeID(pv)
}

// SetVolumeIDV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) SetVolumeIDV2(
	ctx context.Context, pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return nil, err
	}
	return delegate.SetVolumeID(pv, volumeID)
}

// GetVolumeInfoV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) GetVolumeInfoV2(
	ctx context.Context, volumeID string, volumeAZ string) (string, *int64, error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return "", nil, err
	}
	return delegate.GetVolumeInfo(volumeID, volumeAZ)
}

// CreateSnapshotV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) CreateSnapshotV2(
	ctx context.Context, volumeID string, volumeAZ string, tags map[string]string) (snapshotID string, err error) {
	delegate, err := r.getDelegate()
	if err != nil {
		return "", err
	}
	return delegate.CreateSnapshot(volumeID, volumeAZ, tags)
}

// DeleteSnapshotV2 restarts the plugin's process if needed, then delegates the call.
func (r *restartableAdaptedV1VolumeSnapshotter) DeleteSnapshotV2(ctx context.Context, snapshotID string) error {
	delegate, err := r.getDelegate()
	if err != nil {
		return err
	}
	return delegate.DeleteSnapshot(snapshotID)
}
@@ -1,5 +1,5 @@
|
||||
/*
|
||||
Copyright 2018 the Velero contributors.
|
||||
Copyright 2018, 2021 the Velero contributors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
@@ -17,12 +17,15 @@ limitations under the License.
|
||||
package clientmgmt
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
|
||||
api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
|
||||
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
|
||||
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
|
||||
backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
|
||||
)
|
||||
|
||||
// restartableBackupItemAction is a backup item action for a given implementation (such as "pod"). It is associated with
|
||||
@@ -34,10 +37,11 @@ type restartableBackupItemAction struct {
|
||||
sharedPluginProcess RestartableProcess
|
||||
}
|
||||
|
||||
// newRestartableBackupItemAction returns a new restartableBackupItemAction.
|
||||
func newRestartableBackupItemAction(name string, sharedPluginProcess RestartableProcess) *restartableBackupItemAction {
|
||||
// newRestartableBackupItemActionV2 returns a new restartableBackupItemAction.
|
||||
func newRestartableBackupItemActionV2(
|
||||
name string, sharedPluginProcess RestartableProcess) backupitemactionv2.BackupItemAction {
|
||||
r := &restartableBackupItemAction{
|
||||
key: kindAndName{kind: framework.PluginKindBackupItemAction, name: name},
|
||||
key: kindAndName{kind: framework.PluginKindBackupItemActionV2, name: name},
|
||||
sharedPluginProcess: sharedPluginProcess,
|
||||
}
|
||||
return r
|
||||
@@ -45,13 +49,13 @@ func newRestartableBackupItemAction(name string, sharedPluginProcess Restartable
|
||||
|
||||
// getBackupItemAction returns the backup item action for this restartableBackupItemAction. It does *not* restart the
|
||||
// plugin process.
|
||||
func (r *restartableBackupItemAction) getBackupItemAction() (velero.BackupItemAction, error) {
|
||||
func (r *restartableBackupItemAction) getBackupItemAction() (backupitemactionv2.BackupItemAction, error) {
|
||||
plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
backupItemAction, ok := plugin.(velero.BackupItemAction)
|
||||
backupItemAction, ok := plugin.(backupitemactionv2.BackupItemAction)
|
||||
if !ok {
|
||||
return nil, errors.Errorf("%T is not a BackupItemAction!", plugin)
|
||||
}
|
||||
@@ -60,7 +64,7 @@ func (r *restartableBackupItemAction) getBackupItemAction() (velero.BackupItemAc
|
||||
}
|
||||
|
||||
// getDelegate restarts the plugin process (if needed) and returns the backup item action for this restartableBackupItemAction.
|
||||
func (r *restartableBackupItemAction) getDelegate() (velero.BackupItemAction, error) {
|
||||
func (r *restartableBackupItemAction) getDelegate() (backupitemactionv2.BackupItemAction, error) {
|
||||
if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -87,3 +91,13 @@ func (r *restartableBackupItemAction) Execute(item runtime.Unstructured, backup
|
||||
|
||||
return delegate.Execute(item, backup)
|
||||
}
|
||||
|
||||
// ExecuteV2 restarts the plugin's process if needed, then delegates the call.
|
||||
func (r *restartableBackupItemAction) ExecuteV2(ctx context.Context, item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
|
||||
delegate, err := r.getDelegate()
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
return delegate.ExecuteV2(ctx, item, backup)
|
||||
}
|
||||
|
||||
@@ -17,10 +17,13 @@ limitations under the License.
|
||||
package clientmgmt
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
|
||||
"github.com/vmware-tanzu/velero/pkg/plugin/framework"
|
||||
"github.com/vmware-tanzu/velero/pkg/plugin/velero"
|
||||
deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
|
||||
)
|
||||
|
||||
// restartableDeleteItemAction is a delete item action for a given implementation (such as "pod"). It is associated with
|
||||
@@ -34,7 +37,8 @@ type restartableDeleteItemAction struct {
|
||||
}
|
||||
|
||||
// newRestartableDeleteItemAction returns a new restartableDeleteItemAction.
|
||||
func newRestartableDeleteItemAction(name string, sharedPluginProcess RestartableProcess) *restartableDeleteItemAction {
|
||||
func newRestartableDeleteItemActionV2(
|
||||
name string, sharedPluginProcess RestartableProcess) deleteitemactionv2.DeleteItemAction {
|
||||
r := &restartableDeleteItemAction{
|
||||
key: kindAndName{kind: framework.PluginKindDeleteItemAction, name: name},
|
||||
sharedPluginProcess: sharedPluginProcess,
|
||||
@@ -44,13 +48,13 @@ func newRestartableDeleteItemAction(name string, sharedPluginProcess Restartable
|
||||
|
||||
// getDeleteItemAction returns the delete item action for this restartableDeleteItemAction. It does *not* restart the
|
||||
// plugin process.
|
||||
func (r *restartableDeleteItemAction) getDeleteItemAction() (velero.DeleteItemAction, error) {
|
||||
func (r *restartableDeleteItemAction) getDeleteItemAction() (deleteitemactionv2.DeleteItemAction, error) {
|
||||
plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
deleteItemAction, ok := plugin.(velero.DeleteItemAction)
|
||||
deleteItemAction, ok := plugin.(deleteitemactionv2.DeleteItemAction)
|
||||
if !ok {
|
||||
return nil, errors.Errorf("%T is not a DeleteItemAction!", plugin)
|
||||
}
|
||||
@@ -59,7 +63,7 @@ func (r *restartableDeleteItemAction) getDeleteItemAction() (velero.DeleteItemAc
|
||||
}
|
||||
|
||||
// getDelegate restarts the plugin process (if needed) and returns the delete item action for this restartableDeleteItemAction.
|
||||
func (r *restartableDeleteItemAction) getDelegate() (velero.DeleteItemAction, error) {
|
||||
func (r *restartableDeleteItemAction) getDelegate() (deleteitemactionv2.DeleteItemAction, error) {
|
||||
if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -86,3 +90,13 @@ func (r *restartableDeleteItemAction) Execute(input *velero.DeleteItemActionExec
|
||||
|
||||
return delegate.Execute(input)
|
||||
}
|
||||
|
||||
// ExecuteV2 restarts the plugin's process if needed, then delegates the call.
|
||||
func (r *restartableDeleteItemAction) ExecuteV2(ctx context.Context, input *velero.DeleteItemActionExecuteInput) error {
|
||||
delegate, err := r.getDelegate()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return delegate.ExecuteV2(ctx, input)
|
||||
}
|
||||
|
||||
@@ -1,5 +1,5 @@
 /*
-Copyright 2018 the Velero contributors.
+Copyright 2021 the Velero contributors.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -17,13 +17,14 @@ limitations under the License.
 package clientmgmt
 
 import (
+	"context"
 	"io"
 	"time"
 
 	"github.com/pkg/errors"
 
 	"github.com/vmware-tanzu/velero/pkg/plugin/framework"
-	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
 )
 
 // restartableObjectStore is an object store for a given implementation (such as "aws"). It is associated with
@@ -38,9 +39,9 @@ type restartableObjectStore struct {
 	config map[string]string
 }
 
-// newRestartableObjectStore returns a new restartableObjectStore.
-func newRestartableObjectStore(name string, sharedPluginProcess RestartableProcess) *restartableObjectStore {
-	key := kindAndName{kind: framework.PluginKindObjectStore, name: name}
+// newRestartableObjectStoreV2 returns a new objectstorev2.ObjectStore for PluginKindObjectStoreV2
+func newRestartableObjectStoreV2(name string, sharedPluginProcess RestartableProcess) objectstorev2.ObjectStore {
+	key := kindAndName{kind: framework.PluginKindObjectStoreV2, name: name}
 	r := &restartableObjectStore{
 		key:                 key,
 		sharedPluginProcess: sharedPluginProcess,
@@ -54,7 +55,7 @@ func newRestartableObjectStore(name string, sharedPluginProcess RestartableProce
 
 // reinitialize reinitializes a re-dispensed plugin using the initial data passed to Init().
 func (r *restartableObjectStore) reinitialize(dispensed interface{}) error {
-	objectStore, ok := dispensed.(velero.ObjectStore)
+	objectStore, ok := dispensed.(objectstorev2.ObjectStore)
 	if !ok {
 		return errors.Errorf("%T is not a ObjectStore!", dispensed)
 	}
@@ -64,13 +65,13 @@ func (r *restartableObjectStore) reinitialize(dispensed interface{}) error {
 
 // getObjectStore returns the object store for this restartableObjectStore. It does *not* restart the
 // plugin process.
-func (r *restartableObjectStore) getObjectStore() (velero.ObjectStore, error) {
+func (r *restartableObjectStore) getObjectStore() (objectstorev2.ObjectStore, error) {
 	plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
 	if err != nil {
 		return nil, err
 	}
 
-	objectStore, ok := plugin.(velero.ObjectStore)
+	objectStore, ok := plugin.(objectstorev2.ObjectStore)
 	if !ok {
 		return nil, errors.Errorf("%T is not a ObjectStore!", plugin)
 	}
@@ -79,7 +80,7 @@ func (r *restartableObjectStore) getObjectStore() (velero.ObjectStore, error) {
 }
 
 // getDelegate restarts the plugin process (if needed) and returns the object store for this restartableObjectStore.
-func (r *restartableObjectStore) getDelegate() (velero.ObjectStore, error) {
+func (r *restartableObjectStore) getDelegate() (objectstorev2.ObjectStore, error) {
 	if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
 		return nil, err
 	}
@@ -105,9 +106,15 @@ func (r *restartableObjectStore) Init(config map[string]string) error {
 	return r.init(delegate, config)
 }
 
+// InitV2 initializes the object store instance using config. If this is the first invocation, r stores config for future
+// reinitialization needs. Init does NOT restart the shared plugin process. Init may only be called once.
+func (r *restartableObjectStore) InitV2(ctx context.Context, config map[string]string) error {
+	return r.Init(config)
+}
+
 // init calls Init on objectStore with config. This is split out from Init() so that both Init() and reinitialize() may
 // call it using a specific ObjectStore.
-func (r *restartableObjectStore) init(objectStore velero.ObjectStore, config map[string]string) error {
+func (r *restartableObjectStore) init(objectStore objectstorev2.ObjectStore, config map[string]string) error {
 	return objectStore.Init(config)
 }
@@ -173,3 +180,68 @@ func (r *restartableObjectStore) CreateSignedURL(bucket string, key string, ttl
 	}
 	return delegate.CreateSignedURL(bucket, key, ttl)
 }
+
+// Version 2
+// PutObjectV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableObjectStore) PutObjectV2(ctx context.Context, bucket string, key string, body io.Reader) error {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return err
+	}
+	return delegate.PutObjectV2(ctx, bucket, key, body)
+}
+
+// ObjectExistsV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableObjectStore) ObjectExistsV2(ctx context.Context, bucket, key string) (bool, error) {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return false, err
+	}
+	return delegate.ObjectExistsV2(ctx, bucket, key)
+}
+
+// GetObjectV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableObjectStore) GetObjectV2(ctx context.Context, bucket string, key string) (io.ReadCloser, error) {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return nil, err
+	}
+	return delegate.GetObjectV2(ctx, bucket, key)
+}
+
+// ListCommonPrefixesV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableObjectStore) ListCommonPrefixesV2(
+	ctx context.Context, bucket string, prefix string, delimiter string) ([]string, error) {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return nil, err
+	}
+	return delegate.ListCommonPrefixesV2(ctx, bucket, prefix, delimiter)
+}
+
+// ListObjectsV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableObjectStore) ListObjectsV2(ctx context.Context, bucket string, prefix string) ([]string, error) {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return nil, err
+	}
+	return delegate.ListObjectsV2(ctx, bucket, prefix)
+}
+
+// DeleteObjectV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableObjectStore) DeleteObjectV2(ctx context.Context, bucket string, key string) error {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return err
+	}
+	return delegate.DeleteObjectV2(ctx, bucket, key)
+}
+
+// CreateSignedURLV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableObjectStore) CreateSignedURLV2(ctx context.Context, bucket string, key string, ttl time.Duration) (string, error) {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return "", err
+	}
+	return delegate.CreateSignedURLV2(ctx, bucket, key, ttl)
+}

@@ -1,5 +1,5 @@
 /*
-Copyright 2018 the Velero contributors.
+Copyright 2021 the Velero contributors.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -17,10 +17,13 @@ limitations under the License.
 package clientmgmt
 
 import (
+	"context"
+
 	"github.com/pkg/errors"
 
 	"github.com/vmware-tanzu/velero/pkg/plugin/framework"
 	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	restoreitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
 )
 
 // restartableRestoreItemAction is a restore item action for a given implementation (such as "pod"). It is associated with
@@ -33,10 +36,11 @@ type restartableRestoreItemAction struct {
 	config map[string]string
 }
 
-// newRestartableRestoreItemAction returns a new restartableRestoreItemAction.
-func newRestartableRestoreItemAction(name string, sharedPluginProcess RestartableProcess) *restartableRestoreItemAction {
+// newRestartableRestoreItemActionV2 returns a new restartableRestoreItemAction.
+func newRestartableRestoreItemActionV2(
+	name string, sharedPluginProcess RestartableProcess) restoreitemactionv2.RestoreItemAction {
 	r := &restartableRestoreItemAction{
-		key:                 kindAndName{kind: framework.PluginKindRestoreItemAction, name: name},
+		key:                 kindAndName{kind: framework.PluginKindRestoreItemActionV2, name: name},
 		sharedPluginProcess: sharedPluginProcess,
 	}
 	return r
@@ -44,13 +48,13 @@ func newRestartableRestoreItemAction(name string, sharedPluginProcess Restartabl
 
 // getRestoreItemAction returns the restore item action for this restartableRestoreItemAction. It does *not* restart the
 // plugin process.
-func (r *restartableRestoreItemAction) getRestoreItemAction() (velero.RestoreItemAction, error) {
+func (r *restartableRestoreItemAction) getRestoreItemAction() (restoreitemactionv2.RestoreItemAction, error) {
 	plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
 	if err != nil {
 		return nil, err
 	}
 
-	restoreItemAction, ok := plugin.(velero.RestoreItemAction)
+	restoreItemAction, ok := plugin.(restoreitemactionv2.RestoreItemAction)
 	if !ok {
 		return nil, errors.Errorf("%T is not a RestoreItemAction!", plugin)
 	}
@@ -59,7 +63,7 @@ func (r *restartableRestoreItemAction) getRestoreItemAction() (velero.RestoreIte
 }
 
 // getDelegate restarts the plugin process (if needed) and returns the restore item action for this restartableRestoreItemAction.
-func (r *restartableRestoreItemAction) getDelegate() (velero.RestoreItemAction, error) {
+func (r *restartableRestoreItemAction) getDelegate() (restoreitemactionv2.RestoreItemAction, error) {
 	if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
 		return nil, err
 	}
@@ -86,3 +90,14 @@ func (r *restartableRestoreItemAction) Execute(input *velero.RestoreItemActionEx
 
 	return delegate.Execute(input)
 }
+
+// ExecuteV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableRestoreItemAction) ExecuteV2(
+	ctx context.Context, input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return nil, err
+	}
+
+	return delegate.ExecuteV2(ctx, input)
+}

@@ -1,5 +1,5 @@
 /*
-Copyright 2018 the Velero contributors.
+Copyright 2021 the Velero contributors.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -17,11 +17,13 @@ limitations under the License.
 package clientmgmt
 
 import (
+	"context"
+
 	"github.com/pkg/errors"
 	"k8s.io/apimachinery/pkg/runtime"
 
 	"github.com/vmware-tanzu/velero/pkg/plugin/framework"
-	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
 )
 
 // restartableVolumeSnapshotter is a volume snapshotter for a given implementation (such as "aws"). It is associated with
@@ -34,9 +36,10 @@ type restartableVolumeSnapshotter struct {
 	config map[string]string
 }
 
-// newRestartableVolumeSnapshotter returns a new restartableVolumeSnapshotter.
-func newRestartableVolumeSnapshotter(name string, sharedPluginProcess RestartableProcess) *restartableVolumeSnapshotter {
-	key := kindAndName{kind: framework.PluginKindVolumeSnapshotter, name: name}
+// newRestartableVolumeSnapshotterV2 returns a new restartableVolumeSnapshotter.
+func newRestartableVolumeSnapshotterV2(
+	name string, sharedPluginProcess RestartableProcess) volumesnapshotterv2.VolumeSnapshotter {
+	key := kindAndName{kind: framework.PluginKindVolumeSnapshotterV2, name: name}
 	r := &restartableVolumeSnapshotter{
 		key:                 key,
 		sharedPluginProcess: sharedPluginProcess,
@@ -50,7 +53,7 @@ func newRestartableVolumeSnapshotter(name string, sharedPluginProcess Restartabl
 
 // reinitialize reinitializes a re-dispensed plugin using the initial data passed to Init().
 func (r *restartableVolumeSnapshotter) reinitialize(dispensed interface{}) error {
-	volumeSnapshotter, ok := dispensed.(velero.VolumeSnapshotter)
+	volumeSnapshotter, ok := dispensed.(volumesnapshotterv2.VolumeSnapshotter)
 	if !ok {
 		return errors.Errorf("%T is not a VolumeSnapshotter!", dispensed)
 	}
@@ -59,13 +62,13 @@ func (r *restartableVolumeSnapshotter) reinitialize(dispensed interface{}) error
 
 // getVolumeSnapshotter returns the volume snapshotter for this restartableVolumeSnapshotter. It does *not* restart the
 // plugin process.
-func (r *restartableVolumeSnapshotter) getVolumeSnapshotter() (velero.VolumeSnapshotter, error) {
+func (r *restartableVolumeSnapshotter) getVolumeSnapshotter() (volumesnapshotterv2.VolumeSnapshotter, error) {
 	plugin, err := r.sharedPluginProcess.getByKindAndName(r.key)
 	if err != nil {
 		return nil, err
 	}
 
-	volumeSnapshotter, ok := plugin.(velero.VolumeSnapshotter)
+	volumeSnapshotter, ok := plugin.(volumesnapshotterv2.VolumeSnapshotter)
 	if !ok {
 		return nil, errors.Errorf("%T is not a VolumeSnapshotter!", plugin)
 	}
@@ -74,7 +77,7 @@ func (r *restartableVolumeSnapshotter) getVolumeSnapshotter() (velero.VolumeSnap
 }
 
 // getDelegate restarts the plugin process (if needed) and returns the volume snapshotter for this restartableVolumeSnapshotter.
-func (r *restartableVolumeSnapshotter) getDelegate() (velero.VolumeSnapshotter, error) {
+func (r *restartableVolumeSnapshotter) getDelegate() (volumesnapshotterv2.VolumeSnapshotter, error) {
 	if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
 		return nil, err
 	}
@@ -102,7 +105,7 @@ func (r *restartableVolumeSnapshotter) Init(config map[string]string) error {
 
 // init calls Init on volumeSnapshotter with config. This is split out from Init() so that both Init() and reinitialize() may
 // call it using a specific VolumeSnapshotter.
-func (r *restartableVolumeSnapshotter) init(volumeSnapshotter velero.VolumeSnapshotter, config map[string]string) error {
+func (r *restartableVolumeSnapshotter) init(volumeSnapshotter volumesnapshotterv2.VolumeSnapshotter, config map[string]string) error {
 	return volumeSnapshotter.Init(config)
 }
@@ -159,3 +162,67 @@ func (r *restartableVolumeSnapshotter) DeleteSnapshot(snapshotID string) error {
 	}
 	return delegate.DeleteSnapshot(snapshotID)
 }
+
+// Version 2
+func (r *restartableVolumeSnapshotter) InitV2(ctx context.Context, config map[string]string) error {
+	return r.Init(config)
+}
+
+// CreateVolumeFromSnapshotV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableVolumeSnapshotter) CreateVolumeFromSnapshotV2(
+	ctx context.Context, snapshotID string, volumeType string, volumeAZ string, iops *int64) (volumeID string, err error) {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return "", err
+	}
+	return delegate.CreateVolumeFromSnapshotV2(ctx, snapshotID, volumeType, volumeAZ, iops)
+}
+
+// GetVolumeIDV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableVolumeSnapshotter) GetVolumeIDV2(
+	ctx context.Context, pv runtime.Unstructured) (string, error) {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return "", err
+	}
+	return delegate.GetVolumeIDV2(ctx, pv)
+}
+
+// SetVolumeIDV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableVolumeSnapshotter) SetVolumeIDV2(
+	ctx context.Context, pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error) {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return nil, err
+	}
+	return delegate.SetVolumeIDV2(ctx, pv, volumeID)
+}
+
+// GetVolumeInfoV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableVolumeSnapshotter) GetVolumeInfoV2(
+	ctx context.Context, volumeID string, volumeAZ string) (string, *int64, error) {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return "", nil, err
+	}
+	return delegate.GetVolumeInfoV2(ctx, volumeID, volumeAZ)
+}
+
+// CreateSnapshotV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableVolumeSnapshotter) CreateSnapshotV2(
+	ctx context.Context, volumeID string, volumeAZ string, tags map[string]string) (snapshotID string, err error) {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return "", err
+	}
+	return delegate.CreateSnapshotV2(ctx, volumeID, volumeAZ, tags)
+}
+
+// DeleteSnapshotV2 restarts the plugin's process if needed, then delegates the call.
+func (r *restartableVolumeSnapshotter) DeleteSnapshotV2(ctx context.Context, snapshotID string) error {
+	delegate, err := r.getDelegate()
+	if err != nil {
+		return err
+	}
+	return delegate.DeleteSnapshotV2(ctx, snapshotID)
+}

@@ -1,5 +1,5 @@
 /*
-Copyright 2019 the Velero contributors.
+Copyright 2021 the Velero contributors.
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -76,6 +76,13 @@ func (c *BackupItemActionGRPCClient) AppliesTo() (velero.ResourceSelector, error
 }
 
 func (c *BackupItemActionGRPCClient) Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error) {
+	return c.ExecuteV2(context.Background(), item, backup)
+}
+
+func (c *BackupItemActionGRPCClient) ExecuteV2(
+	ctx context.Context, item runtime.Unstructured, backup *api.Backup) (
+	runtime.Unstructured, []velero.ResourceIdentifier, error) {
+
 	itemJSON, err := json.Marshal(item.UnstructuredContent())
 	if err != nil {
 		return nil, nil, errors.WithStack(err)
@@ -92,7 +99,7 @@ func (c *BackupItemActionGRPCClient) Execute(item runtime.Unstructured, backup *
 		Backup: backupJSON,
 	}
 
-	res, err := c.grpcClient.Execute(context.Background(), req)
+	res, err := c.grpcClient.Execute(ctx, req)
 	if err != nil {
 		return nil, nil, fromGRPCError(err)
 	}

@@ -26,6 +26,7 @@ import (
 	api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
 	proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
 	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
 )
 
 // BackupItemActionGRPCServer implements the proto-generated BackupItemAction interface, and accepts
@@ -34,13 +35,13 @@ type BackupItemActionGRPCServer struct {
 	mux *serverMux
 }
 
-func (s *BackupItemActionGRPCServer) getImpl(name string) (velero.BackupItemAction, error) {
+func (s *BackupItemActionGRPCServer) getImpl(name string) (backupitemactionv2.BackupItemAction, error) {
 	impl, err := s.mux.getHandler(name)
 	if err != nil {
 		return nil, err
 	}
 
-	itemAction, ok := impl.(velero.BackupItemAction)
+	itemAction, ok := impl.(backupitemactionv2.BackupItemAction)
 	if !ok {
 		return nil, errors.Errorf("%T is not a backup item action", impl)
 	}
@@ -98,7 +99,7 @@ func (s *BackupItemActionGRPCServer) Execute(ctx context.Context, req *proto.Exe
 		return nil, newGRPCError(errors.WithStack(err))
 	}
 
-	updatedItem, additionalItems, err := impl.Execute(&item, &backup)
+	updatedItem, additionalItems, err := impl.ExecuteV2(ctx, &item, &backup)
 	if err != nil {
 		return nil, newGRPCError(err)
 	}

@@ -25,9 +25,10 @@ import (
 
 	proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
 	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
 )
 
-var _ velero.DeleteItemAction = &DeleteItemActionGRPCClient{}
+var _ deleteitemactionv2.DeleteItemAction = &DeleteItemActionGRPCClient{}
 
 // NewDeleteItemActionPlugin constructs a DeleteItemActionPlugin.
 func NewDeleteItemActionPlugin(options ...PluginOption) *DeleteItemActionPlugin {
@@ -70,6 +71,10 @@ func (c *DeleteItemActionGRPCClient) AppliesTo() (velero.ResourceSelector, error
 }
 
 func (c *DeleteItemActionGRPCClient) Execute(input *velero.DeleteItemActionExecuteInput) error {
+	return c.ExecuteV2(context.Background(), input)
+}
+
+func (c *DeleteItemActionGRPCClient) ExecuteV2(ctx context.Context, input *velero.DeleteItemActionExecuteInput) error {
 	itemJSON, err := json.Marshal(input.Item.UnstructuredContent())
 	if err != nil {
 		return errors.WithStack(err)
@@ -87,7 +92,7 @@ func (c *DeleteItemActionGRPCClient) Execute(input *velero.DeleteItemActionExecu
 	}
 
 	// First return item is just an empty struct no matter what.
-	if _, err = c.grpcClient.Execute(context.Background(), req); err != nil {
+	if _, err = c.grpcClient.Execute(ctx, req); err != nil {
 		return fromGRPCError(err)
 	}

@@ -26,6 +26,7 @@ import (
 	api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
 	proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
 	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
 )
 
 // DeleteItemActionGRPCServer implements the proto-generated DeleteItemActionServer interface, and accepts
@@ -34,13 +35,13 @@ type DeleteItemActionGRPCServer struct {
 	mux *serverMux
 }
 
-func (s *DeleteItemActionGRPCServer) getImpl(name string) (velero.DeleteItemAction, error) {
+func (s *DeleteItemActionGRPCServer) getImpl(name string) (deleteitemactionv2.DeleteItemAction, error) {
 	impl, err := s.mux.getHandler(name)
 	if err != nil {
 		return nil, err
 	}
 
-	itemAction, ok := impl.(velero.DeleteItemAction)
+	itemAction, ok := impl.(deleteitemactionv2.DeleteItemAction)
 	if !ok {
 		return nil, errors.Errorf("%T is not a delete item action", impl)
 	}
@@ -76,7 +77,8 @@ func (s *DeleteItemActionGRPCServer) AppliesTo(ctx context.Context, req *proto.D
 	}, nil
 }
 
-func (s *DeleteItemActionGRPCServer) Execute(ctx context.Context, req *proto.DeleteItemActionExecuteRequest) (_ *proto.Empty, err error) {
+func (s *DeleteItemActionGRPCServer) Execute(
+	ctx context.Context, req *proto.DeleteItemActionExecuteRequest) (_ *proto.Empty, err error) {
 	defer func() {
 		if recoveredErr := handlePanic(recover()); recoveredErr != nil {
 			err = recoveredErr
@@ -101,7 +103,7 @@ func (s *DeleteItemActionGRPCServer) Execute(ctx context.Context, req *proto.Del
 		return nil, newGRPCError(errors.WithStack(err))
 	}
 
-	if err := impl.Execute(&velero.DeleteItemActionExecuteInput{
+	if err := impl.ExecuteV2(ctx, &velero.DeleteItemActionExecuteInput{
 		Item:   &item,
 		Backup: &backup,
 	}); err != nil {

@@ -69,7 +69,13 @@ func (c *ObjectStoreGRPCClient) Init(config map[string]string) error {
 // PutObject creates a new object using the data in body within the specified
 // object storage bucket with the given key.
 func (c *ObjectStoreGRPCClient) PutObject(bucket, key string, body io.Reader) error {
-	stream, err := c.grpcClient.PutObject(context.Background())
+	return c.PutObjectV2(context.Background(), bucket, key, body)
+}
+
+// PutObjectV2 creates a new object using the data in body within the specified
+// object storage bucket with the given key.
+func (c *ObjectStoreGRPCClient) PutObjectV2(ctx context.Context, bucket, key string, body io.Reader) error {
+	stream, err := c.grpcClient.PutObject(ctx)
 	if err != nil {
 		return fromGRPCError(err)
 	}
@@ -98,13 +104,18 @@ func (c *ObjectStoreGRPCClient) PutObject(bucket, key string, body io.Reader) er
 
 // ObjectExists checks if there is an object with the given key in the object storage bucket.
 func (c *ObjectStoreGRPCClient) ObjectExists(bucket, key string) (bool, error) {
+	return c.ObjectExistsV2(context.Background(), bucket, key)
+}
+
+// ObjectExistsV2 checks if there is an object with the given key in the object storage bucket.
+func (c *ObjectStoreGRPCClient) ObjectExistsV2(ctx context.Context, bucket, key string) (bool, error) {
 	req := &proto.ObjectExistsRequest{
 		Plugin: c.plugin,
 		Bucket: bucket,
 		Key:    key,
 	}
 
-	res, err := c.grpcClient.ObjectExists(context.Background(), req)
+	res, err := c.grpcClient.ObjectExists(ctx, req)
 	if err != nil {
 		return false, err
 	}
@@ -115,13 +126,19 @@ func (c *ObjectStoreGRPCClient) ObjectExists(bucket, key string) (bool, error) {
 // GetObject retrieves the object with the given key from the specified
 // bucket in object storage.
 func (c *ObjectStoreGRPCClient) GetObject(bucket, key string) (io.ReadCloser, error) {
+	return c.GetObjectV2(context.Background(), bucket, key)
+}
+
+// GetObjectV2 retrieves the object with the given key from the specified
+// bucket in object storage.
+func (c *ObjectStoreGRPCClient) GetObjectV2(ctx context.Context, bucket, key string) (io.ReadCloser, error) {
 	req := &proto.GetObjectRequest{
 		Plugin: c.plugin,
 		Bucket: bucket,
 		Key:    key,
 	}
 
-	stream, err := c.grpcClient.GetObject(context.Background(), req)
+	stream, err := c.grpcClient.GetObject(ctx, req)
 	if err != nil {
 		return nil, fromGRPCError(err)
 	}
@@ -155,6 +172,14 @@ func (c *ObjectStoreGRPCClient) GetObject(bucket, key string) (io.ReadCloser, er
 // after the provided prefix and before the provided delimiter (this is
 // often used to simulate a directory hierarchy in object storage).
 func (c *ObjectStoreGRPCClient) ListCommonPrefixes(bucket, prefix, delimiter string) ([]string, error) {
+	return c.ListCommonPrefixesV2(context.Background(), bucket, prefix, delimiter)
+}
+
+// ListCommonPrefixesV2 gets a list of all object key prefixes that come
+// after the provided prefix and before the provided delimiter (this is
+// often used to simulate a directory hierarchy in object storage).
+func (c *ObjectStoreGRPCClient) ListCommonPrefixesV2(
+	ctx context.Context, bucket, prefix, delimiter string) ([]string, error) {
 	req := &proto.ListCommonPrefixesRequest{
 		Plugin: c.plugin,
 		Bucket: bucket,
@@ -162,7 +187,7 @@ func (c *ObjectStoreGRPCClient) ListCommonPrefixes(bucket, prefix, delimiter str
 		Delimiter: delimiter,
 	}
 
-	res, err := c.grpcClient.ListCommonPrefixes(context.Background(), req)
+	res, err := c.grpcClient.ListCommonPrefixes(ctx, req)
 	if err != nil {
 		return nil, fromGRPCError(err)
 	}
@@ -172,13 +197,19 @@ func (c *ObjectStoreGRPCClient) ListCommonPrefixes(bucket, prefix, delimiter str
 
 // ListObjects gets a list of all objects in bucket that have the same prefix.
 func (c *ObjectStoreGRPCClient) ListObjects(bucket, prefix string) ([]string, error) {
+	return c.ListObjectsV2(context.Background(), bucket, prefix)
+}
+
+// ListObjectsV2 gets a list of all objects in bucket that have the same prefix.
+func (c *ObjectStoreGRPCClient) ListObjectsV2(
+	ctx context.Context, bucket, prefix string) ([]string, error) {
 	req := &proto.ListObjectsRequest{
 		Plugin: c.plugin,
 		Bucket: bucket,
 		Prefix: prefix,
 	}
 
-	res, err := c.grpcClient.ListObjects(context.Background(), req)
+	res, err := c.grpcClient.ListObjects(ctx, req)
 	if err != nil {
 		return nil, fromGRPCError(err)
 	}
@@ -189,13 +220,19 @@ func (c *ObjectStoreGRPCClient) ListObjects(bucket, prefix string) ([]string, er
 // DeleteObject removes object with the specified key from the given
 // bucket.
 func (c *ObjectStoreGRPCClient) DeleteObject(bucket, key string) error {
+	return c.DeleteObjectV2(context.Background(), bucket, key)
+}
+
+// DeleteObjectV2 removes object with the specified key from the given bucket.
+func (c *ObjectStoreGRPCClient) DeleteObjectV2(
+	ctx context.Context, bucket, key string) error {
 	req := &proto.DeleteObjectRequest{
 		Plugin: c.plugin,
 		Bucket: bucket,
 		Key:    key,
 	}
 
-	if _, err := c.grpcClient.DeleteObject(context.Background(), req); err != nil {
+	if _, err := c.grpcClient.DeleteObject(ctx, req); err != nil {
 		return fromGRPCError(err)
 	}
@@ -204,6 +241,12 @@ func (c *ObjectStoreGRPCClient) DeleteObject(bucket, key string) error {
 
 // CreateSignedURL creates a pre-signed URL for the given bucket and key that expires after ttl.
 func (c *ObjectStoreGRPCClient) CreateSignedURL(bucket, key string, ttl time.Duration) (string, error) {
+	return c.CreateSignedURLV2(context.Background(), bucket, key, ttl)
+}
+
+// CreateSignedURLV2 creates a pre-signed URL for the given bucket and key that expires after ttl.
+func (c *ObjectStoreGRPCClient) CreateSignedURLV2(
+	ctx context.Context, bucket, key string, ttl time.Duration) (string, error) {
 	req := &proto.CreateSignedURLRequest{
 		Plugin: c.plugin,
 		Bucket: bucket,
@@ -211,7 +254,7 @@ func (c *ObjectStoreGRPCClient) CreateSignedURL(bucket, key string, ttl time.Dur
 		Ttl:    int64(ttl),
 	}
 
-	res, err := c.grpcClient.CreateSignedURL(context.Background(), req)
+	res, err := c.grpcClient.CreateSignedURL(ctx, req)
 	if err != nil {
 		return "", fromGRPCError(err)
 	}

@@ -24,7 +24,7 @@ import (
 	"golang.org/x/net/context"
 
 	proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
-	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
 )
 
 // ObjectStoreGRPCServer implements the proto-generated ObjectStoreServer interface, and accepts
@@ -33,13 +33,13 @@ type ObjectStoreGRPCServer struct {
 	mux *serverMux
 }
 
-func (s *ObjectStoreGRPCServer) getImpl(name string) (velero.ObjectStore, error) {
+func (s *ObjectStoreGRPCServer) getImpl(name string) (objectstorev2.ObjectStore, error) {
 	impl, err := s.mux.getHandler(name)
 	if err != nil {
 		return nil, err
 	}
 
-	itemAction, ok := impl.(velero.ObjectStore)
+	itemAction, ok := impl.(objectstorev2.ObjectStore)
 	if !ok {
 		return nil, errors.Errorf("%T is not an object store", impl)
 	}
@@ -62,7 +62,7 @@ func (s *ObjectStoreGRPCServer) Init(ctx context.Context, req *proto.ObjectStore
 		return nil, newGRPCError(err)
 	}
 
-	if err := impl.Init(req.Config); err != nil {
+	if err := impl.InitV2(ctx, req.Config); err != nil {
 		return nil, newGRPCError(err)
 	}
 
@@ -141,7 +141,7 @@ func (s *ObjectStoreGRPCServer) ObjectExists(ctx context.Context, req *proto.Obj
 		return nil, newGRPCError(err)
 	}
 
-	exists, err := impl.ObjectExists(req.Bucket, req.Key)
+	exists, err := impl.ObjectExistsV2(ctx, req.Bucket, req.Key)
 	if err != nil {
 		return nil, newGRPCError(err)
 	}
@@ -200,7 +200,7 @@ func (s *ObjectStoreGRPCServer) ListCommonPrefixes(ctx context.Context, req *pro
 		return nil, newGRPCError(err)
 	}
 
-	prefixes, err := impl.ListCommonPrefixes(req.Bucket, req.Prefix, req.Delimiter)
+	prefixes, err := impl.ListCommonPrefixesV2(ctx, req.Bucket, req.Prefix, req.Delimiter)
 	if err != nil {
 		return nil, newGRPCError(err)
 	}
@@ -221,7 +221,7 @@ func (s *ObjectStoreGRPCServer) ListObjects(ctx context.Context, req *proto.List
 		return nil, newGRPCError(err)
 	}
 
-	keys, err := impl.ListObjects(req.Bucket, req.Prefix)
+	keys, err := impl.ListObjectsV2(ctx, req.Bucket, req.Prefix)
 	if err != nil {
 		return nil, newGRPCError(err)
 	}
@@ -243,7 +243,7 @@ func (s *ObjectStoreGRPCServer) DeleteObject(ctx context.Context, req *proto.Del
 		return nil, newGRPCError(err)
 	}
 
-	if err := impl.DeleteObject(req.Bucket, req.Key); err != nil {
+	if err := impl.DeleteObjectV2(ctx, req.Bucket, req.Key); err != nil {
 		return nil, newGRPCError(err)
 	}
 
@@ -263,7 +263,7 @@ func (s *ObjectStoreGRPCServer) CreateSignedURL(ctx context.Context, req *proto.
 		return nil, newGRPCError(err)
 	}
 
-	url, err := impl.CreateSignedURL(req.Bucket, req.Key, time.Duration(req.Ttl))
+	url, err := impl.CreateSignedURLV2(ctx, req.Bucket, req.Key, time.Duration(req.Ttl))
 	if err != nil {
 		return nil, newGRPCError(err)
 	}
@@ -45,6 +45,58 @@ const (
 	PluginKindPluginLister PluginKind = "PluginLister"
 )
 
+const (
+	// PluginKindObjectStoreV2 represents an object store plugin version 2.
+	PluginKindObjectStoreV2 PluginKind = "ObjectStoreV2"
+
+	// PluginKindVolumeSnapshotterV2 represents a volume snapshotter plugin version 2.
+	PluginKindVolumeSnapshotterV2 PluginKind = "VolumeSnapshotterV2"
+
+	// PluginKindBackupItemActionV2 represents a backup item action plugin version 2.
+	PluginKindBackupItemActionV2 PluginKind = "BackupItemActionV2"
+
+	// PluginKindRestoreItemActionV2 represents a restore item action plugin version 2.
+	PluginKindRestoreItemActionV2 PluginKind = "RestoreItemActionV2"
+
+	// PluginKindDeleteItemActionV2 represents a delete item action plugin version 2.
+	PluginKindDeleteItemActionV2 PluginKind = "DeleteItemActionV2"
+)
+
+func ObjectStoreKinds() []PluginKind {
+	return []PluginKind{
+		PluginKindObjectStoreV2,
+		PluginKindObjectStore,
+	}
+}
+
+func VolumeSnapshotterKinds() []PluginKind {
+	return []PluginKind{
+		PluginKindVolumeSnapshotterV2,
+		PluginKindVolumeSnapshotter,
+	}
+}
+
+func BackupItemActionKinds() []PluginKind {
+	return []PluginKind{
+		PluginKindBackupItemActionV2,
+		PluginKindBackupItemAction,
+	}
+}
+
+func RestoreItemActionKinds() []PluginKind {
+	return []PluginKind{
+		PluginKindRestoreItemActionV2,
+		PluginKindRestoreItemAction,
+	}
+}
+
+func DeleteItemActionKinds() []PluginKind {
+	return []PluginKind{
+		PluginKindDeleteItemActionV2,
+		PluginKindDeleteItemAction,
+	}
+}
+
 // AllPluginKinds contains all the valid plugin kinds that Velero supports, excluding PluginLister because that is not a
 // kind that a developer would ever need to implement (it's handled by Velero and the Velero plugin library code).
 func AllPluginKinds() map[string]PluginKind {
@@ -54,5 +106,11 @@ func AllPluginKinds() map[string]PluginKind {
 	allPluginKinds[PluginKindBackupItemAction.String()] = PluginKindBackupItemAction
 	allPluginKinds[PluginKindRestoreItemAction.String()] = PluginKindRestoreItemAction
 	allPluginKinds[PluginKindDeleteItemAction.String()] = PluginKindDeleteItemAction
+	// Version 2
+	allPluginKinds[PluginKindObjectStoreV2.String()] = PluginKindObjectStoreV2
+	allPluginKinds[PluginKindVolumeSnapshotterV2.String()] = PluginKindVolumeSnapshotterV2
+	allPluginKinds[PluginKindBackupItemActionV2.String()] = PluginKindBackupItemActionV2
+	allPluginKinds[PluginKindRestoreItemActionV2.String()] = PluginKindRestoreItemActionV2
+	allPluginKinds[PluginKindDeleteItemActionV2.String()] = PluginKindDeleteItemActionV2
 	return allPluginKinds
 }
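Note that each `*Kinds()` slice above lists the V2 kind before the V1 kind, so any caller that walks the slice in order prefers the newer API and falls back to the legacy one. A self-contained sketch of how that ordering can drive a lookup (`resolve` is a hypothetical helper for illustration; only the kind names come from the diff):

```go
package main

import "fmt"

// PluginKind mirrors the string-typed kind used in the diff above.
type PluginKind string

const (
	PluginKindObjectStore   PluginKind = "ObjectStore"
	PluginKindObjectStoreV2 PluginKind = "ObjectStoreV2"
)

// ObjectStoreKinds lists V2 first, matching the ordering in the diff.
func ObjectStoreKinds() []PluginKind {
	return []PluginKind{PluginKindObjectStoreV2, PluginKindObjectStore}
}

// resolve is a hypothetical helper (not in the diff) that returns the
// first registered kind, giving a V2-first, V1-fallback lookup.
func resolve(registered map[PluginKind]bool) (PluginKind, bool) {
	for _, k := range ObjectStoreKinds() {
		if registered[k] {
			return k, true
		}
	}
	return "", false
}

func main() {
	onlyV1 := map[PluginKind]bool{PluginKindObjectStore: true}
	both := map[PluginKind]bool{PluginKindObjectStore: true, PluginKindObjectStoreV2: true}
	k, _ := resolve(onlyV1)
	fmt.Println(k) // falls back to ObjectStore
	k, _ = resolve(both)
	fmt.Println(k) // prefers ObjectStoreV2
}
```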
@@ -27,9 +27,10 @@ import (
 
 	proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
 	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	restoreitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
 )
 
-var _ velero.RestoreItemAction = &RestoreItemActionGRPCClient{}
+var _ restoreitemactionv2.RestoreItemAction = &RestoreItemActionGRPCClient{}
 
 // NewRestoreItemActionPlugin constructs a RestoreItemActionPlugin.
 func NewRestoreItemActionPlugin(options ...PluginOption) *RestoreItemActionPlugin {
@@ -71,7 +72,14 @@ func (c *RestoreItemActionGRPCClient) AppliesTo() (velero.ResourceSelector, erro
 	}, nil
 }
 
-func (c *RestoreItemActionGRPCClient) Execute(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
+func (c *RestoreItemActionGRPCClient) Execute(
+	input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
+	return c.ExecuteV2(context.Background(), input)
+}
+
+func (c *RestoreItemActionGRPCClient) ExecuteV2(
+	ctx context.Context, input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
+
 	itemJSON, err := json.Marshal(input.Item.UnstructuredContent())
 	if err != nil {
 		return nil, errors.WithStack(err)
@@ -94,7 +102,7 @@ func (c *RestoreItemActionGRPCClient) Execute(input *velero.RestoreItemActionExe
 		Restore: restoreJSON,
 	}
 
-	res, err := c.grpcClient.Execute(context.Background(), req)
+	res, err := c.grpcClient.Execute(ctx, req)
 	if err != nil {
 		return nil, fromGRPCError(err)
 	}
@@ -26,6 +26,7 @@ import (
 	api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
 	proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
 	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	restoreitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
 )
 
 // RestoreItemActionGRPCServer implements the proto-generated RestoreItemActionServer interface, and accepts
@@ -34,13 +35,13 @@ type RestoreItemActionGRPCServer struct {
 	mux *serverMux
 }
 
-func (s *RestoreItemActionGRPCServer) getImpl(name string) (velero.RestoreItemAction, error) {
+func (s *RestoreItemActionGRPCServer) getImpl(name string) (restoreitemactionv2.RestoreItemAction, error) {
 	impl, err := s.mux.getHandler(name)
 	if err != nil {
 		return nil, err
 	}
 
-	itemAction, ok := impl.(velero.RestoreItemAction)
+	itemAction, ok := impl.(restoreitemactionv2.RestoreItemAction)
 	if !ok {
 		return nil, errors.Errorf("%T is not a restore item action", impl)
 	}
@@ -76,7 +77,9 @@ func (s *RestoreItemActionGRPCServer) AppliesTo(ctx context.Context, req *proto.
 	}, nil
 }
 
-func (s *RestoreItemActionGRPCServer) Execute(ctx context.Context, req *proto.RestoreItemActionExecuteRequest) (response *proto.RestoreItemActionExecuteResponse, err error) {
+func (s *RestoreItemActionGRPCServer) Execute(
+	ctx context.Context, req *proto.RestoreItemActionExecuteRequest) (response *proto.RestoreItemActionExecuteResponse, err error) {
+
 	defer func() {
 		if recoveredErr := handlePanic(recover()); recoveredErr != nil {
 			err = recoveredErr
@@ -106,11 +109,12 @@ func (s *RestoreItemActionGRPCServer) Execute(ctx context.Context, req *proto.Re
 		return nil, newGRPCError(errors.WithStack(err))
 	}
 
-	executeOutput, err := impl.Execute(&velero.RestoreItemActionExecuteInput{
-		Item:           &item,
-		ItemFromBackup: &itemFromBackup,
-		Restore:        &restoreObj,
-	})
+	executeOutput, err := impl.ExecuteV2(ctx,
+		&velero.RestoreItemActionExecuteInput{
+			Item:           &item,
+			ItemFromBackup: &itemFromBackup,
+			Restore:        &restoreObj,
+		})
 	if err != nil {
 		return nil, newGRPCError(err)
 	}
@@ -74,21 +74,65 @@ type Server interface {
 	// RegisterDeleteItemActions registers multiple Delete item actions.
 	RegisterDeleteItemActions(map[string]HandlerInitializer) Server
 
+	// Version 2
+
+	// RegisterVolumeSnapshottersV2 registers multiple volume snapshotters.
+	RegisterVolumeSnapshottersV2(map[string]HandlerInitializer) Server
+
+	// RegisterObjectStoreV2 registers an object store. Accepted format
+	// for the plugin name is <DNS subdomain>/<non-empty name>.
+	RegisterObjectStoreV2(pluginName string, initializer HandlerInitializer) Server
+
+	// RegisterBackupItemActionV2 registers a backup item action. Accepted format
+	// for the plugin name is <DNS subdomain>/<non-empty name>.
+	RegisterBackupItemActionV2(pluginName string, initializer HandlerInitializer) Server
+
+	// RegisterBackupItemActionsV2 registers multiple backup item actions.
+	RegisterBackupItemActionsV2(map[string]HandlerInitializer) Server
+
+	// RegisterVolumeSnapshotterV2 registers a volume snapshotter. Accepted format
+	// for the plugin name is <DNS subdomain>/<non-empty name>.
+	RegisterVolumeSnapshotterV2(pluginName string, initializer HandlerInitializer) Server
+
+	// RegisterObjectStoresV2 registers multiple object stores.
+	RegisterObjectStoresV2(map[string]HandlerInitializer) Server
+
+	// RegisterRestoreItemActionV2 registers a restore item action. Accepted format
+	// for the plugin name is <DNS subdomain>/<non-empty name>.
+	RegisterRestoreItemActionV2(pluginName string, initializer HandlerInitializer) Server
+
+	// RegisterRestoreItemActionsV2 registers multiple restore item actions.
+	RegisterRestoreItemActionsV2(map[string]HandlerInitializer) Server
+
+	// RegisterDeleteItemActionV2 registers a delete item action. Accepted format
+	// for the plugin name is <DNS subdomain>/<non-empty name>.
+	RegisterDeleteItemActionV2(pluginName string, initializer HandlerInitializer) Server
+
+	// RegisterDeleteItemActionsV2 registers multiple Delete item actions.
+	RegisterDeleteItemActionsV2(map[string]HandlerInitializer) Server
+
 	// Server runs the plugin server.
 	Serve()
 }
 
 // server implements Server.
 type server struct {
-	log          *logrus.Logger
-	logLevelFlag *logging.LevelFlag
-	flagSet      *pflag.FlagSet
-	featureSet   *veleroflag.StringArray
+	log                 *logrus.Logger
+	logLevelFlag        *logging.LevelFlag
+	flagSet             *pflag.FlagSet
+	featureSet          *veleroflag.StringArray
+	// Version 1
 	backupItemAction    *BackupItemActionPlugin
 	volumeSnapshotter   *VolumeSnapshotterPlugin
 	objectStore         *ObjectStorePlugin
 	restoreItemAction   *RestoreItemActionPlugin
 	deleteItemAction    *DeleteItemActionPlugin
+	// Version 2
+	backupItemActionV2  *BackupItemActionPlugin
+	volumeSnapshotterV2 *VolumeSnapshotterPlugin
+	objectStoreV2       *ObjectStorePlugin
+	restoreItemActionV2 *RestoreItemActionPlugin
+	deleteItemActionV2  *DeleteItemActionPlugin
 }
 
 // NewServer returns a new Server
@@ -97,14 +141,19 @@ func NewServer() Server {
 	features := veleroflag.NewStringArray()
 
 	return &server{
-		log:               log,
-		logLevelFlag:      logging.LogLevelFlag(log.Level),
-		featureSet:        &features,
-		backupItemAction:  NewBackupItemActionPlugin(serverLogger(log)),
-		volumeSnapshotter: NewVolumeSnapshotterPlugin(serverLogger(log)),
-		objectStore:       NewObjectStorePlugin(serverLogger(log)),
-		restoreItemAction: NewRestoreItemActionPlugin(serverLogger(log)),
-		deleteItemAction:  NewDeleteItemActionPlugin(serverLogger(log)),
+		log:                 log,
+		logLevelFlag:        logging.LogLevelFlag(log.Level),
+		featureSet:          &features,
+		backupItemAction:    NewBackupItemActionPlugin(serverLogger(log)),
+		volumeSnapshotter:   NewVolumeSnapshotterPlugin(serverLogger(log)),
+		objectStore:         NewObjectStorePlugin(serverLogger(log)),
+		restoreItemAction:   NewRestoreItemActionPlugin(serverLogger(log)),
+		deleteItemAction:    NewDeleteItemActionPlugin(serverLogger(log)),
+		backupItemActionV2:  NewBackupItemActionPlugin(serverLogger(log)),
+		volumeSnapshotterV2: NewVolumeSnapshotterPlugin(serverLogger(log)),
+		objectStoreV2:       NewObjectStorePlugin(serverLogger(log)),
+		restoreItemActionV2: NewRestoreItemActionPlugin(serverLogger(log)),
+		deleteItemActionV2:  NewDeleteItemActionPlugin(serverLogger(log)),
 	}
 }
 
@@ -177,6 +226,67 @@ func (s *server) RegisterDeleteItemActions(m map[string]HandlerInitializer) Serv
 	return s
 }
 
+// Version 2
+func (s *server) RegisterBackupItemActionV2(name string, initializer HandlerInitializer) Server {
+	s.backupItemActionV2.register(name, initializer)
+	return s
+}
+
+func (s *server) RegisterBackupItemActionsV2(m map[string]HandlerInitializer) Server {
+	for name := range m {
+		s.RegisterBackupItemActionV2(name, m[name])
+	}
+	return s
+}
+
+func (s *server) RegisterVolumeSnapshotterV2(name string, initializer HandlerInitializer) Server {
+	s.volumeSnapshotterV2.register(name, initializer)
+	return s
+}
+
+func (s *server) RegisterVolumeSnapshottersV2(m map[string]HandlerInitializer) Server {
+	for name := range m {
+		s.RegisterVolumeSnapshotterV2(name, m[name])
+	}
+	return s
+}
+
+func (s *server) RegisterObjectStoreV2(name string, initializer HandlerInitializer) Server {
+	s.objectStoreV2.register(name, initializer)
+	return s
+}
+
+func (s *server) RegisterObjectStoresV2(m map[string]HandlerInitializer) Server {
+	for name := range m {
+		s.RegisterObjectStoreV2(name, m[name])
+	}
+	return s
+}
+
+func (s *server) RegisterRestoreItemActionV2(name string, initializer HandlerInitializer) Server {
+	s.restoreItemActionV2.register(name, initializer)
+	return s
+}
+
+func (s *server) RegisterRestoreItemActionsV2(m map[string]HandlerInitializer) Server {
+	for name := range m {
+		s.RegisterRestoreItemActionV2(name, m[name])
+	}
+	return s
+}
+
+func (s *server) RegisterDeleteItemActionV2(name string, initializer HandlerInitializer) Server {
+	s.deleteItemActionV2.register(name, initializer)
+	return s
+}
+
+func (s *server) RegisterDeleteItemActionsV2(m map[string]HandlerInitializer) Server {
+	for name := range m {
+		s.RegisterDeleteItemActionV2(name, m[name])
+	}
+	return s
+}
+
 // getNames returns a list of PluginIdentifiers registered with plugin.
 func getNames(command string, kind PluginKind, plugin Interface) []PluginIdentifier {
 	var pluginIdentifiers []PluginIdentifier
@@ -206,6 +316,12 @@ func (s *server) Serve() {
 	pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindObjectStore, s.objectStore)...)
 	pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindRestoreItemAction, s.restoreItemAction)...)
 	pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindDeleteItemAction, s.deleteItemAction)...)
+	// Version 2
+	pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindBackupItemActionV2, s.backupItemActionV2)...)
+	pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindVolumeSnapshotterV2, s.volumeSnapshotterV2)...)
+	pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindObjectStoreV2, s.objectStoreV2)...)
+	pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindRestoreItemActionV2, s.restoreItemActionV2)...)
+	pluginIdentifiers = append(pluginIdentifiers, getNames(command, PluginKindDeleteItemActionV2, s.deleteItemActionV2)...)
 
 	pluginLister := NewPluginLister(pluginIdentifiers...)
 
@@ -218,6 +334,12 @@ func (s *server) Serve() {
 			string(PluginKindPluginLister):      NewPluginListerPlugin(pluginLister),
 			string(PluginKindRestoreItemAction): s.restoreItemAction,
 			string(PluginKindDeleteItemAction):  s.deleteItemAction,
+			// Version 2
+			string(PluginKindBackupItemActionV2):  s.backupItemActionV2,
+			string(PluginKindVolumeSnapshotterV2): s.volumeSnapshotterV2,
+			string(PluginKindObjectStoreV2):       s.objectStoreV2,
+			string(PluginKindRestoreItemActionV2): s.restoreItemActionV2,
+			string(PluginKindDeleteItemActionV2):  s.deleteItemActionV2,
 		},
 		GRPCServer: plugin.DefaultGRPCServer,
 	})
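Every `Register*` method above returns the `Server`, so a plugin binary can chain V1 and V2 registrations fluently before calling `Serve()`. A toy sketch of that design choice in isolation (the `server` and `HandlerInitializer` here are simplified stand-ins, not Velero's types):

```go
package main

import "fmt"

// HandlerInitializer mirrors the initializer shape used in the diff;
// the rest of this type is a toy stand-in, not Velero's implementation.
type HandlerInitializer func() (interface{}, error)

type server struct {
	objectStoresV2 map[string]HandlerInitializer
}

// RegisterObjectStoreV2 returns the receiver so registrations chain,
// matching the fluent style of the Server interface above.
func (s *server) RegisterObjectStoreV2(name string, init HandlerInitializer) *server {
	s.objectStoresV2[name] = init
	return s
}

// RegisterObjectStoresV2 registers multiple object stores via the
// single-registration method, as the diff does.
func (s *server) RegisterObjectStoresV2(m map[string]HandlerInitializer) *server {
	for name := range m {
		s.RegisterObjectStoreV2(name, m[name])
	}
	return s
}

func main() {
	s := &server{objectStoresV2: map[string]HandlerInitializer{}}
	s.RegisterObjectStoreV2("example.io/store-a", nil).
		RegisterObjectStoresV2(map[string]HandlerInitializer{"example.io/store-b": nil})
	fmt.Println(len(s.objectStoresV2)) // 2
}
```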
@@ -53,12 +53,19 @@ func newVolumeSnapshotterGRPCClient(base *clientBase, clientConn *grpc.ClientCon
 // configuration key-value pairs. It returns an error if the VolumeSnapshotter
 // cannot be initialized from the provided config.
 func (c *VolumeSnapshotterGRPCClient) Init(config map[string]string) error {
+	return c.InitV2(context.Background(), config)
+}
+
+// InitV2 prepares the VolumeSnapshotter for usage using the provided map of
+// configuration key-value pairs. It returns an error if the VolumeSnapshotter
+// cannot be initialized from the provided config.
+func (c *VolumeSnapshotterGRPCClient) InitV2(ctx context.Context, config map[string]string) error {
 	req := &proto.VolumeSnapshotterInitRequest{
 		Plugin: c.plugin,
 		Config: config,
 	}
 
-	if _, err := c.grpcClient.Init(context.Background(), req); err != nil {
+	if _, err := c.grpcClient.Init(ctx, req); err != nil {
 		return fromGRPCError(err)
 	}
 
@@ -67,7 +74,14 @@ func (c *VolumeSnapshotterGRPCClient) Init(config map[string]string) error {
 
 // CreateVolumeFromSnapshot creates a new block volume, initialized from the provided snapshot,
 // and with the specified type and IOPS (if using provisioned IOPS).
-func (c *VolumeSnapshotterGRPCClient) CreateVolumeFromSnapshot(snapshotID, volumeType, volumeAZ string, iops *int64) (string, error) {
+func (c *VolumeSnapshotterGRPCClient) CreateVolumeFromSnapshot(
+	snapshotID, volumeType, volumeAZ string, iops *int64) (string, error) {
+	return c.CreateVolumeFromSnapshotV2(context.Background(), snapshotID, volumeType, volumeAZ, iops)
+}
+
+func (c *VolumeSnapshotterGRPCClient) CreateVolumeFromSnapshotV2(
+	ctx context.Context, snapshotID, volumeType, volumeAZ string, iops *int64) (string, error) {
+
 	req := &proto.CreateVolumeRequest{
 		Plugin:     c.plugin,
 		SnapshotID: snapshotID,
@@ -81,7 +95,7 @@ func (c *VolumeSnapshotterGRPCClient) CreateVolumeFromSnapshot(snapshotID, volum
 		req.Iops = *iops
 	}
 
-	res, err := c.grpcClient.CreateVolumeFromSnapshot(context.Background(), req)
+	res, err := c.grpcClient.CreateVolumeFromSnapshot(ctx, req)
 	if err != nil {
 		return "", fromGRPCError(err)
 	}
@@ -92,13 +106,19 @@ func (c *VolumeSnapshotterGRPCClient) CreateVolumeFromSnapshot(snapshotID, volum
 // GetVolumeInfo returns the type and IOPS (if using provisioned IOPS) for a specified block
 // volume.
 func (c *VolumeSnapshotterGRPCClient) GetVolumeInfo(volumeID, volumeAZ string) (string, *int64, error) {
+	return c.GetVolumeInfoV2(context.Background(), volumeID, volumeAZ)
+}
+
+func (c *VolumeSnapshotterGRPCClient) GetVolumeInfoV2(
+	ctx context.Context, volumeID, volumeAZ string) (string, *int64, error) {
+
 	req := &proto.GetVolumeInfoRequest{
 		Plugin:   c.plugin,
 		VolumeID: volumeID,
 		VolumeAZ: volumeAZ,
 	}
 
-	res, err := c.grpcClient.GetVolumeInfo(context.Background(), req)
+	res, err := c.grpcClient.GetVolumeInfo(ctx, req)
 	if err != nil {
 		return "", nil, fromGRPCError(err)
 	}
@@ -114,6 +134,11 @@ func (c *VolumeSnapshotterGRPCClient) GetVolumeInfo(volumeID, volumeAZ string) (
 // CreateSnapshot creates a snapshot of the specified block volume, and applies the provided
 // set of tags to the snapshot.
 func (c *VolumeSnapshotterGRPCClient) CreateSnapshot(volumeID, volumeAZ string, tags map[string]string) (string, error) {
+	return c.CreateSnapshotV2(context.Background(), volumeID, volumeAZ, tags)
+}
+
+func (c *VolumeSnapshotterGRPCClient) CreateSnapshotV2(
+	ctx context.Context, volumeID, volumeAZ string, tags map[string]string) (string, error) {
 	req := &proto.CreateSnapshotRequest{
 		Plugin:   c.plugin,
 		VolumeID: volumeID,
@@ -121,7 +146,7 @@ func (c *VolumeSnapshotterGRPCClient) CreateSnapshot(volumeID, volumeAZ string,
 		Tags:     tags,
 	}
 
-	res, err := c.grpcClient.CreateSnapshot(context.Background(), req)
+	res, err := c.grpcClient.CreateSnapshot(ctx, req)
 	if err != nil {
 		return "", fromGRPCError(err)
 	}
@@ -131,12 +156,17 @@ func (c *VolumeSnapshotterGRPCClient) CreateSnapshot(volumeID, volumeAZ string,
 
 // DeleteSnapshot deletes the specified volume snapshot.
 func (c *VolumeSnapshotterGRPCClient) DeleteSnapshot(snapshotID string) error {
+	return c.DeleteSnapshotV2(context.Background(), snapshotID)
+}
+
+func (c *VolumeSnapshotterGRPCClient) DeleteSnapshotV2(
+	ctx context.Context, snapshotID string) error {
 	req := &proto.DeleteSnapshotRequest{
 		Plugin:     c.plugin,
 		SnapshotID: snapshotID,
 	}
 
-	if _, err := c.grpcClient.DeleteSnapshot(context.Background(), req); err != nil {
+	if _, err := c.grpcClient.DeleteSnapshot(ctx, req); err != nil {
 		return fromGRPCError(err)
 	}
 
@@ -144,6 +174,11 @@ func (c *VolumeSnapshotterGRPCClient) DeleteSnapshot(snapshotID string) error {
 }
 
 func (c *VolumeSnapshotterGRPCClient) GetVolumeID(pv runtime.Unstructured) (string, error) {
+	return c.GetVolumeIDV2(context.Background(), pv)
+}
+
+func (c *VolumeSnapshotterGRPCClient) GetVolumeIDV2(
+	ctx context.Context, pv runtime.Unstructured) (string, error) {
 	encodedPV, err := json.Marshal(pv.UnstructuredContent())
 	if err != nil {
 		return "", errors.WithStack(err)
@@ -154,7 +189,7 @@ func (c *VolumeSnapshotterGRPCClient) GetVolumeID(pv runtime.Unstructured) (stri
 		PersistentVolume: encodedPV,
 	}
 
-	resp, err := c.grpcClient.GetVolumeID(context.Background(), req)
+	resp, err := c.grpcClient.GetVolumeID(ctx, req)
 	if err != nil {
 		return "", fromGRPCError(err)
 	}
@@ -163,6 +198,11 @@ func (c *VolumeSnapshotterGRPCClient) GetVolumeID(pv runtime.Unstructured) (stri
 }
 
 func (c *VolumeSnapshotterGRPCClient) SetVolumeID(pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error) {
+	return c.SetVolumeIDV2(context.Background(), pv, volumeID)
+}
+
+func (c *VolumeSnapshotterGRPCClient) SetVolumeIDV2(
+	ctx context.Context, pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error) {
	encodedPV, err := json.Marshal(pv.UnstructuredContent())
 	if err != nil {
 		return nil, errors.WithStack(err)
@@ -174,7 +214,7 @@ func (c *VolumeSnapshotterGRPCClient) SetVolumeID(pv runtime.Unstructured, volum
 		VolumeID: volumeID,
 	}
 
-	resp, err := c.grpcClient.SetVolumeID(context.Background(), req)
+	resp, err := c.grpcClient.SetVolumeID(ctx, req)
 	if err != nil {
 		return nil, fromGRPCError(err)
 	}
@@ -24,7 +24,7 @@ import (
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
 
 	proto "github.com/vmware-tanzu/velero/pkg/plugin/generated"
-	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
 )
 
 // VolumeSnapshotterGRPCServer implements the proto-generated VolumeSnapshotterServer interface, and accepts
@@ -33,13 +33,13 @@ type VolumeSnapshotterGRPCServer struct {
 	mux *serverMux
 }
 
-func (s *VolumeSnapshotterGRPCServer) getImpl(name string) (velero.VolumeSnapshotter, error) {
+func (s *VolumeSnapshotterGRPCServer) getImpl(name string) (volumesnapshotterv2.VolumeSnapshotter, error) {
 	impl, err := s.mux.getHandler(name)
 	if err != nil {
 		return nil, err
 	}
 
-	volumeSnapshotter, ok := impl.(velero.VolumeSnapshotter)
+	volumeSnapshotter, ok := impl.(volumesnapshotterv2.VolumeSnapshotter)
 	if !ok {
 		return nil, errors.Errorf("%T is not a volume snapshotter", impl)
 	}
@@ -62,7 +62,7 @@ func (s *VolumeSnapshotterGRPCServer) Init(ctx context.Context, req *proto.Volum
 		return nil, newGRPCError(err)
 	}
 
-	if err := impl.Init(req.Config); err != nil {
+	if err := impl.InitV2(ctx, req.Config); err != nil {
 		return nil, newGRPCError(err)
 	}
 
@@ -92,7 +92,7 @@ func (s *VolumeSnapshotterGRPCServer) CreateVolumeFromSnapshot(ctx context.Conte
 		iops = &req.Iops
 	}
 
-	volumeID, err := impl.CreateVolumeFromSnapshot(snapshotID, volumeType, volumeAZ, iops)
+	volumeID, err := impl.CreateVolumeFromSnapshotV2(ctx, snapshotID, volumeType, volumeAZ, iops)
 	if err != nil {
 		return nil, newGRPCError(err)
 	}
@@ -114,7 +114,7 @@ func (s *VolumeSnapshotterGRPCServer) GetVolumeInfo(ctx context.Context, req *pr
 		return nil, newGRPCError(err)
 	}
 
-	volumeType, iops, err := impl.GetVolumeInfo(req.VolumeID, req.VolumeAZ)
+	volumeType, iops, err := impl.GetVolumeInfoV2(ctx, req.VolumeID, req.VolumeAZ)
 	if err != nil {
 		return nil, newGRPCError(err)
 	}
@@ -144,7 +144,7 @@ func (s *VolumeSnapshotterGRPCServer) CreateSnapshot(ctx context.Context, req *p
 		return nil, newGRPCError(err)
 	}
 
-	snapshotID, err := impl.CreateSnapshot(req.VolumeID, req.VolumeAZ, req.Tags)
+	snapshotID, err := impl.CreateSnapshotV2(ctx, req.VolumeID, req.VolumeAZ, req.Tags)
	if err != nil {
 		return nil, newGRPCError(err)
 	}
@@ -165,7 +165,7 @@ func (s *VolumeSnapshotterGRPCServer) DeleteSnapshot(ctx context.Context, req *p
 		return nil, newGRPCError(err)
 	}
 
-	if err := impl.DeleteSnapshot(req.SnapshotID); err != nil {
+	if err := impl.DeleteSnapshotV2(ctx, req.SnapshotID); err != nil {
 		return nil, newGRPCError(err)
 	}
 
@@ -190,7 +190,7 @@ func (s *VolumeSnapshotterGRPCServer) GetVolumeID(ctx context.Context, req *prot
 		return nil, newGRPCError(errors.WithStack(err))
 	}
 
-	volumeID, err := impl.GetVolumeID(&pv)
+	volumeID, err := impl.GetVolumeIDV2(ctx, &pv)
 	if err != nil {
 		return nil, newGRPCError(err)
 	}
@@ -215,7 +215,7 @@ func (s *VolumeSnapshotterGRPCServer) SetVolumeID(ctx context.Context, req *prot
 		return nil, newGRPCError(errors.WithStack(err))
 	}
 
-	updatedPV, err := impl.SetVolumeID(&pv, req.VolumeID)
+	updatedPV, err := impl.SetVolumeIDV2(ctx, &pv, req.VolumeID)
 	if err != nil {
 		return nil, newGRPCError(err)
 	}
@@ -4,7 +4,12 @@ package mocks
 
 import (
 	mock "github.com/stretchr/testify/mock"
 	velero "github.com/vmware-tanzu/velero/pkg/plugin/velero"
+
+	backupitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v2"
+	deleteitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v2"
+	objectstorev2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v2"
+	restoreitemactionv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
+	volumesnapshotterv2 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
 )
 
 // Manager is an autogenerated mock type for the Manager type
@@ -18,15 +23,15 @@ func (_m *Manager) CleanupClients() {
 }
 
 // GetBackupItemAction provides a mock function with given fields: name
-func (_m *Manager) GetBackupItemAction(name string) (velero.BackupItemAction, error) {
+func (_m *Manager) GetBackupItemAction(name string) (backupitemactionv2.BackupItemAction, error) {
 	ret := _m.Called(name)
 
-	var r0 velero.BackupItemAction
-	if rf, ok := ret.Get(0).(func(string) velero.BackupItemAction); ok {
+	var r0 backupitemactionv2.BackupItemAction
+	if rf, ok := ret.Get(0).(func(string) backupitemactionv2.BackupItemAction); ok {
 		r0 = rf(name)
 	} else {
 		if ret.Get(0) != nil {
-			r0 = ret.Get(0).(velero.BackupItemAction)
+			r0 = ret.Get(0).(backupitemactionv2.BackupItemAction)
 		}
 	}
 
@@ -41,15 +46,15 @@ func (_m *Manager) GetBackupItemAction(name string) (velero.BackupItemAction, er
 }
 
 // GetBackupItemActions provides a mock function with given fields:
-func (_m *Manager) GetBackupItemActions() ([]velero.BackupItemAction, error) {
+func (_m *Manager) GetBackupItemActions() ([]backupitemactionv2.BackupItemAction, error) {
 	ret := _m.Called()
 
-	var r0 []velero.BackupItemAction
-	if rf, ok := ret.Get(0).(func() []velero.BackupItemAction); ok {
+	var r0 []backupitemactionv2.BackupItemAction
+	if rf, ok := ret.Get(0).(func() []backupitemactionv2.BackupItemAction); ok {
 		r0 = rf()
 	} else {
 		if ret.Get(0) != nil {
-			r0 = ret.Get(0).([]velero.BackupItemAction)
+			r0 = ret.Get(0).([]backupitemactionv2.BackupItemAction)
 		}
 	}
 
@@ -64,15 +69,15 @@ func (_m *Manager) GetBackupItemActions() ([]velero.BackupItemAction, error) {
 }
 
 // GetDeleteItemAction provides a mock function with given fields: name
-func (_m *Manager) GetDeleteItemAction(name string) (velero.DeleteItemAction, error) {
+func (_m *Manager) GetDeleteItemAction(name string) (deleteitemactionv2.DeleteItemAction, error) {
 	ret := _m.Called(name)
 
-	var r0 velero.DeleteItemAction
-	if rf, ok := ret.Get(0).(func(string) velero.DeleteItemAction); ok {
+	var r0 deleteitemactionv2.DeleteItemAction
+	if rf, ok := ret.Get(0).(func(string) deleteitemactionv2.DeleteItemAction); ok {
 		r0 = rf(name)
 	} else {
 		if ret.Get(0) != nil {
-			r0 = ret.Get(0).(velero.DeleteItemAction)
+			r0 = ret.Get(0).(deleteitemactionv2.DeleteItemAction)
 		}
 	}
 
@@ -87,15 +92,15 @@ func (_m *Manager) GetDeleteItemAction(name string) (velero.DeleteItemAction, er
 }
 
 // GetDeleteItemActions provides a mock function with given fields:
-func (_m *Manager) GetDeleteItemActions() ([]velero.DeleteItemAction, error) {
+func (_m *Manager) GetDeleteItemActions() ([]deleteitemactionv2.DeleteItemAction, error) {
 	ret := _m.Called()
 
-	var r0 []velero.DeleteItemAction
-	if rf, ok := ret.Get(0).(func() []velero.DeleteItemAction); ok {
+	var r0 []deleteitemactionv2.DeleteItemAction
+	if rf, ok := ret.Get(0).(func() []deleteitemactionv2.DeleteItemAction); ok {
 		r0 = rf()
 	} else {
 		if ret.Get(0) != nil {
-			r0 = ret.Get(0).([]velero.DeleteItemAction)
+			r0 = ret.Get(0).([]deleteitemactionv2.DeleteItemAction)
 		}
 	}
 
@@ -110,15 +115,15 @@ func (_m *Manager) GetDeleteItemActions() ([]velero.DeleteItemAction, error) {
 }
 
 // GetObjectStore provides a mock function with given fields: name
-func (_m *Manager) GetObjectStore(name string) (velero.ObjectStore, error) {
+func (_m *Manager) GetObjectStore(name string) (objectstorev2.ObjectStore, error) {
 	ret := _m.Called(name)
 
-	var r0 velero.ObjectStore
-	if rf, ok := ret.Get(0).(func(string) velero.ObjectStore); ok {
+	var r0 objectstorev2.ObjectStore
+	if rf, ok := ret.Get(0).(func(string) objectstorev2.ObjectStore); ok {
 		r0 = rf(name)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(velero.ObjectStore)
|
||||
r0 = ret.Get(0).(objectstorev2.ObjectStore)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -133,15 +138,15 @@ func (_m *Manager) GetObjectStore(name string) (velero.ObjectStore, error) {
|
||||
}
|
||||
|
||||
// GetRestoreItemAction provides a mock function with given fields: name
|
||||
func (_m *Manager) GetRestoreItemAction(name string) (velero.RestoreItemAction, error) {
|
||||
func (_m *Manager) GetRestoreItemAction(name string) (restoreitemactionv2.RestoreItemAction, error) {
|
||||
ret := _m.Called(name)
|
||||
|
||||
var r0 velero.RestoreItemAction
|
||||
if rf, ok := ret.Get(0).(func(string) velero.RestoreItemAction); ok {
|
||||
var r0 restoreitemactionv2.RestoreItemAction
|
||||
if rf, ok := ret.Get(0).(func(string) restoreitemactionv2.RestoreItemAction); ok {
|
||||
r0 = rf(name)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(velero.RestoreItemAction)
|
||||
r0 = ret.Get(0).(restoreitemactionv2.RestoreItemAction)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -156,15 +161,15 @@ func (_m *Manager) GetRestoreItemAction(name string) (velero.RestoreItemAction,
|
||||
}
|
||||
|
||||
// GetRestoreItemActions provides a mock function with given fields:
|
||||
func (_m *Manager) GetRestoreItemActions() ([]velero.RestoreItemAction, error) {
|
||||
func (_m *Manager) GetRestoreItemActions() ([]restoreitemactionv2.RestoreItemAction, error) {
|
||||
ret := _m.Called()
|
||||
|
||||
var r0 []velero.RestoreItemAction
|
||||
if rf, ok := ret.Get(0).(func() []velero.RestoreItemAction); ok {
|
||||
var r0 []restoreitemactionv2.RestoreItemAction
|
||||
if rf, ok := ret.Get(0).(func() []restoreitemactionv2.RestoreItemAction); ok {
|
||||
r0 = rf()
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).([]velero.RestoreItemAction)
|
||||
r0 = ret.Get(0).([]restoreitemactionv2.RestoreItemAction)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -179,15 +184,15 @@ func (_m *Manager) GetRestoreItemActions() ([]velero.RestoreItemAction, error) {
|
||||
}
|
||||
|
||||
// GetVolumeSnapshotter provides a mock function with given fields: name
|
||||
func (_m *Manager) GetVolumeSnapshotter(name string) (velero.VolumeSnapshotter, error) {
|
||||
func (_m *Manager) GetVolumeSnapshotter(name string) (volumesnapshotterv2.VolumeSnapshotter, error) {
|
||||
ret := _m.Called(name)
|
||||
|
||||
var r0 velero.VolumeSnapshotter
|
||||
if rf, ok := ret.Get(0).(func(string) velero.VolumeSnapshotter); ok {
|
||||
var r0 volumesnapshotterv2.VolumeSnapshotter
|
||||
if rf, ok := ret.Get(0).(func(string) volumesnapshotterv2.VolumeSnapshotter); ok {
|
||||
r0 = rf(name)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(velero.VolumeSnapshotter)
|
||||
r0 = ret.Get(0).(volumesnapshotterv2.VolumeSnapshotter)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -1,5 +1,5 @@
 /*
-Copyright 2017 the Velero contributors.
+Copyright 2021 the Velero contributors.

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -14,13 +14,13 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */

-package velero
+package v1

 import (
 	"k8s.io/apimachinery/pkg/runtime"
-	"k8s.io/apimachinery/pkg/runtime/schema"

 	api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
+	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
 )

 // BackupItemAction is an actor that performs an operation on an individual item being backed up.
@@ -28,18 +28,12 @@ type BackupItemAction interface {
 	// AppliesTo returns information about which resources this action should be invoked for.
 	// A BackupItemAction's Execute function will only be invoked on items that match the returned
 	// selector. A zero-valued ResourceSelector matches all resources.
-	AppliesTo() (ResourceSelector, error)
+	AppliesTo() (velero.ResourceSelector, error)

 	// Execute allows the ItemAction to perform arbitrary logic with the item being backed up,
 	// including mutating the item itself prior to backup. The item (unmodified or modified)
 	// should be returned, along with an optional slice of ResourceIdentifiers specifying
 	// additional related items that should be backed up.
-	Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []ResourceIdentifier, error)
-}
-
-// ResourceIdentifier describes a single item by its group, resource, namespace, and name.
-type ResourceIdentifier struct {
-	schema.GroupResource
-	Namespace string
-	Name      string
+	Execute(item runtime.Unstructured, backup *api.Backup) (
+		runtime.Unstructured, []velero.ResourceIdentifier, error)
 }
@@ -0,0 +1,38 @@
+/*
+Copyright 2021 the Velero contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v2
+
+import (
+	"k8s.io/apimachinery/pkg/runtime"
+
+	api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
+	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	v1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/backupitemaction/v1"
+
+	"context"
+)
+
+type BackupItemAction interface {
+	v1.BackupItemAction
+
+	// ExecuteV2 allows the ItemAction to perform arbitrary logic with the item being backed up,
+	// including mutating the item itself prior to backup. The item (unmodified or modified)
+	// should be returned, along with an optional slice of ResourceIdentifiers specifying
+	// additional related items that should be backed up.
+	ExecuteV2(ctx context.Context, item runtime.Unstructured,
+		backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, error)
+}
@@ -1,5 +1,5 @@
 /*
-Copyright 2020 the Velero contributors.
+Copyright 2020, 2021 the Velero contributors.

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -22,20 +22,6 @@ import (
 	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
 )

-// DeleteItemAction is an actor that performs an operation on an individual item being restored.
-type DeleteItemAction interface {
-	// AppliesTo returns information about which resources this action should be invoked for.
-	// A DeleteItemAction's Execute function will only be invoked on items that match the returned
-	// selector. A zero-valued ResourceSelector matches all resources.
-	AppliesTo() (ResourceSelector, error)
-
-	// Execute allows the ItemAction to perform arbitrary logic with the item being deleted.
-	// An error should be returned if there were problems with the deletion process, but the
-	// overall deletion process cannot be stopped.
-	// Returned errors are logged.
-	Execute(input *DeleteItemActionExecuteInput) error
-}
-
 // DeleteItemActionExecuteInput contains the input parameters for the ItemAction's Execute function.
 type DeleteItemActionExecuteInput struct {
 	// Item is the item taken from the pristine backed up version of resource.
@@ -0,0 +1,35 @@
+/*
+Copyright 2021 the Velero contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v1
+
+import (
+	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+)
+
+// DeleteItemAction is an actor that performs an operation on an individual item being restored.
+type DeleteItemAction interface {
+	// AppliesTo returns information about which resources this action should be invoked for.
+	// A DeleteItemAction's Execute function will only be invoked on items that match the returned
+	// selector. A zero-valued ResourceSelector matches all resources.
+	AppliesTo() (velero.ResourceSelector, error)
+
+	// Execute allows the ItemAction to perform arbitrary logic with the item being deleted.
+	// An error should be returned if there were problems with the deletion process, but the
+	// overall deletion process cannot be stopped.
+	// Returned errors are logged.
+	Execute(input *velero.DeleteItemActionExecuteInput) error
+}
@@ -0,0 +1,34 @@
+/*
+Copyright 2021 the Velero contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v2
+
+import (
+	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	v1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/deleteitemaction/v1"
+
+	"context"
+)
+
+type DeleteItemAction interface {
+	v1.DeleteItemAction
+
+	// ExecuteV2 allows the ItemAction to perform arbitrary logic with the item being deleted.
+	// An error should be returned if there were problems with the deletion process, but the
+	// overall deletion process cannot be stopped.
+	// Returned errors are logged.
+	ExecuteV2(ctx context.Context, input *velero.DeleteItemActionExecuteInput) error
+}
@@ -4,7 +4,7 @@ package mocks

 import (
 	mock "github.com/stretchr/testify/mock"
-	velero "github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
 )

 // DeleteItemAction is an autogenerated mock type for the DeleteItemAction type
@@ -46,3 +46,8 @@ func (_m *DeleteItemAction) Execute(input *velero.DeleteItemActionExecuteInput)

 	return r0
 }
+
+// ExecuteV2 provides a mock function with given fields: ctx, input
+func (_m *DeleteItemAction) ExecuteV2(ctx context.Context, input *velero.DeleteItemActionExecuteInput) error {
+	return _m.Execute(input)
+}
@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */

-package velero
+package v1

 import (
 	"io"

68 pkg/plugin/velero/objectstore/v2/object_storev2.go Normal file
@@ -0,0 +1,68 @@
+/*
+Copyright 2021 the Velero contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v2
+
+import (
+	v1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/objectstore/v1"
+
+	"context"
+	"io"
+	"time"
+)
+
+type ObjectStore interface {
+	v1.ObjectStore
+
+	// InitV2 prepares the ObjectStore for usage using the provided map of
+	// configuration key-value pairs. It returns an error if the ObjectStore
+	// cannot be initialized from the provided config.
+	InitV2(ctx context.Context, config map[string]string) error
+
+	// PutObjectV2 creates a new object using the data in body within the specified
+	// object storage bucket with the given key.
+	PutObjectV2(ctx context.Context, bucket, key string, body io.Reader) error
+
+	// ObjectExistsV2 checks if there is an object with the given key in the object storage bucket.
+	ObjectExistsV2(ctx context.Context, bucket, key string) (bool, error)
+
+	// GetObjectV2 retrieves the object with the given key from the specified
+	// bucket in object storage.
+	GetObjectV2(ctx context.Context, bucket, key string) (io.ReadCloser, error)
+
+	// ListCommonPrefixesV2 gets a list of all object key prefixes that start with
+	// the specified prefix and stop at the next instance of the provided delimiter.
+	//
+	// For example, if the bucket contains the following keys:
+	//	a-prefix/foo-1/bar
+	//	a-prefix/foo-1/baz
+	//	a-prefix/foo-2/baz
+	//	some-other-prefix/foo-3/bar
+	// and the provided prefix arg is "a-prefix/", and the delimiter is "/",
+	// this will return the slice {"a-prefix/foo-1/", "a-prefix/foo-2/"}.
+	ListCommonPrefixesV2(ctx context.Context, bucket, prefix, delimiter string) ([]string, error)
+
+	// ListObjectsV2 gets a list of all keys in the specified bucket
+	// that have the given prefix.
+	ListObjectsV2(ctx context.Context, bucket, prefix string) ([]string, error)
+
+	// DeleteObjectV2 removes the object with the specified key from the given
+	// bucket.
+	DeleteObjectV2(ctx context.Context, bucket, key string) error
+
+	// CreateSignedURLV2 creates a pre-signed URL for the given bucket and key that expires after ttl.
+	CreateSignedURLV2(ctx context.Context, bucket, key string, ttl time.Duration) (string, error)
+}
@@ -1,5 +1,5 @@
 /*
-Copyright 2017, 2019 the Velero contributors.
+Copyright 2017, 2019, 2021 the Velero contributors.

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -22,22 +22,6 @@ import (
 	api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
 )

-// RestoreItemAction is an actor that performs an operation on an individual item being restored.
-type RestoreItemAction interface {
-	// AppliesTo returns information about which resources this action should be invoked for.
-	// A RestoreItemAction's Execute function will only be invoked on items that match the returned
-	// selector. A zero-valued ResourceSelector matches all resources.
-	AppliesTo() (ResourceSelector, error)
-
-	// Execute allows the ItemAction to perform arbitrary logic with the item being restored,
-	// including mutating the item itself prior to restore. The item (unmodified or modified)
-	// should be returned, along with an optional slice of ResourceIdentifiers specifying additional
-	// related items that should be restored, a warning (which will be logged but will not prevent
-	// the item from being restored) or error (which will be logged and will prevent the item
-	// from being restored) if applicable.
-	Execute(input *RestoreItemActionExecuteInput) (*RestoreItemActionExecuteOutput, error)
-}
-
 // RestoreItemActionExecuteInput contains the input parameters for the ItemAction's Execute function.
 type RestoreItemActionExecuteInput struct {
 	// Item is the item being restored. It is likely different from the pristine backed up version
@@ -0,0 +1,37 @@
+/*
+Copyright 2021 the Velero contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v1
+
+import (
+	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+)
+
+// RestoreItemAction is an actor that performs an operation on an individual item being restored.
+type RestoreItemAction interface {
+	// AppliesTo returns information about which resources this action should be invoked for.
+	// A RestoreItemAction's Execute function will only be invoked on items that match the returned
+	// selector. A zero-valued ResourceSelector matches all resources.
+	AppliesTo() (velero.ResourceSelector, error)
+
+	// Execute allows the ItemAction to perform arbitrary logic with the item being restored,
+	// including mutating the item itself prior to restore. The item (unmodified or modified)
+	// should be returned, along with an optional slice of ResourceIdentifiers specifying additional
+	// related items that should be restored, a warning (which will be logged but will not prevent
+	// the item from being restored) or error (which will be logged and will prevent the item
+	// from being restored) if applicable.
+	Execute(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error)
+}
@@ -0,0 +1,37 @@
+/*
+Copyright 2021 the Velero contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v2
+
+import (
+	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	v1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v1"
+
+	"context"
+)
+
+type RestoreItemAction interface {
+	v1.RestoreItemAction
+
+	// ExecuteV2 allows the ItemAction to perform arbitrary logic with the item being restored,
+	// including mutating the item itself prior to restore. The item (unmodified or modified)
+	// should be returned, along with an optional slice of ResourceIdentifiers specifying additional
+	// related items that should be restored, a warning (which will be logged but will not prevent
+	// the item from being restored) or error (which will be logged and will prevent the item
+	// from being restored) if applicable.
+	ExecuteV2(ctx context.Context,
+		input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error)
+}
@@ -1,5 +1,5 @@
 /*
-Copyright 2019 the Velero contributors.
+Copyright 2019, 2021 the Velero contributors.

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -20,6 +20,10 @@ limitations under the License.
 // plugins of any type can be implemented.
 package velero

+import (
+	"k8s.io/apimachinery/pkg/runtime/schema"
+)
+
 // ResourceSelector is a collection of included/excluded namespaces,
 // included/excluded resources, and a label-selector that can be used
 // to match a set of items from a cluster.
@@ -48,3 +52,10 @@ type ResourceSelector struct {
 	// for details on syntax.
 	LabelSelector string
 }
+
+// ResourceIdentifier describes a single item by its group, resource, namespace, and name.
+type ResourceIdentifier struct {
+	schema.GroupResource
+	Namespace string
+	Name      string
+}
@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
 limitations under the License.
 */

-package velero
+package v1

 import (
 	"k8s.io/apimachinery/pkg/runtime"
@@ -0,0 +1,59 @@
+/*
+Copyright 2021 the Velero contributors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v2
+
+import (
+	"k8s.io/apimachinery/pkg/runtime"
+
+	v1 "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v1"
+
+	"context"
+)
+
+type VolumeSnapshotter interface {
+	v1.VolumeSnapshotter
+
+	// InitV2 prepares the VolumeSnapshotter for usage using the provided map of
+	// configuration key-value pairs. It returns an error if the VolumeSnapshotter
+	// cannot be initialized from the provided config.
+	InitV2(ctx context.Context, config map[string]string) error
+
+	// CreateVolumeFromSnapshotV2 creates a new volume in the specified
+	// availability zone, initialized from the provided snapshot,
+	// and with the specified type and IOPS (if using provisioned IOPS).
+	CreateVolumeFromSnapshotV2(ctx context.Context,
+		snapshotID, volumeType, volumeAZ string, iops *int64) (volumeID string, err error)
+
+	// GetVolumeIDV2 returns the cloud provider specific identifier for the PersistentVolume.
+	GetVolumeIDV2(ctx context.Context, pv runtime.Unstructured) (string, error)
+
+	// SetVolumeIDV2 sets the cloud provider specific identifier for the PersistentVolume.
+	SetVolumeIDV2(ctx context.Context,
+		pv runtime.Unstructured, volumeID string) (runtime.Unstructured, error)
+
+	// GetVolumeInfoV2 returns the type and IOPS (if using provisioned IOPS) for
+	// the specified volume in the given availability zone.
+	GetVolumeInfoV2(ctx context.Context, volumeID, volumeAZ string) (string, *int64, error)
+
+	// CreateSnapshotV2 creates a snapshot of the specified volume, and applies the provided
+	// set of tags to the snapshot.
+	CreateSnapshotV2(ctx context.Context,
+		volumeID, volumeAZ string, tags map[string]string) (snapshotID string, err error)
+
+	// DeleteSnapshotV2 deletes the specified volume snapshot.
+	DeleteSnapshotV2(ctx context.Context, snapshotID string) error
+}
@@ -57,6 +57,8 @@ import (
 	"github.com/vmware-tanzu/velero/pkg/kuberesource"
 	"github.com/vmware-tanzu/velero/pkg/label"
 	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
+	restoreitemaction "github.com/vmware-tanzu/velero/pkg/plugin/velero/restoreitemaction/v2"
+	volumesnapshotter "github.com/vmware-tanzu/velero/pkg/plugin/velero/volumesnapshotter/v2"
 	"github.com/vmware-tanzu/velero/pkg/podexec"
 	"github.com/vmware-tanzu/velero/pkg/restic"
 	"github.com/vmware-tanzu/velero/pkg/util/boolptr"
@@ -75,7 +77,7 @@ const KubeAnnBoundByController = "pv.kubernetes.io/bound-by-controller"
 const KubeAnnDynamicallyProvisioned = "pv.kubernetes.io/provisioned-by"

 type VolumeSnapshotterGetter interface {
-	GetVolumeSnapshotter(name string) (velero.VolumeSnapshotter, error)
+	GetVolumeSnapshotter(name string) (volumesnapshotter.VolumeSnapshotter, error)
 }

 type Request struct {
@@ -92,7 +94,7 @@ type Request struct {
 type Restorer interface {
 	// Restore restores the backup data from backupReader, returning warnings and errors.
 	Restore(req Request,
-		actions []velero.RestoreItemAction,
+		actions []restoreitemaction.RestoreItemAction,
 		snapshotLocationLister listers.VolumeSnapshotLocationLister,
 		volumeSnapshotterGetter VolumeSnapshotterGetter,
 	) (Result, Result)
@@ -158,7 +160,7 @@ func NewKubernetesRestorer(
 // respectively, summarizing info about the restore.
 func (kr *kubernetesRestorer) Restore(
 	req Request,
-	actions []velero.RestoreItemAction,
+	actions []restoreitemaction.RestoreItemAction,
 	snapshotLocationLister listers.VolumeSnapshotLocationLister,
 	volumeSnapshotterGetter VolumeSnapshotterGetter,
 ) (Result, Result) {
@@ -278,14 +280,14 @@ func (kr *kubernetesRestorer) Restore(
 }

 type resolvedAction struct {
-	velero.RestoreItemAction
+	restoreitemaction.RestoreItemAction

 	resourceIncludesExcludes  *collections.IncludesExcludes
 	namespaceIncludesExcludes *collections.IncludesExcludes
 	selector                  labels.Selector
 }

-func resolveActions(actions []velero.RestoreItemAction, helper discovery.Helper) ([]resolvedAction, error) {
+func resolveActions(actions []restoreitemaction.RestoreItemAction, helper discovery.Helper) ([]resolvedAction, error) {
 	var resolved []resolvedAction

 	for _, action := range actions {
@@ -57,6 +57,7 @@ func NewAPIServer(t *testing.T) *APIServer {
 		{Group: "apiextensions.k8s.io", Version: "v1beta1", Resource: "customresourcedefinitions"}: "CRDList",
 		{Group: "velero.io", Version: "v1", Resource: "volumesnapshotlocations"}: "VSLList",
 		{Group: "extensions", Version: "v1", Resource: "deployments"}: "ExtDeploymentsList",
+		{Group: "velero.io", Version: "v1", Resource: "deployments"}: "VeleroDeploymentsList",
 	})
 	discoveryClient = &DiscoveryClient{FakeDiscovery: kubeClient.Discovery().(*discoveryfake.FakeDiscovery)}
 )
@@ -24,6 +24,7 @@ import (
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/types"
 	"k8s.io/apimachinery/pkg/watch"
+	v1 "k8s.io/client-go/applyconfigurations/core/v1"
 	corev1 "k8s.io/client-go/kubernetes/typed/core/v1"
 )

@@ -77,3 +78,13 @@ func (c *FakeNamespaceClient) UpdateStatus(ctx context.Context, namespace *corev
 	args := c.Called(namespace)
 	return args.Get(0).(*corev1api.Namespace), args.Error(1)
 }
+
+func (c *FakeNamespaceClient) Apply(ctx context.Context, namespace *v1.NamespaceApplyConfiguration, opts metav1.ApplyOptions) (result *corev1api.Namespace, err error) {
+	args := c.Called(namespace)
+	return args.Get(0).(*corev1api.Namespace), args.Error(1)
+}
+
+func (c *FakeNamespaceClient) ApplyStatus(ctx context.Context, namespace *v1.NamespaceApplyConfiguration, opts metav1.ApplyOptions) (result *corev1api.Namespace, err error) {
+	args := c.Called(namespace)
+	return args.Get(0).(*corev1api.Namespace), args.Error(1)
+}
@@ -108,6 +108,18 @@ func ExtensionsDeployments(items ...metav1.Object) *APIResource {
 	}
 }

+// test CRD
+func VeleroDeployments(items ...metav1.Object) *APIResource {
+	return &APIResource{
+		Group:      "velero.io",
+		Version:    "v1",
+		Name:       "deployments",
+		ShortName:  "deploy",
+		Namespaced: true,
+		Items:      items,
+	}
+}
+
 func Namespaces(items ...metav1.Object) *APIResource {
 	return &APIResource{
 		Group: "",
@@ -164,21 +164,13 @@ func ValidateNamespaceIncludesExcludes(includesList, excludesList []string) []er
 	excludes := sets.NewString(excludesList...)

 	for _, itm := range includes.List() {
-		// Although asterisks is not a valid Kubernetes namespace name, it is
-		// allowed here.
-		if itm != "*" {
-			if nsErrs := validateNamespaceName(itm); nsErrs != nil {
-				errs = append(errs, nsErrs...)
-			}
+		if nsErrs := validateNamespaceName(itm); nsErrs != nil {
+			errs = append(errs, nsErrs...)
 		}
 	}

 	for _, itm := range excludes.List() {
-		// Asterisks in excludes list have been checked previously.
-		if itm != "*" {
-			if nsErrs := validateNamespaceName(itm); nsErrs != nil {
-				errs = append(errs, nsErrs...)
-			}
+		if nsErrs := validateNamespaceName(itm); nsErrs != nil {
+			errs = append(errs, nsErrs...)
 		}
 	}

@@ -188,7 +180,18 @@ func ValidateNamespaceIncludesExcludes(includesList, excludesList []string) []er
 func validateNamespaceName(ns string) []error {
 	var errs []error

-	if errMsgs := validation.ValidateNamespaceName(ns, false); errMsgs != nil {
+	// Velero interprets empty string as "no namespace", so allow it even though
+	// it is not a valid Kubernetes name.
+	if ns == "" {
+		return nil
+	}
+
+	// Kubernetes does not allow asterisks in namespaces but Velero uses them as
+	// wildcards. Replace asterisks with an arbitrary letter to pass Kubernetes
+	// validation.
+	tmpNamespace := strings.ReplaceAll(ns, "*", "x")
+
+	if errMsgs := validation.ValidateNamespaceName(tmpNamespace, false); errMsgs != nil {
 		for _, msg := range errMsgs {
 			errs = append(errs, errors.Errorf("invalid namespace %q: %s", ns, msg))
 		}
@@ -207,11 +207,6 @@ func TestValidateNamespaceIncludesExcludes(t *testing.T) {
includes: []string{},
wantErr: false,
},
{
name: "empty string is invalid",
includes: []string{""},
wantErr: true,
},
{
name: "asterisk by itself is valid",
includes: []string{"*"},
@@ -232,7 +227,7 @@ func TestValidateNamespaceIncludesExcludes(t *testing.T) {
{
name: "special characters in name is invalid",
includes: []string{"foo?", "foo.bar", "bar_321"},
excludes: []string{"$foo", "foo*bar", "bar=321"},
excludes: []string{"$foo", "foo>bar", "bar=321"},
wantErr: true,
},
{
@@ -240,11 +235,33 @@ func TestValidateNamespaceIncludesExcludes(t *testing.T) {
includes: []string{},
wantErr: false,
},
{
name: "empty string includes is valid (includes nothing)",
includes: []string{""},
wantErr: false,
},
{
name: "empty string excludes is valid (excludes nothing)",
excludes: []string{""},
wantErr: false,
},
{
name: "include everything using asterisk is valid",
includes: []string{"*"},
wantErr: false,
},
{
name: "excludes can contain wildcard",
includes: []string{"foo", "bar"},
excludes: []string{"nginx-ingress-*", "*-bar", "*-ingress-*"},
wantErr: false,
},
{
name: "includes can contain wildcard",
includes: []string{"*-foo", "kube-*", "*kube*"},
excludes: []string{"bar"},
wantErr: false,
},
{
name: "include everything not allowed with other includes",
includes: []string{"*", "foo"},

@@ -12,7 +12,7 @@ params:
hero:
backgroundColor: med-blue
versioning: true
latest: v1.6
latest: v1.7
versions:
- main
- v1.7

@@ -4,5 +4,5 @@ last_name: Smith-Uchida
image: /img/contributors/dave.png
github_handle: dsu-igeek
---
Technical Lead
Architect

7
site/content/contributors/01-daniel-jiang.md
Normal file
@@ -0,0 +1,7 @@
---
first_name: Daniel
last_name: Jiang
image: /img/contributors/daniel-jiang.png
github_handle: reasonerjt
---
Technical Lead
7
site/content/contributors/02-wenkai-yin.md
Normal file
@@ -0,0 +1,7 @@
---
first_name: Wenkai
last_name: Yin
image: /img/contributors/wenkai-yin.png
github_handle: ywk253100
---
Engineer
@@ -82,6 +82,12 @@ For each major or minor release, create and publish a blog post to let folks kno
- Do a review of the diffs, and/or run `make serve-docs` and review the site.
- Submit a PR containing the changelog and the version-tagged docs.

### Pin the base image
The Velero image is built on a [Distroless docker image](https://github.com/GoogleContainerTools/distroless).
For the reproducibility of the release, before the release candidate is tagged, make sure that in the Dockerfile
on the release branch the base image is referenced by digest, as in
https://github.com/vmware-tanzu/velero/blob/release-1.7/Dockerfile#L53-L54

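For illustration, a digest-pinned base image reference looks like the following — the image name and digest here are placeholders, not the values used on the actual release branch:

```dockerfile
# Pinning by digest makes the build reproducible: a tag like :nonroot can be
# re-pointed at new content, but a sha256 digest always resolves to the same image.
FROM gcr.io/distroless/static@sha256:<digest-of-the-audited-base-image>
```
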
## Velero release
### Notes
- Pre-requisite: PR with the changelog and docs is merged, so that it's included in the release tag.

@@ -10,41 +10,41 @@ the supported cloud providers’ block storage offerings (Amazon EBS Volumes, Az
It also provides a plugin model that enables anyone to implement additional object and block storage backends, outside the
main Velero repository.

The restic integration was added to give you an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume. This integration is an addition to Velero's capabilities, not a replacement for existing functionality. If you're running on AWS, and taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using restic. However, if you need a volume snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
local, or any other volume type that doesn't have a native snapshot concept, restic might be for you.
Velero's Restic integration was added to give you an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume. This integration is an addition to Velero's capabilities, not a replacement for existing functionality. If you're running on AWS, and taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using Restic. However, if you need a volume snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
local, or any other volume type that doesn't have a native snapshot concept, Restic might be for you.

Restic is not tied to a specific storage platform, which means that this integration also paves the way for future work to enable
cross-volume-type data migrations.

**NOTE:** hostPath volumes are not supported, but the [local volume type][4] is supported.

## Setup restic
## Setup Restic

### Prerequisites

- Understand how Velero performs [backups with the restic integration](#how-backup-and-restore-work-with-restic).
- Understand how Velero performs [backups with the Restic integration](#how-backup-and-restore-work-with-restic).
- [Download][3] the latest Velero release.
- Kubernetes v1.12.0 and later. Velero's restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.12.0 and later.
- Kubernetes v1.12.0 and later. Velero's Restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.12.0 and later.

### Install restic
### Install Restic

To install restic, use the `--use-restic` flag in the `velero install` command. See the [install overview][2] for more details on other flags for the install command.
To install Restic, use the `--use-restic` flag in the `velero install` command. See the [install overview][2] for more details on other flags for the install command.

```
velero install --use-restic
```

When using restic on a storage provider that doesn't have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.
When using Restic on a storage provider that doesn't have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.

### Configure restic DaemonSet spec
### Configure Restic DaemonSet spec

After installation, some PaaS/CaaS platforms based on Kubernetes also require modifications the restic DaemonSet spec. The steps in this section are only needed if you are installing on RancherOS, OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS), or Microsoft Azure.
After installation, some PaaS/CaaS platforms based on Kubernetes also require modifications the Restic DaemonSet spec. The steps in this section are only needed if you are installing on RancherOS, OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS), or Microsoft Azure.


**RancherOS**


Update the host path for volumes in the restic DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/opt/rke/var/lib/kubelet/pods`.
Update the host path for volumes in the Restic DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/opt/rke/var/lib/kubelet/pods`.

```yaml
hostPath:
@@ -62,7 +62,7 @@ hostPath:
**OpenShift**


To mount the correct hostpath to pods volumes, run the restic pod in `privileged` mode.
To mount the correct hostpath to pods volumes, run the Restic pod in `privileged` mode.

1. Add the `velero` ServiceAccount to the `privileged` SCC:

@@ -125,7 +125,7 @@ To mount the correct hostpath to pods volumes, run the restic pod in `privileged
```


If restic is not running in a privileged mode, it will not be able to access pods volumes within the mounted hostpath directory because of the default enforced SELinux mode configured in the host system level. You can [create a custom SCC](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) to relax the security in your cluster so that restic pods are allowed to use the hostPath volume plug-in without granting them access to the `privileged` SCC.
If Restic is not running in a privileged mode, it will not be able to access pods volumes within the mounted hostpath directory because of the default enforced SELinux mode configured in the host system level. You can [create a custom SCC](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) to relax the security in your cluster so that Restic pods are allowed to use the hostPath volume plug-in without granting them access to the `privileged` SCC.

By default a userland openshift namespace will not schedule pods on all nodes in the cluster.

@@ -147,7 +147,7 @@ oc create -n <velero namespace> -f ds.yaml

**VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS)**

You need to enable the `Allow Privileged` option in your plan configuration so that restic is able to mount the hostpath.
You need to enable the `Allow Privileged` option in your plan configuration so that Restic is able to mount the hostpath.

The hostPath should be changed from `/var/lib/kubelet/pods` to `/var/vcap/data/kubelet/pods`

@@ -172,16 +172,16 @@ kubectl patch storageclass/<YOUR_AZURE_FILE_STORAGE_CLASS_NAME> \

## To back up

Velero supports two approaches of discovering pod volumes that need to be backed up using restic:
Velero supports two approaches of discovering pod volumes that need to be backed up using Restic:

- Opt-in approach: Where every pod containing a volume to be backed up using restic must be annotated with the volume's name.
- Opt-out approach: Where all pod volumes are backed up using restic, with the ability to opt-out any volumes that should not be backed up.
- Opt-in approach: Where every pod containing a volume to be backed up using Restic must be annotated with the volume's name.
- Opt-out approach: Where all pod volumes are backed up using Restic, with the ability to opt-out any volumes that should not be backed up.

The following sections provide more details on the two approaches.

### Using the opt-out approach

In this approach, Velero will back up all pod volumes using restic with the exception of:
In this approach, Velero will back up all pod volumes using Restic with the exception of:

- Volumes mounting the default service account token, kubernetes secrets, and config maps
- Hostpath volumes
@@ -190,7 +190,7 @@ It is possible to exclude volumes from being backed up using the `backup.velero.

Instructions to back up using this approach are as follows:

1. Run the following command on each pod that contains volumes that should **not** be backed up using restic
1. Run the following command on each pod that contains volumes that should **not** be backed up using Restic

```bash
kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes-excludes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
@@ -221,7 +221,7 @@ Instructions to back up using this approach are as follows:
- name: pvc2-vm
claimName: pvc2
```
to exclude restic backup of volume `pvc1-vm`, you would run:
to exclude Restic backup of volume `pvc1-vm`, you would run:

```bash
kubectl -n sample annotate pod/app1 backup.velero.io/backup-volumes-excludes=pvc1-vm
@@ -248,7 +248,7 @@ Instructions to back up using this approach are as follows:

### Using opt-in pod volume backup

Velero, by default, uses this approach to discover pod volumes that need to be backed up using restic, where every pod containing a volume to be backed up using restic must be annotated with the volume's name.
Velero, by default, uses this approach to discover pod volumes that need to be backed up using Restic, where every pod containing a volume to be backed up using Restic must be annotated with the volume's name.

Instructions to back up using this approach are as follows:

@@ -310,7 +310,7 @@ Instructions to back up using this approach are as follows:

## To restore

Regardless of how volumes are discovered for backup using restic, the process of restoring remains the same.
Regardless of how volumes are discovered for backup using Restic, the process of restoring remains the same.

1. Restore from your Velero backup:

@@ -331,20 +331,20 @@ Regardless of how volumes are discovered for backup using restic, the process of

- `hostPath` volumes are not supported. [Local persistent volumes][4] are supported.
- Those of you familiar with [restic][1] may know that it encrypts all of its data. Velero uses a static,
common encryption key for all restic repositories it creates. **This means that anyone who has access to your
bucket can decrypt your restic backup data**. Make sure that you limit access to the restic bucket
common encryption key for all Restic repositories it creates. **This means that anyone who has access to your
bucket can decrypt your Restic backup data**. Make sure that you limit access to the Restic bucket
appropriately.
- An incremental backup chain will be maintained across pod reschedules for PVCs. However, for pod volumes that are *not*
PVCs, such as `emptyDir` volumes, when a pod is deleted/recreated (for example, by a ReplicaSet/Deployment), the next backup of those
volumes will be full rather than incremental, because the pod volume's lifecycle is assumed to be defined by its pod.
- Restic scans each file in a single thread. This means that large files (such as ones storing a database) will take a long time to scan for data deduplication, even if the actual
difference is small.
- If you plan to use the Velero restic integration to backup 100GB of data or more, you may need to [customize the resource limits](/docs/main/customize-installation/#customize-resource-requests-and-limits) to make sure backups complete successfully.
- Velero's restic integration backs up data from volumes by accessing the node's filesystem, on which the pod is running. For this reason, restic integration can only backup volumes that are mounted by a pod and not directly from the PVC.
- If you plan to use Velero's Restic integration to backup 100GB of data or more, you may need to [customize the resource limits](/docs/main/customize-installation/#customize-resource-requests-and-limits) to make sure backups complete successfully.
- Velero's Restic integration backs up data from volumes by accessing the node's filesystem, on which the pod is running. For this reason, Velero's Restic integration can only backup volumes that are mounted by a pod and not directly from the PVC. For orphan PVC/PV pairs (without running pods), some Velero users overcame this limitation running a staging pod (i.e. a busybox or alpine container with an infinite sleep) to mount these PVC/PV pairs prior taking a Velero backup.

## Customize Restore Helper Container

Velero uses a helper init container when performing a restic restore. By default, the image for this container is `velero/velero-restic-restore-helper:<VERSION>`,
Velero uses a helper init container when performing a Restic restore. By default, the image for this container is `velero/velero-restic-restore-helper:<VERSION>`,
where `VERSION` matches the version/tag of the main Velero image. You can customize the image that is used for this helper by creating a ConfigMap in the Velero namespace with
the alternate image.

@@ -410,7 +410,7 @@ Are your Velero server and daemonset pods running?
kubectl get pods -n velero
```

Does your restic repository exist, and is it ready?
Does your Restic repository exist, and is it ready?

```bash
velero restic repo get
@@ -446,31 +446,31 @@ kubectl -n velero logs DAEMON_POD_NAME
**NOTE**: You can increase the verbosity of the pod logs by adding `--log-level=debug` as an argument
to the container command in the deployment/daemonset pod template spec.

## How backup and restore work with restic
## How backup and restore work with Restic

Velero has three custom resource definitions and associated controllers:

- `ResticRepository` - represents/manages the lifecycle of Velero's [restic repositories][5]. Velero creates
a restic repository per namespace when the first restic backup for a namespace is requested. The controller
for this custom resource executes restic repository lifecycle commands -- `restic init`, `restic check`,
a Restic repository per namespace when the first Restic backup for a namespace is requested. The controller
for this custom resource executes Restic repository lifecycle commands -- `restic init`, `restic check`,
and `restic prune`.

You can see information about your Velero restic repositories by running `velero restic repo get`.
You can see information about your Velero's Restic repositories by running `velero restic repo get`.

- `PodVolumeBackup` - represents a restic backup of a volume in a pod. The main Velero backup process creates
- `PodVolumeBackup` - represents a Restic backup of a volume in a pod. The main Velero backup process creates
one or more of these when it finds an annotated pod. Each node in the cluster runs a controller for this
resource (in a daemonset) that handles the `PodVolumeBackups` for pods on that node. The controller executes
`restic backup` commands to backup pod volume data.

- `PodVolumeRestore` - represents a restic restore of a pod volume. The main Velero restore process creates one
or more of these when it encounters a pod that has associated restic backups. Each node in the cluster runs a
- `PodVolumeRestore` - represents a Restic restore of a pod volume. The main Velero restore process creates one
or more of these when it encounters a pod that has associated Restic backups. Each node in the cluster runs a
controller for this resource (in the same daemonset as above) that handles the `PodVolumeRestores` for pods
on that node. The controller executes `restic restore` commands to restore pod volume data.

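To make the three resources concrete, a `PodVolumeBackup` object created for an annotated pod looks roughly like this — the field names and values below are abbreviated and illustrative, not the full `velero.io/v1` schema:

```yaml
# Illustrative sketch only; consult the velero.io/v1 CRD for the real schema.
apiVersion: velero.io/v1
kind: PodVolumeBackup
metadata:
  namespace: velero            # PodVolumeBackups live in the Velero namespace
  name: nginx-backup-abc12
spec:
  node: worker-1               # node whose daemonset controller runs `restic backup`
  pod:
    namespace: nginx-example
    name: nginx-5c689d88bb-xxxxx
  volume: nginx-logs           # the pod volume being backed up
  repoIdentifier: s3:s3.amazonaws.com/my-bucket/restic/nginx-example
status:
  phase: Completed
```

Because each object names the node its pod runs on, the per-node daemonset controller can pick up exactly the backups it is responsible for.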
### Backup

1. Based on configuration, the main Velero backup process uses the opt-in or opt-out approach to check each pod that it's backing up for the volumes to be backed up using restic.
1. When found, Velero first ensures a restic repository exists for the pod's namespace, by:
1. Based on configuration, the main Velero backup process uses the opt-in or opt-out approach to check each pod that it's backing up for the volumes to be backed up using Restic.
1. When found, Velero first ensures a Restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it
1. Velero then creates a `PodVolumeBackup` custom resource per volume listed in the pod annotation
@@ -485,14 +485,14 @@ on that node. The controller executes `restic restore` commands to restore pod v
### Restore

1. The main Velero restore process checks each existing `PodVolumeBackup` custom resource in the cluster to backup from.
1. For each `PodVolumeBackup` found, Velero first ensures a restic repository exists for the pod's namespace, by:
1. For each `PodVolumeBackup` found, Velero first ensures a Restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it (note that
in this case, the actual repository should already exist in object storage, so the Velero controller will simply
check it for integrity)
1. Velero adds an init container to the pod, whose job is to wait for all restic restores for the pod to complete (more
1. Velero adds an init container to the pod, whose job is to wait for all Restic restores for the pod to complete (more
on this shortly)
1. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API
1. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API. Then, the Kubernetes scheduler schedules this pod to a worker node, and the pod must be in a running state. If the pod fails to start for some reason (i.e. lack of cluster resources), the Restic restore will not be done.
1. Velero creates a `PodVolumeRestore` custom resource for each volume to be restored in the pod
1. The main Velero process now waits for each `PodVolumeRestore` resource to complete or fail
1. Meanwhile, each `PodVolumeRestore` is handled by the controller on the appropriate node, which:
@@ -512,7 +512,7 @@ on to running other init containers/the main containers.

### Monitor backup annotation

Velero does not provide a mechanism to detect persistent volume claims that are missing the restic backup annotation.
Velero does not provide a mechanism to detect persistent volume claims that are missing the Restic backup annotation.

To solve this, a controller was written by Thomann Bits&Beats: [velero-pvc-watcher][7]

@@ -526,4 +526,3 @@ To solve this, a controller was written by Thomann Bits&Beats: [velero-pvc-watch
[8]: https://docs.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv
[9]: https://github.com/restic/restic/issues/1800
[11]: customize-installation.md#default-pod-volume-backup-to-restic

@@ -10,41 +10,41 @@ the supported cloud providers’ block storage offerings (Amazon EBS Volumes, Az
It also provides a plugin model that enables anyone to implement additional object and block storage backends, outside the
main Velero repository.

The restic intergation was added to give you an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume. This integration is an addition to Velero's capabilities, not a replacement for existing functionality. If you're running on AWS, and taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using restic. However, if you need a volume snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
local, or any other volume type that doesn't have a native snapshot concept, restic might be for you.
Velero's Restic integration was added to give you an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume. This integration is an addition to Velero's capabilities, not a replacement for existing functionality. If you're running on AWS, and taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using Restic. However, if you need a volume snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
local, or any other volume type that doesn't have a native snapshot concept, Restic might be for you.

Restic is not tied to a specific storage platform, which means that this integration also paves the way for future work to enable
cross-volume-type data migrations.

**NOTE:** hostPath volumes are not supported, but the [local volume type][4] is supported.

## Setup restic
## Setup Restic

### Prerequisites

- Understand how Velero performs [backups with the restic integration](#how-backup-and-restore-work-with-restic).
- Understand how Velero performs [backups with the Restic integration](#how-backup-and-restore-work-with-restic).
- [Download][3] the latest Velero release.
- Kubernetes v1.12.0 and later. Velero's restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.12.0 and later.
- Kubernetes v1.12.0 and later. Velero's Restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.12.0 and later.

### Install restic
### Install Restic

To install restic, use the `--use-restic` flag in the `velero install` command. See the [install overview][2] for more details on other flags for the install command.
To install Restic, use the `--use-restic` flag in the `velero install` command. See the [install overview][2] for more details on other flags for the install command.

```
velero install --use-restic
```

When using restic on a storage provider that doesn't have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.
When using Restic on a storage provider that doesn't have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation.

### Configure restic DaemonSet spec
### Configure Restic DaemonSet spec

After installation, some PaaS/CaaS platforms based on Kubernetes also require modifications the restic DaemonSet spec. The steps in this section are only needed if you are installing on RancherOS, OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS), or Microsoft Azure.
After installation, some PaaS/CaaS platforms based on Kubernetes also require modifications the Restic DaemonSet spec. The steps in this section are only needed if you are installing on RancherOS, OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS), or Microsoft Azure.


**RancherOS**


Update the host path for volumes in the restic DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/opt/rke/var/lib/kubelet/pods`.
Update the host path for volumes in the Restic DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/opt/rke/var/lib/kubelet/pods`.

```yaml
hostPath:
@@ -62,7 +62,7 @@ hostPath:
**OpenShift**


To mount the correct hostpath to pods volumes, run the restic pod in `privileged` mode.
To mount the correct hostpath to pods volumes, run the Restic pod in `privileged` mode.

1. Add the `velero` ServiceAccount to the `privileged` SCC:

@@ -125,7 +125,7 @@ To mount the correct hostpath to pods volumes, run the restic pod in `privileged
```


If restic is not running in a privileged mode, it will not be able to access pods volumes within the mounted hostpath directory because of the default enforced SELinux mode configured in the host system level. You can [create a custom SCC](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) to relax the security in your cluster so that restic pods are allowed to use the hostPath volume plug-in without granting them access to the `privileged` SCC.
If Restic is not running in a privileged mode, it will not be able to access pods volumes within the mounted hostpath directory because of the default enforced SELinux mode configured in the host system level. You can [create a custom SCC](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) to relax the security in your cluster so that Restic pods are allowed to use the hostPath volume plug-in without granting them access to the `privileged` SCC.

By default a userland openshift namespace will not schedule pods on all nodes in the cluster.

@@ -147,7 +147,7 @@ oc create -n <velero namespace> -f ds.yaml

**VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS)**

You need to enable the `Allow Privileged` option in your plan configuration so that restic is able to mount the hostpath.
You need to enable the `Allow Privileged` option in your plan configuration so that Restic is able to mount the hostpath.

The hostPath should be changed from `/var/lib/kubelet/pods` to `/var/vcap/data/kubelet/pods`

@@ -172,16 +172,16 @@ kubectl patch storageclass/<YOUR_AZURE_FILE_STORAGE_CLASS_NAME> \

## To back up

Velero supports two approaches of discovering pod volumes that need to be backed up using restic:
Velero supports two approaches of discovering pod volumes that need to be backed up using Restic:

- Opt-in approach: Where every pod containing a volume to be backed up using restic must be annotated with the volume's name.
- Opt-out approach: Where all pod volumes are backed up using restic, with the ability to opt-out any volumes that should not be backed up.
- Opt-in approach: Where every pod containing a volume to be backed up using Restic must be annotated with the volume's name.
- Opt-out approach: Where all pod volumes are backed up using Restic, with the ability to opt-out any volumes that should not be backed up.

The following sections provide more details on the two approaches.

### Using the opt-out approach

In this approach, Velero will back up all pod volumes using restic with the exception of:
In this approach, Velero will back up all pod volumes using Restic with the exception of:

- Volumes mounting the default service account token, kubernetes secrets, and config maps
- Hostpath volumes
@@ -190,12 +190,12 @@ It is possible to exclude volumes from being backed up using the `backup.velero.

Instructions to back up using this approach are as follows:

1. Run the following command on each pod that contains volumes that should **not** be backed up using restic
1. Run the following command on each pod that contains volumes that should **not** be backed up using Restic

```bash
kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes-excludes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
```
where the volume names are the names of the volumes in the pod sepc.
where the volume names are the names of the volumes in the pod spec.

For example, in the following pod:

@@ -221,7 +221,7 @@ Instructions to back up using this approach are as follows:
- name: pvc2-vm
claimName: pvc2
```
to exclude restic backup of volume `pvc1-vm`, you would run:
to exclude Restic backup of volume `pvc1-vm`, you would run:

```bash
kubectl -n sample annotate pod/app1 backup.velero.io/backup-volumes-excludes=pvc1-vm
kubectl -n sample annotate pod/app1 backup.velero.io/backup-volumes-excludes=pvc1-vm
|
||||
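The `kubectl annotate` command in the hunk above can equivalently be expressed directly in the pod manifest. The following is an illustrative sketch (not part of the diff); the container image and mount paths are hypothetical, while the pod/volume names match the `app1` example referenced above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app1
  namespace: sample
  annotations:
    # Exclude the pvc1-vm volume from Restic backups; pvc2-vm is still backed up.
    backup.velero.io/backup-volumes-excludes: pvc1-vm
spec:
  containers:
  - name: app1
    image: nginx            # hypothetical image for illustration
    volumeMounts:
    - name: pvc1-vm
      mountPath: /volume-1  # hypothetical mount path
    - name: pvc2-vm
      mountPath: /volume-2  # hypothetical mount path
  volumes:
  - name: pvc1-vm
    persistentVolumeClaim:
      claimName: pvc1
  - name: pvc2-vm
    persistentVolumeClaim:
      claimName: pvc2
```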
@@ -248,7 +248,7 @@ Instructions to back up using this approach are as follows:
 
 ### Using opt-in pod volume backup
 
-Velero, by default, uses this approach to discover pod volumes that need to be backed up using restic, where every pod containing a volume to be backed up using restic must be annotated with the volume's name.
+Velero, by default, uses this approach to discover pod volumes that need to be backed up using Restic, where every pod containing a volume to be backed up using Restic must be annotated with the volume's name.
 
 Instructions to back up using this approach are as follows:
 
@@ -310,7 +310,7 @@ Instructions to back up using this approach are as follows:
 
 ## To restore
 
-Regardless of how volumes are discovered for backup using restic, the process of restoring remains the same.
+Regardless of how volumes are discovered for backup using Restic, the process of restoring remains the same.
 
 1. Restore from your Velero backup:
 
@@ -331,20 +331,20 @@ Regardless of how volumes are discovered for backup using restic, the process of
 
 - `hostPath` volumes are not supported. [Local persistent volumes][4] are supported.
 - Those of you familiar with [restic][1] may know that it encrypts all of its data. Velero uses a static,
-common encryption key for all restic repositories it creates. **This means that anyone who has access to your
-bucket can decrypt your restic backup data**. Make sure that you limit access to the restic bucket
+common encryption key for all Restic repositories it creates. **This means that anyone who has access to your
+bucket can decrypt your Restic backup data**. Make sure that you limit access to the Restic bucket
 appropriately.
 - An incremental backup chain will be maintained across pod reschedules for PVCs. However, for pod volumes that are *not*
 PVCs, such as `emptyDir` volumes, when a pod is deleted/recreated (for example, by a ReplicaSet/Deployment), the next backup of those
 volumes will be full rather than incremental, because the pod volume's lifecycle is assumed to be defined by its pod.
 - Restic scans each file in a single thread. This means that large files (such as ones storing a database) will take a long time to scan for data deduplication, even if the actual
 difference is small.
-- If you plan to use the Velero restic integration to backup 100GB of data or more, you may need to [customize the resource limits](/docs/main/customize-installation/#customize-resource-requests-and-limits) to make sure backups complete successfully.
-- Velero's restic integration backs up data from volumes by accessing the node's filesystem, on which the pod is running. For this reason, restic integration can only backup volumes that are mounted by a pod and not directly from the PVC.
+- If you plan to use Velero's Restic integration to backup 100GB of data or more, you may need to [customize the resource limits](/docs/main/customize-installation/#customize-resource-requests-and-limits) to make sure backups complete successfully.
+- Velero's Restic integration backs up data from volumes by accessing the node's filesystem, on which the pod is running. For this reason, Velero's Restic integration can only backup volumes that are mounted by a pod and not directly from the PVC. For orphan PVC/PV pairs (without running pods), some Velero users overcame this limitation running a staging pod (i.e. a busybox or alpine container with an infinite sleep) to mount these PVC/PV pairs prior to taking a Velero backup.
 
 ## Customize Restore Helper Container
 
-Velero uses a helper init container when performing a restic restore. By default, the image for this container is `velero/velero-restic-restore-helper:<VERSION>`,
+Velero uses a helper init container when performing a Restic restore. By default, the image for this container is `velero/velero-restic-restore-helper:<VERSION>`,
 where `VERSION` matches the version/tag of the main Velero image. You can customize the image that is used for this helper by creating a ConfigMap in the Velero namespace with
 the alternate image.
 
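The ConfigMap mentioned in the hunk above is elided from this excerpt. As a minimal sketch following Velero's documented plugin-config convention (the ConfigMap name and the image reference below are placeholders, not values from the diff):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Any name works; Velero looks the ConfigMap up by its labels, not its name.
  name: restic-restore-action-config
  namespace: velero
  labels:
    velero.io/plugin-config: ""
    velero.io/restic: RestoreItemAction
data:
  # Placeholder registry/image for the alternate restore helper.
  image: myregistry.io/my-restic-restore-helper:latest
```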
@@ -410,7 +410,7 @@ Are your Velero server and daemonset pods running?
 kubectl get pods -n velero
 ```
 
-Does your restic repository exist, and is it ready?
+Does your Restic repository exist, and is it ready?
 
 ```bash
 velero restic repo get
@@ -446,31 +446,31 @@ kubectl -n velero logs DAEMON_POD_NAME
 **NOTE**: You can increase the verbosity of the pod logs by adding `--log-level=debug` as an argument
 to the container command in the deployment/daemonset pod template spec.
 
-## How backup and restore work with restic
+## How backup and restore work with Restic
 
 Velero has three custom resource definitions and associated controllers:
 
 - `ResticRepository` - represents/manages the lifecycle of Velero's [restic repositories][5]. Velero creates
-a restic repository per namespace when the first restic backup for a namespace is requested. The controller
-for this custom resource executes restic repository lifecycle commands -- `restic init`, `restic check`,
+a Restic repository per namespace when the first Restic backup for a namespace is requested. The controller
+for this custom resource executes Restic repository lifecycle commands -- `restic init`, `restic check`,
 and `restic prune`.
 
-You can see information about your Velero restic repositories by running `velero restic repo get`.
+You can see information about your Velero's Restic repositories by running `velero restic repo get`.
 
-- `PodVolumeBackup` - represents a restic backup of a volume in a pod. The main Velero backup process creates
+- `PodVolumeBackup` - represents a Restic backup of a volume in a pod. The main Velero backup process creates
 one or more of these when it finds an annotated pod. Each node in the cluster runs a controller for this
 resource (in a daemonset) that handles the `PodVolumeBackups` for pods on that node. The controller executes
 `restic backup` commands to backup pod volume data.
 
-- `PodVolumeRestore` - represents a restic restore of a pod volume. The main Velero restore process creates one
-or more of these when it encounters a pod that has associated restic backups. Each node in the cluster runs a
+- `PodVolumeRestore` - represents a Restic restore of a pod volume. The main Velero restore process creates one
+or more of these when it encounters a pod that has associated Restic backups. Each node in the cluster runs a
 controller for this resource (in the same daemonset as above) that handles the `PodVolumeRestores` for pods
 on that node. The controller executes `restic restore` commands to restore pod volume data.
 
 ### Backup
 
-1. Based on configuration, the main Velero backup process uses the opt-in or opt-out approach to check each pod that it's backing up for the volumes to be backed up using restic.
-1. When found, Velero first ensures a restic repository exists for the pod's namespace, by:
+1. Based on configuration, the main Velero backup process uses the opt-in or opt-out approach to check each pod that it's backing up for the volumes to be backed up using Restic.
+1. When found, Velero first ensures a Restic repository exists for the pod's namespace, by:
    - checking if a `ResticRepository` custom resource already exists
    - if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it
 1. Velero then creates a `PodVolumeBackup` custom resource per volume listed in the pod annotation
@@ -485,14 +485,14 @@ on that node. The controller executes `restic restore` commands to restore pod v
 ### Restore
 
 1. The main Velero restore process checks each existing `PodVolumeBackup` custom resource in the cluster to backup from.
-1. For each `PodVolumeBackup` found, Velero first ensures a restic repository exists for the pod's namespace, by:
+1. For each `PodVolumeBackup` found, Velero first ensures a Restic repository exists for the pod's namespace, by:
    - checking if a `ResticRepository` custom resource already exists
    - if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it (note that
     in this case, the actual repository should already exist in object storage, so the Velero controller will simply
     check it for integrity)
-1. Velero adds an init container to the pod, whose job is to wait for all restic restores for the pod to complete (more
+1. Velero adds an init container to the pod, whose job is to wait for all Restic restores for the pod to complete (more
 on this shortly)
-1. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API
+1. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API. Then, the Kubernetes scheduler schedules this pod to a worker node, and the pod must be in a running state. If the pod fails to start for some reason (i.e. lack of cluster resources), the Restic restore will not be done.
 1. Velero creates a `PodVolumeRestore` custom resource for each volume to be restored in the pod
 1. The main Velero process now waits for each `PodVolumeRestore` resource to complete or fail
 1. Meanwhile, each `PodVolumeRestore` is handled by the controller on the appropriate node, which:
@@ -512,7 +512,7 @@ on to running other init containers/the main containers.
 
 ### Monitor backup annotation
 
-Velero does not provide a mechanism to detect persistent volume claims that are missing the restic backup annotation.
+Velero does not provide a mechanism to detect persistent volume claims that are missing the Restic backup annotation.
 
 To solve this, a controller was written by Thomann Bits&Beats: [velero-pvc-watcher][7]
||||
BIN  site/static/img/contributors/daniel-jiang.png  (new file; binary file not shown)  After: 681 KiB
BIN  site/static/img/contributors/wenkai-yin.png  (new file; binary file not shown)  After: 216 KiB
@@ -236,6 +236,11 @@ func veleroBackupNamespace(ctx context.Context, veleroCLI string, veleroNamespac
 		args = append(args, "--snapshot-volumes")
 	} else {
 		args = append(args, "--default-volumes-to-restic")
+		// To work around https://github.com/vmware-tanzu/velero-plugin-for-vsphere/issues/347 for vSphere plugin v1.1.1:
+		// if "--snapshot-volumes=false" isn't specified explicitly, the vSphere plugin will always take snapshots
+		// for the volumes even though "--default-volumes-to-restic" is specified.
+		// TODO: this can be removed once the vSphere plugin logic is bumped to v1.3.
+		args = append(args, "--snapshot-volumes=false")
 	}
 	if backupLocation != "" {
 		args = append(args, "--storage-location", backupLocation)
@@ -39,7 +39,7 @@ spec:
             value: /plugins
           - name: AWS_SHARED_CREDENTIALS_FILE
             value: /credentials/cloud
-          - name: AZURE_SHARED_CREDENTIALS_FILE
+          - name: AZURE_CREDENTIALS_FILE
             value: /credentials/cloud
           - name: GOOGLE_APPLICATION_CREDENTIALS
             value: /credentials/cloud
 
@@ -37,7 +37,7 @@ spec:
             value: /scratch
           - name: AWS_SHARED_CREDENTIALS_FILE
             value: /credentials/cloud
-          - name: AZURE_SHARED_CREDENTIALS_FILE
+          - name: AZURE_CREDENTIALS_FILE
             value: /credentials/cloud
           - name: GOOGLE_APPLICATION_CREDENTIALS
             value: /credentials/cloud