Compare commits

...

309 Commits

Author SHA1 Message Date
Bridget McErlean
ef34b9b654 Merge pull request #3889 from zubron/release-1.6.1
Add cherry-pick commits and changelog for v1.6.1
2021-06-22 09:15:31 -04:00
Bridget McErlean
e22d6591e4 Add changelog for v1.6.1
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-06-21 12:01:27 -04:00
Scott Seago
38493995ad Fix CR restore regression introduced by restore progress reporting in 1.6 (#3845)
Signed-off-by: Scott Seago <sseago@redhat.com>
2021-06-21 12:01:21 -04:00
Bridget McErlean
74a0b39e3e Skip volume restores from projected sources (#3877)
In #3863, it was discovered that volumes from projected sources were
being backed up by restic when they should have been skipped. Restoring
these volumes triggers a known bug in restic.

In #3866, we started skipping volumes from a projected source, however
there will exist backups that were taken before this fix was introduced.
This change modifies the restore logic to skip the restore of any volume
that came from a projected source, allowing backups taken before #3866
to be restored successfully.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-06-21 12:01:21 -04:00
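A minimal Go sketch of the skip-on-restore behavior described in the commit above: detect whether a pod volume is backed by a projected source so the restore path can bypass it. The helper name and call site are assumptions, not Velero's actual code.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// volumeHasProjectedSource reports whether the named volume on the pod is
// backed by a projected volume source. Hypothetical helper; Velero's real
// implementation may differ.
func volumeHasProjectedSource(pod *corev1.Pod, volumeName string) bool {
	for _, v := range pod.Spec.Volumes {
		if v.Name == volumeName && v.Projected != nil {
			return true
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "token", VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{}}},
			},
		},
	}
	// A restore loop would skip this volume instead of handing it to restic.
	fmt.Println(volumeHasProjectedSource(pod, "token")) // true
}
```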
codegold79
a378d3a9d4 API groups e2e tests remove controllers (#3564)
* Remove controllers and sleeps in API groups e2e tests

Signed-off-by: F. Gold <fgold@vmware.com>

* Print command in AfterEach(...) and check error

Signed-off-by: F. Gold <fgold@vmware.com>

* Make change ahead of PR3764 changes in main

Signed-off-by: F. Gold <fgold@vmware.com>

* Update go.{mod,sum} files

Signed-off-by: F. Gold <fgold@vmware.com>

* Run make update

Signed-off-by: F. Gold <fgold@vmware.com>
2021-06-21 12:01:21 -04:00
Scott Seago
0f576fb748 Merge pull request #3866 from alaypatel07/fix-projected-volume-for-restic
Skip backing up projected volumes when using restic

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-06-21 12:00:58 -04:00
Carlisia Thompson
119529c9a2 Consolidate api clients for e2e tests (#3764)
* Consolidate api clients
* Address Nolan's reviews
* Adding back output warning for consistency
* Remove unnecessary documentation
* Address Bridget's reviews
* Update go.sum files

Signed-off-by: Carlisia <carlisia@grokkingtech.io>
Co-authored-by: Bridget McErlean <bmcerlean@vmware.com>
2021-06-21 11:17:35 -04:00
Carlisia Thompson
2c26119b10 A small refactor of the e2e tests (#3726)
* A small refactor of the e2e tests

Signed-off-by: Carlisia <carlisia@grokkingtech.io>

* Add copyright header

Signed-off-by: Carlisia <carlisia@grokkingtech.io>

* Fix CI

Signed-off-by: Carlisia <carlisia@grokkingtech.io>

* Revert unneeded changes

Signed-off-by: Carlisia <carlisia@grokkingtech.io>

* Remove file that doesn't belong here

Signed-off-by: Carlisia <carlisia@grokkingtech.io>
2021-06-21 11:17:08 -04:00
Ashish Amarnath
cbccdbd05a 🐛 Fix plugin name derivation from image name (#3711)
* 🐛 Fix plugin name derivation from image name

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* changelog

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2021-06-21 09:43:33 -04:00
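A small illustrative Go sketch of deriving a plugin name from a container image reference (strip digest, tag, and registry/namespace prefix), the kind of logic the fix above touches. The derivation rules shown here are simplified assumptions, not Velero's exact ones.

```go
package main

import (
	"fmt"
	"strings"
)

// pluginNameFromImage derives a short plugin name from an image reference
// by dropping any digest/tag and any registry or namespace prefix.
// Illustrative only; the exact rules live in Velero itself.
func pluginNameFromImage(image string) string {
	name := image
	if i := strings.Index(name, "@"); i >= 0 { // strip digest
		name = name[:i]
	}
	if i := strings.LastIndex(name, ":"); i > strings.LastIndex(name, "/") { // strip tag, not a registry port
		name = name[:i]
	}
	if i := strings.LastIndex(name, "/"); i >= 0 { // keep the last path segment
		name = name[i+1:]
	}
	return name
}

func main() {
	fmt.Println(pluginNameFromImage("velero/velero-plugin-for-aws:v1.2.0"))                     // velero-plugin-for-aws
	fmt.Println(pluginNameFromImage("registry.example.com:5000/team/my-plugin@sha256:abc1234")) // my-plugin
}
```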
David L. Smith-Uchida
5bd70fd8ee Merge pull request #3673 from zubron/release-1.6-changelog-docs
Add changelog and docs for v1.6.0
2021-04-12 14:54:26 -07:00
Bridget McErlean
b7c166e019 Add changelog and docs for v1.6.0
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-04-12 16:00:34 -04:00
Carlisia Thompson
9f24587cef Upgrade docs for v1.6.0-rc2 (#3662)
* Update changelog for v1.6.0-rc.2

Signed-off-by: Carlisia <carlisia@vmware.com>

* Update docs for v1.6.0-rc.2

Signed-off-by: Carlisia <carlisia@vmware.com>

* Upgrade docs for v1.6.0-rc2

Signed-off-by: Carlisia <carlisia@vmware.com>
2021-04-05 15:31:30 -04:00
Carlisia Thompson
c65c17c559 Revert printer columns (#3652)
* Revert "Add additional printer columns for CRDs (#2881)"

This reverts commit 4178d9de32.

Signed-off-by: Carlisia <carlisia@vmware.com>

* Add generated files

Signed-off-by: Carlisia <carlisia@vmware.com>
2021-03-31 14:46:37 -07:00
David L. Smith-Uchida
52d4a4693a Merge pull request #3637 from zubron/release-1.6-rc1
Add changelog and docs for v1.6.0-rc.1
2021-03-29 09:32:23 -07:00
Bridget McErlean
5e72b87ef7 Add changelog and docs for v1.6.0-rc.1
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-03-29 10:56:05 -04:00
Bridget McErlean
9a9525725d Allow Dockerfiles to be configurable (#3634)
For internal builds of Velero, we need to be able to specify an
alternative Dockerfile which uses an alternative image registry to pull
the base images from. This change adapts our Makefile such that both the
main Dockerfile and build image Dockerfile can be overridden.

We have some special handling for the build image to only build when the
Dockerfile has changed. In this case, we check whether a custom
Dockerfile has been provided, and always rebuild in that case. For
custom build image Dockerfiles, use a fixed tag rather than the one
based on commit SHA of the original file.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-03-26 17:30:40 -07:00
David L. Smith-Uchida
6a6734789e Merge pull request #3618 from carlisia/c-uninstall
Make uninstall more robust and informative
2021-03-25 19:13:01 -07:00
Carlisia
e498a23311 Remove unnecessary check
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-03-25 17:26:44 -07:00
Carlisia
6decba9dda Address Ashish's second review
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-03-25 17:21:23 -07:00
David L. Smith-Uchida
242fba9c05 Runs vSphere tests with snapshots (#3629)
Added wait for vSphere plug-in uploads to complete

Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
2021-03-25 17:46:57 -04:00
Carlisia
c64f45e044 Address Ashish's review
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-03-25 14:31:21 -07:00
Carlisia
ae47ac0948 Addressed Bridget's review
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-03-25 13:36:49 -07:00
Carlisia
61c891f055 Addressed Dave's review
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-03-25 13:24:30 -07:00
Carlisia
4fff2a4a5c Make uninstall more robust and informative
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-03-23 18:00:38 -07:00
David L. Smith-Uchida
e9c997839e Added volume snapshot test for backup/restore. (#3592)
Snapshot tests can be run with Ginkgo focus "Snapshot" and restic tests with Ginkgo focus "Restic".
Restic and volume snapshot tests can now be run simultaneously.
Added check for kibishii app start after restore.
Consolidated kibishii pod checks into waitForKibishiiPods.
Added WaitForPods function to e2e/tests/common.go. Snapshot tests are skipped automatically on kind clusters.
Fixed issue where velero_utils InstallVeleroServer was looking for the Restic daemon set in the "velero" namespace only (was ignoring io.Namespace)

Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
2021-03-17 14:38:47 -04:00
David L. Smith-Uchida
69334d782b Merge pull request #3590 from carlisia/c-1.2
Upgrade e2e tests to new plugin versions (v1.2)
2021-03-16 11:24:37 -07:00
Carlisia Thompson
68320362d5 Improve GH Action PR assign + labeling (#3584)
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-03-16 23:52:50 +08:00
Carlisia Thompson
50b7201508 Update upgrade docs (#3568)
* Update upgrade docs

Signed-off-by: Carlisia <carlisia@vmware.com>

* Update TOC

Signed-off-by: Carlisia <carlisia@vmware.com>

* The right next version is v1.6.0-beta.1

Signed-off-by: Carlisia <carlisia@vmware.com>

* Correct the listing order

Signed-off-by: Carlisia <carlisia@vmware.com>
2021-03-16 11:43:08 -04:00
Carlisia
c032f12232 Upgrade e2e tests to new plugin versions (v1.2)
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-03-15 16:06:16 -07:00
codegold79
c8dfd648bb Restore progress reporting bug fix (#3583)
* Improve readability and formatting of pkg/restore/restore.go

Signed-off-by: F. Gold <fgold@vmware.com>

* Update paths to include API group versions

Signed-off-by: F. Gold <fgold@vmware.com>

* Use full word, 'resource' instead of 'resrc'

Signed-off-by: F. Gold <fgold@vmware.com>
2021-03-15 18:51:07 -04:00
Bridget McErlean
70287f00f9 Install plugins for additional BSL in E2E test (#3582)
The test for multiple credentials assumed that the plugin for the
additional BSL provider was already installed. This will not be the case
when performing a clean install of Velero between tests.

This adds a new utility function to add the plugins that are necessary
for the additional BSL provider. It doesn't check which plugins are
already installed, it will just attempt to install and if the stderr
contains the message that it is a duplicate plugin, we ignore the error
and continue. This could be improved by inspecting the output from
`velero plugin get` but I opted for a quicker solution given the
upcoming release.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-03-14 23:23:41 -07:00
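A Go sketch of the "attempt install, ignore duplicates" pattern the commit above describes, driving the CLI with os/exec. The exact "Duplicate value" error text is an assumption about the CLI output, not a guaranteed contract.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// ensurePluginInstalled runs `velero plugin add` and treats an
// already-installed plugin as success rather than a failure.
func ensurePluginInstalled(veleroCLI, image string) error {
	var stderr bytes.Buffer
	cmd := exec.Command(veleroCLI, "plugin", "add", image)
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		if strings.Contains(stderr.String(), "Duplicate value") { // assumed message substring
			return nil // plugin already present; ignore and continue
		}
		return fmt.Errorf("adding plugin %s: %v: %s", image, err, stderr.String())
	}
	return nil
}

func main() {
	if err := ensurePluginInstalled("velero", "velero/velero-plugin-for-aws:v1.2.0"); err != nil {
		fmt.Println("plugin install failed:", err)
	}
}
```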
David L. Smith-Uchida
b191140cf1 Updated Azure plugin in e2e tests to 1.1.2 (latest) (#3585)
Updated vSphere plugin in e2e tests to 1.1.0 (latest)

Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
2021-03-14 23:21:05 -07:00
Ashish Amarnath
2cddda84c5 Upgrade restic from v0.9.6 to v0.12.0 (#3528)
* Upgrade restic from v0.9.6 to v0.12.0

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* add changelog

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2021-03-11 13:11:23 -05:00
Bridget McErlean
9ffffda11e Use Credential from BSL for restic commands (#3489)
* Use Credential from BSL for restic commands

This change introduces support for restic to make use of per-BSL
credentials. It makes use of the `credentials.FileStore` introduced in
PR #3442 to write the BSL credentials to disk. To support per-BSL
credentials for restic, the environment for the restic commands needs to
be modified for each provider to ensure that the credentials are
provided via the correct provider specific environment variables.
This change introduces a new function `restic.CmdEnv` to check the BSL
provider and create the correct mapping of environment variables for
each provider.

Previously, AWS and GCP could rely on the environment variables in the
Velero deployments to obtain the credentials file, but now these
environment variables need to be set with the path to the serialized
credentials file if a credential is set on the BSL.

For Azure, the credentials file in the environment was loaded and parsed
to set the environment variables for restic. Now, we check if the BSL
has a credential, and if it does, load and parse that file instead.

This change also introduces a few other small improvements. Now that we
are fetching the BSL to check for the `Credential` field, we can use the
BSL directly to get the `CACert` which means that we can remove the
`GetCACert` function. Also, now that we have a way to serialize secrets
to disk, we can use the `credentials.FileStore` to get a temp file for
the restic repo password and remove the `restic.TempCredentialsFile`
function.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Add documentation for per-BSL credentials

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Address review feedback

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Address review comments

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-03-11 13:10:51 -05:00
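A hedged Go sketch of the provider-to-environment mapping that `restic.CmdEnv` is described as doing above: point each provider's standard credentials variable at the serialized BSL credentials file. The function shape, the provider strings, and the Azure shortcut are simplifications; AWS_SHARED_CREDENTIALS_FILE and GOOGLE_APPLICATION_CREDENTIALS are the standard SDK variables.

```go
package main

import (
	"fmt"
	"os"
)

// cmdEnv assembles a restic command environment when a BSL carries its own
// credentials file. Simplified assumption of Velero's logic, not a copy.
func cmdEnv(provider, credsFile string) []string {
	env := os.Environ()
	switch provider {
	case "aws":
		env = append(env, "AWS_SHARED_CREDENTIALS_FILE="+credsFile)
	case "gcp":
		env = append(env, "GOOGLE_APPLICATION_CREDENTIALS="+credsFile)
	case "azure":
		// The real implementation parses the file and exports the individual
		// AZURE_* variables that restic expects; this single variable is a placeholder.
		env = append(env, "AZURE_CREDENTIALS_FILE="+credsFile)
	}
	return env
}

func main() {
	for _, e := range cmdEnv("aws", "/tmp/credentials/velero/bsl-secret") {
		fmt.Println(e)
	}
}
```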
Bridget McErlean
3656f45f55 Partially revert adding credentials to VSL (#3561)
We are no longer adding the Credentials field to the VSL, so this reverts
part of the change that added it (#3409).

The original PR also added the `snapshot-location set` command. This
command only included options for setting the credential but is part of
the work for #2426. Due to this, the command has been left in place
(with the credentials option removed) but has been hidden.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-03-11 10:10:27 -08:00
David L. Smith-Uchida
574bc16aa1 Merge pull request #3559 from zubron/add-e2e-test-for-multi-creds
Add E2E test for multiple credentials
2021-03-10 15:00:26 -08:00
Bridget McErlean
26c14933cc Address review comments
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-03-10 17:35:17 -05:00
Bridget McErlean
9ad1e898d6 Add E2E test for multiple credentials
This change adds an E2E test which exercises the multiple credentials
feature added in #3489. The test creates a secret from the given
credentials and creates a BackupStorageLocation which uses those
credentials. A backup and restore is then performed to the default
BSL and to the newly created BSL.

This change adds new flags to the E2E test suite to configure the BSL
created and used in the test.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-03-10 16:43:47 -05:00
Ashish Amarnath
55a9b65c17 Prefer conditional waiting over magic sleep (#3527)
* prefer conditional waiting over magic sleep

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* update go modules

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2021-03-09 13:32:30 -08:00
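A Go sketch of conditional waiting as preferred in the commit above, using the apimachinery wait package instead of a fixed sleep. The deployment name, namespace, and readiness condition are illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDeploymentReady polls until the deployment reports an available
// replica, replacing a "magic sleep" with a real condition.
func waitForDeploymentReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		d, err := client.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not ready yet"
		}
		return d.Status.AvailableReplicas > 0, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println(err)
		return
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForDeploymentReady(client, "velero", "velero", 3*time.Minute))
}
```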
David L. Smith-Uchida
afe47aeec8 Proposed 1.7.0 roadmap (#3537)
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
2021-03-08 17:04:30 -08:00
Nolan Brubaker
a76cacd2ca Assign a smaller number of reviewers to PRs (#3543)
Using a smaller number of reviewers should increase responsiveness and
accountability.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-03-08 16:28:03 -08:00
Ashish Amarnath
5778752d2c fix broken build (#3525)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2021-03-05 06:30:44 +08:00
Bridget McErlean
b9a8c0b254 Pass configured BSL credential to plugin via config (#3442)
* Load credentials and pass to ObjectStorage plugins

Update NewObjectBackupStore to take a CredentialsGetter which can be
used to get the credentials for a BackupStorageLocation if it has been
configured with a Credential. If the BSL has a credential, use that
SecretKeySelector to fetch the secret, write the contents to a temp file
and then pass that file through to the plugin via the config map using
the key `credentialsFile`. This relies on the plugin being able to use
this new config field.

This does not yet handle VolumeSnapshotLocations or ResticRepositories.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Address code reviews

Add godocs and comments.
Improve formatting and test names.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Address code reviews

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-03-04 13:43:15 -08:00
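A minimal Go sketch of the flow described above: fetch the secret referenced by a BSL's Credential selector, write it to a temp file, and pass the path to the object store plugin under the `credentialsFile` config key. Function name, error handling, and cleanup are simplified assumptions.

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// credentialsFileForBSL writes the selected secret key to a temp file and
// returns its path. Sketch only; the real code lives behind credentials.FileStore.
func credentialsFileForBSL(client kubernetes.Interface, ns, secretName, secretKey string) (string, error) {
	secret, err := client.CoreV1().Secrets(ns).Get(context.TODO(), secretName, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	f, err := os.CreateTemp("", "velero-bsl-credentials")
	if err != nil {
		return "", err
	}
	defer f.Close()
	if _, err := f.Write(secret.Data[secretKey]); err != nil {
		return "", err
	}
	return f.Name(), nil
}

func main() {
	// The plugin config map then carries the path under "credentialsFile".
	config := map[string]string{"region": "us-east-1"}
	config["credentialsFile"] = "/tmp/velero-bsl-credentials-example" // placeholder path
	fmt.Println(config)
}
```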
Pranav Gaikwad
c46fe71b12 Restore progress reporting (#3125)
* restore progress reporting

Signed-off-by: Pranav Gaikwad <pgaikwad@redhat.com>

* add restore statistics to describe restore

Signed-off-by: Pranav Gaikwad <pgaikwad@redhat.com>

* address feedback, include namespaces in the count

Signed-off-by: Pranav Gaikwad <pgaikwad@redhat.com>
2021-03-04 16:21:44 -05:00
Suraj Banakar
ff1a31db4a Support cli uninstall (#3399)
* Add uninstall cmd
- init fn to uninstall velero
- abstract dynamic client creation to a separate fn
- creates a separate client per unstructured resource
- add delete client for CRDs
- export appendUnstructured
- add uninstall command to main cmd
- export `podTemplateOption`
- uninstall resources in the reverse order of installation
- fallback to `velero` if no ns is provided during uninstall
- skip deletion if the resource doesn't exist
- handle resource not found error
- match log formatting with cli install logs
- add Delete fn to fake client
- fix import order
- add changelog
- add comment doc for CreateClient fn

Signed-off-by: Suraj Banakar <suraj@infracloud.io>

* Re-use uninstall code from test suite
- move helper functions out of test suite
- this is to prevent cyclic imports
- move uninstall helpers to uninstall cmd
- call them from test suite
- revert export of variables/fns from install code
- because not required anymore

Signed-off-by: Suraj Banakar <suraj@infracloud.io>

* Revert `PodTemplateOption` -> `podTemplateOption`

Signed-off-by: Suraj Banakar <suraj@infracloud.io>

* Use uninstall helper under VeleroUninstall
- as a wrapper
- fix import related errors in test suite

Signed-off-by: Suraj Banakar <suraj@infracloud.io>
2021-03-04 14:16:40 -05:00
Carlisia Thompson
11bfe82342 Convert DownloadRequest resource/controller to kubebuilder (#3004)
* Migrate DownloadRequest types to kubebuilder

Signed-off-by: Carlisia <carlisia@vmware.com>

* Migrate controller to kubebuilder

Signed-off-by: Carlisia <carlisia@vmware.com>

* Migrate download request cli to kubebuilder

Signed-off-by: Carlisia <carlisia@vmware.com>

* Format w make update

Signed-off-by: Carlisia <carlisia@vmware.com>

* Remove download file

Signed-off-by: Carlisia <carlisia@vmware.com>

* Remove kubebuilder from backup/restore apis

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fix test description

Signed-off-by: Carlisia <carlisia@vmware.com>

* Import cleanups

Signed-off-by: Carlisia <carlisia@vmware.com>

* Refactor for controller runtime version update

Signed-off-by: Carlisia <carlisia@vmware.com>

* Remove year from the copyright

Signed-off-by: Carlisia <carlisia@vmware.com>

* Check for expiration regardless of phase

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fix typos and godoc

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fix test setup and fix a test case

Signed-off-by: Carlisia <carlisia@vmware.com>
2021-03-01 13:28:46 -05:00
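A Go sketch of the controller-runtime reconciler shape that the kubebuilder conversion above produces. Only controller-runtime's own types are real; the struct name and the commented registration are assumptions.

```go
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// downloadRequestReconciler sketches the reconciler the DownloadRequest
// controller was converted into; the expiration check mentioned in the
// commit ("regardless of phase") would live inside Reconcile.
type downloadRequestReconciler struct {
	client client.Client
	scheme *runtime.Scheme
}

func (r *downloadRequestReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the DownloadRequest, check expiration, sign a URL for the
	// requested artifact, update status, and requeue as needed.
	return ctrl.Result{}, nil
}

func main() {
	// Registered with a manager roughly like:
	//   ctrl.NewControllerManagedBy(mgr).For(&velerov1api.DownloadRequest{}).Complete(r)
	_ = &downloadRequestReconciler{}
}
```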
codegold79
c80ad61bbc Update in-code documentation to show resources can be specified with group name (#3498)
Signed-off-by: F. Gold <fgold@vmware.com>
2021-03-01 13:24:11 -05:00
Carlisia Thompson
5e18bd4d1e (low priority) Add port fwding info to Tilt doc (#3424)
* Add port fwding info to Tilt doc

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fix spelling

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fix capitalization

Signed-off-by: Carlisia <carlisia@vmware.com>
2021-02-26 11:35:29 -05:00
Nolan Brubaker
1e16723da4 Combine CRD install verification into 1 job, and update k8s versions (#3448)
* Validate CRDs against latest Kubernetes versions

Add Kubernetes v1.19 and v1.20 series images, and consolidate the job
into a single file to reduce repetition.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Ignore job if the changes are only site/design

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix codespell error

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Cache Velero binary for reuse on workers

This will cache the Velero binary based on the PR number and a SHA256 of
the generated binary.

This way, the runners testing each version of Kubernetes do not need to
build it independently.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix GitHub event access

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Wrap output path in quotes

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Move code checkout to build step

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Also cache go modules

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix syntax issues

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Download cached binary on each node

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Use cached go modules on main CI

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-25 16:51:39 -08:00
Carlisia Thompson
cf5d7d4701 (low priority) Update to Thompson (#3502)
* Update to Thompson

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fix main page

Signed-off-by: Carlisia <carlisia@vmware.com>
2021-02-24 08:01:35 -05:00
Bridget McErlean
f9fe40befc Install CA certificates in Tilt Docker image (#3496)
HTTPS requests were failing due to the ca-certificates package not being
installed in the Tilt image.

This change takes the command to install this package from our main
Dockerfile (which also includes installing tzdata).

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-02-24 13:50:30 +08:00
Bridget McErlean
31246c569a Update PR template to use checkbox task lists (#3492)
To use checkboxes, each line must be part of a list.

For more details, see:
https://docs.github.com/en/github/managing-your-work-on-github/about-task-lists#creating-task-lists

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-02-23 12:58:47 -05:00
Bridget McErlean
0246a32ad0 Use pod namespace from backup when matching PVBs (#3475)
* Use pod namespace from backup when matching PVBs

In #3051, we introduced an additional check to ensure that a PVB matched
a particular pod by checking both the name and the namespace of the pod.
This caused an issue when using a namespace mapping on restore. In the
case where a namespace mapping is being used, the check for whether a
PVB matches a particular pod will fail as the PVB was created for the
original pod namespace and is not aware of the new namespace mapping
being used. This resulted in PVRs not being created for pods that were
being restored into new namespaces. The restic init containers were
being created to wait on the volume restore, however this would cause
the restored pods to block indefinitely as they would be waiting for a
volume restore that was not scheduled.

To fix this, we use the original namespace of the pod from the backup to
match the PVB to the pod being restored, not the new namespace where
the pod is being restored into.

Fixes #3467.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Explain why the namespace mapping can't be used

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-02-22 11:16:00 -08:00
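A small Go sketch of the matching rule the fix above settles on: compare a PodVolumeBackup's recorded pod reference against the pod's original (backup-time) namespace rather than the remapped restore namespace. The struct and field names are illustrative, not Velero's API.

```go
package main

import "fmt"

// pvbRef captures the pod a PodVolumeBackup was created for, as recorded at
// backup time. Field names are illustrative.
type pvbRef struct {
	PodName      string
	PodNamespace string
}

// pvbMatchesPod compares against the pod's *original* namespace from the
// backup, so a restore namespace mapping does not break the match.
func pvbMatchesPod(pvb pvbRef, podName, originalNamespace string) bool {
	return pvb.PodName == podName && pvb.PodNamespace == originalNamespace
}

func main() {
	pvb := pvbRef{PodName: "app-0", PodNamespace: "source-ns"}
	// Restoring into "target-ns" via a namespace mapping: match on "source-ns".
	fmt.Println(pvbMatchesPod(pvb, "app-0", "source-ns")) // true
	fmt.Println(pvbMatchesPod(pvb, "app-0", "target-ns")) // false: why the old check failed
}
```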
Madhav Jivrajani
046cb596d0 added documentation for how velero handles encryption (#3463)
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
2021-02-22 13:35:42 -05:00
David L. Smith-Uchida
45d53178ae E2E tests now run in multiple clouds in addition to KIND (#3286)
Split plug-in provider into cloud provider/object provider
Moved velero install/uninstall for tests into velero_utils
Added removal of CRDs to test velero uninstall
Added removal of cluster role binding to test velero uninstall
Added dump of velero describe and logs on error
Added velero namespace argument to velero_utils functions
Modified api group versions e2e tests to use VeleroInstall
Added velero logs dumps for api group versions e2e testing
Added DeleteNamespace to test/e2e/common.go
Fixed VeleroInstall to use the image specified
Changed enable_api_group_versions_test to use veleroNamespace instead of hardcoded "velero"

Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
2021-02-19 08:16:59 +08:00
Zadkiel
52504b548d fix typo in item_hook_handler (#3361)
Signed-off-by: GitHub <noreply@github.com>
2021-02-18 13:32:32 -05:00
Bridget McErlean
9dbd238c89 Use controller-runtime client to get restic secrets (#3320)
* Use kubebuilder client for fetching restic secrets

Instead of using a SecretInformer for fetching secrets for restic, use
the cached client provided by the controller-runtime manager.

In order to use this client, the scheme for Secrets must be added to the
scheme used by the manager so this is added when creating the manager in
both the velero and restic servers.

This change also refactors some of the tests to add a shared utility for
creating a fake controller-runtime client which is now used among all
tests which use that client. This has been added to ensure that all
tests use the same client with the same scheme.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Add builder for SecretKeySelector

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-02-18 10:30:52 -08:00
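A Go sketch of the pattern in the commit above: register the core/v1 types with the manager's scheme so the cached controller-runtime client can fetch Secrets. The secret name and the standalone wiring are assumptions for illustration.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	// Secrets must be in the manager's scheme before the cached client can fetch them.
	if err := corev1.AddToScheme(mgr.GetScheme()); err != nil {
		panic(err)
	}

	getResticSecret := func(ctx context.Context, c client.Client, ns string) (*corev1.Secret, error) {
		secret := &corev1.Secret{}
		// "velero-restic-credentials" is an illustrative secret name.
		key := types.NamespacedName{Namespace: ns, Name: "velero-restic-credentials"}
		if err := c.Get(ctx, key, secret); err != nil {
			return nil, err
		}
		return secret, nil
	}
	_ = getResticSecret
	fmt.Println("manager configured; start it with mgr.Start(ctrl.SetupSignalHandler())")
}
```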
codegold79
6bdd4ac192 Restore API group version by priority (#3133)
* Restore API group version by priority

Signed-off-by: F. Gold <fgold@vmware.com>

* Add changelog

Signed-off-by: F. Gold <fgold@vmware.com>

* Correct spelling

Signed-off-by: F. Gold <fgold@vmware.com>

* Refactor userResourceGroupVersionPriorities(...) to accept config map, adjust unit test

Signed-off-by: F. Gold <fgold@vmware.com>

* Move some unit tests into e2e

Signed-off-by: F. Gold <fgold@vmware.com>

* Add three e2e tests using Testify Suites

Summary of changes

Makefile - add testify e2e test target
go.sum - changed with go mod tidy
pkg/install/install.go - increased polling timeout
test/e2e/restore_priority_group_test.go - deleted
test/e2e/restore_test.go - deleted
test/e2e/velero_utils.go - made restic optional in velero install
test/e2e_testify/Makefile - makefile for testify e2e tests
test/e2e_testify/README.md - example command for running tests
test/e2e_testify/common_test.go - helper functions
test/e2e_testify/e2e_suite_test.go - prepare for tests and run
test/e2e_testify/restore_priority_apigv_test.go - test cases

Signed-off-by: F. Gold <fgold@vmware.com>

* Make changes per @nrb code review

Signed-off-by: F. Gold <fgold@vmware.com>

* Wait for pods in e2e tests

Signed-off-by: F. Gold <fgold@vmware.com>

* Remove testify suites e2e scaffolding moved to PR #3354

Signed-off-by: F. Gold <fgold@vmware.com>

* Make changes per @brito-rafa and Velero maintainers code reviews

- Made changes suggested by @brito-rafa in GitHub.
- We had a code review meeting with @carlisia, @dsu-igeek, @zubron, and @nrb
- and changes were made based on their suggestions:
  - pull in logic from 'meetsAPIGVResotreReqs()' to restore.go.
  - add TODO to remove APIGroupVersionFeatureFlag check
  - have feature flag and backup version format checks in separate `if` statements.
  - rename variables to be sourceGVs, targetGVs, and userGVs.

Signed-off-by: F. Gold <fgold@vmware.com>

* Convert Testify Suites e2e tests to existing Ginkgo framework

Signed-off-by: F. Gold <fgold@vmware.com>

* Made changes per @zubron PR review

Signed-off-by: F. Gold <fgold@vmware.com>

* Run go mod tidy after resolving go.sum merge conflict

Signed-off-by: F. Gold <fgold@vmware.com>

* Add feature documentation to velero.io site

Signed-off-by: F. Gold <fgold@vmware.com>

* Add config map e2e test; rename e2e test file and name

Signed-off-by: F. Gold <fgold@vmware.com>

* Update go.{mod,sum} files

Signed-off-by: F. Gold <fgold@vmware.com>

* Move CRDs and CRs to testdata folder

Signed-off-by: F. Gold <fgold@vmware.com>

* Fix typos in cert-manager to pass codespell CICD check

Signed-off-by: F. Gold <fgold@vmware.com>

* Make changes per @nrb code review round 2

- make checkAndReadDir function private
- add info level messages when priorities 1-3 API group versions cannot be used

Signed-off-by: F. Gold <fgold@vmware.com>

* Make user config map rules less strict

Signed-off-by: F. Gold <fgold@vmware.com>

* Update e2e test image version in example

Signed-off-by: F. Gold <fgold@vmware.com>

* Update case A music-system controller code

Signed-off-by: F. Gold <fgold@vmware.com>

* Documentation updates

Signed-off-by: F. Gold <fgold@vmware.com>

* Update migration case documentation

Signed-off-by: F. Gold <fgold@vmware.com>
2021-02-16 12:36:17 -05:00
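A simplified Go sketch of the priority-based version selection the feature above implements: a user-configured list wins, then the target cluster's preferred version if the backup also contains it, then any version common to both. The ordering here is a rough rendering of the described rules, not a copy of the implementation.

```go
package main

import "fmt"

// chooseVersion picks an API group version from source (backup), target
// (cluster), and user-configured priority lists. Simplified sketch.
func chooseVersion(sourceGVs, targetGVs, userGVs []string) (string, bool) {
	has := func(list []string, v string) bool {
		for _, x := range list {
			if x == v {
				return true
			}
		}
		return false
	}
	for _, v := range userGVs { // user config map rules first
		if has(sourceGVs, v) && has(targetGVs, v) {
			return v, true
		}
	}
	if len(targetGVs) > 0 && has(sourceGVs, targetGVs[0]) { // target's preferred version
		return targetGVs[0], true
	}
	for _, v := range targetGVs { // any common supported version
		if has(sourceGVs, v) {
			return v, true
		}
	}
	return "", false
}

func main() {
	v, ok := chooseVersion([]string{"v2beta1", "v1"}, []string{"v2", "v2beta1"}, nil)
	fmt.Println(v, ok) // v2beta1 true
}
```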
Nolan Brubaker
2c6adab903 Document design doc template (#3443)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-11 12:46:12 -08:00
Nolan Brubaker
09d59aa8ee Really fix the Github pull request template (#3444)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-11 15:36:22 -05:00
JenTing Hsiao
a460caae13 Merge pull request #3446 from nrb/fix-CAPI-panic
2021-02-11 11:25:21 +08:00
codegold79
6455350940 Use label to select Velero deployment in plugin cmd (#3447)
* Use label to select Velero deployment in plugin cmd

Signed-off-by: F. Gold <fgold@vmware.com>

* Move veleroLabel constant closer to usage

Signed-off-by: F. Gold <fgold@vmware.com>

* Add changelog

Signed-off-by: F. Gold <fgold@vmware.com>

* Remove year from copyright in new file

Signed-off-by: F. Gold <fgold@vmware.com>

* Export and use install.Labels() function

Signed-off-by: F. Gold <fgold@vmware.com>
2021-02-10 20:42:04 -05:00
Nolan Brubaker
502fb8d7be Remove pull request processing from prow action (#3445)
The prow action does not support running commands except for /lgtm in
PRs.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-10 17:00:09 -05:00
Nolan Brubaker
8bb1b12c2f Add changelog
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-10 14:28:43 -05:00
Nolan Brubaker
64667ee704 Restore CAPI cluster objects in a better order
Restoring CAPI workload clusters without this ordering caused the
capi-controller-manager code to panic, resulting in an unhealthy cluster
state.

This can be worked around
(https://community.pivotal.io/s/article/5000e00001pJyN41611954332537?language=en_US),
but we provide the inclusion of these resources as a default in order to
provide a better out-of-the-box experience.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-10 14:23:06 -05:00
Bridget McErlean
2a4586ba08 Use correct suffix for Labeler config file (#3441)
The labeler action was failing as it was looking for
`.github/labels.yaml` but the file has the suffix `.yml`. This change
fixes the path used by the labeler action.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-02-10 12:58:17 -05:00
JenTing Hsiao
3070feaeb2 Merge pull request #3409 from nrb/add-credentials-to-vsls
Add credentials to VSLs
2021-02-10 09:07:41 +08:00
JenTing Hsiao
7a2fea07ca Merge pull request #3190 from carlisia/c-bsl-credential
Add credential field to the bsl
2021-02-10 09:00:27 +08:00
JenTing Hsiao
2c14f25f48 Merge pull request #3435 from nrb/unify-labels
Unify labels across GitHub Actions
2021-02-10 08:16:48 +08:00
JenTing Hsiao
2f55f520ee Merge pull request #3436 from nrb/add-pr-template
2021-02-10 08:06:52 +08:00
JenTing Hsiao
124c618a0b Merge pull request #3437 from nrb/prow-on-prs
2021-02-10 08:04:38 +08:00
JenTing Hsiao
fe8aea3f81 Merge pull request #3439 from nrb/pin-labeler
2021-02-10 08:03:30 +08:00
Nolan Brubaker
0be45888d6 Pin version of labeler action
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-09 16:36:50 -05:00
Carlisia
930508be60 Better validation
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-02-09 13:18:45 -08:00
Carlisia
b7c2f2d7ed Better help messages and validation check
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-02-09 13:18:45 -08:00
Carlisia
60cbac1e9f Fix typo
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-02-09 13:17:49 -08:00
Carlisia
4b2aae308c Add changelog
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-02-09 13:11:12 -08:00
Carlisia
9dbb8b6906 Add credential field to the bsl
Signed-off-by: Carlisia <carlisia@vmware.com>
2021-02-09 13:11:11 -08:00
Nolan Brubaker
f57b934420 Enable Prow commands when opening or readying PRs
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-09 14:17:50 -05:00
Nolan Brubaker
9caba99f69 Correct PR template file name
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-09 14:06:13 -05:00
Nolan Brubaker
4c01494abc Unify labels across GitHub Actions
The prow-action plugin will pre-pend `area` or `kind` to labels, so
unify them into a common format.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-09 14:00:23 -05:00
Nolan Brubaker
2a234a75bb Update version of prow-action (#3434)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-09 13:51:22 -05:00
Mikael Manukyan
36fc42761b Add colors to describe command (#3275)
* Add colors to describe command

* Add colors to describe backups/restore/schedules commands
* Make name in the output bold
* Disable colors via `--colorized` flag or if velero isn't in TTY

Co-authored-by: Clay Kauzlaric <ckauzlaric@vmware.com>
Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>
Signed-off-by: Mikael Manukyan <mmanukyan@vmware.com>

* Add changelog
* and run make update

Co-authored-by: Mikael Manukyan <mmanukyan@vmware.com>
Signed-off-by: Mikael Manukyan <mmanukyan@vmware.com>
Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>

* Add colorized to the client config file

Co-authored-by: Mikael Manukyan <mmanukyan@vmware.com>
Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>
Co-authored-by: Mikael Manukyan <mmanukyan@vmware.com>

* allow client config to use string values

* the command `velero client config set colorized=false` writes a string
value of "false" into the config. This change allows that string to be
accepted and converted into a boolean when used in program.

Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>

* Add docs about colored CLI output

Co-authored-by: Mikael Manukyan <mmanukyan@vmware.com>
Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>

* Update site/content/docs/main/customize-installation.md

Co-authored-by: JenTing Hsiao <jenting.hsiao@suse.com>
Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>

* docs: remove comma

* as per @carlisia 's suggestion

Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>

Co-authored-by: Clay Kauzlaric <ckauzlaric@vmware.com>
Co-authored-by: Clay Kauzlaric <clay.kauzlaric@gmail.com>
Co-authored-by: JenTing Hsiao <jenting.hsiao@suse.com>
2021-02-09 08:39:41 -08:00
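A Go sketch of the string-to-bool handling described above for the `colorized` client config value, plus a rough TTY check as the fallback. The config map shape and the character-device test are assumptions; Velero's real detection may differ.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// colorizedFromConfig interprets the "colorized" client config value, which
// `velero client config set colorized=false` stores as the string "false".
// Missing or unparsable values fall back to enabling color only on a TTY.
func colorizedFromConfig(cfg map[string]string) bool {
	if raw, ok := cfg["colorized"]; ok {
		if b, err := strconv.ParseBool(raw); err == nil {
			return b
		}
	}
	return stdoutIsTTY()
}

// stdoutIsTTY is a rough device-file check, assumed for illustration.
func stdoutIsTTY() bool {
	fi, err := os.Stdout.Stat()
	return err == nil && fi.Mode()&os.ModeCharDevice != 0
}

func main() {
	fmt.Println(colorizedFromConfig(map[string]string{"colorized": "false"})) // false
	fmt.Println(colorizedFromConfig(map[string]string{}))                     // depends on the terminal
}
```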
Nolan Brubaker
5940a47789 Enable automatic labeling of PRs via Actions (#3431)
* Automatically label PRs based on the file paths

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Enable prow-like commands

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Require filling in the PR template

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Update contributor docs to reference PR template

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Expand checklist and ask for issue number on PRs

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Document why we're not enabling /lgtm yet

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Combine PR assignment and labeling workflow

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-09 10:12:48 -05:00
Nolan Brubaker
fc152e6dcb Close issues after 35 total days of inactivity. (#3427)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-08 16:02:53 -05:00
Bridget McErlean
3286834a90 Download restic binary using curl (#3421)
With #3327, the restic binary for the Tilt Velero image is downloaded on
the local machine using the `./hack/download-restic.sh` script. This
script relies on `wget` being available on the local machine. `wget` is
not commonly available on macOS but `curl` is. This change modifies the
`./hack/download-restic.sh` script to use `curl` instead as it is
available on both Linux and macOS and is available in our `golang`
docker build image.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-02-08 10:31:20 -08:00
JenTing Hsiao
e115949d9b feat: support set BackupStorageLocation(BSL) CA certificate (#3167)
* Rename --cacert-file to --cacert in the CLI design doc

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Add a new flag --cacert under `velero backup-location set`

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Add changelog

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Changelog rewording

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Revert CLI design doc

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
2021-02-08 13:28:47 -05:00
Taeuk Kim
529e05d6b2 Modify InitContainer checking function that potentially incurs error (#3198)
Signed-off-by: Taeuk Kim <taeuk_kim@tmax.co.kr>
2021-02-08 13:26:56 -05:00
Nolan Brubaker
e0ccc9942c Instantiate the flag map on set
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-08 13:07:20 -05:00
Bridget McErlean
38c08e087b Replace NewObjectBackupStore with interface (#3329)
In preparation for modifying the instantiation of `BackupStores` to be
able to load credentials, change the function `NewObjectBackupStore` to
be an interface that is passed in to all controllers.

Previously, the function to get a new backup store was configurable but
for many controllers was fixed to use `NewObjectBackupStore`. This
change introduces an interface for getting the backup store and wraps
the functionality from `NewObjectBackupStore` in a type which implements
this interface. This will allow more flexibility when introducing
credentials for a specific backup store as it will allow us to create a
new `ObjectBackupStoreGetter` type which can be configured to add
credentials config when creating the ObjectBackupStore without needing
to change the API used by the controllers.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-02-08 13:04:08 -05:00
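A Go sketch of the getter-interface indirection the commit above introduces: controllers depend on an interface instead of calling a constructor, so a credentials-aware getter can be substituted later. The interface and method names roughly follow the commit; the signatures and the BackupStore stand-in are assumptions.

```go
package main

import "fmt"

// BackupStore is a stand-in for Velero's persistence backup store interface.
type BackupStore interface {
	IsValid() error
}

// ObjectBackupStoreGetter is the indirection controllers receive instead of
// calling NewObjectBackupStore directly.
type ObjectBackupStoreGetter interface {
	Get(locationName string) (BackupStore, error)
}

// objectBackupStoreGetter wraps what the constructor used to do.
type objectBackupStoreGetter struct{}

func (g *objectBackupStoreGetter) Get(locationName string) (BackupStore, error) {
	return nil, fmt.Errorf("not implemented for location %q", locationName) // sketch only
}

func main() {
	var getter ObjectBackupStoreGetter = &objectBackupStoreGetter{}
	if _, err := getter.Get("default"); err != nil {
		fmt.Println(err)
	}
}
```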
Bridget McErlean
65c16a1d00 Download restic binary outside container (#3327)
In #3310, the Dockerfile for the Tilt Velero container was modified to
call the `./hack/download-restic.sh` script. A side effect of this
change was that the context for the docker build was much larger as it
was the root of the Velero repo, rather than just the `_tiltbuild`
directory. With the frequent rebuilds of the image that happen when
using Tilt, a large amount of disk space was being consumed by the
different layers of image builds in the Docker overlay filesystem (as
diffs could include the `.go` directory which can be several GBs).

This change modifies the `download-restic.sh` script to allow the output
directory for the restic binary to be configured. This means that the
script can be called directly from the Tiltfile and can be managed
outside the container build. This allows us to restore the previous
`_tiltbuild` context. It also speeds up image builds as we can download
restic once and use it for all builds rather than redownloading
frequently.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-02-08 09:42:15 -08:00
David L. Smith-Uchida
4823f49198 Merge pull request #3408 from a-mccarthy/remove-faqs
remove FAQ pages
2021-02-06 18:47:55 -08:00
Nolan Brubaker
e5d83197f6 Add changelog entry
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-05 14:00:09 -05:00
Nolan Brubaker
328ba8228b Add snapshot-location set command
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-05 13:54:08 -05:00
Nolan Brubaker
c7bbb9870d Add credential arg to snapshot-location create
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-05 13:40:33 -05:00
Nolan Brubaker
31e4cc0e3b Add credential field to VolumeSnapshotLocation
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2021-02-05 13:40:08 -05:00
Carlisia Thompson
23ebf00999 Proposal for handling multiple credential secrets (#2403)
* Proposal for handling multiple credential secrets

Signed-off-by: Carlisia <carlisia@vmware.com>

* Update multiple credentials design

The changes here are based on [this comment](https://github.com/vmware-tanzu/velero/pull/2403#issuecomment-728132546)
and a discussion with @carlisia.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Update multiple credentials design doc

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Clarify points around node-based auth

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Add more details around setting env vars

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Fix spelling

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Update design to detail selected approach

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Add title for design doc

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Remove usage of AIM

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

Co-authored-by: Bridget McErlean <bmcerlean@vmware.com>
2021-02-05 13:15:38 -05:00
Abigail McCarthy
f31c1f4921 remove FAQ pages
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2021-02-05 12:48:55 -05:00
David L. Smith-Uchida
1fd49f4fd6 Added information about minimum space required for Minio install. (#3393)
https://github.com/vmware-tanzu/velero/issues/3108

Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
2021-02-02 05:24:01 -08:00
Madhav Jivrajani
75fd5a187d Update docs for running velero locally (#3363)
* add documentation for running velero locally, specifically for the --plugin-dir flag requiring the binary to be present locally

Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>

* add changelog

Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>

* remove changelog

Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
2021-02-01 09:52:04 -05:00
JenTing Hsiao
e258ec65c5 Merge pull request #3360 from zubron/reword-message-in-issue-template
Reword message for Q&A issue template
2021-01-29 09:10:16 +08:00
Abigail McCarthy
9d19f87706 Remove references to zenhub (#3357)
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2021-01-28 14:21:28 -05:00
Bridget McErlean
3e7857b474 Reword message for Q&A issue template
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-01-28 11:51:33 -05:00
steve_chph
a8ef1a65ff Fix typo (#3352)
Signed-off-by: peihongchen <171250610@smail.nju.edu.cn>
2021-01-27 08:59:23 -05:00
JenTing Hsiao
e4b6f947f8 Merge pull request #3202 from georgettica/georgettica/bump-external-snapshotter-version
Georgettica/bump external snapshotter version (fixes #2966)
2021-01-27 06:56:53 +08:00
Ron Green
cba96280fd fix(tests): make tests pass?
This fix is based on my understanding of the context package; it can be refined later.

Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>
2021-01-26 20:40:33 +02:00
Ron Green
f4355f6e8b chore(gomod): bump k8s version
Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>
2021-01-26 20:21:03 +02:00
Ron Green
a27824d734 chore(update): run 'make update'
Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>
2021-01-26 13:06:27 +02:00
Ron Green
f0472bde71 fix(tests): make tests pass
- change to new api resource

not all tests are passing, but most of them do

Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>
2021-01-26 13:06:27 +02:00
Ron Green
ef07b72dbc fix: apply patch
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>
2021-01-26 13:06:27 +02:00
Ron Green
8bb3615339 feat(gomod): bump versions
now versions are working and there are code changes that need to happen

- release candidate versions are aligned and working
- replaces fields are removed and not required anymore

controller runtime has been changed during the 'make' command

Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>
2021-01-26 13:06:27 +02:00
Ron Green
c4484d1c7e chore(changelog): add changelog message
Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>
2021-01-26 13:06:27 +02:00
Ron Green
861cc78bcd refactor(external-snapshotter): bump to v4
Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>
2021-01-26 13:06:27 +02:00
Ron Green
5d72a06756 refactor(gomod): move replaces
Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>
2021-01-26 12:55:24 +02:00
Bridget McErlean
7c93aa380d Add Q&A discussion to issue templates (#3339)
This change customises the issue template chooser to include a link to
the Community Support Q&A discussion board. This lets users know that
there is another place to ask questions related to using Velero.

This change also disables the creation of blank issues to prevent issues
that don't follow either the bug or feature request templates from being
opened.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-01-25 14:55:19 -08:00
JenTing Hsiao
870291141f Support fish shell completion (#3231)
* Support fish shell completion

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Use spf13/cobra library to generate zsh completion

reference to https://github.com/spf13/cobra/blob/v1.1.1/shell_completions.md

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Update velero completion help message

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Add changelog

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Update cobra version in go.mod instead of replace

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Replace yourprogram to velero

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
2021-01-25 12:45:53 -05:00
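A Go sketch of generating fish completion with spf13/cobra's built-in generators, which the commit above says were used instead of hand-written completion. The command tree here is a stand-in for Velero's CLI, not its actual wiring.

```go
package main

import (
	"os"

	"github.com/spf13/cobra"
)

func main() {
	// Stand-in root command; Velero's real CLI builds this elsewhere.
	root := &cobra.Command{Use: "velero"}

	completion := &cobra.Command{
		Use:       "completion [bash|zsh|fish]",
		Short:     "Generate shell completion scripts",
		ValidArgs: []string{"bash", "zsh", "fish"},
		Args:      cobra.ExactValidArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			switch args[0] {
			case "bash":
				return root.GenBashCompletion(os.Stdout)
			case "zsh":
				return root.GenZshCompletion(os.Stdout)
			default:
				return root.GenFishCompletion(os.Stdout, true) // true = include descriptions
			}
		},
	}
	root.AddCommand(completion)

	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}
```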
Abigail McCarthy
c59c52e6f6 Update docs to clarify backup location and relationship with other data (#3309)
* Clarify backup location information in the docs
* Update wording a bit

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2021-01-26 00:52:27 +08:00
Bridget McErlean
a42284ed17 Add Tilt configuration to debug using Delve (#3189)
* Add Tilt configuration to debug using Delve

This change adds support to run the Velero process in Tilt using
[Delve](https://github.com/go-delve/delve).
This does not include support for debugging the Velero process in the
restic pods, just in the Velero deployment.

For an optimal debugging experience, this change also introduces a new
flag (`DEBUG`) to the `hack/build.sh` script to enable a "debug" build
of the Velero binary. This flag, if enabled, will build the binary
without optimisations and inlining. Disabling optimisations and inlining
is recommended by Delve.

Two configuration options have been added to the Tilt settings. The
first, `enable_debug`, is to control whether debugging should be
enabled. If enabled, the process will be started by Delve, and the Delve
server port (2345) will be forwarded to the local machine.
The second option, `debug_continue_on_start`, is to control whether the
process should "continue" when started by Delve or should be paused.
By default, debugging is disabled, and if in debug mode, the process
will continue.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Add spaces around keyword args

Starlark prefers spaces around `=` in keyword arguments:
https://docs.bazel.build/versions/master/skylark/bzl-style.html#keyword-arguments

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Remove unnecessary command from Dockerfile

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Add note to connect after Tilt is running

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-01-22 10:12:04 +08:00
David L. Smith-Uchida
3754691e1c Updated for new repository for Kibishii Distributed Data Generator for e2e tests (#3267)
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
2021-01-21 21:06:44 -05:00
Madhav Jivrajani
01853826b5 Raise logging level for PV deletion timeout (#3316)
* raised logging level for PV deletion timeout

Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>

* add changelog

Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
2021-01-21 19:48:33 -05:00
Carlisia Thompson
9e29e50773 Minor kubebuilder related items to clean up (#3180)
* Remove unnecessary files

Signed-off-by: Carlisia <carlisia@vmware.com>

* Switch to CAPI patch function for updates

Signed-off-by: Carlisia <carlisia@vmware.com>

* Improve table test format

Signed-off-by: Carlisia <carlisia@vmware.com>

* Refactor and add test for disabling controller

Signed-off-by: Carlisia <carlisia@vmware.com>

* Add tests

Signed-off-by: Carlisia <carlisia@vmware.com>

* Change test to use real word

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fix CI

Signed-off-by: Carlisia <carlisia@vmware.com>

* Minor test fixes

Signed-off-by: Carlisia <carlisia@vmware.com>

* Remove rbac/role generation

Signed-off-by: Carlisia <carlisia@vmware.com>
2021-01-21 18:22:34 -05:00
Bridget McErlean
56550386e0 Download Restic binary and copy into Tilt Velero image (#3310)
This change adds an additional set of commands to Dockerfile for the
Velero image which adds the `hack/download-restic.sh` script, installs
the necessary dependencies, and then runs that script.

In order to copy the script from the `hack` directory, the context for
building the image has been changed to the root of the velero
repository.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2021-01-21 12:16:42 +08:00
JenTing Hsiao
db403c6c54 Merge pull request #3235 from dsu-igeek/dsu-pod-mem-increase-01-12-2021
Increased limit for Velero pod to 512M.  Fixes #3234
2021-01-13 14:15:55 +08:00
Dave Smith-Uchida
bb2891a881 Increased limit for Velero pod to 512M. Fixes #3234
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
2021-01-12 15:42:45 -08:00
JenTing Hsiao
4ae55bb20a Capitalize all help messages (#3209)
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
2021-01-05 16:36:15 -08:00
JenTing Hsiao
be5fbb00ea Merge pull request #3186 from carlisia/c-plugin-naming
Minor refactor plus better documentation for plugin naming
2020-12-16 23:53:56 +08:00
JenTing Hsiao
12d4aff3db Merge pull request #3183 from carlisia/c-name
Better name format for init containers
2020-12-16 23:16:48 +08:00
JenTing Hsiao
6fee09bbb9 Merge pull request #3172 from carlisia/c-bsl-imp
Improvements to BSL logic
2020-12-16 23:16:36 +08:00
Carlisia Thompson
d3364fe267 Nominate JenTing Hsiao for core maintainer (#3188)
* Nominate JenTing Hsiao for core maintainer

Signed-off-by: Carlisia <carlisia@vmware.com>

* Correct company names

Signed-off-by: Carlisia <carlisia@vmware.com>
2020-12-15 11:08:46 -05:00
Carlisia
63301213bd Improve name formatting logic and add more tests
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-12-14 18:32:37 -08:00
Bridget McErlean
5c4f33b317 Fix path to crds.go file in codespell config (#3185)
This changes the codespell action config to use a relative path for the
generated crds.go file, as the current pattern fails the check used
by codespell (which uses the `fnmatch` module).

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-12-14 16:57:41 -08:00
Carlisia
203a103fc4 Minor refactor plus better documentation for naming
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-12-14 16:52:52 -08:00
matheusjuvelino
73693de5a3 issue: add flag to the schedule cmd to configure the useOwnerReferencesInBackup option #3176 (#3182)
* resolve: #3176

Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>

* created changelog

Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>

* renamed the use-owner-rferences-in-backup flag to use-owner-references-in-backup

Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>
2020-12-14 19:30:35 -05:00
Carlisia
2de7c7924c Better name format for init containers
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-12-14 14:14:12 -08:00
Jokey
2f635e14ce Tencent S3 Compatible Support Docs (#3115)
* Tencent S3 Compatible Support Docs

Signed-off-by: jokeyli <jokeyli@tencent.com>

* fixed typos & and CHANGELOG log

Signed-off-by: jokeyli <jokeyli@tencent.com>

* add changelog

Signed-off-by: jokeyli <jokeyli@tencent.com>

* add changelog

Signed-off-by: jokeyli <jokeyli@tencent.com>

* fixed suggestions

Signed-off-by: jokeyli <jokeyli@tencent.com>
2020-12-11 13:19:11 -05:00
matheusjuvelino
309d3dcf0a Owner reference in backup when created from schedule (#3127)
* added useOwnerReferencesInBackup to crd velero.io_schedules

Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>

* added UseOwnerReferencesInBackup property to schedule.go

Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>

* deepcopy schedule configured for reference the property UseOwnerReferencesInBackup

Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>

* added UseOwnerReferencesInBackup property verification to modify OwnerReferences from backup

Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>

* created changelog

Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>

* removed deepcopy schedule configured for reference the property UseOwnerReferencesInBackup

Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>

* running make update

Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>

* running make update

Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>

* updated the year at the top of the schedule.go file for 2020

Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>
2020-12-11 13:10:34 -05:00
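A Go sketch of the ownerReference a backup could carry when its schedule has useOwnerReferencesInBackup enabled, as described above. The `velero.io/v1` Schedule kind is Velero's API group; the helper function and placeholder UID are assumptions.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func boolPtr(b bool) *bool { return &b }

// ownerRefForSchedule builds the ownerReference a backup would carry when
// the owning schedule opts in. In the controller, name and UID come from
// the Schedule object itself.
func ownerRefForSchedule(name string, uid types.UID) metav1.OwnerReference {
	return metav1.OwnerReference{
		APIVersion: "velero.io/v1",
		Kind:       "Schedule",
		Name:       name,
		UID:        uid,
		Controller: boolPtr(true),
	}
}

func main() {
	ref := ownerRefForSchedule("daily-backup", types.UID("00000000-0000-0000-0000-000000000000"))
	// The backup's ObjectMeta.OwnerReferences would be []metav1.OwnerReference{ref}.
	fmt.Printf("%s %s owns the backup\n", ref.APIVersion, ref.Kind)
}
```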
Nolan Brubaker
f1ec10a518 Ignore config/crd/crds/crds.go file in codespell (#3174)
This file is generated and has binary contents that we shouldn't be
modifying anyway.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-12-10 15:17:16 -08:00
Ferenc Nemeth
6b7b0f4de9 Use inline markdown links in tables (#3114)
It seems that Hugo does not support reference-style links. See the generated doc at velero.io:
https://velero.io/docs/v1.5/api-types/volumesnapshotlocation/#parameter-reference

Signed-off-by: Ferenc Nemeth <ferenc.nemeth@cheppers.com>
2020-12-10 14:07:23 -05:00
Ashish Amarnath
249215f1ff Add more E2E tests and improvement (#3111)
* remove checked in binary and update test/e2e Makefile

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* remove platform specific tests for now

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* install velero before running tests and robust makefiles

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* changelog

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* running e2e tests expects credentials file to be supplied
run e2e tests on velero/velero:main image by default

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* refactor to parameterize tests

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* rename files to use provider tests convention

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* rename tests file

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* remove providerName config

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* run kibishii test on azure

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* refactor to make bsl vsl configurable

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* skip e2e tests when not explicitly running e2e tests

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* update e2e docs

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* refactor and update docs

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* refactor

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* cleanup

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* use velero's exec package

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

Co-authored-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
2020-12-09 16:26:05 -08:00
Carlisia
fa65af87d0 Add changelog
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-12-09 15:00:37 -08:00
Carlisia
bd10b7660c Improvements to BSL logic
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-12-09 13:25:01 -08:00
Nolan Brubaker
844cc16803 Revert workflow access token changes (#3170)
Per
https://github.com/alex-page/github-project-automation-plus/issues/51,
the `GITHUB_TOKEN` secret doesn't have the appropriate permissions to
manage the issue workflows at a repo level. Reverting to the previous
secret to get the workflows working again.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-12-09 12:08:39 -08:00
yusufgungor
3b2e9036d1 Preserve nodePort support with --preserve-nodeports flag (#3095)
* -> Preserve nodePort support when restoring via "--preserve-nodeports" flag

Signed-off-by: Yusuf Güngör <yusuf.gungor@hepsiburada.com>

* -> Added changelog.

Signed-off-by: Yusuf Güngör <yusuf.gungor@hepsiburada.com>

* -> Unit test added.
-> Using boolptr.IsSetToTrue for bool ptr check.

Signed-off-by: Yusuf Güngör <yusuf.gungor@hepsiburada.com>

* -> Unit test added.
-> Using boolptr.IsSetToTrue for bool ptr check.

Signed-off-by: Yusuf Güngör <yusuf.gungor@hepsiburada.com>

* -> Changed the log level of other restore errors from info to error.
-> Updated the documentation about Velero's nodePort restore logic and how nodePorts are preserved.

Signed-off-by: Yusuf Güngör <yusuf.gungor@hepsiburada.com>

Co-authored-by: Yusuf Güngör <yusuf.gungor@hepsiburada.com>
2020-12-09 09:32:34 -08:00
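A Go sketch of the restore-time behavior added above: clear nodePort values on a Service unless the restore asked to preserve them. A local helper stands in for the boolptr.IsSetToTrue check mentioned in the commit; the function name is an assumption.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isSetToTrue mirrors the boolptr.IsSetToTrue check referenced in the commit.
func isSetToTrue(b *bool) bool { return b != nil && *b }

// stripNodePortsUnlessPreserved clears nodePort values on restore unless the
// restore requested preservation. Simplified from the described logic.
func stripNodePortsUnlessPreserved(svc *corev1.Service, preserveNodePorts *bool) {
	if isSetToTrue(preserveNodePorts) {
		return
	}
	for i := range svc.Spec.Ports {
		svc.Spec.Ports[i].NodePort = 0 // let the API server assign a new one
	}
}

func main() {
	svc := &corev1.Service{Spec: corev1.ServiceSpec{Ports: []corev1.ServicePort{{Port: 80, NodePort: 30080}}}}
	preserve := false
	stripNodePortsUnlessPreserved(svc, &preserve)
	fmt.Println(svc.Spec.Ports[0].NodePort) // 0
}
```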
Nolan Brubaker
d09b4d60bb Add milestoned issues to their respective board (#3162)
As long as a milestone and the board have the same title, then this
workflow should take care of adding an issue into the GitHub Project
board when an existing issue is given a milestone.

It does NOT support checking for a milestone when an issue is edited or
created though, due to limitations on GitHub Actions syntax right now -
there's not a great way to validate against an empty `milestone` object
at the moment, per https://docs.github.com/en/free-pro-team@latest/actions/reference/context-and-expression-syntax-for-github-actions

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-12-08 15:59:43 -08:00
Nolan Brubaker
5414742695 Use new repository-local board & github secret (#3163)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-12-08 15:59:01 -08:00
Carlisia Thompson
db824670b0 Change distro (#3166)
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-12-08 18:13:51 -05:00
codegold79
b6eae33503 Draft design doc for restoring API group version by priority level (#3050)
* Draft design doc for restoring API group version by priority level

Signed-off-by: F. Gold <fgold@vmware.com>

* Make changes per @jenting review and use filepath to join paths

Signed-off-by: F. Gold <fgold@vmware.com>

* Update design doc with config map and k8s version priorities

Signed-off-by: F. Gold <fgold@vmware.com>

* Edit k8s doc URL per @jenting's review comment

Signed-off-by: F. Gold <fgold@vmware.com>

* Editorial changes

Signed-off-by: F. Gold <fgold@vmware.com>

* Changes per @nrb PR review and other edits

Signed-off-by: F. Gold <fgold@vmware.com>

* Update Status.FormatVersion check sections and minor edits

Signed-off-by: F. Gold <fgold@vmware.com>
2020-12-08 16:49:45 -05:00
JenTing Hsiao
9dd158d13d feat: support configure BSL CR to indicate which one is the default (#3092)
* Add default field to BSL CRD

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Add a new flag `--default` under `velero backup-location create`

add a new flag `--default` under `velero backup-location create`
to specify this new location to be the new default BSL.

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Add a new default field under `velero backup-location get`

add a new default field under `velero backup-location get` to indicate
which BSL is the default one.

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Add a new sub-command and flag under `velero backup-location`

Add a new sub-command, `velero backup-location set`, and a new flag,
`velero backup-location set --default`, to configure which BSL is the
default one.

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Add new flag to get the default backup-location

Add a new flag `--default` under `velero backup-location get`
to display the current default BSL.

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Configures default BSL in BSL controller

When upgrading the BSL CRDs, none of the BSLs has been labeled as the default.
Set the BSL default field to true if the BSL name matches the default BSL setting.

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Configures the default BSL in BSL controller for velero upgrade

When upgrading the BSL CRDs, none of the BSLs is marked as the default.
Set the BSL `.spec.default: true` if the BSL name matches the
`velero server --default-backup-storage-location` setting.

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Add unit test to test default BSL behavior

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Update the check for which BSL is the default in the backup/backup_sync/restore controllers

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Add changelog

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Update docs locations.md and upgrade-to-1.6.md

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
2020-12-08 16:38:29 -05:00
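
For illustration only, a compact Go sketch of the upgrade behaviour described above: the location whose name matches the server's --default-backup-storage-location setting gets its default field set. bslSpec is a hypothetical stand-in for the relevant slice of the BackupStorageLocation spec, not Velero's actual type.

package main

import "fmt"

// bslSpec is a hypothetical stand-in for the part of the
// BackupStorageLocation spec discussed in the commit above.
type bslSpec struct {
	Name    string
	Default bool
}

// markDefault flags the location whose name matches the server's
// --default-backup-storage-location setting, mirroring the upgrade
// behaviour described in the commit message.
func markDefault(locations []bslSpec, serverDefault string) {
	for i := range locations {
		if locations[i].Name == serverDefault {
			locations[i].Default = true
		}
	}
}

func main() {
	locations := []bslSpec{{Name: "default"}, {Name: "secondary"}}
	markDefault(locations, "default")
	fmt.Println(locations) // [{default true} {secondary false}]
}
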
Carlisia Thompson
5eb64eb84b Add Tilt configs (#3119)
* Adding Tilt configs

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fix spelling

Signed-off-by: Carlisia <carlisia@vmware.com>

* Reuse sample BSL yaml file

Signed-off-by: Carlisia <carlisia@vmware.com>

* Minor fix and more documentation

Signed-off-by: Carlisia <carlisia@vmware.com>

* Reuse our build.sh script

Signed-off-by: Carlisia <carlisia@vmware.com>

* Finish tweaking Tilt build

Signed-off-by: Carlisia <carlisia@vmware.com>

* Improvements

Signed-off-by: Carlisia <carlisia@vmware.com>

* This will make a better startup config

Signed-off-by: Carlisia <carlisia@vmware.com>

* Code review + improvements

Signed-off-by: Carlisia <carlisia@vmware.com>

* Improvements

Signed-off-by: Carlisia <carlisia@vmware.com>

* Reset go.sum

Signed-off-by: Carlisia <carlisia@vmware.com>

* Address code reviews

Signed-off-by: Carlisia <carlisia@vmware.com>

* Improve Tilt code

Signed-off-by: Carlisia <carlisia@vmware.com>

* Address code reviews

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fix links

Signed-off-by: Carlisia <carlisia@vmware.com>

* Add CSI image to example deployment

Signed-off-by: Carlisia <carlisia@vmware.com>
2020-12-08 13:42:03 -05:00
Ashish Amarnath
7727d535a4 🐛 BSLs with validation disabled should be validated at least once (#3084)
* 🐛 BSLs with validation disabled should be validated at least once

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* review comments

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-12-03 07:52:36 -08:00
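
For illustration only, a sketch of the rule this fix enforces. The field and function names are stand-ins (not necessarily Velero's): a zero frequency means periodic validation is disabled, but a location that has never been validated is still validated once.

package main

import (
	"fmt"
	"time"
)

// shouldValidate reports whether a BSL is due for validation. Even when
// periodic validation is disabled (frequency == 0), a location that has
// never been validated is validated at least once.
func shouldValidate(frequency time.Duration, lastValidated *time.Time, now time.Time) bool {
	if lastValidated == nil {
		return true // never validated: always validate at least once
	}
	if frequency <= 0 {
		return false // validation disabled and already validated once
	}
	return now.Sub(*lastValidated) >= frequency
}

func main() {
	fmt.Println(shouldValidate(0, nil, time.Now())) // true
	last := time.Now().Add(-time.Hour)
	fmt.Println(shouldValidate(0, &last, time.Now())) // false
}
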
Carlisia Thompson
6808acd92e Expand maintainer documentation (#3102)
* Expand maintainer documentation

Signed-off-by: Carlisia <carlisia@vmware.com>

* Expand maintainer documentation, 1.5

Signed-off-by: Carlisia <carlisia@vmware.com>

* Improve instructions

Signed-off-by: Carlisia <carlisia@vmware.com>
2020-12-01 16:01:30 -05:00
Carlisia Thompson
53e61bab4e Organize design docs (#3101)
* Organize design docs

Signed-off-by: Carlisia <carlisia@vmware.com>

* Add instructions

Signed-off-by: Carlisia <carlisia@vmware.com>

* Wait, this is better

Signed-off-by: Carlisia <carlisia@vmware.com>
2020-12-01 16:00:52 -05:00
Bridget McErlean
ad31e6eda7 Fix broken docker login action (#3121)
PR #3110 introduced a new action for performing the login to Dockerhub
as part of image building and pushing; however, there is an error with the
configuration and the credentials are not being passed through
correctly. This change reverts to the previous login approach.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-11-30 11:53:25 -08:00
Bridget McErlean
c7531adda3 Upgrade to Docker provided buildx action for CI (#3110)
The previous buildx action that we were using has been archived and
users are recommended to switch to the new action provided by Docker.
The previous action also included setting up QEMU. This is now provided
as a separate action which needs to be run separately.
This change also replaces the direct use of `docker login` with the new
`login-action`. This new action also handles logging out once the build
is complete.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-11-30 14:19:07 -05:00
JenTing Hsiao
7f2de65b5b feat: add delete sub-command for backup-location (#3073)
* feat: add delete sub-command for backup-location

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Change to use the kubebuilder/controller-runtime API

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Fix fetching a BSL by label, which didn't work

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Update changelog

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Ordering by alphabet

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Better example format for help message

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Capitalize the comments

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
2020-11-30 13:59:42 -05:00
Bridget McErlean
a877354750 Don't fail backup deletion if downloading tarball fails (#2993)
* Don't fail backup if downloading tarball fails

Previously, we would always attempt to download the tarball for a backup
for processing DeleteItemAction plugins, even if there weren't any.
This caused an issue for some users in the case where the backup tarball
had been deleted from object storage as the backup deletion would fail.

Now, we only attempt to download the tarball in the case where there are
DeleteItemAction plugins. If downloading that tarball fails, we log
the error, skip the processing of the DeleteItemAction plugins and
proceed with the rest of the deletion.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>

* Skip file removal in closeAndRemoveFile if nil

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-11-30 10:58:34 -08:00
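
For illustration only, a condensed Go sketch of the decision described above: the tarball download only happens when DeleteItemAction plugins exist, and a failed download is logged and skipped instead of failing the deletion. The names here are illustrative, not Velero's internals.

package main

import (
	"errors"
	"log"
)

// runDeleteItemActions mirrors the logic described in the commit: the
// tarball is only downloaded when plugins need it, and a download
// failure is logged and skipped instead of failing the deletion.
func runDeleteItemActions(actions []string, download func() (string, error)) {
	if len(actions) == 0 {
		return // no DeleteItemAction plugins, so nothing needs the tarball
	}
	path, err := download()
	if err != nil {
		log.Printf("skipping DeleteItemAction plugins: could not download backup tarball: %v", err)
		return
	}
	log.Printf("processing %d DeleteItemAction plugins against %s", len(actions), path)
}

func main() {
	download := func() (string, error) { return "", errors.New("object not found") }
	runDeleteItemActions([]string{"example-plugin"}, download)
}
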
David L. Smith-Uchida
aa47309700 Add an E2E test framework to test Velero across cloud platforms (#3060)
* Basic end-to-end tests, generate data/backup/remove/restore/verify
Uses distributed data generator

Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>

* Moved backup/restore into velero_utils, started using a name for the restore

Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>

* remove checked in binary and update test/e2e Makefile

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* Ran make update

Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>

* Save

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* Ran make update

Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>

* Basic end-to-end test, generate data/backup/remove/restore/verify
Uses distributed data generator

Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>

* Changed tests/e2e Makefile to just use go get to install ginkgo in the GOPATH/bin
Updated to ginkgo 1.14.2
Put cobra back to v0.0.7

Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>

* Added CLOUD_PLATFORM env variable to Makefile, updated README, removed ginkgo from .gitignore

Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>

* choose velero CLI binary based on local env

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

Co-authored-by: Ashish Amarnath <ashisham@vmware.com>
2020-11-24 14:12:52 -05:00
Ashish Amarnath
b321838c72 🏃‍♂️ reducing verbosity of another log message (#3109)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-11-24 08:12:51 -08:00
Taeuk Kim
12d70e30a8 Modify function name typo (#3106)
Signed-off-by: Taeuk Kim <taeuk_kim@tmax.co.kr>
2020-11-23 11:28:34 -05:00
Carlisia Thompson
8edf100186 Fix project automation (#3089)
* Fix project automation

Signed-off-by: Carlisia <carlisia@vmware.com>

* Case sensitive

Signed-off-by: Carlisia <carlisia@vmware.com>
2020-11-23 11:24:07 -05:00
Ashish Amarnath
a1e182e723 📖 Add docs to troubleshoot cloud-credentials (#3100)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-11-19 14:19:50 -08:00
Misha Ketslah
8c8385aabb pass annotations from scheduler to created backup (#3067)
* pass annotations from scheduler to created backup

Signed-off-by: Michael <michael.ketslah@tufin.com>

* add change log

Signed-off-by: Michael <michael.ketslah@tufin.com>

* add test for annotations in controller

Signed-off-by: Michael <michael.ketslah@tufin.com>

* If no annotations are set - do not copy empty list

Signed-off-by: Michael <michael.ketslah@tufin.com>

* remove unneeded var

Signed-off-by: Michael <michael.ketslah@tufin.com>

* add empty annotations and actually check annotations in backups

Signed-off-by: Michael <michael.ketslah@tufin.com>

* add empty missing label and empty annotations

Signed-off-by: Michael <michael.ketslah@tufin.com>

* revert empty annotations as seems they are nil as expected

Signed-off-by: Michael <michael.ketslah@tufin.com>

* fix typo in changelog

Signed-off-by: Michael <michael.ketslah@tufin.com>

Co-authored-by: Michael <michael.ketslah@tufin.com>
2020-11-19 13:19:42 -08:00
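
For illustration only, a minimal sketch of the propagation described above: copy the Schedule's annotations onto the Backup it creates, and leave the map nil when there is nothing to copy (matching the "do not copy empty list" bullet). Plain maps are used instead of Velero's API types.

package main

import "fmt"

// annotationsForBackup returns a copy of the schedule's annotations for
// the generated backup, or nil when the schedule has none.
func annotationsForBackup(scheduleAnnotations map[string]string) map[string]string {
	if len(scheduleAnnotations) == 0 {
		return nil
	}
	out := make(map[string]string, len(scheduleAnnotations))
	for k, v := range scheduleAnnotations {
		out[k] = v
	}
	return out
}

func main() {
	fmt.Println(annotationsForBackup(map[string]string{"team": "storage"})) // map[team:storage]
	fmt.Println(annotationsForBackup(nil))                                  // nil (prints as map[])
}
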
Carlisia Thompson
c10feb2cc3 Update to latest covenant coc (#3076)
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-11-19 16:07:38 -05:00
Pranav Gaikwad
a757304d71 propose restore progress (#3016)
Signed-off-by: Pranav Gaikwad <pgaikwad@redhat.com>
2020-11-19 13:02:34 -08:00
Scott Seago
b876dd97aa Design doc for RestoreItemAction wait for AdditionalItems to be ready (#2867)
Signed-off-by: Scott Seago <sseago@redhat.com>
2020-11-19 14:31:23 -05:00
Ashish Amarnath
9b20e8d2e6 🏃‍♂️ Turn down logging verbosity (#3091)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-11-19 14:03:29 -05:00
Madhav Jivrajani
a386139788 Add instructions to clone repo for examples (#3074)
* Add instructions to clone repo for examples

Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>

* Add changelog

Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>

* Revert changes in v1.4 and 1.3.x

Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>

* Revert changes for v1.2.0

Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
2020-11-17 16:28:03 -08:00
Ashish Amarnath
4f1d46c452 🏃‍♂️ update setup-kind github actions CI (#3085)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-11-17 12:31:14 -05:00
Carlisia Thompson
37b4aae033 Add Dave Uchida (#3077)
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-11-16 10:35:41 -05:00
Ashish Amarnath
bae18e6b3f 📖 use correct link to the minio.md (#3071)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-11-12 16:08:26 -08:00
Nolan Brubaker
1b54444568 Automate adding opened issues to the triage board (#3068)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-11-11 10:38:11 -08:00
Ashish Amarnath
ecab583680 🐛 Do not run ItemAction plugins for unresolvable types for all types (#3059)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-11-11 09:50:57 -05:00
Mateusz Gozdek
9acd4ac4d5 .github/workflows: add PR codespell workflow (#3064)
To avoid adding typos to the code base.

Follow up to #3057.

Signed-off-by: Mateusz Gozdek <mgozdekof@gmail.com>
2020-11-10 12:43:32 -08:00
Mateusz Gozdek
dbc83af77b Fix various typos found by codespell (#3057)
By running the following command:

codespell -S .git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico -L \
iam,aks,ist,bridget,ue

Signed-off-by: Mateusz Gozdek <mgozdekof@gmail.com>
2020-11-10 11:48:35 -05:00
Ashish Amarnath
dc6762a895 🐛 Use namespace and name to match PVB to Pod restore (#3051)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-11-10 11:36:49 -05:00
Abigail McCarthy
7d1b613459 Add custom 404 page to website (#3056)
* Add custom 404 page to website

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* point to repo issues

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* remove 404 from title

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2020-11-09 09:05:40 -08:00
Mayank
68a4b23722 fixing 'velero.io/change-pvc-node-selector' plugin to fetch configmap using plugin name (#2970)
* fixing label for 'velero.io/change-pvc-node-selector' plugin in site document

Signed-off-by: mayank <mayank.patel@mayadata.io>

* Fixing "velero.io/change-pvc-node-selector" to fetch config using plugin name

Signed-off-by: mayank <mayank.patel@mayadata.io>

* adding changelog

Signed-off-by: mayank <mayank.patel@mayadata.io>
2020-11-04 11:45:39 -08:00
Ashish Amarnath
1be97a2b04 🏃‍♂ Improve log message clarity (#3047)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-11-02 13:39:32 -08:00
Ashish Amarnath
e9a19581bf 📖 Clarify restore hook init container priority (#3030)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-11-02 14:38:00 -05:00
Bridget McErlean
4178d9de32 Add additional printer columns for CRDs (#2881)
This change modifies the kubebuilder annotations for the Velero CRDs to
include `additionalPrinterColumns` so that more information is exposed
when using `kubectl get`.

For each of the CRDs, annotations have been added to make the output
for `kubectl get` match the output from the equivalent `velero get`
command as closely as possible. There are some cases where this output
could not be replicated, such as the `EXPIRES` column for Backups, due
to the limitations of JSONPath expressions within the resulting CRD
defition. Some columns undergo processing and formatting before being
printed by the Velero CLI which cannot be replicated using JSONPath. In
these cases, these printer columns have been omitted.

For other CRDs where there is no `velero get` equivalent, such as
`PodVolumeBackup` and `PodVolumeRestore`, a best effort has been made to
expose information that provides value.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-10-27 15:19:33 -07:00
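
For context, kubebuilder generates additionalPrinterColumns from markers on the Go API types. The snippet below only illustrates the mechanism; the column names and JSONPaths are examples, not necessarily the ones Velero's CRDs ship.

package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Backup is a trimmed-down illustration of how additionalPrinterColumns
// are generated from kubebuilder markers on the API type. The columns
// below are examples of the mechanism only.
//
// +kubebuilder:object:root=true
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.phase"
// +kubebuilder:printcolumn:name="Created",type="date",JSONPath=".metadata.creationTimestamp"
type Backup struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
}
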
Ashish Amarnath
0487a21c84 📖 fix image links in how-velero-works page (#3031)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-10-23 17:41:46 -04:00
Hariharan
df982c9fc9 Add warning to velero version cmd. Fixes #3017 (#3024)
Signed-off-by: Hariharan <cvhariharan@protonmail.com>
2020-10-23 17:33:13 -04:00
Michael Michael
6f6292492c fix of microsoft typo in restic docs (#3037)
* Update restic.md

* Update restic.md
2020-10-23 13:03:59 -07:00
Abigail McCarthy
fca6dbcb9a fix minio code samples (#3034)
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2020-10-22 13:17:36 -07:00
Piper Dougherty
60ff351269 Adding fix for restic init container index on restores. (#3011)
* Add handling of the restic-wait init container at any position, with a warning.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>

* Adding newline at end of files to match convention.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>

* Formatting.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>

* Update copyright year on modified files.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>
2020-10-21 15:15:03 -07:00
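
For illustration only, a simplified Go sketch of the handling described above: find the restic-wait init container wherever it appears, warn when it is not first, and move it to the front. The function shape is illustrative, not Velero's actual restore action code.

package main

import (
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
)

// ensureResticWaitFirst moves the restic-wait init container to the
// front of the list, logging a warning when it was found elsewhere.
func ensureResticWaitFirst(initContainers []corev1.Container) []corev1.Container {
	for i, c := range initContainers {
		if c.Name != "restic-wait" {
			continue
		}
		if i != 0 {
			log.Printf("warning: restic-wait found at index %d; moving it to index 0", i)
			return append([]corev1.Container{c}, append(initContainers[:i:i], initContainers[i+1:]...)...)
		}
		return initContainers
	}
	return initContainers
}

func main() {
	out := ensureResticWaitFirst([]corev1.Container{{Name: "istio-init"}, {Name: "restic-wait"}})
	fmt.Println(out[0].Name) // restic-wait
}
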
Nolan Brubaker
28a46d3a8b Ensure PVs and PVCs remain bound when doing a restore (#3007)
* Only remove the UID from a PV's claimRef

The UID is the only part of a claimRef that might prevent it from being
rebound correctly on a restore. The namespace and name within the
claimRef should be preserved in order to ensure that the PV is claimed
by the correct PVC on restore.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remap PVs claimRef.namespace on relevant restores

When remapping namespaces, any included PVs need to have their claimRef
updated to point remapped namespaces to the new namespace name in order
to be bound to the correct PVC.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Update tests and ensure claimRef namespace remaps

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remove lowercased uid field from unstructured PV

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix issues that prevented PVs from being restored

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add changelog

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Dynamically reprovision volumes without snapshots

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Update test for lower case uid field

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remove stray debugging print statement

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix typo, remove extra code, add tests.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-10-15 16:57:43 -07:00
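
For illustration only, the core idea sketched with apimachinery's unstructured helpers: clear only the claimRef's uid so the PV still binds to the same PVC, and rewrite the claimRef namespace when the restore remaps namespaces. This is a simplified sketch, not the exact restore code.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// resetClaimRef removes only the uid from a PV's claimRef, leaving the
// namespace and name in place so the PV rebinds to the same PVC after
// restore, and rewrites the namespace when the restore remaps it.
func resetClaimRef(pv *unstructured.Unstructured, namespaceMap map[string]string) error {
	unstructured.RemoveNestedField(pv.Object, "spec", "claimRef", "uid")

	ns, found, err := unstructured.NestedString(pv.Object, "spec", "claimRef", "namespace")
	if err != nil || !found {
		return err
	}
	if target, ok := namespaceMap[ns]; ok {
		return unstructured.SetNestedField(pv.Object, target, "spec", "claimRef", "namespace")
	}
	return nil
}

func main() {
	pv := &unstructured.Unstructured{Object: map[string]interface{}{
		"spec": map[string]interface{}{
			"claimRef": map[string]interface{}{
				"namespace": "old-ns", "name": "data", "uid": "1234",
			},
		},
	}}
	_ = resetClaimRef(pv, map[string]string{"old-ns": "new-ns"})
	fmt.Println(pv.Object["spec"]) // claimRef keeps name, new namespace, no uid
}
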
Nolan Brubaker
2b47ab2c7a Add initial instructions for releasing plugins (#2952)
* Add initial instructions for releasing plugins

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Document that goreleaser isn't needed

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add link from core release instructions to plugins

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add notes about updating compatibility matrix

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add velero install note

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Address review feedback

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remove anchor link to table, since it didn't work

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-10-13 16:18:05 -07:00
Alay Patel
467f5fb723 create CRB with velero-<namespace> (#2886)
* create CRB with velero-<namespace>

This will allow creating multiple instances of velero,
across two different namespaces

Signed-off-by: Alay Patel <alay1431@gmail.com>

* add changelog

Signed-off-by: Alay Patel <alay1431@gmail.com>

* add package var DefaultVeleroNamespace and use it wherever needed

Signed-off-by: Alay Patel <alay1431@gmail.com>
2020-10-13 16:13:42 -07:00
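
For illustration only, the naming scheme is simple enough to show directly (the helper name is illustrative, not Velero's):

package main

import "fmt"

// clusterRoleBindingName derives a per-namespace ClusterRoleBinding name
// so that two Velero installs in different namespaces do not collide.
func clusterRoleBindingName(namespace string) string {
	return fmt.Sprintf("velero-%s", namespace)
}

func main() {
	fmt.Println(clusterRoleBindingName("velero"))       // velero-velero
	fmt.Println(clusterRoleBindingName("velero-prod"))  // velero-velero-prod
}
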
Bridget McErlean
60905d36c3 Auto assign reviewers when PR is ready for review (#3006)
The workflow that we are using for auto-assigning reviewers to a PR did
not cover the event when a draft PR is marked as ready for review. This
change adds the `ready_for_review` activity type to the list of types to
use for triggering the workflow.

This change also fixes a typo in one of the other listed types
(`synchronize`).
See the [docs for more information](https://docs.github.com/en/free-pro-team@latest/actions/reference/events-that-trigger-workflows#pull_request_target).

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-10-13 19:06:47 -04:00
Bridget McErlean
704bf01fab Check existing remote branches in release script (#2951)
The command to check for an existing release branch only checked for
local branches. We should be considering both local and remote branches
before cherry-picking commits for the new release.

This change checks for existing local and remote release branches and
creates or updates them accordingly.
* If a remote branch exists, but a local branch does not, checkout the
  remote branch and track it.
* If the remote branch and local branch exists, checkout the local
  branch and ensure that the latest commits from the remote are pulled.
* Otherwise, if the remote branch does not exist, create it locally if
  needed.

This also handles the case where an existing release branch may be
tracked in multiple remotes as the remote to use is explicitly stated.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-10-13 13:02:21 -07:00
Bridget McErlean
fb76a8fe33 Include --validate=false in upgrade instructions (#2969)
We instruct users to update the CRDs when upgrading to 1.4 and 1.5 which
involves using `kubectl apply` to apply the CRD configuration. The CRD
configuration generated by `velero install` includes fields which are
not valid when running Kubernetes v1.14 or earlier. We instruct users to
work around this when doing a customised velero install, but not when
upgrading to newer versions. This change updates the upgrade
instructions for v1.4 and v1.5 to include the use of `--validate=false`
flag when running `kubectl apply`.

See #2077 and #2311 for more context.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-10-13 13:00:36 -07:00
Ashish Amarnath
c24f2baf0d 📖 document restic limitation of backing only pod volumes (#2976)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-10-13 12:58:36 -07:00
Gábor Lipták
3fb57c6b2e Bump Go to 1.15 (#2974)
Signed-off-by: Gábor Lipták <gliptak@gmail.com>
2020-10-13 12:42:06 -07:00
Antony S Bett
35d25c81ec Fix BSL controller to avoid invoking init() on all BSLs regardless of ValidationFrequency (#2992)
Signed-off-by: Bett, Antony <antony.bett@dell.com>
2020-10-13 12:10:32 -07:00
Carlisia Thompson
c6aa54a009 Fix version cmd getting nil pointer (#2996)
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-10-12 16:17:10 -04:00
Carlisia Thompson
e69fac153b Centralize + rename controller names and list (#2936)
* Centralize + rename controller names and list

Signed-off-by: Carlisia <carlisia@vmware.com>

* Rename file

Signed-off-by: Carlisia <carlisia@vmware.com>

* Reset restic-repo name

Signed-off-by: Carlisia <carlisia@vmware.com>

* Reset gc controller name

Signed-off-by: Carlisia <carlisia@vmware.com>
2020-10-06 13:58:56 -04:00
mickkael
228b474859 Allow Timezone change in the container (#2944)
* Allow Timezone change in the container

Allow Timezone change by specifying the env TZ in the deployment manifest
Signed-off-by: mickkael <19755421+mickkael@users.noreply.github.com>

* Change log for 2944

Signed-off-by: mickkael <19755421+mickkael@users.noreply.github.com>
2020-10-06 13:58:16 -04:00
Steph Bman
a841167ee0 Update ROADMAP.md (#2986)
Updated the roadmap with details from the top-ranked items in the proposed 1.6 release stack rank
2020-10-06 12:43:13 -04:00
Scott Seago
d820bc5e72 restore proper lowercase/plural CRD resource (#2949)
* restore proper lowercase/plural CRD resource

This commit restores the proper resource string
"customresourcedefinitions" for CRD. The prior change to
"CustomResourceDefinition" was made because this was being used
in another place to populate the CRD "Kind" field in
remap_crd_version_action.go -- there, just use the correct Kind
string instead of pulling from Resource.

Signed-off-by: Scott Seago <sseago@redhat.com>

* add changelog

Signed-off-by: Scott Seago <sseago@redhat.com>
2020-10-02 09:48:12 -04:00
Michael Michael
3867d1f434 Stephanie Bauman is leaving the velero project (#2985)
* Delete 05-stephanie-bauman.md

* Delete stephanie-bauman.png

* Update MAINTAINERS.md
2020-10-02 08:12:23 -04:00
Bridget McErlean
ea23d28192 Allow remote for release process to be configured (#2950)
The release script assumes that the remote for the vmware-tanzu/velero
repository is called `upstream`. It may be the case that this remote is
configured to use a different name. This change updates the script to
allow the remote name being used to be configured by setting the
environment variable `REMOTE` before running the script. If the variable
is not set, the remote defaults to `upstream`.

The release instructions have also been updated to reflect this change.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-09-24 14:29:46 -07:00
Bridget McErlean
d4e12d5f4a Improve release docs following v1.5.1 release (#2954)
This change addresses some issues in the documentation and scripts that
were found during the v1.5.1 release:
* Fix the path to the changelog script in the Makefile
* Fix the path to the pre-release TOC in the docs
* Improve the instructions for creating/updating the upgrade
  instructions page.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-09-24 13:00:10 -07:00
Jonas Rosland
c143912738 Fix adopters logos (#2968)
Signed-off-by: jonasrosland <jrosland@vmware.com>
2020-09-23 15:35:00 -07:00
Nolan Brubaker
e0dbbc747f Don't attempt to publish docker images on forks (#2953)
* Don't attempt to publish docker images on forks

When the Main CI workflow runs on a fork, the docker push step will
always fail because the appropriate secrets are missing. This is
annoying at best and causes CI on forks to be untrustworthy at worst.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Use single quotes for string, as github expects

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-09-22 15:04:50 -07:00
Jonas Rosland
d4b017d4d6 Add Velero Office Hours info (#2962)
Signed-off-by: jonasrosland <jrosland@vmware.com>
2020-09-22 10:06:36 -04:00
Nolan Brubaker
2fa96d0839 Fix 'subcommand required' error w/ cobra upgrade (#2947)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-09-17 14:08:26 -07:00
Nolan Brubaker
b2ff7e6c11 v1.5 blog post (#2940)
* Blog post announcing Velero 1.5

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* Remove hardcoded deploy preview URL

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Remove base URL entirely

Since there's not really an easy way to use the preview URL environment
variables in the netlify.toml, remove the baseURL argument entirely
from the build command.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Update blog post date and expected tag link

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

Co-authored-by: Ashish Amarnath <ashisham@vmware.com>
2020-09-16 17:56:47 -04:00
Nolan Brubaker
87d86a45a6 Add changelog and docs for v1.5 release (#2941)
* Add changelog and docs for v1.5 release

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix markdown indentation

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix URLs with patch version

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Fix example link

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-09-16 17:17:30 -04:00
Carlisia Thompson
6e20eaaba8 Spruce up release instructions and release scripts (#2931)
* Update tag-release script

Signed-off-by: Carlisia <carlisia@vmware.com>

* Reorg release instructions

Signed-off-by: Carlisia <carlisia@vmware.com>

* Move "troubleshooting" to proper section

Signed-off-by: Carlisia <carlisia@vmware.com>

* Better formatting

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fix sorcery

Signed-off-by: Carlisia <carlisia@vmware.com>

* Address code review

Signed-off-by: Carlisia <carlisia@vmware.com>

* More code reviews

Signed-off-by: Carlisia <carlisia@vmware.com>
2020-09-16 16:12:09 -04:00
Carlisia Thompson
afa552ca67 Fix url (#2933)
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-09-11 17:26:56 -04:00
Carlisia Thompson
6f374b5709 Documentation for maintainers (#2932)
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-09-11 16:53:33 -04:00
Carlisia Thompson
062a598d8e Update upgrading instructions to right version (#2930)
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-09-11 07:57:38 -07:00
Ashish Amarnath
cfc4651078 🏃‍♂️ add shortnames for CRDs (#2911)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-09-10 12:28:17 -07:00
Bridget McErlean
d85d785cf0 Remove "exec restore hooks coming soon" line (#2929)
Now that Exec restore hooks have been added in #2804 and are available
in 1.5.0-rc1, we can remove the line that states that they are coming
soon.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-09-10 12:25:24 -07:00
Ashish Amarnath
5caa97f335 📖 Fixing typos and markdown formatting (#2917)
* 📖 fix typo in resource-filtering page

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* 📖 fix markdown formatting

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-09-10 12:04:36 -07:00
Ashish Amarnath
6d729a90ca 🐛 fix 1.5 upgrade instructions (#2926)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-09-10 13:46:32 -04:00
Andrew Reed
e53369d509 Document exec restore hooks (#2896)
Signed-off-by: Andrew Reed <andrew@replicated.com>
2020-09-10 11:24:10 -04:00
Carlisia Thompson
b60e6ff21e v1.5.0-rc.1 release (#2921)
* v1.5.0-rc.1 release

Signed-off-by: Carlisia <carlisia@vmware.com>

* Reviews

Signed-off-by: Carlisia <carlisia@vmware.com>

* Re-generate docs

Signed-off-by: Carlisia <carlisia@vmware.com>
2020-09-09 16:25:47 -07:00
Bridget McErlean
543678140b Update links to point to main branch (#2915)
A number of links still pointed to the old master branch and resulted in
404s. This updates those links to point to the new main branch.

Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-09-09 12:11:57 -07:00
Bridget McErlean
696117365f Add Bridget to list of maintainers (#2916)
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
2020-09-09 12:04:36 -07:00
Ashish Amarnath
44306e537e 📖 add restore hooks doc (#2862)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-09-08 17:36:31 -07:00
JenTing Hsiao
cd31141b93 Show format version on velero backup describe (#2901)
* Show format version on velero backup describe

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>

* Add changelog

Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
2020-09-08 16:08:56 -04:00
Andrew Reed
0547b1d945 Restore hooks exec (#2804)
* Exec hooks in restored pods

Signed-off-by: Andrew Reed <andrew@replicated.com>

* WaitExecHookHandler implements ItemHookHandler

This required adding a context.Context argument to the ItemHookHandler
interface which is unused by the DefaultItemHookHandler implementation.
It also means passing nil for the []ResourceHook argument since that
holds BackupResourceHook.

Signed-off-by: Andrew Reed <andrew@replicated.com>

* WaitExecHookHandler unit tests

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Changelog and go fmt

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Fix double import

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Default to first container in pod

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Use constants for hook error modes in tests

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Revert to separate WaitExecHookHandler interface

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Negative tests for invalid timeout annotations

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Rename NamedExecRestoreHook to PodExecRestoreHook

Also make field names more descriptive.

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Cleanup test names

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Separate maxHookWait and add unit tests

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Comment on maxWait <= 0

Also log at info level when the container that hooks should execute in is not running.
Also add the context error to hooks-not-executed errors.

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Remove log about default for invalid timeout

There is no default wait or exec timeout.

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Linting

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Fix log message and rename controller to podWatcher

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Comment on exactly-once semantics for handler

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Fix logging and comments

Use field logger for pod in handler.
Add comment about pod changes in unit tests.
Use kube util NamespaceAndName in messages.

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Fix maxHookWait

Signed-off-by: Andrew Reed <andrew@replicated.com>
2020-09-08 11:33:15 -07:00
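
For illustration only, one detail from the list above sketched in Go: maxHookWait. The sketch assumes it simply returns the longest per-hook wait timeout and treats a non-positive timeout as "wait indefinitely", matching the "maxWait <= 0" bullet; the real implementation may differ.

package main

import (
	"fmt"
	"time"
)

// maxHookWait returns how long the pod watcher should wait for restore
// exec hooks to become runnable. Assumption for this sketch: a
// non-positive per-hook wait timeout means "no limit", and a result of
// 0 is interpreted as waiting indefinitely.
func maxHookWait(waitTimeouts []time.Duration) time.Duration {
	var max time.Duration
	for _, t := range waitTimeouts {
		if t <= 0 {
			return 0 // at least one hook wants to wait indefinitely
		}
		if t > max {
			max = t
		}
	}
	return max
}

func main() {
	fmt.Println(maxHookWait([]time.Duration{30 * time.Second, 2 * time.Minute})) // 2m0s
	fmt.Println(maxHookWait([]time.Duration{30 * time.Second, 0}))               // 0s (no limit)
}
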
Jonas Rosland
a179ae01ca Fix for docs redirects (#2895)
* Fixing redirects

Signed-off-by: jonasrosland <jrosland@vmware.com>

* Fix netlify config

Signed-off-by: jonasrosland <jrosland@vmware.com>

* Add previous redirects

Signed-off-by: jonasrosland <jrosland@vmware.com>

* Change netlify publish path

Signed-off-by: jonasrosland <jrosland@vmware.com>

* Add new redirect for restic

Signed-off-by: jonasrosland <jrosland@vmware.com>
2020-09-02 14:20:46 -07:00
Carlisia Thompson
be1cd03023 Refactor BSL related code (#2870)
* Refactor BSL related code

Signed-off-by: Carlisia <carlisia@vmware.com>

* Increase # of max concurrent reconciles

Signed-off-by: Carlisia <carlisia@vmware.com>

* Clean up for better, more precise interfaces

Signed-off-by: Carlisia <carlisia@vmware.com>

* Minor clean up - code reviews

Signed-off-by: Carlisia <carlisia@vmware.com>

* Address code review

Signed-off-by: Carlisia <carlisia@vmware.com>

* Add import and fix CI

Signed-off-by: Carlisia <carlisia@vmware.com>
2020-09-02 13:12:37 -04:00
Ashish Amarnath
b5edac3c83 📖 update restore api types with init container restorehooks (#2855)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-09-02 13:11:57 -04:00
Pawan Prakash Sharma
debcbd2f8e fix: rename the PV if VolumeSnapshotter has modified the PV name (#2835)
* fix: rename the PV if VolumeSnapshotter has modified the PV name

When the VolumeSnapshotter sets the PV name via SetVolumeID and the PV is
not present in the cluster, Velero does not rename the PV. This causes
the PVC to end up in the Lost state, as the PVC points to the old PV but the PV object
has been renamed by the VolumeSnapshotter.

Signed-off-by: Pawan <pawan@mayadata.io>

* adding a test case for pv rename

Signed-off-by: Pawan <pawan@mayadata.io>
2020-09-01 14:25:13 -07:00
Carlisia Campos
c952932f1b Migrate ServerStatusRequest controller and resource to kubebuilder (#2838)
* Convert ServerStatusRequest controller to controller-runtime

Signed-off-by: Carlisia <carlisia@vmware.com>

* Add select stm

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fixed status patch bug

Signed-off-by: Carlisia <carlisia@vmware.com>

* Add mgr start

Signed-off-by: Carlisia <carlisia@vmware.com>

* Trying to sync

Signed-off-by: Carlisia <carlisia@vmware.com>

* Clean async now

Signed-off-by: Carlisia <carlisia@vmware.com>

* Clean up + move context out

Signed-off-by: Carlisia <carlisia@vmware.com>

* Bug: not closing the channel

Signed-off-by: Carlisia <carlisia@vmware.com>

* Clean up some tests

Signed-off-by: Carlisia <carlisia@vmware.com>

* Much better way to fetch an update using a backoff loop

Signed-off-by: Carlisia <carlisia@vmware.com>

* Even better way to retry: use apimachinery lib

Signed-off-by: Carlisia <carlisia@vmware.com>

* Refactor controller + add test

Signed-off-by: Carlisia <carlisia@vmware.com>

* partially fix unit tests

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* Fix and add tests

Signed-off-by: Carlisia <carlisia@vmware.com>

* Add changelog

Signed-off-by: Carlisia <carlisia@vmware.com>

* Add ability to disable the controller + cleanups

Signed-off-by: Carlisia <carlisia@vmware.com>

* Fix bug w/ disabling controllers + fix test + clean up

Signed-off-by: Carlisia <carlisia@vmware.com>

* Move role.yaml to the correct folder

Signed-off-by: Carlisia <carlisia@vmware.com>

* Add sample serverstatusrequest.yaml

Signed-off-by: Carlisia <carlisia@vmware.com>

* Add requeue + better formatting

Signed-off-by: Carlisia <carlisia@vmware.com>

* Increase # of max concurrent reconciles

Signed-off-by: Carlisia <carlisia@vmware.com>

Co-authored-by: Ashish Amarnath <ashisham@vmware.com>
2020-09-01 14:15:23 -07:00
Nolan Brubaker
aed504a0fd Fix git commands and add dry run mode as default to the tag-release.sh script. (#2875)
* Fix git commands and add missing comment

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add dry-run mode to tag-release.sh

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Make if statement formatting more consistent

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add documentation for dry-run mode

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Better support for dry-run

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-09-01 12:59:05 -07:00
JenTing Hsiao
1513674548 fix EnableAPIGroupVersions output log format (#2882)
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
2020-09-01 13:50:44 -04:00
Jonas Rosland
9764845530 Add CII Best Practices badge to README (#2880)
Signed-off-by: jonasrosland <jrosland@vmware.com>
2020-08-31 17:33:03 -04:00
Jonas Rosland
1dcaa1bf75 Rename security policy file to show up accurately in the GitHub UI (#2879)
Signed-off-by: jonasrosland <jrosland@vmware.com>
2020-08-31 12:10:09 -04:00
Abigail McCarthy
ac50b457ad add hugo default TOC (#2866)
* add hugo default TOC

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* point contributors to style guide (#2872)

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* add hugo default TOC

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* remove unused links

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2020-08-28 14:06:11 -04:00
Abigail McCarthy
9cf35d9ba7 add new table shortcode (#2865)
* add new table shortcode

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* fix typo

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* update shortcode comment

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* fix messed up table

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* fixing 2 more tables

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* update 1.4 supported providers

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* add note about links in tables

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2020-08-28 12:03:43 -04:00
Abigail McCarthy
36a4c28f61 point contributors to style guide (#2872)
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2020-08-26 17:13:16 -07:00
Ashish Amarnath
474de24d48 📖 update documentation to push container images using docker buildx (#2854)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-25 15:40:01 -07:00
Abigail McCarthy
648582c85e Update release checklist to include more info around blog posts and r… (#2837)
* Update release checklist to include more info around blog posts and release announcements

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* updating links

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* update from review

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2020-08-25 15:37:22 -04:00
Abigail McCarthy
839a9646aa update docs to match style guide (#2861)
* update docs to match style guide

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* update web site guide

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2020-08-25 10:02:21 -07:00
Dylan Murray
7369e4d99e Check for errors on restic backup command (#2863)
* Check for errors on restic backup command

Signed-off-by: Dylan Murray <dymurray@redhat.com>

* Add changelog

Signed-off-by: Dylan Murray <dymurray@redhat.com>
2020-08-25 08:51:50 -07:00
Carlisia Campos
03bd6c85c8 Better var names (#2848)
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-08-24 15:14:07 -04:00
Nolan Brubaker
f5d10c5474 Merge pull request #2853 from ashish-amarnath/fix-server-version
🐛  fix passing LDFLAGS across build stages
2020-08-21 18:18:34 -04:00
Ashish Amarnath
20ac603747 🐛 fix passing LDFLAGS across build stages
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-21 14:20:37 -07:00
Nolan Brubaker
45168087f1 v1.5.0-beta.1 changelog and docs (#2849)
* Add changelogs for v1.5.0-beta.1

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add v1.5-pre docs

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-08-21 12:07:24 -07:00
Nolan Brubaker
97c14e34be Merge pull request #2844 from ashish-amarnath/1.5-upgrade-instructions
📖  document velero v1.5 upgrade instructions
2020-08-21 12:35:39 -04:00
Nolan Brubaker
718a94ad05 Invoke DeleteItemActions on backup deletion (#2815)
* Add serving and listing support

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-08-20 17:24:29 -07:00
Ashish Amarnath
89d4e4417f 📖 document velero v1.5 upgrade instructions
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-20 16:21:36 -07:00
Abigail McCarthy
71fd7cc5a7 add index files to api types folder (#2839)
* add index files to api types folder

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* updating to using cascade

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2020-08-20 14:30:18 -07:00
Ashish Amarnath
d33982b811 🏃‍♂️remove go mod download from Dockerfile for build speedup (#2842)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-19 13:45:08 -07:00
Nolan Brubaker
e0098d8a69 Update the number of reviewers for PRs/add Bridget (#2843)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-08-19 12:42:43 -07:00
Nolan Brubaker
e9ece0f7b5 Implement DeleteItemAction plugin support (#2808)
* Add DeleteItemAction struct & protobuf definition

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-08-18 12:16:26 -07:00
Steph Bman
d1a1d063e1 Create Velero security policy (#2797)
* Create securitypolicy.md

Creating the Security Policy documentation for Project Velero.
2020-08-18 11:28:14 -07:00
Ashish Amarnath
e6eb5372ea update image link in readme (#2827)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-18 11:06:16 -07:00
JenTing Hsiao
62b2a0e17f The EnableCSI flag on velero backup describe command only (#2817)
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
2020-08-18 11:05:45 -07:00
Ashish Amarnath
9ed43f96c1 📖 update default-volumes-to-restic exclusion list (#2828)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-18 11:05:12 -07:00
Imran Pochi
52b6838004 docs: add metadata to resource-filtering.md (#2832)
This metadata is required by Hugo to discover the content for the
documentation website; without it, a "page not found" error is shown to the
viewer.

Fixes: #2831

Signed-off-by: Imran Pochi <imran@kinvolk.io>
2020-08-18 11:02:49 -07:00
Benoit Gagnon
5d2c9e2ba1 Override logrus.ErrorKey when json logging is enabled (#2830)
* override logrus.ErrorKey when json logging is enabled

Signed-off-by: Benoit Gagnon <benoit.gagnon@ubisoft.com>

* document the logrus.ErrorKey override

Signed-off-by: Benoit Gagnon <benoit.gagnon@ubisoft.com>

* add changelog entry

Signed-off-by: Benoit Gagnon <benoit.gagnon@ubisoft.com>
2020-08-18 13:53:45 -04:00
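
For context, logrus.ErrorKey is the field name logrus uses when an error is attached with WithError. The sketch below shows the kind of override the commit describes; the replacement key name is illustrative, since the commit does not state which key Velero chose.

package main

import (
	"errors"

	"github.com/sirupsen/logrus"
)

func main() {
	logger := logrus.New()
	logger.SetFormatter(&logrus.JSONFormatter{})

	// When JSON logging is enabled, rename the field that WithError()
	// populates. "error.message" is an illustrative choice, not
	// necessarily the key Velero uses.
	logrus.ErrorKey = "error.message"

	logger.WithError(errors.New("backup failed")).Error("example")
	// {"error.message":"backup failed","level":"error","msg":"example",...}
}
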
runzexia
3a4e441af8 add kindfor func to get apiresource from gvk (#2764)
* add kindfor func to get apiresource from gvk

Signed-off-by: RyderXia <ryder.xia@sap.com>

* impl interface & changelog

Signed-off-by: RyderXia <ryder.xia@sap.com>
2020-08-17 15:52:33 -07:00
Tony Batard
c663ce15ab Hugo migration (#2720)
* Move files to a Hugo structure

Signed-off-by: Tony Batard <tbatard@pivotal.io>
2020-08-13 09:09:15 -07:00
Nolan Brubaker
681123596f Checkout code on builder image workflow (#2816)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-08-12 17:55:53 -07:00
Phuong N. Hoang
14170b52a8 Enhance Backup to backup resources in specific order. (#2724)
Signed-off-by: Phuong Hoang <phuong.n.hoang@dell.com>

Co-authored-by: Phuong Hoang <phuong.n.hoang@dell.com>
2020-08-12 17:17:31 -07:00
Steph Bman
dd2d040fcf Create restore-hooks_product-requirements.md (#2699)
Restore Hooks Design Proposal

Signed-off-by: Stephanie Bauman <bstephanie@vmware.com>
Co-authored-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-12 16:13:52 -07:00
Ashish Amarnath
d4bbd7b817 🐛 Patch generated yaml for restore CRD as a hacky workaround (#2814)
* 🐛 Patch generated yaml for restore CRD as a hacky workaround

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* changelog

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-12 18:43:29 -04:00
Jason Scarano
827d5d34f5 Improve and clarify cmd help documentation, flags, and examples (#2736)
* capitalize `backup create` cmd comments & examples

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

* update copyright and capitalize flags and comments

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

* update copyright and capitalize flags and comments

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

* update copyright and capitalize flags and comments

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

* update backuplocation, restic, & restore cmd doc

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

* fix local typo

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

* update copyrights & capitalize pflag/help strings

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

* update copyright in utils dir

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

* Revert "update copyright in utils dir"

This reverts commit d116efe3a3.

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

* revert copyright changes

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

* restore missing file

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

* revert copyright changes

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

* add cacert flag back

Signed-off-by: Jason Scarano <scaranoj@vmware.com>

Co-authored-by: Carlisia Campos <carlisia@vmware.com>
2020-08-12 18:13:44 -04:00
Abigail McCarthy
98a09bcbc5 add note about windows support (#2806)
* add note about windows support

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* adding to 1.4 docs and adjusting wording to be more clear

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2020-08-12 18:03:36 -04:00
Ashish Amarnath
9eca0fcbff Pass default-volumes-to-restic flag from create schedule to backup (#2776)
* Pass default-volumes-to-restic flag from create schedule to backup

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-12 12:09:07 -07:00
Nolan Brubaker
70e9391278 Add design doc for DeletionAction plugins (#2713)
* Add design doc for DeletionAction plugins

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-08-12 11:21:55 -07:00
Carlisia Campos
d7d6a85e46 Add an auto-rebase workflow (#2813)
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-08-12 10:27:41 -07:00
Nolan Brubaker
7914138dd7 Merge pull request #2802 from ashish-amarnath/fix-restic-restore
🐛 Supply command to run restic-wait init container
2020-08-11 14:37:19 -04:00
Ashish Amarnath
e391e43192 🐛 Supply command to run restic-wait init container
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-11 11:05:30 -07:00
Nolan Brubaker
5b28d70e49 Switch event to use pull_request_target (#2807)
The pull_request_target event is like pull_request, but runs in the
context of the target repo (Velero, in this case), instead of the fork.
This allows us to use the GitHub token secret as expected.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-08-11 10:57:16 -07:00
Ashish Amarnath
a68e5fc330 🏃‍♂️ Setup crd validation github action on k8s versions (#2805)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-11 10:45:12 -07:00
Ashish Amarnath
5d6da6517b Implement restore hooks injecting init containers into pod spec (#2787)
*  Implement restore hooks injecting init containers into pod spec

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-11 10:38:44 -07:00
Ashish Amarnath
9b9bb62968 🐛 Make init and exec restore hooks as omitempty in restore hookSpec (#2793)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-08-11 10:13:41 -07:00
Abigail McCarthy
4364a813c1 update docs to include cpu/memory defaults for restic (#2772)
* update docs to include cpu/memory defaults

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* fixes from review

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* updates from review

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* update to use kubectl patch command

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>

* update typos and add changes to 1.4 docs

Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
2020-08-10 09:38:35 -07:00
Nolan Brubaker
6edf279bd8 Merge pull request #2803 from vmware-tanzu/dependabot/bundler/site/kramdown-2.3.0
Bump kramdown from 2.2.1 to 2.3.0 in /site
2020-08-10 12:21:40 -04:00
dependabot[bot]
1386a85657 Bump kramdown from 2.2.1 to 2.3.0 in /site
Bumps [kramdown](https://github.com/gettalong/kramdown) from 2.2.1 to 2.3.0.
- [Release notes](https://github.com/gettalong/kramdown/releases)
- [Changelog](https://github.com/gettalong/kramdown/blob/master/doc/news.page)
- [Commits](https://github.com/gettalong/kramdown/commits)

Signed-off-by: dependabot[bot] <support@github.com>
2020-08-08 01:14:58 +00:00
Alex Punnen
0b87fbbde8 Update minio.md (#2799)
Added an option for --snapshot-location-config region, as otherwise it will give an error during backup - https://github.com/vmware-tanzu/velero-plugin-for-aws/issues/12

Signed-off-by: Alex Punnen <alexcpn@gmail.com>
2020-08-07 14:35:17 -07:00
Mike Tritabaugh
4e2f4bb6af Add resource filtering page (#2771)
* added resource filtering page

Signed-off-by: mtritabaugh <mtritabaugh@vmware.com>

* added resource filtering page to v1.4

Signed-off-by: mtritabaugh <mtritabaugh@vmware.com>

* change  to  per style guide

Signed-off-by: mtritabaugh <mtritabaugh@vmware.com>
2020-08-06 15:09:59 -07:00
Benoit Gagnon
0e8a7a23cb always use groupResource.String() when logging (fixes #2795) (#2796)
Signed-off-by: Benoit Gagnon <benoit.gagnon@ubisoft.com>
2020-08-06 10:10:59 -07:00
Piper Dougherty
19e65689ef Add the ability to set the allowPrivilegeEscalation property on the Restic restore helper via plugin ConfigMap (#2792)
* Add the ability to set the `allowPrivilegeEscalation` security context attribute on the Restic restore helper init container.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>

* Add changelog.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>

* Fix old tests and add tests for new allowPrivilegeEscalation config option.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>

* Correct spelling in changelog.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>

* Switch to boolptr type.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>

* Reorder imports for sanity.

Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>
2020-08-06 13:08:36 -04:00
Rob Reus
6ac0398c7b Reverting change on 1.4 docs and re-applying to main docs (#2791)
Signed-off-by: Rob Reus <rob@devrobs.nl>
2020-08-04 14:11:52 -07:00
Rob Reus
db139cf07c Refactor image builds to use buildx for multi arch image building (#2754)
* Refactor image builds to use buildx for multi arch image building

Signed-off-by: Rob Reus <rob@devrobs.nl>

* Adding image build sanity checks to Makefile

Signed-off-by: Rob Reus <rob@devrobs.nl>

* Making locally building of docker images possible

Signed-off-by: Rob Reus <rob@devrobs.nl>

* Adding docs on building container images using buildx

Signed-off-by: Rob Reus <rob@devrobs.nl>

* Adding changelog and implementing feedback from PR

Signed-off-by: Rob Reus <rob@devrobs.nl>

* Making GOPROXY used in the build containers configurable

Signed-off-by: Rob Reus <rob@devrobs.nl>
2020-08-04 11:40:05 -07:00
Nolan Brubaker
4e05e81ca2 Merge pull request #2786 from skriss/upd-build-badge
update CI badge on README
2020-08-04 13:55:26 -04:00
Steve Kriss
c5a2137538 update CI badge on README
Signed-off-by: Steve Kriss <krisss@vmware.com>
2020-08-03 12:53:49 -06:00
JenTing Hsiao
1fdb647c7f Add cacert flag for velero backup-location create (#2778)
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
2020-07-30 11:33:45 -07:00
Ashish Amarnath
9de644f1a4 Add types to implement restore hooks (#2761)
* Add types to implement restore hooks

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-07-29 13:29:40 -07:00
Carlisia Campos
5bafa9b317 Add JenTing Hsiao (#2768)
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-07-29 12:41:28 -07:00
Carlisia Campos
fcf7f27967 Moved FAQ (#2769)
* Moved FAQ

Signed-off-by: Carlisia <carlisia@vmware.com>

* Better entry point

Signed-off-by: Carlisia <carlisia@vmware.com>
2020-07-29 10:16:32 -07:00
Ashish Amarnath
028818a053 exclude vols mounting secrets and configmaps from defaultVolumesToRestic (#2762)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-07-27 20:27:49 -07:00
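
For illustration only, a minimal sketch of the exclusion using the core/v1 volume types, simplified relative to Velero's actual helper: volumes backed by Secrets or ConfigMaps are skipped because their contents already live in the cluster's API server.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// volumesToBackUp returns the names of pod volumes that restic should
// back up when defaultVolumesToRestic is enabled, skipping volumes
// backed by Secrets or ConfigMaps as described in the commit above.
func volumesToBackUp(pod *corev1.Pod) []string {
	var names []string
	for _, v := range pod.Spec.Volumes {
		if v.Secret != nil || v.ConfigMap != nil {
			continue // contents already stored in the API server; skip restic
		}
		names = append(names, v.Name)
	}
	return names
}

func main() {
	pod := &corev1.Pod{Spec: corev1.PodSpec{Volumes: []corev1.Volume{
		{Name: "data"},
		{Name: "creds", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{}}},
	}}}
	fmt.Println(volumesToBackUp(pod)) // [data]
}
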
Nolan Brubaker
94872ea2fc Add github token so action can actually auth (#2766)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-07-27 14:35:26 -07:00
Ashish Amarnath
8e672408a2 📖 update VSC DeletionPolicy docs to be inline with code (#2765)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-07-27 13:42:38 -07:00
Nolan Brubaker
7005879f3f Fix YAML in auto-assign GitHub Action (#2759)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-07-24 14:08:09 -07:00
Ashish Amarnath
bc25e789e0 Add constants for restore hook annotation keys (#2750)
* add annotation key constants for restore hooks

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-07-24 13:05:27 -07:00
Nolan Brubaker
cffb639380 Auto-assign github PR reviewers and assignee (#2758)
Use a GitHub Action to automatically assign GitHub PRs to the author, as
well as add reviewers.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-07-24 12:58:44 -07:00
Andrew Reed
9011b192e9 Add hooks fields to restore context (#2755)
* Add hooks fields to restore context

Signed-off-by: Andrew Reed <andrew@replicated.com>

* Changelog

Signed-off-by: Andrew Reed <andrew@replicated.com>
2020-07-24 11:43:44 -07:00
Nolan Brubaker
bbcbde084d Create hook package (#2734)
* Move pkg/backup/item_hook_handler to internal/hoo

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>

* Add internal packages to test target

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-07-22 14:26:14 -07:00
Nolan Brubaker
e83ec79df3 Merge pull request #2751 from ashish-amarnath/fix-boilerplate
Use correct year for copyright
2020-07-22 15:32:24 -04:00
Ashish Amarnath
2636730ef2 fix copyright year
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-07-22 12:11:59 -07:00
Martin Odstrčilík
86efd1577e add support for setting SecurityContext (user, group) for restic restore (#2621)
* add support for setting SecurityContext (user, group) for restic restore

Signed-off-by: Martin Odstrcilik <martin.odstrcilik@gmail.com>
2020-07-22 12:10:25 -07:00
Ashish Amarnath
91ccc4adb2 Add metrics for restic back up operation (#2719)
* add metrics for restic back up operation

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

* changelog

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-07-22 15:07:52 -04:00
Ashish Amarnath
a63a82fcb0 📖 update csi docs to document volumesnapshotclass label (#2741)
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
2020-07-22 11:49:47 -07:00
Thejas Babu
d0d143e119 Add StartTimestamp and CompletionTimestamp in Restore Status (#2748)
Signed-off-by: thejas <thejasb99@gmail.com>
2020-07-22 11:40:39 -07:00
Carlisia Campos
c27c3cd56a Fix path + add helpful message (#2740)
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-07-22 11:13:02 -07:00
Nolan Brubaker
84dbd13313 Move style guide to main (#2735)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-07-21 11:30:05 -07:00
Nolan Brubaker
caeb4ae404 Update changelogs for v1.4.2 (#2705)
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-07-21 11:14:41 -07:00
Carlisia Campos
e60d9f2821 Remove unneeded auto-generated files (#2732)
Signed-off-by: Carlisia <carlisia@vmware.com>
2020-07-20 15:37:27 -07:00
fvsqr
01e2dcb364 StorageGrid compatibility (#2712)
* remove explicit Accept-Encoding header

For StorageGrid compatibility, the Accept-Encoding header should not be set; otherwise StorageGrid compresses the already-compressed log files, which are only decompressed by the client once

Signed-off-by: fvsqr <48791253+fvsqr@users.noreply.github.com>

* Removed explicit gzip Accept-Encoding header

For StorageGrid compatibility, the Accept-Encoding header should not be set; otherwise StorageGrid compresses the already-compressed log files, which are only decompressed by the client once.
It is unclear how this affects backup endpoints from Azure or GCP

Signed-off-by: fvsqr <48791253+fvsqr@users.noreply.github.com>

* Create 2712-fvsqr

Signed-off-by: fvsqr <48791253+fvsqr@users.noreply.github.com>
2020-07-20 13:11:26 -04:00
Marc Campbell
9189cffb1c Design proposal for Restore hooks (#2465)
* Add design proposal for restore hooks

Signed-off-by: Marc Campbell <marc.e.campbell@gmail.com>

* Add details to restore hooks design

Signed-off-by: Marc Campbell <marc.e.campbell@gmail.com>

* Restore initContainers and requested changes

Change post-restore exec hooks to wait for container running status
instead of pod ready status.
Add separate exec timeout and wait timeouts for post-restore exec hooks.

Signed-off-by: Marc Campbell <marc.e.campbell@gmail.com>

Co-authored-by: Andrew Reed <andrew@replicated.com>
2020-07-20 08:34:16 -07:00
Nolan Brubaker
f42c63af1b Address insensitive language (#2677)
* Change master to main in most uses

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2020-07-17 14:59:51 -07:00
2676 changed files with 105003 additions and 73197 deletions

3
.dockerignore Normal file

@@ -0,0 +1,3 @@
.go/
.go.std/
site/

5
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Velero Q&A
url: https://github.com/vmware-tanzu/velero/discussions/categories/community-support-q-a
about: Have questions about Velero? Please ask them here.

14
.github/auto_assign.yml vendored Normal file

@@ -0,0 +1,14 @@
addReviewers: true
addAssignees: author
# Only require 2 random reviewers.
# TODO expand this to support using reviewGroups
numberOfReviewers: 2
reviewers:
- nrb
- ashish-amarnath
- carlisia
- zubron
- dsu-igeek
- jenting

41
.github/labels.yml vendored Normal file

@@ -0,0 +1,41 @@
area:
- "Cloud/AWS"
- "Cloud/GCP"
- "Cloud/Azure"
- "Design"
- "Plugins"
# Labels that can be applied to PRs with the /kind command
kind:
- "changelog-not-required"
- "tech-debt"
# Works with https://github.com/actions/labeler/
# Below this line, the keys are labels to be applied, and the values are the file globs to match against.
# Anything in the `design` directory gets the `Design` label.
Area/Design:
- design/*
# Anything in the site directory gets the website label *EXCEPT* docs
Website:
- any: ["site/**/*", "!site/content/docs/**/*"]
Documentation:
- site/content/docs/**/*
Dependencies:
- go.mod
# Anything that has plugin infra will be labeled.
# Individual plugins don't necessarily live here, though
Area/Plugins:
- "pkg/plugins/**/*"
has-unit-tests:
- "pkg/**/*_test.go"
has-e2e-tests:
- "test/e2e/**/*"
has-changelog:
- "changelogs/**"

13
.github/pull_request_template.md vendored Normal file

@@ -0,0 +1,13 @@
Thank you for contributing to Velero!
# Please add a summary of your change
# Does your change fix a particular issue?
Fixes #(issue)
# Please indicate you've done the following:
- [ ] [Accepted the DCO](https://velero.io/docs/v1.5/code-standards/#dco-sign-off). Commits without the DCO will delay acceptance.
- [ ] [Created a changelog file](https://velero.io/docs/v1.5/code-standards/#adding-a-changelog) or added `/kind changelog-not-required`.
- [ ] Updated the corresponding documentation in `site/content/docs/main`.

16
.github/workflows/auto_assign_prs.yml vendored Normal file

@@ -0,0 +1,16 @@
name: "Auto Assign PR Reviewers"
# pull_request_target means that this will run on pull requests, but in the context of the base repo.
# This should mean PRs from forks are supported.
on:
pull_request_target:
types: [opened, reopened, ready_for_review]
jobs:
# Automatically assigns reviewers and owner
add-reviews:
runs-on: ubuntu-latest
steps:
- uses: kentaro-m/auto-assign-action@v1.1.1
with:
configuration-path: ".github/auto_assign.yml"
repo-token: "${{ secrets.GITHUB_TOKEN }}"

19
.github/workflows/auto_label_prs.yml vendored Normal file

@@ -0,0 +1,19 @@
name: "Auto Label PRs"
# pull_request_target means that this will run on pull requests, but in the context of the base repo.
# This should mean PRs from forks are supported.
# Because it includes the `synchronize` parameter, any push of a new commit to the HEAD ref of a pull request
# will trigger this process.
on:
pull_request_target:
types: [opened, reopened, synchronize, ready_for_review]
jobs:
# Automatically labels PRs based on file globs in the change.
triage:
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v3
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/labels.yml

86
.github/workflows/crds-verify-kind.yaml vendored Normal file

@@ -0,0 +1,86 @@
name: "Verify Velero CRDs across k8s versions"
on:
pull_request:
# Do not run when the change only includes these directories.
paths-ignore:
- "site/**"
- "design/**"
jobs:
# Build the Velero CLI once for all Kubernetes versions, and cache it so the fan-out workers can get it.
build-cli:
runs-on: ubuntu-latest
steps:
# Look for a CLI that's made for this PR
- name: Fetch built CLI
id: cache
uses: actions/cache@v2
env:
cache-name: cache-velero-cli
with:
path: ./_output/bin/linux/amd64/velero
# The cache key is a combination of the current PR number and a SHA256 hash of the Velero binary
key: velero-${{ github.event.pull_request.number }}-${{ hashFiles('./_output/bin/linux/amd64/velero') }}
# This key controls the prefixes that we'll look at in the cache to restore from
restore-keys: |
velero-${{ github.event.pull_request.number }}-
- name: Fetch cached go modules
uses: actions/cache@v2
if: steps.cache.outputs.cache-hit != 'true'
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Check out the code
uses: actions/checkout@v2
if: steps.cache.outputs.cache-hit != 'true'
# If no binaries were built for this PR, build it now.
- name: Build Velero CLI
if: steps.cache.outputs.cache-hit != 'true'
run: |
make local
# Check the common CLI against all kubernetes versions
crd-check:
needs: build-cli
runs-on: ubuntu-latest
strategy:
matrix:
# Latest k8s versions. There's no series-based tag, nor is there a latest tag.
k8s:
- 1.15.12
- 1.16.15
- 1.17.17
- 1.18.15
- 1.19.7
- 1.20.2
# All steps run in parallel unless otherwise specified.
# See https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows#creating-dependent-jobs
steps:
- name: Fetch built CLI
id: cache
uses: actions/cache@v2
env:
cache-name: cache-velero-cli
with:
path: ./_output/bin/linux/amd64/velero
# The cache key is a combination of the current PR number and a SHA256 hash of the Velero binary
key: velero-${{ github.event.pull_request.number }}-${{ hashFiles('./_output/bin/linux/amd64/velero') }}
# This key controls the prefixes that we'll look at in the cache to restore from
restore-keys: |
velero-${{ github.event.pull_request.number }}-
- uses: engineerd/setup-kind@v0.5.0
with:
image: "kindest/node:v${{ matrix.k8s }}"
- name: Install CRDs
run: |
kubectl cluster-info
kubectl get pods -n kube-system
kubectl version
echo "current-context:" $(kubectl config current-context)
echo "environment-kubeconfig:" ${KUBECONFIG}
./_output/bin/linux/amd64/velero install --crds-only --dry-run -oyaml | kubectl apply -f -
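
For reference, roughly the same verification can be run locally with the kind CLI; this is a sketch, assuming kind and kubectl are installed and using one of the node images from the matrix above:

# Build the Velero CLI, as the build-cli job does.
make local

# Create a throwaway cluster on one of the matrix versions (1.20.2 shown here).
kind create cluster --image "kindest/node:v1.20.2"

# Render the CRDs with the freshly built CLI and apply them, mirroring the "Install CRDs" step.
./_output/bin/linux/amd64/velero install --crds-only --dry-run -oyaml | kubectl apply -f -

# Tear the cluster down when done.
kind delete cluster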

18
.github/workflows/milestoned-issues.yml vendored Normal file

@@ -0,0 +1,18 @@
name: Add issues with a milestone to the milestone's board
on:
issues:
types: [milestoned]
jobs:
automate-project-columns:
runs-on: ubuntu-latest
steps:
- uses: alex-page/github-project-automation-plus@v0.3.0
with:
# Do NOT add PRs to the board, as that's duplication. Their corresponding issue should be on the board.
if: ${{ !github.event.issue.pull_request }}
project: "${{ github.event.issue.milestone.title }}"
column: "To Do"
repo-token: ${{ secrets.GH_TOKEN }}


@@ -0,0 +1,15 @@
name: Move new issues into Triage
on:
issues:
types: [opened]
jobs:
automate-project-columns:
runs-on: ubuntu-latest
steps:
- uses: alex-page/github-project-automation-plus@v0.3.0
with:
project: "Velero Support Board"
column: "New"
repo-token: ${{ secrets.GH_TOKEN }}


@@ -11,5 +11,5 @@ jobs:
uses: actions/checkout@v2
- name: Changelog check
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'changelog-not-required') || contains(github.event.pull_request.labels.*.name, 'Design') || contains(github.event.pull_request.labels.*.name, 'Website') || contains(github.event.pull_request.labels.*.name, 'Documentation'))}}
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'kind/changelog-not-required') || contains(github.event.pull_request.labels.*.name, 'Design') || contains(github.event.pull_request.labels.*.name, 'Website') || contains(github.event.pull_request.labels.*.name, 'Documentation'))}}
run: ./hack/changelog-check.sh
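
A contributor typically satisfies this check by adding a file under changelogs/unreleased named for the PR number and their GitHub handle, as the has-changelog glob and the 2712-fvsqr commit above suggest. A minimal sketch (the PR number and wording are placeholders):

# Create a one-line changelog entry for the PR.
mkdir -p changelogs/unreleased
echo "Remove explicit Accept-Encoding header for StorageGrid compatibility" > changelogs/unreleased/2712-fvsqr
git add changelogs/unreleased/2712-fvsqr

PRs labeled kind/changelog-not-required, Design, Website, or Documentation skip the check, per the condition above.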


@@ -1,14 +1,20 @@
name: Pull Request CI Check
on: [pull_request]
jobs:
build:
name: Run CI
runs-on: ubuntu-latest
steps:
- name: Check out the code
uses: actions/checkout@v2
- name: Check out the code
uses: actions/checkout@v2
- name: Fetch cached go modules
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Make ci
run: make ci
- name: Make ci
run: make ci

20
.github/workflows/pr-codespell.yml vendored Normal file

@@ -0,0 +1,20 @@
name: Pull Request Codespell Check
on: [pull_request]
jobs:
codespell:
name: Run Codespell
runs-on: ubuntu-latest
steps:
- name: Check out the code
uses: actions/checkout@v2
- name: Codespell
uses: codespell-project/actions-codespell@master
with:
# ignore the config/.../crd.go file as it's generated binary data that is edited elsewhere.
skip: .git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico,./config/crd/crds/crds.go
ignore_words_list: iam,aks,ist,bridget,ue
check_filenames: true
check_hidden: true
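
The same check can usually be reproduced locally with the codespell tool, reusing the options from the action configuration above; a sketch, assuming codespell is installed via pip (the pinned version may differ from what the action uses):

pip install codespell

codespell \
  --skip ".git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico,./config/crd/crds/crds.go" \
  --ignore-words-list "iam,aks,ist,bridget,ue" \
  --check-filenames \
  --check-hidden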

20
.github/workflows/prow-action.yml vendored Normal file

@@ -0,0 +1,20 @@
# Adds support for prow-like commands
# Uses .github/labels.yaml to define areas and kinds
name: "Prow github actions"
on:
issue_comment:
types: [created]
jobs:
execute:
runs-on: ubuntu-latest
steps:
- uses: jpmcb/prow-github-actions@v1.1.2
with:
# Only support /kind command for now.
# TODO: before allowing the /lgtm command, see if we can block merging if changelog labels are missing.
prow-commands: "/area
/kind
/cc
/uncc"
github-token: "${{ secrets.GITHUB_TOKEN }}"
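
These prow-style commands are issued as ordinary issue or PR comments. For example, a maintainer could label a pull request from the command line with the GitHub CLI (the PR number below is a placeholder):

# Apply an area label and mark the PR as not needing a changelog entry.
gh pr comment 1234 --body "/area Plugins"
gh pr comment 1234 --body "/kind changelog-not-required"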


@@ -2,7 +2,7 @@ name: build-image
on:
push:
branches: [ master ]
branches: [ main ]
paths:
- 'hack/build-image/Dockerfile'
@@ -12,10 +12,14 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: Build
run: make build-image
# Only try to publish the container image from the root repo; forks don't have permission to do so and will always get failures.
- name: Publish container image
if: github.repository == 'vmware-tanzu/velero'
run: |
docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}


@@ -1,8 +1,8 @@
name: Master CI
name: Main CI
on:
push:
branches: [ master ]
branches: [ main ]
tags:
- '*'
@@ -13,22 +13,36 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.14
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.14
go-version: 1.15
id: go
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v1
with:
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
version: latest
- name: Build
run: make local
- name: Test
run: make test
# Only try to publish the container image from the root repo; forks don't have permission to do so and will always get failures.
- name: Publish container image
if: github.repository == 'vmware-tanzu/velero'
run: |
docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}
./hack/docker-push.sh
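
The QEMU and Buildx steps mirror what a multi-arch build needs locally; a rough equivalent, assuming Docker with the buildx plugin is available:

# Register QEMU binfmt handlers so non-native platforms can be emulated
# (this is what docker/setup-qemu-action does on the runner).
docker run --privileged --rm tonistiigi/binfmt --install all

# Create and select a buildx builder, as docker/setup-buildx-action does.
docker buildx create --use --name velero-builder
docker buildx inspect --bootstrap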

18
.github/workflows/rebase.yml vendored Normal file

@@ -0,0 +1,18 @@
on:
issue_comment:
types: [created]
name: Automatic Rebase
jobs:
rebase:
name: Rebase
if: github.event.issue.pull_request != '' && contains(github.event.comment.body, '/rebase')
runs-on: ubuntu-latest
steps:
- name: Checkout the latest code
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Automatic Rebase
uses: cirrus-actions/rebase@1.3.1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

24
.github/workflows/stale-issues.yml vendored Normal file

@@ -0,0 +1,24 @@
name: "Close stale issues and PRs"
on:
schedule:
# First of every month
- cron: "30 1 * * *"
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: "This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days. If a Velero team member has requested log or more information, please provide the output of the shared commands."
close-issue-message: "This issue was closed because it has been stalled for 5 days with no activity."
days-before-issue-stale: 30
days-before-issue-close: 5
# Disable stale PRs for now; they can remain open.
days-before-pr-stale: -1
days-before-pr-close: -1
# Only issues made after Feb 09 2021.
start-date: "2021-09-02T00:00:00"
# Only make issues stale if they have these labels. Comma separated.
only-labels: "Needs info,Duplicate"

21
.gitignore vendored

@@ -28,7 +28,6 @@ debug
/velero
.idea/
Tiltfile
.container-*
.vimrc
@@ -38,14 +37,16 @@ Tiltfile
.vscode
*.diff
# Jekyll compiled data
site/_site
site/.sass-cache
site/.jekyll
site/.jekyll-metadata
site/.jekyll-cache
site/.bundle
site/vendor
.ruby-version
# Hugo compiled data
site/public
site/resources
.vs
# these are files for local Tilt development:
_tiltbuild
tilt-resources/tilt-settings.json
tilt-resources/velero_v1_backupstoragelocation.yaml
tilt-resources/deployment.yaml
tilt-resources/restic.yaml
tilt-resources/cloud


@@ -3,16 +3,16 @@
If you're using Velero and want to add your organization to this list,
[follow these directions][1]!
<a href="https://www.bitgo.com" border="0" target="_blank"><img alt="bitgo.com" src="site/img/adopters/BitGo.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://www.nirmata.com" border="0" target="_blank"><img alt="nirmata.com" src="site/img/adopters/nirmata.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://kyma-project.io/" border="0" target="_blank"><img alt="kyma-project.io" src="site/img/adopters/kyma.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://redhat.com/" border="0" target="_blank"><img alt="redhat.com" src="site/img/adopters/redhat.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://dellemc.com/" border="0" target="_blank"><img alt="dellemc.com" src="site/img/adopters/DellEMC.png" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://bugsnag.com/" border="0" target="_blank"><img alt="bugsnag.com" src="site/img/adopters/bugsnag.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://okteto.com/" border="0" target="_blank"><img alt="okteto.com" src="site/img/adopters/okteto.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://banzaicloud.com/" border="0" target="_blank"><img alt="banzaicloud.com" src="site/img/adopters/banzaicloud.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://sighup.io/" border="0" target="_blank"><img alt="sighup.io" src="site/img/adopters/sighup.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://mayadata.io/" border="0" target="_blank"><img alt="mayadata.io" src="site/img/adopters/mayadata.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://www.bitgo.com" border="0" target="_blank"><img alt="bitgo.com" src="site/static/img/adopters/BitGo.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://www.nirmata.com" border="0" target="_blank"><img alt="nirmata.com" src="site/static/img/adopters/nirmata.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://kyma-project.io/" border="0" target="_blank"><img alt="kyma-project.io" src="site/static/img/adopters/kyma.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://redhat.com/" border="0" target="_blank"><img alt="redhat.com" src="site/static/img/adopters/redhat.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://dellemc.com/" border="0" target="_blank"><img alt="dellemc.com" src="site/static/img/adopters/DellEMC.png" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://bugsnag.com/" border="0" target="_blank"><img alt="bugsnag.com" src="site/static/img/adopters/bugsnag.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://okteto.com/" border="0" target="_blank"><img alt="okteto.com" src="site/static/img/adopters/okteto.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://banzaicloud.com/" border="0" target="_blank"><img alt="banzaicloud.com" src="site/static/img/adopters/banzaicloud.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://sighup.io/" border="0" target="_blank"><img alt="sighup.io" src="site/static/img/adopters/sighup.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
<a href="https://mayadata.io/" border="0" target="_blank"><img alt="mayadata.io" src="site/static/img/adopters/mayadata.svg" height="50"></a>&nbsp; &nbsp; &nbsp;
## Success Stories
@@ -56,7 +56,7 @@ Okteto integrates Velero in [Okteto Cloud][94] and [Okteto Enterprise][95] to pe
## Adding your organization to the list of Velero Adopters
If you are using Velero and would like to be included in the list of `Velero Adopters`, add an SVG version of your logo to the `site/img/adopters` directory in this repo and submit a [pull request][3] with your change. Name the image file something that reflects your company (e.g., if your company is called Acme, name the image acme.png). See this for an example [PR][4].
If you are using Velero and would like to be included in the list of `Velero Adopters`, add an SVG version of your logo to the `site/static/img/adopters` directory in this repo and submit a [pull request][3] with your change. Name the image file something that reflects your company (e.g., if your company is called Acme, name the image acme.png). See this for an example [PR][4].
### Adding a logo to velero.io


@@ -1,10 +1,9 @@
## Current release:
* [CHANGELOG-1.4.md][14]
## Development release:
* [Unreleased Changes][0]
* [CHANGELOG-1.6.md][16]
## Older releases:
* [CHANGELOG-1.5.md][15]
* [CHANGELOG-1.4.md][14]
* [CHANGELOG-1.3.md][13]
* [CHANGELOG-1.2.md][12]
* [CHANGELOG-1.1.md][11]
@@ -20,18 +19,20 @@
* [CHANGELOG-0.3.md][1]
[14]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.4.md
[13]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.3.md
[12]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.2.md
[11]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.1.md
[10]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-1.0.md
[9]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.11.md
[8]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.10.md
[7]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.9.md
[6]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.8.md
[5]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.7.md
[4]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.6.md
[3]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.5.md
[2]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.4.md
[1]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/CHANGELOG-0.3.md
[0]: https://github.com/vmware-tanzu/velero/blob/master/changelogs/unreleased
[16]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.6.md
[15]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.5.md
[14]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.4.md
[13]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.3.md
[12]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.2.md
[11]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.1.md
[10]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-1.0.md
[9]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.11.md
[8]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.10.md
[7]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.9.md
[6]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.8.md
[5]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.7.md
[4]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.6.md
[3]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.5.md
[2]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.4.md
[1]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/CHANGELOG-0.3.md
[0]: https://github.com/vmware-tanzu/velero/blob/main/changelogs/unreleased


@@ -2,19 +2,28 @@
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
We as members, contributors, and leaders pledge to make participation in the Velero project and our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our community include:
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall community
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
@@ -29,56 +38,90 @@ Examples of unacceptable behavior include:
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at [oss-coc@vmware.com](mailto:oss-coc@vmware.com). All complaints will be reviewed and investigated promptly and fairly.
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at oss-coc@vmware.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the reporter of any incident.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of actions.
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the community.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0,
available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.


@@ -1,3 +1,3 @@
# Contributing
Authors are expected to follow some guidelines when submitting PRs. Please see [our documentation](https://velero.io/docs/master/code-standards/) for details.
Authors are expected to follow some guidelines when submitting PRs. Please see [our documentation](https://velero.io/docs/main/code-standards/) for details.

61
Dockerfile Normal file

@@ -0,0 +1,61 @@
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM --platform=$BUILDPLATFORM golang:1.15 as builder-env
ARG GOPROXY
ARG PKG
ARG VERSION
ARG GIT_SHA
ARG GIT_TREE_STATE
ENV CGO_ENABLED=0 \
GO111MODULE=on \
GOPROXY=${GOPROXY} \
LDFLAGS="-X ${PKG}/pkg/buildinfo.Version=${VERSION} -X ${PKG}/pkg/buildinfo.GitSHA=${GIT_SHA} -X ${PKG}/pkg/buildinfo.GitTreeState=${GIT_TREE_STATE}"
WORKDIR /go/src/github.com/vmware-tanzu/velero
COPY . /go/src/github.com/vmware-tanzu/velero
RUN apt-get update && apt-get install -y bzip2
FROM --platform=$BUILDPLATFORM builder-env as builder
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
ARG PKG
ARG BIN
ARG RESTIC_VERSION
ENV GOOS=${TARGETOS} \
GOARCH=${TARGETARCH} \
GOARM=${TARGETVARIANT}
RUN mkdir -p /output/usr/bin && \
bash ./hack/download-restic.sh && \
export GOARM=$( echo "${GOARM}" | cut -c2-) && \
go build -o /output/${BIN} \
-ldflags "${LDFLAGS}" ${PKG}/cmd/${BIN}
FROM ubuntu:focal
LABEL maintainer="Nolan Brubaker <brubakern@vmware.com>"
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -qq -y ca-certificates tzdata && rm -rf /var/lib/apt/lists/*
COPY --from=builder /output /
USER nobody:nogroup
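
This multi-stage Dockerfile is normally driven through make container (see the Makefile changes below), but a direct invocation would look roughly like the following; the tag, platform, and build-arg values here are illustrative, matching the arguments the Makefile passes:

docker buildx build --pull \
  --platform linux/amd64 \
  --output type=docker \
  --build-arg PKG=github.com/vmware-tanzu/velero \
  --build-arg BIN=velero \
  --build-arg VERSION=main \
  --build-arg GIT_SHA=$(git rev-parse HEAD) \
  --build-arg GIT_TREE_STATE=clean \
  --build-arg RESTIC_VERSION=0.12.0 \
  -t velero/velero:main \
  -f Dockerfile .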


@@ -1,33 +0,0 @@
# Copyright 2017, 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM ubuntu:focal
LABEL maintainer="Nolan Brubaker <brubakern@vmware.com>"
RUN apt-get update && \
apt-get install -y --no-install-recommends ca-certificates wget bzip2 && \
wget --quiet https://github.com/restic/restic/releases/download/v0.9.6/restic_0.9.6_linux_amd64.bz2 && \
bunzip2 restic_0.9.6_linux_amd64.bz2 && \
mv restic_0.9.6_linux_amd64 /usr/bin/restic && \
chmod +x /usr/bin/restic && \
apt-get remove -y wget bzip2 && \
rm -rf /var/lib/apt/lists/*
ADD /bin/linux/amd64/velero /velero
USER nobody:nogroup
ENTRYPOINT ["/velero"]


@@ -1,23 +0,0 @@
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM arm32v7/ubuntu:focal
ADD /bin/linux/arm/restic /usr/bin/restic
ADD /bin/linux/arm/velero /velero
USER nobody:nogroup
ENTRYPOINT ["/velero"]


@@ -1,23 +0,0 @@
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM arm64v8/ubuntu:focal
ADD /bin/linux/arm64/restic /usr/bin/restic
ADD /bin/linux/arm64/velero /velero
USER nobody:nogroup
ENTRYPOINT ["/velero"]


@@ -1,25 +0,0 @@
# Copyright 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM ppc64le/ubuntu:focal
LABEL maintainer="Prajyot Parab <prajyot.parab@ibm.com>"
ADD /bin/linux/ppc64le/restic /usr/bin/restic
ADD /bin/linux/ppc64le/velero /velero
USER nobody:nogroup
ENTRYPOINT ["/velero"]


@@ -1,23 +0,0 @@
# Copyright 2018, 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM ubuntu:focal
LABEL maintainer="Nolan Brubaker <brubakern@vmware.com>"
ADD /bin/linux/amd64/velero-restic-restore-helper .
USER nobody:nogroup
ENTRYPOINT [ "/velero-restic-restore-helper" ]


@@ -1,21 +0,0 @@
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM arm32v7/ubuntu:focal
ADD /bin/linux/arm/velero-restic-restore-helper .
USER nobody:nogroup
ENTRYPOINT [ "/velero-restic-restore-helper" ]


@@ -1,21 +0,0 @@
# Copyright 2020 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM arm64v8/ubuntu:focal
ADD /bin/linux/arm64/velero-restic-restore-helper .
USER nobody:nogroup
ENTRYPOINT [ "/velero-restic-restore-helper" ]


@@ -1,23 +0,0 @@
# Copyright 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM ppc64le/ubuntu:focal
LABEL maintainer="Prajyot Parab <prajyot.parab@ibm.com>"
ADD /bin/linux/ppc64le/velero-restic-restore-helper .
USER nobody:nogroup
ENTRYPOINT [ "/velero-restic-restore-helper" ]


@@ -11,7 +11,7 @@ This document defines the project governance for Velero.
The following code repositories are governed by Velero community and maintained under the `vmware-tanzu\Velero` organization.
* **[Velero](https://github.com/vmware-tanzu/velero):** Main Velero codebase
* **[Helm Chart](https://github.com/vmware-tanzu/helm-charts/tree/master/charts/velero):** The Helm chart for the Velero server component
* **[Helm Chart](https://github.com/vmware-tanzu/helm-charts/tree/main/charts/velero):** The Helm chart for the Velero server component
* **[Velero CSI Plugin](https://github.com/vmware-tanzu/velero-plugin-for-csi):** This repository contains Velero plugins for snapshotting CSI backed PVCs using the CSI beta snapshot APIs
* **[Velero Plugin for vSphere](https://github.com/vmware-tanzu/velero-plugin-for-vsphere):** This repository contains the Velero Plugin for vSphere. This plugin is a volume snapshotter plugin that provides crash-consistent snapshots of vSphere block volumes and backup of volume data into S3 compatible storage.
* **[Velero Plugin for AWS](https://github.com/vmware-tanzu/velero-plugin-for-aws):** This repository contains the plugins to support running Velero on AWS, including the object store plugin and the volume snapshotter plugin
@@ -67,12 +67,12 @@ interested in implementing the proposal should be either deeply engaged in the
proposal process or be an author of the proposal.
The proposal should be documented as a separated markdown file pushed to the root of the
`design` folder in the [Velero](https://github.com/vmware-tanzu/velero/tree/master/design)
`design` folder in the [Velero](https://github.com/vmware-tanzu/velero/tree/main/design)
repository via PR. The name of the file should follow the name pattern `<short
meaningful words joined by '-'>_design.md`, e.g:
`restore-hooks-design.md`.
Use the [Proposal Template](https://github.com/vmware-tanzu/velero/blob/master/design/_template.md) as a starting point.
Use the [Proposal Template](https://github.com/vmware-tanzu/velero/blob/main/design/_template.md) as a starting point.
### Proposal Lifecycle


@@ -1,15 +1,17 @@
# Velero Maintainers
[GOVERNANCE.md](https://github.com/vmware-tanzu/velero/blob/master/GOVERNANCE.md) describes governance guidelines and maintainer responsibilities.
[GOVERNANCE.md](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md) describes governance guidelines and maintainer responsibilities.
## Maintainers
| Maintainer | GitHub ID | Affiliation |
| --------------- | --------- | ----------- |
| Carlisia Campos | [carlisia](https://github.com/carlisia) | [VMware](https://www.github.com/vmware/) |
| Carlisia Thompson | [carlisia](https://github.com/carlisia) | [VMware](https://www.github.com/vmware/) |
| Nolan Brubaker | [nrb](https://github.com/nrb) | [VMware](https://www.github.com/vmware/) |
| Ashish Amarnath | [ashish-amarnath](https://github.com/ashish-amarnath) | [VMware](https://www.github.com/vmware/) |
| Stephanie Bauman | [stephbman](https://github.com/stephbman) | [VMware](https://www.github.com/vmware/) |
| Bridget McErlean | [zubron](https://github.com/zubron) | [VMware](https://www.github.com/vmware/) |
| Dave Smith-Uchida | [dsu-igeek](https://github.com/dsu-igeek) | [VMware](https://www.github.com/vmware/) |
| JenTing Hsiao | [jenting](https://github.com/jenting) | [SUSE](https://github.com/SUSE/)
## Emeritus Maintainers
* Adnan Abdulhussein ([prydonius](https://github.com/prydonius))
@@ -22,6 +24,6 @@
| ----------------------------- | :---------------------: |
| Technical Lead | Nolan Brubaker (nrb) |
| Kubernetes CSI Liaison | Nolan Brubaker (nrb), Ashish Amarnath (ashish-amarnath) |
| Deployment | Carlisia Campos (carlisia), Carlos Tadeu Panato Junior (cpanato) |
| Deployment | Carlisia Thompson (carlisia), Carlos Tadeu Panato Junior (cpanato), JenTing Hsiao (jenting) |
| Community Management | Jonas Rosland (jonasrosland) |
| Product Management | Stephanie Bauman (stephbman) |
| Product Management | Michael Michael (michmike) |

262
Makefile

@@ -23,36 +23,78 @@ PKG := github.com/vmware-tanzu/velero
# Where to push the docker image.
REGISTRY ?= velero
# Build image handling. We push a build image for every changed version of
# Image name
IMAGE ?= $(REGISTRY)/$(BIN)
# We allow the Dockerfile to be configurable to enable the use of custom Dockerfiles
# that pull base images from different registries.
VELERO_DOCKERFILE ?= Dockerfile
BUILDER_IMAGE_DOCKERFILE ?= hack/build-image/Dockerfile
# Calculate the realpath of the build-image Dockerfile as we `cd` into the hack/build
# directory before this Dockerfile is used and any relative path will not be valid.
BUILDER_IMAGE_DOCKERFILE_REALPATH := $(shell realpath $(BUILDER_IMAGE_DOCKERFILE))
# Build image handling. We push a build image for every changed version of
# /hack/build-image/Dockerfile. We tag the dockerfile with the short commit hash
# of the commit that changed it. When determining if there is a build image in
# the registry to use we look for one that matches the current "commit" for the
# Dockerfile else we make one.
# In the case where the Dockerfile for the build image has been overridden using
# the BUILDER_IMAGE_DOCKERFILE variable, we always force a build.
ifneq "$(origin BUILDER_IMAGE_DOCKERFILE)" "file"
BUILDER_IMAGE_TAG := "custom"
else
BUILDER_IMAGE_TAG := $(shell git log -1 --pretty=%h $(BUILDER_IMAGE_DOCKERFILE))
endif
BUILDER_IMAGE_TAG := $(shell git log -1 --pretty=%h hack/build-image/Dockerfile)
BUILDER_IMAGE := $(REGISTRY)/build-image:$(BUILDER_IMAGE_TAG)
BUILDER_IMAGE_CACHED := $(shell docker images -q ${BUILDER_IMAGE} 2>/dev/null )
HUGO_IMAGE := hugo-builder
# Which architecture to build - see $(ALL_ARCH) for options.
# if the 'local' rule is being run, detect the ARCH from 'go env'
# if it wasn't specified by the caller.
local : ARCH ?= $(shell go env GOOS)-$(shell go env GOARCH)
ARCH ?= linux-amd64
VERSION ?= master
VERSION ?= main
TAG_LATEST ?= false
ifeq ($(TAG_LATEST), true)
IMAGE_TAGS ?= $(IMAGE):$(VERSION) $(IMAGE):latest
else
IMAGE_TAGS ?= $(IMAGE):$(VERSION)
endif
ifeq ($(shell docker buildx inspect 2>/dev/null | awk '/Status/ { print $$2 }'), running)
BUILDX_ENABLED ?= true
else
BUILDX_ENABLED ?= false
endif
define BUILDX_ERROR
buildx not enabled, refusing to run this recipe
see: https://velero.io/docs/main/build-from-source/#making-images-and-updating-velero for more info
endef
# The version of restic binary to be downloaded for power architecture
RESTIC_VERSION ?= 0.9.6
RESTIC_VERSION ?= 0.12.0
CLI_PLATFORMS ?= linux-amd64 linux-arm linux-arm64 darwin-amd64 windows-amd64 linux-ppc64le
CONTAINER_PLATFORMS ?= linux-amd64 linux-ppc64le linux-arm linux-arm64
MANIFEST_PLATFORMS ?= amd64 ppc64le arm arm64
BUILDX_PLATFORMS ?= $(subst -,/,$(ARCH))
BUILDX_OUTPUT_TYPE ?= docker
# set git sha and tree state
GIT_SHA = $(shell git rev-parse HEAD)
GIT_DIRTY = $(shell git status --porcelain 2> /dev/null)
ifneq ($(shell git status --porcelain 2> /dev/null),)
GIT_TREE_STATE ?= dirty
else
GIT_TREE_STATE ?= clean
endif
# The default linters used by lint and local-lint
LINTERS ?= "gosec,goconst,gofmt,goimports,unparam"
@@ -64,36 +106,7 @@ LINTERS ?= "gosec,goconst,gofmt,goimports,unparam"
platform_temp = $(subst -, ,$(ARCH))
GOOS = $(word 1, $(platform_temp))
GOARCH = $(word 2, $(platform_temp))
# Set default base image dynamically for each arch
ifeq ($(GOARCH),amd64)
DOCKERFILE ?= Dockerfile-$(BIN)
local-arch:
@echo "local environment for amd64 is up-to-date"
endif
ifeq ($(GOARCH),arm)
DOCKERFILE ?= Dockerfile-$(BIN)-arm
local-arch:
@mkdir -p _output/bin/linux/arm/
@wget -q -O - https://github.com/restic/restic/releases/download/v$(RESTIC_VERSION)/restic_$(RESTIC_VERSION)_linux_arm.bz2 | bunzip2 > _output/bin/linux/arm/restic
@chmod a+x _output/bin/linux/arm/restic
endif
ifeq ($(GOARCH),arm64)
DOCKERFILE ?= Dockerfile-$(BIN)-arm64
local-arch:
@mkdir -p _output/bin/linux/arm64/
@wget -q -O - https://github.com/restic/restic/releases/download/v$(RESTIC_VERSION)/restic_$(RESTIC_VERSION)_linux_arm64.bz2 | bunzip2 > _output/bin/linux/arm64/restic
@chmod a+x _output/bin/linux/arm64/restic
endif
ifeq ($(GOARCH),ppc64le)
DOCKERFILE ?= Dockerfile-$(BIN)-ppc64le
local-arch:
RESTIC_VERSION=$(RESTIC_VERSION) \
./hack/get-restic-ppc64le.sh
endif
MULTIARCH_IMAGE = $(REGISTRY)/$(BIN)
IMAGE ?= $(REGISTRY)/$(BIN)-$(GOARCH)
GOPROXY ?= https://proxy.golang.org
# If you want to build all binaries, see the 'all-build' rule.
# If you want to build all containers, see the 'all-containers' rule.
@@ -106,23 +119,11 @@ build-%:
@$(MAKE) --no-print-directory ARCH=$* build
@$(MAKE) --no-print-directory ARCH=$* build BIN=velero-restic-restore-helper
container-%:
@$(MAKE) --no-print-directory ARCH=$* container
@$(MAKE) --no-print-directory ARCH=$* container BIN=velero-restic-restore-helper
push-%:
@$(MAKE) --no-print-directory ARCH=$* push
@$(MAKE) --no-print-directory ARCH=$* push BIN=velero-restic-restore-helper
all-build: $(addprefix build-, $(CLI_PLATFORMS))
all-containers: $(addprefix container-, $(CONTAINER_PLATFORMS))
all-push: $(addprefix push-, $(CONTAINER_PLATFORMS))
all-manifests:
@$(MAKE) manifest
@$(MAKE) manifest BIN=velero-restic-restore-helper
all-containers: container-builder-env
@$(MAKE) --no-print-directory container
@$(MAKE) --no-print-directory container BIN=velero-restic-restore-helper
local: build-dirs
GOOS=$(GOOS) \
@@ -131,7 +132,7 @@ local: build-dirs
PKG=$(PKG) \
BIN=$(BIN) \
GIT_SHA=$(GIT_SHA) \
GIT_DIRTY="$(GIT_DIRTY)" \
GIT_TREE_STATE=$(GIT_TREE_STATE) \
OUTPUT_DIR=$$(pwd)/_output/bin/$(GOOS)/$(GOARCH) \
./hack/build.sh
@@ -146,13 +147,12 @@ _output/bin/$(GOOS)/$(GOARCH)/$(BIN): build-dirs
PKG=$(PKG) \
BIN=$(BIN) \
GIT_SHA=$(GIT_SHA) \
GIT_DIRTY=\"$(GIT_DIRTY)\" \
GIT_TREE_STATE=$(GIT_TREE_STATE) \
OUTPUT_DIR=/output/$(GOOS)/$(GOARCH) \
./hack/build.sh'"
TTY := $(shell tty -s && echo "-t")
# Example: make shell CMD="date > datefile"
shell: build-dirs build-env
@# bind-mount the Velero root dir in at /github.com/vmware-tanzu/velero
@@ -175,47 +175,36 @@ shell: build-dirs build-env
$(BUILDER_IMAGE) \
/bin/sh $(CMD)
DOTFILE_IMAGE = $(subst :,_,$(subst /,_,$(IMAGE))-$(VERSION))
container-builder-env:
ifneq ($(BUILDX_ENABLED), true)
$(error $(BUILDX_ERROR))
endif
@docker buildx build \
--target=builder-env \
--build-arg=GOPROXY=$(GOPROXY) \
--build-arg=PKG=$(PKG) \
--build-arg=VERSION=$(VERSION) \
--build-arg=GIT_SHA=$(GIT_SHA) \
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
-f $(VELERO_DOCKERFILE) .
all-containers:
$(MAKE) container
$(MAKE) container BIN=velero-restic-restore-helper
container: local-arch .container-$(DOTFILE_IMAGE) container-name
.container-$(DOTFILE_IMAGE): _output/bin/$(GOOS)/$(GOARCH)/$(BIN) $(DOCKERFILE)
@cp $(DOCKERFILE) _output/.dockerfile-$(BIN)-$(GOOS)-$(GOARCH)
@docker build --pull -t $(IMAGE):$(VERSION) -f _output/.dockerfile-$(BIN)-$(GOOS)-$(GOARCH) _output
@docker images -q $(IMAGE):$(VERSION) > $@
container-name:
container:
ifneq ($(BUILDX_ENABLED), true)
$(error $(BUILDX_ERROR))
endif
@docker buildx build --pull \
--output=type=$(BUILDX_OUTPUT_TYPE) \
--platform $(BUILDX_PLATFORMS) \
$(addprefix -t , $(IMAGE_TAGS)) \
--build-arg=PKG=$(PKG) \
--build-arg=BIN=$(BIN) \
--build-arg=VERSION=$(VERSION) \
--build-arg=GIT_SHA=$(GIT_SHA) \
--build-arg=GIT_TREE_STATE=$(GIT_TREE_STATE) \
--build-arg=RESTIC_VERSION=$(RESTIC_VERSION) \
-f $(VELERO_DOCKERFILE) .
@echo "container: $(IMAGE):$(VERSION)"
push: .push-$(DOTFILE_IMAGE) push-name
.push-$(DOTFILE_IMAGE): .container-$(DOTFILE_IMAGE)
@docker push $(IMAGE):$(VERSION)
ifeq ($(TAG_LATEST), true)
docker tag $(IMAGE):$(VERSION) $(IMAGE):latest
docker push $(IMAGE):latest
endif
@docker images -q $(IMAGE):$(VERSION) > $@
push-name:
@echo "pushed: $(IMAGE):$(VERSION)"
manifest: .manifest-$(MULTIARCH_IMAGE) manifest-name
.manifest-$(MULTIARCH_IMAGE):
@DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create $(MULTIARCH_IMAGE):$(VERSION) \
$(foreach arch, $(MANIFEST_PLATFORMS), $(MULTIARCH_IMAGE)-$(arch):$(VERSION))
@DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push --purge $(MULTIARCH_IMAGE):$(VERSION)
ifeq ($(TAG_LATEST), true)
@DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create $(MULTIARCH_IMAGE):latest \
$(foreach arch, $(MANIFEST_PLATFORMS), $(MULTIARCH_IMAGE)-$(arch):latest)
@DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push --purge $(MULTIARCH_IMAGE):latest
endif
manifest-name:
@echo "pushed: $(MULTIARCH_IMAGE):$(VERSION)"
SKIP_TESTS ?=
test: build-dirs
ifneq ($(SKIP_TESTS), 1)
@@ -260,34 +249,53 @@ build-dirs:
@mkdir -p .go/src/$(PKG) .go/pkg .go/bin .go/std/$(GOOS)/$(GOARCH) .go/go-build .go/golangci-lint
build-env:
@# if we detect changes in dockerfile force a new build-image
@# if we have overridden the value for the build-image Dockerfile,
@# force a build using that Dockerfile
@# if we detect changes in dockerfile force a new build-image
@# else if we dont have a cached image make one
@# finally use the cached image
ifneq ($(shell git diff --quiet HEAD -- hack/build-image/Dockerfile; echo $$?), 0)
@echo "Local changes detected in hack/build-image/Dockerfile"
ifneq "$(origin BUILDER_IMAGE_DOCKERFILE)" "file"
@echo "Dockerfile for builder image has been overridden to $(BUILDER_IMAGE_DOCKERFILE)"
@echo "Preparing a new builder-image"
@make build-image
$(MAKE) build-image
else ifneq ($(shell git diff --quiet HEAD -- $(BUILDER_IMAGE_DOCKERFILE); echo $$?), 0)
@echo "Local changes detected in $(BUILDER_IMAGE_DOCKERFILE)"
@echo "Preparing a new builder-image"
$(MAKE) build-image
else ifneq ($(BUILDER_IMAGE_CACHED),)
@echo "Using Cached Image: $(BUILDER_IMAGE)"
else
@echo "Trying to pull build-image: $(BUILDER_IMAGE)"
docker pull -q $(BUILDER_IMAGE) || make build-image
docker pull -q $(BUILDER_IMAGE) || $(MAKE) build-image
endif
build-image:
@# When we build a new image we just untag the old one.
@# This makes sure we don't leave the orphaned image behind.
@id=$$(docker image inspect --format '{{ .ID }}' ${BUILDER_IMAGE} 2>/dev/null); \
cd hack/build-image && docker build --pull -t $(BUILDER_IMAGE) . ; \
new_id=$$(docker image inspect --format '{{ .ID }}' ${BUILDER_IMAGE} 2>/dev/null); \
if [ "$$id" != "" ] && [ "$$id" != "$$new_id" ]; then \
$(eval old_id=$(shell docker image inspect --format '{{ .ID }}' ${BUILDER_IMAGE} 2>/dev/null))
ifeq ($(BUILDX_ENABLED), true)
@cd hack/build-image && docker buildx build --build-arg=GOPROXY=$(GOPROXY) --output=type=docker --pull -t $(BUILDER_IMAGE) -f $(BUILDER_IMAGE_DOCKERFILE_REALPATH) .
else
@cd hack/build-image && docker build --build-arg=GOPROXY=$(GOPROXY) --pull -t $(BUILDER_IMAGE) -f $(BUILDER_IMAGE_DOCKERFILE_REALPATH) .
endif
$(eval new_id=$(shell docker image inspect --format '{{ .ID }}' ${BUILDER_IMAGE} 2>/dev/null))
@if [ "$(old_id)" != "" ] && [ "$(old_id)" != "$(new_id)" ]; then \
docker rmi -f $$id || true; \
fi
push-build-image:
@# this target will push the build-image it assumes you already have docker
@# credentials needed to accomplish this.
docker push $(BUILDER_IMAGE)
@# Pushing will be skipped if a custom Dockerfile was used to build the image.
ifneq "$(origin BUILDER_IMAGE_DOCKERFILE)" "file"
@echo "Dockerfile for builder image has been overridden"
@echo "Skipping push of custom image"
else
docker push $(BUILDER_IMAGE)
endif
build-image-hugo:
cd site && docker build --pull -t $(HUGO_IMAGE) .
clean:
# if we have a cached image then use it to run go clean --modcache
@@ -296,8 +304,8 @@ ifneq ($(strip $(BUILDER_IMAGE_CACHED)),)
$(MAKE) shell CMD="-c 'go clean --modcache'"
docker rmi -f $(BUILDER_IMAGE) || true
endif
rm -rf .container-* _output/.dockerfile-* .push-*
rm -rf .go _output
docker rmi $(HUGO_IMAGE)
.PHONY: modules
@@ -316,7 +324,7 @@ ci: verify-modules verify all test
changelog:
hack/changelog.sh
hack/release-tools/changelog.sh
# release builds a GitHub release using goreleaser within the build container.
#
@@ -338,38 +346,20 @@ release:
GITHUB_TOKEN=$(GITHUB_TOKEN) \
RELEASE_NOTES_FILE=$(RELEASE_NOTES_FILE) \
PUBLISH=$(PUBLISH) \
./hack/goreleaser.sh'"
./hack/release-tools/goreleaser.sh'"
serve-docs:
serve-docs: build-image-hugo
docker run \
--rm \
-v "$$(pwd)/site:/srv/jekyll" \
-it -p 4000:4000 \
jekyll/jekyll \
jekyll serve --livereload --incremental
# gen-docs generates a new versioned docs directory under site/docs. It follows
# the following process:
# 1. Copies the contents of the most recently tagged docs directory into the new
# directory, to establish a useful baseline to diff against.
# 2. Adds all copied content from step 1 to git's staging area via 'git add'.
# 3. Replaces the contents of the new docs directory with the contents of the
# 'master' docs directory, updating any version-specific links (e.g. to a
# specific branch of the GitHub repository) to use the new version
# 4. Copies the previous version's ToC file and runs 'git add' to establish
# a useful baseline to diff against.
# 5. Replaces the content of the new ToC file with the master ToC.
# 6. Update site/_config.yml and site/_data/toc-mapping.yml to include entries
# for the new version.
#
# The unstaged changes in the working directory can now easily be diff'ed against the
# staged changes using 'git diff' to review all docs changes made since the previous
# tagged version. Once the unstaged changes are ready, they can be added to the
# staging area using 'git add' and then committed.
#
# To run gen-docs: "NEW_DOCS_VERSION=v1.4 VELERO_VERSION=v1.4.0 make gen-docs"
#
# **NOTE**: there are additional manual steps required to finalize the process of generating
# a new versioned docs site. The full process is documented in site/README-JEKYLL.md.
-v "$$(pwd)/site:/srv/hugo" \
-it -p 1313:1313 \
$(HUGO_IMAGE) \
hugo server --bind=0.0.0.0 --enableGitInfo=false
# gen-docs generates a new versioned docs directory under site/content/docs.
# Please read the documentation in the script for instructions on how to use it.
gen-docs:
@hack/gen-docs.sh
@hack/release-tools/gen-docs.sh
.PHONY: test-e2e
test-e2e: local
$(MAKE) -C test/e2e run
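
Taken together, the reworked targets are typically driven like this; a hedged sketch of common invocations under the new buildx-based flow (registry and tag values are placeholders):

# Build the CLI for the host platform.
make local

# Build a single-platform container image with buildx (requires a running buildx builder, per the BUILDX_ENABLED check above).
make container ARCH=linux-arm64 REGISTRY=example.io/myrepo VERSION=dev

# Build and push a multi-arch image straight to a registry.
make container BUILDX_OUTPUT_TYPE=registry BUILDX_PLATFORMS=linux/amd64,linux/arm64 REGISTRY=example.io/myrepo VERSION=dev

# Serve the Hugo-based docs locally (exposed on port 1313).
make serve-docs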


@@ -1,6 +1,7 @@
![100]
[![Build Status][1]][2]
[![Build Status][1]][2] [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/3811/badge)](https://bestpractices.coreinfrastructure.org/projects/3811)
## Overview
@@ -33,8 +34,8 @@ If you are ready to jump in and test, add code, or help with documentation, foll
See [the list of releases][6] to find out about feature changes.
[1]: https://github.com/vmware-tanzu/velero/workflows/Master%20CI/badge.svg
[2]: https://github.com/vmware-tanzu/velero/actions?query=workflow%3A"Master+CI"
[1]: https://github.com/vmware-tanzu/velero/workflows/Main%20CI/badge.svg
[2]: https://github.com/vmware-tanzu/velero/actions?query=workflow%3A"Main+CI"
[4]: https://github.com/vmware-tanzu/velero/issues
[6]: https://github.com/vmware-tanzu/velero/releases
[9]: https://kubernetes.io/docs/setup/
@@ -47,4 +48,4 @@ See [the list of releases][6] to find out about feature changes.
[29]: https://velero.io/docs/
[30]: https://velero.io/docs/troubleshooting
[31]: https://velero.io/docs/start-contributing
[100]: https://velero.io/docs/master/img/velero.png
[100]: https://velero.io/docs/main/img/velero.png


@@ -1,13 +1,13 @@
## Velero Roadmap
### About this document
This document provides a link to the [Velero Project board](https://app.zenhub.com/workspaces/velero-5c59c15e39d47b774b5864e3/board?repos=99143276,112385197,190224441,214524700,214524630,213946861) that serves as the up to date description of items that are in the release pipeline. The board has separate swim lanes based on prioritization. Most items are gathered from the community or include a feedback loop with the community. This should serve as a reference point for Velero users and contributors to understand where the project is heading, and help determine if a contribution could be conflicting with a longer term plan. You will need the ZenHub plugin to view the board.
This document provides a link to the [Velero Project boards](https://github.com/vmware-tanzu/velero/projects) that serve as the up-to-date description of items that are in the release pipeline. The release boards have separate swim lanes based on prioritization. Most items are gathered from the community or include a feedback loop with the community. This should serve as a reference point for Velero users and contributors to understand where the project is heading, and help determine if a contribution could be conflicting with a longer-term plan.
### How to help?
Discussion on the roadmap can take place in threads under [Issues](https://github.com/vmware-tanzu/velero/issues) or in [community meetings](https://velero.io/community/). Please open and comment on an issue if you want to provide suggestions, use cases, and feedback to an item in the roadmap. Please review the roadmap to avoid potential duplicated effort.
### How to add an item to the roadmap?
One of the most important aspects in any open source community is the concept of proposals. Large changes to the codebase and / or new features should be preceded by a [proposal](https://github.com/vmware-tanzu/velero/blob/master/GOVERNANCE.md#proposal-process) in our repo.
One of the most important aspects in any open source community is the concept of proposals. Large changes to the codebase and / or new features should be preceded by a [proposal](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md#proposal-process) in our repo.
For smaller enhancements, you can open an issue to track that initiative or feature request.
We work with and rely on community feedback to focus our efforts to improve Velero and maintain a healthy roadmap.
@@ -15,22 +15,55 @@ We work with and rely on community feedback to focus our efforts to improve Vele
The following table includes the current roadmap for Velero. If you have any questions or would like to contribute to Velero, please attend a [community meeting](https://velero.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors that will help us reduce technical, automation, and documentation debt.
Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Velero.
`Last Updated: May 2020`
`Last Updated: March 2021`
#### 1.7.0 Roadmap
The release roadmap is split into Core items that are required for the release, Desired items that may be removed from the
release, and Opportunistic items that will be added to the release if possible.
##### Core items
|Issue|Description|
|---|---|
|[3493](https://github.com/vmware-tanzu/velero/issues/3493)|[Carvel](https://github.com/vmware-tanzu/velero/issues/3493) based installation (in addition to the existing *velero install* CLI).|
|[3531](https://github.com/vmware-tanzu/velero/issues/3531)|Test plan for Velero|
|[675](https://github.com/vmware-tanzu/velero/issues/675)|Velero command to generate debugging information. Will integrate with [Crashd - Crash Diagnostics](https://github.com/vmware-tanzu/velero/issues/675)|
|[2066](https://github.com/vmware-tanzu/velero/issues/2066)|CSI Snapshots GA|
|[3285](https://github.com/vmware-tanzu/velero/issues/3285)|Support Velero plugin versioning|
|[1975](https://github.com/vmware-tanzu/velero/issues/1975)|IPV6 support|
##### Desired items
|Issue|Description|
|---|---|
|[3533](https://github.com/vmware-tanzu/velero/issues/3533)|Upload Progress Monitoring|
|[2922](https://github.com/vmware-tanzu/velero/issues/2922)|Plugin timeouts|
|[3500](https://github.com/vmware-tanzu/velero/issues/3500)|Use distroless containers as a base|
|[3535](https://github.com/vmware-tanzu/velero/issues/3535)|Design doc for multiple cluster support|
|[3536](https://github.com/vmware-tanzu/velero/issues/3536)|Manifest for backup/restore|
##### Opportunistic items
|Issue|Description|
|---|---|
|Issues TBD|Controller migrations|
#### Long term roadmap items
|Theme|Description|Timeline|
|--|--|--|
|Restic Improvements|Introduce improvements in annotating resources for Restic backup|August 2020|
|Extensibility|Add restore hooks for enhanced recovery scenarios|August 2020|
|CSI|Continue improving the CSI snapshot capabilities and participate in the upstream K8s CSI community|Long running (dependent on CSI working group)|
|Backup/Restore|Improvements to long-running copy operations from a performance and reliability standpoint|August 2020|
|UX|Improvements to install and configuration user experience|August 2020|
|Restic Improvements|Improve the use of Restic in Velero and offer stable support|Dec 2020|
|Perf & Scale|Introduce a scalable model by using a worker pod for each backup/restore operation and improve operations|Dec 2020|
|Backup/Restore|Better backup and restore semantics for certain Kubernetes resources like stateful sets, operators|Dec 2020|
|Security|Enable the use of custom credential providers|Dec 2020|
|Self-Service & Multitenancy|Reduce friction by enabling developers to backup their namespaces via self-service. Introduce a Velero multi-tenancy model, enabling owners of namespaces to backup and restore within their access scope|Mar 2021|
|Backup/Restore|Cross availability zone or region backup and restore|Mar 2021|
|Application Consistency|Offer blueprints for backing up and restoring popular applications|May 2021|
|Backup/Restore|Data only backup and restore|May 2021|
|Backup/Restore|Introduce the ability to overwrite existing objects during a restore|May 2021|
|Backup/Restore|What-if dry run for backup and restore|May 2021|
|---|---|---|
|Restic Improvements|Introduce improvements in annotating resources for Restic backup|TBD|
|Extensibility|Add restore hooks for enhanced recovery scenarios|TBD|
|CSI|Continue improving the CSI snapshot capabilities and participate in the upstream K8s CSI community|1.7.0 + Long running (dependent on CSI working group)|
|Backup/Restore|Improvements to long-running copy operations from a performance and reliability standpoint|1.7.0|
|Quality/Reliability| Enable automated end-to-end testing |1.6.0|
|UX|Improvements to install and configuration user experience|Dec 2020|
|Restic Improvements|Improve the use of Restic in Velero and offer stable support|TBD|
|Perf & Scale|Introduce a scalable model by using a worker pod for each backup/restore operation and improve operations|1.8.0|
|Backup/Restore|Better backup and restore semantics for certain Kubernetes resources like stateful sets, operators|2.0|
|Security|Enable the use of custom credential providers|1.6.0|
|Self-Service & Multitenancy|Reduce friction by enabling developers to backup their namespaces via self-service. Introduce a Velero multi-tenancy model, enabling owners of namespaces to backup and restore within their access scope|TBD|
|Backup/Restore|Cross availability zone or region backup and restore|TBD|
|Application Consistency|Offer blueprints for backing up and restoring popular applications|TBD|
|Backup/Restore|Data only backup and restore|TBD|
|Backup/Restore|Introduce the ability to overwrite existing objects during a restore|TBD|
|Backup/Restore|What-if dry run for backup and restore|1.7.0|

SECURITY.md Normal file

@@ -0,0 +1,128 @@
# Security Release Process
Velero is an open source tool with a growing community devoted to safe backup and restore, disaster recovery, and data migration of Kubernetes resources and persistent volumes. The community has adopted this security disclosure and response policy to ensure we responsibly handle critical issues.
## Supported Versions
The Velero project maintains the following [governance document](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md), [release document](https://github.com/vmware-tanzu/velero/blob/f42c63af1b9af445e38f78a7256b1c48ef79c10e/site/docs/main/release-instructions.md), and [support document](https://velero.io/docs/main/support-process/). Please refer to these for release and related details. Only the most recent version of Velero is supported. Each [release](https://github.com/vmware-tanzu/velero/releases) includes information about upgrading to the latest version.
## Reporting a Vulnerability - Private Disclosure Process
Security is of the highest importance and all security vulnerabilities or suspected security vulnerabilities should be reported to Velero privately, to minimize attacks against current users of Velero before they are fixed. Vulnerabilities will be investigated and patched on the next patch (or minor) release as soon as possible. This information could be kept entirely internal to the project.
If you know of a publicly disclosed security vulnerability for Velero, please **IMMEDIATELY** contact the VMware Security Team (security@vmware.com).
**IMPORTANT: Do not file public issues on GitHub for security vulnerabilities**
To report a vulnerability or a security-related issue, please contact the VMware email address with the details of the vulnerability. The email will be fielded by the VMware Security Team and then shared with the Velero maintainers who have committer and release permissions. Emails will be addressed within 3 business days, including a detailed plan to investigate the issue and any potential workarounds to perform in the meantime. Do not report non-security-impacting bugs through this channel. Use [GitHub issues](https://github.com/vmware-tanzu/velero/issues/new/choose) instead.
## Proposed Email Content
Provide a descriptive subject line and in the body of the email include the following information:
* Basic identity information, such as your name and your affiliation or company.
* Detailed steps to reproduce the vulnerability (POC scripts, screenshots, and logs are all helpful to us).
* Description of the effects of the vulnerability on Velero and the related hardware and software configurations, so that the VMware Security Team can reproduce it.
* How the vulnerability affects Velero usage and an estimation of the attack surface, if there is one.
* List other projects or dependencies that were used in conjunction with Velero to produce the vulnerability.
## When to report a vulnerability
* When you think Velero has a potential security vulnerability.
* When you suspect a potential vulnerability but you are unsure that it impacts Velero.
* When you know of or suspect a potential vulnerability on another project that is used by Velero.
## Patch, Release, and Disclosure
The VMware Security Team will respond to vulnerability reports as follows:
1. The Security Team will investigate the vulnerability and determine its effects and criticality.
2. If the issue is not deemed to be a vulnerability, the Security Team will follow up with a detailed reason for rejection.
3. The Security Team will initiate a conversation with the reporter within 3 business days.
4. If a vulnerability is acknowledged and the timeline for a fix is determined, the Security Team will work on a plan to communicate with the appropriate community, including identifying mitigating steps that affected users can take to protect themselves until the fix is rolled out.
5. The Security Team will also create a [CVSS](https://www.first.org/cvss/specification-document) using the [CVSS Calculator](https://www.first.org/cvss/calculator/3.0). The Security Team makes the final call on the calculated CVSS; it is better to move quickly than to make the CVSS perfect. Issues may also be reported to [Mitre](https://cve.mitre.org/) using this [scoring calculator](https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator). The CVE will initially be set to private.
6. The Security Team will work on fixing the vulnerability and perform internal testing before preparing to roll out the fix.
7. The Security Team will provide early disclosure of the vulnerability by emailing the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list. Distributors can initially plan for the vulnerability patch ahead of the fix, and later can test the fix and provide feedback to the Velero team. See the section **Early Disclosure to Velero Distributors List** for details about how to join this mailing list.
8. A public disclosure date is negotiated by the VMware Security Team, the bug submitter, and the distributors list. We prefer to fully disclose the bug as soon as possible once a user mitigation or patch is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for distributor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a critical vulnerability with a straightforward mitigation, we expect the time from report to public disclosure to be on the order of 14 business days. The VMware Security Team holds the final say when setting a public disclosure date.
9. Once the fix is confirmed, the Security Team will patch the vulnerability in the next patch or minor release, and backport a patch release into all earlier supported releases. Upon release of the patched version of Velero, we will follow the **Public Disclosure Process**.
## Public Disclosure Process
The Security Team publishes a [public advisory](https://github.com/vmware-tanzu/velero/security/advisories) to the Velero community via GitHub. In most cases, additional communication via Slack, Twitter, mailing lists, blog and other channels will assist in educating Velero users and rolling out the patched release to affected users.
The Security Team will also publish any mitigating steps users can take until the fix can be applied to their Velero instances. Velero distributors will handle creating and publishing their own security advisories.
## Mailing lists
* Use security@vmware.com to report security concerns to the VMware Security Team, who uses the list to privately discuss security issues and fixes prior to disclosure.
* Join the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list for early private information and vulnerability disclosure. Early disclosure may include mitigating steps and additional information on security patch releases. See below for information on how Velero distributors or vendors can apply to join this list.
## Early Disclosure to Velero Distributors List
The private list is intended to be used primarily to provide actionable information to multiple distributor projects at once. This list is not intended to inform individuals about security issues.
## Membership Criteria
To be eligible to join the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list, you should:
1. Be an active distributor of Velero.
2. Have a user base that is not limited to your own organization.
3. Have a publicly verifiable track record up to the present day of fixing security issues.
4. Not be a downstream or rebuild of another distributor.
5. Be a participant and active contributor in the Velero community.
6. Accept the Embargo Policy that is outlined below.
7. Have someone who is already on the list vouch for the person requesting membership on behalf of your distribution.
**The terms and conditions of the Embargo Policy apply to all members of this mailing list. A request for membership represents your acceptance to the terms and conditions of the Embargo Policy.**
## Embargo Policy
The information that members receive on the Velero Distributors mailing list must not be made public, shared, or even hinted at anywhere beyond those who need to know within your specific team, unless you receive explicit approval to do so from the VMware Security Team. This remains true until the public disclosure date/time agreed upon by the list. Members of the list and others cannot use the information for any reason other than to get the issue fixed for your respective distribution's users.
Before you share any information from the list with members of your team who are required to fix the issue, these team members must agree to the same terms, and only be provided with information on a need-to-know basis.
In the unfortunate event that you share information beyond what is permitted by this policy, you must urgently inform the VMware Security Team (security@vmware.com) of exactly what information was leaked and to whom. If you continue to leak information and break the policy outlined here, you will be permanently removed from the list.
## Requesting to Join
Send new membership requests to projectvelero-distributors@googlegroups.com. In the body of your request please specify how you qualify for membership and fulfill each criterion listed in the Membership Criteria section above.
## Confidentiality, integrity and availability
We consider vulnerabilities leading to the compromise of data confidentiality, elevation of privilege, or integrity to be our highest priority concerns. Availability, in particular in areas relating to DoS and resource exhaustion, is also a serious security concern. The VMware Security Team takes all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities seriously and will investigate them in an urgent and expeditious manner.
Note that we do not currently consider the default settings for Velero to be secure-by-default. It is necessary for operators to explicitly configure settings, role based access control, and other resource related features in Velero to provide a hardened Velero environment. We will not act on any security disclosure that relates to a lack of safe defaults. Over time, we will work towards improved safe-by-default configuration, taking into account backwards compatibility.


@@ -4,4 +4,4 @@ Thanks for trying out Velero! We welcome all feedback, find all the ways to conn
- [Velero Community](https://velero.io/community/)
You can find details on the Velero maintainers' support process [here](https://velero.io/docs/master/support-process/).
You can find details on the Velero maintainers' support process [here](https://velero.io/docs/main/support-process/).

Tiltfile Normal file

@@ -0,0 +1,265 @@
# -*- mode: Python -*-
k8s_yaml([
'config/crd/bases/velero.io_backups.yaml',
'config/crd/bases/velero.io_backupstoragelocations.yaml',
'config/crd/bases/velero.io_deletebackuprequests.yaml',
'config/crd/bases/velero.io_downloadrequests.yaml',
'config/crd/bases/velero.io_podvolumebackups.yaml',
'config/crd/bases/velero.io_podvolumerestores.yaml',
'config/crd/bases/velero.io_resticrepositories.yaml',
'config/crd/bases/velero.io_restores.yaml',
'config/crd/bases/velero.io_schedules.yaml',
'config/crd/bases/velero.io_serverstatusrequests.yaml',
'config/crd/bases/velero.io_volumesnapshotlocations.yaml',
])
# default values
settings = {
"default_registry": "",
"enable_restic": False,
"enable_debug": False,
"debug_continue_on_start": True, # Continue the velero process by default when in debug mode
"create_backup_locations": False,
"setup-minio": False,
}
# global settings
settings.update(read_json(
"tilt-resources/tilt-settings.json",
default = {},
))
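# Illustrative tilt-resources/tilt-settings.json (assumed layout based on the keys
# read in this file; the registry, contexts, and toggles below are placeholders):
#   {
#     "default_registry": "docker.io/your-user",
#     "enable_restic": true,
#     "enable_debug": false,
#     "create_backup_locations": true,
#     "setup-minio": true,
#     "allowed_contexts": ["kind-velero-dev"]
#   }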
k8s_yaml(kustomize('tilt-resources'))
k8s_yaml('tilt-resources/deployment.yaml')
if settings.get("enable_debug"):
k8s_resource('velero', port_forwards = '2345')
# TODO: Need to figure out how to apply port forwards for all restic pods
if settings.get("enable_restic"):
k8s_yaml('tilt-resources/restic.yaml')
if settings.get("create_backup_locations"):
k8s_yaml('tilt-resources/velero_v1_backupstoragelocation.yaml')
if settings.get("setup-minio"):
k8s_yaml('examples/minio/00-minio-deployment.yaml', allow_duplicates = True)
# By default, Tilt automatically allows Minikube, Docker for Desktop, Microk8s, Red Hat CodeReady Containers, Kind, K3D, and Krucible.
allow_k8s_contexts(settings.get("allowed_contexts"))
default_registry(settings.get("default_registry"))
local_goos = str(local("go env GOOS", quiet = True, echo_off = True)).strip()
git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip()
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.15.3 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \
wget --output-document /start.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/start.sh && \
chmod +x /start.sh && chmod +x /restart.sh
"""
additional_docker_helper_commands = """
# Install delve to allow debugging
RUN go get github.com/go-delve/delve/cmd/dlv
RUN wget -qO- https://dl.k8s.io/v1.19.2/kubernetes-client-linux-amd64.tar.gz | tar xvz
RUN wget -qO- https://get.docker.com | sh
"""
additional_docker_build_commands = """
COPY --from=tilt-helper /go/bin/dlv /usr/bin/dlv
COPY --from=tilt-helper /usr/bin/docker /usr/bin/docker
COPY --from=tilt-helper /go/kubernetes/client/bin/kubectl /usr/bin/kubectl
"""
##############################
# Setup Velero
##############################
def get_debug_flag():
"""
Returns the flag to enable debug building of Velero if debug
mode is enabled.
"""
if settings.get('enable_debug'):
return "DEBUG=1"
return ""
# Set up a local_resource build of the Velero binary. The binary is written to _tiltbuild/velero.
local_resource(
"velero_server_binary",
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild;PKG=. BIN=velero GOOS=linux GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild ' + get_debug_flag() + ' ./hack/build.sh',
deps = ["cmd", "internal", "pkg"],
ignore = ["pkg/cmd"],
)
local_resource(
"velero_local_binary",
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/local;PKG=. BIN=velero GOOS=' + local_goos + ' GOARCH=amd64 GIT_SHA=' + git_sha + ' VERSION=main GIT_TREE_STATE=dirty OUTPUT_DIR=_tiltbuild/local ' + get_debug_flag() + ' ./hack/build.sh',
deps = ["internal", "pkg/cmd"],
)
local_resource(
"restic_binary",
cmd = 'cd ' + '.' + ';mkdir -p _tiltbuild/restic; BIN=velero GOOS=' + local_goos + ' GOARCH=amd64 RESTIC_VERSION=0.12.0 OUTPUT_DIR=_tiltbuild/restic ./hack/download-restic.sh',
)
# Note: we need a distro with a bash shell to exec into the Velero container
tilt_dockerfile_header = """
FROM ubuntu:focal as tilt
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -qq -y ca-certificates tzdata && rm -rf /var/lib/apt/lists/*
WORKDIR /
COPY --from=tilt-helper /start.sh .
COPY --from=tilt-helper /restart.sh .
COPY velero .
COPY restic/restic /usr/bin/restic
"""
dockerfile_contents = "\n".join([
tilt_helper_dockerfile_header,
additional_docker_helper_commands,
tilt_dockerfile_header,
additional_docker_build_commands,
])
def get_velero_entrypoint():
"""
Returns the entrypoint for the Velero container image.
"""
entrypoint = ["sh", "/start.sh"]
if settings.get("enable_debug"):
# If debug mode is enabled, start the velero process using Delve
entrypoint.extend(
["dlv", "--listen=:2345", "--headless=true", "--api-version=2", "--accept-multiclient", "exec"])
# Set whether or not to continue the debugged process on start
# See https://github.com/go-delve/delve/blob/master/Documentation/usage/dlv_exec.md
if settings.get("debug_continue_on_start"):
entrypoint.append("--continue")
entrypoint.append("--")
entrypoint.append("/velero")
return entrypoint
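# When enable_debug is set, Tilt forwards port 2345 (see the k8s_resource call above),
# so a local Delve client can attach with, for example, `dlv connect localhost:2345`.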
# Set up an image build for Velero. The live update configuration syncs the output from the local_resource
# build into the container.
docker_build(
ref = "velero/velero",
context = "_tiltbuild",
dockerfile_contents = dockerfile_contents,
target = "tilt",
entrypoint = get_velero_entrypoint(),
live_update = [
sync("./_tiltbuild/velero", "/velero"),
run("sh /restart.sh"),
])
##############################
# Setup plugins
##############################
def load_provider_tiltfiles():
all_providers = settings.get("providers", {})
enable_providers = settings.get("enable_providers", [])
providers = []
## Load settings only for providers to enable
for name in enable_providers:
repo = all_providers.get(name)
if not repo:
print("Enabled provider '{}' does not exist in list of supported providers".format(name))
continue
file = repo + "/tilt-provider.json"
if not os.path.exists(file):
print("Provider settings not found for \"{}\". Please ensure this plugin repository has a tilt-provider.json file included.".format(name))
continue
provider_details = read_json(file, default = {})
if type(provider_details) == "dict":
provider_details["name"] = name
if "context" in provider_details:
provider_details["context"] = os.path.join(repo, "/", provider_details["context"])
else:
provider_details["context"] = repo
if "go_main" not in provider_details:
provider_details["go_main"] = "main.go"
providers.append(provider_details)
return providers
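# Illustrative tilt-provider.json for a plugin repository (field names taken from the
# lookups above; the values are hypothetical):
#   {
#     "plugin_name": "velero-plugin-for-example",
#     "context": ".",
#     "go_main": "main.go",
#     "image": "velero/velero-plugin-for-example",
#     "live_reload_deps": ["main.go", "pkg"]
#   }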
# Enable each provider
def enable_providers(providers):
if not providers:
print("No providers to enable.")
return
for p in providers:
enable_provider(p)
# Configures a provider by doing the following:
#
# 1. Enables a local_resource go build of the provider's local binary
# 2. Configures a docker build for the provider, with live updating of the local binary
def enable_provider(provider):
name = provider.get("name")
plugin_name = provider.get("plugin_name")
# Note: we need a distro with a shell to do a copy of the plugin binary
tilt_dockerfile_header = """
FROM ubuntu:focal as tilt
WORKDIR /
COPY --from=tilt-helper /start.sh .
COPY --from=tilt-helper /restart.sh .
COPY """ + plugin_name + """ .
"""
dockerfile_contents = "\n".join([
tilt_helper_dockerfile_header,
additional_docker_helper_commands,
tilt_dockerfile_header,
additional_docker_build_commands,
])
context = provider.get("context")
go_main = provider.get("go_main", "main.go")
live_reload_deps = []
for d in provider.get("live_reload_deps", []):
live_reload_deps.append(os.path.join(context, "/", d))
# Set up a local_resource build of the plugin binary. The main.go path must be provided via go_main option. The binary is written to _tiltbuild/<NAME>.
local_resource(
name + "_plugin",
cmd = 'cd ' + context + ';mkdir -p _tiltbuild;PKG=' + context + ' BIN=' + go_main + ' GOOS=linux GOARCH=amd64 OUTPUT_DIR=_tiltbuild ./hack/build.sh',
deps = live_reload_deps,
)
# Set up an image build for the plugin. The live update configuration syncs the output from the local_resource
# build into the init container, and that restarts the Velero container.
docker_build(
ref = provider.get("image"),
context = os.path.join(context, "/_tiltbuild/"),
dockerfile_contents = dockerfile_contents,
target = "tilt",
entrypoint = ["/bin/bash", "-c", "cp /" + plugin_name + " /target/."],
live_update = [
sync(os.path.join(context, "/_tiltbuild/", plugin_name), os.path.join("/", plugin_name))
]
)
##############################
# Start
#############################
enable_providers(load_provider_tiltfiles())


@@ -69,8 +69,8 @@ carefully to ensure a successful upgrade!**
- The `Config` CRD has been replaced by `BackupStorageLocation` and `VolumeSnapshotLocation` CRDs.
- The interface for external plugins (object/block stores, backup/restore item actions) has changed. If you have authored any custom plugins, they'll
need to be updated for v0.10.
- The [`ObjectStore.ListCommonPrefixes`](https://github.com/heptio/ark/blob/master/pkg/cloudprovider/object_store.go#L50) signature has changed to add a `prefix` parameter.
- Registering plugins has changed. Create a new plugin server with the `NewServer` function, and register plugins with the appropriate functions. See the [`Server`](https://github.com/heptio/ark/blob/master/pkg/plugin/server.go#L37) interface for details.
- The [`ObjectStore.ListCommonPrefixes`](https://github.com/vmware-tanzu/velero/blob/main/pkg/cloudprovider/object_store.go#L50) signature has changed to add a `prefix` parameter.
- Registering plugins has changed. Create a new plugin server with the `NewServer` function, and register plugins with the appropriate functions. See the [`Server`](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/server.go#L37) interface for details.
- The organization of Ark data in object storage has changed. Existing data will need to be moved around to conform to the new layout.
### All Changes
@@ -89,7 +89,7 @@ need to be updated for v0.10.
- [ec013e6f](https://github.com/heptio/ark/commit/ec013e6f) Document upgrading plugins in the deployment
- [d6162e94](https://github.com/heptio/ark/commit/d6162e94) fix goreleaser bugs
- [a15df276](https://github.com/heptio/ark/commit/a15df276) Add correct link and change role
- [46bed015](https://github.com/heptio/ark/commit/46bed015) add 0.10 breaking changes warning to readme in master
- [46bed015](https://github.com/heptio/ark/commit/46bed015) add 0.10 breaking changes warning to readme in main
- [e3a7d6a2](https://github.com/heptio/ark/commit/e3a7d6a2) add content for issue 994
- [400911e9](https://github.com/heptio/ark/commit/400911e9) address docs issue #978
- [b818cc27](https://github.com/heptio/ark/commit/b818cc27) don't require a default provider VSL if there's only 1
@@ -247,7 +247,7 @@ need to be updated for v0.10.
- [5b89f7b6](https://github.com/heptio/ark/commit/5b89f7b6) Skip backup sync if it already exists in k8s
- [c6050845](https://github.com/heptio/ark/commit/c6050845) restore controller: switch to 'c' for receiver name
- [706ae07d](https://github.com/heptio/ark/commit/706ae07d) enable a schedule to be provided as the source for a restore
- [aea68414](https://github.com/heptio/ark/commit/aea68414) fix up Slack link in troubleshooting on master branch
- [aea68414](https://github.com/heptio/ark/commit/aea68414) fix up Slack link in troubleshooting on main branch
- [bb8e2e91](https://github.com/heptio/ark/commit/bb8e2e91) Document how to run the Ark server locally
- [dc84e591](https://github.com/heptio/ark/commit/dc84e591) Remove outdated namespace deletion content
- [23abbc9a](https://github.com/heptio/ark/commit/23abbc9a) fix paths


@@ -8,7 +8,7 @@
### Bug fixes
* If a Service is headless, retain ClusterIP = None when backing up and restoring.
* Use the specifed --label-selector when listing backups, schedules, and restores.
* Use the specified --label-selector when listing backups, schedules, and restores.
* Restore namespace mapping functionality that was accidentally broken in 0.5.0.
* Always include namespaces in the backup, regardless of the --include-cluster-resources setting.


@@ -104,7 +104,7 @@
### Download
- https://github.com/heptio/ark/releases/tag/v0.9.3
### Bug Fixes
* Initalize Prometheus metrics when creating a new schedule (#689, @lemaral)
* Initialize Prometheus metrics when creating a new schedule (#689, @lemaral)
## v0.9.2
@@ -137,7 +137,7 @@
### Highlights:
* Ark now has support for backing up and restoring Kubernetes volumes using a free open-source backup tool called [restic](https://github.com/restic/restic).
This provides users an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume, whether or not it has snapshot support
integrated with Ark. For more information, see the [documentation](https://github.com/heptio/ark/blob/master/docs/restic.md).
integrated with Ark. For more information, see the [documentation](https://github.com/vmware-tanzu/velero/blob/main/docs/restic.md).
* Support for Prometheus metrics has been added! View total number of backup attempts (including success or failure), total backup size in bytes, and backup
durations. More metrics coming in future releases!


@@ -75,7 +75,7 @@ Finally, thanks to testing by [Dylan Murray](https://github.com/dymurray) and [S
* Adds configurable CPU/memory requests and limits to the Velero Deployment generated by velero install. (#1678, @prydonius)
* Store restic PodVolumeBackups in obj storage & use that as source of truth like regular backups. (#1577, @carlisia)
* Update Velero Deployment to use apps/v1 API group. `velero install` and `velero plugin add/remove` commands will now require Kubernetes 1.9+ (#1673, @nrb)
* Respect the --kubecontext and --kubeconfig arugments for `velero install`. (#1656, @nrb)
* Respect the --kubecontext and --kubeconfig arguments for `velero install`. (#1656, @nrb)
* add plugin for updating PV & PVC storage classes on restore based on a config map (#1621, @skriss)
* Add restic support for CSI volumes (#1615, @nrb)
* bug fix: Fixed namespace usage with cli command 'version' (#1630, @jwmatthews)


@@ -90,7 +90,7 @@ We fixed a large number of bugs and made some smaller usability improvements in
### All Changes
* Corrected the selfLink for Backup CR in site/docs/master/output-file-format.md (#2292, @RushinthJohn)
* Corrected the selfLink for Backup CR in site/docs/main/output-file-format.md (#2292, @RushinthJohn)
* Back up schema-less CustomResourceDefinitions as v1beta1, even if they are retrieved via the v1 endpoint. (#2264, @nrb)
* Bug fix: restic backup volume snapshot to the second location failed (#2244, @jenting)
* Added support of using PV name from volumesnapshotter('SetVolumeID') in case of PV renaming during the restore (#2216, @mynktl)
@@ -109,7 +109,7 @@ We fixed a large number of bugs and made some smaller usability improvements in
* bug fix: only prioritize restoring `replicasets.apps`, not `replicasets.extensions` (#2157, @skriss)
* bug fix: restore both `replicasets.apps` *and* `replicasets.extensions` before `deployments` (#2120, @skriss)
* bug fix: don't restore cluster-scoped resources when restoring specific namespaces and IncludeClusterResources is nil (#2118, @skriss)
* Enableing Velero to switch credentials (`AWS_PROFILE`) if multiple s3-compatible backupLocations are present (#2096, @dinesh)
* Enabling Velero to switch credentials (`AWS_PROFILE`) if multiple s3-compatible backupLocations are present (#2096, @dinesh)
* bug fix: deep-copy backup's labels when constructing snapshot tags, so the PV name isn't added as a label to the backup (#2075, @skriss)
* remove the `fsfreeze-pause` image being published from this repo; replace it with `ubuntu:bionic` in the nginx example app (#2068, @skriss)
* add support for a private registry with a custom port in a restic-helper image (#1999, @cognoz)


@@ -1,3 +1,28 @@
## v1.4.2
### 2020-07-13
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.4.2
### Container Image
`velero/velero:v1.4.2`
### Documentation
https://velero.io/docs/v1.4/
### Upgrading
https://velero.io/docs/v1.4/upgrade-to-1.4/
### All Changes
* log a warning instead of erroring if an additional item returned from a plugin can't be found in the Kubernetes API (#2595, @skriss)
* Adjust restic default time out to 4 hours and base pod resource requests to 500m CPU/512Mi memory. (#2696, @nrb)
* capture the version of the CRD prior to invoking the remap_crd_version backup item action (#2683, @ashish-amarnath)
## v1.4.1
This tag was created in code, but has no associated docker image due to misconfigured building infrastructure. v1.4.2 fixes this.
## v1.4.0
### 2020-05-26


@@ -0,0 +1,82 @@
## v1.5.1
### 2020-09-16
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.5.1
### Container Image
`velero/velero:v1.5.1`
### Documentation
https://velero.io/docs/v1.5/
### Upgrading
https://velero.io/docs/v1.5/upgrade-to-1.5/
### Highlights
* Auto Volume Backup Using Restic with `--default-volumes-to-restic` flag
* DeleteItemAction plugins
* Code modernization
* Restore Hooks: InitContainer Restore Hooks and Exec Restore Hooks
### All Changes
* 🏃‍♂️ add shortnames for CRDs (#2911, @ashish-amarnath)
* Use format version instead of version on `velero backup describe` since version has been deprecated (#2901, @jenting)
* fix EnableAPIGroupVersions output log format (#2882, @jenting)
* Convert ServerStatusRequest controller to kubebuilder (#2838, @carlisia)
* rename the PV if VolumeSnapshotter has modified the PV name (#2835, @pawanpraka1)
* Implement post-restore exec hooks in pod containers (#2804, @areed)
* Check for errors on restic backup command (#2863, @dymurray)
* 🐛 fix passing LDFLAGS across build stages (#2853, @ashish-amarnath)
* Feature: Invoke DeleteItemAction plugins based on backup contents when a backup is deleted. (#2815, @nrb)
* When JSON logging format is enabled, place error message at "error.message" instead of "error" for compatibility with Elasticsearch/ELK and the Elastic Common Schema (#2830, @bgagnon)
* discovery Helper support get GroupVersionResource and an APIResource from GroupVersionKind (#2764, @runzexia)
* Migrate site from Jekyll to Hugo (#2720, @tbatard)
* Add the DeleteItemAction plugin type (#2808, @nrb)
* 🐛 Manually patch the generated yaml for restore CRD as a hacky workaround (#2814, @ashish-amarnath)
* Setup crd validation github action on k8s versions (#2805, @ashish-amarnath)
* 🐛 Supply command to run restic-wait init container (#2802, @ashish-amarnath)
* Make init and exec restore hooks as optional in restore hookSpec (#2793, @ashish-amarnath)
* Implement restore hooks injecting init containers into pod spec (#2787, @ashish-amarnath)
* Pass default-volumes-to-restic flag from create schedule to backup (#2776, @ashish-amarnath)
* Enhance Backup to support backing up resources in specific orders and add --ordered-resources option to support this feature. (#2724, @phuong)
* Fix inconsistent type for the "resource" structured logging field (#2796, @bgagnon)
* Add the ability to set the allowPrivilegeEscalation flag in the securityContext for the Restic restore helper. (#2792, @doughepi)
* Add cacert flag for velero backup-location create (#2778, @jenting)
* Exclude volumes mounting secrets and configmaps from defaulting volume backups to restic (#2762, @ashish-amarnath)
* Add types to implement restore hooks (#2761, @ashish-amarnath)
* Add wait group and error channel for restore hooks to restore context. (#2755, @areed)
* Refactor image builds to use buildx for multi arch image building (#2754, @robreus)
* Add annotation key constants for restore hooks (#2750, @ashish-amarnath)
* Adds Start and CompletionTimestamp to RestoreStatus and
displays the timestamps when a print or describe is issued (#2748, @thejasbabu)
* Move pkg/backup/item_hook_handlers.go to internal/hook (#2734, @nrb)
* add metrics for restic back up operation (#2719, @ashish-amarnath)
* StorageGrid compatibility by removing explicit gzip accept header setting (#2712, @fvsqr)
* restic: add support for setting SecurityContext (runAsUser, runAsGroup) for restore (#2621, @jaygridley)
* Add backupValidationFailureTotal to metrics (#2714, @kathpeony)
* bump Kubernetes module dependencies to v0.18.4 to fix https://github.com/vmware-tanzu/velero/issues/2540 by adding code compatibility with kubernetes v1.18 (#2651, @laverya)
* Add a BSL controller to handle validation + update BSL status phase (validation removed from the server and no longer blocks when there's any invalid BSL) (#2674, @carlisia)
* updated acceptable values on cron schedule from 0-7 to 0-6 (#2676, @dthrasher)
* Improve velero download doc (#2660, @carlisia)
* Update basic-install and release-instructions documentation for Windows Chocolatey package (#2638, @adamrushuk)
* move CSI plugin out of prototype into beta (#2636, @ashish-amarnath)
* Add a new supported provider for an object storage plugin for Storj (#2635, @jessicagreben)
* Update basic-install.md documentation: Add windows cli installation option via chocolatey (#2629, @adamrushuk)
* Documentation: Update Jekyll to 4.1.0. Switch from redcarpet to kramdown for Markdown renderer (#2625, @tbatard)
* improve builder image handling so that we don't rebuild each `make shell` (#2620, @mauilion)
* first check if there are pending changes on the build-image Dockerfile; if so, build it.
* then check if there is an image in the registry; if so, pull it.
* then build an image because we don't have a cached image (this handles the backward-compat case).
* fix make clean to clear go mod cache before removing dirs (for containerized builds)
* Add linter checks to Makefile (#2615, @tbatard)
* add a CI check for a changelog file (#2613, @ashish-amarnath)
* implement option to back up all volumes by default with restic (#2611, @ashish-amarnath)
* When a timeout string can't be parsed, log the error as a warning instead of silently consuming the error. (#2610, @nrb)
* Azure: support using `aad-pod-identity` auth when using restic (#2602, @skriss)
* log a warning instead of erroring if an additional item returned from a plugin can't be found in the Kubernetes API (#2595, @skriss)
* when creating new backup from schedule from cli, allow backup name to be automatically generated (#2569, @cblecker)
* Convert manifests + BSL api client to kubebuilder (#2561, @carlisia)
* backup/restore: reinstantiate backup store just before uploading artifacts to ensure credentials are up-to-date (#2550, @skriss)


@@ -0,0 +1,93 @@
## v1.6.1
### 2021-06-21
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.6.1
### Container Image
`velero/velero:v1.6.1`
### Documentation
https://velero.io/docs/v1.6/
### Upgrading
https://velero.io/docs/v1.6/upgrade-to-1.6/
### All Changes
* Fix CR restore regression introduced in 1.6 restore progress. (#3845, @sseago)
* Skip the restore of volumes that originally came from a projected volume when using restic. (#3877, @zubron)
* Skip backing up projected volumes when using restic (#3866, @alaypatel07)
* 🐛 Fix plugin name derivation from image name (#3711, @ashish-amarnath)
## v1.6.0
### 2021-04-12
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.6.0
### Container Image
`velero/velero:v1.6.0`
### Documentation
https://velero.io/docs/v1.6/
### Upgrading
https://velero.io/docs/v1.6/upgrade-to-1.6/
### Highlights
* Support for per-BSL credentials
* Progress reporting for restores
* Restore API Groups by priority level
* Restic v0.12.0 upgrade
* End-to-end testing
* CLI usability improvements
### All Changes
* Add support for restic to use per-BSL credentials. Velero will now serialize the secret referenced by the `Credential` field in the BSL and use this path when setting provider specific environment variables for restic commands. (#3489, @zubron)
* Upgrade restic from v0.9.6 to v0.12.0. (#3528, @ashish-amarnath)
* Progress reporting added for Velero Restores (#3125, @pranavgaikwad)
* Add uninstall option for velero cli (#3399, @vadasambar)
* Add support for per-BSL credentials. Velero will now serialize the secret referenced by the `Credential` field in the BSL and pass this path through to Object Storage plugins via the `config` map using the `credentialsFile` key. (#3442, @zubron)
* Fixed a bug where restic volumes would not be restored when using a namespace mapping. (#3475, @zubron)
* Restore API group version by priority. Increase timeout to 3 minutes in DeploymentIsReady(...) function in the install package (#3133, @codegold79)
* Add field and cli flag to associate a credential with a BSL on BSL create|set. (#3190, @carlisia)
* Add colored output to `describe schedule/backup/restore` commands (#3275, @mike1808)
* Add CAPI Cluster and ClusterResourceSets to default restore priorities so that the capi-controller-manager does not panic on restores. (#3446, @nrb)
* Use label to select Velero deployment in plugin cmd (#3447, @codegold79)
* feat: support setting BackupStorageLocation CA certificate via `velero backup-location set --cacert` (#3167, @jenting)
* Add restic initContainer length check in pod volume restore to prevent the restic plugin container from disappearing at runtime (#3198, @shellwedance)
* Bump versions of external snapshotter and others in order to make `go get` to succeed (#3202, @georgettica)
* Support fish shell completion (#3231, @jenting)
* Change the logging level of PV deletion timeout from Debug to Warn (#3316, @MadhavJivrajani)
* Set the BSL created at install time as the "default" (#3172, @carlisia)
* Capitalize all help messages (#3209, @jenting)
* Increased default Velero pod memory limit to 512Mi (#3234, @dsmithuchida)
* Fixed an issue where the deletion of a backup would fail if the backup tarball couldn't be downloaded from object storage. Now the tarball is only downloaded if there are associated DeleteItemAction plugins and if downloading the tarball fails, the plugins are skipped. (#2993, @zubron)
* feat: add delete sub-command for BSL (#3073, @jenting)
* 🐛 BSLs with validation disabled should be validated at least once (#3084, @ashish-amarnath)
* feat: support configures BackupStorageLocation custom resources to indicate which one is the default (#3092, @jenting)
* Added "--preserve-nodeports" flag to preserve original nodePorts when restoring. (#3095, @yusufgungor)
* Owner reference in backup when created from schedule (#3127, @matheusjuvelino)
* issue: add flag to the schedule cmd to configure the `useOwnerReferencesInBackup` option #3176 (#3182, @matheusjuvelino)
* cli: allow creating multiple instances of Velero across two different namespaces (#2886, @alaypatel07)
* Feature: It is possible to change the timezone of the container by specifying it in the manifest (env: [TZ: Zone/Country]) or in the Helm Chart configuration ({extraEnvVars: [TZ: 'Zone/Country']}) (#2944, @mickkael)
* Fix issue where bare `velero` command returned an error code. (#2947, @nrb)
* Restore CRD Resource name to fix CRD wait functionality. (#2949, @sseago)
* Fixed 'velero.io/change-pvc-node-selector' plugin to fetch configmap using label key "velero.io/change-pvc-node-selector" (#2970, @mynktl)
* Compile with Go 1.15 (#2974, @gliptak)
* Fix BSL controller to avoid invoking init() on all BSLs regardless of ValidationFrequency (#2992, @betta1)
* Ensure that bound PVCs and PVs remain bound on restore. (#3007, @nrb)
* Allows the restic-wait container to exist in any order in the pod being restored. Prints a warning message in the case where the restic-wait container isn't the first container in the list of initialization containers. (#3011, @doughepi)
* Add warning to velero version cmd if the client and server versions mismatch. (#3024, @cvhariharan)
* 🐛 Use namespace and name to match PVB to Pod restore (#3051, @ashish-amarnath)
* Fixed various typos across codebase (#3057, @invidian)
* 🐛 ItemAction plugins for unresolvable types should not be run for all types (#3059, @ashish-amarnath)
* Basic end-to-end tests, generate data/backup/remove/restore/verify. Uses distributed data generator (#3060, @dsu-igeek)
* Added GitHub Workflow running Codespell for spell checking (#3064, @invidian)
* Pass annotations from a schedule to the backups it creates, the same way it is done for labels. Add WithAnnotationsMap function to the builder to be able to pass a map instead of a key/val list (#3067, @funkycode)
* Add instructions to clone repository for examples in docs (#3074, @MadhavJivrajani)
* 🏃‍♂️ update setup-kind github actions CI (#3085, @ashish-amarnath)
* Modify wrong function name to correct one. (#3106, @shellwedance)


@@ -1 +0,0 @@
backup/restore: reinstantiate backup store just before uploading artifacts to ensure credentials are up-to-date


@@ -1 +0,0 @@
Convert manifests + BSL api client to kubebuilder


@@ -1 +0,0 @@
when creating new backup from schedule from cli, allow backup name to be automatically generated


@@ -1 +0,0 @@
log a warning instead of erroring if an additional item returned from a plugin can't be found in the Kubernetes API


@@ -1 +0,0 @@
Azure: support using `aad-pod-identity` auth when using restic


@@ -1 +0,0 @@
When a timeout string can't be parsed, log the error as a warning instead of silently consuming the error.


@@ -1 +0,0 @@
implement option to back up all volumes by default with restic


@@ -1 +0,0 @@
add a CI check for a changelog file


@@ -1 +0,0 @@
Add linter checks to Makefile


@@ -1,6 +0,0 @@
improve builder image handling so that we don't rebuild each `make shell`
first check if there are pending changed on the build-image dockerfile if so build it.
then check if there is an image in the registry if so pull it.
then build an image cause we don't have a cached image. (this handles the backward compat case.)
fix make clean to clear go mod cache before removing dirs (for containerized builds)


@@ -1,3 +0,0 @@
Documentation: Update Jekyll to 4.1.0
Switch from redcarpet to kramdown for Markdown renderer


@@ -1 +0,0 @@
Update basic-install.md documentation: Add windows cli installation option via chocolatey


@@ -1 +0,0 @@
Add a new supported provider for an object storage plugin for Storj


@@ -1 +0,0 @@
move CSI plugin out of prototype into beta


@@ -1 +0,0 @@
Update basic-install and release-instructions documentation for Windows Chocolatey package


@@ -1 +0,0 @@
bump Kubernetes module dependencies to v0.18.4 to fix https://github.com/vmware-tanzu/velero/issues/2540 by adding code compatibility with kubernetes v1.18


@@ -1 +0,0 @@
Improve velero download doc


@@ -1 +0,0 @@
Add a BSL controller to handle validation + update BSL status phase (validation removed from the server and no longer blocks when there's any invalid BSL)


@@ -1 +0,0 @@
updated acceptable values on cron schedule from 0-7 to 0-6


@@ -1 +0,0 @@
capture version of the CRD prior before invoking the remap_crd_version backup item action


@@ -1 +0,0 @@
Adjust restic default time out to 4 hours and base pod resource requests to 500m CPU/512Mi memory.


@@ -1 +0,0 @@
Add backupValidationFailureTotal to metrics


@@ -18,7 +18,7 @@ spec:
scope: Namespaced
validation:
openAPIV3Schema:
description: Backup is a Velero resource that respresents the capture of Kubernetes
description: Backup is a Velero resource that represents the capture of Kubernetes
cluster state at a point in time (API objects and associated volume state).
properties:
apiVersion:
@@ -303,6 +303,15 @@ spec:
are ANDed.
type: object
type: object
orderedResources:
additionalProperties:
type: string
description: OrderedResources specifies the backup order of resources
of specific Kind. The map key is the Kind name and value is a list
of resource names separated by commas. Each resource name has format
"namespace/resourcename". For cluster resources, simply use "resourcename".
nullable: true
type: object
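# Illustrative use of the new field in a Backup spec (names below are hypothetical;
# the key/value format follows the description above):
#   orderedResources:
#     pods: "ns1/pod-a,ns1/pod-b"
#     persistentvolumes: "pv-1,pv-2"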
snapshotVolumes:
description: SnapshotVolumes specifies whether to take cloud snapshots
of any PV's referenced in the set of objects included in the Backup.


@@ -21,11 +21,17 @@ spec:
- JSONPath: .metadata.creationTimestamp
name: Age
type: date
- JSONPath: .spec.default
description: Default backup storage location
name: Default
type: boolean
group: velero.io
names:
kind: BackupStorageLocation
listKind: BackupStorageLocationList
plural: backupstoragelocations
shortNames:
- bsl
singular: backupstoragelocation
preserveUnknownFields: false
scope: Namespaced
@@ -69,6 +75,28 @@ spec:
type: string
description: Config is for provider-specific configuration fields.
type: object
credential:
description: Credential contains the credential information intended
to be used with this location
properties:
key:
description: The key of the secret to select from. Must be a valid
secret key.
type: string
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
optional:
description: Specify whether the Secret or its key must be defined
type: boolean
required:
- key
type: object
default:
description: Default indicates this location is the default backup storage
location.
type: boolean
objectStorage:
description: ObjectStorageLocation specifies the settings necessary
to connect to a provider's object storage.
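# Illustrative BackupStorageLocation using the new credential and default fields
# (names below are hypothetical; assumes a secret "bsl-credentials" with key "cloud"):
#   apiVersion: velero.io/v1
#   kind: BackupStorageLocation
#   metadata:
#     name: secondary
#     namespace: velero
#   spec:
#     provider: aws
#     default: false
#     credential:
#       name: bsl-credentials
#       key: cloud
#     objectStorage:
#       bucket: my-bucket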


@@ -16,6 +16,8 @@ spec:
singular: downloadrequest
preserveUnknownFields: false
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: DownloadRequest is a request to download an artifact from backup

File diff suppressed because it is too large


@@ -318,6 +318,16 @@ spec:
are ANDed.
type: object
type: object
orderedResources:
additionalProperties:
type: string
description: OrderedResources specifies the backup order of resources
of specific Kind. The map key is the Kind name and value is a
list of resource names separated by commas. Each resource name
has format "namespace/resourcename". For cluster resources, simply
use "resourcename".
nullable: true
type: object
snapshotVolumes:
description: SnapshotVolumes specifies whether to take cloud snapshots
of any PV's referenced in the set of objects included in the Backup.
@@ -338,6 +348,11 @@ spec:
type: string
type: array
type: object
useOwnerReferencesInBackup:
description: UseOwnerReferencesBackup specifies whether to use OwnerReferences
on backups created by this Schedule.
nullable: true
type: boolean
required:
- schedule
- template


@@ -13,9 +13,13 @@ spec:
kind: ServerStatusRequest
listKind: ServerStatusRequestList
plural: serverstatusrequests
shortNames:
- ssr
singular: serverstatusrequest
preserveUnknownFields: false
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: ServerStatusRequest is a request to access current status information


@@ -53,7 +53,7 @@ spec:
a Velero VolumeSnapshotLocation.
properties:
phase:
description: VolumeSnapshotLocationPhase is the lifecyle phase of a
description: VolumeSnapshotLocationPhase is the lifecycle phase of a
Velero VolumeSnapshotLocation.
enum:
- Available

File diff suppressed because one or more lines are too long


@@ -26,3 +26,43 @@ rules:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- downloadrequests
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- downloadrequests/status
verbs:
- get
- patch
- update
- apiGroups:
- velero.io
resources:
- serverstatusrequests
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- velero.io
resources:
- serverstatusrequests/status
verbs:
- get
- patch
- update


@@ -17,7 +17,7 @@ spec:
scope: ""
validation:
openAPIV3Schema:
description: Backup is a Velero resource that respresents the capture of Kubernetes
description: Backup is a Velero resource that represents the capture of Kubernetes
cluster state at a point in time (API objects and associated volume state).
properties:
apiVersion:
@@ -679,6 +679,10 @@ spec:
PVs from snapshot (via the cloudprovider).
nullable: true
type: boolean
preserveNodePorts:
description: PreserveNodePorts specifies whether to restore old nodePorts from backup.
nullable: true
type: boolean
scheduleName:
description: ScheduleName is the unique name of the Velero schedule
to restore from. If specified, and BackupName is empty, Velero will


@@ -52,7 +52,7 @@ spec:
of a Velero VolumeSnapshotLocation.
properties:
phase:
description: VolumeSnapshotLocationPhase is the lifecyle phase of
description: VolumeSnapshotLocationPhase is the lifecycle phase of
a Velero VolumeSnapshotLocation.
enum:
- Available


@@ -84,7 +84,7 @@ If the metadata file does not exist, this is an older backup and we cannot displ
### Fetch backup contents archive and walkthrough to list contents
Instead of recording new metadata about what resources have been backed up, we could simply download the backup contents archive and walkthrough it to list the contents everytime `velero backup describe <name> --details` is run.
Instead of recording new metadata about what resources have been backed up, we could simply download the backup contents archive and walkthrough it to list the contents every time `velero backup describe <name> --details` is run.
The advantage of this approach is that we don't need to change any backup procedures as we already have this content, and we will also be able to list resources for older backups.
Additionally, if we wanted to expose more information about the backed up resources, we can do so without having to update what we store in the metadata.


@@ -176,7 +176,7 @@ This will allow the development to continue on the feature while it's in pre-pro
[`BackupStore.PutBackup`][9] will receive an additional argument, `volumeSnapshots io.Reader`, that contains the JSON representation of `VolumeSnapshots`.
This will be written to a file named `csi-snapshots.json.gz`.
[`defaultRestorePriorities`][11] should be rewritten to the following to accomodate proper association between the CSI objects and PVCs. `CustomResourceDefinition`s are moved up because they're necessary for creating the CSI CRDs. The CSI CRDs are created before `PersistentVolume`s and `PersistentVolumeClaim`s so that they may be used as data sources.
[`defaultRestorePriorities`][11] should be rewritten to the following to accommodate proper association between the CSI objects and PVCs. `CustomResourceDefinition`s are moved up because they're necessary for creating the CSI CRDs. The CSI CRDs are created before `PersistentVolume`s and `PersistentVolumeClaim`s so that they may be used as data sources.
GitHub issue [1565][17] represents this work.
```go
@@ -248,7 +248,7 @@ Volumes with any other `PersistentVolumeSource` set will use Velero's current Vo
### VolumeSnapshotLocations and VolumeSnapshotClasses
Velero uses its own `VolumeSnapshotLocation` CRDs to specify configuration options for a given storage system.
In Velero, this often includes topology information such as regions or availibility zones, as well as credential information.
In Velero, this often includes topology information such as regions or availability zones, as well as credential information.
CSI volume snapshotting has a `VolumeSnapshotClass` CRD which also contains configuration options for a given storage system, but these options are not the same as those that Velero would use.
Since CSI volume snapshotting is operating within the same storage system that manages the volumes already, it does not need the same topology or credential information that Velero does.
@@ -269,7 +269,7 @@ Additionally, the VolumeSnapshotter plugins and CSI volume snapshot drivers over
Thus, there's not a logical place to fit the creation of VolumeSnapshot creation in the VolumeSnapshotter interface.
* Implement CSI logic directly in Velero core code.
The plugins could be packaged separately, but that doesn't necessarily make sense with server and client changes being made to accomodate CSI snapshot lookup.
The plugins could be packaged separately, but that doesn't necessarily make sense with server and client changes being made to accommodate CSI snapshot lookup.
* Implementing the CSI logic entirely in external plugins.
As mentioned above, the necessary plugins for `PersistentVolumeClaim`, `VolumeSnapshot`, and `VolumeSnapshotContent` could be hosted out-out-of-tree from Velero.
@@ -306,16 +306,16 @@ Without these objects, the provider-level snapshots cannot be located in order t
[1]: https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/
[2]: https://github.com/kubernetes-csi/external-snapshotter/blob/master/pkg/apis/volumesnapshot/v1alpha1/types.go#L41
[3]: https://github.com/kubernetes-csi/external-snapshotter/blob/master/pkg/apis/volumesnapshot/v1alpha1/types.go#L161
[4]: https://github.com/heptio/velero/blob/master/pkg/volume/snapshot.go#L21
[5]: https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/pod_volume_backup.go#L88
[4]: https://github.com/heptio/velero/blob/main/pkg/volume/snapshot.go#L21
[5]: https://github.com/heptio/velero/blob/main/pkg/apis/velero/v1/pod_volume_backup.go#L88
[6]: https://github.com/heptio/velero-csi-plugin/
[7]: https://github.com/heptio/velero/blob/master/pkg/plugin/velero/volume_snapshotter.go#L26
[8]: https://github.com/heptio/velero/blob/master/pkg/controller/backup_controller.go#L560
[9]: https://github.com/heptio/velero/blob/master/pkg/persistence/object_store.go#L46
[10]: https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/labels_annotations.go#L21
[11]: https://github.com/heptio/velero/blob/master/pkg/cmd/server/server.go#L471
[12]: https://github.com/heptio/velero/blob/master/pkg/cmd/util/output/backup_describer.go
[13]: https://github.com/heptio/velero/blob/master/pkg/cmd/util/output/backup_describer.go#L214
[7]: https://github.com/heptio/velero/blob/main/pkg/plugin/velero/volume_snapshotter.go#L26
[8]: https://github.com/heptio/velero/blob/main/pkg/controller/backup_controller.go#L560
[9]: https://github.com/heptio/velero/blob/main/pkg/persistence/object_store.go#L46
[10]: https://github.com/heptio/velero/blob/main/pkg/apis/velero/v1/labels_annotations.go#L21
[11]: https://github.com/heptio/velero/blob/main/pkg/cmd/server/server.go#L471
[12]: https://github.com/heptio/velero/blob/main/pkg/cmd/util/output/backup_describer.go
[13]: https://github.com/heptio/velero/blob/main/pkg/cmd/util/output/backup_describer.go#L214
[14]: https://github.com/kubernetes/kubernetes/blob/8ea9edbb0290e9de1e6d274e816a4002892cca6f/pkg/controller/volume/persistentvolume/util/util.go#L69
[15]: https://github.com/kubernetes/kubernetes/pull/30285
[16]: https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/types.go#L237


@@ -73,8 +73,8 @@ This same approach can be taken for CA bundles. The bundle can be stored in a
secret which is referenced on the BSL and written to a temp file prior to
invoking Restic.
[1](https://github.com/vmware-tanzu/velero/blob/master/pkg/restic/repository_manager.go#L238-L245)
[2](https://github.com/vmware-tanzu/velero/blob/master/pkg/restic/common.go#L168-L203)
[1](https://github.com/vmware-tanzu/velero/blob/main/pkg/restic/repository_manager.go#L238-L245)
[2](https://github.com/vmware-tanzu/velero/blob/main/pkg/restic/common.go#L168-L203)
## Detailed Design
@@ -126,7 +126,7 @@ would look like:
$ velero client config set cacert PATH
```
[1]: https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/master/velero-plugin-for-aws/object_store.go#L135
[2]: https://github.com/vmware-tanzu/velero/blob/master/pkg/restic/command.go#L47
[3]: https://github.com/restic/restic/blob/master/internal/backend/http_transport.go#L81
[4]: https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/master/velero-plugin-for-aws/object_store.go#L154
[1]: https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/main/velero-plugin-for-aws/object_store.go#L135
[2]: https://github.com/vmware-tanzu/velero/blob/main/pkg/restic/command.go#L47
[3]: https://github.com/restic/restic/blob/main/internal/backend/http_transport.go#L81
[4]: https://github.com/vmware-tanzu/velero-plugin-for-aws/blob/main/velero-plugin-for-aws/object_store.go#L154


@@ -0,0 +1,199 @@
# Delete Item Action Plugins
## Abstract
Velero should provide a way to delete items created during a backup, with a model and interface similar to that of BackupItemAction and RestoreItemAction plugins.
These plugins would be invoked when a backup is deleted, and would receive items from within the backup tarball.
## Background
As part of Container Storage Interface (CSI) snapshot support, Velero added a new pattern for backing up and restoring snapshots via BackupItemAction and RestoreItemAction plugins.
When others have tried to use this pattern, however, they encountered issues with deleting the resources made in their own ItemAction plugins, as Velero does not expose any sort of extension at backup deletion time.
These plugins largely seek to delete resources that exist outside of Kubernetes.
This design seeks to provide the missing extension point.
## Goals
- Provide a DeleteItemAction API for plugins to implement
- Update Velero backup deletion logic to invoke registered DeleteItemAction plugins.
## Non Goals
- Specific implementations of the DeleteItemAction API beyond test cases.
- Rollback of DeleteItemAction execution.
## High-Level Design
The DeleteItemAction plugin API will closely resemble the RestoreItemAction plugin design, in that plugins will receive the Velero `Backup` Go struct that is being deleted and a matching Kubernetes resource extracted from the backup tarball.
The Velero backup deletion process will be modified so that if there are any DeleteItemAction plugins registered, the backup tarball will be downloaded and extracted, similar to how restore logic works now.
Then, each item in the backup tarball will be iterated over to see if a DeleteItemAction plugin matches for it.
If a DeleteItemAction plugin matches, the `Backup` and relevant item will be passed to the DeleteItemAction.
The DeleteItemAction plugins will be run _first_ in the backup deletion process, before deleting snapshots from storage or `Restore`s from the Kubernetes API server.
DeleteItemAction plugins *cannot* rollback their actions.
This is because there is currently no way to recover other deleted components of a backup, such as volume/restic snapshots or other DeleteItemAction resources.
DeleteItemAction plugins will be run in alphanumeric order based on their registered names.
## Detailed Design
### New types
The `DeleteItemAction` interface is as follows:
```go
// DeleteItemAction is an actor that performs an action based on an item in a backup that is being deleted.
type DeleteItemAction interface {
// AppliesTo returns information about which resources this action should be invoked for.
// A DeleteItemAction's Execute function will only be invoked on items that match the returned
// selector. A zero-valued ResourceSelector matches all resources.
AppliesTo() (ResourceSelector, error)
// Execute allows the ItemAction to perform arbitrary logic with the item being deleted.
Execute(DeleteItemActionInput) error
}
```
The `DeleteItemActionInput` type is defined as follows:
```go
type DeleteItemActionInput struct {
// Item is the item taken from the pristine backed up version of resource.
Item runtime.Unstructured
// Backup is the representation of the backup resource processed by Velero.
Backup *api.Backup
}
```
Both `DeleteItemAction` and `DeleteItemActionInput` will be defined in `pkg/plugin/velero/delete_item_action.go`.
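To make the contract concrete, here is a minimal, self-contained sketch of a plugin implementing the proposed interface. The `pvcSnapshotDeleter` type, the `example.io/snapshot-id` annotation, and the trimmed local copies of the proposed types are illustrative assumptions only; the real types would come from `pkg/plugin/velero`.

```go
package exampleplugin

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
)

// Trimmed local stand-ins for the proposed types so this sketch is self-contained;
// the real definitions would live in pkg/plugin/velero/delete_item_action.go.
type ResourceSelector struct{ IncludedResources []string }

type Backup struct{ Name string }

type DeleteItemActionInput struct {
	Item   runtime.Unstructured
	Backup *Backup
}

type DeleteItemAction interface {
	AppliesTo() (ResourceSelector, error)
	Execute(DeleteItemActionInput) error
}

// pvcSnapshotDeleter is a hypothetical plugin that cleans up an external snapshot
// referenced by an annotation on a backed-up PersistentVolumeClaim.
type pvcSnapshotDeleter struct{}

var _ DeleteItemAction = (*pvcSnapshotDeleter)(nil)

// AppliesTo scopes the plugin so Execute is only invoked for PVC items in the tarball.
func (d *pvcSnapshotDeleter) AppliesTo() (ResourceSelector, error) {
	return ResourceSelector{IncludedResources: []string{"persistentvolumeclaims"}}, nil
}

// Execute receives the pristine backed-up item and the Backup being deleted.
func (d *pvcSnapshotDeleter) Execute(input DeleteItemActionInput) error {
	obj, ok := input.Item.(*unstructured.Unstructured)
	if !ok {
		return fmt.Errorf("unexpected item type %T", input.Item)
	}

	snapshotID, found, err := unstructured.NestedString(obj.Object, "metadata", "annotations", "example.io/snapshot-id")
	if err != nil {
		return err
	}
	if !found {
		return nil // nothing for this plugin to clean up on this item
	}

	// A real plugin would call its external storage API here to delete snapshotID.
	fmt.Printf("deleting external snapshot %s for backup %s\n", snapshotID, input.Backup.Name)
	return nil
}
```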
### Generate protobuf definitions and client/servers
In `pkg/plugin/proto`, add `DeleteItemAction.proto`.
Protobuf definitions will be necessary for:
```protobuf
message DeleteItemActionExecuteRequest {
...
}
message DeleteItemActionExecuteResponse {
...
}
message DeleteItemActionAppliesToRequest {
...
}
message DeleteItemActionAppliesToResponse {
...
}
service DeleteItemAction {
rpc AppliesTo(DeleteItemActionAppliesToRequest) returns (DeleteItemActionAppliesToResponse)
rpc Execute(DeleteItemActionExecuteRequest) returns (DeleteItemActionExecuteResponse)
}
```
Once these are written, then a client and server implementation can be written in `pkg/plugin/framework/delete_item_action_client.go` and `pkg/plugin/framework/delete_item_action_server.go`, respectively.
These should be largely the same as the client and server implementations for `RestoreItemAction` and `BackupItemAction` plugins.
### Restartable delete plugins
Similar to `RestoreItemAction` and `BackupItemAction` plugins, restartable processes will need to be implemented.
In `pkg/plugin/clientmgmt`, add `restartable_delete_item_action.go`, creating the following unexported type:
```go
type restartableDeleteItemAction struct {
key kindAndName
sharedPluginProcess RestartableProcess
config map[string]string
}
// newRestartableDeleteItemAction returns a new restartableDeleteItemAction.
func newRestartableDeleteItemAction(name string, sharedPluginProcess RestartableProcess) *restartableDeleteItemAction {
// ...
}
// getDeleteItemAction returns the delete item action for this restartableDeleteItemAction. It does *not* restart the
// plugin process.
func (r *restartableDeleteItemAction) getDeleteItemAction() (velero.DeleteItemAction, error) {
// ...
}
// getDelegate restarts the plugin process (if needed) and returns the delete item action for this restartableDeleteItemAction.
func (r *restartableDeleteItemAction) getDelegate() (velero.DeleteItemAction, error) {
// ...
}
// AppliesTo restarts the plugin's process if needed, then delegates the call.
func (r *restartableDeleteItemAction) AppliesTo() (velero.ResourceSelector, error) {
// ...
}
// Execute restarts the plugin's process if needed, then delegates the call.
func (r *restartableDeleteItemAction) Execute(input *velero.DeleteItemActionInput) (error) {
// ...
}
```
This file will be very similar in structure to the existing `restartable_backup_item_action.go` and `restartable_restore_item_action.go` files in `pkg/plugin/clientmgmt`.
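As a sketch of what the delegate plumbing might look like, mirroring those existing restartable types (the `resetIfNeeded` call is an assumption based on how the other restartable implementations use `RestartableProcess`):

```go
// getDelegate restarts the plugin process (if needed) and returns the delete item action.
// This mirrors the existing restartable types; resetIfNeeded is assumed to be the
// RestartableProcess helper they already rely on.
func (r *restartableDeleteItemAction) getDelegate() (velero.DeleteItemAction, error) {
	if err := r.sharedPluginProcess.resetIfNeeded(); err != nil {
		return nil, err
	}
	return r.getDeleteItemAction()
}
```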
### Plugin manager changes
Add the following methods to `pkg/plugin/clientmgmt/manager.go`'s `Manager` interface:
```go
type Manager interface {
...
// GetDeleteItemAction returns a DeleteItemAction plugin for name.
GetDeleteItemAction(name string) (DeleteItemAction, error)
// GetDeleteItemActions returns all DeleteItemAction plugins.
GetDeleteItemActions() ([]DeleteItemAction, error)
}
```
The unexported `manager` type should implement both the `GetDeleteItemAction` and `GetDeleteItemActions`.
Both of these methods should have the same exception for `velero.io/`-prefixed plugins that all other types do.
`GetDeleteItemAction` and `GetDeleteItemActions` will invoke the `restartableDeleteItemAction` implementations.
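For illustration, a hedged sketch of the list variant, mirroring how the existing `GetBackupItemActions`/`GetRestoreItemActions` methods are built; the `PluginKindDeleteItemAction` constant and the registry call shown are assumptions about that existing plumbing, not confirmed API.

```go
// GetDeleteItemActions returns all registered DeleteItemAction plugins,
// building a restartable wrapper for each one via GetDeleteItemAction.
func (m *manager) GetDeleteItemActions() ([]velero.DeleteItemAction, error) {
	list := m.registry.List(framework.PluginKindDeleteItemAction)

	actions := make([]velero.DeleteItemAction, 0, len(list))
	for i := range list {
		id := list[i]

		r, err := m.GetDeleteItemAction(id.Name)
		if err != nil {
			return nil, err
		}

		actions = append(actions, r)
	}

	return actions, nil
}
```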
### Deletion controller modifications
`pkg/controller/backup_deletion_controller.go` will be updated to have plugin management invoked.
In `processRequest`, before deleting snapshots, get any registered `DeleteItemAction` plugins.
If there are none, proceed as normal.
If there are one or more, download the backup tarball from backup storage, untar it to temporary storage, and iterate through the items, matching them to the applicable plugins.
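A minimal sketch of that matching loop follows; the map shape, the `selectorIncludes` helper, and the log-and-continue error handling are assumptions made only for illustration (`api` is the Velero v1 API package alias used in the interface definition above).

```go
// invokeDeleteItemActions offers every item extracted from the backup tarball to each
// registered DeleteItemAction whose ResourceSelector matches the item's resource.
func invokeDeleteItemActions(
	log logrus.FieldLogger,
	actions []velero.DeleteItemAction,
	backup *api.Backup,
	itemsByResource map[string][]unstructured.Unstructured, // e.g. "persistentvolumeclaims" -> items
) {
	for resource, items := range itemsByResource {
		for _, action := range actions {
			selector, err := action.AppliesTo()
			if err != nil {
				log.WithError(err).Warnf("error getting resource selector for resource %q", resource)
				continue
			}
			if !selectorIncludes(selector, resource) {
				continue
			}

			for i := range items {
				item := items[i]
				input := velero.DeleteItemActionInput{Item: &item, Backup: backup}
				if err := action.Execute(input); err != nil {
					// This sketch logs and moves on; the design leaves the exact
					// error-handling policy to the implementation.
					log.WithError(err).Warnf("error deleting item %s/%s", item.GetNamespace(), item.GetName())
				}
			}
		}
	}
}

// selectorIncludes is a simplified stand-in for Velero's resource-matching helpers:
// an empty IncludedResources list matches everything.
func selectorIncludes(s velero.ResourceSelector, resource string) bool {
	if len(s.IncludedResources) == 0 {
		return true
	}
	for _, r := range s.IncludedResources {
		if r == resource {
			return true
		}
	}
	return false
}
```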
## Alternatives Considered
Another proposal for higher level `DeleteItemActions` was initially included, which would require implementors to individually download the backup tarball themselves.
While this may be useful long term, it is not a good fit for the current goals as each plugin would be re-implementing a lot of boilerplate.
See the deletion-plugins.md file for this alternative proposal in more detail.
The `VolumeSnapshotter` interface is not generic enough to meet the requirements here, as it is specifically for taking snapshots of block devices.
## Security Considerations
By their nature, `DeleteItemAction` plugins will be deleting data, which would normally be a security concern.
However, these will only be invoked in two situations: either when a `BackupDeleteRequest` is sent via a user with the `velero` CLI or some other management system, or when a Velero `Backup` expires by going over its TTL.
Because of this, the data deletion is not a concern.
## Compatibility
In terms of backwards compatibility, this design should stay compatible with most Velero installations that are upgrading.
If no DeleteItemAction plugins are present, then the backup deletion process should proceed the same way it worked prior to their inclusion.
## Implementation
The implementation dependencies are, roughly, in the order as they are described in the [Detailed Design](#detailed-design) section.
## Open Issues


@@ -0,0 +1,82 @@
# Deletion Plugins
Status: Alternative Proposal
## Abstract
Velero should introduce a new type of plugin that runs when a backup is deleted.
These plugins will delete any external resources associated with the backup so that they will not be left orphaned.
## Background
With the CSI plugin, Velero developers introduced a pattern of using BackupItemAction and RestoreItemAction plugins tied to PersistentVolumeClaims to create other resources to complete a backup.
In the CSI plugin case, Velero does clean up of these other resources, which are Kubernetes Custom Resources, within the core Velero server.
However, for external plugins that wish to use this same pattern, this is not a practical solution.
Velero's core cannot be extended for all possible Custom Resources, and not all external resources that get created are Kubernetes Custom Resources.
Therefore, Velero needs some mechanism that allows plugin authors who have created resources within a BackupItemAction or RestoreItemAction plugin to ensure those resources are deleted, regardless of what system those resources reside in.
## Goals
- Provide a new plugin type in Velero that is invoked when a backup is deleted.
## Non Goals
- Implementations of specific deletion plugins.
- Rollback of deletion plugin execution.
## High-Level Design
Velero will provide a new plugin type that is similar to its existing plugin architecture.
These plugins will be referred to as `DeleteAction` plugins.
`DeleteAction` plugins will receive the `Backup` CustomResource being deleted on execution.
`DeleteAction` plugins cannot prevent deletion of an item.
This is because multiple `DeleteAction` plugins can be registered, and this proposal does not include rollback and undoing of a deletion action.
Thus, if multiple `DeleteAction` plugins have already run and a later one requested that the deletion of the backup be stopped, the retained backup would be inconsistent.
`DeleteActions` will apply to `Backup`s based on a label on the `Backup` itself.
In order to ensure that `Backup`s don't execute `DeleteAction` plugins that are not relevant to them, `DeleteAction` plugins can register an `AppliesTo` function which will define a label selector on Velero backups.
`DeleteActions` will be run in alphanumerical order by plugin name.
This order is somewhat arbitrary, but will be used to give authors and users a somewhat predictable order of events.
## Detailed Design
The `DeleteAction` plugins will implement the following Go interface, defined in `pkg/plugin/velero/deletion_action.go`:
```go
type DeleteAction interface {
// AppliesTo will match the DeleteAction plugin against Velero Backups that it should operate against.
AppliesTo()
// Execute runs the custom plugin logic and may connect to external services.
Execute(backup *api.Backup) error
}
```
The following methods would be added to the `clientmgmt.Manager` interface in `pkg/plugin/clientmgmt/manager.go`:
```go
type Manager interface {
...
// GetDeleteActions returns the registered DeleteActions.
//TODO: do we need to get these by name, or can we get them all?
GetDeleteActions() ([]velero.DeleteAction, error)
...
```
## Alternatives Considered
TODO
## Security Considerations
TODO
## Compatibility
Backwards compatibility should be straightforward; if there are no installed `DeleteAction` plugins, then the backup deletion process will proceed as it does today.
## Implementation
TODO
## Open Issues
In order to add a custom label to the backup, the backup must be modifiable inside of the `BackupItemAction` and `RestoreItemAction` plugins, which it currently is not. A workaround for now is for the user to apply a label to the backup at creation time, but that is not ideal.


@@ -3,7 +3,7 @@
Status: Accepted
Some features may take a while to get fully implemented, and we don't necessarily want to have long-lived feature branches
A simple feature flag implementation allows code to be merged into master, but not used unless a flag is set.
A simple feature flag implementation allows code to be merged into main, but not used unless a flag is set.
## Goals


@@ -45,7 +45,7 @@ Currently, the Velero repository sits under the Heptio GitHub organization. With
### Notes/How-Tos
#### Transfering the GH repository
#### Transferring the GH repository
All action items needed for the repo transfer are listed in the Todo list above. For details about what gets moved and other info, this is the GH documentation: https://help.github.com/en/articles/transferring-a-repository
@@ -57,7 +57,7 @@ Someone with owner permission on the new repository needs to go to their Travis
After this, webhook notifications can be added following these instructions: https://docs.travis-ci.com/user/notifications/#configuring-webhook-notifications.
#### Transfering ZenHub
#### Transferring ZenHub
Pre-requisite: A new Zenhub account must exist for a vmware or vmware-tanzu organization.


@@ -154,7 +154,7 @@ In order to know the CR created for the particular backup of a volume, Velero ad
- `velero.io/pv-name` with value as volume that is undergoing backup
Backup name being unique won't cause issues like duplicates in identifying the CR.
Labels will be set with the value returned from `GetValidName` function. (https://github.com/vmware-tanzu/velero/blob/master/pkg/label/label.go#L35).
Labels will be set with the value returned from `GetValidName` function. (https://github.com/vmware-tanzu/velero/blob/main/pkg/label/label.go#L35).
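As an illustrative sketch of setting these labels (the `velero.io/backup-name` key is assumed from the surrounding text; `label.GetValidName` is Velero's existing helper in `pkg/label/label.go`):

```go
// applyIdentifyingLabels is a sketch only: it stamps the VolumePluginBackup CR's
// object metadata with the identifying labels described above, sanitizing values
// with label.GetValidName so they satisfy Kubernetes label-value limits.
func applyIdentifyingLabels(objectMeta *metav1.ObjectMeta, backupName, pvName string) {
	if objectMeta.Labels == nil {
		objectMeta.Labels = map[string]string{}
	}
	objectMeta.Labels["velero.io/backup-name"] = label.GetValidName(backupName) // assumed key
	objectMeta.Labels["velero.io/pv-name"] = label.GetValidName(pvName)
}
```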
If the plugin supports showing progress of the operation it is performing, it does the following:
- finds the VolumePluginBackup CR related to this backup operation by using `tags` passed in CreateSnapshot call
@@ -281,7 +281,7 @@ In order to know the CR created for the particular restore of a volume, Velero a
- `velero.io/snapshot-id` with value as snapshot id that need to be restored
- `velero.io/provider` with value as `Provider` in `VolumeSnapshotLocation`
Labels will be set with the value returned from `GetValidName` function. (https://github.com/vmware-tanzu/velero/blob/master/pkg/label/label.go#L35).
Labels will be set with the value returned from `GetValidName` function. (https://github.com/vmware-tanzu/velero/blob/main/pkg/label/label.go#L35).
Plugin will be able to identify CR by using snapshotID that it received as parameter of CreateVolumeFromSnapshot API, and plugin's Provider name.
It updates the progress of restore operation regularly if plugin supports feature of showing progress.
@@ -376,7 +376,7 @@ In order to know the CR created for the particular backup of a volume, volume sn
Backup name being unique won't cause issues like duplicates in identifying the CR.
Plugin need to sanitize the value that can be set for above labels. Label need to be set with the value returned from `GetValidName` function. (https://github.com/vmware-tanzu/velero/blob/master/pkg/label/label.go#L35).
Plugin need to sanitize the value that can be set for above labels. Label need to be set with the value returned from `GetValidName` function. (https://github.com/vmware-tanzu/velero/blob/main/pkg/label/label.go#L35).
Though no restrictions are required on the name of CR, as a general practice, volume snapshotter can name this CR with the value same as return value of CreateSnapshot.
@@ -413,7 +413,7 @@ However, it can provide preference over latest supported API.
If new fields are added without changing API version, it won't cause any problem as these resources are intended to provide information, and, there is no reconciliation on these resources.
### Compatibility of latest plugin with older version of Velero
Plugin that supports this CR should handle the situation gracefully when CRDs are not installed. It can handle the errors occured during creation/updation of the CRs.
Plugin that supports this CR should handle the situation gracefully when CRDs are not installed. It can handle the errors occurred during creation/updation of the CRs.
## Limitations:
@@ -432,7 +432,7 @@ But, this involves good amount of changes and needs a way for backward compatibi
As volume plugins are mostly K8s native, it's fine to go ahead with the current limitation.
### Update Backup CR
Instead of creating new CRs, plugins can directly update the status of Backup CR. But, this deviates from current approach of having seperate CRs like PodVolumeBackup/PodVolumeRestore to know operations progress.
Instead of creating new CRs, plugins can directly update the status of Backup CR. But, this deviates from current approach of having separate CRs like PodVolumeBackup/PodVolumeRestore to know operations progress.
### Restricting on name rather than using labels
Instead of using labels to identify the CR related to particular backup on a volume, restrictions can be placed on the name of VolumePluginBackup CR to be same as the value returned from CreateSnapshot.


@@ -32,14 +32,14 @@ It will also update the `spec.volumeName` of the related persistent volume claim
## Detailed Design
In `pkg/restore/restore.go`, around [line 872](https://github.com/heptio/velero/blob/master/pkg/restore/restore.go#L872), Velero has special-case code for persistent volumes.
In `pkg/restore/restore.go`, around [line 872](https://github.com/vmware-tanzu/velero/blob/main/pkg/restore/restore.go#L872), Velero has special-case code for persistent volumes.
This code will be updated to check for the two preconditions described in the previous section.
If the preconditions are met, the object will be given a new name.
The persistent volume will also be annotated with the original name, e.g. `velero.io/original-pv-name=NAME`.
Importantly, the name change will occur **before** [line 890](https://github.com/heptio/velero/blob/master/pkg/restore/restore.go#L890), where Velero checks to see if it should restore the persistent volume.
Importantly, the name change will occur **before** [line 890](https://github.com/vmware-tanzu/velero/blob/main/pkg/restore/restore.go#L890), where Velero checks to see if it should restore the persistent volume.
Additionally, the old and new persistent volume names will be recorded in a new field that will be added to the `context` struct, `renamedPVs map[string]string`.
In the special-case code for persistent volume claims starting on [line 987](https://github.com/heptio/velero/blob/master/pkg/restore/restore.go#L987), Velero will check to see if the claimed persistent volume has been renamed by looking in `ctx.renamedPVs`.
In the special-case code for persistent volume claims starting on [line 987](https://github.com/heptio/velero/blob/main/pkg/restore/restore.go#L987), Velero will check to see if the claimed persistent volume has been renamed by looking in `ctx.renamedPVs`.
If so, Velero will update the persistent volume claim's `spec.volumeName` to the new name.
## Alternatives Considered


@@ -63,7 +63,7 @@ With the `--json` flag, `restic backup` outputs single lines of JSON reporting t
The [command factory for backup](https://github.com/heptio/velero/blob/af4b9373fc73047f843cd4bc3648603d780c8b74/pkg/restic/command_factory.go#L37) will be updated to include the `--json` flag.
The code to run the `restic backup` command (https://github.com/heptio/velero/blob/af4b9373fc73047f843cd4bc3648603d780c8b74/pkg/controller/pod_volume_backup_controller.go#L241) will be changed to include a Goroutine that reads from the command's stdout stream.
The implementation of this will largely follow [@jmontleon's PoC](https://github.com/fusor/velero/pull/4/files) of this.
The Goroutine will periodically read the stream (every 10 seconds) and get the last printed status line, which will be convered to JSON.
The Goroutine will periodically read the stream (every 10 seconds) and get the last printed status line, which will be converted to JSON.
If `bytes_done` is empty, restic has not finished scanning the volume and hasn't calculated the `total_bytes`.
In this case, we will not update the PodVolumeBackup and instead will wait for the next iteration.
Once we get a non-zero value for `bytes_done`, the `bytes_done` and `total_bytes` properties will be read and the PodVolumeBackup will be patched to update `status.Progress.BytesDone` and `status.Progress.TotalBytes` respectively.
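For illustration, a minimal sketch of decoding such a status line; the struct and helper names are assumptions, and only the two fields this design relies on (`total_bytes`, `bytes_done`) are decoded. It requires the standard `encoding/json` and `strings` packages.

```go
// resticStatusLine models the subset of a `restic backup --json` status line used here.
type resticStatusLine struct {
	TotalBytes int64 `json:"total_bytes"`
	BytesDone  int64 `json:"bytes_done"`
}

// parseLastStatusLine takes the stdout captured so far and decodes its last line.
// A zero BytesDone means restic is still scanning the volume, so the caller skips
// patching the PodVolumeBackup until the next iteration.
func parseLastStatusLine(stdout string) (resticStatusLine, error) {
	var status resticStatusLine

	lines := strings.Split(strings.TrimSpace(stdout), "\n")
	if len(lines) == 0 {
		return status, nil
	}

	err := json.Unmarshal([]byte(lines[len(lines)-1]), &status)
	return status, err
}
```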


@@ -0,0 +1,143 @@
# Restore Hooks
This document proposes a solution that allows a user to specify Restore Hooks, much like Backup Hooks, that can be executed during the restore process.
## Goals
- Enable custom commands to be run during a restore in order to mirror the commands that are available to the backup process.
- Provide observability into the result of commands run in restored pods.
## Non Goals
- Handling any application specific scenarios (postgres, mongo, etc)
## Background
Velero supports Backup Hooks to execute commands before and/or after a backup.
This enables a user to, among other things, prepare data to be backed up without having to freeze an in-use volume.
An example of this would be to attach an empty volume to a Postgres pod, use a backup hook to execute `pg_dump` from the data volume, and back up the volume containing the export.
The problem is that there's no easy or automated way to include an automated restore process.
After a restore with the example configuration above, the postgres pod will be empty, but there will be a need to manually exec in and run `pg_restore`.
## High-Level Design
The Restore spec will have a `spec.hooks` section matching the same section on the Backup spec except no `pre` hooks can be defined - only `post`.
Annotations comparable to the annotations used during backup can also be set on pods.
For each restored pod, the Velero server will check if there are any hooks applicable to the pod.
If a restored pod has any applicable hooks, Velero will wait for the container where the hook is to be executed to reach status Running.
The Restore log will include the results of each post-restore hook and the Restore object status will incorporate the results of hooks.
The Restore log will include the results of each hook and the Restore object status will incorporate the results of hooks.
A new section at `spec.hooks.resources.initContainers` will allow for injecting initContainers into restored pods.
Annotations can be set as an alternative to defining the initContainers in the Restore object.
## Detailed Design
Post-restore hooks can be defined by annotation and/or by an array of resource hooks in the Restore spec.
The following annotations are supported:
- post.hook.restore.velero.io/container
- post.hook.restore.velero.io/command
- post.hook.restore.velero.io/on-error
- post.hook.restore.velero.io/exec-timeout
- post.hook.restore.velero.io/wait-timeout
Init restore hooks can be defined by annotation and/or in the new `initContainers` section in the Restore spec.
The initContainers schema is `pod.spec.initContainers`.
The following annotations are supported:
- init.hook.restore.velero.io/timeout
- init.hook.restore.velero.io/initContainers
This is an example of defining hooks in the Restore spec.
```yaml
apiVersion: velero.io/v1
kind: Restore
spec:
...
hooks:
resources:
-
name: my-hook
includedNamespaces:
- '*'
excludedNamespaces:
- some-namespace
includedResources:
- pods
excludedResources: []
labelSelector:
matchLabels:
app: velero
component: server
post:
-
exec:
container: postgres
command:
- /bin/bash
- -c
- rm /docker-entrypoint-initdb.d/dump.sql
onError: Fail
timeout: 10s
readyTimeout: 60s
init:
timeout: 120s
initContainers:
- name: restore
image: postgres:12
command: ["/bin/bash", "-c", "mv /backup/dump.sql /docker-entrypoint-initdb.d/"]
volumeMounts:
- name: backup
mountPath: /backup
```
As with Backups, if an annotation is defined on a pod then no hooks from the Restore spec will be applied.
### Implementation
The types and function in pkg/backup/item_hook_handler.go will be moved to a new package (pkg/hooks) and exported so they can be used for both backups and restores.
The post-restore hooks implementation will closely follow the design of restoring pod volumes with restic.
The pkg/restore.context type will have new fields `hooksWaitGroup` and `hooksErrs` comparable to `resticWaitGroup` and `resticErr`.
The pkg/restore.context.execute function will start a goroutine for each pod with applicable hooks and then continue with restoring other items.
Each hooks goroutine will create a pkg/util/hooks.ItemHookHandler for each pod and send any error on the context.hooksErrs channel.
The ItemHookHandler already includes stdout and stderr and other metadata in the Backup log, so the same logs will automatically be added to the Restore log (passed as the first argument to the `ItemHookHandler.HandleHooks` method).
The pkg/restore.context.execute function will wait for the hooksWaitGroup before returning.
Any errors received on context.hooksErrs will be added to errs.Velero.
One difference compared to the restic restore design is that any error on the context.hooksErrs channel will cancel the context of all hooks, since errors are only reported on this channel if the hook specified `onError: Fail`.
However, canceling the hooks goroutines will not cancel the restic goroutines.
In practice the restic goroutines will complete before the hooks since the hooks do not run until a pod is ready, but it's possible a hook will be executed and fail while a different pod is still in the pod volume restore phase.
Failed hooks with `onError: Continue` will appear in the Restore log but will not affect the status of the parent Restore.
Failed hooks with `onError: Fail` will cause the parent Restore to have status Partially Failed.
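A sketch of the goroutine/wait-group/channel pattern described above follows; `waitForContainerRunning`, `execPodHooks`, and `ctx.hooksCancelFunc` are hypothetical names used only to illustrate the flow, and `corev1api` stands for `k8s.io/api/core/v1`.

```go
// In execute(), for each restored pod with applicable hooks:
for _, pod := range podsWithApplicableHooks {
	ctx.hooksWaitGroup.Add(1)

	go func(pod *corev1api.Pod) {
		defer ctx.hooksWaitGroup.Done()

		// Wait for the target container to reach status Running before exec'ing the hook.
		if err := waitForContainerRunning(pod); err != nil {
			ctx.hooksErrs <- err
			return
		}

		// Only hooks with onError: Fail report errors on this channel.
		if err := execPodHooks(pod); err != nil {
			ctx.hooksErrs <- err
		}
	}(pod)
}

// Still in execute(), after all items are restored: wait for the hook goroutines
// and fold any reported errors into errs.Velero, cancelling the remaining hooks.
go func() {
	ctx.hooksWaitGroup.Wait()
	close(ctx.hooksErrs)
}()
for err := range ctx.hooksErrs {
	ctx.hooksCancelFunc()
	errs.Velero = append(errs.Velero, err.Error())
}
```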
If initContainers are specified for a pod, Velero will inject the containers into the beginning of the pod's initContainers list.
If a restic initContainer is also being injected, the restore initContainers will be injected directly after the restic initContainer.
The restore will use a RestoreItemAction to inject the initContainers.
Stdout and stderr of the restore initContainers will not be added to the Restore logs.
InitContainers that fail will not affect the parent Restore's status.
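A minimal sketch of the splicing rule follows, assuming the restic restore-wait container can be recognized by name; the `restic-wait` name is an assumption used only for illustration, and `corev1api` stands for `k8s.io/api/core/v1`.

```go
// injectRestoreInitContainers prepends the restore-hook initContainers to the pod,
// keeping an already-injected restic restore-wait container in the first slot.
func injectRestoreInitContainers(pod *corev1api.Pod, hookContainers []corev1api.Container) {
	insertAt := 0
	if len(pod.Spec.InitContainers) > 0 && pod.Spec.InitContainers[0].Name == "restic-wait" {
		insertAt = 1
	}

	updated := make([]corev1api.Container, 0, len(pod.Spec.InitContainers)+len(hookContainers))
	updated = append(updated, pod.Spec.InitContainers[:insertAt]...)
	updated = append(updated, hookContainers...)
	updated = append(updated, pod.Spec.InitContainers[insertAt:]...)

	pod.Spec.InitContainers = updated
}
```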
## Alternatives Considered
Wait for all restored Pods to report Ready, then execute the first hook in all applicable Pods simultaneously, then proceed to the next hook, etc.
That could introduce deadlock, e.g. if an API pod cannot be ready until the DB pod is restored.
Put the restore hooks on the Backup spec as a third lifecycle event named `restore` along with `pre` and `post`.
That would be confusing since `pre` and `post` would appear in the Backup log but `restore` would only be in the Restore log.
Execute restore hooks in parallel for each Pod.
That would not match the behavior of Backups.
Wait for PodStatus ready before executing the post-restore hooks in any container.
There are cases where the pod should not report itself ready until after the restore hook has run.
Include the logs from initContainers in the Restore log.
Unlike exec hooks where stdout and stderr are permanently lost if not added to the Restore log, the logs of the injected initContainers are available through the K8s API with kubectl or another client.
## Security Considerations
Stdout or stderr in the Restore log may contain sensitive information, but the same risk already exists for Backup hooks.


@@ -276,7 +276,7 @@ The value for these flags will be stored as annotations.
#### Handling CA certs
In anticipation of a new configuration implementation to handle custom CA certs (as per design doc https://github.com/vmware-tanzu/velero/blob/master/design/custom-ca-support.md), a new flag `velero storage-location create/set --cacert-file mapStringString` is proposed. It sets the configuration to use for creating a secret containing a custom certificate for an S3 location of a plugin provider. Format is provider:path-to-file.
In anticipation of a new configuration implementation to handle custom CA certs (as per design doc https://github.com/vmware-tanzu/velero/blob/main/design/custom-ca-support.md), a new flag `velero storage-location create/set --cacert-file mapStringString` is proposed. It sets the configuration to use for creating a secret containing a custom certificate for an S3 location of a plugin provider. Format is provider:path-to-file.
See discussion https://github.com/vmware-tanzu/velero/pull/2259#discussion_r384700723 for more clarification.
@@ -370,4 +370,4 @@ https://github.com/jpeach/contour/tree/1c575c772e9fd747fba72ae41ab99bdae7a01864/
## Security Considerations
N/A
N/A

design/restore-progress.md (new file, 220 lines)

@@ -0,0 +1,220 @@
# Restore progress reporting
Velero _Backup_ resource provides real-time progress of an ongoing backup by means of a _Progress_ field in the CR. Velero _Restore_, on the other hand, only shows one of the phases (InProgress, Completed, PartiallyFailed, Failed) of the ongoing restore. In this document, we propose detailed progress reporting for Velero _Restore_. With the introduction of the proposed _Progress_ field, Velero _Restore_ CR will look like:
```yml
apiVersion: velero.io/v1
kind: Restore
metadata:
name: test-restore
namespace: velero
spec:
[...]
status:
phase: InProgress
progress:
itemsRestored: 100
totalItems: 140
```
## Goals
- Enable progress reporting for Velero Restore
## Non Goals
- Estimate time to completion
## Background
The current _Restore_ CR lets users know whether a restore is in-progress or completed (failed/succeeded). While this basic piece of information is useful to the end user, there seems to be room for improvement in the user experience. The _Restore_ CR can show detailed progress in terms of the number of resources restored so far and the total number of resources to be restored. This will be particularly useful for restores that run for a longer duration of time. Such progress reporting already exists for Velero _Backup_. This document proposes similar implementation for Velero _Restore_.
## High-Level Design
We propose to divide the restore process in two steps. The first step will collect all the items to be restored from the backup tarball. It will apply the label selector and include/exclude rules on the resources / items and store them (preserving the priority order) in an in-memory data structure. The second step will read the collected items and restore them.
## Detailed Design
### Progress struct
A new struct will be introduced to store progress information:
```go
type RestoreProgress struct {
TotalItems int `json:"totalItems,omitempty"`
ItemsRestored int `json:"itemsRestored,omitempty"`
}
```
`RestoreStatus` will include the above struct:
```go
type RestoreStatus struct {
[...]
Progress *RestoreProgress `json:"progress,omitempty"`
}
```
### Modifications to restore.go
Currently, the restore process works by looping through the resources in the backup tarball and restoring them one-by-one in the same pass:
```go
func (ctx *context) execute(...) {
[...]
for _, resource := range getOrderedResources(...) {
[...]
for namespace, items := range resourceList.ItemsByNamespace {
[...]
for _, item := range items {
[...]
// restore item here
w, e := restoreItem(...)
}
}
}
}
```
We propose to remove the call to `restoreItem()` in the inner most loop and instead store the item in a data structure. Once all the items are collected, we loop through the array of collected items and make a call to `restoreItem()`:
```go
func (ctx *context) getOrderedResourceCollection(...) {
collectedResources := []restoreResource
for _, resource := range getOrderedResources(...) {
[...]
for namespace, items := range resourceList.ItemsByNamespace {
[...]
collectedResource := restoreResource{}
for _, item := range items {
[...]
// store item in a data structure
collectedResource.itemsByNamespace[originalNamespace] = append(collectedResource.itemsByNamespace[originalNamespace], item)
}
}
collectedResources.append(collectedResources, collectedResource)
}
return collectedResources
}
func (ctx *context) execute(...) {
[...]
// get all items
resources := ctx.getOrderedResourceCollection(...)
for _, resource := range resources {
[...]
for _, items := range resource.itemsByNamespace {
[...]
for _, item := range items {
[...]
// restore the item
w, e := restoreItem(...)
}
}
}
[...]
}
```
We introduce two new structs to hold the collected items:
```go
type restoreResource struct {
resource string
itemsByNamespace map[string][]restoreItem
totalItems int
}
type restoreItem struct {
targetNamespace string
name string
}
```
Each group resource is represented by `restoreResource`. The map `itemsByNamespace` is indexed by `originalNamespace`, and the values are lists of items in the original namespace. `totalItems` is simply the count of all items present in the nested map of namespaces and items. It is updated every time an item is added to the map. Each item, represented by `restoreItem`, has a `name` and the resolved `targetNamespace`.
### Calculating progress
The total number of items can be calculated by simply adding the number of total items present in the map of all resources.
```go
totalItems := 0
for _, resource := range collectedResources {
totalItems += resource.totalItems
}
```
The additional items returned by the plugins will still be discovered at the time of plugin execution. The number of `totalItems` will be adjusted to include such additional items. As a result, the number of total items is expected to change whenever plugins execute:
```go
i := 0
for _, resource := range resources {
[...]
for _, items := range resource.itemsByNamespace {
[...]
for _, item := range items {
[...]
// restore the item
w, e := restoreItem(...)
i++
// calculate the actual count of resources
actualTotalItems := len(ctx.restoredItems) + (totalItems - i)
}
}
}
```
### Updating progress
The updates to the `progress` field in the CR can be sent on a channel as soon as an item is restored. A goroutine receiving update on that channel can make an `Update()` call to update the _Restore_ CR. This will require us to pass an instance of `RestoresGetter` to the `kubernetesRestorer` struct.
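For illustration, a sketch of that pattern follows; `patchRestoreProgress` and the `log` variable are hypothetical stand-ins for the `RestoresGetter`-based update call and the restore logger, and `velerov1api.RestoreProgress` refers to the struct proposed above.

```go
// Each restored item sends a progress value on the channel; a single goroutine
// applies it to the Restore CR so the main restore loop never blocks on the API server.
update := make(chan velerov1api.RestoreProgress)

go func() {
	for progress := range update {
		// patchRestoreProgress is a hypothetical helper wrapping the
		// RestoresGetter client call that updates status.progress.
		if err := patchRestoreProgress(restore, progress); err != nil {
			log.WithError(err).Warn("unable to update restore progress")
		}
	}
}()

// ...inside the main loop, after each restoreItem() call:
update <- velerov1api.RestoreProgress{
	TotalItems:    actualTotalItems,
	ItemsRestored: len(ctx.restoredItems),
}
```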
## Alternatives Considered
As an alternative, we have considered an approach which doesn't divide the restore process in two steps.
With that approach, the total number of items will be read from the Backup CR. We will keep three counters, `totalItems`, `skippedItems` and `restoredItems`:
```yml
status:
phase: InProgress
progress:
totalItems: 100
skippedItems: 20
restoredItems: 79
```
This approach doesn't require us to find the number of total items beforehand.
## Security Considerations
Omitted
## Compatibility
Omitted
## Implementation
TBD
## Open Issues
https://github.com/vmware-tanzu/velero/issues/21


@@ -0,0 +1,207 @@
# Restore API Group Version by Priority Level When EnableAPIGroupVersions Feature is Set
Status: Draft
## Abstract
This document proposes a solution to select an API group version to restore from the versions backed up using the feature flag EnableAPIGroupVersions.
## Background
It is possible that between the time a backup has been made and a restore occurs, the target Kubernetes version has incremented more than one version. In such a case, where at least one version of Kubernetes was skipped, the preferred source cluster's API group versions for resources may no longer be supported by the target cluster. With [PR#2373](https://github.com/vmware-tanzu/velero/pull/2373), all supported API group versions were backed up if the EnableAPIGroupVersions feature flag was set for Velero. The next step (outlined by this design proposal) will be to see if any of the backed up versions are supported in the target cluster and, if so, choose one to restore for each backed up resource.
## Goals
- Choose an API group to restore from backups given a priority system or a user-provided prioritization of versions.
- Restore resources using the chosen API group version.
## Non Goals
- Allow users to restore onto a cluster that is running a Kubernetes version older than the source cluster. The changes proposed here only allow for skipping ahead to a newer Kubernetes version, but not going backward.
- Allow restoring from backups created using Velero version 1.3 or older. This proposal will only work on backups created using Velero 1.4+.
- Modifying the compressed backup tarball files. We don't want to risk corrupting the backups.
- Using plugins to restore a resource when the target supports none of the source cluster's API group versions. The ability to use plugins will hopefully be something added in the future, but not at this time.
## High-Level Design
During restore, the proposal is that Velero will determine if the `APIGroupVersionsFeatureFlag` was enabled in the target cluster and `Status.FormatVersion 1.1.0` was used during backup. Only if these two conditions are met will the changes proposed here take effect.
The proposed code starts with creating three lists for each backed up resource. The three lists will be created by
(1) reading the directory names in the backup tarball file and seeing which API group versions were backed up from the source cluster,
(2) looking at the target cluster and determining which API group versions are supported, and
(3) getting config maps from the target cluster in order to get user-defined prioritization of versions.
The three lists will be used to create a map of chosen versions for each resource to restore. If there is a user-defined list of priority versions, the versions will be checked against the supported versions lists. The highest user-defined priority version that is/was supported by both target and source clusters will be the chosen version for that resource. If none of the user-specified versions is supported by either the target or the source, the versions will be logged and the restore will continue with the other prioritizations.
Without a user-defined prioritization of versions, the following version prioritization will be followed, starting from the highest priority: target cluster preferred version, source cluster preferred version, and a common supported version. Should there be multiple common supported versions, the one that will be chosen will be based on the [Kubernetes version priorities](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-priority).
Once the version to restore is chosen, the file path to the backed up resource in the tarball will be modified such that it points to the resources' chosen API group version. If no version is found in common between the source and target clusters, the chosen version will default to the source cluster's preferred version (the version being restored currently without the changes proposed here). Restore will be allowed to continue as before.
## Detailed Design
There are six objectives to achieve the above stated goals:
1. Determine if the APIGroupVersionsFeatureFlag is enabled and Backup Objects use Status.FormatVersion 1.1.0.
1. List the backed up API group versions.
1. List the API group versions supported by the target cluster.
1. Get the user-defined version priorities.
1. Use a priority system to determine which version to restore. The source preferred version will be the default if the priorities fail.
1. Modify the paths to the backup files in the tarball in the resource restore process.
### Objective 1: Determine if the APIGroupVersionsFeatureFlag is enabled and Backup Objects use Status.FormatVersion 1.1.0
For restore to be able to choose from multiple supported backed up versions, the feature flag must have been enabled during the restore processes. Backup objects must also have [Status.FormatVersion == "1.1.0"](https://github.com/vmware-tanzu/velero/blob/a1e182e723a8c5f6d4175d8db2361233a94d2502/pkg/backup/backup.go#L58).
The reason for checking for the feature flag during restore is to ensure the user would like to restore a version that might not be the source cluster preferred version. This check is done via `features.IsEnabled(velerov1api.APIGroupVersionsFeatureFlag)`.
The reason for checking `Status.FormatVersion` is to ensure the changes made by this proposed design are backward compatible. Only with Velero version 1.4 and forward was Format Version 1.1.0 used to structure the backup directories. Format Version 1.1.0 is required for the restore process proposed in this design doc to work. Before v1.4, the backed up files were in a directory structure that will not be recognized by the proposed code changes. In this case, restore should not attempt to restore from multiple versions as they will not exist.
The [`Status.FormatVersion`](https://github.com/vmware-tanzu/velero/blob/6808acd92e30848056a21faf373af03ddb8a3b71/pkg/apis/velero/v1/backup.go#L235) is stored in a `restoreContext` struct field called [`backup`](https://github.com/vmware-tanzu/velero/blob/6808acd92e30848056a21faf373af03ddb8a3b71/pkg/restore/restore.go#L229). The full chain is `ctx.backup.Status.FormatVersion`.
The above two checks can be done inside a new method on the `*restoreContext` object with the method signature `meetsAPIGVRestoreReqs() bool`. This method can remain in the `restore` package, but for organizational purposes, it can be moved to a file called `prioritize_group_version.go`.
### Objective 2: List the backed up API group versions
Currently, in `pkg/restore/restore.go`, in the `execute(...)` method, around [line 363](https://github.com/vmware-tanzu/velero/blob/7a103b9eda878769018386ecae78da4e4f8dde83/pkg/restore/restore.go#L363), the resources and their backed up items are saved in a map called `backupResources`.
At this point, the feature flag and format versions can be checked (described in Objective #1). If the requirements are met, the `backupResources` map can be sent to a method (to be created) with the signature `ctx.chooseAPIVersionsToRestore(backupResources)`. The `ctx` object has the type `*restore.Context`.
The `chooseAPIVersionsToRestore` method can remain in the `restore` package, but for organizational purposes, it can be moved to a file called `prioritize_group_version.go`.
Inside the `chooseAPIVersionsToRestore` method, we can take advantage of the `archive` package's `Parser` type and a new method, `ParseGroupVersions(backupDir string) (map[string]metav1.APIGroup, error)`. The `ParseGroupVersions(...)` method will loop through the `resources`, `resource.group`, and group version directories to populate a map called `sourceRGVersions`.
The `sourceRGVersions` map's keys will be strings in the format `<resource>.<group>`, e.g. "horizontalpodautoscalers.autoscaling". The values will be APIGroup structs. The API Group struct can be imported from k8s.io/apimachinery/pkg/apis/meta/v1. Order the APIGroup.Versions slices using a sort function copied from `k8s.io/apimachinery/pkg/version`.
```go
sort.SliceStable(gvs, func(i, j int) bool {
return version.CompareKubeAwareVersionStrings(gvs[i].Version, gvs[j].Version) > 0
})
```
### Objective 3: List the API group versions supported by the target cluster
Still within the `chooseAPIVersionsToRestore` method, the target cluster's resource group versions can now be obtained.
```go
targetRGVersions := ctx.discoveryHelper.APIGroups()
```
Order the APIGroup.Versions slices using a sort function copied from `k8s.io/apimachinery/pkg/version`.
```go
sort.SliceStable(gvs, func(i, j int) bool {
return version.CompareKubeAwareVersionStrings(gvs[i].Version, gvs[j].Version) > 0
})
```
### Objective 4: Get the user-defined version priorities
Still within the `chooseAPIVersionsToRestore` method, the user-defined version priorities can be retrieved. These priorities are expected to be in a config map named `enableapigroupversions` in the `velero` namespace. An example config map is
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: enableapigroupversions
  namespace: velero
data:
  restoreResourcesVersionPriority: |-
    rockbands.music.example.io=v2beta1,v2beta2
    orchestras.music.example.io=v2,v3alpha1
    subscriptions.operators.coreos.com=v2,v1
```
In the config map, the resources and groups and the user-defined version priorities will be listed in the `data.restoreResourcesVersionPriority` field using the general format `<resource>.<group>=<version 1>[, <version n> ...]`.
A map will be created to store the user-defined priority versions. The map's keys will be strings in the format `<resource>.<group>`. The values will be APIGroup structs that will be imported from `k8s.io/apimachinery/pkg/apis/meta/v1`. Within the APIGroup structs will be versions in the order that the user provides in the config map. The PreferredVersion field in APIGroup struct will be left empty.
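A minimal sketch of turning the config map's `restoreResourcesVersionPriority` text into that map follows; the function name and its placement in the `restore` package are assumptions, and malformed lines are simply skipped here.

```go
package restore

import (
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// parseUserVersionPriorities turns the config map text described above into a map
// keyed by "<resource>.<group>", preserving the user's version ordering.
func parseUserVersionPriorities(data string) map[string]metav1.APIGroup {
	userGroupVersions := map[string]metav1.APIGroup{}

	for _, line := range strings.Split(strings.TrimSpace(data), "\n") {
		parts := strings.SplitN(strings.TrimSpace(line), "=", 2)
		if len(parts) != 2 {
			continue
		}
		resourceGroup := parts[0] // e.g. "rockbands.music.example.io"

		var versions []metav1.GroupVersionForDiscovery
		for _, v := range strings.Split(parts[1], ",") {
			versions = append(versions, metav1.GroupVersionForDiscovery{Version: strings.TrimSpace(v)})
		}

		// PreferredVersion is deliberately left empty; only the user's ordering matters.
		userGroupVersions[resourceGroup] = metav1.APIGroup{Versions: versions}
	}

	return userGroupVersions
}
```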
### Objective 5: Use a priority system to determine which version to restore. The source preferred version will be the default if the priorities fail
Determining the priority will also be done in the `chooseAPIVersionsToRestore` method. Once a version is chosen, it will be stored in a new map of the form `map[string]ChosenGroupVersion`, where the key is the `<resource>.<group>` and the values are of the `ChosenGroupVersion` struct type (shown below). The map will be saved to the `restore.Context` object in a field called `chosenGrpVersToRestore`.
```go
type ChosenGroupVersion struct {
Group string
Version string
Dir string
}
```
The first method called will be `ctx.gatherSTUVersions()` and it will gather the source cluster group resource and versions (`sgvs`), target cluster group versions (`tgvs`), and custom user resource and group versions (`ugvs`).
Loop through the source cluster resource and group versions (`sgvs`). Find the versions for the group in the target cluster.
An attempt will first be made to `findSupportedUserVersion`. Loop through the resource.groups in the custom user resource and group versions (`ugvs`) map. If a version is supported by both `tgvs` and `sgvs`, that will be set as the chosen version for the corresponding resource in `ctx.chosenGrpVersToRestore`
If no three-way match can be made between the versions in `ugvs`, `tgvs`, and `sgvs`, move on to attempting to use the target cluster preferred version. Loop through the `sgvs` versions for the resource and see if any of them match the first item in the `tgvs` version list. Because the versions in `tgvs` have been ordered, the first version in the version slice will be the preferred version.
If target preferred version cannot be used, attempt to choose the source cluster preferred version. Loop through the target versions and see if any of them match the first item in the source version slice, which will be the preferred version due to Kubernetes version ordering.
If neither clusters' preferred version can be used, look through remaining versions in the target version list and see if there is a match with the remaining versions in the source versions list.
If none of the previous checks produce a chosen version, the source preferred version will be the default and the restore process will continue.
Here is another way to list the priority versions described above:
- **Priority 0** ((User override). Users determine restore version priority using a config map
- **Priority 1**. Target preferred version can be used.
- **Priority 2**. Source preferred version can be used.
- **Priority 3**. A common supported version can be used. This means
- target supported version == source supported version
- if multiple supported versions intersect, choose the version using the [Kubernetes version prioritization system](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-priority)
If there is no common supported version between the target and source clusters, then the default `ChosenGroupVersion` will be the source preferred version. This is the version that would have been assumed for restore before the changes proposed here.
Note that adding a field to `restore.Context` will mean having to make a map for the field during instantiation.
To see example cases with version priorities, see a blog post written by Rafael Brito: https://github.com/brito-rafa/k8s-webhooks/tree/master/examples-for-projectvelero.
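To make the selection order concrete, here is a minimal, self-contained sketch operating on version lists for a single `<resource>.<group>`; it assumes each slice is already sorted Kube-aware, so index 0 is that cluster's preferred version, and the function and helper names are illustrative only.

```go
// chooseVersion applies the priority order described above and falls back to the
// source preferred version (today's behaviour) when nothing else matches.
func chooseVersion(userVersions, targetVersions, sourceVersions []string) string {
	// Priority 0: highest user-listed version supported by both clusters.
	for _, v := range userVersions {
		if contains(targetVersions, v) && contains(sourceVersions, v) {
			return v
		}
	}
	// Priority 1: target preferred version, if the source also backed it up.
	if len(targetVersions) > 0 && contains(sourceVersions, targetVersions[0]) {
		return targetVersions[0]
	}
	// Priority 2: source preferred version, if the target also supports it.
	if len(sourceVersions) > 0 && contains(targetVersions, sourceVersions[0]) {
		return sourceVersions[0]
	}
	// Priority 3: highest remaining version supported by both clusters.
	for _, v := range targetVersions {
		if contains(sourceVersions, v) {
			return v
		}
	}
	// Default: the source preferred version, i.e. current behaviour.
	if len(sourceVersions) > 0 {
		return sourceVersions[0]
	}
	return ""
}

func contains(versions []string, v string) bool {
	for _, x := range versions {
		if x == v {
			return true
		}
	}
	return false
}
```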
### Objective 6: Modify the paths to the backup files in the tarball
The method doing the bulk of the restoration work is `ctx.restoreResource(...)`. Inside this method, around [line 714](https://github.com/vmware-tanzu/velero/blob/7a103b9eda878769018386ecae78da4e4f8dde83/pkg/restore/restore.go#L714) in `pkg/restore/restore.go`, the path to backup json file for the item being restored is set.
After the groupResource is instantiated at pkg/restore/restore.go:733, and before the `for` loop that ranges through the `items`, the `ctx.chosenGrpVersToRestore` map can be checked. If the groupResource exists in the map, the path saved to the `resource` variable can be updated.
Currently, the item paths look something like
```bash
/var/folders/zj/vc4ln5h14djg9svz7x_t1d0r0000gq/T/620385697/resources/horizontalpodautoscalers.autoscaling/namespaces/myexample/php-apache-autoscaler.json
```
This proposal will have the path changed to something like
```bash
/var/folders/zj/vc4ln5h14djg9svz7x_t1d0r0000gq/T/620385697/resources/horizontalpodautoscalers.autoscaling/v2beta2/namespaces/myexample/php-apache-autoscaler.json
```
The `horizontalpodautoscalers.autoscaling` part of the path will be updated to `horizontalpodautoscalers.autoscaling/v2beta2` using
```go
if version, ok := ctx.chosenGrpVersToRestore[groupResource.String()]; ok {
	resource = filepath.Join(groupResource.String(), version.Dir)
}
```
The restore can now proceed as normal.
## Alternatives Considered
- Look for plugins if no common supported API group version could be found between the target and source clusters. We had considered searching for plugins that could convert an outdated resource to one supported in the target cluster, but this would be difficult and time-consuming, and it currently would not be useful because we are not aware of any such plugins. It is better to keep the initial changes simple, see how they work out, and progress to more complex solutions as demand necessitates.
- We also considered modifying the backed-up JSON files so that the resources' API versions would be supported by the target cluster, but modifying backups is discouraged for several reasons, including the risk of introducing data corruption.
## Security Considerations
I can't think of any additional risks in terms of Velero security here.
## Compatibility
I have made it such that the code changes will only affect Velero installations that have the `APIGroupVersionsFeatureFlag` enabled during restore and that used Format Version 1.1.0 during backup. If either of these requirements is not met, the changes will have no effect on the restore process, making the changes here entirely backward compatible.
## Implementation
This first draft of the proposal will be submitted Oct. 30, 2020. Once this proposal is approved, I can have the code and unit tests written within a week and submit a PR that fixes Issue #2551.
## Open Issues
At the time of writing this design proposal, I had not seen any of @jenting's work for solving Issue #2551. He had independently covered the first two priorities I mentioned above before I was even aware of the issue. I hope to not let his efforts go to waste and welcome incorporating his ideas here to make this design proposal better.

design/secrets.md Normal file

@@ -0,0 +1,135 @@
# Support for multiple provider credentials
Currently, Velero only supports a single credential secret per location provider/plugin. Velero creates and stores the plugin credential secret under the hard-coded key `secret.cloud-credentials.data.cloud`.
As a result, switching from one plugin to another requires overwriting the existing credential secret with the one appropriate for the new plugin.
## Goals
- To allow Velero to create and store multiple secrets for provider credentials, even multiple credentials for the same provider
- To improve the UX for configuring the velero deployment with multiple plugins/providers.
## Non Goals
- To make any change except what's necessary to handle multiple credentials
- To allow multiple credentials for, or change the UX of, node-based authentication (e.g. AWS IAM, GCP Workload Identity, Azure AAD Pod Identity).
## Design overview
Instead of one credential per Velero deployment, multiple credentials can be added and used with different BackupStorageLocations (BSLs) and VolumeSnapshotLocations (VSLs).
There are two aspects to handling multiple credentials:
- Modifying how credentials are configured and specified by the user
- Modifying how credentials are provided to the plugin processes
Each of these aspects will be discussed in turn.
### Credential configuration
Currently, Velero creates a secret (`cloud-credentials`) during install with a single entry that contains the contents of the credentials file passed by the user.
Instead of adding new CLI options to Velero to create and manage credentials, users will create their own Kubernetes secrets within the Velero namespace and reference these.
This approach is being chosen as it allows users to directly manage Kubernetes secrets objects as they wish and it removes the need for wrapper functions to be created within Velero to manage the creation of secrets.
An initial approach to this problem included modifying the existing `cloud-credentials` secret to add a new entry with each new set of credentials.
It is likely that this approach would encounter problems as users added more credentials, since the maximum size of a Secret in Kubernetes is 1MB.
By allowing users to create Secrets as they need to, we remove these potential limitations.
To enable the use of existing Kubernetes secrets, BSLs and VSLs will be modified to have a new field `Credential`.
This field will be a [`SecretKeySelector`](https://godoc.org/k8s.io/api/core/v1#SecretKeySelector) which will enable the user to specify which key within a particular secret the BSL/VSL should use.
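As a rough sketch (the exact field placement and Go types are assumptions made here for illustration), the BSL spec could gain the field as follows, with the VSL spec receiving the same addition:
```go
// Sketch only: excerpt of the BackupStorageLocationSpec with the proposed
// Credential field. corev1api is "k8s.io/api/core/v1".
type BackupStorageLocationSpec struct {
	// ...existing fields elided...

	// Credential identifies the Secret key containing the credentials to use
	// for this location. If nil, the credentials mounted into the Velero
	// deployment are used, as today.
	// +optional
	Credential *corev1api.SecretKeySelector `json:"credential,omitempty"`
}
```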
The CLI for managing BSLs and VSLs will be modified to allow the user to set these credentials.
Both `velero backup-location (create|set)` and `velero snapshot-location (create|set)` will have a new flag (`--credential`) to specify the secret and key within the secret to use.
This flag will take a key-value pair in the format `<secret-name>=<key-in-secret>`.
The arguments will be validated to ensure that the secret exists in the Velero namespace.
### Making credentials available to plugins
There are three different approaches that can be taken to provide credentials to plugin processes:
1. Providing the path to the credentials file as an environment variable per plugin. This is how credentials are currently passed.
1. Include the path to the credentials file in the `config` map passed to a plugin.
1. Include the details of the secret in the `config` map passed to a plugin.
The last two options require changes to the plugin as the plugin will need to instantiate a client using the provided credentials.
The client libraries used by the plugins will not be able to rely on the credentials details being available in the environment as they currently do.
We have selected option 2 as the approach to take which will be described below.
### Including the credentials file path in the `config` map
Prior to using any secret for a BSL or VSL, it will need to be serialized to disk.
Using the details in the `Credential` field in the BSL/VSL, the contents of the Secret will be read and serialized.
To achieve this, we will create a new package, `credentials`, which will introduce new types and functions to manage the fetching of credentials based on a `SecretKeySelector`.
This will also be responsible for serializing the fetched credentials to a temporary directory on the Velero pod filesystem.
The path where a set of credentials will be written to will be a fixed path based on the namespace, name, and key from the secret rather than a randomly named file as is usual with temporary files.
The reason for this is that `BackupStore`s are frequently created within the controllers and the credentials must be serialized before any plugin APIs are called; with randomly named files, this would result in a quick accumulation of temporary credentials files.
For example, the default validation frequency for BackupStorageLocations is one minute.
This means that any time a `BackupStore`, or another type which requires credentials, is created, the credentials will be fetched from the API server and will overwrite any previously serialized copy of that credential.
If we instead wanted to use a unique file each time, we could work around the problem of multiple files being written by cleaning up the temporary files upon completion of the plugin operations, if that information is known.
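A minimal sketch of the serialization helper in the new `credentials` package is shown below; the function name `WriteCredentialsFile`, its signature, and the exact file-naming scheme are illustrative assumptions rather than decisions made by this design:
```go
package credentials

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1api "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WriteCredentialsFile fetches the key named by the selector from the secret
// in the given namespace and writes its contents to a deterministic path
// under tempDir, overwriting any copy left by a previous fetch.
func WriteCredentialsFile(ctx context.Context, client kubernetes.Interface, namespace string, selector *corev1api.SecretKeySelector, tempDir string) (string, error) {
	secret, err := client.CoreV1().Secrets(namespace).Get(ctx, selector.Name, metav1.GetOptions{})
	if err != nil {
		return "", err
	}

	data, ok := secret.Data[selector.Key]
	if !ok {
		return "", fmt.Errorf("key %q not found in secret %q", selector.Key, selector.Name)
	}

	// Fixed name (namespace-name-key) so repeated fetches overwrite the same
	// file instead of accumulating temporary files.
	path := filepath.Join(tempDir, fmt.Sprintf("%s-%s-%s", namespace, selector.Name, selector.Key))
	if err := os.WriteFile(path, data, 0o600); err != nil {
		return "", err
	}

	return path, nil
}
```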
Once the credentials have been serialized, this path will be made available to the plugins.
Instead of setting the necessary environment variable for the plugin process, the `config` map for the BSL/VSL will be modified to include an additional entry with the path to the credentials file: `credentialsFile`.
This will be passed through when [initializing the BSL/VSL](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/velero/object_store.go#L27-L30) and it will be the responsibility of the plugin to use the passed credentials when starting a session.
For an example of how this would affect the AWS plugin, see [this PR](https://github.com/vmware-tanzu/velero-plugin-for-aws/pull/69).
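For illustration only, a plugin's `Init` method might consume the new entry roughly as follows; the `credentialsFile` key comes from this proposal, while the `ObjectStore` type, the AWS environment variable, and the rest of the setup are assumptions specific to an AWS-style plugin:
```go
import "os"

// ObjectStore is a stand-in for a plugin's object store implementation.
type ObjectStore struct{}

// Init sketches how a plugin could use the credentialsFile config entry.
func (o *ObjectStore) Init(config map[string]string) error {
	if credsFile := config["credentialsFile"]; credsFile != "" {
		// Point the SDK at the serialized per-BSL credentials instead of the
		// pod-wide secret mount.
		if err := os.Setenv("AWS_SHARED_CREDENTIALS_FILE", credsFile); err != nil {
			return err
		}
	}
	// ...existing session/client setup continues here...
	return nil
}
```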
The restic controllers will also need to be updated to use the correct credentials.
The BackupStorageLocation for a given PVB/PVR will be fetched and the `Credential` field from that BSL will be serialized.
The existing setup for the restic commands uses the credentials from the environment variables, with [some repo provider specific overrides](https://github.com/vmware-tanzu/velero/blob/main/pkg/controller/pod_volume_backup_controller.go#L260-L273).
Instead of relying on the existing environment variables, if there are credentials for a particular BSL, the environment will be created specifically for each `RepoIdentifier`.
This will reuse much of the existing logic, except that it will be modified to work with a serialized secret rather than finding the secret file from an environment variable.
Currently, GCP is the only provider that relies on the existing environment variables with no specific overrides.
For GCP, the environment variable will be overwritten with the path of the serialized secret.
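As a hedged sketch with assumed names, the per-BSL environment for a GCP restic command could be built roughly as follows; the real change would extend the existing provider-specific overrides linked above:
```go
import "os"

// resticEnv returns the environment for a restic command, overriding the
// pod-level GCP variable with the per-BSL serialized credentials file when
// one exists.
func resticEnv(credsFile string) []string {
	env := os.Environ()
	if credsFile != "" {
		env = append(env, "GOOGLE_APPLICATION_CREDENTIALS="+credsFile)
	}
	return env
}
```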
## Backwards compatibility
For now, regardless of the approaches used above, we will still support the existing workflow.
Users will be able to set credentials during install and a secret will be created for them.
This secret will still be mounted into the Velero pods and the appropriate environment variables set.
This will allow users to use versions of plugins which haven't yet been updated to use credentials directly, such as with many community created plugins.
Multiple credential handling will only be used in the case where a particular BSL/VSL has been modified to use an existing secret.
## Security Considerations
Although the handling of secrets will be similar to how credentials are currently managed within Velero, care must be taken to ensure that any new code does not leak the contents of secrets, for example, including them within logs.
## Alternatives Considered
As mentioned above, there were three potential approaches for providing this support.
The approaches that were not selected are detailed below for reference.
#### Providing the credentials via environment variables
To continue to provide the credentials via the environment, plugins will need to be invoked differently so that the correct credential is used.
Currently, there is a single secret, which is mounted into every pod deployed by Velero (the Velero Deployment and the Restic DaemonSet) at the path `/credentials/cloud`.
This path is made known to all plugins through provider specific environment variables and all possible provider environment variables are set to this path.
Instead of setting the environment variables for all the pods, we can modify how plugin processes are created so that the environment variables are set on a per plugin process basis.
Prior to using any secret for a BSL or VSL, it will need to be serialized to disk.
Using the details in the `Credential` field in the BSL/VSL, the contents of the Secret will be read and serialized to a file.
Each plugin process would still have the same set of environment variables set; however, the value used for each of these variables would instead be the path to the serialized secret.
To set the environment variables for a plugin process, the plugin manager must be modified so that when creating an ObjectStore or VolumeSnapshotter, we pass in the entire BSL/VSL object, rather than [just the provider](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/clientmgmt/manager.go#L132-L158).
The plugin manager currently stores a map of [plugin executables to an associated `RestartableProcess`](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/clientmgmt/manager.go#L59-L70).
New restartable processes are created only [with the executable that the process would run](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/clientmgmt/manager.go#L122).
This could be modified to also take the necessary environment variables so that when [underlying go-plugin process is created](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/clientmgmt/client_builder.go#L78), these environment variables could be provided and would be set on the plugin process.
Taking this approach would not require any changes from plugins as the credentials information would be made available to them in the same way.
However, it is quite a significant change in how we initialize and invoke plugins.
We would also need to ensure that the restic controllers are updated in the same way so that correct credentials are used (when creating a `ResticRepository` or processing `PodVolumeBackup`/`PodVolumeRestore`).
This could be achieved by modifying the existing function to [run a restic command](https://github.com/vmware-tanzu/velero/blob/main/pkg/restic/repository_manager.go#L237-L290).
This function already sets environment variables for the restic process depending on which storage provider is being used.
#### Include the details of the secret in `config` map passed to a plugin
This approach is like the selected approach of passing the credentials file via the `config` map; however, instead of the Velero process being responsible for serializing the file to disk prior to invoking the plugin, the `Credential` field's `SecretKeySelector` details will be passed through to the plugin.
It will be the responsibility of the plugin to fetch the secret from the Kubernetes API and perform the necessary steps to make it available for use when creating a session, for example, serializing the contents to disk, or evaluating the contents and adding to the process environment.
This approach has an additional burden on the plugin author over the previous approach as it requires the author to create a client to communicate with the Kubernetes API to retrieve the secret.
Although it would be the responsibility of the plugin to serialize the credential and use it directly, Velero would still be responsible for serializing the secret so that it could be used with the restic controllers as in the selected approach.


@@ -0,0 +1,251 @@
# Wait for AdditionalItems to be ready on Restore
When a velero `RestoreItemAction` plugin returns a list of resources
via `AdditionalItems`, velero restores these resources before
restoring the current resource. There is a race condition here, as it
is possible that after running the restore on these returned items,
the current item's restore might execute before the additional items
are available. Depending on the nature of the dependency between the
current item and the additional items, this could cause the restore of
the current item to fail.
## Goals
- Enable Velero to ensure that Additional items returned by a restore
plugin's `Execute` func are ready before restoring the current item
- Augment the RestoreItemAction plugin interface to allow the plugins
to determine when an additional item is ready, since doing so
requires knowledge specific to the resource type.
## Background
Because Velero does not wait after restoring additional items to
restore the current item, in some cases the current item restore will
fail if the additional items are not yet ready. Velero (and the
`RestoreItemAction` plugins) need to implement this "wait until ready"
functionality.
## High-Level Design
After each `RestoreItemAction` `Execute()` call (and following the restore
of any returned additional items), we need to wait for these returned
additional items to be ready before restoring the current item. In
order to do this, we also need to extend the `RestoreItemActionExecuteOutput`
struct to allow the plugin which returned additional items to
determine whether they are ready.
## Detailed Design
### `restoreItem` Changes
When each `RestoreItemAction` `Execute()` call returns, the
`RestoreItemActionExecuteOutput` struct contains a slice of
`AdditionalItems` which must be restored before this item can be
restored. After restoring these items, Velero needs to be able to wait
for them to be ready before moving on to the next item. Right after
looping over the additional items at
https://github.com/vmware-tanzu/velero/blob/main/pkg/restore/restore.go#L960-L991
we still have a reference to the additional items (`GroupResource` and
namespaced name), as well as a reference to the `RestoreItemAction`
plugin which required it.
At this point, if the `RestoreItemActionExecuteOutput` includes a
non-nil `AdditionalItemsReadyFunc`, we need to call a func similar to
`crdAvailable` (see
https://github.com/vmware-tanzu/velero/blob/main/pkg/restore/restore.go#L623),
which we will call `itemsAvailable`. This func should also be defined
within restore.go.
Instead of the one minute CRD timeout, we'll use a timeout specific to
waiting for additional items. There will be a new field added to
serverConfig, `additionalItemsReadyTimeout`, with a
`defaultAdditionalItemsReadyTimeout` const set to 10 minutes. In addition,
each plugin will be able to define an override for the global
server-level value, which will be added as another optional field in
the `RestoreItemActionExecuteOutput` struct. Instead of the
`IsUnstructuredCRDReady` call, we'll call the returned
`AdditionalItemsReadyFunc` passing in the same `AdditionalItems` slice
as an argument (with items which failed to restore filtered out). If
this func returns an error, then `itemsAvailable` will
propagate the error, and `restoreItem` will handle it the same way it
handles an error return on restoring an additional item. If the
timeout is reached without ready returning true, velero will continue
on to attempt restore of the current item.
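For concreteness, a minimal sketch of `itemsAvailable` is shown below, following the polling pattern used by `crdAvailable`. The exact signature, the poll interval, and how the plugin's ready func and timeout reach this helper are assumptions, not decisions made here.
```
import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"

	api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
)

// itemsAvailable polls the plugin-provided ready func until it reports that
// the additional items are ready, an error occurs, or the timeout elapses.
func itemsAvailable(restore *api.Restore, readyFunc func(*api.Restore, []velero.ResourceIdentifier) (bool, error), items []velero.ResourceIdentifier, timeout time.Duration) (bool, error) {
	var available bool
	err := wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		ready, err := readyFunc(restore, items)
		if err != nil {
			// Stop polling; restoreItem handles the error like a failed
			// additional-item restore.
			return true, err
		}
		available = ready
		return ready, nil
	})
	if err == wait.ErrWaitTimeout {
		// A timeout is not fatal: restore of the current item proceeds anyway.
		err = nil
	}
	return available, err
}
```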
### `RestoreItemActionExecuteOutput` changes
Two new fields will be added to `RestoreItemActionExecuteOutput`, both
optional. `AdditionalItemsReadyTimeout`, if specified, will override
`serverConfig.additionalItemsReadyTimeout`. If the new func field
`AdditionalItemsReadyFunc` is non-nil, then `restoreItem` will call
`itemsAvailable` which will invoke the plugin func
`AdditionalItemsReadyFunc` and wait until the func returns true or the
timeout is reached. If `AdditionalItemsReadyFunc` is nil (the default
case), then current velero behavior will be followed. Existing plugins
which do not need to signal to wait for `AdditionalItems` won't need
to change their `Execute()` functions.
In addition, a new func, `WithItemsWait(readyFunc)`, will
be added to `RestoreItemActionExecuteOutput`, similar to
`WithoutRestore()`, which will set `AdditionalItemsReadyFunc` to
`readyFunc`. This will allow a plugin to include waiting for
AdditionalItems like this:
```
func AreItemsReady(restore *api.Restore, additionalItems []ResourceIdentifier) (bool, error) {
	...
	return true, nil
}

func (p *RestorePlugin) Execute(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
	...
	return velero.NewRestoreItemActionExecuteOutput(input.Item).WithItemsWait(AreItemsReady), nil
}
```
```
// RestoreItemActionExecuteOutput contains the output variables for the ItemAction's Execution function.
type RestoreItemActionExecuteOutput struct {
	// UpdatedItem is the item being restored mutated by ItemAction.
	UpdatedItem runtime.Unstructured
	// AdditionalItems is a list of additional related items that should
	// be restored.
	AdditionalItems []ResourceIdentifier
	// SkipRestore tells velero to stop executing further actions
	// on this item, and skip the restore step. When this field's
	// value is true, AdditionalItems will be ignored.
	SkipRestore bool
	// AdditionalItemsReadyFunc is a func which returns true if
	// the additionalItems passed into the func are
	// ready/available. A nil value for this func means that
	// velero will not wait for the items to be ready before
	// attempting to restore the current item.
	AdditionalItemsReadyFunc func(restore *api.Restore, additionalItems []ResourceIdentifier) (bool, error)
	// AdditionalItemsReadyTimeout will override serverConfig.additionalItemsReadyTimeout
	// if specified. This value specifies how long velero will wait
	// for additional items to be ready before moving on.
	AdditionalItemsReadyTimeout *time.Duration
}

// WithItemsWait sets AdditionalItemsReadyFunc on RestoreItemActionExecuteOutput.
func (r *RestoreItemActionExecuteOutput) WithItemsWait(
	readyFunc func(restore *api.Restore, additionalItems []ResourceIdentifier) (bool, error),
) *RestoreItemActionExecuteOutput {
	r.AdditionalItemsReadyFunc = readyFunc
	return r
}
```
### Earlier iteration (no longer the current implementation plan)
What follows is the first iteration of the design. Everything from
here is superseded by the content above. The options below require
either breaking backwards compatibility or dealing with runtime
casting and optional interfaces. Adding the func pointer to
`RestoreItemActionExecuteOutput` resolves the problem without
requiring either.
#### `RestoreItemActionExecuteOutput` changes
A new boolean field will be added to
`RestoreItemActionExecuteOutput`. If `WaitForAdditionalItems` is true,
then `restoreItem` will call `itemsAvailable` which will invoke the
plugin func `AreAdditionalItemsReady` and wait until the func returns
true or the timeout is reached. If `WaitForAdditionalItems` is false
(the default case), then current velero behavior will be
followed. Existing plugins which do not need to signal to wait for
`AdditionalItems` won't need to change their `Execute()` functions.
In addition, a new func, `WithItemsWait()` will be added to
`RestoreItemActionExecuteOutput` similar to `WithoutRestore()` which
will set the `WaitForAdditionalItems` bool to `true`.
```
// RestoreItemActionExecuteOutput contains the output variables for the ItemAction's Execution function.
type RestoreItemActionExecuteOutput struct {
	// UpdatedItem is the item being restored mutated by ItemAction.
	UpdatedItem runtime.Unstructured
	// AdditionalItems is a list of additional related items that should
	// be restored.
	AdditionalItems []ResourceIdentifier
	// SkipRestore tells velero to stop executing further actions
	// on this item, and skip the restore step. When this field's
	// value is true, AdditionalItems will be ignored.
	SkipRestore bool
	// WaitForAdditionalItems determines whether velero will wait
	// until AreAdditionalItemsReady returns true before restoring
	// this item. If this field's value is true, then after restoring
	// the returned AdditionalItems, velero will not restore this item
	// until AreAdditionalItemsReady returns true or the timeout is
	// reached. Otherwise, AreAdditionalItemsReady is not called.
	WaitForAdditionalItems bool
}
```
#### `RestoreItemAction` plugin interface changes
In order to implement the `AreAdditionalItemsReady` plugin func, there
are two different approaches we could take.
The first would be to simply add another entry to the
`RestoreItemAction` interface:
```
type RestoreItemAction interface {
	// AppliesTo returns information about which resources this action should be invoked for.
	// A RestoreItemAction's Execute function will only be invoked on items that match the returned
	// selector. A zero-valued ResourceSelector matches all resources.
	AppliesTo() (ResourceSelector, error)

	// Execute allows the ItemAction to perform arbitrary logic with the item being restored,
	// including mutating the item itself prior to restore. The item (unmodified or modified)
	// should be returned, along with an optional slice of ResourceIdentifiers specifying additional
	// related items that should be restored, a warning (which will be logged but will not prevent
	// the item from being restored) or error (which will be logged and will prevent the item
	// from being restored) if applicable.
	Execute(input *RestoreItemActionExecuteInput) (*RestoreItemActionExecuteOutput, error)

	// AreAdditionalItemsReady allows the ItemAction to communicate whether the passed-in
	// slice of AdditionalItems (previously returned by Execute())
	// are ready. Returns true if all items are ready, and false otherwise.
	AreAdditionalItemsReady(restore *api.Restore, additionalItems []ResourceIdentifier) (bool, error)
}
```
The downside of this approach is that it is not backwards compatible,
and every `RestoreItemAction` plugin will have to implement the new
func, simply to return `true` in most cases, since the plugin will
either never return `AdditionalItems` from Execute or not have any
special readiness requirements.
The alternative to this would be to define an additional interface for
the optional func, leaving the `RestoreItemAction` interface alone.
```
type RestoreItemActionReadyCheck interface {
	// AreAdditionalItemsReady allows the ItemAction to communicate whether the passed-in
	// slice of AdditionalItems (previously returned by Execute())
	// are ready. Returns true if all items are ready, and false otherwise.
	AreAdditionalItemsReady(restore *api.Restore, additionalItems []ResourceIdentifier) (bool, error)
}
```
In this case, existing plugins which do not need this functionality
can remain as-is, while plugins which want to make use of this
functionality will just need to implement the optional func. With the
optional interface approach, `itemsAvailable` will only wait if the
plugin can be type-asserted to the new interface:
```
if actionWithReadyCheck, ok := action.(RestoreItemActionReadyCheck); ok {
	// wait for ready/timeout
} else {
	return true, nil
}
```
