Bump up restic to v0.12.1 to fix CVE-2020-26160.
Bump up module "github.com/vmware-tanzu/crash-diagnostics" to v0.3.7 to fix CVE-2020-29652.
The "github.com/vmware-tanzu/crash-diagnostics" updates client-go to v0.22.2 which introduces several break changes, this commit updates the related codes as well
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
After the PR to implement `velero debug` (#4022) was reviewed, there were some
suggestions to let the command collect more resources; this commit makes
the changes to the design doc to reflect them.
It also removes some sections that are no longer relevant after the
enhancements `crashd` made in the v0.3.4 release.
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
* #4040 - documentation - adding more troubleshooting information during Restic restore
Signed-off-by: Rafael Brito <rbrito@vmware.com>
* #4040 - documentation - adding more troubleshooting information during Restic restore and minor changes
Signed-off-by: Rafael Brito <rbrito@vmware.com>
* #4040 - documentation - tweaks on restic page
Signed-off-by: Rafael Brito <rbrito@vmware.com>
If the "--snapshot-volumes=false" isn't specified explicitly, the vSphere plugin will always take snapshots for the volumes even though the "--default-volumes-to-restic" is specified
This can be removed if the logic of vSphere plugin changes
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
This commit makes several changes to `tag-release.sh` according to
changes in the release process:
1. It supports an "ON_RELEASE_BRANCH" param passed via an env variable.
When it's set to "TRUE", the release will be created on the commit of
a branch like `release-xxx`. This enables us to create the release
branch before GA and tag RC releases.
2. It removes the code to push a new branch to upstream, because we
decided to create branches manually. For patch releases, we will not
push changes to the release branch; instead, we will make sure the
release branch has all commits cherry-picked BEFORE we run this
script to tag the release.
After the change, the script focuses only on tagging the release, not
making other code changes to release branches.
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
In the upgrade test, both the original and the to-be-upgraded Velero installations should use compatible plugins, but currently the plugin value is determined by the provider.
Signed-off-by: danfengl <danfengl@vmware.com>
The errors of a restore/backup may be stored in object storage.
The well-formatted output of describe is also helpful for debugging.
This commit adds the command to the crashd script so the output of
"velero backup/restore describe xxx" can be collected.
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
This commit updates the `troubleshooting` section in the doc to ask
users to collect logs via `velero debug`.
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
* Add namespace validation in the client
Signed-off-by: F. Gold <fgold@vmware.com>
* Add namespace validation in the backup controller
Signed-off-by: F. Gold <fgold@vmware.com>
* Add changelog for PR 4057
Signed-off-by: F. Gold <fgold@vmware.com>
* Update Copyright notice
Signed-off-by: F. Gold <fgold@vmware.com>
* Update include_excludes_test.go to follow Go standards and be easier to read
Signed-off-by: F. Gold <fgold@vmware.com>
* Add unit tests for namespace validation functions
Signed-off-by: F. Gold <fgold@vmware.com>
* Make changes per review comments
- use one set of namespace validation logic instead of writing two
- remove duplicate namespace validation functions and tests
- add namespace validation tests in includes_excludes_test.go
Signed-off-by: F. Gold <fgold@vmware.com>
* Return all ns validation err msgs as error list
Signed-off-by: F. Gold <fgold@vmware.com>
* Make error message more clear
Signed-off-by: F. Gold <fgold@vmware.com>
Velero was including DownwardAPI volumes when backing up with restic.
When restoring these volumes, it triggered a known issue with restic (as
seen in #3863). Like projected volumes, these volumes should be skipped
as their contents are populated by the Kubernetes API server.
With this change, we are now skipping the restic backup of volumes with
a DownwardAPI source. We are also skipping the restore of any volume
that had a DownwardAPI source as there will exist backups that were
taken prior to this fix being introduced. This will allow these backups
to be restored successfully.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
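A minimal sketch of the backup-side filter this describes, assuming we iterate the pod's volumes before handing them to restic; the helper name and variables are illustrative, not Velero's exact code:

```go
import corev1 "k8s.io/api/core/v1"

// backupableVolumes returns only the volumes restic can safely back up,
// skipping sources populated by the Kubernetes API server (projected and
// DownwardAPI), whose restore triggers the known restic issue (#3863).
func backupableVolumes(pod *corev1.Pod) []string {
	var volumes []string
	for _, volume := range pod.Spec.Volumes {
		if volume.Projected != nil || volume.DownwardAPI != nil {
			continue
		}
		volumes = append(volumes, volume.Name)
	}
	return volumes
}
```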
This commit removes `IsUnstructuredCRDReady` since
kubernetes/kubernetes#87675 is fixed.
It uses `IsV1CRDReady` to check the readiness of CRDs.
After v1.7 we may consider merging the functions `IsV1Beta1CRDReady` and
`IsV1CRDReady`.
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
1. Support customizing the restic restore helper image
2. Use a separate context when doing the cleanup work
3. Wait a while before doing the restore for AWS to avoid #1799
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
1. Check the error when waiting for the restic daemonset to be ready, so
the timeout will be reported
2. Add support for the gcp provider and fail early if the provider is
unknown
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
This PR added a subcommand `velero debug`, which leverages `crashd` to
collect logs and specs of velero server components and bundle them in a
tarball.
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
Do this for two reasons:
1. Verify the functionality of CLI installation and uninstallation
2. We want to add an upgrade test case which needs to install different versions of Velero; calling libraries is impossible for this
fixes #4062
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
* Add document for TLS error 116
When using a custom S3-compatible server, backups/restores may fail with
TLS error 116. This happens because the S3 server expects Velero to
send a client certificate during the TLS v1.3 handshake.
You will need to modify your S3 server settings to turn off client
certificate authentication.
Signed-off-by: Himanshu Mehra <himanshu.mehra91@gmail.com>
* Address comments from reviewers
Signed-off-by: Himanshu Mehra <himanshu.mehra91@gmail.com>
Wait for the namespace deletion to complete before removing the CRDs when uninstalling Velero
Fixes #3974
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
a timestamp. If two requests were happening very close together for the
same backup, the second would fail randomly.
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
Fix the random failure by increasing the timeout and introducing a few minor refactors/bug fixes
Fixes #3970
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
Instead of converting the unstructured item to check for the presence of
the `kube-aggregator.kubernetes.io/automanaged` label, use this label in
the `AppliesTo` to enable the restore logic to select the item. This
means that any item that matches the selector will have restore skipped.
Also add a new test case to the restore action test to check that label
selectors are applied correctly.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
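A minimal sketch of the selector-based approach, assuming the Velero plugin framework's `ResourceSelector`; the plugin type name is illustrative:

```go
// AppliesTo lets Velero's restore logic hand this plugin only items
// matching the selector, so automanaged APIService objects are selected
// (and skipped) without converting each unstructured item.
func (a *APIServiceAction) AppliesTo() (velero.ResourceSelector, error) {
	return velero.ResourceSelector{
		IncludedResources: []string{"apiservices"},
		LabelSelector:     "kube-aggregator.kubernetes.io/automanaged",
	}, nil
}
```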
It was discovered during Velero 1.6.3 upgrade testing that Velero was
restoring `APIService` objects for APIs that are no longer being served
by Kubernetes 1.22. If these items were restored, it would break the
behaviour of discovery within the cluster.
This change introduces a new RestoreItemAction plugin that skips the
restore of any `APIService` object which is managed by Kubernetes such
as those for built-in APIs or CRDs. The `APIService`s for these will be
created when the Kubernetes API server starts or when new CRDs are
registered. These objects are identified by looking for the
`kube-aggregator.kubernetes.io/automanaged` label.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Use appropriate CRD API during readiness check
The readiness check for the Velero CRDs was still using the v1beta1 API.
This would cause the readiness check to fail on 1.22 clusters as the
v1beta1 API is no longer available. Previously, this error would be
ignored and the installation would proceed, however with #4002, we are
no longer ignoring errors from this check.
This change modifies the CRD readiness check to check the CRDs using the
same API version that was used when submitting the CRDs to the cluster.
It also introduces a new CRD builder using the V1 API for testing.
This change also fixes a bug that was identified in the polling code
where if the CRDs were not ready on the first polling iteration, they
would be added again to the list of CRDs to check resulting in
duplicates. This would cause the length check to fail on all subsequent
polls and the timeout would always be reached.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Remove duplicate V1 CRD builder and update comment
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
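A minimal sketch of the corrected polling loop described above, assuming k8s.io/apimachinery's wait package; `waitForCRDs` and `isReady` are hypothetical stand-ins for the version-aware readiness check:

```go
import (
	"time"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForCRDs polls until every CRD is ready. The fix: rebuild the
// pending list on every poll instead of appending to a shared slice,
// which produced duplicates and made the length check fail forever.
func waitForCRDs(crds []unstructured.Unstructured, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		var pending []string
		for i := range crds {
			ready, err := isReady(&crds[i])
			if err != nil {
				return false, err
			}
			if !ready {
				pending = append(pending, crds[i].GetName())
			}
		}
		return len(pending) == 0, nil
	})
}
```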
Add the image pull secret to the service account when deploying velero and kibishii to avoid the image pull limit issue of Docker Hub
Fixes#3966
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
The backup name must be no more than 63 characters; otherwise we'll get an error on the vSphere platform:
Failed to create snapshot record: Snapshot.backupdriver.cnsdp.vmware.com \"snap-8945e7df-069e-4f56-aeb5-75b1dd87547f\" is invalid: metadata.labels: Invalid value: \"backup-bsl-e7a1d0f3-2f29-4d80-9184-6214dac91d96-e7a1d0f3-2f29-4d80-9184-6214dac91d96\": must be no more than 63 characters"
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
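A minimal sketch of a pre-flight check for this, assuming backup names end up as label values downstream; the helper name is hypothetical:

```go
import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

// validateBackupName rejects names that would later violate the
// 63-character Kubernetes label limit (validation.DNS1123LabelMaxLength)
// that the vSphere snapshot labels run into.
func validateBackupName(name string) error {
	if len(name) > validation.DNS1123LabelMaxLength {
		return fmt.Errorf("backup name %q must be no more than %d characters", name, validation.DNS1123LabelMaxLength)
	}
	return nil
}
```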
If the Velero CLI can't discover the Kubernetes preferred CRD API
version, use the flag --crds-version to determine the CRDs version.
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
As we add more E2E test cases, the job will take a long time before checks pass for pull requests; this commit changes which test cases run for PRs (only basic cases).
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
Generate a test report for the E2E testing so that we can check the test results in the automation pipelines easily
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
This commit adds `Enhancement/User` as an exempt label so that issues
like #3772 won't be closed by the stale bot.
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
* Add the design for `velero debug`
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
* Add namespace for capturing `velero version`
Signed-off-by: Daniel Jiang <jiangd@vmware.com>
The `push-build-image` target was broken in #3634. The `ifneq`
conditional block had tabs for indentation which results in incorrect
behaviour. Instead, remove whitespace before the conditional block like
we do for other similar blocks.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
This adds a new `buildinfo` variable `ImageRegistry` that can be set at
build time like the `Version` variable. This allows us to customise the
Velero binary to use different registries.
If the variable is set, it will be used when creating the
URIs for both the main `velero` and `velero-restic-restore-helper`
images. If it is not set, we default to using Docker Hub (`velero/velero`,
`velero/velero-restic-restore-helper`).
There are numerous ways in which the Velero binary can be built so all
of them have been updated to add the new link time flag to set the
variable:
* `make local` (used for local developer builds to build for the local
OS and ARCH)
* `make build` (used by developers and also VMware internal builds to
build a specific OS and ARCH)
* Goreleaser config (used when creating OSS release binaries)
* Dockerfile (used to build the Velero binary used within the image)
All of these workflows are currently triggered from our Makefile where
the variable `REGISTRY` is already available with the default value of
`velero` and used to build the image tag. Where the new `ImageRegistry`
build variable is needed, we pass through this Makefile variable to
those tasks so it can be used accordingly.
The GitHub action and the `./hack/docker-push.sh` script used to push
container images has not been modified. This will continue to use the
default registry specified in the Makefile and will not explicitly pass
it in.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
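A minimal sketch of the mechanism, assuming the variable lives alongside `Version` in the `buildinfo` package; the fallback helper is hypothetical, and the real wiring is the Makefile's `-ldflags "-X ..."` invocation:

```go
package buildinfo

// ImageRegistry, like Version, is populated at link time, e.g.:
//   go build -ldflags "-X <module>/pkg/buildinfo.ImageRegistry=$(REGISTRY)"
var ImageRegistry string

// imageRegistry falls back to the Docker Hub organization when no
// registry was baked in at build time.
func imageRegistry() string {
	if ImageRegistry != "" {
		return ImageRegistry
	}
	return "velero"
}
```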
Wenkai Yin recently joined the Velero team within VMware. He has been
contributing to the technical health of Velero, introducing important
changes such as running our E2E tests as part of our PR checks and will
continue to focus in this area.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
Daniel Jiang recently joined the Velero team within VMware and will be
taking on a technical leadership role. He has been contributing to the
project through community engagement including issue triage and
community support, and is taking on more significant feature development
within Velero such as the design and development of the `velero debug`
feature.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
1. Run the E2E test with kind (provisioning various versions of k8s clusters) and MinIO on GitHub Actions
2. Bug fix: the variable "stdoutBuf" was assigned to both "installPluginCmd.Stdout" and "installPluginCmd.Stderr"; this caused 'if !strings.Contains(stderrBuf.String(), "Duplicate value")' to have no effect, as "stderrBuf.String()" was always empty
3. Print the stdout and stderr for easy debugging
Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
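A minimal sketch of the fix described in item 2, assuming `installPluginCmd` is an `*exec.Cmd`; the wrapper name is hypothetical:

```go
import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// runInstallPlugin gives stdout and stderr separate buffers, so the
// "Duplicate value" check really inspects stderr instead of an
// always-empty buffer.
func runInstallPlugin(installPluginCmd *exec.Cmd) error {
	var stdoutBuf, stderrBuf bytes.Buffer
	installPluginCmd.Stdout = &stdoutBuf
	installPluginCmd.Stderr = &stderrBuf
	if err := installPluginCmd.Run(); err != nil {
		// Only tolerate the error when the plugin is already installed.
		if !strings.Contains(stderrBuf.String(), "Duplicate value") {
			return fmt.Errorf("install plugin: %v, stderr: %s", err, stderrBuf.String())
		}
	}
	// Print both streams for easy debugging.
	fmt.Println(stdoutBuf.String(), stderrBuf.String())
	return nil
}
```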
In #3863, it was discovered that volumes from projected sources were
being backed up by restic when they should have been skipped. Restoring
these volumes triggers a known bug in restic.
In #3866, we started skipping volumes from a projected source, however
there will exist backups that were taken before this fix was introduced.
This change modifies the restore logic to skip the restore of any volume
that came from a projected source, allowing backups taken before #3866
to be restored successfully.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Remove controllers and sleeps in API groups e2e tests
Signed-off-by: F. Gold <fgold@vmware.com>
* Print command in AfterEach(...) and check error
Signed-off-by: F. Gold <fgold@vmware.com>
* Make change ahead of PR3764 changes in main
Signed-off-by: F. Gold <fgold@vmware.com>
* Update go.{mod,sum} files
Signed-off-by: F. Gold <fgold@vmware.com>
* Run make update
Signed-off-by: F. Gold <fgold@vmware.com>
* Add document describing manual test cases
This introduces a new document, `TESTING.md`, which describes manual
tests that are currently run as part of a Velero release and test cases
that we will want to introduce for future releases.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Move testing requirements doc to website
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
Add backup phases needed for Upload Progress Monitoring. Fixes #3755
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
Previously `WithPlugins` only supported passing image URIs "by tag" --
e.g. `gcr.io/my-repo/my-image:v0.1.2`. With this commit, we add support
for pulling "by digest" -- e.g.
`gcr.io/my-repo/my-image@sha256:a75f9e8c3ced3943515f249597be389f8233e1258d289b11184796edceaa7dab`
Signed-off-by: Eric Fried <efried@redhat.com>
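A minimal sketch of distinguishing the two reference forms; the helper is illustrative (and ignores registry ports for brevity), not the actual `WithPlugins` implementation:

```go
import "strings"

// repoName returns the repository part of an image reference for both
// "by tag" (repo:tag) and "by digest" (repo@sha256:...) forms, which is
// what lets both URI styles be treated uniformly.
func repoName(image string) string {
	if i := strings.Index(image, "@"); i >= 0 {
		return image[:i] // by digest: repo@sha256:...
	}
	if i := strings.LastIndex(image, ":"); i >= 0 {
		return image[:i] // by tag: repo:tag
	}
	return image
}
```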
* use unstructured to marshal selective fields
Signed-off-by: Alay Patel <alay1431@gmail.com>
* add a sample test for string port in applied config
Signed-off-by: Alay Patel <alay1431@gmail.com>
* update changelog
Signed-off-by: Alay Patel <alay1431@gmail.com>
* Fix gh action
* Fix it maybe
* Update GH action version
* Set write permission for the job
* Use target
* Remove config that is already default
Signed-off-by: Carlisia <carlisia@grokkingtech.io>
This change is incompatible with velero-plugin-for-csi
releases <= v0.1.2
Remove special casing of CSI volumesnapshot artifacts
from backup deletion logic as this has been moved to
a DeleteItemAction plugin in the velero-plugin-for-csi repo
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
Change to add new plugin SnapshotItemAction, added started/updated fields to UploadProgress
Updated SnapshotItemAction, added additional tasks
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
* Changes to secrets design
Removed references to Volume Storage Locations/VSLs
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
* Description of current parallelism points
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
For internal builds of Velero, we need to be able to specify an
alternative Dockerfile which uses an alternative image registry to pull
the base images from. This change adapts our Makefile such that both the
main Dockerfile and build image Dockerfile can be overridden.
We have some special handling for the build image to only build when the
Dockerfile has changed. In this case, we check whether a custom
Dockerfile has been provided, and always rebuild in that case. For
custom build image Dockerfiles, use a fixed tag rather than the one
based on commit SHA of the original file.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
Snapshot tests can be run with Ginkgo focus "Snapshot" and restic tests with Ginkgo focus "Restic".
Restic and volume snapshot tests can now be run simultaneously.
Added check for kibishii app start after restore.
Consolidated kibishii pod checks into waitForKibishiiPods.
Added WaitForPods function to e2e/tests/common.go. Snapshot tests are skipped automatically on kind clusters.
Fixed issue where velero_utils InstallVeleroServer was looking for the restic daemon set in the "velero" namespace only (it was ignoring io.Namespace)
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
* Improve readability and formatting of pkg/restore/restore.go
Signed-off-by: F. Gold <fgold@vmware.com>
* Update paths to include API group versions
Signed-off-by: F. Gold <fgold@vmware.com>
* Use full word, 'resource' instead of 'resrc'
Signed-off-by: F. Gold <fgold@vmware.com>
The test for multiple credentials assumed that the plugin for the
additional BSL provider was already installed. This will not be the case
when performing a clean install of Velero between tests.
This adds a new utility function to add the plugins that are necessary
for the additional BSL provider. It doesn't check which plugins are
already installed; it will just attempt to install and if the stderr
contains the message that it is a duplicate plugin, we ignore the error
and continue. This could be improved by inspecting the output from
`velero plugin get` but I opted for a quicker solution given the
upcoming release.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
Our previous render hook to create links would drop the fragment when
linking to headings within the current page or within other markdown
pages on the site.
This change parses the URL and formats the link correctly if it includes
a fragment. If the link is a header on the current page, it is rendered
as `http://<current-url>/#header`. If the link is a header on a
different page (e.g. page.md#header), it is rendered as
`http://<page-url>/#header`.
This change is taken from the following Hugo community support post:
https://discourse.gohugo.io/t/markdown-render-hooks-github-and-hugo-compatible-links/22543/14
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Use Credential from BSL for restic commands
This change introduces support for restic to make use of per-BSL
credentials. It makes use of the `credentials.FileStore` introduced in
PR #3442 to write the BSL credentials to disk. To support per-BSL
credentials for restic, the environment for the restic commands needs to
be modified for each provider to ensure that the credentials are
provided via the correct provider specific environment variables.
This change introduces a new function `restic.CmdEnv` to check the BSL
provider and create the correct mapping of environment variables for
each provider.
Previously, AWS and GCP could rely on the environment variables in the
Velero deployments to obtain the credentials file, but now these
environment variables need to be set with the path to the serialized
credentials file if a credential is set on the BSL.
For Azure, the credentials file in the environment was loaded and parsed
to set the environment variables for restic. Now, we check if the BSL
has a credential, and if it does, load and parse that file instead.
This change also introduces a few other small improvements. Now that we
are fetching the BSL to check for the `Credential` field, we can use the
BSL directly to get the `CACert` which means that we can remove the
`GetCACert` function. Also, now that we have a way to serialize secrets
to disk, we can use the `credentials.FileStore` to get a temp file for
the restic repo password and remove the `restic.TempCredentialsFile`
function.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
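A simplified sketch of the provider-specific environment mapping this describes; the signature is assumed, not Velero's exact `restic.CmdEnv`:

```go
import "os"

// cmdEnv starts from the server's environment and overrides the
// provider-specific credential variable when the BSL carries its own
// credential (credsFile is the serialized credentials file path).
func cmdEnv(provider, credsFile string) []string {
	env := os.Environ()
	switch provider {
	case "aws":
		env = append(env, "AWS_SHARED_CREDENTIALS_FILE="+credsFile)
	case "gcp":
		env = append(env, "GOOGLE_APPLICATION_CREDENTIALS="+credsFile)
	case "azure":
		// Azure credentials are key=value pairs parsed from the file and
		// exported individually (elided here).
	}
	return env
}
```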
* Add documentation for per-BSL credentials
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Address review feedback
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Address review comments
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
We are no longer adding the Credentials field to the VSL so this reverts
part of the change that added it (#3409).
The original PR also added the `snapshot-location set` command. This
command only included options for setting the credential but is part of
the work for #2426. Due to this, the command has been left in place
(with the credentials option removed) but has been hidden.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
This change adds an E2E test which exercises the multiple credentials
feature added in #3489. The test creates a secret from the given
credentials and creates a BackupStorageLocation which uses those
credentials. A backup and restore is then performed to the default
BSL and to the newly created BSL.
This change adds new flags to the E2E test suite to configure the BSL
created and used in the test.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Load credentials and pass to ObjectStorage plugins
Update NewObjectBackupStore to take a CredentialsGetter which can be
used to get the credentials for a BackupStorageLocation if it has been
configured with a Credential. If the BSL has a credential, use that
SecretKeySelector to fetch the secret, write the contents to a temp file
and then pass that file through to the plugin via the config map using
the key `credentialsFile`. This relies on the plugin being able to use
this new config field.
This does not yet handle VolumeSnapshotLocations or ResticRepositories.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
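A minimal sketch of the config wiring, assuming the `credentials.FileStore` from PR #3442; the helper name is hypothetical but the `credentialsFile` key is the new plugin config field named above:

```go
// addCredentialsConfig writes the BSL's credential secret to a temp file
// via the file store and surfaces its path to the object store plugin
// through the plugin's config map.
func addCredentialsConfig(bsl *velerov1api.BackupStorageLocation, fileStore credentials.FileStore, config map[string]string) (map[string]string, error) {
	if bsl.Spec.Credential == nil {
		return config, nil
	}
	credsFile, err := fileStore.Path(bsl.Spec.Credential)
	if err != nil {
		return nil, err
	}
	if config == nil {
		config = map[string]string{}
	}
	// The object store plugin reads this key to locate the credentials.
	config["credentialsFile"] = credsFile
	return config, nil
}
```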
* Address code reviews
Add godocs and comments.
Improve formatting and test names.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Address code reviews
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Add uninstall cmd
- init fn to uninstall velero
- abstract dynamic client creation to a separate fn
- creates a separate client per unstructured resource
- add delete client for CRDs
- export appendUnstructured
- add uninstall command to main cmd
- export `podTemplateOption`
- uninstall resources in the reverse order of installation
- fallback to `velero` if no ns is provided during uninstall
- skip deletion if the resource doesn't exist
- handle resource not found error
- match log formatting with cli install logs
- add Delete fn to fake client
- fix import order
- add changelog
- add comment doc for CreateClient fn
Signed-off-by: Suraj Banakar <suraj@infracloud.io>
* Re-use uninstall code from test suite
- move helper functions out of test suite
- this is to prevent cyclic imports
- move uninstall helpers to uninstall cmd
- call them from test suite
- revert export of variables/fns from install code
- because not required anymore
Signed-off-by: Suraj Banakar <suraj@infracloud.io>
* Revert `PodTemplateOption` -> `podTemplateOption`
Signed-off-by: Suraj Banakar <suraj@infracloud.io>
* Use uninstall helper under VeleroUninstall
- as a wrapper
- fix import related errors in test suite
Signed-off-by: Suraj Banakar <suraj@infracloud.io>
* Validate CRDs against latest Kubernetes versions
Add Kubernetes v1.19 and v1.20 series images, and consolidate the job
into a single file to reduce repetition.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Ignore job if the changes are only site/design
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Fix codespell error
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Cache Velero binary for reuse on workers
This will cache the Velero binary based on the PR number and a SHA256 of
the generated binary.
This way, the runners testing each version of Kubernetes do not need to
build it independently.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Fix GitHub event access
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Wrap output path in quotes
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Move code checkout to build step
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Also cache go modules
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Fix syntax issues
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Download cached binary on each node
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Use cached go modules on main CI
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
HTTPS requests were failing due to the ca-certificates package not being
installed in the Tilt image.
This change takes the command to install this package from our main
Dockerfile (which also includes installing tzdata).
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Use pod namespace from backup when matching PVBs
In #3051, we introduced an additional check to ensure that a PVB matched
a particular pod by checking both the name and the namespace of the pod.
This caused an issue when using a namespace mapping on restore. In the
case where a namespace mapping is being used, the check for whether a
PVB matches a particular pod will fail as the PVB was created for the
original pod namespace and is not aware of the new namespace mapping
being used. This resulted in PVRs not being created for pods that were
being restored into new namespaces. The restic init containers were
being created to wait on the volume restore, however this would cause
the restored pods to block indefinitely as they would be waiting for a
volume restore that was not scheduled.
To fix this, we use the original namespace of the pod from the backup to
match the PVB to the pod being restored, not the new namespace where
the pod is being restored into.
Fixes#3467.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
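A sketch of the corrected match; the helper name is hypothetical. The key point is that the namespace compared against is the pod's original namespace recorded in the backup, not the (possibly remapped) restore target namespace:

```go
// pvbMatchesPod matches a PodVolumeBackup to a pod being restored using
// the pod's name and its original, pre-mapping namespace from the backup.
func pvbMatchesPod(pvb *velerov1api.PodVolumeBackup, podName, originalNamespace string) bool {
	return pvb.Spec.Pod.Name == podName && pvb.Spec.Pod.Namespace == originalNamespace
}
```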
* Explain why the namespace mapping can't be used
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
Split plug-in provider into cloud provider/object provider
Moved velero install/uninstall for tests into velero_utils
Added removal of CRDs to test velero uninstall
Added removal of the cluster role binding to test velero uninstall
Added dump of velero describe and logs on error
Added velero namespace argument to velero_utils functions
Modified api group versions e2e tests to use VeleroInstall
Added velero logs dumps for api group versions e2e testing
Added DeleteNamespace to test/e2e/common.go
Fixed VeleroInstall to use the image specified
Changed enable_api_group_versions_test to use veleroNamespace instead of hardcoded "velero"
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
* Use kubebuilder client for fetching restic secrets
Instead of using a SecretInformer for fetching secrets for restic, use
the cached client provided by the controller-runtime manager.
In order to use this client, the scheme for Secrets must be added to the
scheme used by the manager so this is added when creating the manager in
both the velero and restic servers.
This change also refactors some of the tests to add a shared utility for
creating a fake controller-runtime client which is now used among all
tests which use that client. This has been added to ensure that all
tests use the same client with the same scheme.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Add builder for SecretKeySelector
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Restore API group version by priority
Signed-off-by: F. Gold <fgold@vmware.com>
* Add changelog
Signed-off-by: F. Gold <fgold@vmware.com>
* Correct spelling
Signed-off-by: F. Gold <fgold@vmware.com>
* Refactor userResourceGroupVersionPriorities(...) to accept config map, adjust unit test
Signed-off-by: F. Gold <fgold@vmware.com>
* Move some unit tests into e2e
Signed-off-by: F. Gold <fgold@vmware.com>
* Add three e2e tests using Testify Suites
Summary of changes
Makefile - add testify e2e test target
go.sum - changed with go mod tidy
pkg/install/install.go - increased polling timeout
test/e2e/restore_priority_group_test.go - deleted
test/e2e/restore_test.go - deleted
test/e2e/velero_utils.go - made restic optional in velero install
test/e2e_testify/Makefile - makefile for testify e2e tests
test/e2e_testify/README.md - example command for running tests
test/e2e_testify/common_test.go - helper functions
test/e2e_testify/e2e_suite_test.go - prepare for tests and run
test/e2e_testify/restore_priority_apigv_test.go - test cases
Signed-off-by: F. Gold <fgold@vmware.com>
* Make changes per @nrb code review
Signed-off-by: F. Gold <fgold@vmware.com>
* Wait for pods in e2e tests
Signed-off-by: F. Gold <fgold@vmware.com>
* Remove testify suites e2e scaffolding moved to PR #3354
Signed-off-by: F. Gold <fgold@vmware.com>
* Make changes per @brito-rafa and Velero maintainers code reviews
- Made changes suggested by @brito-rafa in GitHub.
- We had a code review meeting with @carlisia, @dsu-igeek, @zubron, and @nrb
- and changes were made based on their suggestions:
- pull in logic from 'meetsAPIGVResotreReqs()' to restore.go.
- add TODO to remove APIGroupVersionFeatureFlag check
- have feature flag and backup version format checks in separate `if` statements.
- rename variables to be sourceGVs, targetGVs, and userGVs.
Signed-off-by: F. Gold <fgold@vmware.com>
* Convert Testify Suites e2e tests to existing Ginkgo framework
Signed-off-by: F. Gold <fgold@vmware.com>
* Made changes per @zubron PR review
Signed-off-by: F. Gold <fgold@vmware.com>
* Run go mod tidy after resolving go.sum merge conflict
Signed-off-by: F. Gold <fgold@vmware.com>
* Add feature documentation to velero.io site
Signed-off-by: F. Gold <fgold@vmware.com>
* Add config map e2e test; rename e2e test file and name
Signed-off-by: F. Gold <fgold@vmware.com>
* Update go.{mod,sum} files
Signed-off-by: F. Gold <fgold@vmware.com>
* Move CRDs and CRs to testdata folder
Signed-off-by: F. Gold <fgold@vmware.com>
* Fix typos in cert-manager to pass codespell CICD check
Signed-off-by: F. Gold <fgold@vmware.com>
* Make changes per @nrb code review round 2
- make checkAndReadDir function private
- add info level messages when priorities 1-3 API group versions cannot be used
Signed-off-by: F. Gold <fgold@vmware.com>
* Make user config map rules less strict
Signed-off-by: F. Gold <fgold@vmware.com>
* Update e2e test image version in example
Signed-off-by: F. Gold <fgold@vmware.com>
* Update case A music-system controller code
Signed-off-by: F. Gold <fgold@vmware.com>
* Documentation updates
Signed-off-by: F. Gold <fgold@vmware.com>
* Update migration case documentation
Signed-off-by: F. Gold <fgold@vmware.com>
* Use label to select Velero deployment in plugin cmd
Signed-off-by: F. Gold <fgold@vmware.com>
* Move veleroLabel constant closer to usage
Signed-off-by: F. Gold <fgold@vmware.com>
* Add changelog
Signed-off-by: F. Gold <fgold@vmware.com>
* Remove year from copyright in new file
Signed-off-by: F. Gold <fgold@vmware.com>
* Export and use install.Labels() function
Signed-off-by: F. Gold <fgold@vmware.com>
Restoring CAPI workload clusters without this ordering caused the
capi-controller-manager code to panic, resulting in an unhealthy cluster
state.
This can be worked around
(https://community.pivotal.io/s/article/5000e00001pJyN41611954332537?language=en_US),
but we provide the inclusion of these resources as a default in order to
provide a better out-of-the-box experience.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
The labeler action was failing as it was looking for
`.github/labels.yaml` but the file has the suffix `.yml`. This change
fixes the path used by the labeler action.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
The prow-action plugin will prepend `area` or `kind` to labels, so
unify them into a common format.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Add colors to describe command
* Add colors to describe backups/restore/schedules commands
* Make name in the output bold
* Disable colors via the `--colorized` flag or if velero isn't attached to a TTY
Co-authored-by: Clay Kauzlaric <ckauzlaric@vmware.com>
Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>
Signed-off-by: Mikael Manukyan <mmanukyan@vmware.com>
* Add changelog
* and run make update
Co-authored-by: Mikael Manukyan <mmanukyan@vmware.com>
Signed-off-by: Mikael Manukyan <mmanukyan@vmware.com>
Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>
* Add colorized to the client config file
Co-authored-by: Mikael Manukyan <mmanukyan@vmware.com>
Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>
Co-authored-by: Mikael Manukyan <mmanukyan@vmware.com>
* allow client config to use string values
* the command `velero client config set colorized=false` writes a string
value of "false" into the config. This change allows that string to be
accepted and converted into a boolean when used in the program.
Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>
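A minimal sketch of the string-to-bool conversion, assuming the client config stores values as strings; the accessor name is hypothetical:

```go
import "strconv"

// colorized parses the stored string ("true"/"false") into a bool;
// missing or invalid values keep colors enabled by default.
func colorized(cfg map[string]string) bool {
	val, ok := cfg["colorized"]
	if !ok {
		return true
	}
	enabled, err := strconv.ParseBool(val)
	if err != nil {
		return true
	}
	return enabled
}
```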
* Add docs about colored CLI output
Co-authored-by: Mikael Manukyan <mmanukyan@vmware.com>
Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>
* Update site/content/docs/main/customize-installation.md
Co-authored-by: JenTing Hsiao <jenting.hsiao@suse.com>
Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>
* docs: remove comma
* as per @carlisia 's suggestion
Signed-off-by: Clay Kauzlaric <ckauzlaric@vmware.com>
Co-authored-by: Clay Kauzlaric <ckauzlaric@vmware.com>
Co-authored-by: Clay Kauzlaric <clay.kauzlaric@gmail.com>
Co-authored-by: JenTing Hsiao <jenting.hsiao@suse.com>
With #3327, the restic binary for the Tilt Velero image is downloaded on
the local machine using the `./hack/download-restic.sh` script. This
script relies on `wget` being available on the local machine. `wget` is
not commonly available on macOS but `curl` is. This change modifies the
`./hack/download-restic.sh` script to use `curl` instead as it is
available on both Linux and macOS and is available in our `golang`
docker build image.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
In preparation for modifying the instantiation of `BackupStores` to be
able to load credentials, change the function `NewObjectBackupStore` to
be an interface that is passed in to all controllers.
Previously, the function to get a new backup store was configurable but
for many controllers was fixed to use `NewObjectBackupStore`. This
change introduces an interface for getting the backup store and wraps
the functionality from `NewObjectBackupStore` in a type which implements
this interface. This will allow more flexibility when introducing
credentials for a specific backup store as it will allow us to create a
new `ObjectBackupStoreGetter` type which can be configured to add
credentials config when creating the ObjectBackupStore without needing
to change the API used by the controllers.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
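A sketch of the interface described above; parameter types follow the commit description and may not match Velero's exact signatures:

```go
// ObjectBackupStoreGetter abstracts how controllers obtain a BackupStore.
type ObjectBackupStoreGetter interface {
	Get(location *velerov1api.BackupStorageLocation, pluginManager clientmgmt.Manager, log logrus.FieldLogger) (BackupStore, error)
}

// objectBackupStoreGetter wraps the previous NewObjectBackupStore
// behaviour so existing controllers keep working unchanged.
type objectBackupStoreGetter struct{}

func (objectBackupStoreGetter) Get(location *velerov1api.BackupStorageLocation, pluginManager clientmgmt.Manager, log logrus.FieldLogger) (BackupStore, error) {
	return NewObjectBackupStore(location, pluginManager, log)
}
```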
In #3310, the Dockerfile for the Tilt Velero container was modified to
call the `./hack/download-restic.sh` script. A side effect of this
change was that the context for the docker build was much larger as it
was the root of the Velero repo, rather than just the `_tiltbuild`
directory. With the frequent rebuilds of the image that happen when
using Tilt, a large amounts of disk space was being consumed by the
different layers of images builds in the Docker overlay filesystem (as
diffs could include the `.go` directory which can be several GBs).
This change modifies the `download-restic.sh` script to allow the output
directory for the restic binary to be configured. This means that the
script can be called directly from the Tiltfile and can be managed
outside the container build. This allows us to restore the previous
`_tiltbuild` context. It also speeds up image builds as we can download
restic once and use it for all builds rather than redownloading
frequently.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
Now versions are working and there are code changes that need to happen:
- release candidate versions are aligned and working
- replace directives are removed and no longer required
controller-runtime has been changed during the 'make' command.
Signed-off-by: Ron Green <11993626+georgettica@users.noreply.github.com>
This change customises the issue template chooser to include a link to
the Community Support Q&A discussion board. This lets users know that
there is another place to ask questions related to using Velero.
This change also disables the creation of blank issues to prevent issues
that don't follow either the bug or feature request templates from being
opened.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Add Tilt configuration to debug using Delve
This change adds support to run the Velero process in Tilt using
[Delve](https://github.com/go-delve/delve).
This does not include support for debugging the Velero process in the
restic pods, just in the Velero deployment.
For an optimal debugging experience, this change also introduces a new
flag (`DEBUG`) to the `hack/build.sh` script to enable a "debug" build
of the Velero binary. This flag, if enabled, will build the binary
without optimisations and inlining. Disabling optimisations and inlining
is recommended by Delve.
Two configuration options have been added to the Tilt settings. The
first, `enable_debug`, is to control whether debugging should be
enabled. If enabled, the process will be started by Delve, and the Delve
server port (2345) will be forwarded to the local machine.
The second option, `debug_continue_on_start`, is to control whether the
process should "continue" when started by Delve or should be paused.
By default, debugging is disabled, and if in debug mode, the process
will continue.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Add spaces around keyword args
Starlark prefers spaces around `=` in keyword arguments:
https://docs.bazel.build/versions/master/skylark/bzl-style.html#keyword-arguments
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Remove unnecessary command from Dockerfile
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Add note to connect after Tilt is running
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
This change adds an additional set of commands to Dockerfile for the
Velero image which adds the `hack/download-restic.sh` script, installs
the necessary dependencies, and then runs that script.
In order to copy the script from the `hack` directory, the context for
building the image has been changed to the root of the velero
repository.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
This changes the codespell action config to use a relative path for the
generated crds.go file, as the pattern currently in use fails the check
performed by codespell (which uses the `fnmatch` module).
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* added useOwnerReferencesInBackup to CRD velero.io_schedules
Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>
* added UseOwnerReferencesInBackup property to schedule.go
Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>
* configured deepcopy for Schedule to reference the property UseOwnerReferencesInBackup
Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>
* added UseOwnerReferencesInBackup property verification to modify OwnerReferences from backup
Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>
* created changelog
Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>
* removed the deepcopy configuration for Schedule referencing the property UseOwnerReferencesInBackup
Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>
* running make update
Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>
* running make update
Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>
* updated the year at the top of the schedule.go file for 2020
Signed-off-by: matheusjuvelino <matheus.juvelino@outlook.com>
* -> Preserve nodePort support when restoring via "--preserve-nodeports" flag
Signed-off-by: Yusuf Güngör <yusuf.gungor@hepsiburada.com>
* -> Added changelog.
Signed-off-by: Yusuf Güngör <yusuf.gungor@hepsiburada.com>
* -> Unit test added.
-> Using boolptr.IsSetToTrue for bool ptr check.
Signed-off-by: Yusuf Güngör <yusuf.gungor@hepsiburada.com>
* -> Unit test added.
-> Using boolptr.IsSetToTrue for bool ptr check.
Signed-off-by: Yusuf Güngör <yusuf.gungor@hepsiburada.com>
* -> Other restore errors log level changed from info to error.
-> Documentation updated about Velero nodePort restore logic and preservation of them.
Signed-off-by: Yusuf Güngör <yusuf.gungor@hepsiburada.com>
Co-authored-by: Yusuf Güngör <yusuf.gungor@hepsiburada.com>
As long as a milestone and the board have the same title, then this
workflow should take care of adding an issue into the GitHub Project
board when an existing issue is given a milestone.
It does NOT support checking for a milestone when an issue is edited or
created though, due to limitations on GitHub Actions syntax right now -
there's not a great way to validate against an empty `milestone` object
at the moment, per https://docs.github.com/en/free-pro-team@latest/actions/reference/context-and-expression-syntax-for-github-actions
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Draft design doc for restoring API group version by priority level
Signed-off-by: F. Gold <fgold@vmware.com>
* Make changes per @jenting review and use filepath to join paths
Signed-off-by: F. Gold <fgold@vmware.com>
* Update design doc with config map and k8s version priorities
Signed-off-by: F. Gold <fgold@vmware.com>
* Edit k8s doc URL per @jenting's review comment
Signed-off-by: F. Gold <fgold@vmware.com>
* Editorial changes
Signed-off-by: F. Gold <fgold@vmware.com>
* Changes per @nrb PR review and other edits
Signed-off-by: F. Gold <fgold@vmware.com>
* Update Status.FormatVersion check sections and minor edits
Signed-off-by: F. Gold <fgold@vmware.com>
* Add default field to BSL CRD
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Add a new flag `--default` under `velero backup-location create`
add a new flag `--default` under `velero backup-location create`
to specify that this new location will be the new default BSL.
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Add a new default field under `velero backup-location get`
add a new default field under `velero backup-location get` to indicate
which BSL is the default one.
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Add a new sub-command and flag under `velero backup-location`
Add a new sub-command, `velero backup-location set`, and a new flag,
`velero backup-location set --default`, to configure which BSL is the
default one.
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Add new flag to get the default backup-location
Add a new flag `--default` under `velero backup-location get`
to display the current default BSL.
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Configures default BSL in BSL controller
When upgrading the BSL CRDs, none of the BSLs has been labeled as default.
Set the BSL default field to true if the BSL name matches the default BSL setting.
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Configures the default BSL in BSL controller for velero upgrade
When upgrading the BSL CRDs, none of the BSLs is marked as the default.
Set the BSL `.spec.default: true` if the BSL name matches the
`velero server --default-backup-storage-location`.
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Add unit test to test default BSL behavior
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Update the check for which BSL is the default in the backup/backup_sync/restore controllers
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Add changelog
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Update docs locations.md and upgrade-to-1.6.md
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
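A sketch of the new spec field with a standard kubebuilder marker; the other BSL fields are elided and the exact comment text is illustrative:

```go
type BackupStorageLocationSpec struct {
	// ...existing fields elided...

	// Default indicates this location is the default backup storage
	// location for the cluster.
	// +optional
	Default bool `json:"default,omitempty"`
}
```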
* 🐛 BSLs with validation disabled should be validated at least once
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
* review comments
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
PR #3110 introduced a new action for performing the login to Dockerhub
as part of image building and pushing however there is an error with the
configuration and the credentials are not being passed through
correctly. This change reverts to the previous log in approach.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
The previous buildx action that we were using has been archived and
users are recommended to switch to the new action provided by Docker.
The previous action also included setting up QEMU. This is now provided
as a separate action which needs to be run separately.
This change also replaces the direct use of `docker login` with the new
`login-action`. This new action also handles logging out once the build
is complete.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* feat: add delete sub-command for backup-location
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Change to use kubebuilder/runtimecontroller API
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Fix getting BSL by label not working
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Update changelog
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Ordering by alphabet
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Better example format for help message
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Capitalize the comments
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
* Don't fail backup if downloading tarball fails
Previously, we would always attempt to download the tarball for a backup
for processing DeleteItemAction plugins, even if there weren't any.
This caused an issue for some users in the case where the backup tarball
had been deleted from object storage as the backup deletion would fail.
Now, we only attempt to download the tarball in the case where there are
DeleteItemAction plugins. If downloading that tarball fails, we log
the error, skip the processing of the DeleteItemAction plugins and
proceed with the rest of the deletion.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Skip file removal in closeAndRemoveFile if nil
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Basic end-to-end tests, generate data/backup/remove/restore/verify
Uses distributed data generator
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
* Moved backup/restore into velero_utils, started using a name for the restore
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
* remove checked in binary and update test/e2e Makefile
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
* Ran make update
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
* Save
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
* Ran make update
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
* Basic end-to-end test, generate data/backup/remove/restore/verify
Uses distributed data generator
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
* Changed tests/e2e Makefile to just use go get to install ginkgo in the GOPATH/bin
Updated to ginkgo 1.14.2
Put cobra back to v0.0.7
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
* Added CLOUD_PLATFORM env variable to Makefile, updated README, removed ginkgo from .gitignore
Signed-off-by: Dave Smith-Uchida <dsmithuchida@vmware.com>
* choose velero CLI binary based on local env
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
Co-authored-by: Ashish Amarnath <ashisham@vmware.com>
By running the following command:
codespell -S .git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico -L \
iam,aks,ist,bridget,ue
Signed-off-by: Mateusz Gozdek <mgozdekof@gmail.com>
* fixing label for 'velero.io/change-pvc-node-selector' plugin in site document
Signed-off-by: mayank <mayank.patel@mayadata.io>
* Fixing "velero.io/change-pvc-node-selector" to fetch config using plugin name
Signed-off-by: mayank <mayank.patel@mayadata.io>
* adding changelog
Signed-off-by: mayank <mayank.patel@mayadata.io>
This change modifies the kubebuilder annotations for the Velero CRDs to
include `additionalPrinterColumns` so that more information is exposed
when using `kubectl get`.
For each of the CRDs, annotations have been added to make the output
for `kubectl get` match the output from the equivalent `velero get`
command as closely as possible. There are some cases where this output
could not be replicated, such as the `EXPIRES` column for Backups, due
to the limitations of JSONPath expressions within the resulting CRD
definition. Some columns undergo processing and formatting before being
printed by the Velero CLI which cannot be replicated using JSONPath. In
these cases, these printer columns have been omitted.
For other CRDs where there is no `velero get` equivalent, such as
`PodVolumeBackup` and `PodVolumeRestore`, a best effort has been made to
expose information that provides value.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
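A hypothetical pair of kubebuilder markers showing the mechanism; the column names and JSONPaths here are illustrative, and the actual columns per CRD follow the corresponding `velero get` output:

```go
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.phase",description="Backup status"
// +kubebuilder:printcolumn:name="Created",type="date",JSONPath=".metadata.creationTimestamp"
type Backup struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   BackupSpec   `json:"spec,omitempty"`
	Status BackupStatus `json:"status,omitempty"`
}
```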
* Adding handling of the restic-wait init container in any position, with a warning.
Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>
* Adding newline at end of files to match convention.
Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>
* Formatting.
Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>
* Update copyright year on modified files.
Signed-off-by: Piper Dougherty <doughertypiper@gmail.com>
* Only remove the UID from a PV's claimRef
The UID is the only part of a claimRef that might prevent it from being
rebound correctly on a restore. The namespace and name within the
claimRef should be preserved in order to ensure that the PV is claimed
by the correct PVC on restore.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Remap PVs claimRef.namespace on relevant restores
When remapping namespaces, any included PVs need to have their claimRef
updated to point remapped namespaces to the new namespace name in order
to be bound to the correct PVC.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
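A minimal sketch covering both changes, assuming `obj` is the unstructured PV and `namespaceMapping` comes from the restore spec; the helper name is hypothetical:

```go
import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

// fixClaimRef drops only the claimRef UID so the PV can be rebound,
// preserving namespace and name so the PV binds to the correct PVC,
// and remaps the claimRef namespace when the restore remaps namespaces.
func fixClaimRef(obj *unstructured.Unstructured, namespaceMapping map[string]string) {
	unstructured.RemoveNestedField(obj.Object, "spec", "claimRef", "uid")

	if oldNS, found, _ := unstructured.NestedString(obj.Object, "spec", "claimRef", "namespace"); found {
		if newNS, remapped := namespaceMapping[oldNS]; remapped {
			_ = unstructured.SetNestedField(obj.Object, newNS, "spec", "claimRef", "namespace")
		}
	}
}
```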
* Update tests and ensure claimRef namespace remaps
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Remove lowercased uid field from unstructured PV
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Fix issues that prevented PVs from being restored
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Add changelog
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Dynamically reprovision volumes without snapshots
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Update test for lower case uid field
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Remove stray debugging print statement
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Fix typo, remove extra code, add tests.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* create CRB with velero-<namespace>
This will allow creating multiple instances of velero,
across two different namespaces
Signed-off-by: Alay Patel <alay1431@gmail.com>
* add changelog
Signed-off-by: Alay Patel <alay1431@gmail.com>
* add package var DefaultVeleroNamespace and use it wherever needed
Signed-off-by: Alay Patel <alay1431@gmail.com>
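A sketch of the namespace-derived name, assuming standard RBAC types; the role and subject details are illustrative, not necessarily the install code's exact values:

```go
import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clusterRoleBinding derives the cluster-scoped name from the namespace,
// so multiple Velero installations don't collide on a single "velero" CRB.
func clusterRoleBinding(namespace string) *rbacv1.ClusterRoleBinding {
	return &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{
			Name: "velero-" + namespace,
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Namespace: namespace,
			Name:      "velero",
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
	}
}
```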
The command to check for an existing release branch only checked for
local branches. We should be considering both local and remote branches
before cherry-picking commits for the new release.
This change checks for existing local and remote release branches and
creates or updates them accordingly.
* If a remote branch exists, but a local branch does not, checkout the
remote branch and track it.
* If the remote branch and local branch exists, checkout the local
branch and ensure that the latest commits from the remote are pulled.
* Otherwise, if the remote branch does not exist, create it locally if
needed.
This also handles the case where an existing release branch may be
tracked in multiple remotes as the remote to use is explicitly stated.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
We instruct users to update the CRDs when upgrading to 1.4 and 1.5 which
involves using `kubectl apply` to apply the CRD configuration. The CRD
configuration generated by `velero install` includes fields which are
not valid when running Kubernetes v1.14 or earlier. We instruct users to
work around this when doing a customised velero install, but not when
upgrading to newer versions. This change updates the upgrade
instructions for v1.4 and v1.5 to include the use of `--validate=false`
flag when running `kubectl apply`.
See #2077 and #2311 for more context.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* restore proper lowercase/plural CRD resource
This commit restores the proper resource string
"customresourcedefinitions" for CRD. The prior change to
"CustomResourceDefinition" was made because this was being used
in another place to populate the CRD "Kind" field in
remap_crd_version_action.go -- there, just use the correct Kind
string instead of pulling from Resource.
Signed-off-by: Scott Seago <sseago@redhat.com>
* add changelog
Signed-off-by: Scott Seago <sseago@redhat.com>
The release script assumes that the remote for the vmware-tanzu/velero
repository is called `upstream`. It may be the case that this remote is
configured to use a different name. This change updates the script to
allow the remote name being used to be configured by setting the
environment variable `REMOTE` before running the script. If the variable
is not set, the remote defaults to `upstream`.
The release instructions have also been updated to reflect this change.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
This change addresses some issues in the documentation and scripts that
were found during the v1.5.1 release:
* Fix the path to the changelog script in the Makefile
* Fix the path to the pre-release TOC in the docs
* Improve the instructions for creating/updating the upgrade
instructions page.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Don't attempt to publish docker images on forks
When the Main CI workflow runs on a fork, the docker push step will
always fail because the appropriate secrets are missing. This is
annoying at best and causes CI on forks to be untrustworthy at worst.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Use single quotes for string, as github expects
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Blog post announcing Velero 1.5
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
* Remove hardcoded deploy preview URL
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Remove base URL entirely
Since there's not really an easy way to use the preview URL environment
variables in the netlify.toml, remove the baseURL argument entirely
from the build command.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Update blog post date and expected tag link
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
Co-authored-by: Ashish Amarnath <ashisham@vmware.com>
Now that Exec restore hooks have been added in #2804 and are available
in 1.5.0-rc1, we can remove the line that states that they are coming
soon.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
A number of links still pointed to the old master branch and resulted in
404s. This updates those links to point to the new main branch.
Signed-off-by: Bridget McErlean <bmcerlean@vmware.com>
* Exec hooks in restored pods
Signed-off-by: Andrew Reed <andrew@replicated.com>
* WaitExecHookHandler implements ItemHookHandler
This required adding a context.Context argument to the ItemHookHandler
interface which is unused by the DefaultItemHookHandler implementation.
It also means passing nil for the []ResourceHook argument since that
holds BackupResourceHook.
Signed-off-by: Andrew Reed <andrew@replicated.com>
* WaitExecHookHandler unit tests
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Changelog and go fmt
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Fix double import
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Default to first container in pod
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Use constants for hook error modes in tests
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Revert to separate WaitExecHookHandler interface
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Negative tests for invalid timeout annotations
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Rename NamedExecRestoreHook to PodExecRestoreHook
Also make field names more descriptive.
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Cleanup test names
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Separate maxHookWait and add unit tests
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Comment on maxWait <= 0
Also log at info level when the container that hooks should execute in is not running.
Also add the context error to the errors for hooks that were not executed.
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Remove log about default for invalid timeout
There is no default wait or exec timeout.
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Linting
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Fix log message and rename controller to podWatcher
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Comment on exactly-once semantics for handler
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Fix logging and comments
Use field logger for pod in handler.
Add comment about pod changes in unit tests.
Use kube util NamespaceAndName in messages.
Signed-off-by: Andrew Reed <andrew@replicated.com>
* Fix maxHookWait
Signed-off-by: Andrew Reed <andrew@replicated.com>
* fix: rename the PV if VolumeSnapshotter has modified the PV name
When the VolumeSnapshotter sets the PV name via SetVolumeID and the PV is
not present in the cluster, Velero does not rename the PV. This causes
the PVC to end up in the Lost state, as the PVC points to the old PV name
while the PV object has been renamed by the VolumeSnapshotter.
Signed-off-by: Pawan <pawan@mayadata.io>
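A hedged sketch of the remap this fix implies; the helper and the exact fields touched are illustrative, not the Velero code itself:

```go
package restore

import corev1 "k8s.io/api/core/v1"

// remapRenamedPV repoints a restored claim when the VolumeSnapshotter handed
// back a PV whose name differs from the one recorded in the backup, so the
// PVC can bind instead of ending up Lost.
func remapRenamedPV(backedUpName string, pv *corev1.PersistentVolume, pvc *corev1.PersistentVolumeClaim) {
	if pv.Name == backedUpName {
		return // nothing was renamed
	}
	pvc.Spec.VolumeName = pv.Name
	// Reset the claimRef to the bare namespace/name so stale UID and
	// resourceVersion values from the backup don't block binding.
	pv.Spec.ClaimRef = &corev1.ObjectReference{
		Namespace: pvc.Namespace,
		Name:      pvc.Name,
	}
}
```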
* adding a test case for pv rename
Signed-off-by: Pawan <pawan@mayadata.io>
* Update release checklist to include more info around blog posts and release announcements
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
* updating links
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
* update from review
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
* update docs to match style guide
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
* update web site guide
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
* add index files to api types folder
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
* updating to using cascade
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
This metadata is required by Hugo to discover the content on the
documentation website; without it, a "page not found" error is shown to
the viewer.
Fixes: #2831
Signed-off-by: Imran Pochi <imran@kinvolk.io>
* add note about windows support
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
* adding to 1.4 docs and adjusting wording to be more clear
Signed-off-by: Abigail McCarthy <mabigail@vmware.com>
The pull_request_target event is like pull_request, but runs in the
context of the target repo (Velero, in this case), instead of the fork.
This allows us to use the GitHub token secret as expected.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Refactor image builds to use buildx for multi arch image building
Signed-off-by: Rob Reus <rob@devrobs.nl>
* Adding image build sanity checks to Makefile
Signed-off-by: Rob Reus <rob@devrobs.nl>
* Making local builds of docker images possible
Signed-off-by: Rob Reus <rob@devrobs.nl>
* Adding docs on building container images using buildx
Signed-off-by: Rob Reus <rob@devrobs.nl>
* Adding changelog and implementing feedback from PR
Signed-off-by: Rob Reus <rob@devrobs.nl>
* Making GOPROXY used in the build containers configurable
Signed-off-by: Rob Reus <rob@devrobs.nl>
* remove explicit Accept-Encoding header
For StorageGrid compatibility the Accept-Encoding header should not be set; otherwise StorageGrid compresses the already compressed log files, which the client then decompresses only once
Signed-off-by: fvsqr <48791253+fvsqr@users.noreply.github.com>
* Removed explicit gzip Accept-Encoding header
For StorageGrid compatibility the Accept-Encoding header should not be set; otherwise StorageGrid compresses the already compressed log files, which the client then decompresses only once.
It is unclear how this affects backup endpoints on Azure or GCP.
Signed-off-by: fvsqr <48791253+fvsqr@users.noreply.github.com>
* Create 2712-fvsqr
Signed-off-by: fvsqr <48791253+fvsqr@users.noreply.github.com>
* Add design proposal for restore hooks
Signed-off-by: Marc Campbell <marc.e.campbell@gmail.com>
* Add details to restore hooks design
Signed-off-by: Marc Campbell <marc.e.campbell@gmail.com>
* Restore initContainers and requested changes
Change post-restore exec hooks to wait for container running status
instead of pod ready status.
Add separate exec timeout and wait timeouts for post-restore exec hooks.
Signed-off-by: Marc Campbell <marc.e.campbell@gmail.com>
Co-authored-by: Andrew Reed <andrew@replicated.com>
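The "wait for container running status" check above is small when written against the core/v1 pod status; this sketch assumes the hook knows its target container's name:

```go
package hooks

import corev1 "k8s.io/api/core/v1"

// containerRunning reports whether the named container has a Running state.
// Per the design, this is the bar a post-restore exec hook waits for; the
// pod as a whole does not need to be Ready.
func containerRunning(pod *corev1.Pod, container string) bool {
	for _, s := range pod.Status.ContainerStatuses {
		if s.Name == container {
			return s.State.Running != nil
		}
	}
	return false
}
```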
* k8s 1.18 import wip
backup, cmd, controller, generated, restic, restore, serverstatusrequest, test and util
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* go mod tidy
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* add changelog file
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* go fmt
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* update code-generator and controller-gen in CI
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* checkout proper code-generator version, regen
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* fix remaining calls
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* regenerate CRDs with ./hack/update-generated-crd-code.sh
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* use existing context in restic and server
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* fix test cases by resetting resource version
also use main library go context, not golang.org/x/net/context, in pkg/restore/restore.go
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* clarify changelog message
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* use github.com/kubernetes-csi/external-snapshotter/v2@v2.2.0-rc1
Signed-off-by: Andrew Lavery <laverya@umich.edu>
* run 'go mod tidy' to remove old external-snapshotter version
Signed-off-by: Andrew Lavery <laverya@umich.edu>
The labels 'Documentation', 'Website', and 'Design' are all for PRs
exclusively related to those things, not code, so they don't need to be
checked for changelogs or have the extra label applied.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* updated acceptable values on cron schedule for day of the week from 0-7 to 0-6
Signed-off-by: Daniel Thrasher <dannythrasher@gmail.com>
* added a changelog file to changelog directory
Signed-off-by: Daniel Thrasher <dannythrasher@gmail.com>
Co-authored-by: Daniel Thrasher <dannythrasher@gmail.com>
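To see the accepted range in practice, a schedule spec can be run through a standard cron parser. The robfig/cron library below is an assumption about the parser in use; the point is only that day-of-week 6 always parses, while 7 is dialect-dependent and should not be documented as universally valid:

```go
package main

import (
	"fmt"

	"github.com/robfig/cron/v3"
)

func main() {
	// Standard five-field cron: minute hour day-of-month month day-of-week,
	// with day-of-week valid from 0 (Sunday) through 6 (Saturday).
	for _, spec := range []string{"0 1 * * 6", "0 1 * * 7"} {
		if _, err := cron.ParseStandard(spec); err != nil {
			fmt.Printf("%q rejected: %v\n", spec, err)
		} else {
			fmt.Printf("%q accepted\n", spec)
		}
	}
}
```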
* Add linter to Makefile and build image
* Also make it part of verify step
Signed-off-by: Tony Batard <tbatard@pivotal.io>
* clean up of Makefile and permissions for .go/golangci-lint
Signed-off-by: Duffie Cooley <cooleyd@vmware.com>
* changed verify-lint.sh to lint.sh to avoid breaking ci
Signed-off-by: mtritabaugh <mtritabaugh@vmware.com>
* Add changelog
Signed-off-by: Tony Batard <tbatard@pivotal.io>
* Add LINTERS option to run only specific linters
* e.g. make lint LINTERS=unused,deadcode
Signed-off-by: Tony Batard <tbatard@pivotal.io>
* adding timeout to golangci-lint, and checking cache
Signed-off-by: Matyas Danter <mdanter@vmware.com>
* Fixed some formatting and added comments
Signed-off-by: Matyas Danter <mdanter@vmware.com>
* modifying lint script to use golangci.yaml
Signed-off-by: Matyas Danter <mdanter@vmware.com>
* update to move default linters to Makefile
Signed-off-by: mtritabaugh <mtritabaugh@vmware.com>
* Adding documentation for lint make targets.
Signed-off-by: Matyas Danter <mdanter@vmware.com>
* Update Copyright with current year
Signed-off-by: Tony Batard <tbatard@pivotal.io>
* initial git workflow commit
Signed-off-by: mtritabaugh <mtritabaugh@vmware.com>
* Added lint-all target and implemented -n as default
* Added a local-lint-all and lint-all target that will show lint errors
for all of the codebase
* changed the default of lint and local-lint to only show new lint
errors
Signed-off-by: Duffie Cooley <cooleyd@vmware.com>
* updated docs to reflect new target
Signed-off-by: mtritabaugh <mtritabaugh@vmware.com>
Co-authored-by: Duffie Cooley <cooleyd@vmware.com>
Co-authored-by: mtritabaugh <mtritabaugh@vmware.com>
Co-authored-by: Matyas Danter <mdanter@vmware.com>
* kubebuilder init - minimalist version
Signed-off-by: Carlisia <carlisia@vmware.com>
* Add back main.go, apparently kb needs it
Signed-off-by: Carlisia <carlisia@vmware.com>
* Tweak makefile to accommodate kubebuilder expectations
Signed-off-by: Carlisia <carlisia@vmware.com>
* Port BSL to kubebuilder api client
Signed-off-by: Carlisia <carlisia@vmware.com>
* s/cache/client because client fetches from cache
And other naming improvements
Signed-off-by: Carlisia <carlisia@vmware.com>
* So, .GetAPIReader is how we bypass the cache
In this case, the cache hasn't started yet
Signed-off-by: Carlisia <carlisia@vmware.com>
* Oh that's what this code was for... adding back
We still need to embed the CRDs as binary data in the Velero binary to
access the generated CRDs at runtime.
Signed-off-by: Carlisia <carlisia@vmware.com>
* Tie in CRD/code generation w/ existing scripts
Signed-off-by: Carlisia <carlisia@vmware.com>
* Mostly result of running update-fmt, updated file formatting
Signed-off-by: Carlisia <carlisia@vmware.com>
* Just a copyright fix
Signed-off-by: Carlisia <carlisia@vmware.com>
* All the test fixes
Signed-off-by: Carlisia <carlisia@vmware.com>
* Add changelog + some cleanup
Signed-off-by: Carlisia <carlisia@vmware.com>
* Update backup manifest
Signed-off-by: Carlisia <carlisia@vmware.com>
* Remove unneeded auto-generated files
Signed-off-by: Carlisia <carlisia@vmware.com>
* Keep everything in the same (existing) package
Signed-off-by: Carlisia <carlisia@vmware.com>
* Fix/clean scripts, generated code, and calls
Deleting the entire `generated` directory and running `make update`
works. Modifying an api and running `make verify` works as expected.
Signed-off-by: Carlisia <carlisia@vmware.com>
* Clean up schema and client calls + code reviews
Signed-off-by: Carlisia <carlisia@vmware.com>
* Move all code gen to inside builder container
Signed-off-by: Carlisia <carlisia@vmware.com>
* Address code review
Signed-off-by: Carlisia <carlisia@vmware.com>
* Fix imports/aliases
Signed-off-by: Carlisia <carlisia@vmware.com>
* More code reviews
Signed-off-by: Carlisia <carlisia@vmware.com>
* Add waitforcachesync
Signed-off-by: Carlisia <carlisia@vmware.com>
* Have manager register ALL controllers
This will allow for proper cache management.
Signed-off-by: Carlisia <carlisia@vmware.com>
* Status subresource is now enabled; cleanup
Signed-off-by: Carlisia <carlisia@vmware.com>
* More code reviews
Signed-off-by: Carlisia <carlisia@vmware.com>
* Clean up
Signed-off-by: Carlisia <carlisia@vmware.com>
* Manager registers ALL controllers for restic too
Signed-off-by: Carlisia <carlisia@vmware.com>
* More code reviews
Signed-off-by: Carlisia <carlisia@vmware.com>
* Add deprecation warning/todo
Signed-off-by: Carlisia <carlisia@vmware.com>
* Add documentation
Signed-off-by: Carlisia <carlisia@vmware.com>
* Add helpful comments
Signed-off-by: Carlisia <carlisia@vmware.com>
* Address code review
Signed-off-by: Carlisia <carlisia@vmware.com>
* More idiomatic Runnable
Signed-off-by: Carlisia <carlisia@vmware.com>
* Clean up imports
Signed-off-by: Carlisia <carlisia@vmware.com>
* log a warning instead of erroring if additional item can't be found
Signed-off-by: Steve Kriss <krisss@vmware.com>
* always show backup warning/error count in get/describe
Signed-off-by: Steve Kriss <krisss@vmware.com>
* changelog
Signed-off-by: Steve Kriss <krisss@vmware.com>
* Use a helper function when querying w/ backup label
Setting or querying for a backup label name should always pass the value
through the GetValidName function. This change passes query uses of the
backup label value through the GetValidName function by introducing two
new helpers: one for making a Selector, and one for making a ListOptions.
It also removes functions that returned the same data under
unnecessarily specific names.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
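The shape of those helpers, sketched under the assumption that the validated value is a truncate-plus-hash of the original name (the algorithm shown and the exact label key are illustrative):

```go
package label

import (
	"crypto/sha256"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/selection"
)

const maxLabelLength = 63

// validName sketches a GetValidName-style helper: Kubernetes label values are
// capped at 63 characters, so an over-long backup name is truncated and a
// short hash suffix keeps the result distinct.
func validName(name string) string {
	if len(name) <= maxLabelLength {
		return name
	}
	sum := fmt.Sprintf("%x", sha256.Sum256([]byte(name)))
	return name[:maxLabelLength-6] + sum[:6]
}

// listOptionsForBackup shows why shared helpers matter: every query must pass
// the backup name through the same validation used when the label was set,
// or long names will never match.
func listOptionsForBackup(backup string) (metav1.ListOptions, error) {
	req, err := labels.NewRequirement("velero.io/backup-name", selection.Equals, []string{validName(backup)})
	if err != nil {
		return metav1.ListOptions{}, err
	}
	return metav1.ListOptions{LabelSelector: labels.NewSelector().Add(*req).String()}, nil
}
```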
* Document using the label.GetValidName function
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Update copyright year
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Clarify labels.GetValidName and annotations
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Move functions to pkg/label
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Fix function comments
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Switch to backing up v1beta1 CRDs from API server
Instead of simply switching out the APIVersion string on a v1
CustomResourceDefinition object, re-download the object from the API
server entirely to get the correct fields.
This should fix validation errors upon restore.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
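A sketch of the re-download, using the apiextensions clientset; the kubeconfig handling and CRD name are assumptions, and the v1beta1 client only works against clusters that still serve that API (it was removed in Kubernetes 1.22):

```go
package main

import (
	"context"
	"fmt"

	apiextensions "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := apiextensions.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the CRD through the v1beta1 API so the version-specific fields
	// come back exactly as that API serves them, instead of rewriting the
	// apiVersion string on a v1 object.
	crd, err := client.ApiextensionsV1beta1().CustomResourceDefinitions().Get(
		context.TODO(), "backups.velero.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(crd.Name, "versions:", len(crd.Spec.Versions))
}
```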
* Fix existing tests
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Add full example CRDs to automated tests
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Move beta CRD lookup into helper function
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Add case for preserveUnknownFields CRDs
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Add PreserveUnknownFields case and refactor execute
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Add older prometheus CRD test cases
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Add changelog
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Backup all API Groups versions while keeping backward compatibility
Signed-off-by: Rafael Brito <rbrito@vmware.com>
* Backup all API Groups versions while keeping backward compatibility
Signed-off-by: Rafael Brito <rbrito@vmware.com>
* Adding feature flag to enable backup of multiple API group versions
Signed-off-by: Rafael Brito <rbrito@vmware.com>
* Install ca-certificates for ARM based container builds.
Signed-off-by: David Colon <dave@colon.dev>
* Adding changelog for PR 2481
Signed-off-by: David Colon <dave@colon.dev>
* Add documentation for --cacert feature
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
* Document objectStorage/caCert field.
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
* Add link to ca bundle docs in TOC and customize-installation
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
* bug fix: don't remove unresolvable includes from includes-excludes lists
Signed-off-by: Steve Kriss <krisss@vmware.com>
* changelog
Signed-off-by: Steve Kriss <krisss@vmware.com>
* Disabling validation for volumesnapshotlocation if the backup has snapshotvolume set to false
Signed-off-by: mayank <mayank.patel@mayadata.io>
* adding a changelog
Signed-off-by: mayank <mayank.patel@mayadata.io>
* addressing review comment
Signed-off-by: mayank <mayank.patel@mayadata.io>
Informers won't start if the cancelFunc is invoked as soon as the
newServer function exits via the defer
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
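The bug pattern in a few lines; function and field names are illustrative. The deferred cancel fires as soon as the constructor returns, so any informer later started from that context is already cancelled. The fix is to hand the cancel func to whoever owns the server's shutdown.

```go
package main

import (
	"context"
	"fmt"
)

type server struct {
	ctx    context.Context
	cancel context.CancelFunc
}

// newServerBuggy shows the failure mode: defer cancel() runs when this
// function returns, cancelling ctx before any informer gets to use it.
func newServerBuggy() *server {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // BUG: ctx is dead the moment newServerBuggy returns
	return &server{ctx: ctx}
}

// newServer keeps the cancel func on the server so it runs at shutdown,
// not at construction time.
func newServer() *server {
	ctx, cancel := context.WithCancel(context.Background())
	return &server{ctx: ctx, cancel: cancel}
}

func main() {
	fmt.Println("buggy ctx already cancelled:", newServerBuggy().ctx.Err() != nil) // true
	s := newServer()
	fmt.Println("fixed ctx still live:", s.ctx.Err() == nil) // true
	s.cancel()
}
```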
All changes to go.mod/go.sum besides the external-snapshotter repo are a
result of its transitive dependencies.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
Account for having CSI enabled or not, as well as having the snapshot
CRDs installed in the kubernetes cluster.
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Add --cacert flag to velero cli commands
Adds a --cacert flag to the log and describe commands
that takes a path to a PEM-encoded certificate bundle
as an alternative to --insecure-skip-tls-verify for
dealing with self-signed certificates.
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
* Add --cacert flag to the installer
Allows setting the caCert field on the BSL during
the install process, using the file at the path
specified by the --cacert flag.
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
* Add changelog for #2368
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
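The mechanics behind such a flag are standard Go TLS configuration; a minimal sketch (the flag handling and where the transport gets wired in are assumptions):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

// transportWithCACert trusts the PEM bundle at path as the set of root CAs
// for TLS connections, instead of disabling verification outright.
func transportWithCACert(path string) (*http.Transport, error) {
	pem, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		return nil, fmt.Errorf("no certificates found in %s", path)
	}
	return &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}}, nil
}

func main() {
	t, err := transportWithCACert(os.Args[1])
	if err != nil {
		panic(err)
	}
	_ = &http.Client{Transport: t} // use for object storage requests
}
```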
The CSI plugin for Velero will use this to return secrets as additional
resources while backing up CSI objects
Signed-off-by: Ashish Amarnath <ashisham@vmware.com>
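In the plugin API, a backup item action signals extra objects to back up by returning resource identifiers. A sketch of pulling in a Secret; the import path matches Velero's plugin package, but treat the details as illustrative:

```go
package plugin

import (
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// additionalSecret builds the identifier a BackupItemAction would return so
// that Velero also backs up the referenced Secret alongside the CSI object.
func additionalSecret(namespace, name string) velero.ResourceIdentifier {
	return velero.ResourceIdentifier{
		GroupResource: schema.GroupResource{Group: "", Resource: "secrets"},
		Namespace:     namespace,
		Name:          name,
	}
}
```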
* Support setting a custom CA certificate for a BSL
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
* update CRDs
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
* Add changelog for #2353
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
* Clean up temp file from TestTempCACertFile
Signed-off-by: Sam Lucidi <slucidi@redhat.com>
* Add builders for CRD schemas
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Add test case for #2319
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Add failing test case
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Remove unnecessary print and temporary variable
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Add some options for fixing the test case
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Switch to a JSON middle step to "fix" conversions
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
* Add comment and changelog
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
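A minimal sketch of the JSON round-trip pattern that commit names; the mechanics are generic Go: marshalling the unstructured map and unmarshalling into the typed struct drops fields the struct doesn't declare instead of failing the conversion.

```go
package backup

import (
	"encoding/json"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// toTypedCRD converts an unstructured CRD into its typed v1beta1 form via a
// JSON round trip.
func toTypedCRD(u *unstructured.Unstructured) (*apiextv1beta1.CustomResourceDefinition, error) {
	data, err := json.Marshal(u.Object)
	if err != nil {
		return nil, err
	}
	var crd apiextv1beta1.CustomResourceDefinition
	if err := json.Unmarshal(data, &crd); err != nil {
		return nil, err
	}
	return &crd, nil
}
```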
@@ -14,11 +14,11 @@ about: Tell us about a problem you are experiencing
**The output of the following commands will help us better understand what's going on**:
(Pasting long output into a [GitHub gist](https://gist.github.com) or other pastebin is fine.)
* `kubectl logs deployment/velero -n velero`
* `velero backup describe <backupname>` or `kubectl get backup/<backupname> -n velero -o yaml`
* `velero backup logs <backupname>`
* `velero restore describe <restorename>` or `kubectl get restore/<restorename> -n velero -o yaml`
* `velero restore logs <restorename>`
-`kubectl logs deployment/velero -n velero`
-`velero backup describe <backupname>` or `kubectl get backup/<backupname> -n velero -o yaml`
-`velero backup logs <backupname>`
-`velero restore describe <restorename>` or `kubectl get restore/<restorename> -n velero -o yaml`
-`velero restore logs <restorename>`
**Anything else you would like to add:**
@@ -33,3 +33,12 @@ about: Tell us about a problem you are experiencing
- Kubernetes installer & version:
- Cloud provider or hardware configuration:
- OS (e.g. from `/etc/os-release`):
**Vote on this issue!**
This is an invitation to the Velero community to vote on issues. You can see the project's [top voted issues listed here](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "I would like to see this bug fixed as soon as possible"
- :-1: for "There are more important bugs to focus on right now"
@@ -23,3 +23,11 @@ about: Suggest an idea for this project
- Kubernetes installer & version:
- Cloud provider or hardware configuration:
- OS (e.g. from `/etc/os-release`):
**Vote on this issue!**
This is an invitation to the Velero community to vote on issues. You can see the project's [top voted issues listed here](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "The project would be better with this feature added"
- :-1: for "This feature will not enhance the project in a meaningful way"
stale-issue-message: "This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days. If a Velero team member has requested log or more information, please provide the output of the shared commands."
close-issue-message: "This issue was closed because it has been stalled for 5 days with no activity."
days-before-issue-stale: 30
days-before-issue-close: 5
# Disable stale PRs for now; they can remain open.
days-before-pr-stale: -1
days-before-pr-close: -1
# Only consider issues created after 2021-09-02.
start-date: "2021-09-02T00:00:00"
# Only make issues stale if they have these labels. Comma separated.
@@ -50,9 +52,15 @@ SIGHUP integrates Velero in its [Fury Kubernetes Distribution][81] providing pre
**[MayaData][90]**
MayaData is a large user of Velero as well as a contributor. MayaData offers a Data Agility platform called [OpenEBS Director][91], which helps customers confidently and easily manage stateful workloads in Kubernetes. Velero is one of the core software building blocks of the OpenEBS Director's [DMaaS or data migration as a service offering][92] used to enable data protection strategies.
**[Okteto][93]**
Okteto integrates Velero in [Okteto Cloud][94] and [Okteto Enterprise][95] to periodically back up and restore our clusters for disaster recovery. Velero is also a core software building block for providing namespace cloning capabilities, a feature that allows our users to clone staging environments into their personal development namespaces, providing production-like development environments.
**[Replicated][100]**<br>
Replicated uses the Velero open source project to enable snapshots in [KOTS][101] to back up Kubernetes manifests & persistent volumes. In addition to the default functionality that Velero provides, [KOTS][101] provides a detailed interface in the [Admin Console][102] that can be used to manage the storage destination and schedule, and to perform and monitor the backup and restore process.
## Adding your organization to the list of Velero Adopters
If you are using Velero and would like to be included in the list of `Velero Adopters`, add an SVG version of your logo to the `site/static/img/adopters` directory in this repo and submit a [pull request][3] with your change. Name the image file something that reflects your company (e.g., if your company is called Acme, name the image acme.png). See this for an example [PR][4].
### Adding a logo to velero.io
@@ -95,3 +103,10 @@ If you would like to add your logo to a future `Adopters of Velero` section on [
We as members, contributors, and leaders pledge to make participation in the Velero project and our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
@@ -29,56 +38,90 @@ Examples of unacceptable behavior include:
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at oss-coc@vmware.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0,
available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Authors are expected to follow some guidelines when submitting PRs. Please see [our documentation](https://velero.io/docs/main/code-standards/) for details.
This document defines the project governance for Velero.
## Overview
**Velero**, an open source project, is committed to building an open, inclusive, productive and self-governing open source community focused on building a high-quality tool that enables users to safely back up and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes. The community is governed by this document, which defines how the community should work together to achieve this goal.
## Code Repositories
The following code repositories are governed by the Velero community and maintained under the `vmware-tanzu` organization.
* **[Velero](https://github.com/vmware-tanzu/velero):** Main Velero codebase
* **[Helm Chart](https://github.com/vmware-tanzu/helm-charts/tree/main/charts/velero):** The Helm chart for the Velero server component
* **[Velero CSI Plugin](https://github.com/vmware-tanzu/velero-plugin-for-csi):** This repository contains Velero plugins for snapshotting CSI backed PVCs using the CSI beta snapshot APIs
* **[Velero Plugin for vSphere](https://github.com/vmware-tanzu/velero-plugin-for-vsphere):** This repository contains the Velero Plugin for vSphere. This plugin is a volume snapshotter plugin that provides crash-consistent snapshots of vSphere block volumes and backup of volume data into S3 compatible storage.
* **[Velero Plugin for AWS](https://github.com/vmware-tanzu/velero-plugin-for-aws):** This repository contains the plugins to support running Velero on AWS, including the object store plugin and the volume snapshotter plugin
* **[Velero Plugin for GCP](https://github.com/vmware-tanzu/velero-plugin-for-gcp):** This repository contains the plugins to support running Velero on GCP, including the object store plugin and the volume snapshotter plugin
* **[Velero Plugin for Azure](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure):** This repository contains the plugins to support running Velero on Azure, including the object store plugin and the volume snapshotter plugin
* **[Velero Plugin Example](https://github.com/vmware-tanzu/velero-plugin-example):** This repository contains example plugins for Velero
## Community Roles
* **Users:** Members that engage with the Velero community via any medium (Slack, GitHub, mailing lists, etc.).
* **Contributors:** Members who contribute regularly to the projects (documentation, code reviews, responding to issues, participation in proposal discussions, contributing code, etc.).
* **Maintainers**: The Velero project leaders. They are responsible for the overall health and direction of the project; final reviewers of PRs and responsible for releases. Some Maintainers are responsible for one or more components within a project, acting as technical leads for that component. Maintainers are expected to contribute code and documentation, review PRs including ensuring quality of code, triage issues, proactively fix bugs, and perform maintenance tasks for these components.
### Maintainers
New maintainers must be nominated by an existing maintainer and must be elected by a supermajority of existing maintainers. Likewise, maintainers can be removed by a supermajority of the existing maintainers or can resign by notifying one of the maintainers.
### Supermajority
A supermajority is defined as two-thirds of members in the group.
A supermajority of [Maintainers](#maintainers) is required for certain
decisions as outlined above. A supermajority vote is equivalent to the number of votes in favor being at least twice the number of votes against. For example, if you have 5 maintainers, a supermajority vote is 4 votes. Voting on decisions can happen on the mailing list, GitHub, Slack, email, or via a voting service, when appropriate. Maintainers can either vote "agree, yes, +1", "disagree, no, -1", or "abstain". A vote passes when a supermajority is met. An abstain vote equals not voting at all.
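Stated as a formula, the two rules in the paragraph above are:

```latex
\text{pass} \iff v_{\mathrm{yes}} \ge 2\, v_{\mathrm{no}},
\qquad
\text{threshold for } n \text{ members} = \left\lceil \tfrac{2n}{3} \right\rceil,
\quad
n = 5 \implies \left\lceil \tfrac{10}{3} \right\rceil = 4 .
```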
### Decision Making
Ideally, all project decisions are resolved by consensus. If impossible, any
maintainer may call a vote. Unless otherwise specified in this document, any
vote will be decided by a supermajority of maintainers.
Votes by maintainers belonging to the same company
will count as one vote; e.g., 4 maintainers employed by fictional company **Valerium** will
only have **one** combined vote. If voting members from a given company do not
agree, the company's vote is determined by a supermajority of voters from that
company. If no supermajority is achieved, the company is considered to have
abstained.
## Proposal Process
One of the most important aspects in any open source community is the concept
of proposals. Large changes to the codebase and / or new features should be
preceded by a proposal in our community repo. This process allows for all
members of the community to weigh in on the concept (including the technical
details), share their comments and ideas, and offer to help. It also ensures
that members are not duplicating work or inadvertently stepping on toes by
making large conflicting changes.
The project roadmap is defined by accepted proposals.
Proposals should cover the high-level objectives, use cases, and technical
recommendations on how to implement. In general, the community member(s)
interested in implementing the proposal should be either deeply engaged in the
proposal process or be an author of the proposal.
The proposal should be documented as a separate markdown file pushed to the root of the
`design` folder in the [Velero](https://github.com/vmware-tanzu/velero/tree/main/design)
repository via PR. The name of the file should follow the name pattern `<short
meaningful words joined by '-'>_design.md`, e.g.
`restore-hooks-design.md`.
Use the [Proposal Template](https://github.com/vmware-tanzu/velero/blob/main/design/_template.md) as a starting point.
### Proposal Lifecycle
The proposal PR can follow the GitHub lifecycle of the PR to indicate its status:
* **Open**: Proposal is created and under review and discussion.
* **Merged**: Proposal has been reviewed and is accepted (either by consensus or through a vote).
* **Closed**: Proposal has been reviewed and was rejected (either by consensus or through a vote).
## Lazy Consensus
To maintain velocity in a project as busy as Velero, the concept of [Lazy
Consensus](http://en.osswiki.info/concepts/lazy_consensus) is practiced. Ideas
and / or proposals should be shared by maintainers via
GitHub with the appropriate maintainer groups (e.g.,
`@vmware-tanzu/velero-maintainers`) tagged. Out of respect for other contributors,
major changes should also be accompanied by a ping on Slack or a note on the
Velero mailing list as appropriate. Authors of proposals, pull requests,
issues, etc. will allow a period of no less than five (5) working days for
comment and remain cognizant of popular observed world holidays.
Other maintainers may chime in and request additional time for review, but
should remain cognizant of blocking progress and abstain from delaying
progress unless absolutely needed. The expectation is that blocking progress
is accompanied by a guarantee to review and respond to the relevant action(s)
(proposals, PRs, issues, etc.) in short order.
Lazy Consensus is practiced for all projects in the `Velero` org, including
the main project repository and the additional repositories.
Lazy consensus does _not_ apply to the process of:
* Removal of maintainers from Velero
## Updating Governance
All substantive changes in Governance require a supermajority agreement by all maintainers.
[![Build Status][1]][2] [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/3811/badge)](https://bestpractices.coreinfrastructure.org/projects/3811)
## Overview
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a public cloud platform or on-premises. Velero lets you:
* Take backups of your cluster and restore in case of loss.
* Migrate cluster resources to other clusters.
@@ -33,8 +34,8 @@ If you are ready to jump in and test, add code, or help with documentation, foll
See [the list of releases][6] to find out about feature changes.
This document provides a link to the [Velero Project boards](https://github.com/vmware-tanzu/velero/projects), which serve as the up-to-date description of items that are in the release pipeline. The release boards have separate swim lanes based on prioritization. Most items are gathered from the community or include a feedback loop with the community. This should serve as a reference point for Velero users and contributors to understand where the project is heading, and help determine if a contribution could be conflicting with a longer term plan.
### How to help?
Discussion on the roadmap can take place in threads under [Issues](https://github.com/vmware-tanzu/velero/issues) or in [community meetings](https://velero.io/community/). Please open and comment on an issue if you want to provide suggestions, use cases, and feedback to an item in the roadmap. Please review the roadmap to avoid potential duplicated effort.
### How to add an item to the roadmap?
One of the most important aspects in any open source community is the concept of proposals. Large changes to the codebase and / or new features should be preceded by a [proposal](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md#proposal-process) in our repo.
For smaller enhancements, you can open an issue to track that initiative or feature request.
We work with and rely on community feedback to focus our efforts to improve Velero and maintain a healthy roadmap.
### Current Roadmap
The following table includes the current roadmap for Velero. If you have any questions or would like to contribute to Velero, please attend a [community meeting](https://velero.io/community/) to discuss with our team. If you don't know where to start, we are always looking for contributors that will help us reduce technical, automation, and documentation debt.
Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Velero.
`Last Updated: October 2021`
#### 1.8.0 Roadmap (to be delivered January/February 2022)
|Issue|Description|Timeline|Notes|
|---|---|---|---|
|[4108](https://github.com/vmware-tanzu/velero/issues/4108), [4109](https://github.com/vmware-tanzu/velero/issues/4109)|Solution for CSI - Azure and AWS|2022 H1|Currently, Velero plugins for AWS and Azure cannot back up persistent volumes that were provisioned using the CSI driver. This will fix that.|
|[3229](https://github.com/vmware-tanzu/velero/issues/3229),[4112](https://github.com/vmware-tanzu/velero/issues/4112)|Moving data mover functionality from the Velero Plugin for vSphere into Velero proper|2022 H1|This work is a precursor to decoupling the Astrolabe snapshotting infrastructure.|
|[3533](https://github.com/vmware-tanzu/velero/issues/3533)|Upload Progress Monitoring|2022 H1|Finishing up the work done in the 1.7 timeframe. The data mover work depends on this.|
|[1975](https://github.com/vmware-tanzu/velero/issues/1975)|Test dual stack mode|2022 H1|We already tested IPv6, but we want to confirm that dual stack mode works as well.|
|[2082](https://github.com/vmware-tanzu/velero/issues/2082)|Delete Backup CRs on removing target location. |2022 H1||
|[3516](https://github.com/vmware-tanzu/velero/issues/3516)|Restore issue with MutatingWebhookConfiguration v1beta1 API version|2022 H1||
|[2308](https://github.com/vmware-tanzu/velero/issues/2308)|Restoring nodePort service that has nodePort preservation always fails if service already exists in the namespace|2022 H1||
|[4115](https://github.com/vmware-tanzu/velero/issues/4115)|Support for multiple set of credentials for VolumeSnapshotLocations|2022 H1||
|[1980](https://github.com/vmware-tanzu/velero/issues/1980)|Velero triggers backup immediately for scheduled backups|2022 H1||
|[4067](https://github.com/vmware-tanzu/velero/issues/4067)|Pre and post backup and restore hooks|2022 H1||
|[3742](https://github.com/vmware-tanzu/velero/issues/3742)|Carvel packaging for Velero for vSphere|2022 H1|AWS and Azure have been completed already.|
|[3285](https://github.com/vmware-tanzu/velero/issues/3285)|Design doc for Velero plugin versioning|2022 H1||
|[4231](https://github.com/vmware-tanzu/velero/issues/4231)|Technical health (prioritizing giving developers confidence and saving developers time)|2022 H1|More automated tests (especially the pre-release manual tests) and more automation of the running of tests.|
|[4110](https://github.com/vmware-tanzu/velero/issues/4110)|Solution for CSI - GCP|2022 H1|Currently, the Velero plugin for GCP cannot back up persistent volumes that were provisioned using the CSI driver. This will fix that.|
|[3742](https://github.com/vmware-tanzu/velero/issues/3742)|Carvel packaging for Velero for restic|2022 H1|AWS and Azure have been completed already.|
|[4111](https://github.com/vmware-tanzu/velero/issues/4111)|Ignore items returned by ItemSnapshotter.AlsoHandles during backup|2022 H1|This will enable backup of complex objects, because we can then tell Velero to ignore things that were already backed up when Velero was previously called recursively.|
Other work may make it into the 1.8 release, but this is the work that will be prioritized first.
Velero is an open source tool with a growing community devoted to safe backup and restore, disaster recovery, and data migration of Kubernetes resources and persistent volumes. The community has adopted this security disclosure and response policy to ensure we responsibly handle critical issues.
## Supported Versions
The Velero project maintains the following [governance document](https://github.com/vmware-tanzu/velero/blob/main/GOVERNANCE.md), [release document](https://github.com/vmware-tanzu/velero/blob/f42c63af1b9af445e38f78a7256b1c48ef79c10e/site/docs/main/release-instructions.md), and [support document](https://velero.io/docs/main/support-process/). Please refer to these for release and related details. Only the most recent version of Velero is supported. Each [release](https://github.com/vmware-tanzu/velero/releases) includes information about upgrading to the latest version.
## Reporting a Vulnerability - Private Disclosure Process
Security is of the highest importance and all security vulnerabilities or suspected security vulnerabilities should be reported to Velero privately, to minimize attacks against current users of Velero before they are fixed. Vulnerabilities will be investigated and patched on the next patch (or minor) release as soon as possible. This information could be kept entirely internal to the project.
If you know of a publicly disclosed security vulnerability for Velero, please **IMMEDIATELY** contact the VMware Security Team (security@vmware.com).
**IMPORTANT: Do not file public issues on GitHub for security vulnerabilities**
To report a vulnerability or a security-related issue, please contact the VMware email address with the details of the vulnerability. The email will be fielded by the VMware Security Team and then shared with the Velero maintainers who have committer and release permissions. Emails will be addressed within 3 business days, including a detailed plan to investigate the issue and any potential workarounds to perform in the meantime. Do not report non-security-impacting bugs through this channel. Use [GitHub issues](https://github.com/vmware-tanzu/velero/issues/new/choose) instead.
## Proposed Email Content
Provide a descriptive subject line and in the body of the email include the following information:
* Basic identity information, such as your name and your affiliation or company.
* Detailed steps to reproduce the vulnerability (POC scripts, screenshots, and logs are all helpful to us).
* Description of the effects of the vulnerability on Velero and the related hardware and software configurations, so that the VMware Security Team can reproduce it.
* How the vulnerability affects Velero usage and an estimation of the attack surface, if there is one.
* List other projects or dependencies that were used in conjunction with Velero to produce the vulnerability.
## When to report a vulnerability
* When you think Velero has a potential security vulnerability.
* When you suspect a potential vulnerability but you are unsure that it impacts Velero.
* When you know of or suspect a potential vulnerability on another project that is used by Velero.
## Patch, Release, and Disclosure
The VMware Security Team will respond to vulnerability reports as follows:
1. The Security Team will investigate the vulnerability and determine its effects and criticality.
2. If the issue is not deemed to be a vulnerability, the Security Team will follow up with a detailed reason for rejection.
3. The Security Team will initiate a conversation with the reporter within 3 business days.
4. If a vulnerability is acknowledged and the timeline for a fix is determined, the Security Team will work on a plan to communicate with the appropriate community, including identifying mitigating steps that affected users can take to protect themselves until the fix is rolled out.
5. The Security Team will also create a [CVSS](https://www.first.org/cvss/specification-document) score using the [CVSS Calculator](https://www.first.org/cvss/calculator/3.0). The Security Team makes the final call on the calculated CVSS; it is better to move quickly than to make the CVSS perfect. Issues may also be reported to [Mitre](https://cve.mitre.org/) using this [scoring calculator](https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator). The CVE will initially be set to private.
6. The Security Team will work on fixing the vulnerability and perform internal testing before preparing to roll out the fix.
7. The Security Team will provide early disclosure of the vulnerability by emailing the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list. Distributors can initially plan for the vulnerability patch ahead of the fix, and later can test the fix and provide feedback to the Velero team. See the section **Early Disclosure to Velero Distributors List** for details about how to join this mailing list.
8. A public disclosure date is negotiated by the VMware Security Team, the bug submitter, and the distributors list. We prefer to fully disclose the bug as soon as possible once a user mitigation or patch is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for distributor coordination. The timeframe for disclosure is from immediate (especially if it’s already publicly known) to a few weeks. For a critical vulnerability with a straightforward mitigation, we expect the time from report to public disclosure to be on the order of 14 business days. The VMware Security Team holds the final say when setting a public disclosure date.
9. Once the fix is confirmed, the Security Team will patch the vulnerability in the next patch or minor release, and backport a patch release into all earlier supported releases. Upon release of the patched version of Velero, we will follow the **Public Disclosure Process**.
## Public Disclosure Process
The Security Team publishes a [public advisory](https://github.com/vmware-tanzu/velero/security/advisories) to the Velero community via GitHub. In most cases, additional communication via Slack, Twitter, mailing lists, blog and other channels will assist in educating Velero users and rolling out the patched release to affected users.
The Security Team will also publish any mitigating steps users can take until the fix can be applied to their Velero instances. Velero distributors will handle creating and publishing their own security advisories.
## Mailing lists
* Use security@vmware.com to report security concerns to the VMware Security Team, who uses the list to privately discuss security issues and fixes prior to disclosure.
* Join the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list for early private information and vulnerability disclosure. Early disclosure may include mitigating steps and additional information on security patch releases. See below for information on how Velero distributors or vendors can apply to join this list.
## Early Disclosure to Velero Distributors List
The private list is intended to be used primarily to provide actionable information to multiple distributor projects at once. This list is not intended to inform individuals about security issues.
## Membership Criteria
To be eligible to join the [Velero Distributors](https://groups.google.com/u/1/g/projectvelero-distributors) mailing list, you should:
1. Be an active distributor of Velero.
2. Have a user base that is not limited to your own organization.
3. Have a publicly verifiable track record up to the present day of fixing security issues.
4. Not be a downstream or rebuild of another distributor.
5. Be a participant and active contributor in the Velero community.
6. Accept the Embargo Policy that is outlined below.
7. Have someone who is already on the list vouch for the person requesting membership on behalf of your distribution.
**The terms and conditions of the Embargo Policy apply to all members of this mailing list. A request for membership represents your acceptance to the terms and conditions of the Embargo Policy.**
## Embargo Policy
The information that members receive on the Velero Distributors mailing list must not be made public, shared, or even hinted at anywhere beyond those who need to know within your specific team, unless you receive explicit approval to do so from the VMware Security Team. This remains true until the public disclosure date/time agreed upon by the list. Members of the list and others cannot use the information for any reason other than to get the issue fixed for your respective distribution's users.
Before you share any information from the list with members of your team who are required to fix the issue, these team members must agree to the same terms, and only be provided with information on a need-to-know basis.
In the unfortunate event that you share information beyond what is permitted by this policy, you must urgently inform the VMware Security Team (security@vmware.com) of exactly what information was leaked and to whom. If you continue to leak information and break the policy outlined here, you will be permanently removed from the list.
## Requesting to Join
Send new membership requests to projectvelero-distributors@googlegroups.com. In the body of your request please specify how you qualify for membership and fulfill each criterion listed in the Membership Criteria section above.
## Confidentiality, integrity and availability
We consider vulnerabilities leading to the compromise of data confidentiality, elevation of privilege, or integrity to be our highest priority concerns. Availability, in particular in areas relating to DoS and resource exhaustion, is also a serious security concern. The VMware Security Team takes all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities seriously and will investigate them in an urgent and expeditious manner.
Note that we do not currently consider the default settings for Velero to be secure-by-default. It is necessary for operators to explicitly configure settings, role based access control, and other resource related features in Velero to provide a hardened Velero environment. We will not act on any security disclosure that relates to a lack of safe defaults. Over time, we will work towards improved safe-by-default configuration, taking into account backwards compatibility.
# Set up a local_resource build of the plugin binary. The main.go path must be provided via go_main option. The binary is written to _tiltbuild/<NAME>.
This folder contains logo images for Velero in gray (for light backgrounds) and white (for dark backgrounds like black tshirts or dark mode!) – horizontal and stacked… in .eps and .svg.
## Some general guidelines for usage
• Don’t alter the logos/graphics: no resizing, reformatting, or recoloring. Keep them intact.
• Don’t separate the word mark (Velero) from the icon. We are still building a strong name and identity, and the icon by itself doesn’t have strong recognition or association as yet, so best practice is to keep the two together. Nike kept its name with the swoosh for quite some time before the swoosh became iconic.
• Don’t append the name to another brand – let it stand alone!
@@ -69,8 +69,8 @@ carefully to ensure a successful upgrade!**
- The `Config` CRD has been replaced by `BackupStorageLocation` and `VolumeSnapshotLocation` CRDs.
- The interface for external plugins (object/block stores, backup/restore item actions) has changed. If you have authored any custom plugins, they'll
need to be updated for v0.10.
- The [`ObjectStore.ListCommonPrefixes`](https://github.com/vmware-tanzu/velero/blob/main/pkg/cloudprovider/object_store.go#L50) signature has changed to add a `prefix` parameter.
- Registering plugins has changed. Create a new plugin server with the `NewServer` function, and register plugins with the appropriate functions. See the [`Server`](https://github.com/vmware-tanzu/velero/blob/main/pkg/plugin/server.go#L37) interface for details.
- The organization of Ark data in object storage has changed. Existing data will need to be moved around to conform to the new layout.
### All Changes
- [ec013e6f](https://github.com/heptio/ark/commit/ec013e6f) Document upgrading plugins in the deployment
* Initialize Prometheus metrics when creating a new schedule (#689, @lemaral)
## v0.9.2
### Highlights:
* Ark now has support for backing up and restoring Kubernetes volumes using a free open-source backup tool called [restic](https://github.com/restic/restic).
This provides users with an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume, whether or not it has snapshot support
integrated with Ark. For more information, see the [documentation](https://github.com/vmware-tanzu/velero/blob/main/docs/restic.md).
* Support for Prometheus metrics has been added! View total number of backup attempts (including success or failure), total backup size in bytes, and backup
durations. More metrics coming in future releases!
* Adds configurable CPU/memory requests and limits to the Velero Deployment generated by velero install. (#1678, @prydonius)
* Store restic PodVolumeBackups in obj storage & use that as source of truth like regular backups. (#1577, @carlisia)
* Update Velero Deployment to use apps/v1 API group. `velero install` and `velero plugin add/remove` commands will now require Kubernetes 1.9+ (#1673, @nrb)
* Respect the --kubecontext and --kubeconfig arguments for `velero install`. (#1656, @nrb)
* add plugin for updating PV & PVC storage classes on restore based on a config map (#1621, @skriss)
* Add restic support for CSI volumes (#1615, @nrb)
* Allow `plugins/` as a valid top-level directory within backup storage locations. This directory is a place for plugin authors to store arbitrary data as needed. It is recommended to create an additional subdirectory under `plugins/` specifically for your plugin, e.g. `plugins/my-plugin-data/`. (#2350, @skriss)
* bug fix: don't panic in `velero restic repo get` when last maintenance time is `nil` (#2315, @skriss)
#### Custom Resource Definition Backup and Restore Improvements
This release includes a number of related bug fixes and improvements to how Velero backs up and restores custom resource definitions (CRDs) and instances of those CRDs.
We found and fixed three issues around restoring CRDs that were originally created via the `v1beta1` CRD API. The first issue affected CRDs that had the `PreserveUnknownFields` field set to `true`. These CRDs could not be restored into 1.16+ Kubernetes clusters, because the `v1` CRD API does not allow this field to be set to `true`. We added code to the restore process to check for this scenario, to set the `PreserveUnknownFields` field to `false`, and to instead set `x-kubernetes-preserve-unknown-fields` to `true` in the OpenAPIv3 structural schema, per Kubernetes guidance. For more information on this, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#pruning-versus-preserving-unknown-fields). The second issue affected CRDs without structural schemas. These CRDs need to be backed up/restored through the `v1beta1` API, since all CRDs created through the `v1` API must have structural schemas. We added code to detect these CRDs and always back them up/restore them through the `v1beta1` API. Finally, related to the previous issue, we found that our restore code was unable to handle backups with multiple API versions for a given resource type, and we’ve remediated this as well.
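For the first of those fixes, the adjustment can be sketched with the apiextensions `v1beta1` Go types roughly as follows. The helper name is hypothetical, and this is a simplification of, not a copy of, Velero's restore code:

```go
package crdfix

import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)

// prepareCRDForV1Restore flips PreserveUnknownFields to false and records the
// equivalent x-kubernetes-preserve-unknown-fields extension in the OpenAPIv3
// structural schema, so that a Kubernetes 1.16+ API server accepts the CRD.
func prepareCRDForV1Restore(crd *apiextv1beta1.CustomResourceDefinition) {
	if crd.Spec.PreserveUnknownFields == nil || !*crd.Spec.PreserveUnknownFields {
		return // nothing to adjust
	}

	preserve := false
	crd.Spec.PreserveUnknownFields = &preserve

	// Make sure there is a schema to carry the extension.
	if crd.Spec.Validation == nil {
		crd.Spec.Validation = &apiextv1beta1.CustomResourceValidation{}
	}
	if crd.Spec.Validation.OpenAPIV3Schema == nil {
		crd.Spec.Validation.OpenAPIV3Schema = &apiextv1beta1.JSONSchemaProps{}
	}

	xPreserve := true
	crd.Spec.Validation.OpenAPIV3Schema.XPreserveUnknownFields = &xPreserve
}
```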
We also improved the CRD restore process to enable users to properly restore CRDs and instances of those CRDs in a single restore operation. Previously, users found that they needed to run two separate restores: one to restore the CRD(s), and another to restore instances of the CRD(s). This was due to two deficiencies in the Velero code. First, Velero did not wait for a CRD to be fully accepted by the Kubernetes API server and ready for serving before moving on; and second, Velero did not refresh its cached list of available APIs in the target cluster after restoring CRDs, so it was not aware that it could restore instances of those CRDs.
We fixed both of these issues by (1) adding code to wait for CRDs to be “ready” after restore before moving on, and (2) refreshing the cached list of APIs after restoring CRDs, so any instances of newly-restored CRDs could subsequently be restored.
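The "wait for readiness" half of that fix might look roughly like this, assuming the apiextensions clientset; the helper names, one-second interval, and one-minute timeout are illustrative assumptions, not Velero's actual values:

```go
package crdwait

import (
	"context"
	"time"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// crdIsReady checks the conditions that mark a CRD as accepted and served.
func crdIsReady(crd *apiextv1beta1.CustomResourceDefinition) bool {
	var established, namesAccepted bool
	for _, c := range crd.Status.Conditions {
		switch c.Type {
		case apiextv1beta1.Established:
			established = c.Status == apiextv1beta1.ConditionTrue
		case apiextv1beta1.NamesAccepted:
			namesAccepted = c.Status == apiextv1beta1.ConditionTrue
		}
	}
	return established && namesAccepted
}

// waitForCRDReady polls until the named CRD is ready or the timeout elapses.
func waitForCRDReady(client apiextclient.Interface, name string) error {
	return wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
		crd, err := client.ApiextensionsV1beta1().CustomResourceDefinitions().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return crdIsReady(crd), nil
	})
}
```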
With all of these fixes and improvements in place, we hope that the CRD backup and restore experience is now seamless across all supported versions of Kubernetes.
#### Multi-Arch Docker Images
Thanks to community members [@Prajyot-Parab](https://github.com/Prajyot-Parab) and [@shaneutt](https://github.com/shaneutt), Velero now provides multi-arch container images by using Docker manifest lists. We are currently publishing images for `linux/amd64`, `linux/arm64`, `linux/arm`, and `linux/ppc64le` in [our Docker repository](https://hub.docker.com/r/velero/velero/tags?page=1&name=v1.3&ordering=last_updated).
Users don’t need to change anything other than updating their version tag - the v1.3 image is `velero/velero:v1.3.0`, and Docker will automatically pull the proper architecture for the host.
For more information on manifest lists, see [Docker’s documentation](https://docs.docker.com/registry/spec/manifest-v2-2/).
#### Bug Fixes, Usability Enhancements, and More
We fixed a large number of bugs and made some smaller usability improvements in this release. Here are a few highlights:
- Support private registries with custom ports for the restic restore helper image ([PR #1999](https://github.com/vmware-tanzu/velero/pull/1999), [@cognoz](https://github.com/cognoz))
- Use AWS profile from BackupStorageLocation when invoking restic ([PR #2096](https://github.com/vmware-tanzu/velero/pull/2096), [@dinesh](https://github.com/dinesh))
- Allow restores from schedules in other clusters ([PR #2218](https://github.com/vmware-tanzu/velero/pull/2218), [@cpanato](https://github.com/cpanato))
* added support for arm and arm64 images (#2227, @shaneutt)
* when restoring from a schedule, validate by checking for backup(s) labeled with the schedule name rather than existence of the schedule itself, to allow for restoring from deleted schedules and schedules in other clusters (#2218, @cpanato)
* bug fix: back up server-preferred version of CRDs rather than always the `v1beta1` version (#2230, @skriss)
* bug fix: only prioritize restoring `replicasets.apps`, not `replicasets.extensions` (#2157, @skriss)
* bug fix: restore both `replicasets.apps` *and* `replicasets.extensions` before `deployments` (#2120, @skriss)
* bug fix: don't restore cluster-scoped resources when restoring specific namespaces and IncludeClusterResources is nil (#2118, @skriss)
* Enabling Velero to switch credentials (`AWS_PROFILE`) if multiple s3-compatible backupLocations are present (#2096, @dinesh)
* bug fix: deep-copy backup's labels when constructing snapshot tags, so the PV name isn't added as a label to the backup (#2075, @skriss)
* remove the `fsfreeze-pause` image being published from this repo; replace it with `ubuntu:bionic` in the nginx example app (#2068, @skriss)
* add support for a private registry with a custom port in a restic-helper image (#1999, @cognoz)
* Add details of CSI volumesnapshotcontents associated with a backup to `velero backup describe` when the `EnableCSI` feature flag is given on the velero client. (#2448, @nrb)
* Allow users the option to retrieve all versions of a given resource (instead of just the preferred version) from the API server with the `EnableAPIGroupVersions` feature flag. (#2373, @brito-rafa)
* Changed backup tarball format to store all versions of a given resource, updated backup tarball format to 1.1.0. (#2373, @brito-rafa)
* allow feature flags to be passed from install CLI (#2503, @ashish-amarnath)
* sync backups' CSI API objects into the cluster as part of the backup sync controller (#2496, @ashish-amarnath)
* bug fix: in error location logging hook, if the item logged under the `error` key doesn't implement the `error` interface, don't return an error since this is a valid scenario (#2487, @skriss)
* bug fix: in CRD restore plugin, don't use runtime.DefaultUnstructuredConverter.FromUnstructured(...) to avoid conversion issues when float64 fields contain int values (#2484, @skriss)
* during backup deletion, also delete CSI volumesnapshotcontents that were created as a part of the backup but whose associated volumesnapshot object no longer exists (#2480, @ashish-amarnath)
* If plugins don't support the `--features` flag, don't pass it to them. Also, update the standard plugin server to ignore unknown flags. (#2479, @skriss)
* At backup time, if a CustomResourceDefinition appears to have been created via the v1beta1 endpoint, retrieve it from the v1beta1 endpoint instead of simply changing the APIVersion. (#2478, @nrb)
* update container base images from ubuntu:bionic to ubuntu:focal (#2471, @skriss)
* bug fix: when a resource includes/excludes list contains unresolvable items, don't remove them from the list, so that the list doesn't inadvertently end up matching *all* resources. (#2462, @skriss)
* Azure: add support for getting storage account key for restic directly from an environment variable (#2455, @jaygridley)
* Support to skip VSL validation for the backup having SnapshotVolumes set to false or created with `--snapshot-volumes=false` (#2450, @mynktl)
* report backup progress (number of items backed up so far out of an estimated total number of items) during backup in the logs and as status fields on the Backup custom resource (#2440, @skriss)
* bug fix: populate namespace in logs for backup errors (#2438, @skriss)
* during backup deletion also delete CSI volumesnapshots that were created as a part of the backup (#2411, @ashish-amarnath)
* bump Kubernetes module dependencies to v0.17.4 to get fix for https://github.com/kubernetes/kubernetes/issues/86149 (#2407, @skriss)
* bug fix: save PodVolumeBackup manifests to object storage even if the volume was empty, so that on restore, the PV is dynamically reprovisioned if applicable (#2390, @skriss)
* Adding new restoreItemAction for PVC to update the selected-node annotation (#2377, @mynktl)
* Added a --cacert flag to the install command to provide the CA bundle to use when verifying TLS connections to object storage (#2368, @mansam)
* Added a `--cacert` flag to the velero client describe, download, and logs commands to allow passing a path to a certificate to use when verifying TLS connections to object storage. Also added a corresponding client config option called `cacert` which takes a path to a certificate bundle to use as a default when `--cacert` is not specified. (#2364, @mansam)
* support setting a custom CA certificate on a BSL to use when verifying TLS connections (#2353, @mansam)
* adding annotations on backup CRD for k8s major, minor and git versions (#2346, @brito-rafa)
* When the EnableCSI feature flag is provided, upload CSI VolumeSnapshots and VolumeSnapshotContents to object storage as gzipped JSON. (#2323, @nrb)
* add CSI snapshot API types into default restore priorities (#2318, @ashish-amarnath)
* refactoring: wait for all informer caches to sync before running controllers (#2299, @skriss)
* refactor restore code to lazily resolve resources via discovery and eliminate second restore loop for instances of restored CRDs (#2248, @skriss)
* upgrade to go 1.14 and migrate from `dep` to go modules (#2214, @skriss)
* clarify the wording in `velero restore describe` for included namespaces
* Auto Volume Backup Using Restic with `--default-volumes-to-restic` flag
* DeleteItemAction plugins
* Code modernization
* Restore Hooks: InitContainer Restore Hooks and Exec Restore Hooks
### All Changes
* 🏃‍♂️ add shortnames for CRDs (#2911, @ashish-amarnath)
* Use format version instead of version on `velero backup describe` since version has been deprecated (#2901, @jenting)
* fix EnableAPIGroupVersions output log format (#2882, @jenting)
* Convert ServerStatusRequest controller to kubebuilder (#2838, @carlisia)
* rename the PV if VolumeSnapshotter has modified the PV name (#2835, @pawanpraka1)
* Implement post-restore exec hooks in pod containers (#2804, @areed)
* Check for errors on restic backup command (#2863, @dymurray)
* 🐛 fix passing LDFLAGS across build stages (#2853, @ashish-amarnath)
* Feature: Invoke DeleteItemAction plugins based on backup contents when a backup is deleted. (#2815, @nrb)
* When JSON logging format is enabled, place error message at "error.message" instead of "error" for compatibility with Elasticsearch/ELK and the Elastic Common Schema (#2830, @bgagnon)
* discovery Helper supports getting a GroupVersionResource and an APIResource from a GroupVersionKind (#2764, @runzexia)
* Migrate site from Jekyll to Hugo (#2720, @tbatard)
* Add the DeleteItemAction plugin type (#2808, @nrb)
* 🐛 Manually patch the generated yaml for restore CRD as a hacky workaround (#2814, @ashish-amarnath)
* restic: add support for setting SecurityContext (runAsUser, runAsGroup) for restore (#2621, @jaygridley)
* Add backupValidationFailureTotal to metrics (#2714, @kathpeony)
* bump Kubernetes module dependencies to v0.18.4 to fix https://github.com/vmware-tanzu/velero/issues/2540 by adding code compatibility with kubernetes v1.18 (#2651, @laverya)
* Add a BSL controller to handle validation + update BSL status phase (validation removed from the server and no longer blocks when there's any invalid BSL) (#2674, @carlisia)
* updated acceptable values on cron schedule from 0-7 to 0-6 (#2676, @dthrasher)
* Improve velero download doc (#2660, @carlisia)
* Update basic-install and release-instructions documentation for Windows Chocolatey package (#2638, @adamrushuk)
* move CSI plugin out of prototype into beta (#2636, @ashish-amarnath)
* Add a new supported provider for an object storage plugin for Storj (#2635, @jessicagreben)
* Update basic-install.md documentation: Add windows cli installation option via chocolatey (#2629, @adamrushuk)
* Documentation: Update Jekyll to 4.1.0. Switch from redcarpet to kramdown for Markdown renderer (#2625, @tbatard)
* improve builder image handling so that we don't rebuild each `make shell` (#2620, @mauilion)
* first check whether there are pending changes to the build-image Dockerfile; if so, build it.
* then check whether there is an image in the registry; if so, pull it.
* otherwise build a fresh image, since we don't have a cached one (this handles the backward-compat case).
* fix `make clean` to clear the go mod cache before removing dirs (for containerized builds)
* Add linter checks to Makefile (#2615, @tbatard)
* add a CI check for a changelog file (#2613, @ashish-amarnath)
* implement option to back up all volumes by default with restic (#2611, @ashish-amarnath)
* When a timeout string can't be parsed, log the error as a warning instead of silently consuming the error. (#2610, @nrb)
* Azure: support using `aad-pod-identity` auth when using restic (#2602, @skriss)
* log a warning instead of erroring if an additional item returned from a plugin can't be found in the Kubernetes API (#2595, @skriss)
* when creating a new backup from a schedule via the CLI, allow the backup name to be automatically generated (#2569, @cblecker)
* Convert manifests + BSL api client to kubebuilder (#2561, @carlisia)
* backup/restore: reinstantiate backup store just before uploading artifacts to ensure credentials are up-to-date (#2550, @skriss)
* Add support for restic to use per-BSL credentials. Velero will now serialize the secret referenced by the `Credential` field in the BSL and use this path when setting provider specific environment variables for restic commands. (#3489, @zubron)
* Upgrade restic from v0.9.6 to v0.12.0. (#3528, @ashish-amarnath)
* Progress reporting added for Velero Restores (#3125, @pranavgaikwad)
* Add uninstall option for velero cli (#3399, @vadasambar)
* Add support for per-BSL credentials. Velero will now serialize the secret referenced by the `Credential` field in the BSL and pass this path through to Object Storage plugins via the `config` map using the `credentialsFile` key. (#3442, @zubron)
* Fixed a bug where restic volumes would not be restored when using a namespace mapping. (#3475, @zubron)
* Restore API group version by priority. Increase timeout to 3 minutes in DeploymentIsReady(...) function in the install package (#3133, @codegold79)
* Add field and cli flag to associate a credential with a BSL on BSL create|set. (#3190, @carlisia)
* Add colored output to `describe schedule/backup/restore` commands (#3275, @mike1808)
* Add CAPI Cluster and ClusterResourceSets to default restore priorities so that the capi-controller-manager does not panic on restores. (#3446, @nrb)
* Use label to select Velero deployment in plugin cmd (#3447, @codegold79)
* feat: support setting BackupStorageLocation CA certificate via `velero backup-location set --cacert` (#3167, @jenting)
* Add restic initContainer length check in pod volume restore to prevent the restic initContainer from disappearing at runtime (#3198, @shellwedance)
* Bump versions of external snapshotter and others in order to make `go get` succeed (#3202, @georgettica)
* Support fish shell completion (#3231, @jenting)
* Change the logging level of PV deletion timeout from Debug to Warn (#3316, @MadhavJivrajani)
* Set the BSL created at install time as the "default" (#3172, @carlisia)
* Capitalize all help messages (#3209, @jenting)
* Increased default Velero pod memory limit to 512Mi (#3234, @dsmithuchida)
* Fixed an issue where the deletion of a backup would fail if the backup tarball couldn't be downloaded from object storage. Now the tarball is only downloaded if there are associated DeleteItemAction plugins and if downloading the tarball fails, the plugins are skipped. (#2993, @zubron)
* feat: add delete sub-command for BSL (#3073, @jenting)
* 🐛 BSLs with validation disabled should be validated at least once (#3084, @ashish-amarnath)
* feat: support configuring BackupStorageLocation custom resources to indicate which one is the default (#3092, @jenting)
* Added "--preserve-nodeports" flag to preserve original nodePorts when restoring. (#3095, @yusufgungor)
* Owner reference in backup when created from schedule (#3127, @matheusjuvelino)
* issue: add flag to the schedule cmd to configure the `useOwnerReferencesInBackup` option #3176 (#3182, @matheusjuvelino)
* cli: allow creating multiple instances of Velero across two different namespaces (#2886, @alaypatel07)
* Feature: the container's timezone can be changed by specifying `env: [TZ: Zone/Country]` in the manifest, or `configuration: {extraEnvVars: [TZ: 'Zone/Country']}` in the Helm chart (#2944, @mickkael)
* Fix issue where bare `velero` command returned an error code. (#2947, @nrb)
* Restore CRD Resource name to fix CRD wait functionality. (#2949, @sseago)
* Fixed 'velero.io/change-pvc-node-selector' plugin to fetch configmap using label key "velero.io/change-pvc-node-selector" (#2970, @mynktl)
* Compile with Go 1.15 (#2974, @gliptak)
* Fix BSL controller to avoid invoking init() on all BSLs regardless of ValidationFrequency (#2992, @betta1)
* Ensure that bound PVCs and PVs remain bound on restore. (#3007, @nrb)
* Allows the restic-wait container to exist in any order in the pod being restored. Prints a warning message in the case where the restic-wait container isn't the first container in the list of initialization containers. (#3011, @doughepi)
* Add warning to velero version cmd if the client and server versions mismatch. (#3024, @cvhariharan)
* 🐛 Use namespace and name to match PVB to Pod restore (#3051, @ashish-amarnath)
* Fixed various typos across codebase (#3057, @invidian)
* 🐛 ItemAction plugins for unresolvable types should not be run for all types (#3059, @ashish-amarnath)
* Pass annotations from a schedule to the backups it creates, the same way it is done for labels. Add a WithannotationsMap function to the builder to be able to pass a map instead of a key/val list (#3067, @funkycode)
* Add instructions to clone repository for examples in docs (#3074, @MadhavJivrajani)
* 🏃‍♂️ update setup-kind github actions CI (#3085, @ashish-amarnath)
* Modify wrong function name to correct one. (#3106, @shellwedance)
The Velero container images now use [distroless base images](https://github.com/GoogleContainerTools/distroless).
Using distroless images as the base ensures that only the packages and programs necessary for running Velero are included.
Unrelated libraries and OS packages, which often contain security vulnerabilities, are now excluded.
This change reduces the size of both the server and restic restore helper image by approximately 62MB.
As the [distroless](https://github.com/GoogleContainerTools/distroless) images do not contain a shell, it will no longer be possible to exec into Velero containers using these images.
#### New "debug" command
This release introduces the new `velero debug` command.
This command collects information about a Velero installation, such as pod logs and resources managed by Velero, in a tarball which can be provided to the Velero maintainer team to help diagnose issues.
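Under the hood, `velero debug` drives a crashd (crash-diagnostics) script rather than hand-written collection code. Purely as an illustration of the kind of work such a bundle involves, here is a toy client-go sketch, with hypothetical names and not Velero's implementation, that streams pod logs from a namespace into a gzipped tarball:

```go
package debugdump

import (
	"archive/tar"
	"compress/gzip"
	"context"
	"io"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpPodLogs writes the logs of every pod in the namespace into a gzipped
// tarball, one file per pod — a toy version of what a debug bundle collects.
func dumpPodLogs(ctx context.Context, client kubernetes.Interface, namespace, outPath string) error {
	f, err := os.Create(outPath)
	if err != nil {
		return err
	}
	defer f.Close()

	gzw := gzip.NewWriter(f)
	defer gzw.Close()
	tw := tar.NewWriter(gzw)
	defer tw.Close()

	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}

	for _, pod := range pods.Items {
		// Stream the pod's logs and copy them into the archive.
		rc, err := client.CoreV1().Pods(namespace).GetLogs(pod.Name, &corev1.PodLogOptions{}).Stream(ctx)
		if err != nil {
			return err
		}
		data, err := io.ReadAll(rc)
		rc.Close()
		if err != nil {
			return err
		}
		hdr := &tar.Header{
			Name:    pod.Name + ".log",
			Mode:    0o644,
			Size:    int64(len(data)),
			ModTime: time.Now(),
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := tw.Write(data); err != nil {
			return err
		}
	}
	return nil
}
```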
### All changes
* Distinguish between different unnamed node ports when preserving (#4026, @sseago)
* Validate namespace in Velero backup create command (#4057, @codegold79)
* Empty the "ClusterIPs" along with "ClusterIP" when "ClusterIP" isn't "None" (#4101, @ywk253100)
* Add a RestoreItemAction plugin (`velero.io/apiservice`) which skips the restore of any `APIService` which is managed by Kubernetes. These are identified using the `kube-aggregator.kubernetes.io/automanaged` label. (#4028, @zubron)
* Change the base image to distroless (#4055, @ywk253100)
* Updated the velero/velero-plugin-for-aws version from v1.2.0 to v1.2.1 (#4064, @kahirokunn)
* Skip the backup and restore of DownwardAPI volumes when using restic. (#4076, @zubron)
* Bump up Go to 1.16 (#3990, @reasonerjt)
* Fix restic error when the volume is an emptyDir and the Pod is not running (#3993, @mahaupt)
* Select the velero deployment with both label and container name (#3996, @ywk253100)
* Wait for the namespace to be deleted before removing the CRDs during uninstall. This deprecates the `--wait` flag of the `uninstall` command (#4007, @ywk253100)
* Use the cluster preferred CRD API version when polling for Velero CRD readiness. (#4015, @zubron)
* Implement velero debug (#4022, @reasonerjt)
* Skip the restore of volumes that originally came from a projected volume when using restic. (#3877, @zubron)
* Run the E2E tests with kind (provisioning various versions of Kubernetes clusters) and MinIO on GitHub Actions (#3912, @ywk253100)
* Fix -install-velero flag for e2e tests (#3919, @jaidevmane)
* Upgrade Velero ClusterRoleBinding to use v1 API (#3926, @jenting)
* enable e2e tests to choose crd apiVersion (#3941, @sseago)
* Fixing multipleNamespaceTest bug - Missing expect statement in test (#3983, @jaidevmane)
* Add --client-page-size flag to server to allow chunking Kubernetes API LIST calls across multiple requests on large clusters (#3823, @dharmab)
* Use region specified in the BackupStorageLocation spec when getting restic repo identifier. Originally fixed by @jala-dx in #3617. (#3857, @zubron)
* skip backing up projected volumes when using restic (#3866, @alaypatel07)
* Install Kubernetes preferred CRDs API version (v1beta1/v1). (#3614, @jenting)
* Add Label to BackupSpec so that labels can explicitly be provided to Schedule.Spec.Template.Metadata.Labels which will be reflected on the backups created. (#3641, @arush-sal)
* Add PVC UID label to PodVolumeRestore (#3792, @sseago)
* Support pulling plugin images by digest (#3803, @2uasimojo)
* Added BackupPhaseUploading and BackupPhaseUploadingPartialFailure backup phases as part of Upload Progress Monitoring. (#3805, @dsmithuchida)
Uploading (new)
The "Uploading" phase signifies that the main part of the backup, including snapshotting, has completed successfully and that uploading is continuing. In the event of an error during uploading, the phase will change to UploadingPartialFailure. On success, the phase changes to Completed. The backup cannot be restored from while it is in the Uploading state.
UploadingPartialFailure (new)
The "UploadingPartialFailure" phase signifies that the main part of the backup, including snapshotting, has completed, but there were partial failures either during the main part or during the uploading. The backup cannot be restored from while it is in the UploadingPartialFailure state. (A short sketch of these phase semantics appears after this list.)
* 🐛 Fix plugin name derivation from image name (#3711, @ashish-amarnath)
* Wait for CustomResourceDefinitions to be ready before restoring CustomResources. Also refresh the resource list from the Kubernetes API server after restoring CRDs in order to properly restore CRs.
* When restoring a v1 CRD with PreserveUnknownFields = True, make sure that the preservation behavior is maintained by copying the flag into the Open API V3 schema, but update the flag so as to allow the Kubernetes API server to accept the CRD without error.
* Remove the schedule validation; instead of checking for the schedule itself (since we might be in another cluster), check whether a backup exists with the schedule name.
* Back up schema-less CustomResourceDefinitions as v1beta1, even if they are retrieved via the v1 endpoint.
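To make the new upload phases described earlier in this list concrete, here is an illustrative Go sketch. The `BackupPhase` constants mirror the names from the #3805 entry above; the `backupIsRestorable` helper and its exact rules are a simplified assumption, not Velero's actual implementation:

```go
package phases

// BackupPhase mirrors the string-typed phase field on the Backup resource.
type BackupPhase string

const (
	BackupPhaseUploading               BackupPhase = "Uploading"
	BackupPhaseUploadingPartialFailure BackupPhase = "UploadingPartialFailure"
	BackupPhaseCompleted               BackupPhase = "Completed"
	BackupPhasePartiallyFailed         BackupPhase = "PartiallyFailed"
)

// backupIsRestorable captures the rule stated above: a backup cannot be used
// as the source of a restore while its data is still being uploaded.
func backupIsRestorable(phase BackupPhase) bool {
	switch phase {
	case BackupPhaseUploading, BackupPhaseUploadingPartialFailure:
		return false
	case BackupPhaseCompleted, BackupPhasePartiallyFailed:
		return true
	default:
		return false
	}
}
```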