Mirror of https://github.com/vmware-tanzu/velero.git (synced 2026-04-28 11:27:00 +00:00)

Compare commits: v1.12.1-rc...v1.12.2-rc

45 Commits
| SHA1 |
|---|
| b68486221d |
| 5abd318b2c |
| 7c051514fd |
| c8fd9d4d62 |
| ccfbcc5455 |
| ea25b8a793 |
| 2d6578635d |
| fc44f3b8f0 |
| df72745909 |
| 453bd93c90 |
| 65939c920e |
| c042d477ab |
| 5c44ed49a5 |
| 3325a0cd1b |
| b2d3fa0bec |
| 25fc2f4d6e |
| a036e8d463 |
| f92cdb1f76 |
| 0531dbb1a2 |
| de55794381 |
| d7b4b0a770 |
| f1c93bd6c4 |
| 06e3773b22 |
| 32a8bbb9ac |
| 84d8bbda24 |
| 86e34eec28 |
| 5923046471 |
| d1399225da |
| 6438fc9a69 |
| a674a1eaff |
| bb4f9094fd |
| 1264c438c1 |
| 7e35fd3261 |
| 482ec13d38 |
| dd825ef8bb |
| dc525aa045 |
| 36ad5dafa9 |
| 7b76047596 |
| f1fcec3514 |
| 17ad487803 |
| bb6c1f60ea |
| 0be6ad3a06 |
| 1c462d5f6d |
| 32deef7ae3 |
| 72b5e7aad6 |
```diff
@@ -70,7 +70,7 @@ RUN mkdir -p /output/usr/bin && \
     go clean -modcache -cache
 
 # Velero image packing section
-FROM paketobuildpacks/run-jammy-tiny:0.2.5
+FROM paketobuildpacks/run-jammy-tiny:0.2.11
 
 LABEL maintainer="Xun Jiang <jxun@vmware.com>"
```
```diff
@@ -42,7 +42,7 @@ The following is a list of the supported Kubernetes versions for each Velero ver
 
 | Velero version | Expected Kubernetes version compatibility | Tested on Kubernetes version           |
 |----------------|-------------------------------------------|----------------------------------------|
-| 1.12           | 1.18-latest                               | 1.25.7, 1.26.5, 1.26.7, and 1.27.3     |
+| 1.12           | 1.18-latest                               | 1.25.7, 1.26.5, 1.27.6 and 1.28.0      |
 | 1.11           | 1.18-latest                               | 1.23.10, 1.24.9, 1.25.5, and 1.26.1    |
 | 1.10           | 1.18-latest                               | 1.22.5, 1.23.8, 1.24.6 and 1.25.1      |
 | 1.9            | 1.18-latest                               | 1.20.5, 1.21.2, 1.22.5, 1.23, and 1.24 |
```
```diff
@@ -1,3 +1,33 @@
+## v1.12.2
+### 2023-11-20
+
+### Download
+https://github.com/vmware-tanzu/velero/releases/tag/v1.12.2
+
+### Container Image
+`velero/velero:v1.12.2`
+
+### Documentation
+https://velero.io/docs/v1.12/
+
+### Upgrading
+https://velero.io/docs/v1.12/upgrade-to-1.12/
+
+### All changes
+* Fix issue #7068, due to a behavior of CSI external snapshotter, manipulations of VS and VSC may not be handled in the same order inside external snapshotter as the API is called. So add a protection finalizer to ensure the order (#7114, @Lyndon-Li)
+* Update Backup.Status.CSIVolumeSnapshotsCompleted during finalize (#7111, @kaovilai)
+* Cherry-pick #6917 - Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers (#7049, @27149chen)
+* Bump up Velero base image to latest patch release (#7110, @allenxu404)
+* Fix the node-agent missing metrics-address defines. (#7098, @yanggangtony)
+* Fix issue #7094, fallback to full backup if previous snapshot is not found (#7097, @Lyndon-Li)
+* Add DataUpload Result and CSI VolumeSnapshot check for restore PV. (#7087, @blackpiglet)
+* Fix issue #7027, data mover backup exposer should not assume the first volume as the backup volume in backup pod (#7060, @Lyndon-Li)
+* Truncate the credential file to avoid the change of secret content messing it up (#7058, @ywk253100)
+* restore: Use warning when Create IsAlreadyExist and Get error (#7054, @kaovilai)
+* Read information from the credential specified by BSL (#7033, @ywk253100)
+* Fix issue 6913: Velero Built-in Datamover: Backup stucks in phase WaitingForPluginOperations when Node Agent pod gets restarted (#7025, @shubham-pampattiwar)
+* Fix unified repository (kopia) s3 credentials profile selection (#6997, @kaovilai)
+
 ## v1.12.1
 ### 2023-10-20
```
design/merge-patch-and-strategic-in-resource-modifier.md (new file, 193 lines)
# Proposal to Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers

- [Proposal to Support JSON Merge Patch and Strategic Merge Patch in Resource Modifiers](#proposal-to-support-json-merge-patch-and-strategic-merge-patch-in-resource-modifiers)
  - [Abstract](#abstract)
  - [Goals](#goals)
  - [Non Goals](#non-goals)
  - [User Stories](#user-stories)
    - [Scenario 1](#scenario-1)
    - [Scenario 2](#scenario-2)
  - [Detailed Design](#detailed-design)
    - [How to choose the right patch type](#how-to-choose-the-right-patch-type)
    - [New Field MergePatches](#new-field-mergepatches)
    - [New Field StrategicPatches](#new-field-strategicpatches)
    - [Conditional Patches in ALL Patch Types](#conditional-patches-in-all-patch-types)
    - [Wildcard Support for GroupResource](#wildcard-support-for-groupresource)
    - [Helper Command to Generate Merge Patch and Strategic Merge Patch](#helper-command-to-generate-merge-patch-and-strategic-merge-patch)
  - [Security Considerations](#security-considerations)
  - [Compatibility](#compatibility)
  - [Implementation](#implementation)
  - [Future Enhancements](#future-enhancements)
  - [Open Issues](#open-issues)
## Abstract
Velero introduced the concept of Resource Modifiers in v1.12.0. This feature allows the user to specify a ConfigMap with a set of rules to modify resources during restore: filters select the resources, and a JSON Patch is applied to each match. The feature is currently limited to the operations supported by the JSON Patch RFC.

This proposal adds support for JSON Merge Patch and Strategic Merge Patch in Resource Modifiers, allowing the user to use the same ConfigMap mechanism to apply JSON Merge Patches and Strategic Merge Patches to resources during restore.
## Goals
- Allow the user to specify a JSON Patch, JSON Merge Patch, or Strategic Merge Patch for modification.
- Allow the user to specify multiple JSON Patches, JSON Merge Patches, or Strategic Merge Patches.
- Allow the user to mix JSON Patches, JSON Merge Patches, and Strategic Merge Patches in the same ConfigMap.
## Non Goals
- Deprecating the existing RestoreItemAction plugins for standard substitutions (like changing the namespace, changing the storage class, etc.)
## User Stories

### Scenario 1
- Alice has some Pods, part of which carry the annotation `{"foo": "bar"}`.
- Alice wishes to restore these Pods to a different cluster without this annotation.
- Alice can use this feature to remove the annotation during restore.

### Scenario 2
- Bob has a Pod with several containers; the container named nginx uses the image `repo1/nginx`.
- Bob wishes to restore this Pod to a different cluster, but the new cluster cannot access repo1, so he pushes the image to repo2.
- Bob can use this feature to update the image of the nginx container to `repo2/nginx` during restore.
## Detailed Design
- The design and approach are inspired by the kubectl patch command and [this doc](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/).
- New fields `MergePatches` and `StrategicPatches` will be added to the `ResourceModifierRule` struct so that all three patch types are supported.
- Only one of the three patch types can be specified in a single `ResourceModifierRule`.
- Add wildcard support for `groupResource` in the `conditions` struct.
- The workflow to create a Resource Modifier ConfigMap and reference it in the RestoreSpec remains the same as described in the [Resource Modifiers](https://github.com/vmware-tanzu/velero/blob/main/site/content/docs/main/restore-resource-modifiers.md) document.
### How to choose the right patch type
- [JSON Merge Patch](https://datatracker.ietf.org/doc/html/rfc7386) is a deliberately simple format with limited expressiveness. It is a good choice for small changes against a simple JSON schema.
- [JSON Patch](https://datatracker.ietf.org/doc/html/rfc6902) is a more complex format, but it is applicable to any JSON document. For a comparison of the two, see [JSON Patch and JSON Merge Patch](https://erosb.github.io/post/json-patch-vs-merge-patch/).
- Strategic Merge Patch is a Kubernetes-defined patch type, mainly used to process list-typed fields: you can replace or merge a list, add or remove items from a list by key, change the order of items in a list, etc. Strategic Merge Patch is not supported for custom resources. For more details, see [this doc](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/).
### New Field MergePatches
MergePatches is a list of merge patches to apply to the resource. The patches are applied in the order specified in the ConfigMap; if multiple patches touch the same path, the last one wins.

Example of MergePatches in a ResourceModifierRule:
```yaml
version: v1
resourceModifierRules:
- conditions:
    groupResource: pods
    namespaces:
    - ns1
  mergePatches:
  - patchData: |
      {
        "metadata": {
          "annotations": {
            "foo": null
          }
        }
      }
```
- The above ConfigMap applies the merge patch to all Pods in namespace ns1 and removes the annotation `foo` from them.
- Both JSON and YAML formats are supported for the patchData.
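Since patchData is converted from YAML to JSON before being applied, the JSON example above could equivalently be written in YAML (a sketch following the same rule layout):

```yaml
version: v1
resourceModifierRules:
- conditions:
    groupResource: pods
    namespaces:
    - ns1
  mergePatches:
  - patchData: |
      metadata:
        annotations:
          foo: null
```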
### New Field StrategicPatches
StrategicPatches is a list of strategic merge patches to apply to the resource. The patches are applied in the order specified in the ConfigMap; if multiple patches touch the same path, the last one wins.

Example of StrategicPatches in a ResourceModifierRule:
```yaml
version: v1
resourceModifierRules:
- conditions:
    groupResource: pods
    resourceNameRegex: "^my-pod$"
    namespaces:
    - ns1
  strategicPatches:
  - patchData: |
      {
        "spec": {
          "containers": [
            {
              "name": "nginx",
              "image": "repo2/nginx"
            }
          ]
        }
      }
```
- The above ConfigMap applies the strategic merge patch to the Pod named my-pod in namespace ns1 and updates the image of the nginx container to `repo2/nginx`.
- Both JSON and YAML formats are supported for the patchData.
### Conditional Patches in ALL Patch Types
Since JSON Merge Patch and Strategic Merge Patch do not support conditional patches, we will use the `test` operation of JSON Patch to support conditional patches in all patch types, by adding it to the `Conditions` struct in `ResourceModifierRule`.

Example of test in conditions:
```yaml
version: v1
resourceModifierRules:
- conditions:
    groupResource: persistentvolumeclaims.storage.k8s.io
    matches:
    - path: "/spec/storageClassName"
      value: "premium"
  mergePatches:
  - patchData: |
      {
        "metadata": {
          "annotations": {
            "foo": null
          }
        }
      }
```
- The above ConfigMap applies the merge patch to all PVCs in all namespaces with storageClassName premium and removes the annotation `foo` from them.
- You can specify multiple rules in the `matches` list. The patch is applied only if all the matches are satisfied.
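Internally, each `matches` entry is turned into a JSON Patch `test` operation before any patch is applied; the `matches` block above therefore corresponds to evaluating a generated patch of this shape (a sketch):

```json
[
  { "op": "test", "path": "/spec/storageClassName", "value": "premium" }
]
```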
### Wildcard Support for GroupResource
The user can specify a wildcard for `groupResource` in the `conditions` struct, which applies the patches to all resources of a particular group, or to all resources in all groups. For example, `*.apps` applies to all resources in the `apps` group, and `*` applies to all resources in all groups.
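For example, a rule that strips an annotation from every resource in the `apps` group could use a wildcard condition like this (a sketch following the earlier examples):

```yaml
version: v1
resourceModifierRules:
- conditions:
    groupResource: "*.apps"
  mergePatches:
  - patchData: |
      {
        "metadata": {
          "annotations": {
            "foo": null
          }
        }
      }
```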
### Helper Command to Generate Merge Patch and Strategic Merge Patch
The patchData of a Strategic Merge Patch can be complex for users to write by hand. We can provide a helper command that takes the original resource and the modified resource as input and generates the patchData. It can also be used for JSON Merge Patch.

Here is a sample code snippet to achieve this:
```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	pod := &corev1.Pod{
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					Name:  "web",
					Image: "nginx",
				},
			},
		},
	}
	newPod := pod.DeepCopy()
	patch := client.StrategicMergeFrom(pod)
	newPod.Spec.Containers[0].Image = "nginx1"

	data, _ := patch.Data(newPod)
	fmt.Println(string(data))
	// Output:
	// {"spec":{"$setElementOrder/containers":[{"name":"web"}],"containers":[{"image":"nginx1","name":"web"}]}}
}
```
## Security Considerations
No security impact.

## Compatibility
Backward compatible with existing Resource Modifiers.
## Implementation
- Use "github.com/evanphx/json-patch" to support JSON Merge Patch.
- Use "k8s.io/apimachinery/pkg/util/strategicpatch" to support Strategic Merge Patch.
- Use glob matching to support wildcards for `groupResource` in the `conditions` struct.
- Use the `test` operation of JSON Patch to evaluate the `matches` in the `conditions` struct.
## Future Enhancements
- Add a Velero subcommand to generate/validate the patchData for Strategic Merge Patch and JSON Merge Patch.
- Add jq support for more complex conditions or patches, to cover situations that the current conditions and patches cannot handle, e.g. [this issue](https://github.com/vmware-tanzu/velero/issues/6344).
## Open Issues
N/A
go.mod (6 changed lines)

```diff
@@ -41,7 +41,7 @@ require (
 	golang.org/x/oauth2 v0.7.0
 	golang.org/x/text v0.13.0
 	google.golang.org/api v0.120.0
-	google.golang.org/grpc v1.54.0
+	google.golang.org/grpc v1.56.3
 	google.golang.org/protobuf v1.30.0
 	gopkg.in/yaml.v3 v3.0.1
 	k8s.io/api v0.25.6
@@ -54,12 +54,13 @@ require (
 	k8s.io/metrics v0.25.6
 	k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed
 	sigs.k8s.io/controller-runtime v0.12.2
+	sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd
 	sigs.k8s.io/yaml v1.3.0
 )
 
 require (
 	cloud.google.com/go v0.110.0 // indirect
-	cloud.google.com/go/compute v1.19.0 // indirect
+	cloud.google.com/go/compute v1.19.1 // indirect
 	cloud.google.com/go/compute/metadata v0.2.3 // indirect
 	cloud.google.com/go/iam v0.13.0 // indirect
 	github.com/Azure/azure-sdk-for-go/sdk/azcore v0.21.1 // indirect
@@ -155,7 +156,6 @@ require (
 	gopkg.in/yaml.v2 v2.4.0 // indirect
 	k8s.io/component-base v0.24.2 // indirect
 	k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1 // indirect
-	sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect
 	sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
 )
```
go.sum (12 changed lines)

```diff
@@ -27,8 +27,8 @@ cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvf
 cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
 cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
 cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
-cloud.google.com/go/compute v1.19.0 h1:+9zda3WGgW1ZSTlVppLCYFIr48Pa35q1uG2N1itbCEQ=
-cloud.google.com/go/compute v1.19.0/go.mod h1:rikpw2y+UMidAe9tISo04EHNOIf42RLYF/q8Bs93scU=
+cloud.google.com/go/compute v1.19.1 h1:am86mquDUgjGNWxiGn+5PGLbmgiWXlE/yNWpIpNvuXY=
+cloud.google.com/go/compute v1.19.1/go.mod h1:6ylj3a05WF8leseCdIf77NK0g1ey+nj5IKd5/kvShxE=
 cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
 cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA=
 cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
@@ -1245,8 +1245,8 @@ google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQ
 google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
 google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
 google.golang.org/grpc v1.45.0/go.mod h1:lN7owxKUQEqMfSyQikvvk5tf/6zMPsrK+ONuO11+0rQ=
-google.golang.org/grpc v1.54.0 h1:EhTqbhiYeixwWQtAEZAxmV9MGqcjEU2mFx52xCzNyag=
-google.golang.org/grpc v1.54.0/go.mod h1:PUSEXI6iWghWaB6lXM4knEgpJNu2qUcKfDtNci3EC2g=
+google.golang.org/grpc v1.56.3 h1:8I4C0Yq1EjstUzUJzpcRVbuYA2mODtEmpWiQoN/b2nc=
+google.golang.org/grpc v1.56.3/go.mod h1:I9bI3vqKfayGqPUAwGdOSu7kt6oIJLixfffKrpXqQ9s=
 google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
 google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
 google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -1376,8 +1376,8 @@ sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.30/go.mod h1:fEO7lR
 sigs.k8s.io/controller-runtime v0.12.2 h1:nqV02cvhbAj7tbt21bpPpTByrXGn2INHRsi39lXy9sE=
 sigs.k8s.io/controller-runtime v0.12.2/go.mod h1:qKsk4WE6zW2Hfj0G4v10EnNB2jMG1C+NTb8h+DwCoU0=
 sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2/go.mod h1:B+TnT182UBxE84DiCz4CVE26eOSDAeYCpfDnC2kdKMY=
-sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 h1:iXTIw73aPyC+oRdyqqvVJuloN1p0AC/kzH07hu3NE+k=
-sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
+sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
+sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
 sigs.k8s.io/kustomize/api v0.8.11/go.mod h1:a77Ls36JdfCWojpUqR6m60pdGY1AYFix4AH83nJtY1g=
 sigs.k8s.io/kustomize/api v0.11.4/go.mod h1:k+8RsqYbgpkIrJ4p9jcdPqe8DprLxFUUO0yNOq8C+xI=
 sigs.k8s.io/kustomize/kyaml v0.11.0/go.mod h1:GNMwjim4Ypgp/MueD3zXHLRJEjz7RvtPae0AwlvEMFM=
```
```diff
@@ -71,7 +71,7 @@ func (n *namespacedFileStore) Path(selector *corev1api.SecretKeySelector) (strin
 
 	keyFilePath := filepath.Join(n.fsRoot, fmt.Sprintf("%s-%s", selector.Name, selector.Key))
 
-	file, err := n.fs.OpenFile(keyFilePath, os.O_RDWR|os.O_CREATE, 0644)
+	file, err := n.fs.OpenFile(keyFilePath, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
 	if err != nil {
 		return "", errors.Wrap(err, "unable to open credentials file for writing")
 	}
```
internal/resourcemodifiers/json_merge_patch.go (new file, 45 lines)

```go
package resourcemodifiers

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
	"github.com/sirupsen/logrus"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

type JSONMergePatch struct {
	PatchData string `json:"patchData,omitempty"`
}

type JSONMergePatcher struct {
	patches []JSONMergePatch
}

func (p *JSONMergePatcher) Patch(u *unstructured.Unstructured, _ logrus.FieldLogger) (*unstructured.Unstructured, error) {
	objBytes, err := u.MarshalJSON()
	if err != nil {
		return nil, fmt.Errorf("error in marshaling object %s", err)
	}

	for _, patch := range p.patches {
		patchBytes, err := yaml.YAMLToJSON([]byte(patch.PatchData))
		if err != nil {
			return nil, fmt.Errorf("error in converting YAML to JSON %s", err)
		}

		objBytes, err = jsonpatch.MergePatch(objBytes, patchBytes)
		if err != nil {
			return nil, fmt.Errorf("error in applying JSON Patch: %s", err.Error())
		}
	}

	updated := &unstructured.Unstructured{}
	err = updated.UnmarshalJSON(objBytes)
	if err != nil {
		return nil, fmt.Errorf("error in unmarshalling modified object %s", err.Error())
	}

	return updated, nil
}
```
internal/resourcemodifiers/json_merge_patch_test.go (new file, 41 lines)

```go
package resourcemodifiers

import (
	"testing"

	"github.com/sirupsen/logrus"
	"github.com/stretchr/testify/assert"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

func TestJsonMergePatchFailure(t *testing.T) {
	tests := []struct {
		name string
		data string
	}{
		{
			name: "patch with bad yaml",
			data: "a: b:",
		},
		{
			name: "patch with bad json",
			data: `{"a"::1}`,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			scheme := runtime.NewScheme()
			err := clientgoscheme.AddToScheme(scheme)
			assert.NoError(t, err)
			pt := &JSONMergePatcher{
				patches: []JSONMergePatch{{PatchData: tt.data}},
			}

			u := &unstructured.Unstructured{}
			_, err = pt.Patch(u, logrus.New())
			assert.Error(t, err)
		})
	}
}
```
internal/resourcemodifiers/json_patch.go (new file, 96 lines)

```go
package resourcemodifiers

import (
	"errors"
	"fmt"
	"strconv"
	"strings"

	jsonpatch "github.com/evanphx/json-patch"
	"github.com/sirupsen/logrus"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

type JSONPatch struct {
	Operation string `json:"operation"`
	From      string `json:"from,omitempty"`
	Path      string `json:"path"`
	Value     string `json:"value,omitempty"`
}

func (p *JSONPatch) ToString() string {
	if addQuotes(p.Value) {
		return fmt.Sprintf(`{"op": "%s", "from": "%s", "path": "%s", "value": "%s"}`, p.Operation, p.From, p.Path, p.Value)
	}
	return fmt.Sprintf(`{"op": "%s", "from": "%s", "path": "%s", "value": %s}`, p.Operation, p.From, p.Path, p.Value)
}

func addQuotes(value string) bool {
	if value == "" {
		return true
	}
	// if value is null, then don't add quotes
	if value == "null" {
		return false
	}
	// if value is a boolean, then don't add quotes
	if _, err := strconv.ParseBool(value); err == nil {
		return false
	}
	// if value is a json object or array, then don't add quotes.
	if strings.HasPrefix(value, "{") || strings.HasPrefix(value, "[") {
		return false
	}
	// if value is a number, then don't add quotes
	if _, err := strconv.ParseFloat(value, 64); err == nil {
		return false
	}
	return true
}

type JSONPatcher struct {
	patches []JSONPatch `yaml:"patches"`
}

func (p *JSONPatcher) Patch(u *unstructured.Unstructured, logger logrus.FieldLogger) (*unstructured.Unstructured, error) {
	modifiedObjBytes, err := p.applyPatch(u)
	if err != nil {
		if errors.Is(err, jsonpatch.ErrTestFailed) {
			logger.Infof("Test operation failed for JSON Patch %s", err.Error())
			return u.DeepCopy(), nil
		}
		return nil, fmt.Errorf("error in applying JSON Patch %s", err.Error())
	}

	updated := &unstructured.Unstructured{}
	err = updated.UnmarshalJSON(modifiedObjBytes)
	if err != nil {
		return nil, fmt.Errorf("error in unmarshalling modified object %s", err.Error())
	}

	return updated, nil
}

func (p *JSONPatcher) applyPatch(u *unstructured.Unstructured) ([]byte, error) {
	patchBytes := p.patchArrayToByteArray()
	jsonPatch, err := jsonpatch.DecodePatch(patchBytes)
	if err != nil {
		return nil, fmt.Errorf("error in decoding json patch %s", err.Error())
	}

	objBytes, err := u.MarshalJSON()
	if err != nil {
		return nil, fmt.Errorf("error in marshaling object %s", err.Error())
	}

	return jsonPatch.Apply(objBytes)
}

func (p *JSONPatcher) patchArrayToByteArray() []byte {
	var patches []string
	for _, patch := range p.patches {
		patches = append(patches, patch.ToString())
	}
	patchesStr := strings.Join(patches, ",\n\t")
	return []byte(fmt.Sprintf(`[%s]`, patchesStr))
}
```
```diff
@@ -3,16 +3,16 @@ package resourcemodifiers
 import (
 	"fmt"
 	"regexp"
-	"strconv"
-	"strings"
 
 	jsonpatch "github.com/evanphx/json-patch"
+	"github.com/gobwas/glob"
 	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
 	v1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
 	"k8s.io/apimachinery/pkg/labels"
+	"k8s.io/apimachinery/pkg/runtime"
 	"sigs.k8s.io/yaml"
 
 	"github.com/vmware-tanzu/velero/pkg/util/collections"
```
```diff
@@ -23,11 +23,9 @@ const (
 	ResourceModifierSupportedVersionV1 = "v1"
 )
 
-type JSONPatch struct {
-	Operation string `json:"operation"`
-	From      string `json:"from,omitempty"`
-	Path      string `json:"path"`
-	Value     string `json:"value,omitempty"`
+type MatchRule struct {
+	Path  string `json:"path,omitempty"`
+	Value string `json:"value,omitempty"`
 }
 
 type Conditions struct {
```
```diff
@@ -35,11 +33,14 @@ type Conditions struct {
 	GroupResource     string                `json:"groupResource"`
 	ResourceNameRegex string                `json:"resourceNameRegex,omitempty"`
 	LabelSelector     *metav1.LabelSelector `json:"labelSelector,omitempty"`
+	Matches           []MatchRule           `json:"matches,omitempty"`
 }
 
 type ResourceModifierRule struct {
-	Conditions Conditions  `json:"conditions"`
-	Patches    []JSONPatch `json:"patches"`
+	Conditions       Conditions            `json:"conditions"`
+	Patches          []JSONPatch           `json:"patches,omitempty"`
+	MergePatches     []JSONMergePatch      `json:"mergePatches,omitempty"`
+	StrategicPatches []StrategicMergePatch `json:"strategicPatches,omitempty"`
 }
 
 type ResourceModifiers struct {
```
```diff
@@ -68,10 +69,10 @@ func GetResourceModifiersFromConfig(cm *v1.ConfigMap) (*ResourceModifiers, error
 	return resModifiers, nil
 }
 
-func (p *ResourceModifiers) ApplyResourceModifierRules(obj *unstructured.Unstructured, groupResource string, log logrus.FieldLogger) []error {
+func (p *ResourceModifiers) ApplyResourceModifierRules(obj *unstructured.Unstructured, groupResource string, scheme *runtime.Scheme, log logrus.FieldLogger) []error {
 	var errs []error
 	for _, rule := range p.ResourceModifierRules {
-		err := rule.Apply(obj, groupResource, log)
+		err := rule.apply(obj, groupResource, scheme, log)
 		if err != nil {
 			errs = append(errs, err)
 		}
```
```diff
@@ -80,13 +81,22 @@ func (p *ResourceModifiers) ApplyResourceModifierRules(obj *unstructured.Unstruc
 	return errs
 }
 
-func (r *ResourceModifierRule) Apply(obj *unstructured.Unstructured, groupResource string, log logrus.FieldLogger) error {
-	namespaceInclusion := collections.NewIncludesExcludes().Includes(r.Conditions.Namespaces...)
-	if !namespaceInclusion.ShouldInclude(obj.GetNamespace()) {
-		return nil
+func (r *ResourceModifierRule) apply(obj *unstructured.Unstructured, groupResource string, scheme *runtime.Scheme, log logrus.FieldLogger) error {
+	ns := obj.GetNamespace()
+	if ns != "" {
+		namespaceInclusion := collections.NewIncludesExcludes().Includes(r.Conditions.Namespaces...)
+		if !namespaceInclusion.ShouldInclude(ns) {
+			return nil
+		}
 	}
 
 	if r.Conditions.GroupResource != groupResource {
-		return nil
+		g, err := glob.Compile(r.Conditions.GroupResource, '.')
+		if err != nil {
+			log.Errorf("Bad glob pattern of groupResource in condition, groupResource: %s, err: %s", r.Conditions.GroupResource, err)
+			return err
+		}
+
+		if !g.Match(groupResource) {
+			return nil
+		}
 	}
```
```diff
@@ -110,87 +120,82 @@ func (r *ResourceModifierRule) Apply(obj *unstructured.Unstructured, groupResour
 		}
 	}
 
-	patches, err := r.PatchArrayToByteArray()
+	match, err := matchConditions(obj, r.Conditions.Matches, log)
 	if err != nil {
 		return err
+	} else if !match {
+		log.Info("Conditions do not match, skip it")
+		return nil
 	}
 
 	log.Infof("Applying resource modifier patch on %s/%s", obj.GetNamespace(), obj.GetName())
-	err = ApplyPatch(patches, obj, log)
+	err = r.applyPatch(obj, scheme, log)
 	if err != nil {
 		return err
 	}
 	return nil
 }
 
-// PatchArrayToByteArray converts all JsonPatch to string array with the format of jsonpatch.Patch and then convert it to byte array
-func (r *ResourceModifierRule) PatchArrayToByteArray() ([]byte, error) {
-	var patches []string
-	for _, patch := range r.Patches {
-		patches = append(patches, patch.ToString())
+func matchConditions(u *unstructured.Unstructured, rules []MatchRule, _ logrus.FieldLogger) (bool, error) {
+	if len(rules) == 0 {
+		return true, nil
 	}
-	patchesStr := strings.Join(patches, ",\n\t")
-	return []byte(fmt.Sprintf(`[%s]`, patchesStr)), nil
-}
 
-func (p *JSONPatch) ToString() string {
-	if addQuotes(p.Value) {
-		return fmt.Sprintf(`{"op": "%s", "from": "%s", "path": "%s", "value": "%s"}`, p.Operation, p.From, p.Path, p.Value)
-	}
-	return fmt.Sprintf(`{"op": "%s", "from": "%s", "path": "%s", "value": %s}`, p.Operation, p.From, p.Path, p.Value)
-}
+	var fixed []JSONPatch
+	for _, rule := range rules {
+		if rule.Path == "" {
+			return false, fmt.Errorf("path is required for match rule")
+		}
 
-func ApplyPatch(patch []byte, obj *unstructured.Unstructured, log logrus.FieldLogger) error {
-	jsonPatch, err := jsonpatch.DecodePatch(patch)
-	if err != nil {
-		return fmt.Errorf("error in decoding json patch %s", err.Error())
+		fixed = append(fixed, JSONPatch{
+			Operation: "test",
+			Path:      rule.Path,
+			Value:     rule.Value,
+		})
 	}
-	objBytes, err := obj.MarshalJSON()
-	if err != nil {
-		return fmt.Errorf("error in marshaling object %s", err.Error())
-	}
-	modifiedObjBytes, err := jsonPatch.Apply(objBytes)
+
+	p := &JSONPatcher{patches: fixed}
+	_, err := p.applyPatch(u)
 	if err != nil {
 		if errors.Is(err, jsonpatch.ErrTestFailed) {
-			log.Infof("Test operation failed for JSON Patch %s", err.Error())
-			return nil
+			return false, nil
 		}
-		return fmt.Errorf("error in applying JSON Patch %s", err.Error())
+		return false, err
 	}
-	err = obj.UnmarshalJSON(modifiedObjBytes)
-	if err != nil {
-		return fmt.Errorf("error in unmarshalling modified object %s", err.Error())
-	}
-	return nil
+
+	return true, nil
 }
 
 func unmarshalResourceModifiers(yamlData []byte) (*ResourceModifiers, error) {
 	resModifiers := &ResourceModifiers{}
 	err := yaml.UnmarshalStrict(yamlData, resModifiers)
 	if err != nil {
-		return nil, fmt.Errorf("failed to decode yaml data into resource modifiers %v", err)
+		return nil, fmt.Errorf("failed to decode yaml data into resource modifiers, err: %s", err)
 	}
 	return resModifiers, nil
 }
 
-func addQuotes(value string) bool {
-	if value == "" {
-		return true
-	}
-	// if value is null, then don't add quotes
-	if value == "null" {
-		return false
-	}
-	// if value is a boolean, then don't add quotes
-	if _, err := strconv.ParseBool(value); err == nil {
-		return false
-	}
-	// if value is a json object or array, then don't add quotes.
-	if strings.HasPrefix(value, "{") || strings.HasPrefix(value, "[") {
-		return false
-	}
-	// if value is a number, then don't add quotes
-	if _, err := strconv.ParseFloat(value, 64); err == nil {
-		return false
-	}
-	return true
+type patcher interface {
```
|
||||
Patch(u *unstructured.Unstructured, logger logrus.FieldLogger) (*unstructured.Unstructured, error)
|
||||
}
|
||||
|
||||
func (r *ResourceModifierRule) applyPatch(u *unstructured.Unstructured, scheme *runtime.Scheme, logger logrus.FieldLogger) error {
|
||||
var p patcher
|
||||
if len(r.Patches) > 0 {
|
||||
p = &JSONPatcher{patches: r.Patches}
|
||||
} else if len(r.MergePatches) > 0 {
|
||||
p = &JSONMergePatcher{patches: r.MergePatches}
|
||||
} else if len(r.StrategicPatches) > 0 {
|
||||
p = &StrategicMergePatcher{patches: r.StrategicPatches, scheme: scheme}
|
||||
} else {
|
||||
return fmt.Errorf("no patch data found")
|
||||
}
|
||||
|
||||
updated, err := p.Patch(u, logger)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error in applying patch %s", err)
|
||||
}
|
||||
|
||||
u.SetUnstructuredContent(updated.Object)
|
||||
return nil
|
||||
}
|
||||
|
||||
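The removed `ToString`/`addQuotes` pair above serialized a `JSONPatch` by deciding whether the raw string value should be emitted bare (null, booleans, numbers, objects, arrays) or wrapped in quotes. That decision can be sketched standalone; the names below are illustrative, not Velero's:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// needsQuotes mirrors the quoting decision of the removed addQuotes helper
// (a standalone sketch, not the Velero implementation): a JSON Patch value
// is emitted bare when it already parses as null, a bool, a number, an
// object, or an array, and is quoted otherwise.
func needsQuotes(value string) bool {
	if value == "" {
		return true
	}
	if value == "null" {
		return false
	}
	if _, err := strconv.ParseBool(value); err == nil {
		return false
	}
	if strings.HasPrefix(value, "{") || strings.HasPrefix(value, "[") {
		return false
	}
	if _, err := strconv.ParseFloat(value, 64); err == nil {
		return false
	}
	return true
}

func main() {
	for _, v := range []string{"2", "true", "null", "premium", `{"a":1}`} {
		fmt.Printf("%-10s needsQuotes=%v\n", v, needsQuotes(v))
	}
}
```

The refactor makes this serialization step unnecessary: `JSONPatcher` builds typed patches directly instead of formatting JSON by hand.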
@@ -9,6 +9,10 @@ import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer/yaml"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

func TestGetResourceModifiersFromConfig(t *testing.T) {
@@ -116,6 +120,128 @@ func TestGetResourceModifiersFromConfig(t *testing.T) {
		},
	}

	cm5 := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test-configmap",
			Namespace: "test-namespace",
		},
		Data: map[string]string{
			"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n    groupResource: pods\n    namespaces:\n    - ns1\n    matches:\n    - path: /metadata/annotations/foo\n      value: bar\n  mergePatches:\n  - patchData: |\n      metadata:\n        annotations:\n          foo: null",
		},
	}

	rules5 := &ResourceModifiers{
		Version: "v1",
		ResourceModifierRules: []ResourceModifierRule{
			{
				Conditions: Conditions{
					GroupResource: "pods",
					Namespaces: []string{
						"ns1",
					},
					Matches: []MatchRule{
						{
							Path:  "/metadata/annotations/foo",
							Value: "bar",
						},
					},
				},
				MergePatches: []JSONMergePatch{
					{
						PatchData: "metadata:\n  annotations:\n    foo: null",
					},
				},
			},
		},
	}

	cm6 := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test-configmap",
			Namespace: "test-namespace",
		},
		Data: map[string]string{
			"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n    groupResource: pods\n    namespaces:\n    - ns1\n  strategicPatches:\n  - patchData: |\n      spec:\n        containers:\n        - name: nginx\n          image: repo2/nginx",
		},
	}

	rules6 := &ResourceModifiers{
		Version: "v1",
		ResourceModifierRules: []ResourceModifierRule{
			{
				Conditions: Conditions{
					GroupResource: "pods",
					Namespaces: []string{
						"ns1",
					},
				},
				StrategicPatches: []StrategicMergePatch{
					{
						PatchData: "spec:\n  containers:\n  - name: nginx\n    image: repo2/nginx",
					},
				},
			},
		},
	}

	cm7 := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test-configmap",
			Namespace: "test-namespace",
		},
		Data: map[string]string{
			"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n    groupResource: pods\n    namespaces:\n    - ns1\n  mergePatches:\n  - patchData: |\n      {\"metadata\":{\"annotations\":{\"foo\":null}}}",
		},
	}

	rules7 := &ResourceModifiers{
		Version: "v1",
		ResourceModifierRules: []ResourceModifierRule{
			{
				Conditions: Conditions{
					GroupResource: "pods",
					Namespaces: []string{
						"ns1",
					},
				},
				MergePatches: []JSONMergePatch{
					{
						PatchData: `{"metadata":{"annotations":{"foo":null}}}`,
					},
				},
			},
		},
	}

	cm8 := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test-configmap",
			Namespace: "test-namespace",
		},
		Data: map[string]string{
			"sub.yml": "version: v1\nresourceModifierRules:\n- conditions:\n    groupResource: pods\n    namespaces:\n    - ns1\n  strategicPatches:\n  - patchData: |\n      {\"spec\":{\"containers\":[{\"name\": \"nginx\",\"image\": \"repo2/nginx\"}]}}",
		},
	}

	rules8 := &ResourceModifiers{
		Version: "v1",
		ResourceModifierRules: []ResourceModifierRule{
			{
				Conditions: Conditions{
					GroupResource: "pods",
					Namespaces: []string{
						"ns1",
					},
				},
				StrategicPatches: []StrategicMergePatch{
					{
						PatchData: `{"spec":{"containers":[{"name": "nginx","image": "repo2/nginx"}]}}`,
					},
				},
			},
		},
	}

	type args struct {
		cm *v1.ConfigMap
	}
@@ -165,6 +291,38 @@ func TestGetResourceModifiersFromConfig(t *testing.T) {
			want:    nil,
			wantErr: true,
		},
		{
			name: "complex yaml data with json merge patch",
			args: args{
				cm: cm5,
			},
			want:    rules5,
			wantErr: false,
		},
		{
			name: "complex yaml data with strategic merge patch",
			args: args{
				cm: cm6,
			},
			want:    rules6,
			wantErr: false,
		},
		{
			name: "complex json data with json merge patch",
			args: args{
				cm: cm7,
			},
			want:    rules7,
			wantErr: false,
		},
		{
			name: "complex json data with strategic merge patch",
			args: args{
				cm: cm8,
			},
			want:    rules8,
			wantErr: false,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
@@ -487,6 +645,38 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
			wantErr: false,
			wantObj: deployNginxTwoReplica.DeepCopy(),
		},
		{
			name: "nginx deployment: Empty Resource Regex",
			fields: fields{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "deployments.apps",
							Namespaces:    []string{"foo"},
						},
						Patches: []JSONPatch{
							{
								Operation: "test",
								Path:      "/spec/replicas",
								Value:     "1",
							},
							{
								Operation: "replace",
								Path:      "/spec/replicas",
								Value:     "2",
							},
						},
					},
				},
			},
			args: args{
				obj:           deployNginxOneReplica.DeepCopy(),
				groupResource: "deployments.apps",
			},
			wantErr: false,
			wantObj: deployNginxTwoReplica.DeepCopy(),
		},
		{
			name: "nginx deployment: Empty Resource Regex and namespaces list",
			fields: fields{
@@ -704,7 +894,7 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
				Version:               tt.fields.Version,
				ResourceModifierRules: tt.fields.ResourceModifierRules,
			}
-			got := p.ApplyResourceModifierRules(tt.args.obj, tt.args.groupResource, logrus.New())
+			got := p.ApplyResourceModifierRules(tt.args.obj, tt.args.groupResource, nil, logrus.New())

			assert.Equal(t, tt.wantErr, len(got) > 0)
			assert.Equal(t, *tt.wantObj, *tt.args.obj)
@@ -712,6 +902,633 @@ func TestResourceModifiers_ApplyResourceModifierRules(t *testing.T) {
	}
}
var podYAMLWithNginxImage = `
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: fake
spec:
  containers:
  - image: nginx
    name: nginx
`

var podYAMLWithNginx1Image = `
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: fake
spec:
  containers:
  - image: nginx1
    name: nginx
`

var podYAMLWithNFSVolume = `
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: fake
spec:
  containers:
  - image: fake
    name: fake
    volumeMounts:
    - mountPath: /fake1
      name: vol1
    - mountPath: /fake2
      name: vol2
  volumes:
  - name: vol1
    nfs:
      path: /fake2
  - name: vol2
    emptyDir: {}
`

var podYAMLWithPVCVolume = `
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: fake
spec:
  containers:
  - image: fake
    name: fake
    volumeMounts:
    - mountPath: /fake1
      name: vol1
    - mountPath: /fake2
      name: vol2
  volumes:
  - name: vol1
    persistentVolumeClaim:
      claimName: pvc1
  - name: vol2
    emptyDir: {}
`

var svcYAMLWithPort8000 = `
apiVersion: v1
kind: Service
metadata:
  name: svc1
  namespace: fake
spec:
  ports:
  - name: fake1
    port: 8001
    protocol: TCP
    targetPort: 8001
  - name: fake
    port: 8000
    protocol: TCP
    targetPort: 8000
  - name: fake2
    port: 8002
    protocol: TCP
    targetPort: 8002
`

var svcYAMLWithPort9000 = `
apiVersion: v1
kind: Service
metadata:
  name: svc1
  namespace: fake
spec:
  ports:
  - name: fake1
    port: 8001
    protocol: TCP
    targetPort: 8001
  - name: fake
    port: 9000
    protocol: TCP
    targetPort: 9000
  - name: fake2
    port: 8002
    protocol: TCP
    targetPort: 8002
`

var cmYAMLWithLabelAToB = `
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm1
  namespace: fake
  labels:
    a: b
    c: d
`

var cmYAMLWithLabelAToC = `
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm1
  namespace: fake
  labels:
    a: c
    c: d
`

var cmYAMLWithoutLabelA = `
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm1
  namespace: fake
  labels:
    c: d
`
func TestResourceModifiers_ApplyResourceModifierRules_StrategicMergePatch(t *testing.T) {
	scheme := runtime.NewScheme()
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	unstructuredSerializer := yaml.NewDecodingSerializer(unstructured.UnstructuredJSONScheme)
	o1, _, err := unstructuredSerializer.Decode([]byte(podYAMLWithNFSVolume), nil, nil)
	assert.NoError(t, err)
	podWithNFSVolume := o1.(*unstructured.Unstructured)

	o2, _, err := unstructuredSerializer.Decode([]byte(podYAMLWithPVCVolume), nil, nil)
	assert.NoError(t, err)
	podWithPVCVolume := o2.(*unstructured.Unstructured)

	o3, _, err := unstructuredSerializer.Decode([]byte(svcYAMLWithPort8000), nil, nil)
	assert.NoError(t, err)
	svcWithPort8000 := o3.(*unstructured.Unstructured)

	o4, _, err := unstructuredSerializer.Decode([]byte(svcYAMLWithPort9000), nil, nil)
	assert.NoError(t, err)
	svcWithPort9000 := o4.(*unstructured.Unstructured)

	o5, _, err := unstructuredSerializer.Decode([]byte(podYAMLWithNginxImage), nil, nil)
	assert.NoError(t, err)
	podWithNginxImage := o5.(*unstructured.Unstructured)

	o6, _, err := unstructuredSerializer.Decode([]byte(podYAMLWithNginx1Image), nil, nil)
	assert.NoError(t, err)
	podWithNginx1Image := o6.(*unstructured.Unstructured)

	tests := []struct {
		name          string
		rm            *ResourceModifiers
		obj           *unstructured.Unstructured
		groupResource string
		wantErr       bool
		wantObj       *unstructured.Unstructured
	}{
		{
			name: "update image",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "pods",
							Namespaces:    []string{"fake"},
						},
						StrategicPatches: []StrategicMergePatch{
							{
								PatchData: `{"spec":{"containers":[{"name":"nginx","image":"nginx1"}]}}`,
							},
						},
					},
				},
			},
			obj:           podWithNginxImage.DeepCopy(),
			groupResource: "pods",
			wantErr:       false,
			wantObj:       podWithNginx1Image.DeepCopy(),
		},
		{
			name: "update image with yaml format",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "pods",
							Namespaces:    []string{"fake"},
						},
						StrategicPatches: []StrategicMergePatch{
							{
								PatchData: `spec:
  containers:
  - name: nginx
    image: nginx1`,
							},
						},
					},
				},
			},
			obj:           podWithNginxImage.DeepCopy(),
			groupResource: "pods",
			wantErr:       false,
			wantObj:       podWithNginx1Image.DeepCopy(),
		},
		{
			name: "replace nfs with pvc in volume",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "pods",
							Namespaces:    []string{"fake"},
						},
						StrategicPatches: []StrategicMergePatch{
							{
								PatchData: `{"spec":{"volumes":[{"nfs":null,"name":"vol1","persistentVolumeClaim":{"claimName":"pvc1"}}]}}`,
							},
						},
					},
				},
			},
			obj:           podWithNFSVolume.DeepCopy(),
			groupResource: "pods",
			wantErr:       false,
			wantObj:       podWithPVCVolume.DeepCopy(),
		},
		{
			name: "replace any other volume source with pvc in volume",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "pods",
							Namespaces:    []string{"fake"},
						},
						StrategicPatches: []StrategicMergePatch{
							{
								PatchData: `{"spec":{"volumes":[{"$retainKeys":["name","persistentVolumeClaim"],"name":"vol1","persistentVolumeClaim":{"claimName":"pvc1"}}]}}`,
							},
						},
					},
				},
			},
			obj:           podWithNFSVolume.DeepCopy(),
			groupResource: "pods",
			wantErr:       false,
			wantObj:       podWithPVCVolume.DeepCopy(),
		},
		{
			name: "update a service port",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "services",
							Namespaces:    []string{"fake"},
						},
						StrategicPatches: []StrategicMergePatch{
							{
								PatchData: `{"spec":{"$setElementOrder/ports":[{"port":8001},{"port":9000},{"port":8002}],"ports":[{"name":"fake","port":9000,"protocol":"TCP","targetPort":9000},{"$patch":"delete","port":8000}]}}`,
							},
						},
					},
				},
			},
			obj:           svcWithPort8000.DeepCopy(),
			groupResource: "services",
			wantErr:       false,
			wantObj:       svcWithPort9000.DeepCopy(),
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := tt.rm.ApplyResourceModifierRules(tt.obj, tt.groupResource, scheme, logrus.New())

			assert.Equal(t, tt.wantErr, len(got) > 0)
			assert.Equal(t, *tt.wantObj, *tt.obj)
		})
	}
}
func TestResourceModifiers_ApplyResourceModifierRules_JSONMergePatch(t *testing.T) {
	unstructuredSerializer := yaml.NewDecodingSerializer(unstructured.UnstructuredJSONScheme)
	o1, _, err := unstructuredSerializer.Decode([]byte(cmYAMLWithLabelAToB), nil, nil)
	assert.NoError(t, err)
	cmWithLabelAToB := o1.(*unstructured.Unstructured)

	o2, _, err := unstructuredSerializer.Decode([]byte(cmYAMLWithLabelAToC), nil, nil)
	assert.NoError(t, err)
	cmWithLabelAToC := o2.(*unstructured.Unstructured)

	o3, _, err := unstructuredSerializer.Decode([]byte(cmYAMLWithoutLabelA), nil, nil)
	assert.NoError(t, err)
	cmWithoutLabelA := o3.(*unstructured.Unstructured)

	tests := []struct {
		name          string
		rm            *ResourceModifiers
		obj           *unstructured.Unstructured
		groupResource string
		wantErr       bool
		wantObj       *unstructured.Unstructured
	}{
		{
			name: "update labels",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "configmaps",
							Namespaces:    []string{"fake"},
						},
						MergePatches: []JSONMergePatch{
							{
								PatchData: `{"metadata":{"labels":{"a":"c"}}}`,
							},
						},
					},
				},
			},
			obj:           cmWithLabelAToB.DeepCopy(),
			groupResource: "configmaps",
			wantErr:       false,
			wantObj:       cmWithLabelAToC.DeepCopy(),
		},
		{
			name: "update labels in yaml format",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "configmaps",
							Namespaces:    []string{"fake"},
						},
						MergePatches: []JSONMergePatch{
							{
								PatchData: `metadata:
  labels:
    a: c`,
							},
						},
					},
				},
			},
			obj:           cmWithLabelAToB.DeepCopy(),
			groupResource: "configmaps",
			wantErr:       false,
			wantObj:       cmWithLabelAToC.DeepCopy(),
		},
		{
			name: "delete labels",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "configmaps",
							Namespaces:    []string{"fake"},
						},
						MergePatches: []JSONMergePatch{
							{
								PatchData: `{"metadata":{"labels":{"a":null}}}`,
							},
						},
					},
				},
			},
			obj:           cmWithLabelAToB.DeepCopy(),
			groupResource: "configmaps",
			wantErr:       false,
			wantObj:       cmWithoutLabelA.DeepCopy(),
		},
		{
			name: "add labels",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "configmaps",
							Namespaces:    []string{"fake"},
						},
						MergePatches: []JSONMergePatch{
							{
								PatchData: `{"metadata":{"labels":{"a":"b"}}}`,
							},
						},
					},
				},
			},
			obj:           cmWithoutLabelA.DeepCopy(),
			groupResource: "configmaps",
			wantErr:       false,
			wantObj:       cmWithLabelAToB.DeepCopy(),
		},
		{
			name: "delete non-existing labels",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "configmaps",
							Namespaces:    []string{"fake"},
						},
						MergePatches: []JSONMergePatch{
							{
								PatchData: `{"metadata":{"labels":{"a":null}}}`,
							},
						},
					},
				},
			},
			obj:           cmWithoutLabelA.DeepCopy(),
			groupResource: "configmaps",
			wantErr:       false,
			wantObj:       cmWithoutLabelA.DeepCopy(),
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := tt.rm.ApplyResourceModifierRules(tt.obj, tt.groupResource, nil, logrus.New())

			assert.Equal(t, tt.wantErr, len(got) > 0)
			assert.Equal(t, *tt.wantObj, *tt.obj)
		})
	}
}
func TestResourceModifiers_wildcard_in_GroupResource(t *testing.T) {
	unstructuredSerializer := yaml.NewDecodingSerializer(unstructured.UnstructuredJSONScheme)
	o1, _, err := unstructuredSerializer.Decode([]byte(cmYAMLWithLabelAToB), nil, nil)
	assert.NoError(t, err)
	cmWithLabelAToB := o1.(*unstructured.Unstructured)

	o2, _, err := unstructuredSerializer.Decode([]byte(cmYAMLWithLabelAToC), nil, nil)
	assert.NoError(t, err)
	cmWithLabelAToC := o2.(*unstructured.Unstructured)

	tests := []struct {
		name          string
		rm            *ResourceModifiers
		obj           *unstructured.Unstructured
		groupResource string
		wantErr       bool
		wantObj       *unstructured.Unstructured
	}{
		{
			name: "match all groups and resources",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "*",
							Namespaces:    []string{"fake"},
						},
						MergePatches: []JSONMergePatch{
							{
								PatchData: `{"metadata":{"labels":{"a":"c"}}}`,
							},
						},
					},
				},
			},
			obj:           cmWithLabelAToB.DeepCopy(),
			groupResource: "configmaps",
			wantErr:       false,
			wantObj:       cmWithLabelAToC.DeepCopy(),
		},
		{
			name: "match all resources in group apps",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "*.apps",
							Namespaces:    []string{"fake"},
						},
						MergePatches: []JSONMergePatch{
							{
								PatchData: `{"metadata":{"labels":{"a":"c"}}}`,
							},
						},
					},
				},
			},
			obj:           cmWithLabelAToB.DeepCopy(),
			groupResource: "fake.apps",
			wantErr:       false,
			wantObj:       cmWithLabelAToC.DeepCopy(),
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := tt.rm.ApplyResourceModifierRules(tt.obj, tt.groupResource, nil, logrus.New())

			assert.Equal(t, tt.wantErr, len(got) > 0)
			assert.Equal(t, *tt.wantObj, *tt.obj)
		})
	}
}
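The wildcard cases above exercise glob-style matching of `conditions.groupResource`: `*` matches every group/resource and `*.apps` matches any resource in the apps group. Purely as an illustration of that rule syntax (not Velero's implementation), Go's `path.Match` reproduces the two behaviors tested:

```go
package main

import (
	"fmt"
	"path"
)

// matchesGroupResource sketches the wildcard semantics of the tests above
// using stdlib glob matching; Velero's real matching code may differ.
func matchesGroupResource(pattern, groupResource string) bool {
	ok, err := path.Match(pattern, groupResource)
	return err == nil && ok
}

func main() {
	fmt.Println(matchesGroupResource("*", "configmaps"))      // true
	fmt.Println(matchesGroupResource("*.apps", "fake.apps"))  // true
	fmt.Println(matchesGroupResource("*.apps", "configmaps")) // false
}
```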
func TestResourceModifiers_conditional_patches(t *testing.T) {
	unstructuredSerializer := yaml.NewDecodingSerializer(unstructured.UnstructuredJSONScheme)
	o1, _, err := unstructuredSerializer.Decode([]byte(cmYAMLWithLabelAToB), nil, nil)
	assert.NoError(t, err)
	cmWithLabelAToB := o1.(*unstructured.Unstructured)

	o2, _, err := unstructuredSerializer.Decode([]byte(cmYAMLWithLabelAToC), nil, nil)
	assert.NoError(t, err)
	cmWithLabelAToC := o2.(*unstructured.Unstructured)

	tests := []struct {
		name          string
		rm            *ResourceModifiers
		obj           *unstructured.Unstructured
		groupResource string
		wantErr       bool
		wantObj       *unstructured.Unstructured
	}{
		{
			name: "match conditions and apply patches",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "*",
							Namespaces:    []string{"fake"},
							Matches: []MatchRule{
								{
									Path:  "/metadata/labels/a",
									Value: "b",
								},
							},
						},
						MergePatches: []JSONMergePatch{
							{
								PatchData: `{"metadata":{"labels":{"a":"c"}}}`,
							},
						},
					},
				},
			},
			obj:           cmWithLabelAToB.DeepCopy(),
			groupResource: "configmaps",
			wantErr:       false,
			wantObj:       cmWithLabelAToC.DeepCopy(),
		},
		{
			name: "mismatch conditions and skip patches",
			rm: &ResourceModifiers{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "*",
							Namespaces:    []string{"fake"},
							Matches: []MatchRule{
								{
									Path:  "/metadata/labels/a",
									Value: "c",
								},
							},
						},
						MergePatches: []JSONMergePatch{
							{
								PatchData: `{"metadata":{"labels":{"a":"c"}}}`,
							},
						},
					},
				},
			},
			obj:           cmWithLabelAToB.DeepCopy(),
			groupResource: "configmaps",
			wantErr:       false,
			wantObj:       cmWithLabelAToB.DeepCopy(),
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := tt.rm.ApplyResourceModifierRules(tt.obj, tt.groupResource, nil, logrus.New())

			assert.Equal(t, tt.wantErr, len(got) > 0)
			assert.Equal(t, *tt.wantObj, *tt.obj)
		})
	}
}

func TestJSONPatch_ToString(t *testing.T) {
	type fields struct {
		Operation string
@@ -9,6 +9,21 @@ func (r *ResourceModifierRule) Validate() error {
	if err := r.Conditions.Validate(); err != nil {
		return err
	}

	count := 0
	for _, size := range []int{
		len(r.Patches),
		len(r.MergePatches),
		len(r.StrategicPatches),
	} {
		if size != 0 {
			count++
		}
		if count >= 2 {
			return fmt.Errorf("only one of patches, mergePatches, strategicPatches can be specified")
		}
	}

	for _, patch := range r.Patches {
		if err := patch.Validate(); err != nil {
			return err
@@ -114,6 +114,32 @@ func TestResourceModifiers_Validate(t *testing.T) {
			},
			wantErr: true,
		},
		{
			name: "More than one patch type in a rule",
			fields: fields{
				Version: "v1",
				ResourceModifierRules: []ResourceModifierRule{
					{
						Conditions: Conditions{
							GroupResource: "*",
						},
						Patches: []JSONPatch{
							{
								Operation: "test",
								Path:      "/spec/storageClassName",
								Value:     "premium",
							},
						},
						MergePatches: []JSONMergePatch{
							{
								PatchData: `{"metadata":{"labels":{"a":null}}}`,
							},
						},
					},
				},
			},
			wantErr: true,
		},
	}

	for _, tt := range tests {
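The new `Validate` check above counts the non-empty patch lists and fails as soon as a second one is seen, so a rule can carry at most one of `patches`, `mergePatches`, or `strategicPatches`. The same guard in isolation (a sketch, not Velero's code):

```go
package main

import (
	"errors"
	"fmt"
)

// onePatchTypeOnly mirrors the mutual-exclusion rule added in Validate():
// given the lengths of each patch list, it errors the moment a second
// non-empty list is counted, regardless of how many lists follow.
func onePatchTypeOnly(sizes ...int) error {
	count := 0
	for _, size := range sizes {
		if size != 0 {
			count++
		}
		if count >= 2 {
			return errors.New("only one of patches, mergePatches, strategicPatches can be specified")
		}
	}
	return nil
}

func main() {
	fmt.Println(onePatchTypeOnly(1, 0, 0)) // <nil>
	fmt.Println(onePatchTypeOnly(1, 2, 0)) // only one of patches, ... error
}
```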
143  internal/resourcemodifiers/strategic_merge_patch.go  Normal file
@@ -0,0 +1,143 @@
package resourcemodifiers
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
|
||||
"github.com/sirupsen/logrus"
|
||||
apierrors "k8s.io/apimachinery/pkg/api/errors"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
"k8s.io/apimachinery/pkg/util/mergepatch"
|
||||
"k8s.io/apimachinery/pkg/util/strategicpatch"
|
||||
"k8s.io/apimachinery/pkg/util/validation/field"
|
||||
kubejson "sigs.k8s.io/json"
|
||||
"sigs.k8s.io/yaml"
|
||||
)
|
||||
|
||||
type StrategicMergePatch struct {
|
||||
PatchData string `json:"patchData,omitempty"`
|
||||
}
|
||||
|
||||
type StrategicMergePatcher struct {
|
||||
patches []StrategicMergePatch
|
||||
scheme *runtime.Scheme
|
||||
}
|
||||
|
||||
func (p *StrategicMergePatcher) Patch(u *unstructured.Unstructured, _ logrus.FieldLogger) (*unstructured.Unstructured, error) {
|
||||
gvk := u.GetObjectKind().GroupVersionKind()
|
||||
schemaReferenceObj, err := p.scheme.New(gvk)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
origin := u.DeepCopy()
|
||||
updated := u.DeepCopy()
|
||||
for _, patch := range p.patches {
|
||||
patchBytes, err := yaml.YAMLToJSON([]byte(patch.PatchData))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error in converting YAML to JSON %s", err)
|
||||
}
|
||||
|
||||
err = strategicPatchObject(origin, patchBytes, updated, schemaReferenceObj)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error in applying Strategic Patch %s", err.Error())
|
||||
}
|
||||
|
||||
origin = updated.DeepCopy()
|
||||
}
|
||||
|
||||
return updated, nil
|
||||
}
|
||||
|
||||
// strategicPatchObject applies a strategic merge patch of `patchBytes` to
|
||||
// `originalObject` and stores the result in `objToUpdate`.
|
||||
// It additionally returns the map[string]interface{} representation of the
|
||||
// `originalObject` and `patchBytes`.
|
||||
// NOTE: Both `originalObject` and `objToUpdate` are supposed to be versioned.
|
||||
func strategicPatchObject(
|
||||
originalObject runtime.Object,
|
||||
patchBytes []byte,
|
||||
objToUpdate runtime.Object,
|
||||
schemaReferenceObj runtime.Object,
|
||||
) error {
|
||||
originalObjMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(originalObject)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
patchMap := make(map[string]interface{})
|
||||
var strictErrs []error
|
||||
strictErrs, err = kubejson.UnmarshalStrict(patchBytes, &patchMap)
|
||||
if err != nil {
|
||||
return apierrors.NewBadRequest(err.Error())
|
||||
}
|
||||
|
||||
if err := applyPatchToObject(originalObjMap, patchMap, objToUpdate, schemaReferenceObj, strictErrs); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// applyPatchToObject applies a strategic merge patch of <patchMap> to
// <originalMap> and stores the result in <objToUpdate>.
// NOTE: <objToUpdate> must be a versioned object.
func applyPatchToObject(
	originalMap map[string]interface{},
	patchMap map[string]interface{},
	objToUpdate runtime.Object,
	schemaReferenceObj runtime.Object,
	strictErrs []error,
) error {
	patchedObjMap, err := strategicpatch.StrategicMergeMapPatch(originalMap, patchMap, schemaReferenceObj)
	if err != nil {
		return interpretStrategicMergePatchError(err)
	}

	// Rather than serialize the patched map to JSON, then decode it to an object, we go directly from a map to an object
	converter := runtime.DefaultUnstructuredConverter
	if err := converter.FromUnstructuredWithValidation(patchedObjMap, objToUpdate, true); err != nil {
		strictError, isStrictError := runtime.AsStrictDecodingError(err)
		switch {
		case !isStrictError:
			// Disregard any strictErrs: the list is incomplete because we
			// don't know which fields were unknown, given that
			// StrategicMergeMapPatch failed.
			// Non-strict errors trump in this case.
			return apierrors.NewInvalid(schema.GroupKind{}, "", field.ErrorList{
				field.Invalid(field.NewPath("patch"), fmt.Sprintf("%+v", patchMap), err.Error()),
			})
		//case validationDirective == metav1.FieldValidationWarn:
		//	addStrictDecodingWarnings(requestContext, append(strictErrs, strictError.Errors()...))
		default:
			strictDecodingError := runtime.NewStrictDecodingError(append(strictErrs, strictError.Errors()...))
			return apierrors.NewInvalid(schema.GroupKind{}, "", field.ErrorList{
				field.Invalid(field.NewPath("patch"), fmt.Sprintf("%+v", patchMap), strictDecodingError.Error()),
			})
		}
	} else if len(strictErrs) > 0 {
		switch {
		//case validationDirective == metav1.FieldValidationWarn:
		//	addStrictDecodingWarnings(requestContext, strictErrs)
		default:
			return apierrors.NewInvalid(schema.GroupKind{}, "", field.ErrorList{
				field.Invalid(field.NewPath("patch"), fmt.Sprintf("%+v", patchMap), runtime.NewStrictDecodingError(strictErrs).Error()),
			})
		}
	}

	return nil
}
// interpretStrategicMergePatchError interprets the error type and returns an error with appropriate HTTP code.
func interpretStrategicMergePatchError(err error) error {
	switch err {
	case mergepatch.ErrBadJSONDoc, mergepatch.ErrBadPatchFormatForPrimitiveList, mergepatch.ErrBadPatchFormatForRetainKeys, mergepatch.ErrBadPatchFormatForSetElementOrderList, mergepatch.ErrUnsupportedStrategicMergePatchFormat:
		return apierrors.NewBadRequest(err.Error())
	case mergepatch.ErrNoListOfLists, mergepatch.ErrPatchContentNotMatchRetainKeys:
		return apierrors.NewGenericServerResponse(http.StatusUnprocessableEntity, "", schema.GroupResource{}, "", err.Error(), 0, false)
	default:
		return err
	}
}
52
internal/resourcemodifiers/strategic_merge_patch_test.go
Normal file
@@ -0,0 +1,52 @@
package resourcemodifiers

import (
	"testing"

	"github.com/sirupsen/logrus"
	"github.com/stretchr/testify/assert"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

func TestStrategicMergePatchFailure(t *testing.T) {
	tests := []struct {
		name string
		data string
		kind string
	}{
		{
			name: "patch with unknown kind",
			data: "{}",
			kind: "BadKind",
		},
		{
			name: "patch with bad yaml",
			data: "a: b:",
			kind: "Pod",
		},
		{
			name: "patch with bad json",
			data: `{"a"::1}`,
			kind: "Pod",
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			scheme := runtime.NewScheme()
			err := clientgoscheme.AddToScheme(scheme)
			assert.NoError(t, err)
			pt := &StrategicMergePatcher{
				patches: []StrategicMergePatch{{PatchData: tt.data}},
				scheme:  scheme,
			}

			u := &unstructured.Unstructured{}
			u.SetGroupVersionKind(schema.GroupVersionKind{Version: "v1", Kind: tt.kind})
			_, err = pt.Patch(u, logrus.New())
			assert.Error(t, err)
		})
	}
}
@@ -1,34 +1,17 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated

/*
Copyright the Velero contributors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Code generated by deepcopy-gen. DO NOT EDIT.
// Code generated by controller-gen. DO NOT EDIT.

package v2alpha1

import (
	runtime "k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime"
)

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CSISnapshotSpec) DeepCopyInto(out *CSISnapshotSpec) {
	*out = *in
	return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CSISnapshotSpec.
@@ -48,7 +31,6 @@ func (in *DataDownload) DeepCopyInto(out *DataDownload) {
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
	return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataDownload.
@@ -81,7 +63,6 @@ func (in *DataDownloadList) DeepCopyInto(out *DataDownloadList) {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
	return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataDownloadList.
@@ -114,7 +95,6 @@ func (in *DataDownloadSpec) DeepCopyInto(out *DataDownloadSpec) {
		}
	}
	out.OperationTimeout = in.OperationTimeout
	return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataDownloadSpec.
@@ -139,7 +119,6 @@ func (in *DataDownloadStatus) DeepCopyInto(out *DataDownloadStatus) {
		*out = (*in).DeepCopy()
	}
	out.Progress = in.Progress
	return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataDownloadStatus.
@@ -159,7 +138,6 @@ func (in *DataUpload) DeepCopyInto(out *DataUpload) {
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
	return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataUpload.
@@ -192,7 +170,6 @@ func (in *DataUploadList) DeepCopyInto(out *DataUploadList) {
			(*in)[i].DeepCopyInto(&(*out)[i])
		}
	}
	return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataUploadList.
@@ -227,7 +204,6 @@ func (in *DataUploadResult) DeepCopyInto(out *DataUploadResult) {
		}
	}
	return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataUploadResult.
@@ -260,7 +236,6 @@ func (in *DataUploadSpec) DeepCopyInto(out *DataUploadSpec) {
		}
	}
	out.OperationTimeout = in.OperationTimeout
	return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataUploadSpec.
@@ -296,7 +271,6 @@ func (in *DataUploadStatus) DeepCopyInto(out *DataUploadStatus) {
		*out = (*in).DeepCopy()
	}
	out.Progress = in.Progress
	return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataUploadStatus.
@@ -312,7 +286,6 @@ func (in *DataUploadStatus) DeepCopy() *DataUploadStatus {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TargetVolumeSpec) DeepCopyInto(out *TargetVolumeSpec) {
	*out = *in
	return
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TargetVolumeSpec.

@@ -20,8 +20,6 @@ import (
	"fmt"
	"sort"

	snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"

	"github.com/vmware-tanzu/velero/internal/hook"
	"github.com/vmware-tanzu/velero/internal/resourcepolicies"
	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
@@ -51,7 +49,6 @@ type Request struct {
	VolumeSnapshots    []*volume.Snapshot
	PodVolumeBackups   []*velerov1api.PodVolumeBackup
	BackedUpItems      map[itemKey]struct{}
	CSISnapshots       []snapshotv1api.VolumeSnapshot
	itemOperationsList *[]*itemoperation.BackupOperation
	ResPolicies        *resourcepolicies.Policies
	SkippedPVTracker   *skipPVTracker
68
pkg/backup/snapshots.go
Normal file
@@ -0,0 +1,68 @@
package backup

import (
	"context"

	snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
	snapshotv1listers "github.com/kubernetes-csi/external-snapshotter/client/v4/listers/volumesnapshot/v1"
	"github.com/sirupsen/logrus"
	"k8s.io/apimachinery/pkg/util/sets"
	kbclient "sigs.k8s.io/controller-runtime/pkg/client"

	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	"github.com/vmware-tanzu/velero/pkg/features"
	"github.com/vmware-tanzu/velero/pkg/label"
	"github.com/vmware-tanzu/velero/pkg/util/boolptr"
)

// UpdateBackupCSISnapshotsStatus is a common function to update the status of CSI snapshots on the backup.
// It returns the VolumeSnapshots, VolumeSnapshotContents, and VolumeSnapshotClasses referenced by the backup.
func UpdateBackupCSISnapshotsStatus(client kbclient.Client, volumeSnapshotLister snapshotv1listers.VolumeSnapshotLister, backup *velerov1api.Backup, backupLog logrus.FieldLogger) (volumeSnapshots []snapshotv1api.VolumeSnapshot, volumeSnapshotContents []snapshotv1api.VolumeSnapshotContent, volumeSnapshotClasses []snapshotv1api.VolumeSnapshotClass) {
	if boolptr.IsSetToTrue(backup.Spec.SnapshotMoveData) {
		backupLog.Info("backup SnapshotMoveData is set to true, skip VolumeSnapshot resource persistence.")
	} else if features.IsEnabled(velerov1api.CSIFeatureFlag) {
		selector := label.NewSelectorForBackup(backup.Name)
		vscList := &snapshotv1api.VolumeSnapshotContentList{}

		if volumeSnapshotLister != nil {
			tmpVSs, err := volumeSnapshotLister.List(label.NewSelectorForBackup(backup.Name))
			if err != nil {
				backupLog.Error(err)
			}
			for _, vs := range tmpVSs {
				volumeSnapshots = append(volumeSnapshots, *vs)
			}
		}

		err := client.List(context.Background(), vscList, &kbclient.ListOptions{LabelSelector: selector})
		if err != nil {
			backupLog.Error(err)
		}
		if len(vscList.Items) >= 0 {
			volumeSnapshotContents = vscList.Items
		}

		vsClassSet := sets.NewString()
		for index := range volumeSnapshotContents {
			// persist the volumesnapshotclasses referenced by vsc
			if volumeSnapshotContents[index].Spec.VolumeSnapshotClassName != nil && !vsClassSet.Has(*volumeSnapshotContents[index].Spec.VolumeSnapshotClassName) {
				vsClass := &snapshotv1api.VolumeSnapshotClass{}
				if err := client.Get(context.TODO(), kbclient.ObjectKey{Name: *volumeSnapshotContents[index].Spec.VolumeSnapshotClassName}, vsClass); err != nil {
					backupLog.Error(err)
				} else {
					vsClassSet.Insert(*volumeSnapshotContents[index].Spec.VolumeSnapshotClassName)
					volumeSnapshotClasses = append(volumeSnapshotClasses, *vsClass)
				}
			}
		}
		backup.Status.CSIVolumeSnapshotsAttempted = len(volumeSnapshots)
		csiVolumeSnapshotsCompleted := 0
		for _, vs := range volumeSnapshots {
			if vs.Status != nil && boolptr.IsSetToTrue(vs.Status.ReadyToUse) {
				csiVolumeSnapshotsCompleted++
			}
		}
		backup.Status.CSIVolumeSnapshotsCompleted = csiVolumeSnapshotsCompleted
	}
	return volumeSnapshots, volumeSnapshotContents, volumeSnapshotClasses
}
@@ -299,3 +299,9 @@ func (b *BackupBuilder) DataMover(name string) *BackupBuilder {
	b.object.Spec.DataMover = name
	return b
}

// WithStatus sets the Backup's status.
func (b *BackupBuilder) WithStatus(status velerov1api.BackupStatus) *BackupBuilder {
	b.object.Status = status
	return b
}
@@ -67,3 +67,8 @@ func (v *VolumeSnapshotBuilder) BoundVolumeSnapshotContentName(vscName string) *
	v.object.Status.BoundVolumeSnapshotContentName = &vscName
	return v
}

// SourcePVC sets the name of the PVC the VolumeSnapshot is taken from.
func (v *VolumeSnapshotBuilder) SourcePVC(name string) *VolumeSnapshotBuilder {
	v.object.Spec.Source.PersistentVolumeClaimName = &name
	return v
}
@@ -114,6 +114,7 @@ func NewServerCommand(f client.Factory) *cobra.Command {
	command.Flags().Var(formatFlag, "log-format", fmt.Sprintf("The format for log output. Valid values are %s.", strings.Join(formatFlag.AllowedValues(), ", ")))
	command.Flags().DurationVar(&config.resourceTimeout, "resource-timeout", config.resourceTimeout, "How long to wait for resource processes which are not covered by other specific timeout parameters. Default is 10 minutes.")
	command.Flags().DurationVar(&config.dataMoverPrepareTimeout, "data-mover-prepare-timeout", config.dataMoverPrepareTimeout, "How long to wait for preparing a DataUpload/DataDownload. Default is 30 minutes.")
	command.Flags().StringVar(&config.metricsAddress, "metrics-address", config.metricsAddress, "The address to expose prometheus metrics")

	return command
}
@@ -193,14 +194,15 @@ func newNodeAgentServer(logger logrus.FieldLogger, factory client.Factory, confi
	}

	s := &nodeAgentServer{
		logger:     logger,
		ctx:        ctx,
		cancelFunc: cancelFunc,
		fileSystem: filesystem.NewFileSystem(),
		mgr:        mgr,
		config:     config,
		namespace:  factory.Namespace(),
		nodeName:   nodeName,
		logger:         logger,
		ctx:            ctx,
		cancelFunc:     cancelFunc,
		fileSystem:     filesystem.NewFileSystem(),
		mgr:            mgr,
		config:         config,
		namespace:      factory.Namespace(),
		nodeName:       nodeName,
		metricsAddress: config.metricsAddress,
	}

	// the cache isn't initialized yet when "validatePodVolumesHostPath" is called, the client returned by the manager cannot
@@ -761,7 +761,6 @@ func (s *server) runControllers(defaultVolumeSnapshotLocations map[string]string
		backupStoreGetter,
		s.config.formatFlag.Parse(),
		s.csiSnapshotLister,
		s.csiSnapshotClient,
		s.credentialFileStore,
		s.config.maxConcurrentK8SConnections,
		s.config.defaultSnapshotMoveData,
@@ -825,6 +824,7 @@ func (s *server) runControllers(defaultVolumeSnapshotLocations map[string]string
	cmd.CheckError(err)
	r := controller.NewBackupFinalizerReconciler(
		s.mgr.GetClient(),
		s.csiSnapshotLister,
		clock.RealClock{},
		backupper,
		newPluginManager,
@@ -21,6 +21,7 @@ import (
	"context"
	"fmt"
	"os"
	"strings"
	"time"

	snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
@@ -33,7 +34,6 @@ import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	kerrors "k8s.io/apimachinery/pkg/util/errors"
	"k8s.io/apimachinery/pkg/util/sets"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/utils/clock"
	ctrl "sigs.k8s.io/controller-runtime"
@@ -111,7 +111,6 @@ func NewBackupReconciler(
	backupStoreGetter persistence.ObjectBackupStoreGetter,
	formatFlag logging.Format,
	volumeSnapshotLister snapshotv1listers.VolumeSnapshotLister,
	volumeSnapshotClient snapshotterClientSet.Interface,
	credentialStore credentials.FileStore,
	maxConcurrentK8SConnections int,
	defaultSnapshotMoveData bool,
@@ -137,7 +136,6 @@ func NewBackupReconciler(
		backupStoreGetter:           backupStoreGetter,
		formatFlag:                  formatFlag,
		volumeSnapshotLister:        volumeSnapshotLister,
		volumeSnapshotClient:        volumeSnapshotClient,
		credentialFileStore:         credentialStore,
		maxConcurrentK8SConnections: maxConcurrentK8SConnections,
		defaultSnapshotMoveData:     defaultSnapshotMoveData,
@@ -476,7 +474,7 @@ func (b *backupReconciler) prepareBackupRequest(backup *velerov1api.Backup, logg
		request.Status.ValidationErrors = append(request.Status.ValidationErrors, "encountered labelSelector as well as orLabelSelectors in backup spec, only one can be specified")
	}

	if request.Spec.ResourcePolicy != nil && request.Spec.ResourcePolicy.Kind == resourcepolicies.ConfigmapRefType {
	if request.Spec.ResourcePolicy != nil && strings.EqualFold(request.Spec.ResourcePolicy.Kind, resourcepolicies.ConfigmapRefType) {
		policiesConfigmap := &corev1api.ConfigMap{}
		err := b.kbClient.Get(context.Background(), kbclient.ObjectKey{Namespace: request.Namespace, Name: request.Spec.ResourcePolicy.Name}, policiesConfigmap)
		if err != nil {
@@ -655,65 +653,15 @@ func (b *backupReconciler) runBackup(backup *pkgbackup.Request) error {
		fatalErrs = append(fatalErrs, err)
	}

	// Empty slices here so that they can be passed in to the persistBackup call later, regardless of whether or not CSI's enabled.
	// This way, we only make the Lister call if the feature flag's on.
	var volumeSnapshots []snapshotv1api.VolumeSnapshot
	var volumeSnapshotContents []snapshotv1api.VolumeSnapshotContent
	var volumeSnapshotClasses []snapshotv1api.VolumeSnapshotClass
	if boolptr.IsSetToTrue(backup.Spec.SnapshotMoveData) {
		backupLog.Info("backup SnapshotMoveData is set to true, skip VolumeSnapshot resource persistence.")
	} else if features.IsEnabled(velerov1api.CSIFeatureFlag) {
		selector := label.NewSelectorForBackup(backup.Name)
		vscList := &snapshotv1api.VolumeSnapshotContentList{}

		if b.volumeSnapshotLister != nil {
			tmpVSs, err := b.volumeSnapshotLister.List(label.NewSelectorForBackup(backup.Name))
			if err != nil {
				backupLog.Error(err)
			}
			for _, vs := range tmpVSs {
				volumeSnapshots = append(volumeSnapshots, *vs)
			}
		}

		backup.CSISnapshots = volumeSnapshots

		err = b.kbClient.List(context.Background(), vscList, &kbclient.ListOptions{LabelSelector: selector})
		if err != nil {
			backupLog.Error(err)
		}
		if len(vscList.Items) >= 0 {
			volumeSnapshotContents = vscList.Items
		}

		vsClassSet := sets.NewString()
		for index := range volumeSnapshotContents {
			// persist the volumesnapshotclasses referenced by vsc
			if volumeSnapshotContents[index].Spec.VolumeSnapshotClassName != nil && !vsClassSet.Has(*volumeSnapshotContents[index].Spec.VolumeSnapshotClassName) {
				vsClass := &snapshotv1api.VolumeSnapshotClass{}
				if err := b.kbClient.Get(context.TODO(), kbclient.ObjectKey{Name: *volumeSnapshotContents[index].Spec.VolumeSnapshotClassName}, vsClass); err != nil {
					backupLog.Error(err)
				} else {
					vsClassSet.Insert(*volumeSnapshotContents[index].Spec.VolumeSnapshotClassName)
					volumeSnapshotClasses = append(volumeSnapshotClasses, *vsClass)
				}
			}
		}
	}

	// native snapshots phase will either be failed or completed right away
	// https://github.com/vmware-tanzu/velero/blob/de3ea52f0cc478e99efa7b9524c7f353514261a4/pkg/backup/item_backupper.go#L632-L639
	backup.Status.VolumeSnapshotsAttempted = len(backup.VolumeSnapshots)
	for _, snap := range backup.VolumeSnapshots {
		if snap.Status.Phase == volume.SnapshotPhaseCompleted {
			backup.Status.VolumeSnapshotsCompleted++
		}
	}

	backup.Status.CSIVolumeSnapshotsAttempted = len(backup.CSISnapshots)
	for _, vs := range backup.CSISnapshots {
		if vs.Status != nil && boolptr.IsSetToTrue(vs.Status.ReadyToUse) {
			backup.Status.CSIVolumeSnapshotsCompleted++
		}
	}
	volumeSnapshots, volumeSnapshotContents, volumeSnapshotClasses := pkgbackup.UpdateBackupCSISnapshotsStatus(b.kbClient, b.volumeSnapshotLister, backup.Backup, backupLog)

	// Iterate over backup item operations and update progress.
	// Any errors on operations at this point should be added to backup errors.
@@ -21,11 +21,13 @@ import (
	"context"
	"fmt"
	"io"
	"reflect"
	"sort"
	"strings"
	"testing"
	"time"

	"github.com/google/go-cmp/cmp"
	snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
	snapshotfake "github.com/kubernetes-csi/external-snapshotter/client/v4/clientset/versioned/fake"
	snapshotinformers "github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions"
@@ -43,6 +45,10 @@ import (
	ctrl "sigs.k8s.io/controller-runtime"
	kbclient "sigs.k8s.io/controller-runtime/pkg/client"

	kubeutil "github.com/vmware-tanzu/velero/pkg/util/kube"

	fakeClient "sigs.k8s.io/controller-runtime/pkg/client/fake"

	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	pkgbackup "github.com/vmware-tanzu/velero/pkg/backup"
	"github.com/vmware-tanzu/velero/pkg/builder"
@@ -1665,3 +1671,63 @@ func Test_getLastSuccessBySchedule(t *testing.T) {
		})
	}
}

// Unit tests to make sure that the backup's status is updated correctly during reconcile.
// This clears up confusion over whether status can be updated with Patch alone,
// without the status writer (kbClient.Status().Patch()).
func TestPatchResourceWorksWithStatus(t *testing.T) {
	type args struct {
		original *velerov1api.Backup
		updated  *velerov1api.Backup
	}
	tests := []struct {
		name    string
		args    args
		wantErr bool
	}{
		{
			name: "patch backup status",
			args: args{
				original: defaultBackup().SnapshotMoveData(false).Result(),
				updated: defaultBackup().SnapshotMoveData(false).WithStatus(velerov1api.BackupStatus{
					CSIVolumeSnapshotsCompleted: 1,
				}).Result(),
			},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			scheme := runtime.NewScheme()
			err := velerov1api.AddToScheme(scheme)
			if err != nil {
				t.Errorf("PatchResource() error = %v", err)
			}
			fakeClient := fakeClient.NewClientBuilder().WithScheme(scheme).WithObjects(tt.args.original).Build()
			fromCluster := &velerov1api.Backup{
				ObjectMeta: metav1.ObjectMeta{
					Name:      tt.args.original.Name,
					Namespace: tt.args.original.Namespace,
				},
			}
			// check original exists
			if err := fakeClient.Get(context.Background(), kbclient.ObjectKeyFromObject(tt.args.updated), fromCluster); err != nil {
				t.Errorf("PatchResource() error = %v", err)
			}
			// ignore resourceVersion
			tt.args.updated.ResourceVersion = fromCluster.ResourceVersion
			tt.args.original.ResourceVersion = fromCluster.ResourceVersion
			if err := kubeutil.PatchResource(tt.args.original, tt.args.updated, fakeClient); (err != nil) != tt.wantErr {
				t.Errorf("PatchResource() error = %v, wantErr %v", err, tt.wantErr)
			}
			// check updated exists
			if err := fakeClient.Get(context.Background(), kbclient.ObjectKeyFromObject(tt.args.updated), fromCluster); err != nil {
				t.Errorf("PatchResource() error = %v", err)
			}

			// check fromCluster is equal to updated
			if !reflect.DeepEqual(fromCluster, tt.args.updated) {
				t.Error(cmp.Diff(fromCluster, tt.args.updated))
			}
		})
	}
}
@@ -29,6 +29,8 @@ import (
	ctrl "sigs.k8s.io/controller-runtime"
	kbclient "sigs.k8s.io/controller-runtime/pkg/client"

	snapshotv1listers "github.com/kubernetes-csi/external-snapshotter/client/v4/listers/volumesnapshot/v1"

	velerov1api "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	pkgbackup "github.com/vmware-tanzu/velero/pkg/backup"
	"github.com/vmware-tanzu/velero/pkg/metrics"
@@ -40,19 +42,21 @@ import (

// backupFinalizerReconciler reconciles a Backup object
type backupFinalizerReconciler struct {
	client kbclient.Client
	clock clocks.WithTickerAndDelayedExecution
	backupper pkgbackup.Backupper
	newPluginManager func(logrus.FieldLogger) clientmgmt.Manager
	backupTracker BackupTracker
	metrics *metrics.ServerMetrics
	backupStoreGetter persistence.ObjectBackupStoreGetter
	log logrus.FieldLogger
	client               kbclient.Client
	volumeSnapshotLister snapshotv1listers.VolumeSnapshotLister
	clock                clocks.WithTickerAndDelayedExecution
	backupper            pkgbackup.Backupper
	newPluginManager     func(logrus.FieldLogger) clientmgmt.Manager
	backupTracker        BackupTracker
	metrics              *metrics.ServerMetrics
	backupStoreGetter    persistence.ObjectBackupStoreGetter
	log                  logrus.FieldLogger
}

// NewBackupFinalizerReconciler initializes and returns backupFinalizerReconciler struct.
func NewBackupFinalizerReconciler(
	client kbclient.Client,
	volumeSnapshotLister snapshotv1listers.VolumeSnapshotLister,
	clock clocks.WithTickerAndDelayedExecution,
	backupper pkgbackup.Backupper,
	newPluginManager func(logrus.FieldLogger) clientmgmt.Manager,
@@ -187,6 +191,7 @@ func (r *backupFinalizerReconciler) Reconcile(ctx context.Context, req ctrl.Requ
	backup.Status.CompletionTimestamp = &metav1.Time{Time: r.clock.Now()}
	recordBackupMetrics(log, backup, outBackupFile, r.metrics, true)

	pkgbackup.UpdateBackupCSISnapshotsStatus(r.client, r.volumeSnapshotLister, backup, log)
	// update backup metadata in object store
	backupJSON := new(bytes.Buffer)
	if err := encode.To(backup, "json", backupJSON); err != nil {
@@ -23,6 +23,7 @@ import (
	"testing"
	"time"

	snapshotv1listers "github.com/kubernetes-csi/external-snapshotter/client/v4/listers/volumesnapshot/v1"
	"github.com/sirupsen/logrus"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
@@ -43,12 +44,14 @@ import (
	"github.com/vmware-tanzu/velero/pkg/plugin/framework"
	"github.com/vmware-tanzu/velero/pkg/plugin/velero"
	velerotest "github.com/vmware-tanzu/velero/pkg/test"
	velerotestmocks "github.com/vmware-tanzu/velero/pkg/test/mocks"
)

func mockBackupFinalizerReconciler(fakeClient kbclient.Client, fakeClock *testclocks.FakeClock) (*backupFinalizerReconciler, *fakeBackupper) {
func mockBackupFinalizerReconciler(fakeClient kbclient.Client, fakeVolumeSnapshotLister snapshotv1listers.VolumeSnapshotLister, fakeClock *testclocks.FakeClock) (*backupFinalizerReconciler, *fakeBackupper) {
	backupper := new(fakeBackupper)
	return NewBackupFinalizerReconciler(
		fakeClient,
		fakeVolumeSnapshotLister,
		fakeClock,
		backupper,
		func(logrus.FieldLogger) clientmgmt.Manager { return pluginManager },
@@ -160,7 +163,10 @@ func TestBackupFinalizerReconcile(t *testing.T) {
	}

	fakeClient := velerotest.NewFakeControllerRuntimeClient(t, initObjs...)
	reconciler, backupper := mockBackupFinalizerReconciler(fakeClient, fakeClock)

	fakeVolumeSnapshotLister := velerotestmocks.NewVolumeSnapshotLister(t)

	reconciler, backupper := mockBackupFinalizerReconciler(fakeClient, fakeVolumeSnapshotLister, fakeClock)
	pluginManager.On("CleanupClients").Return(nil)
	backupStore.On("GetBackupItemOperations", test.backup.Name).Return(test.backupOperations, nil)
	backupStore.On("GetBackupContents", mock.Anything).Return(io.NopCloser(bytes.NewReader([]byte("hello world"))), nil)
@@ -275,6 +275,8 @@ func (c *backupOperationsReconciler) updateBackupAndOperationsJSON(
	return nil
}

// getBackupItemOperationProgress checks the progress of backupItemOperations.
// It returns: inProgressOperations, changes, completedCount, failedCount, errs.
func getBackupItemOperationProgress(
	backup *velerov1api.Backup,
	pluginManager clientmgmt.Manager,
@@ -287,12 +287,13 @@ func (r *DataDownloadReconciler) Reconcile(ctx context.Context, req ctrl.Request
	} else if dd.Status.Phase == velerov2alpha1api.DataDownloadPhaseInProgress {
		log.Info("Data download is in progress")
		if dd.Spec.Cancel {
			log.Info("Data download is being canceled")
			fsRestore := r.dataPathMgr.GetAsyncBR(dd.Name)
			if fsRestore == nil {
				r.OnDataDownloadCancelled(ctx, dd.GetNamespace(), dd.GetName())
				return ctrl.Result{}, nil
			}

			log.Info("Data download is being canceled")
			// Update status to Canceling.
			original := dd.DeepCopy()
			dd.Status.Phase = velerov2alpha1api.DataDownloadPhaseCanceling
@@ -300,7 +301,6 @@ func (r *DataDownloadReconciler) Reconcile(ctx context.Context, req ctrl.Request
				log.WithError(err).Error("error updating data download status")
				return ctrl.Result{}, err
			}

			fsRestore.Cancel()
			return ctrl.Result{}, nil
		}
@@ -292,13 +292,15 @@ func (r *DataUploadReconciler) Reconcile(ctx context.Context, req ctrl.Request)
	} else if du.Status.Phase == velerov2alpha1api.DataUploadPhaseInProgress {
		log.Info("Data upload is in progress")
		if du.Spec.Cancel {
			fsBackup := r.dataPathMgr.GetAsyncBR(du.Name)
			if fsBackup == nil {
				return ctrl.Result{}, nil
			}
			log.Info("Data upload is being canceled")

			// Update status to Canceling.
			fsBackup := r.dataPathMgr.GetAsyncBR(du.Name)
			if fsBackup == nil {
				r.OnDataUploadCancelled(ctx, du.GetNamespace(), du.GetName())
				return ctrl.Result{}, nil
			}

			// Update status to Canceling
			original := du.DeepCopy()
			du.Status.Phase = velerov2alpha1api.DataUploadPhaseCanceling
			if err := r.client.Patch(ctx, du, client.MergeFrom(original)); err != nil {
@@ -25,6 +25,7 @@ import (
|
||||
"io"
|
||||
"os"
|
||||
"sort"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
@@ -376,7 +377,7 @@ func (r *restoreReconciler) validateAndComplete(restore *api.Restore) (backupInf
    }

    var resourceModifiers *resourcemodifiers.ResourceModifiers = nil
    if restore.Spec.ResourceModifier != nil && restore.Spec.ResourceModifier.Kind == resourcemodifiers.ConfigmapRefType {
    if restore.Spec.ResourceModifier != nil && strings.EqualFold(restore.Spec.ResourceModifier.Kind, resourcemodifiers.ConfigmapRefType) {
        ResourceModifierConfigMap := &corev1api.ConfigMap{}
        err := r.kbClient.Get(context.Background(), client.ObjectKey{Namespace: restore.Namespace, Name: restore.Spec.ResourceModifier.Name}, ResourceModifierConfigMap)
        if err != nil {
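The hunk above swaps an exact string comparison for `strings.EqualFold`, so a resource modifier reference whose `Kind` is spelled `confIGMaP` still resolves. A minimal sketch of that behavior (the `configmapRefType` constant below is a stand-in for `resourcemodifiers.ConfigmapRefType`, not the actual package value):

```go
package main

import (
	"fmt"
	"strings"
)

// configmapRefType stands in for the expected kind value; the real
// constant lives in the resourcemodifiers package (assumed here).
const configmapRefType = "configmap"

// isConfigMapRef reports whether kind names a ConfigMap reference,
// ignoring case, as the patched validateAndComplete does.
func isConfigMapRef(kind string) bool {
	return strings.EqualFold(kind, configmapRefType)
}

func main() {
	fmt.Println(isConfigMapRef("confIGMaP")) // case-insensitive match: true
	fmt.Println(isConfigMapRef("secret"))    // false
}
```

This is why the test below deliberately uses the mixed-case `"confIGMaP"` kind.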
@@ -514,6 +515,11 @@ func (r *restoreReconciler) runValidatedRestore(restore *api.Restore, info backu
        return errors.Wrap(err, "error fetching volume snapshots metadata")
    }

    csiVolumeSnapshots, err := backupStore.GetCSIVolumeSnapshots(restore.Spec.BackupName)
    if err != nil {
        return errors.Wrap(err, "fail to fetch CSI VolumeSnapshots metadata")
    }

    restoreLog.Info("starting restore")

    var podVolumeBackups []*api.PodVolumeBackup
@@ -530,6 +536,7 @@
        BackupReader: backupFile,
        ResourceModifiers: resourceModifiers,
        DisableInformerCache: r.disableInformerCache,
        CSIVolumeSnapshots: csiVolumeSnapshots,
    }
    restoreWarnings, restoreErrors := r.restorer.RestoreWithResolvers(restoreReq, actionsResolver, pluginManager)

@@ -23,6 +23,7 @@ import (
    "testing"
    "time"

    snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"
    "github.com/stretchr/testify/assert"
@@ -471,6 +472,7 @@ func TestRestoreReconcile(t *testing.T) {
            }
            if test.expectedRestorerCall != nil {
                backupStore.On("GetBackupContents", test.backup.Name).Return(io.NopCloser(bytes.NewReader([]byte("hello world"))), nil)
                backupStore.On("GetCSIVolumeSnapshots", test.backup.Name).Return([]*snapshotv1api.VolumeSnapshot{}, nil)

                restorer.On("RestoreWithResolvers", mock.Anything, mock.Anything, mock.Anything, mock.Anything,
                    mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(warnings, errors)
@@ -781,7 +783,8 @@ func TestValidateAndCompleteWithResourceModifierSpecified(t *testing.T) {
        Spec: velerov1api.RestoreSpec{
            BackupName: "backup-1",
            ResourceModifier: &corev1.TypedLocalObjectReference{
                Kind: resourcemodifiers.ConfigmapRefType,
                // intentional to ensure case insensitivity works as expected
                Kind: "confIGMaP",
                Name: "test-configmap-invalid",
            },
        },

@@ -128,6 +128,13 @@ func (e *csiSnapshotExposer) Expose(ctx context.Context, ownerObject corev1.Obje

    curLog.WithField("vs name", volumeSnapshot.Name).Infof("VS is deleted in namespace %s", volumeSnapshot.Namespace)

    err = csi.RemoveVSCProtect(ctx, e.csiSnapshotClient, vsc.Name, csiExposeParam.Timeout)
    if err != nil {
        return errors.Wrap(err, "error to remove protect from volume snapshot content")
    }

    curLog.WithField("vsc name", vsc.Name).Infof("Removed protect from VSC")

    err = csi.EnsureDeleteVSC(ctx, e.csiSnapshotClient, vsc.Name, csiExposeParam.Timeout)
    if err != nil {
        return errors.Wrap(err, "error to delete volume snapshot content")
@@ -190,6 +197,7 @@ func (e *csiSnapshotExposer) GetExposed(ctx context.Context, ownerObject corev1.

    backupPodName := ownerObject.Name
    backupPVCName := ownerObject.Name
    volumeName := string(ownerObject.UID)

    curLog := e.log.WithFields(logrus.Fields{
        "owner": ownerObject.Name,
@@ -218,7 +226,20 @@

    curLog.WithField("backup pvc", backupPVCName).Info("Backup PVC is bound")

    return &ExposeResult{ByPod: ExposeByPod{HostingPod: pod, VolumeName: pod.Spec.Volumes[0].Name}}, nil
    i := 0
    for i = 0; i < len(pod.Spec.Volumes); i++ {
        if pod.Spec.Volumes[i].Name == volumeName {
            break
        }
    }

    if i == len(pod.Spec.Volumes) {
        return nil, errors.Errorf("backup pod %s doesn't have the expected backup volume", pod.Name)
    }

    curLog.WithField("pod", pod.Name).Infof("Backup volume is found in pod at index %v", i)

    return &ExposeResult{ByPod: ExposeByPod{HostingPod: pod, VolumeName: volumeName}}, nil
}

func (e *csiSnapshotExposer) CleanUp(ctx context.Context, ownerObject corev1.ObjectReference, vsName string, sourceNamespace string) {

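The `GetExposed` hunk above stops assuming the backup volume sits at index 0 of the hosting pod and instead scans the pod's volumes for the one named after the owner's UID, erroring when it is absent. The scan can be sketched as a small helper (the `volume` type and `findVolume` name are illustrative stand-ins, not the exposer's actual API):

```go
package main

import "fmt"

// volume stands in for corev1.Volume; only the name matters here.
type volume struct{ Name string }

// findVolume returns the index of the volume with the given name, or -1
// when the pod spec doesn't contain it -- mirroring the loop above and
// its "doesn't have the expected backup volume" error path.
func findVolume(volumes []volume, name string) int {
	for i := range volumes {
		if volumes[i].Name == name {
			return i
		}
	}
	return -1
}

func main() {
	vols := []volume{{"fake-volume"}, {"fake-volume-2"}, {"fake-uid"}}
	fmt.Println(findVolume(vols, "fake-uid")) // third volume, index 2
	fmt.Println(findVolume(vols, "missing"))  // -1: caller turns this into an error
}
```

The `TestGetExpose` cases below exercise exactly these two paths: a pod whose volume list contains `string(backup.UID)` and one that omits it.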
@@ -37,6 +37,8 @@ import (
    velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
    velerotest "github.com/vmware-tanzu/velero/pkg/test"
    "github.com/vmware-tanzu/velero/pkg/util/boolptr"

    clientFake "sigs.k8s.io/controller-runtime/pkg/client/fake"
)

type reactor struct {
@@ -384,3 +386,180 @@ func TestExpose(t *testing.T) {
        })
    }
}

func TestGetExpose(t *testing.T) {
    backup := &velerov1.Backup{
        TypeMeta: metav1.TypeMeta{
            APIVersion: velerov1.SchemeGroupVersion.String(),
            Kind: "Backup",
        },
        ObjectMeta: metav1.ObjectMeta{
            Namespace: velerov1.DefaultNamespace,
            Name: "fake-backup",
            UID: "fake-uid",
        },
    }

    backupPod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Namespace: backup.Namespace,
            Name: backup.Name,
        },
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{
                {
                    Name: "fake-volume",
                },
                {
                    Name: "fake-volume-2",
                },
                {
                    Name: string(backup.UID),
                },
            },
        },
    }

    backupPodWithoutVolume := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Namespace: backup.Namespace,
            Name: backup.Name,
        },
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{
                {
                    Name: "fake-volume-1",
                },
                {
                    Name: "fake-volume-2",
                },
            },
        },
    }

    backupPVC := &corev1.PersistentVolumeClaim{
        ObjectMeta: metav1.ObjectMeta{
            Namespace: backup.Namespace,
            Name: backup.Name,
        },
        Spec: corev1.PersistentVolumeClaimSpec{
            VolumeName: "fake-pv-name",
        },
    }

    backupPV := &corev1.PersistentVolume{
        ObjectMeta: metav1.ObjectMeta{
            Name: "fake-pv-name",
        },
    }

    scheme := runtime.NewScheme()
    corev1.AddToScheme(scheme)

    tests := []struct {
        name string
        kubeClientObj []runtime.Object
        ownerBackup *velerov1.Backup
        exposeWaitParam CSISnapshotExposeWaitParam
        Timeout time.Duration
        err string
        expectedResult *ExposeResult
    }{
        {
            name: "backup pod is not found",
            ownerBackup: backup,
            exposeWaitParam: CSISnapshotExposeWaitParam{
                NodeName: "fake-node",
            },
        },
        {
            name: "wait pvc bound fail",
            ownerBackup: backup,
            exposeWaitParam: CSISnapshotExposeWaitParam{
                NodeName: "fake-node",
            },
            kubeClientObj: []runtime.Object{
                backupPod,
            },
            Timeout: time.Second,
            err: "error to wait backup PVC bound, fake-backup: error to wait for rediness of PVC: error to get pvc velero/fake-backup: persistentvolumeclaims \"fake-backup\" not found",
        },
        {
            name: "backup volume not found in pod",
            ownerBackup: backup,
            exposeWaitParam: CSISnapshotExposeWaitParam{
                NodeName: "fake-node",
            },
            kubeClientObj: []runtime.Object{
                backupPodWithoutVolume,
                backupPVC,
                backupPV,
            },
            Timeout: time.Second,
            err: "backup pod fake-backup doesn't have the expected backup volume",
        },
        {
            name: "succeed",
            ownerBackup: backup,
            exposeWaitParam: CSISnapshotExposeWaitParam{
                NodeName: "fake-node",
            },
            kubeClientObj: []runtime.Object{
                backupPod,
                backupPVC,
                backupPV,
            },
            Timeout: time.Second,
            expectedResult: &ExposeResult{
                ByPod: ExposeByPod{
                    HostingPod: backupPod,
                    VolumeName: string(backup.UID),
                },
            },
        },
    }

    for _, test := range tests {
        t.Run(test.name, func(t *testing.T) {
            fakeKubeClient := fake.NewSimpleClientset(test.kubeClientObj...)

            fakeClientBuilder := clientFake.NewClientBuilder()
            fakeClientBuilder = fakeClientBuilder.WithScheme(scheme)

            fakeClient := fakeClientBuilder.WithRuntimeObjects(test.kubeClientObj...).Build()

            exposer := csiSnapshotExposer{
                kubeClient: fakeKubeClient,
                log: velerotest.NewLogger(),
            }

            var ownerObject corev1.ObjectReference
            if test.ownerBackup != nil {
                ownerObject = corev1.ObjectReference{
                    Kind: test.ownerBackup.Kind,
                    Namespace: test.ownerBackup.Namespace,
                    Name: test.ownerBackup.Name,
                    UID: test.ownerBackup.UID,
                    APIVersion: test.ownerBackup.APIVersion,
                }
            }

            test.exposeWaitParam.NodeClient = fakeClient

            result, err := exposer.GetExposed(context.Background(), ownerObject, test.Timeout, &test.exposeWaitParam)
            if test.err == "" {
                assert.NoError(t, err)

                if test.expectedResult == nil {
                    assert.Nil(t, result)
                } else {
                    assert.NoError(t, err)
                    assert.Equal(t, test.expectedResult.ByPod.VolumeName, result.ByPod.VolumeName)
                    assert.Equal(t, test.expectedResult.ByPod.HostingPod.Name, result.ByPod.HostingPod.Name)
                }
            } else {
                assert.EqualError(t, err, test.err)
            }
        })
    }
}

@@ -105,6 +105,7 @@ func DaemonSet(namespace string, opts ...podTemplateOption) *appsv1.DaemonSet {
                {
                    Name: "node-agent",
                    Image: c.image,
                    Ports: containerPorts(),
                    ImagePullPolicy: pullPolicy,
                    Command: []string{
                        "/velero",

@@ -64,7 +64,7 @@ func GetS3ResticEnvVars(config map[string]string) (map[string]string, error) {
        result[awsSecretKeyEnvVar] = creds.SecretAccessKey
        result[awsSessTokenEnvVar] = creds.SessionToken
        result[awsCredentialsFileEnvVar] = ""
        result[awsProfileEnvVar] = ""
        result[awsProfileEnvVar] = "" // profile is not needed since we have the credentials from profile via GetS3Credentials
        result[awsConfigFileEnvVar] = ""
    }

@@ -87,6 +87,7 @@ func GetS3Credentials(config map[string]string) (*credentials.Value, error) {
        opts.SharedConfigFiles = append(opts.SharedConfigFiles, credentialsFile)
        opts.SharedConfigState = session.SharedConfigEnable
    }
    opts.Profile = config[awsProfileKey]

    sess, err := session.NewSessionWithOptions(opts)
    if err != nil {

@@ -17,8 +17,11 @@ limitations under the License.
package config

import (
    "os"
    "reflect"
    "testing"

    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/stretchr/testify/require"
)

@@ -63,3 +66,81 @@ func TestGetS3ResticEnvVars(t *testing.T) {
        })
    }
}

func TestGetS3CredentialsCorrectlyUseProfile(t *testing.T) {
    type args struct {
        config map[string]string
        secretFileContents string
    }
    tests := []struct {
        name string
        args args
        want *credentials.Value
        wantErr bool
    }{
        {
            name: "Test GetS3Credentials use profile correctly",
            args: args{
                config: map[string]string{
                    "profile": "some-profile",
                },
                secretFileContents: `[default]
aws_access_key_id = default-access-key-id
aws_secret_access_key = default-secret-access-key
[profile some-profile]
aws_access_key_id = some-profile-access-key-id
aws_secret_access_key = some-profile-secret-access-key
`,
            },
            want: &credentials.Value{
                AccessKeyID: "some-profile-access-key-id",
                SecretAccessKey: "some-profile-secret-access-key",
            },
        },
        {
            name: "Test GetS3Credentials default to default profile",
            args: args{
                config: map[string]string{},
                secretFileContents: `[default]
aws_access_key_id = default-access-key-id
aws_secret_access_key = default-secret-access-key
[profile some-profile]
aws_access_key_id = some-profile-access-key-id
aws_secret_access_key = some-profile-secret-access-key
`,
            },
            want: &credentials.Value{
                AccessKeyID: "default-access-key-id",
                SecretAccessKey: "default-secret-access-key",
            },
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            tmpFile, err := os.CreateTemp("", "velero-test-aws-credentials")
            defer os.Remove(tmpFile.Name())
            if err != nil {
                t.Errorf("GetS3Credentials() error = %v", err)
                return
            }
            // write the contents of the secret file to the temp file
            _, err = tmpFile.WriteString(tt.args.secretFileContents)
            if err != nil {
                t.Errorf("GetS3Credentials() error = %v", err)
                return
            }
            tt.args.config["credentialsFile"] = tmpFile.Name()
            got, err := GetS3Credentials(tt.args.config)
            if (err != nil) != tt.wantErr {
                t.Errorf("GetS3Credentials() error = %v, wantErr %v", err, tt.wantErr)
                return
            }
            if !reflect.DeepEqual(got.AccessKeyID, tt.want.AccessKeyID) {
                t.Errorf("GetS3Credentials() got = %v, want %v", got.AccessKeyID, tt.want.AccessKeyID)
            }
            if !reflect.DeepEqual(got.SecretAccessKey, tt.want.SecretAccessKey) {
                t.Errorf("GetS3Credentials() got = %v, want %v", got.SecretAccessKey, tt.want.SecretAccessKey)
            }
        })
    }
}

@@ -54,7 +54,7 @@ var getS3BucketRegion = repoconfig.GetAWSBucketRegion
var getAzureStorageDomain = repoconfig.GetAzureStorageDomain

type localFuncTable struct {
    getStorageVariables func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error)
    getStorageVariables func(*velerov1api.BackupStorageLocation, string, string, credentials.FileStore) (map[string]string, error)
    getStorageCredentials func(*velerov1api.BackupStorageLocation, credentials.FileStore) (map[string]string, error)
}

@@ -345,7 +345,7 @@ func (urp *unifiedRepoProvider) GetStoreOptions(param interface{}) (map[string]s
        return map[string]string{}, errors.Errorf("invalid parameter, expect %T, actual %T", RepoParam{}, param)
    }

    storeVar, err := funcTable.getStorageVariables(repoParam.BackupLocation, urp.repoBackend, repoParam.BackupRepo.Spec.VolumeNamespace)
    storeVar, err := funcTable.getStorageVariables(repoParam.BackupLocation, urp.repoBackend, repoParam.BackupRepo.Spec.VolumeNamespace, urp.credentialGetter.FromFile)
    if err != nil {
        return map[string]string{}, errors.Wrap(err, "error to get storage variables")
    }
@@ -450,7 +450,8 @@ func getStorageCredentials(backupLocation *velerov1api.BackupStorageLocation, cr
    return result, nil
}

func getStorageVariables(backupLocation *velerov1api.BackupStorageLocation, repoBackend string, repoName string) (map[string]string, error) {
func getStorageVariables(backupLocation *velerov1api.BackupStorageLocation, repoBackend string, repoName string,
    credentialFileStore credentials.FileStore) (map[string]string, error) {
    result := make(map[string]string)

    backendType := repoconfig.GetBackendType(backupLocation.Spec.Provider, backupLocation.Spec.Config)
@@ -463,6 +464,14 @@ func getStorageVariables(backupLocation *velerov1api.BackupStorageLocation, repo
        config = map[string]string{}
    }

    if backupLocation.Spec.Credential != nil {
        credsFile, err := credentialFileStore.Path(backupLocation.Spec.Credential)
        if err != nil {
            return map[string]string{}, errors.WithStack(err)
        }
        config[repoconfig.CredentialsFileKey] = credsFile
    }

    bucket := strings.Trim(config["bucket"], "/")
    prefix := strings.Trim(config["prefix"], "/")
    if backupLocation.Spec.ObjectStorage != nil {

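The `getStorageVariables` change above threads a `credentials.FileStore` through the call chain so that, when the backup storage location carries a `Credential` secret reference, the resolved file path is injected into the repo config before the bucket/prefix handling. A minimal sketch of that merge step (the `fileStore` type, `applyCredential` helper, and `credentialsFile` key below are assumed stand-ins for the Velero types, not its actual API):

```go
package main

import "fmt"

// fileStore abstracts credentials.FileStore.Path: it materializes a
// secret reference as a file on disk and returns the path (assumed behavior).
type fileStore func(secretName string) (string, error)

// credentialsFileKey stands in for repoconfig.CredentialsFileKey.
const credentialsFileKey = "credentialsFile"

// applyCredential mirrors the added block: only when a credential is
// configured does the resolved path land in the repo config map.
func applyCredential(config map[string]string, secretName string, store fileStore) error {
	if secretName == "" {
		return nil // no credential on the location; leave config untouched
	}
	path, err := store(secretName)
	if err != nil {
		return err
	}
	config[credentialsFileKey] = path
	return nil
}

func main() {
	cfg := map[string]string{"bucket": "velero"}
	store := fileStore(func(name string) (string, error) {
		return "/tmp/credentials/" + name, nil
	})
	if err := applyCredential(cfg, "bsl-credential", store); err != nil {
		panic(err)
	}
	fmt.Println(cfg[credentialsFileKey])
}
```

This is also why every `localFuncTable` stub in the tests below gains the extra `velerocredentials.FileStore` parameter.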
@@ -521,12 +521,13 @@ func TestGetStorageVariables(t *testing.T) {
        },
    }

    credFileStore := new(credmock.FileStore)
    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            getS3BucketRegion = tc.getS3BucketRegion
            getAzureStorageDomain = tc.getAzureStorageDomain

            actual, err := getStorageVariables(&tc.backupLocation, tc.repoBackend, tc.repoName)
            actual, err := getStorageVariables(&tc.backupLocation, tc.repoBackend, tc.repoName, credFileStore)

            require.Equal(t, tc.expected, actual)

@@ -615,7 +616,7 @@ func TestGetStoreOptions(t *testing.T) {
                BackupRepo: &velerov1api.BackupRepository{},
            },
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, errors.New("fake-error-2")
                },
            },
@@ -629,7 +630,7 @@
                BackupRepo: &velerov1api.BackupRepository{},
            },
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -689,7 +690,7 @@ func TestPrepareRepo(t *testing.T) {
            repoService: new(reposervicenmocks.BackupRepoService),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, errors.New("fake-store-option-error")
                },
            },
@@ -700,7 +701,7 @@
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -720,7 +721,7 @@
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -797,7 +798,7 @@ func TestForget(t *testing.T) {
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -821,7 +822,7 @@
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -849,7 +850,7 @@
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -941,7 +942,7 @@ func TestInitRepo(t *testing.T) {
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -959,7 +960,7 @@
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -1029,7 +1030,7 @@ func TestConnectToRepo(t *testing.T) {
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -1047,7 +1048,7 @@
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -1121,7 +1122,7 @@ func TestBoostRepoConnect(t *testing.T) {
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -1148,7 +1149,7 @@
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -1174,7 +1175,7 @@
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -1261,7 +1262,7 @@ func TestPruneRepo(t *testing.T) {
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {
@@ -1279,7 +1280,7 @@
            getter: new(credmock.SecretStore),
            credStoreReturn: "fake-password",
            funcTable: localFuncTable{
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string) (map[string]string, error) {
                getStorageVariables: func(*velerov1api.BackupStorageLocation, string, string, velerocredentials.FileStore) (map[string]string, error) {
                    return map[string]string{}, nil
                },
                getStorageCredentials: func(*velerov1api.BackupStorageLocation, velerocredentials.FileStore) (map[string]string, error) {

@@ -21,6 +21,7 @@ import (
    "io"
    "sort"

    snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
    "github.com/sirupsen/logrus"
    "k8s.io/apimachinery/pkg/runtime"

@@ -60,6 +61,7 @@ type Request struct {
    itemOperationsList *[]*itemoperation.RestoreOperation
    ResourceModifiers *resourcemodifiers.ResourceModifiers
    DisableInformerCache bool
    CSIVolumeSnapshots []*snapshotv1api.VolumeSnapshot
}

type restoredItemStatus struct {

@@ -30,6 +30,7 @@ import (
    "time"

    "github.com/google/uuid"
    snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"
    v1 "k8s.io/api/core/v1"
@@ -298,6 +299,7 @@ func (kr *kubernetesRestorer) RestoreWithResolvers(
        pvsToProvision: sets.NewString(),
        pvRestorer: pvRestorer,
        volumeSnapshots: req.VolumeSnapshots,
        csiVolumeSnapshots: req.CSIVolumeSnapshots,
        podVolumeBackups: req.PodVolumeBackups,
        resourceTerminatingTimeout: kr.resourceTerminatingTimeout,
        resourceTimeout: kr.resourceTimeout,
@@ -347,6 +349,7 @@ type restoreContext struct {
    pvsToProvision sets.String
    pvRestorer PVRestorer
    volumeSnapshots []*volume.Snapshot
    csiVolumeSnapshots []*snapshotv1api.VolumeSnapshot
    podVolumeBackups []*velerov1api.PodVolumeBackup
    resourceTerminatingTimeout time.Duration
    resourceTimeout time.Duration
@@ -1287,7 +1290,35 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
    }

    case hasPodVolumeBackup(obj, ctx):
        ctx.log.Infof("Dynamically re-provisioning persistent volume because it has a pod volume backup to be restored.")
        ctx.log.WithFields(logrus.Fields{
            "namespace": obj.GetNamespace(),
            "name": obj.GetName(),
            "groupResource": groupResource.String(),
        }).Infof("Dynamically re-provisioning persistent volume because it has a pod volume backup to be restored.")
        ctx.pvsToProvision.Insert(name)

        // Return early because we don't want to restore the PV itself, we
        // want to dynamically re-provision it.
        return warnings, errs, itemExists

    case hasCSIVolumeSnapshot(ctx, obj):
        ctx.log.WithFields(logrus.Fields{
            "namespace": obj.GetNamespace(),
            "name": obj.GetName(),
            "groupResource": groupResource.String(),
        }).Infof("Dynamically re-provisioning persistent volume because it has a related CSI VolumeSnapshot.")
        ctx.pvsToProvision.Insert(name)

        // Return early because we don't want to restore the PV itself, we
        // want to dynamically re-provision it.
        return warnings, errs, itemExists

    case hasSnapshotDataUpload(ctx, obj):
        ctx.log.WithFields(logrus.Fields{
            "namespace": obj.GetNamespace(),
            "name": obj.GetName(),
            "groupResource": groupResource.String(),
        }).Infof("Dynamically re-provisioning persistent volume because it has a related snapshot DataUpload.")
        ctx.pvsToProvision.Insert(name)

        // Return early because we don't want to restore the PV itself, we
@@ -1295,7 +1326,11 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
        return warnings, errs, itemExists

    case hasDeleteReclaimPolicy(obj.Object):
        ctx.log.Infof("Dynamically re-provisioning persistent volume because it doesn't have a snapshot and its reclaim policy is Delete.")
        ctx.log.WithFields(logrus.Fields{
            "namespace": obj.GetNamespace(),
            "name": obj.GetName(),
            "groupResource": groupResource.String(),
        }).Infof("Dynamically re-provisioning persistent volume because it doesn't have a snapshot and its reclaim policy is Delete.")
        ctx.pvsToProvision.Insert(name)

        // Return early because we don't want to restore the PV itself, we
@@ -1303,7 +1338,11 @@
        return warnings, errs, itemExists

    default:
        ctx.log.Infof("Restoring persistent volume as-is because it doesn't have a snapshot and its reclaim policy is not Delete.")
        ctx.log.WithFields(logrus.Fields{
            "namespace": obj.GetNamespace(),
            "name": obj.GetName(),
            "groupResource": groupResource.String(),
        }).Infof("Restoring persistent volume as-is because it doesn't have a snapshot and its reclaim policy is not Delete.")

        // Check to see if the claimRef.namespace field needs to be remapped, and do so if necessary.
        _, err = remapClaimRefNS(ctx, obj)
@@ -1466,7 +1505,7 @@ func (ctx *restoreContext) restoreItem(obj *unstructured.Unstructured, groupReso
    }

    if ctx.resourceModifiers != nil {
        if errList := ctx.resourceModifiers.ApplyResourceModifierRules(obj, groupResource.String(), ctx.log); errList != nil {
        if errList := ctx.resourceModifiers.ApplyResourceModifierRules(obj, groupResource.String(), ctx.kbClient.Scheme(), ctx.log); errList != nil {
            for _, err := range errList {
                errs.Add(namespace, err)
            }
@@ -1524,18 +1563,17 @@
    }

    if restoreErr != nil {
        // check for the existence of the object in cluster, if no error then it implies that object exists
        // and if err then we want to judge whether there is an existing error in the previous creation.
        // if so, we will return the 'get' error.
        // otherwise, we will return the original creation error.
        // check for the existence of the object that failed creation due to alreadyExist in cluster, if no error then it implies that object exists.
        // and if err then itemExists remains false as we were not able to confirm the existence of the object via Get call or creation call.
        // We return the get error as a warning to notify the user that the object could exist in cluster and we were not able to confirm it.
        if !ctx.disableInformerCache {
            fromCluster, err = ctx.getResource(groupResource, obj, namespace, name)
        } else {
            fromCluster, err = resourceClient.Get(name, metav1.GetOptions{})
        }
        if err != nil && isAlreadyExistsError {
            ctx.log.Errorf("Error retrieving in-cluster version of %s: %v", kube.NamespaceAndName(obj), err)
            errs.Add(namespace, err)
            ctx.log.Warnf("Unable to retrieve in-cluster version of %s: %v, object won't be restored by velero or have restore labels, and existing resource policy is not applied", kube.NamespaceAndName(obj), err)
|
||||
warnings.Add(namespace, err)
|
||||
return warnings, errs, itemExists
|
||||
}
|
||||
}
|
||||
@@ -1930,6 +1968,55 @@ func hasSnapshot(pvName string, snapshots []*volume.Snapshot) bool {
|
||||
return false
|
||||
}
|
||||
|
||||
func hasCSIVolumeSnapshot(ctx *restoreContext, unstructuredPV *unstructured.Unstructured) bool {
|
||||
pv := new(v1.PersistentVolume)
|
||||
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(unstructuredPV.Object, pv); err != nil {
|
||||
ctx.log.WithError(err).Warnf("Unable to convert PV from unstructured to structured")
|
||||
return false
|
||||
}
|
||||
|
||||
for _, vs := range ctx.csiVolumeSnapshots {
|
||||
if pv.Spec.ClaimRef.Name == *vs.Spec.Source.PersistentVolumeClaimName &&
|
||||
pv.Spec.ClaimRef.Namespace == vs.Namespace {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func hasSnapshotDataUpload(ctx *restoreContext, unstructuredPV *unstructured.Unstructured) bool {
|
||||
pv := new(v1.PersistentVolume)
|
||||
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(unstructuredPV.Object, pv); err != nil {
|
||||
ctx.log.WithError(err).Warnf("Unable to convert PV from unstructured to structured")
|
||||
return false
|
||||
}
|
||||
|
||||
if pv.Spec.ClaimRef == nil {
|
||||
return false
|
||||
}
|
||||
|
||||
dataUploadResultList := new(v1.ConfigMapList)
|
||||
err := ctx.kbClient.List(go_context.TODO(), dataUploadResultList, &crclient.ListOptions{
|
||||
LabelSelector: labels.SelectorFromSet(map[string]string{
|
||||
velerov1api.RestoreUIDLabel: label.GetValidName(string(ctx.restore.GetUID())),
|
||||
velerov1api.PVCNamespaceNameLabel: label.GetValidName(pv.Spec.ClaimRef.Namespace + "." + pv.Spec.ClaimRef.Name),
|
||||
velerov1api.ResourceUsageLabel: label.GetValidName(string(velerov1api.VeleroResourceUsageDataUploadResult)),
|
||||
}),
|
||||
})
|
||||
if err != nil {
|
||||
ctx.log.WithError(err).Warnf("Fail to list DataUpload result CM.")
|
||||
return false
|
||||
}
|
||||
|
||||
if len(dataUploadResultList.Items) != 1 {
|
||||
ctx.log.WithError(fmt.Errorf("dataupload result number is not expected")).
|
||||
Warnf("Got %d DataUpload result. Expect one.", len(dataUploadResultList.Items))
|
||||
return false
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
func hasPodVolumeBackup(unstructuredPV *unstructured.Unstructured, ctx *restoreContext) bool {
|
||||
if len(ctx.podVolumeBackups) == 0 {
|
||||
return false
|
||||
|
||||
@@ -25,6 +25,7 @@ import (
	"testing"
	"time"

	snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"github.com/stretchr/testify/assert"
@@ -2256,6 +2257,7 @@ func (*volumeSnapshotter) DeleteSnapshot(snapshotID string) error {
// Verification is done by looking at the contents of the API and the metadata/spec/status of
// the items in the API.
func TestRestorePersistentVolumes(t *testing.T) {
	testPVCName := "testPVC"
	tests := []struct {
		name                    string
		restore                 *velerov1api.Restore
@@ -2265,6 +2267,8 @@ func TestRestorePersistentVolumes(t *testing.T) {
		volumeSnapshots         []*volume.Snapshot
		volumeSnapshotLocations []*velerov1api.VolumeSnapshotLocation
		volumeSnapshotterGetter volumeSnapshotterGetter
		csiVolumeSnapshots      []*snapshotv1api.VolumeSnapshot
		dataUploadResult        *corev1api.ConfigMap
		want                    []*test.APIResource
		wantError               bool
		wantWarning             bool
@@ -2923,6 +2927,77 @@ func TestRestorePersistentVolumes(t *testing.T) {
				),
			},
		},
		{
			name:    "when a PV with a reclaim policy of retain has a CSI VolumeSnapshot and does not exist in-cluster, the PV is not restored",
			restore: defaultRestore().Result(),
			backup:  defaultBackup().Result(),
			tarball: test.NewTarWriter(t).
				AddItems("persistentvolumes",
					builder.ForPersistentVolume("pv-1").
						ReclaimPolicy(corev1api.PersistentVolumeReclaimRetain).
						ClaimRef("velero", testPVCName).
						Result(),
				).
				Done(),
			apiResources: []*test.APIResource{
				test.PVs(),
				test.PVCs(),
			},
			csiVolumeSnapshots: []*snapshotv1api.VolumeSnapshot{
				{
					ObjectMeta: metav1.ObjectMeta{
						Namespace: "velero",
						Name:      "test",
					},
					Spec: snapshotv1api.VolumeSnapshotSpec{
						Source: snapshotv1api.VolumeSnapshotSource{
							PersistentVolumeClaimName: &testPVCName,
						},
					},
				},
			},
			volumeSnapshotLocations: []*velerov1api.VolumeSnapshotLocation{
				builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "default").Provider("provider-1").Result(),
			},
			volumeSnapshotterGetter: map[string]vsv1.VolumeSnapshotter{
				"provider-1": &volumeSnapshotter{
					snapshotVolumes: map[string]string{"snapshot-1": "new-volume"},
				},
			},
			want: []*test.APIResource{},
		},
		{
			name:    "when a PV with a reclaim policy of retain has a DataUpload result CM and does not exist in-cluster, the PV is not restored",
			restore: defaultRestore().ObjectMeta(builder.WithUID("fakeUID")).Result(),
			backup:  defaultBackup().Result(),
			tarball: test.NewTarWriter(t).
				AddItems("persistentvolumes",
					builder.ForPersistentVolume("pv-1").
						ReclaimPolicy(corev1api.PersistentVolumeReclaimRetain).
						ClaimRef("velero", testPVCName).
						Result(),
				).
				Done(),
			apiResources: []*test.APIResource{
				test.PVs(),
				test.PVCs(),
				test.ConfigMaps(),
			},
			volumeSnapshotLocations: []*velerov1api.VolumeSnapshotLocation{
				builder.ForVolumeSnapshotLocation(velerov1api.DefaultNamespace, "default").Provider("provider-1").Result(),
			},
			volumeSnapshotterGetter: map[string]vsv1.VolumeSnapshotter{
				"provider-1": &volumeSnapshotter{
					snapshotVolumes: map[string]string{"snapshot-1": "new-volume"},
				},
			},
			dataUploadResult: builder.ForConfigMap("velero", "test").ObjectMeta(builder.WithLabelsMap(map[string]string{
				velerov1api.RestoreUIDLabel:       "fakeUID",
				velerov1api.PVCNamespaceNameLabel: "velero/testPVC",
				velerov1api.ResourceUsageLabel:    string(velerov1api.VeleroResourceUsageDataUploadResult),
			})).Result(),
			want: []*test.APIResource{},
		},
	}

	for _, tc := range tests {
@@ -2939,6 +3014,10 @@ func TestRestorePersistentVolumes(t *testing.T) {
				require.NoError(t, h.restorer.kbClient.Create(context.Background(), vsl))
			}

			if tc.dataUploadResult != nil {
				require.NoError(t, h.restorer.kbClient.Create(context.TODO(), tc.dataUploadResult))
			}

			for _, r := range tc.apiResources {
				h.AddItems(t, r)
			}
@@ -2955,11 +3034,12 @@ func TestRestorePersistentVolumes(t *testing.T) {
			}

			data := &Request{
				Log:             h.log,
				Restore:         tc.restore,
				Backup:          tc.backup,
				VolumeSnapshots: tc.volumeSnapshots,
				BackupReader:    tc.tarball,
				Log:                h.log,
				Restore:            tc.restore,
				Backup:             tc.backup,
				VolumeSnapshots:    tc.volumeSnapshots,
				BackupReader:       tc.tarball,
				CSIVolumeSnapshots: tc.csiVolumeSnapshots,
			}
			warnings, errs := h.restorer.Restore(
				data,
@@ -3652,3 +3732,175 @@ func TestIsAlreadyExistsError(t *testing.T) {
		})
	}
}

func TestHasCSIVolumeSnapshot(t *testing.T) {
	tests := []struct {
		name           string
		vs             *snapshotv1api.VolumeSnapshot
		obj            *unstructured.Unstructured
		expectedResult bool
	}{
		{
			name: "Invalid PV, expect false.",
			obj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"kind": 1,
				},
			},
			expectedResult: false,
		},
		{
			name: "Cannot find VS, expect false",
			obj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"kind":       "PersistentVolume",
					"apiVersion": "v1",
					"metadata": map[string]interface{}{
						"namespace": "default",
						"name":      "test",
					},
				},
			},
			expectedResult: false,
		},
		{
			name: "Find VS, expect true.",
			obj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"kind":       "PersistentVolume",
					"apiVersion": "v1",
					"metadata": map[string]interface{}{
						"namespace": "velero",
						"name":      "test",
					},
					"spec": map[string]interface{}{
						"claimRef": map[string]interface{}{
							"namespace": "velero",
							"name":      "test",
						},
					},
				},
			},
			vs:             builder.ForVolumeSnapshot("velero", "test").SourcePVC("test").Result(),
			expectedResult: true,
		},
	}

	for _, tc := range tests {
		h := newHarness(t)

		ctx := &restoreContext{
			log: h.log,
		}

		if tc.vs != nil {
			ctx.csiVolumeSnapshots = []*snapshotv1api.VolumeSnapshot{tc.vs}
		}

		t.Run(tc.name, func(t *testing.T) {
			require.Equal(t, tc.expectedResult, hasCSIVolumeSnapshot(ctx, tc.obj))
		})
	}
}

func TestHasSnapshotDataUpload(t *testing.T) {
	tests := []struct {
		name           string
		duResult       *corev1api.ConfigMap
		obj            *unstructured.Unstructured
		expectedResult bool
		restore        *velerov1api.Restore
	}{
		{
			name: "Invalid PV, expect false.",
			obj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"kind": 1,
				},
			},
			expectedResult: false,
		},
		{
			name: "PV without ClaimRef, expect false",
			obj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"kind":       "PersistentVolume",
					"apiVersion": "v1",
					"metadata": map[string]interface{}{
						"namespace": "default",
						"name":      "test",
					},
				},
			},
			duResult:       builder.ForConfigMap("velero", "test").Result(),
			restore:        builder.ForRestore("velero", "test").ObjectMeta(builder.WithUID("fakeUID")).Result(),
			expectedResult: false,
		},
		{
			name: "Cannot find DataUploadResult CM, expect false",
			obj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"kind":       "PersistentVolume",
					"apiVersion": "v1",
					"metadata": map[string]interface{}{
						"namespace": "default",
						"name":      "test",
					},
					"spec": map[string]interface{}{
						"claimRef": map[string]interface{}{
							"namespace": "velero",
							"name":      "testPVC",
						},
					},
				},
			},
			duResult:       builder.ForConfigMap("velero", "test").Result(),
			restore:        builder.ForRestore("velero", "test").ObjectMeta(builder.WithUID("fakeUID")).Result(),
			expectedResult: false,
		},
		{
			name: "Find DataUploadResult CM, expect true",
			obj: &unstructured.Unstructured{
				Object: map[string]interface{}{
					"kind":       "PersistentVolume",
					"apiVersion": "v1",
					"metadata": map[string]interface{}{
						"namespace": "default",
						"name":      "test",
					},
					"spec": map[string]interface{}{
						"claimRef": map[string]interface{}{
							"namespace": "velero",
							"name":      "testPVC",
						},
					},
				},
			},
			duResult: builder.ForConfigMap("velero", "test").ObjectMeta(builder.WithLabelsMap(map[string]string{
				velerov1api.RestoreUIDLabel:       "fakeUID",
				velerov1api.PVCNamespaceNameLabel: "velero/testPVC",
				velerov1api.ResourceUsageLabel:    string(velerov1api.VeleroResourceUsageDataUploadResult),
			})).Result(),
			restore:        builder.ForRestore("velero", "test").ObjectMeta(builder.WithUID("fakeUID")).Result(),
			expectedResult: false,
		},
	}

	for _, tc := range tests {
		h := newHarness(t)

		ctx := &restoreContext{
			log:      h.log,
			kbClient: h.restorer.kbClient,
			restore:  tc.restore,
		}

		if tc.duResult != nil {
			require.NoError(t, ctx.kbClient.Create(context.TODO(), tc.duResult))
		}

		t.Run(tc.name, func(t *testing.T) {
			require.Equal(t, tc.expectedResult, hasSnapshotDataUpload(ctx, tc.obj))
		})
	}
}

pkg/test/mocks.go (new file, 20 lines)
@@ -0,0 +1,20 @@
package test

import (
	snapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
	snapshotv1listers "github.com/kubernetes-csi/external-snapshotter/client/v4/listers/volumesnapshot/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// VolumeSnapshotLister helps list VolumeSnapshots.
// All objects returned here must be treated as read-only.
//
//go:generate mockery --name VolumeSnapshotLister
type VolumeSnapshotLister interface {
	// List lists all VolumeSnapshots in the indexer.
	// Objects returned here must be treated as read-only.
	List(selector labels.Selector) (ret []*snapshotv1.VolumeSnapshot, err error)
	// VolumeSnapshots returns an object that can list and get VolumeSnapshots.
	VolumeSnapshots(namespace string) snapshotv1listers.VolumeSnapshotNamespaceLister
	snapshotv1listers.VolumeSnapshotListerExpansion
}
pkg/test/mocks/VolumeSnapshotLister.go (new file, 73 lines)
@@ -0,0 +1,73 @@
// Code generated by mockery v2.35.4. DO NOT EDIT.

package mocks

import (
	mock "github.com/stretchr/testify/mock"
	labels "k8s.io/apimachinery/pkg/labels"

	v1 "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"

	volumesnapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v4/listers/volumesnapshot/v1"
)

// VolumeSnapshotLister is an autogenerated mock type for the VolumeSnapshotLister type
type VolumeSnapshotLister struct {
	mock.Mock
}

// List provides a mock function with given fields: selector
func (_m *VolumeSnapshotLister) List(selector labels.Selector) ([]*v1.VolumeSnapshot, error) {
	ret := _m.Called(selector)

	var r0 []*v1.VolumeSnapshot
	var r1 error
	if rf, ok := ret.Get(0).(func(labels.Selector) ([]*v1.VolumeSnapshot, error)); ok {
		return rf(selector)
	}
	if rf, ok := ret.Get(0).(func(labels.Selector) []*v1.VolumeSnapshot); ok {
		r0 = rf(selector)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).([]*v1.VolumeSnapshot)
		}
	}

	if rf, ok := ret.Get(1).(func(labels.Selector) error); ok {
		r1 = rf(selector)
	} else {
		r1 = ret.Error(1)
	}

	return r0, r1
}

// VolumeSnapshots provides a mock function with given fields: namespace
func (_m *VolumeSnapshotLister) VolumeSnapshots(namespace string) volumesnapshotv1.VolumeSnapshotNamespaceLister {
	ret := _m.Called(namespace)

	var r0 volumesnapshotv1.VolumeSnapshotNamespaceLister
	if rf, ok := ret.Get(0).(func(string) volumesnapshotv1.VolumeSnapshotNamespaceLister); ok {
		r0 = rf(namespace)
	} else {
		if ret.Get(0) != nil {
			r0 = ret.Get(0).(volumesnapshotv1.VolumeSnapshotNamespaceLister)
		}
	}

	return r0
}

// NewVolumeSnapshotLister creates a new instance of VolumeSnapshotLister. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewVolumeSnapshotLister(t interface {
	mock.TestingT
	Cleanup(func())
}) *VolumeSnapshotLister {
	mock := &VolumeSnapshotLister{}
	mock.Mock.Test(t)

	t.Cleanup(func() { mock.AssertExpectations(t) })

	return mock
}
@@ -142,6 +142,17 @@ func ServiceAccounts(items ...metav1.Object) *APIResource {
	}
}

func ConfigMaps(items ...metav1.Object) *APIResource {
	return &APIResource{
		Group:      "",
		Version:    "v1",
		Name:       "configmaps",
		ShortName:  "cm",
		Namespaced: true,
		Items:      items,
	}
}

func CRDs(items ...metav1.Object) *APIResource {
	return &APIResource{
		Group: "apiextensions.k8s.io",

@@ -236,10 +236,10 @@ func SnapshotSource(

		mani, err := loadSnapshotFunc(ctx, rep, manifest.ID(parentSnapshot))
		if err != nil {
			return "", 0, errors.Wrapf(err, "Failed to load previous snapshot %v from kopia", parentSnapshot)
			log.WithError(err).Warnf("Failed to load previous snapshot %v from kopia, fallback to full backup", parentSnapshot)
		} else {
			previous = append(previous, mani)
		}

		previous = append(previous, mani)
	} else {
		log.Infof("Searching for parent snapshot")

@@ -112,7 +112,7 @@ func TestSnapshotSource(t *testing.T) {
			notError: true,
		},
		{
			name: "failed to load snapshot",
			name: "failed to load snapshot, should fallback to full backup and not error",
			args: []mockArgs{
				{methodName: "LoadSnapshot", returns: []interface{}{manifest, errors.New("failed to load snapshot")}},
				{methodName: "SaveSnapshot", returns: []interface{}{manifest.ID, nil}},
@@ -122,7 +122,7 @@ func TestSnapshotSource(t *testing.T) {
				{methodName: "Upload", returns: []interface{}{manifest, nil}},
				{methodName: "Flush", returns: []interface{}{nil}},
			},
			notError: false,
			notError: true,
		},
		{
			name: "failed to save snapshot",

@@ -31,6 +31,7 @@ import (

	"github.com/vmware-tanzu/velero/pkg/util/boolptr"
	"github.com/vmware-tanzu/velero/pkg/util/stringptr"
	"github.com/vmware-tanzu/velero/pkg/util/stringslice"

	snapshotv1api "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
	snapshotter "github.com/kubernetes-csi/external-snapshotter/client/v4/clientset/versioned/typed/volumesnapshot/v1"
@@ -41,7 +42,8 @@ import (
)

const (
	waitInternal = 2 * time.Second
	waitInternal                          = 2 * time.Second
	volumeSnapshotContentProtectFinalizer = "velero.io/volume-snapshot-content-protect-finalizer"
)

// WaitVolumeSnapshotReady waits a VS to become ready to use until the timeout reaches
@@ -97,36 +99,17 @@ func GetVolumeSnapshotContentForVolumeSnapshot(volSnap *snapshotv1api.VolumeSnap
	return vsc, nil
}

// RetainVSC updates the VSC's deletion policy to Retain and return the update VSC
// RetainVSC updates the VSC's deletion policy to Retain, adds a finalizer, and then returns the updated VSC
func RetainVSC(ctx context.Context, snapshotClient snapshotter.SnapshotV1Interface,
	vsc *snapshotv1api.VolumeSnapshotContent) (*snapshotv1api.VolumeSnapshotContent, error) {
	if vsc.Spec.DeletionPolicy == snapshotv1api.VolumeSnapshotContentRetain {
		return vsc, nil
	}
	origBytes, err := json.Marshal(vsc)
	if err != nil {
		return nil, errors.Wrap(err, "error marshaling original VSC")
	}

	updated := vsc.DeepCopy()
	updated.Spec.DeletionPolicy = snapshotv1api.VolumeSnapshotContentRetain

	updatedBytes, err := json.Marshal(updated)
	if err != nil {
		return nil, errors.Wrap(err, "error marshaling updated VSC")
	}

	patchBytes, err := jsonpatch.CreateMergePatch(origBytes, updatedBytes)
	if err != nil {
		return nil, errors.Wrap(err, "error creating json merge patch for VSC")
	}

	retained, err := snapshotClient.VolumeSnapshotContents().Patch(ctx, vsc.Name, types.MergePatchType, patchBytes, metav1.PatchOptions{})
	if err != nil {
		return nil, errors.Wrap(err, "error patching VSC")
	}

	return retained, nil
	return patchVSC(ctx, snapshotClient, vsc, func(updated *snapshotv1api.VolumeSnapshotContent) {
		updated.Spec.DeletionPolicy = snapshotv1api.VolumeSnapshotContentRetain
		updated.Finalizers = append(updated.Finalizers, volumeSnapshotContentProtectFinalizer)
	})
}

// DeleteVolumeSnapshotContentIfAny deletes a VSC by name if it exists, and log an error when the deletion fails
@@ -169,11 +152,35 @@ func EnsureDeleteVS(ctx context.Context, snapshotClient snapshotter.SnapshotV1In
	return nil
}

func RemoveVSCProtect(ctx context.Context, snapshotClient snapshotter.SnapshotV1Interface, vscName string, timeout time.Duration) error {
	err := wait.PollImmediate(waitInternal, timeout, func() (bool, error) {
		vsc, err := snapshotClient.VolumeSnapshotContents().Get(ctx, vscName, metav1.GetOptions{})
		if err != nil {
			return false, errors.Wrapf(err, "error to get VolumeSnapshotContent %s", vscName)
		}

		vsc.Finalizers = stringslice.Except(vsc.Finalizers, volumeSnapshotContentProtectFinalizer)

		_, err = snapshotClient.VolumeSnapshotContents().Update(ctx, vsc, metav1.UpdateOptions{})
		if err == nil {
			return true, nil
		}

		if !apierrors.IsConflict(err) {
			return false, errors.Wrapf(err, "error to update VolumeSnapshotContent %s", vscName)
		}

		return false, nil
	})

	return err
}

// EnsureDeleteVSC asserts the existence of a VSC by name, deletes it and waits for its disappearance and returns errors on any failure
func EnsureDeleteVSC(ctx context.Context, snapshotClient snapshotter.SnapshotV1Interface,
	vscName string, timeout time.Duration) error {
	err := snapshotClient.VolumeSnapshotContents().Delete(ctx, vscName, metav1.DeleteOptions{})
	if err != nil {
	if err != nil && !apierrors.IsNotFound(err) {
		return errors.Wrap(err, "error to delete volume snapshot content")
	}

@@ -208,3 +215,31 @@ func DeleteVolumeSnapshotIfAny(ctx context.Context, snapshotClient snapshotter.S
		}
	}
}

func patchVSC(ctx context.Context, snapshotClient snapshotter.SnapshotV1Interface,
	vsc *snapshotv1api.VolumeSnapshotContent, updateFunc func(*snapshotv1api.VolumeSnapshotContent)) (*snapshotv1api.VolumeSnapshotContent, error) {
	origBytes, err := json.Marshal(vsc)
	if err != nil {
		return nil, errors.Wrap(err, "error marshaling original VSC")
	}

	updated := vsc.DeepCopy()
	updateFunc(updated)

	updatedBytes, err := json.Marshal(updated)
	if err != nil {
		return nil, errors.Wrap(err, "error marshaling updated VSC")
	}

	patchBytes, err := jsonpatch.CreateMergePatch(origBytes, updatedBytes)
	if err != nil {
		return nil, errors.Wrap(err, "error creating json merge patch for VSC")
	}

	patched, err := snapshotClient.VolumeSnapshotContents().Patch(ctx, vsc.Name, types.MergePatchType, patchBytes, metav1.PatchOptions{})
	if err != nil {
		return nil, errors.Wrap(err, "error patching VSC")
	}

	return patched, nil
}

@@ -34,6 +34,8 @@ import (
	"github.com/vmware-tanzu/velero/pkg/util/stringptr"

	velerotest "github.com/vmware-tanzu/velero/pkg/test"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

type reactor struct {
@@ -364,9 +366,23 @@ func TestEnsureDeleteVSC(t *testing.T) {
		err string
	}{
		{
			name:    "delete fail",
			name:    "delete fail on VSC not found",
			vscName: "fake-vsc",
			err:     "error to delete volume snapshot content: volumesnapshotcontents.snapshot.storage.k8s.io \"fake-vsc\" not found",
		},
		{
			name:      "delete fail on others",
			vscName:   "fake-vsc",
			clientObj: []runtime.Object{vscObj},
			reactors: []reactor{
				{
					verb:     "delete",
					resource: "volumesnapshotcontents",
					reactorFunc: func(action clientTesting.Action) (handled bool, ret runtime.Object, err error) {
						return true, nil, errors.New("fake-delete-error")
					},
				},
			},
			err: "error to delete volume snapshot content: fake-delete-error",
		},
		{
			name: "wait fail",
@@ -399,7 +415,7 @@ func TestEnsureDeleteVSC(t *testing.T) {
			}

			err := EnsureDeleteVSC(context.Background(), fakeSnapshotClient.SnapshotV1(), test.vscName, time.Millisecond)
			if err != nil {
			if test.err != "" {
				assert.EqualError(t, err, test.err)
			} else {
				assert.NoError(t, err)
@@ -601,7 +617,8 @@ func TestRetainVSC(t *testing.T) {
			clientObj: []runtime.Object{vscObj},
			updated: &snapshotv1api.VolumeSnapshotContent{
				ObjectMeta: metav1.ObjectMeta{
					Name: "fake-vsc",
					Name:       "fake-vsc",
					Finalizers: []string{volumeSnapshotContentProtectFinalizer},
				},
				Spec: snapshotv1api.VolumeSnapshotContentSpec{
					DeletionPolicy: snapshotv1api.VolumeSnapshotContentRetain,
@@ -634,3 +651,98 @@ func TestRetainVSC(t *testing.T) {
		})
	}
}

func TestRemoveVSCProtect(t *testing.T) {
	vscObj := &snapshotv1api.VolumeSnapshotContent{
		ObjectMeta: metav1.ObjectMeta{
			Name:       "fake-vsc",
			Finalizers: []string{volumeSnapshotContentProtectFinalizer},
		},
	}

	tests := []struct {
		name      string
		clientObj []runtime.Object
		reactors  []reactor
		vsc       string
		updated   *snapshotv1api.VolumeSnapshotContent
		timeout   time.Duration
		err       string
	}{
		{
			name: "get vsc error",
			vsc:  "fake-vsc",
			err:  "error to get VolumeSnapshotContent fake-vsc: volumesnapshotcontents.snapshot.storage.k8s.io \"fake-vsc\" not found",
		},
		{
			name:      "update vsc fail",
			vsc:       "fake-vsc",
			clientObj: []runtime.Object{vscObj},
			reactors: []reactor{
				{
					verb:     "update",
					resource: "volumesnapshotcontents",
					reactorFunc: func(action clientTesting.Action) (handled bool, ret runtime.Object, err error) {
						return true, nil, errors.New("fake-update-error")
					},
				},
			},
			err: "error to update VolumeSnapshotContent fake-vsc: fake-update-error",
		},
		{
			name:      "update vsc timeout",
			vsc:       "fake-vsc",
			clientObj: []runtime.Object{vscObj},
			reactors: []reactor{
				{
					verb:     "update",
					resource: "volumesnapshotcontents",
					reactorFunc: func(action clientTesting.Action) (handled bool, ret runtime.Object, err error) {
						return true, nil, &apierrors.StatusError{ErrStatus: metav1.Status{
							Reason: metav1.StatusReasonConflict,
						}}
					},
				},
			},
			timeout: time.Second,
			err:     "timed out waiting for the condition",
		},
		{
			name:      "succeed",
			vsc:       "fake-vsc",
			clientObj: []runtime.Object{vscObj},
			timeout:   time.Second,
			updated: &snapshotv1api.VolumeSnapshotContent{
				ObjectMeta: metav1.ObjectMeta{
					Name:       "fake-vsc",
					Finalizers: []string{},
				},
			},
		},
	}

	for _, test := range tests {
		t.Run(test.name, func(t *testing.T) {
			fakeSnapshotClient := snapshotFake.NewSimpleClientset(test.clientObj...)

			for _, reactor := range test.reactors {
				fakeSnapshotClient.Fake.PrependReactor(reactor.verb, reactor.resource, reactor.reactorFunc)
			}

			err := RemoveVSCProtect(context.Background(), fakeSnapshotClient.SnapshotV1(), test.vsc, test.timeout)

			if len(test.err) == 0 {
				assert.NoError(t, err)
			} else {
				assert.EqualError(t, err, test.err)
			}

			if test.updated != nil {
				updated, err := fakeSnapshotClient.SnapshotV1().VolumeSnapshotContents().Get(context.Background(), test.vsc, metav1.GetOptions{})
				assert.NoError(t, err)

				assert.Equal(t, test.updated.Finalizers, updated.Finalizers)
			}
		})
	}
}

@@ -105,3 +105,83 @@ resourceModifierRules:
- Update a container's image using a json patch with positional arrays
  `kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'`
- Before creating the resource modifier yaml, you can try it out using the kubectl patch command. The same commands should work as-is.

#### JSON Merge Patch
You can modify a resource using JSON Merge Patch
```yaml
version: v1
resourceModifierRules:
- conditions:
    groupResource: pods
    namespaces:
    - ns1
  mergePatches:
  - patchData: |
      {
        "metadata": {
          "annotations": {
            "foo": null
          }
        }
      }
```
- The above configmap will apply the Merge Patch to all the pods in namespace ns1 and remove the annotation `foo` from the pods.
- Both JSON and YAML formats are supported for the patchData.
- For more details, please refer to [this doc](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/)
|
||||
|
||||
#### Strategic Merge Patch
|
||||
You can modify a resource using Strategic Merge Patch
|
||||
```yaml
|
||||
version: v1
|
||||
resourceModifierRules:
|
||||
- conditions:
|
||||
groupResource: pods
|
||||
resourceNameRegex: "^my-pod$"
|
||||
namespaces:
|
||||
- ns1
|
||||
strategicPatches:
|
||||
- patchData: |
|
||||
{
|
||||
"spec": {
|
||||
"containers": [
|
||||
{
|
||||
"name": "nginx",
|
||||
"image": "repo2/nginx"
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
- The above configmap will apply the Strategic Merge Patch to the pod with name my-pod in namespace ns1 and update the image of container nginx to `repo2/nginx`.
|
||||
- Both json and yaml format are supported for the patchData.
|
||||
- For more details, please refer to [this doc](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/)
|
||||
|
||||
|
||||
### Conditional Patches in ALL Patch Types
|
||||
A new field `matches` is added in conditions to support conditional patches.
|
||||
|
||||
Example of matches in conditions
|
||||
```yaml
|
||||
version: v1
|
||||
resourceModifierRules:
|
||||
- conditions:
|
||||
groupResource: persistentvolumeclaims.storage.k8s.io
|
||||
matches:
|
||||
- path: "/spec/storageClassName"
|
||||
value: "premium"
|
||||
mergePatches:
|
||||
- patchData: |
|
||||
{
|
||||
"metadata": {
|
||||
"annotations": {
|
||||
"foo": null
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
- The above configmap will apply the Merge Patch to all the PVCs in all namespaces with storageClassName premium and remove the annotation `foo` from the PVCs.
|
||||
- You can specify multiple rules in the `matches` list. The patch will be applied only if all the matches are satisfied.
|
||||
|
||||
### Wildcard Support for GroupResource
|
||||
The user can specify a wildcard for groupResource in the conditions' struct. This will allow the user to apply the patches for all the resources of a particular group or all resources in all groups. For example, `*.apps` will apply to all the resources in the `apps` group, `*` will apply to all the resources in core group, `*.*` will apply to all the resources in all groups.
|
||||
- If both `*.groupName` and `namespaces` are specified, the patches will be applied to all the namespaced resources in this group in the specified namespaces and all the cluster resources in this group.
|
||||
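The wildcard semantics above can be sketched as a small matcher: a `resource.group` pattern is split at the first dot, either segment may be `*`, and a pattern with no group segment applies to the core (empty) group. This is a hypothetical illustration of the described behavior, not Velero's implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// matchGroupResource checks a resource/group pair against a pattern such
// as "pods", "*", "*.apps", or "*.*". A pattern without a dot targets the
// core (empty-string) group.
func matchGroupResource(pattern, resource, group string) bool {
	pr, pg := pattern, "" // default: no group segment means core group
	if i := strings.Index(pattern, "."); i >= 0 {
		pr, pg = pattern[:i], pattern[i+1:]
	}
	resOK := pr == "*" || pr == resource
	grpOK := pg == "*" || pg == group
	return resOK && grpOK
}

func main() {
	fmt.Println(matchGroupResource("*.apps", "deployments", "apps")) // true
	fmt.Println(matchGroupResource("*", "pods", ""))                 // true: core group
	fmt.Println(matchGroupResource("*", "deployments", "apps"))      // false: not core group
	fmt.Println(matchGroupResource("*.*", "ingresses", "networking.k8s.io"))
}
```

Splitting at the first dot keeps multi-segment groups like `networking.k8s.io` intact in the group half of the pattern.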