Mirror of https://github.com/vmware-tanzu/velero.git (synced 2026-01-10 15:07:29 +00:00)

Compare commits: v1.16.1-rc ... v1.13.1 (39 commits)
Commits in this range:

ea5a89f83b, 642924d2bd, 8dca539314, a6a6da5a72, 99376a3de6, eed1c383c8, 941ad1a993, 02d229cd06,
c859f7bf11, e1222ffd74, 9cdaeadef3, cb7211d997, df08980618, 51a90e7d2f, 62a531785f, 5dd1d3bfe5,
701e786150, 170fcc53ba, 44aa6a7c6b, 2a9f4fa576, 4d27ca99c1, 8914c7209b, 76670e940c, 25d977e5bc,
94c7d4b6d4, 09401c8454, 981d64a1b8, 16b8b8da72, 9fd73b2d13, c377e472e8, f5714cb636, 5ffa12189b,
1882be763e, 42bbf87197, 8aa6a8e59d, fdb29819b4, 74f225037c, 6e90e628aa, 46f64f2f98
.github/workflows/crds-verify-kind.yaml (vendored), 2 lines changed

```diff
@@ -14,7 +14,7 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v4
         with:
-          go-version: '1.21'
+          go-version: '1.21.6'
         id: go
       # Look for a CLI that's made for this PR
       - name: Fetch built CLI
```
.github/workflows/e2e-test-kind.yaml (vendored), 4 lines changed

```diff
@@ -14,7 +14,7 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v4
         with:
-          go-version: '1.21'
+          go-version: '1.21.6'
         id: go
       # Look for a CLI that's made for this PR
       - name: Fetch built CLI
@@ -72,7 +72,7 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v4
         with:
-          go-version: '1.21'
+          go-version: '1.21.6'
         id: go
       - name: Check out the code
         uses: actions/checkout@v2
```
.github/workflows/pr-ci-check.yml (vendored), 2 lines changed

```diff
@@ -10,7 +10,7 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v4
         with:
-          go-version: '1.21'
+          go-version: '1.21.6'
         id: go
       - name: Check out the code
         uses: actions/checkout@v2
```
.github/workflows/push.yml (vendored), 2 lines changed

```diff
@@ -18,7 +18,7 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v4
         with:
-          go-version: '1.21'
+          go-version: '1.21.6'
         id: go

       - uses: actions/checkout@v3
```
```diff
@@ -13,7 +13,7 @@
 # limitations under the License.
 
 # Velero binary build section
-FROM --platform=$BUILDPLATFORM golang:1.21-bookworm as velero-builder
+FROM --platform=$BUILDPLATFORM golang:1.21.6-bookworm as velero-builder
 
 ARG GOPROXY
 ARG BIN
@@ -47,7 +47,7 @@ RUN mkdir -p /output/usr/bin && \
     go clean -modcache -cache
 
 # Restic binary build section
-FROM --platform=$BUILDPLATFORM golang:1.21-bookworm as restic-builder
+FROM --platform=$BUILDPLATFORM golang:1.21.6-bookworm as restic-builder
 
 ARG BIN
 ARG TARGETOS
@@ -70,7 +70,7 @@ RUN mkdir -p /output/usr/bin && \
     go clean -modcache -cache
 
 # Velero image packing section
-FROM paketobuildpacks/run-jammy-tiny:latest
+FROM paketobuildpacks/run-jammy-tiny:0.2.19
 
 LABEL maintainer="Xun Jiang <jxun@vmware.com>"
```
Tiltfile, 2 lines changed

```diff
@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
 
 tilt_helper_dockerfile_header = """
 # Tilt image
-FROM golang:1.21 as tilt-helper
+FROM golang:1.21.6 as tilt-helper
 
 # Support live reloading with Tilt
 RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \
```
```diff
@@ -1,3 +1,24 @@
+## v1.13.1
+### 2024-03-13
+
+### Download
+https://github.com/vmware-tanzu/velero/releases/tag/v1.13.1
+
+### Container Image
+`velero/velero:v1.13.1`
+
+### Documentation
+https://velero.io/docs/v1.13/
+
+### Upgrading
+https://velero.io/docs/v1.13/upgrade-to-1.13/
+
+### All changes
+* Fix issue #7308, change the data path requeue time to 5 second for data mover backup/restore, PVB and PVR. (#7459, @Lyndon-Li)
+* BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7399, @kaovilai)
+* Adjust the logic for the backup_last_status metrics to stop incorrectly incrementing over time (#7445, @allenxu404)
+
+
 ## v1.13
 ### 2024-01-10
 
@@ -64,11 +85,15 @@ To fix CVEs and keep pace with Golang, Velero made changes as follows:
 ### Limitations/Known issues
 * The backup's VolumeInfo metadata doesn't have the information updated in the async operations. This function could be supported in v1.14 release.
 
+### Note
+* Velero introduces the informer cache which is enabled by default. The informer cache improves the restore performance but may cause higher memory consumption. Increase the memory limit of the Velero pod or disable the informer cache by specifying the `--disable-informer-cache` option when installing Velero if you get the OOM error.
+
 ### Deprecation announcement
 * The generated k8s clients, informers, and listers are deprecated in the Velero v1.13 release. They are put in the Velero repository's pkg/generated directory. According to the n+2 supporting policy, the deprecated are kept for two more releases. The pkg/generated directory should be deleted in the v1.15 release.
 * After the backup VolumeInfo metadata file is added to the backup, Velero decides how to restore the PV resource according to the VolumeInfo content. To support the backup generated by the older version of Velero, the old logic is also kept. The support for the backup without the VolumeInfo metadata file will be kept for two releases. The support logic will be deleted in the v1.15 release.
 
 ### All Changes
 * Check resource Group Version and Kind is available in cluster before attempting restore to prevent being stuck (#7336, @kaovilai)
 * Make "disable-informer-cache" option false(enabled) by default to keep it consistent with the help message (#7294, @ywk253100)
 * Fix issue #6928, remove snapshot deletion timeout for PVB (#7282, @Lyndon-Li)
 * Do not set "targetNamespace" to namespace items (#7274, @reasonerjt)
```
```diff
@@ -66,10 +66,10 @@ func done() bool {
 		doneFile := filepath.Join("/restores", child.Name(), ".velero", os.Args[1])
 
 		if _, err := os.Stat(doneFile); os.IsNotExist(err) {
-			fmt.Printf("Not found: %s\n", doneFile)
+			fmt.Printf("The filesystem restore done file %s is not found yet. Retry later.\n", doneFile)
 			return false
 		} else if err != nil {
-			fmt.Fprintf(os.Stderr, "ERROR looking for %s: %s\n", doneFile, err)
+			fmt.Fprintf(os.Stderr, "ERROR looking filesystem restore done file %s: %s\n", doneFile, err)
 			return false
 		}
```
go.mod, 2 lines changed

```diff
@@ -2,7 +2,7 @@ module github.com/vmware-tanzu/velero
 
 go 1.21
 
-toolchain go1.21.3
+toolchain go1.21.6
 
 require (
 	cloud.google.com/go/storage v1.33.0
```
```diff
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FROM --platform=linux/amd64 golang:1.21-bookworm
+FROM --platform=linux/amd64 golang:1.21.6-bookworm
 
 ARG GOPROXY
```
```diff
@@ -35,8 +35,8 @@ type DynamicFactory interface {
 	// ClientForGroupVersionResource returns a Dynamic client for the given group/version
 	// and resource for the given namespace.
 	ClientForGroupVersionResource(gv schema.GroupVersion, resource metav1.APIResource, namespace string) (Dynamic, error)
-	// DynamicSharedInformerFactoryForNamespace returns a DynamicSharedInformerFactory for the given namespace.
-	DynamicSharedInformerFactoryForNamespace(namespace string) dynamicinformer.DynamicSharedInformerFactory
+	// DynamicSharedInformerFactory returns a DynamicSharedInformerFactory.
+	DynamicSharedInformerFactory() dynamicinformer.DynamicSharedInformerFactory
 }
 
 // dynamicFactory implements DynamicFactory.
@@ -55,8 +55,8 @@ func (f *dynamicFactory) ClientForGroupVersionResource(gv schema.GroupVersion, r
 	}, nil
 }
 
-func (f *dynamicFactory) DynamicSharedInformerFactoryForNamespace(namespace string) dynamicinformer.DynamicSharedInformerFactory {
-	return dynamicinformer.NewFilteredDynamicSharedInformerFactory(f.dynamicClient, time.Minute, namespace, nil)
+func (f *dynamicFactory) DynamicSharedInformerFactory() dynamicinformer.DynamicSharedInformerFactory {
+	return dynamicinformer.NewDynamicSharedInformerFactory(f.dynamicClient, time.Minute)
 }
 
 // Creator creates an object.
```
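For context on the interface change above, the sketch below shows how a cluster-scoped dynamic shared informer factory from client-go is typically created and used to serve reads from cache instead of one factory per namespace. This is a minimal, self-contained illustration, not Velero code: the resync interval, the Pods resource, and the `default` namespace are assumptions made for the example.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the process runs inside a cluster; clientcmd could be used outside one.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	dynClient, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One cluster-scoped factory, matching the new DynamicSharedInformerFactory()
	// method above, instead of one filtered factory per namespace.
	factory := dynamicinformer.NewDynamicSharedInformerFactory(dynClient, time.Minute)

	// Register the resources of interest before starting the factory.
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "pods"} // illustrative resource
	informer := factory.ForResource(gvr)

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	factory.Start(ctx.Done())
	factory.WaitForCacheSync(ctx.Done())

	// Subsequent reads are served from the informer cache rather than the API server.
	pods, err := informer.Lister().ByNamespace("default").List(labels.Everything())
	if err != nil {
		panic(err)
	}
	fmt.Printf("cached pods in default namespace: %d\n", len(pods))
}
```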
```diff
@@ -43,9 +43,8 @@ func NewGetCommand(f client.Factory, use string) *cobra.Command {
 			crClient, err := f.KubebuilderClient()
 			cmd.CheckError(err)
 
-			var repos *api.BackupRepositoryList
+			repos := new(api.BackupRepositoryList)
 			if len(args) > 0 {
-				repos = new(api.BackupRepositoryList)
 				for _, name := range args {
 					repo := new(api.BackupRepository)
 					err := crClient.Get(context.TODO(), ctrlclient.ObjectKey{Namespace: f.Namespace(), Name: name}, repo)
```
```diff
@@ -74,7 +74,7 @@ func NewDescribeCommand(f client.Factory, use string) *cobra.Command {
 			podVolumeRestoreList := new(velerov1api.PodVolumeRestoreList)
 			err = kbClient.List(context.TODO(), podVolumeRestoreList, &controllerclient.ListOptions{
 				Namespace:     f.Namespace(),
-				LabelSelector: labels.SelectorFromSet(map[string]string{velerov1api.BackupNameLabel: label.GetValidName(restore.Name)}),
+				LabelSelector: labels.SelectorFromSet(map[string]string{velerov1api.RestoreNameLabel: label.GetValidName(restore.Name)}),
 			})
 			if err != nil {
 				fmt.Fprintf(os.Stderr, "error getting PodVolumeRestores for restore %s: %v\n", restore.Name, err)
```
```diff
@@ -171,6 +171,12 @@ func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
 		schedule.Spec.Template.ResourcePolicy = &v1.TypedLocalObjectReference{Kind: resourcepolicies.ConfigmapRefType, Name: o.BackupOptions.ResPoliciesConfigmap}
 	}
 
+	if o.BackupOptions.ParallelFilesUpload > 0 {
+		schedule.Spec.Template.UploaderConfig = &api.UploaderConfigForBackup{
+			ParallelFilesUpload: o.BackupOptions.ParallelFilesUpload,
+		}
+	}
+
 	if printed, err := output.PrintWithFormat(c, schedule); printed || err != nil {
 		return err
 	}
```
```diff
@@ -33,6 +33,7 @@ import (
 	"github.com/sirupsen/logrus"
 	"github.com/spf13/cobra"
 	corev1api "k8s.io/api/core/v1"
+	apierrors "k8s.io/apimachinery/pkg/api/errors"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/labels"
 	"k8s.io/apimachinery/pkg/runtime"
@@ -467,7 +468,12 @@ func setDefaultBackupLocation(ctx context.Context, client ctrlclient.Client, nam
 
 	backupLocation := &velerov1api.BackupStorageLocation{}
 	if err := client.Get(ctx, types.NamespacedName{Namespace: namespace, Name: defaultBackupLocation}, backupLocation); err != nil {
-		return errors.WithStack(err)
+		if apierrors.IsNotFound(err) {
+			logger.WithField("backupStorageLocation", defaultBackupLocation).WithError(err).Warn("Failed to set default backup storage location at server start")
+			return nil
+		} else {
+			return errors.WithStack(err)
+		}
 	}
 
 	if !backupLocation.Spec.Default {
```
```diff
@@ -409,4 +409,13 @@ func Test_setDefaultBackupLocation(t *testing.T) {
 	nonDefaultLocation := &velerov1api.BackupStorageLocation{}
 	require.Nil(t, c.Get(context.Background(), client.ObjectKey{Namespace: "velero", Name: "non-default"}, nonDefaultLocation))
 	assert.False(t, nonDefaultLocation.Spec.Default)
+
+	// no default location specified
+	c = fake.NewClientBuilder().WithScheme(scheme).Build()
+	err := setDefaultBackupLocation(context.Background(), c, "velero", "", logrus.New())
+	assert.NoError(t, err)
+
+	// no default location created
+	err = setDefaultBackupLocation(context.Background(), c, "velero", "default", logrus.New())
+	assert.NoError(t, err)
 }
```
```diff
@@ -76,7 +76,11 @@ func (r *BackupRepoReconciler) SetupWithManager(mgr ctrl.Manager) error {
 		For(&velerov1api.BackupRepository{}).
 		Watches(s, nil).
 		Watches(&source.Kind{Type: &velerov1api.BackupStorageLocation{}}, kube.EnqueueRequestsFromMapUpdateFunc(r.invalidateBackupReposForBSL),
-			builder.WithPredicates(kube.NewUpdateEventPredicate(r.needInvalidBackupRepo))).
+			builder.WithPredicates(
+				// When BSL updates, check if the backup repositories need to be invalidated
+				kube.NewUpdateEventPredicate(r.needInvalidBackupRepo),
+				// When BSL is created, invalidate any backup repositories that reference it
+				kube.NewCreateEventPredicate(func(client.Object) bool { return true }))).
 		Complete(r)
 }
 
@@ -90,13 +94,13 @@ func (r *BackupRepoReconciler) invalidateBackupReposForBSL(bslObj client.Object)
 		}).AsSelector(),
 	}
 	if err := r.List(context.TODO(), list, options); err != nil {
-		r.logger.WithField("BSL", bsl.Name).WithError(err).Error("unable to list BackupRepositorys")
+		r.logger.WithField("BSL", bsl.Name).WithError(err).Error("unable to list BackupRepositories")
 		return []reconcile.Request{}
 	}
 
 	for i := range list.Items {
 		r.logger.WithField("BSL", bsl.Name).Infof("Invalidating Backup Repository %s", list.Items[i].Name)
-		if err := r.patchBackupRepository(context.Background(), &list.Items[i], repoNotReady("re-establish on BSL change")); err != nil {
+		if err := r.patchBackupRepository(context.Background(), &list.Items[i], repoNotReady("re-establish on BSL change or create")); err != nil {
 			r.logger.WithField("BSL", bsl.Name).WithError(err).Errorf("fail to patch BackupRepository %s", list.Items[i].Name)
 		}
 	}
@@ -104,6 +108,7 @@ func (r *BackupRepoReconciler) invalidateBackupReposForBSL(bslObj client.Object)
 	return []reconcile.Request{}
 }
 
+// needInvalidBackupRepo returns true if the BSL's storage type, bucket, prefix, CACert, or config has changed
 func (r *BackupRepoReconciler) needInvalidBackupRepo(oldObj client.Object, newObj client.Object) bool {
 	oldBSL := oldObj.(*velerov1api.BackupStorageLocation)
 	newBSL := newObj.(*velerov1api.BackupStorageLocation)
```
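`kube.NewCreateEventPredicate` used above is a small helper in Velero's own kube package. For readers unfamiliar with controller-runtime predicates, here is a hedged, standalone sketch of the general mechanism such a helper builds on: a predicate that lets Create events through and filters the other event types. The function name and the choices for non-Create events are illustrative assumptions, not Velero code.

```go
package main

import (
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// allowCreates reacts to Create events unconditionally and ignores everything else.
// A watch built with builder.WithPredicates only enqueues events that pass the
// predicates it is given.
func allowCreates() predicate.Funcs {
	return predicate.Funcs{
		CreateFunc:  func(event.CreateEvent) bool { return true },
		UpdateFunc:  func(event.UpdateEvent) bool { return false },
		DeleteFunc:  func(event.DeleteEvent) bool { return false },
		GenericFunc: func(event.GenericEvent) bool { return false },
	}
}

func main() {
	p := allowCreates()
	fmt.Println("create allowed:", p.Create(event.CreateEvent{})) // true
	fmt.Println("update allowed:", p.Update(event.UpdateEvent{})) // false
}
```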
```diff
@@ -261,7 +261,7 @@ func (r *DataDownloadReconciler) Reconcile(ctx context.Context, req ctrl.Request
 		if err != nil {
 			if err == datapath.ConcurrentLimitExceed {
 				log.Info("Data path instance is concurrent limited requeue later")
-				return ctrl.Result{Requeue: true, RequeueAfter: time.Minute}, nil
+				return ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5}, nil
 			} else {
 				return r.errorOut(ctx, dd, err, "error to create data path", log)
 			}
```
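The hunk above (and the matching ones for DataUpload, PodVolumeBackup, and PodVolumeRestore further down) shortens the retry interval when the concurrent data path limit is hit. As a rough sketch, the requeue pattern in a controller-runtime reconciler looks like the following; the error sentinel, reconciler type, and helper are placeholders rather than Velero's actual ones.

```go
package main

import (
	"context"
	"errors"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// errConcurrentLimitExceeded stands in for datapath.ConcurrentLimitExceed in the real code.
var errConcurrentLimitExceeded = errors.New("concurrent limit exceeded")

type demoReconciler struct{}

func (r *demoReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	if err := startDataPath(); err != nil {
		if errors.Is(err, errConcurrentLimitExceeded) {
			// Returning a Result with RequeueAfter and a nil error asks the workqueue to
			// retry this key later without counting it as a failure; the change above
			// drops the wait from one minute to five seconds so a free slot is picked up sooner.
			return ctrl.Result{Requeue: true, RequeueAfter: 5 * time.Second}, nil
		}
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}

// startDataPath is a stand-in for acquiring a data path slot.
func startDataPath() error { return errConcurrentLimitExceeded }

func main() {
	_, _ = (&demoReconciler{}).Reconcile(context.Background(), ctrl.Request{})
}
```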
```diff
@@ -219,7 +219,7 @@ func TestDataDownloadReconcile(t *testing.T) {
 			dataMgr:        datapath.NewManager(0),
 			notNilExpose:   true,
 			notMockCleanUp: true,
-			expectedResult: &ctrl.Result{Requeue: true, RequeueAfter: time.Minute},
+			expectedResult: &ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5},
 		},
 		{
 			name: "Error getting volume directory name for pvc in pod",
@@ -416,8 +416,8 @@ func TestDataDownloadReconcile(t *testing.T) {
 			require.NotNil(t, actualResult)
 
 			if test.expectedResult != nil {
-				assert.Equal(t, test.expectedResult.Requeue, test.expectedResult.Requeue)
-				assert.Equal(t, test.expectedResult.RequeueAfter, test.expectedResult.RequeueAfter)
+				assert.Equal(t, test.expectedResult.Requeue, actualResult.Requeue)
+				assert.Equal(t, test.expectedResult.RequeueAfter, actualResult.RequeueAfter)
 			}
 
 			dd := velerov2alpha1api.DataDownload{}
```
```diff
@@ -269,7 +269,7 @@ func (r *DataUploadReconciler) Reconcile(ctx context.Context, req ctrl.Request)
 		if err != nil {
 			if err == datapath.ConcurrentLimitExceed {
 				log.Info("Data path instance is concurrent limited requeue later")
-				return ctrl.Result{Requeue: true, RequeueAfter: time.Minute}, nil
+				return ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5}, nil
 			} else {
 				return r.errorOut(ctx, du, err, "error to create data path", log)
 			}
```
```diff
@@ -413,7 +413,7 @@ func TestReconcile(t *testing.T) {
 			du:                dataUploadBuilder().Phase(velerov2alpha1api.DataUploadPhasePrepared).SnapshotType(fakeSnapshotType).Result(),
 			expectedProcessed: false,
 			expected:          dataUploadBuilder().Phase(velerov2alpha1api.DataUploadPhasePrepared).Result(),
-			expectedRequeue:   ctrl.Result{Requeue: true, RequeueAfter: time.Minute},
+			expectedRequeue:   ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5},
 		},
 		{
 			name: "prepare timeout",
```
```diff
@@ -126,7 +126,7 @@ func (r *PodVolumeBackupReconciler) Reconcile(ctx context.Context, req ctrl.Requ
 	fsBackup, err := r.dataPathMgr.CreateFileSystemBR(pvb.Name, pVBRRequestor, ctx, r.Client, pvb.Namespace, callbacks, log)
 	if err != nil {
 		if err == datapath.ConcurrentLimitExceed {
-			return ctrl.Result{Requeue: true, RequeueAfter: time.Minute}, nil
+			return ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5}, nil
 		} else {
 			return r.errorOut(ctx, &pvb, err, "error to create data path", log)
 		}
```
```diff
@@ -383,7 +383,7 @@ var _ = Describe("PodVolumeBackup Reconciler", func() {
 			expected: builder.ForPodVolumeBackup(velerov1api.DefaultNamespace, "pvb-1").
 				Phase("").
 				Result(),
-			expectedRequeue: ctrl.Result{Requeue: true, RequeueAfter: time.Minute},
+			expectedRequeue: ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5},
 		}),
 	)
 })
```
```diff
@@ -122,7 +122,7 @@ func (c *PodVolumeRestoreReconciler) Reconcile(ctx context.Context, req ctrl.Req
 	fsRestore, err := c.dataPathMgr.CreateFileSystemBR(pvr.Name, pVBRRequestor, ctx, c.Client, pvr.Namespace, callbacks, log)
 	if err != nil {
 		if err == datapath.ConcurrentLimitExceed {
-			return ctrl.Result{Requeue: true, RequeueAfter: time.Minute}, nil
+			return ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5}, nil
 		} else {
 			return c.errorOut(ctx, pvr, err, "error to create data path", log)
 		}
```
```diff
@@ -468,7 +468,7 @@ func (m *ServerMetrics) InitSchedule(scheduleName string) {
 		c.WithLabelValues(scheduleName).Add(0)
 	}
 	if c, ok := m.metrics[backupLastStatus].(*prometheus.GaugeVec); ok {
-		c.WithLabelValues(scheduleName).Add(1)
+		c.WithLabelValues(scheduleName).Set(float64(1))
 	}
 	if c, ok := m.metrics[restoreAttemptTotal].(*prometheus.CounterVec); ok {
 		c.WithLabelValues(scheduleName).Add(0)
```
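The one-line change above switches the backup_last_status gauge from Add(1) to Set(1), which matters when the initialization path runs more than once for the same schedule, presumably the cause of the "incorrectly incrementing over time" behavior noted in the changelog. A standalone illustration with client_golang follows; the metric and label names are made up for the example.

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/testutil"
)

func main() {
	lastStatus := prometheus.NewGaugeVec(
		prometheus.GaugeOpts{Name: "example_backup_last_status"}, // illustrative metric name
		[]string{"schedule"},
	)

	// Add accumulates: running the init path twice leaves the gauge at 2.
	lastStatus.WithLabelValues("daily").Add(1)
	lastStatus.WithLabelValues("daily").Add(1)
	fmt.Println(testutil.ToFloat64(lastStatus.WithLabelValues("daily"))) // 2

	// Set is idempotent: repeated initialization keeps the gauge at 1.
	lastStatus.WithLabelValues("daily").Set(1)
	lastStatus.WithLabelValues("daily").Set(1)
	fmt.Println(testutil.ToFloat64(lastStatus.WithLabelValues("daily"))) // 1
}
```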
@@ -45,7 +45,6 @@ import (
|
||||
"k8s.io/apimachinery/pkg/util/sets"
|
||||
"k8s.io/apimachinery/pkg/util/wait"
|
||||
"k8s.io/client-go/dynamic/dynamicinformer"
|
||||
"k8s.io/client-go/informers"
|
||||
corev1 "k8s.io/client-go/kubernetes/typed/core/v1"
|
||||
"k8s.io/client-go/tools/cache"
|
||||
crclient "sigs.k8s.io/controller-runtime/pkg/client"
|
||||
```diff
@@ -309,8 +308,6 @@ func (kr *kubernetesRestorer) RestoreWithResolvers(
 		resourceTerminatingTimeout: kr.resourceTerminatingTimeout,
 		resourceTimeout:            kr.resourceTimeout,
 		resourceClients:            make(map[resourceClientKey]client.Dynamic),
-		dynamicInformerFactories:   make(map[string]*informerFactoryWithContext),
-		resourceInformers:          make(map[resourceClientKey]informers.GenericInformer),
 		restoredItems:              req.RestoredItems,
 		renamedPVs:                 make(map[string]string),
 		pvRenamer:                  kr.pvRenamer,
```
```diff
@@ -362,8 +359,7 @@ type restoreContext struct {
 	resourceTerminatingTimeout time.Duration
 	resourceTimeout            time.Duration
 	resourceClients            map[resourceClientKey]client.Dynamic
-	dynamicInformerFactories   map[string]*informerFactoryWithContext
-	resourceInformers          map[resourceClientKey]informers.GenericInformer
+	dynamicInformerFactory     *informerFactoryWithContext
 	restoredItems              map[itemKey]restoredItemStatus
 	renamedPVs                 map[string]string
 	pvRenamer                  func(string) (string, error)
```
```diff
@@ -447,11 +443,16 @@ func (ctx *restoreContext) execute() (results.Result, results.Result) {
 
 	// Need to stop all informers if enabled
 	if !ctx.disableInformerCache {
+		context, cancel := signal.NotifyContext(go_context.Background(), os.Interrupt)
+		ctx.dynamicInformerFactory = &informerFactoryWithContext{
+			factory: ctx.dynamicFactory.DynamicSharedInformerFactory(),
+			context: context,
+			cancel:  cancel,
+		}
+
 		defer func() {
 			// Call the cancel func to close the channel for each started informer
-			for _, factory := range ctx.dynamicInformerFactories {
-				factory.cancel()
-			}
+			ctx.dynamicInformerFactory.cancel()
 			// After upgrading to client-go 0.27 or newer, also call Shutdown for each informer factory
 		}()
 	}
```
```diff
@@ -579,28 +580,29 @@ func (ctx *restoreContext) execute() (results.Result, results.Result) {
 
 	// initialize informer caches for selected resources if enabled
 	if !ctx.disableInformerCache {
 		// CRD informer will have already been initialized if any CRDs were created,
 		// but already-initialized informers aren't re-initialized because getGenericInformer
 		// looks for an existing one first.
-		factoriesToStart := make(map[string]*informerFactoryWithContext)
 		for _, informerResource := range selectedResourceCollection {
-			gr := schema.ParseGroupResource(informerResource.resource)
 			if informerResource.totalItems == 0 {
 				continue
 			}
+			version := ""
 			for _, items := range informerResource.selectedItemsByNamespace {
 				// don't use ns key since it represents original ns, not mapped ns
 				if len(items) == 0 {
 					continue
 				}
 				// use the first item in the list to initialize the informer. The rest of the list
 				// should share the same gvr and namespace
-				_, factory := ctx.getGenericInformerInternal(gr, items[0].version, items[0].targetNamespace)
-				if factory != nil {
-					factoriesToStart[items[0].targetNamespace] = factory
-				}
+				version = items[0].version
 				break
 			}
+			gvr := schema.ParseGroupResource(informerResource.resource).WithVersion(version)
+			_, _, err := ctx.discoveryHelper.ResourceFor(gvr)
+			if err != nil {
+				ctx.log.Infof("failed to create informer for %s: %v", gvr, err)
+				continue
+			}
+			ctx.dynamicInformerFactory.factory.ForResource(gvr)
 		}
-		for _, factoryWithContext := range factoriesToStart {
-			factoryWithContext.factory.WaitForCacheSync(factoryWithContext.context.Done())
-		}
+		ctx.dynamicInformerFactory.factory.Start(ctx.dynamicInformerFactory.context.Done())
+		ctx.log.Info("waiting informer cache sync ...")
+		ctx.dynamicInformerFactory.factory.WaitForCacheSync(ctx.dynamicInformerFactory.context.Done())
 	}
 
 	// reset processedItems and totalItems before processing full resource list
```
```diff
@@ -1061,47 +1063,23 @@ func (ctx *restoreContext) getResourceClient(groupResource schema.GroupResource,
 	return client, nil
 }
 
-// if new informer is created, non-nil factory is returned
-func (ctx *restoreContext) getGenericInformerInternal(groupResource schema.GroupResource, version, namespace string) (informers.GenericInformer, *informerFactoryWithContext) {
-	var returnFactory *informerFactoryWithContext
-
-	key := getResourceClientKey(groupResource, version, namespace)
-	factoryWithContext, ok := ctx.dynamicInformerFactories[key.namespace]
-	if !ok {
-		factory := ctx.dynamicFactory.DynamicSharedInformerFactoryForNamespace(namespace)
-		informerContext, informerCancel := signal.NotifyContext(go_context.Background(), os.Interrupt)
-		factoryWithContext = &informerFactoryWithContext{
-			factory: factory,
-			context: informerContext,
-			cancel:  informerCancel,
-		}
-		ctx.dynamicInformerFactories[key.namespace] = factoryWithContext
-	}
-	informer, ok := ctx.resourceInformers[key]
-	if !ok {
-		ctx.log.Infof("[debug] Creating factory for %s in namespace %s", key.resource, key.namespace)
-		informer = factoryWithContext.factory.ForResource(key.resource)
-		factoryWithContext.factory.Start(factoryWithContext.context.Done())
-		ctx.resourceInformers[key] = informer
-		returnFactory = factoryWithContext
-	}
-	return informer, returnFactory
-}
-
-func (ctx *restoreContext) getGenericInformer(groupResource schema.GroupResource, version, namespace string) informers.GenericInformer {
-	informer, factoryWithContext := ctx.getGenericInformerInternal(groupResource, version, namespace)
-	if factoryWithContext != nil {
-		factoryWithContext.factory.WaitForCacheSync(factoryWithContext.context.Done())
-	}
-	return informer
-}
-func (ctx *restoreContext) getResourceLister(groupResource schema.GroupResource, obj *unstructured.Unstructured, namespace string) cache.GenericNamespaceLister {
-	informer := ctx.getGenericInformer(groupResource, obj.GroupVersionKind().Version, namespace)
+func (ctx *restoreContext) getResourceLister(groupResource schema.GroupResource, obj *unstructured.Unstructured, namespace string) (cache.GenericNamespaceLister, error) {
+	_, _, err := ctx.discoveryHelper.KindFor(obj.GroupVersionKind())
+	if err != nil {
+		return nil, err
+	}
+	informer := ctx.dynamicInformerFactory.factory.ForResource(groupResource.WithVersion(obj.GroupVersionKind().Version))
+	// if the restore contains CRDs or the RIA returns new resources, need to make sure the corresponding informers are synced
+	if !informer.Informer().HasSynced() {
+		ctx.dynamicInformerFactory.factory.Start(ctx.dynamicInformerFactory.context.Done())
+		ctx.log.Infof("waiting informer cache sync for %s, %s/%s ...", groupResource, namespace, obj.GetName())
+		ctx.dynamicInformerFactory.factory.WaitForCacheSync(ctx.dynamicInformerFactory.context.Done())
+	}
 	if namespace == "" {
-		return informer.Lister()
-	} else {
-		return informer.Lister().ByNamespace(namespace)
+		return informer.Lister(), nil
 	}
+	return informer.Lister().ByNamespace(namespace), nil
 }
 
 func getResourceID(groupResource schema.GroupResource, namespace, name string) string {
```
```diff
@@ -1113,7 +1091,10 @@ func getResourceID(groupResource schema.GroupResource, namespace, name string) s
 }
 
 func (ctx *restoreContext) getResource(groupResource schema.GroupResource, obj *unstructured.Unstructured, namespace, name string) (*unstructured.Unstructured, error) {
-	lister := ctx.getResourceLister(groupResource, obj, namespace)
+	lister, err := ctx.getResourceLister(groupResource, obj, namespace)
+	if err != nil {
+		return nil, errors.Wrapf(err, "Error getting lister for %s", getResourceID(groupResource, namespace, name))
+	}
 	clusterObj, err := lister.Get(name)
 	if err != nil {
 		return nil, errors.Wrapf(err, "error getting resource from lister for %s, %s/%s", groupResource, namespace, name)
```
```diff
@@ -1123,6 +1104,7 @@ func (ctx *restoreContext) getResource(groupResource schema.GroupResource, obj *
 		ctx.log.WithError(errors.WithStack(fmt.Errorf("expected *unstructured.Unstructured but got %T", u))).Error("unable to understand entry returned from client")
 		return nil, fmt.Errorf("expected *unstructured.Unstructured but got %T", u)
 	}
+	ctx.log.Debugf("get %s, %s/%s from informer cache", groupResource, namespace, name)
 	return u, nil
 }
```
@@ -19,9 +19,6 @@ package test
|
||||
import (
|
||||
"strings"
|
||||
|
||||
"golang.org/x/text/cases"
|
||||
"golang.org/x/text/language"
|
||||
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/client-go/discovery"
|
||||
discoveryfake "k8s.io/client-go/discovery/fake"
|
||||
@@ -76,7 +73,7 @@ func (c *DiscoveryClient) WithAPIResource(resource *APIResource) *DiscoveryClien
|
||||
Namespaced: resource.Namespaced,
|
||||
Group: resource.Group,
|
||||
Version: resource.Version,
|
||||
Kind: cases.Title(language.Und).String(strings.TrimSuffix(resource.Name, "s")),
|
||||
Kind: resource.Kind,
|
||||
Verbs: metav1.Verbs([]string{"list", "create", "get", "delete"}),
|
||||
ShortNames: []string{resource.ShortName},
|
||||
})
|
||||
|
||||
```diff
@@ -38,8 +38,8 @@ func (df *FakeDynamicFactory) ClientForGroupVersionResource(gv schema.GroupVersi
 	return args.Get(0).(client.Dynamic), args.Error(1)
 }
 
-func (df *FakeDynamicFactory) DynamicSharedInformerFactoryForNamespace(namespace string) dynamicinformer.DynamicSharedInformerFactory {
-	args := df.Called(namespace)
+func (df *FakeDynamicFactory) DynamicSharedInformerFactory() dynamicinformer.DynamicSharedInformerFactory {
+	args := df.Called()
 	return args.Get(0).(dynamicinformer.DynamicSharedInformerFactory)
 }
```
```diff
@@ -27,6 +27,7 @@ type APIResource struct {
 	Group      string
 	Version    string
 	Name       string
+	Kind       string
 	ShortName  string
 	Namespaced bool
 	Items      []metav1.Object
@@ -50,6 +51,7 @@ func Pods(items ...metav1.Object) *APIResource {
 		ShortName:  "po",
 		Namespaced: true,
 		Items:      items,
+		Kind:       "Pod",
 	}
 }
@@ -59,6 +61,7 @@ func PVCs(items ...metav1.Object) *APIResource {
 		Version:    "v1",
 		Name:       "persistentvolumeclaims",
 		ShortName:  "pvc",
+		Kind:       "PersistentVolumeClaim",
 		Namespaced: true,
 		Items:      items,
 	}
@@ -70,6 +73,7 @@ func PVs(items ...metav1.Object) *APIResource {
 		Version:    "v1",
 		Name:       "persistentvolumes",
 		ShortName:  "pv",
+		Kind:       "PersistentVolume",
 		Namespaced: false,
 		Items:      items,
 	}
@@ -81,6 +85,7 @@ func Secrets(items ...metav1.Object) *APIResource {
 		Version:    "v1",
 		Name:       "secrets",
 		ShortName:  "secrets",
+		Kind:       "Secret",
 		Namespaced: true,
 		Items:      items,
 	}
@@ -92,6 +97,7 @@ func Deployments(items ...metav1.Object) *APIResource {
 		Version:    "v1",
 		Name:       "deployments",
 		ShortName:  "deploy",
+		Kind:       "Deployment",
 		Namespaced: true,
 		Items:      items,
 	}
@@ -103,6 +109,7 @@ func ExtensionsDeployments(items ...metav1.Object) *APIResource {
 		Version:    "v1",
 		Name:       "deployments",
 		ShortName:  "deploy",
+		Kind:       "Deployment",
 		Namespaced: true,
 		Items:      items,
 	}
@@ -115,6 +122,7 @@ func VeleroDeployments(items ...metav1.Object) *APIResource {
 		Version:    "v1",
 		Name:       "deployments",
 		ShortName:  "deploy",
+		Kind:       "Deployment",
 		Namespaced: true,
 		Items:      items,
 	}
@@ -126,6 +134,7 @@ func Namespaces(items ...metav1.Object) *APIResource {
 		Version:    "v1",
 		Name:       "namespaces",
 		ShortName:  "ns",
+		Kind:       "Namespace",
 		Namespaced: false,
 		Items:      items,
 	}
@@ -137,6 +146,7 @@ func ServiceAccounts(items ...metav1.Object) *APIResource {
 		Version:    "v1",
 		Name:       "serviceaccounts",
 		ShortName:  "sa",
+		Kind:       "ServiceAccount",
 		Namespaced: true,
 		Items:      items,
 	}
@@ -148,6 +158,7 @@ func ConfigMaps(items ...metav1.Object) *APIResource {
 		Version:    "v1",
 		Name:       "configmaps",
 		ShortName:  "cm",
+		Kind:       "ConfigMap",
 		Namespaced: true,
 		Items:      items,
 	}
@@ -159,6 +170,7 @@ func CRDs(items ...metav1.Object) *APIResource {
 		Version:    "v1beta1",
 		Name:       "customresourcedefinitions",
 		ShortName:  "crd",
+		Kind:       "CustomResourceDefinition",
 		Namespaced: false,
 		Items:      items,
 	}
@@ -169,6 +181,7 @@ func VSLs(items ...metav1.Object) *APIResource {
 		Group:      "velero.io",
 		Version:    "v1",
 		Name:       "volumesnapshotlocations",
+		Kind:       "VolumeSnapshotLocation",
 		Namespaced: true,
 		Items:      items,
 	}
@@ -179,6 +192,7 @@ func Backups(items ...metav1.Object) *APIResource {
 		Group:      "velero.io",
 		Version:    "v1",
 		Name:       "backups",
+		Kind:       "Backup",
 		Namespaced: true,
 		Items:      items,
 	}
@@ -190,6 +204,7 @@ func Services(items ...metav1.Object) *APIResource {
 		Version:    "v1",
 		Name:       "services",
 		ShortName:  "svc",
+		Kind:       "Service",
 		Namespaced: true,
 		Items:      items,
 	}
@@ -200,6 +215,7 @@ func DataUploads(items ...metav1.Object) *APIResource {
 		Group:      "velero.io",
 		Version:    "v2alpha1",
 		Name:       "datauploads",
+		Kind:       "DataUpload",
 		Namespaced: true,
 		Items:      items,
 	}
```
```diff
@@ -62,6 +62,10 @@ func LoadCredentials(config map[string]string) (map[string]string, error) {
 		credFile = config[credentialFile]
 	}
 
+	if len(credFile) == 0 {
+		return map[string]string{}, nil
+	}
+
 	// put the credential file content into a map
 	creds, err := godotenv.Read(credFile)
 	if err != nil {
@@ -28,8 +28,9 @@ import (
 
 func TestLoadCredentials(t *testing.T) {
 	// no credential file
-	_, err := LoadCredentials(nil)
-	require.NotNil(t, err)
+	credentials, err := LoadCredentials(nil)
+	require.Nil(t, err)
+	assert.NotNil(t, credentials)
 
 	// specified credential file in the config
 	name := filepath.Join(os.TempDir(), "credential")
@@ -43,7 +44,7 @@ func TestLoadCredentials(t *testing.T) {
 	config := map[string]string{
 		"credentialsFile": name,
 	}
-	credentials, err := LoadCredentials(config)
+	credentials, err = LoadCredentials(config)
 	require.Nil(t, err)
 	assert.Equal(t, "value", credentials["key"])
```
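The provider change above makes LoadCredentials tolerate a missing credentials file by returning an empty (non-nil) map instead of an error. A small sketch of that guarded pattern using the godotenv library the code already relies on; the function, path, and simplified signature here are illustrative, not the actual Velero implementation.

```go
package main

import (
	"fmt"

	"github.com/joho/godotenv"
)

// loadCredentials mirrors the guarded behavior: no file configured means an
// empty map and no error, otherwise the dotenv-style file is parsed into a map.
func loadCredentials(credFile string) (map[string]string, error) {
	if len(credFile) == 0 {
		return map[string]string{}, nil
	}
	return godotenv.Read(credFile)
}

func main() {
	creds, err := loadCredentials("") // no credentials file configured
	fmt.Println(len(creds), err)      // 0 <nil>

	creds, err = loadCredentials("/tmp/credential") // illustrative path to a key=value file
	fmt.Println(creds, err)
}
```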
```diff
@@ -26,10 +26,8 @@ import (
 
 type MapUpdateFunc func(client.Object) []reconcile.Request
 
-// EnqueueRequestsFromMapUpdateFunc is for the same purpose with EnqueueRequestsFromMapFunc.
-// Merely, it is more friendly to updating the mapped objects in the MapUpdateFunc, because
-// on Update event, MapUpdateFunc is called for only once with the new object, so if MapUpdateFunc
-// does some update to the mapped objects, the update is done for once
+// EnqueueRequestsFromMapUpdateFunc has the same purpose with handler.EnqueueRequestsFromMapFunc.
+// MapUpdateFunc is simpler on Update event because mapAndEnqueue is called once with the new object. EnqueueRequestsFromMapFunc is called twice with the old and new object.
 func EnqueueRequestsFromMapUpdateFunc(fn MapUpdateFunc) handler.EventHandler {
 	return &enqueueRequestsFromMapFunc{
 		toRequests: fn,
```
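The rewritten comment above contrasts Velero's helper with controller-runtime's handler.EnqueueRequestsFromMapFunc, which on an Update event invokes the map function for both the old and the new object. Below is a hedged sketch of the standard handler for comparison; the mapped type and the produced request are placeholders, and the MapFunc signature without a context argument reflects the older controller-runtime series this code base appears to use.

```go
package main

import (
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// bslToRepoRequests maps a changed BackupStorageLocation-like object to the
// reconcile requests of objects that reference it (placeholder mapping logic).
func bslToRepoRequests(obj client.Object) []reconcile.Request {
	return []reconcile.Request{{
		NamespacedName: types.NamespacedName{
			Namespace: obj.GetNamespace(),
			Name:      "repo-for-" + obj.GetName(), // illustrative mapping
		},
	}}
}

func main() {
	// With the stock handler, an Update event runs bslToRepoRequests twice,
	// once for the old object and once for the new one; Velero's
	// EnqueueRequestsFromMapUpdateFunc exists so the mapping runs only once,
	// with the new object.
	_ = handler.EnqueueRequestsFromMapFunc(bslToRepoRequests)
}
```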
````diff
@@ -81,11 +81,16 @@ These instructions start the Velero server and a Minio instance that is accessib
         --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
     ```
 
-    This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`). You may need to update AWS plugin version to one that is [compatible](https://github.com/vmware-tanzu/velero-plugin-for-aws#compatibility) with the version of Velero you are installing.
+    * This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`). You may need to update AWS plugin version to one that is [compatible](https://github.com/vmware-tanzu/velero-plugin-for-aws#compatibility) with the version of Velero you are installing.
 
-    Additionally, you can specify `--use-node-agent` to enable File System Backup support, and `--wait` to wait for the deployment to be ready.
+    * Additionally, you can specify `--use-node-agent` to enable File System Backup support, and `--wait` to wait for the deployment to be ready.
 
-    This example also assumes you have named your Minio bucket "velero".
+    * This example also assumes you have named your Minio bucket "velero".
 
+    * Please make sure to set parameter `s3ForcePathStyle=true`. The parameter is used to set the Velero integrated AWS SDK data query address style. There are two types of the address: [virtual-host and path-style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html). If the `s3ForcePathStyle=true` is not set, the default value is false, then the AWS SDK will query in virtual-host style, but the MinIO server only support path-style address by default. The miss match will mean Velero can upload data to MinIO, but **cannot download from MinIO**. This [link](https://github.com/vmware-tanzu/velero/issues/7268) is an example of this issue.
+    It can be resolved by two ways:
+      * Set `s3ForcePathStyle=true` for parameter `--backup-location-config` when installing Velero. This is the preferred way.
+      * Make MinIO server support virtual-host style address. Add the [MINIO_DOMAIN environment variable](https://min.io/docs/minio/linux/reference/minio-server/settings/core.html#id5) for MinIO server will do the magic.
 
 
 1. Deploy the example nginx application:
````
````diff
@@ -81,11 +81,16 @@ These instructions start the Velero server and a Minio instance that is accessib
         --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
     ```
 
-    This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`). You may need to update AWS plugin version to one that is [compatible](https://github.com/vmware-tanzu/velero-plugin-for-aws#compatibility) with the version of Velero you are installing.
+    * This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`). You may need to update AWS plugin version to one that is [compatible](https://github.com/vmware-tanzu/velero-plugin-for-aws#compatibility) with the version of Velero you are installing.
 
-    Additionally, you can specify `--use-node-agent` to enable File System Backup support, and `--wait` to wait for the deployment to be ready.
+    * Additionally, you can specify `--use-node-agent` to enable File System Backup support, and `--wait` to wait for the deployment to be ready.
 
-    This example also assumes you have named your Minio bucket "velero".
+    * This example also assumes you have named your Minio bucket "velero".
 
+    * Please make sure to set parameter `s3ForcePathStyle=true`. The parameter is used to set the Velero integrated AWS SDK data query address style. There are two types of the address: [virtual-host and path-style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html). If the `s3ForcePathStyle=true` is not set, the default value is false, then the AWS SDK will query in virtual-host style, but the MinIO server only support path-style address by default. The miss match will mean Velero can upload data to MinIO, but **cannot download from MinIO**. This [link](https://github.com/vmware-tanzu/velero/issues/7268) is an example of this issue.
+    It can be resolved by two ways:
+      * Set `s3ForcePathStyle=true` for parameter `--backup-location-config` when installing Velero. This is the preferred way.
+      * Make MinIO server support virtual-host style address. Add the [MINIO_DOMAIN environment variable](https://min.io/docs/minio/linux/reference/minio-server/settings/core.html#id5) for MinIO server will do the magic.
 
 
 1. Deploy the example nginx application:
````
```diff
@@ -54,7 +54,7 @@ VELERO_IMAGE ?= velero/velero:main
 PLUGINS ?=
 RESTORE_HELPER_IMAGE ?=
 #Released version only
-UPGRADE_FROM_VELERO_VERSION ?= v1.10.2,v1.11.0
+UPGRADE_FROM_VELERO_VERSION ?= v1.11.0,v1.12.3
 # UPGRADE_FROM_VELERO_CLI can has the same format(a list divided by comma) with UPGRADE_FROM_VELERO_VERSION
 # Upgrade tests will be executed sequently according to the list by UPGRADE_FROM_VELERO_VERSION
 # So although length of UPGRADE_FROM_VELERO_CLI list is not equal with UPGRADE_FROM_VELERO_VERSION
@@ -62,7 +62,7 @@ UPGRADE_FROM_VELERO_VERSION ?= v1.10.2,v1.11.0
 # to the end, nil string will be set if UPGRADE_FROM_VELERO_CLI is shorter than UPGRADE_FROM_VELERO_VERSION
 UPGRADE_FROM_VELERO_CLI ?=
 
-MIGRATE_FROM_VELERO_VERSION ?= v1.11.0,self
+MIGRATE_FROM_VELERO_VERSION ?= v1.12.3,self
 MIGRATE_FROM_VELERO_CLI ?=
 
 VELERO_NAMESPACE ?= velero
```