Compare commits

...

39 Commits

Author SHA1 Message Date
Wenkai Yin(尹文开)
ea5a89f83b Merge pull request #7500 from ywk253100/240307_1.13.1
Generate the changelog for release 1.13.1
2024-03-08 13:03:11 +08:00
Wenkai Yin(尹文开)
642924d2bd Generate the changelog for release 1.13.1
Generate the changelog for release 1.13.1

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-03-07 11:23:07 +08:00
lyndon-li
8dca539314 Merge pull request #7468 from blackpiglet/7464_fix_release_1.13
[release-1.13]Modify the label used by the restore CLI to filter the PVR.
2024-03-01 09:47:55 +08:00
Xun Jiang
a6a6da5a72 Modify the label used by the restore CLI to filter the PVR.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-02-29 10:21:57 +08:00
danfeng
99376a3de6 Merge pull request #7461 from danfengliu/bumpup-upgrade-path
bump up upgrade path to 1.13
2024-02-27 14:51:41 +08:00
danfeng
eed1c383c8 Merge branch 'release-1.13' into bumpup-upgrade-path 2024-02-27 14:39:48 +08:00
Xun Jiang/Bruce Jiang
941ad1a993 Merge pull request #7450 from allenxu404/release-1.13
[cherry-pick]adjust the logic for the backup_last_status metric to stop incorrectly incrementing over time
2024-02-26 10:04:06 +08:00
allenxu404
02d229cd06 Adjust the logic for the backup_last_status metric to stop incorrectly incrementing over time
Signed-off-by: allenxu404 <qix2@vmware.com>
2024-02-26 09:26:04 +08:00
danfengl
c859f7bf11 bump up upgrade path to 1.13
Signed-off-by: danfengl <danfengl@vmware.com>
2024-02-23 06:42:29 +00:00
lyndon-li
e1222ffd74 Merge pull request #7459 from Lyndon-Li/release-1.13
[1.13] Issue 7308: change the data path requeue time to 5 seconds
2024-02-22 16:17:52 +08:00
Lyndon-Li
9cdaeadef3 issue 7308: change the data path requeue time to 5 seconds
Signed-off-by: Lyndon-Li <lyonghui@vmware.com>
2024-02-22 16:02:35 +08:00
Wenkai Yin(尹文开)
cb7211d997 Merge pull request #7453 from ywk253100/240221_credential
[cherry-pick]Don't return error when no credential file found
2024-02-21 16:58:22 +08:00
Wenkai Yin(尹文开)
df08980618 Don't return error when no credential file found
Don't return error when no credential file found

Fixes #7395

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-02-21 16:05:15 +08:00
lyndon-li
51a90e7d2f Merge pull request #7399 from kaovilai/restic-recreate-repo-vel1.13
release-1.13: BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7380)
2024-02-20 11:13:46 +08:00
lyndon-li
62a531785f Merge branch 'release-1.13' into restic-recreate-repo-vel1.13 2024-02-20 10:50:18 +08:00
qiuming
5dd1d3bfe5 Merge pull request #7407 from blackpiglet/fix_velero_repo_get_bug_1.13
[cherry-pick][release-1.13]Fix the `velero repo get` nil pointer issue.
2024-02-19 10:53:44 +08:00
Xun Jiang
701e786150 Fix the velero repo get nil pointer issue.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-02-08 14:31:59 +08:00
Tiger Kaovilai
170fcc53ba BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7380)
* Add BackupRepositories invalidation on BSL Create
Simplify comments

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

* Simplify

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

---------

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-02-06 16:35:40 -05:00
Xun Jiang/Bruce Jiang
44aa6a7c6b Merge pull request #7372 from blackpiglet/add_uploader_config_for_schedule_v1.13
Add `ParallelFilesUpload` for schedule creation.
2024-01-31 15:42:04 +08:00
Xun Jiang
2a9f4fa576 Add ParallelFilesUpload for schedule creation.
Modify the restore-helper's printed information.

Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-01-31 13:35:10 +08:00
Wenkai Yin(尹文开)
4d27ca99c1 Merge pull request #7369 from qiuming-best/release-1.13
[Cherry-Pick] Fix server start failure when no default BSL
2024-01-30 17:10:45 +08:00
Ming Qiu
8914c7209b Fix server start failure when no default BSL
Signed-off-by: Ming Qiu <mqiu@vmware.com>
2024-01-30 08:33:54 +00:00
Wenkai Yin(尹文开)
76670e940c Merge pull request #7351 from ywk253100/240124_log
Log the error details
2024-01-24 13:54:27 +08:00
Wenkai Yin(尹文开)
25d977e5bc Log the error details
Log the error details

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-24 12:43:59 +08:00
qiuming
94c7d4b6d4 Merge pull request #7346 from ywk253100/240122_changelog
Check whether the API resource exists before creating the informer cache
2024-01-24 10:47:16 +08:00
Wenkai Yin(尹文开)
09401c8454 Check whether the API resource exists before creating the informer cache
Check whether the API resource exists before creating the informer cache

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-23 17:19:09 +08:00
qiuming
981d64a1b8 Merge pull request #7338 from ywk253100/240122_changelog
Move unreleased changelogs to 1.13 changelog
2024-01-23 10:19:56 +08:00
Wenkai Yin(尹文开)
16b8b8da72 Move unreleased changelogs to 1.13 changelog
Move unreleased changelogs to 1.13 changelog

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-23 10:06:15 +08:00
lyndon-li
9fd73b2d13 Merge pull request #7339 from ywk253100/240122_log_erro
Log the error returned by the discovery helper
2024-01-22 14:11:38 +08:00
Wenkai Yin(尹文开)
c377e472e8 Log the error returned by the discovery helper
Log the error returned by the discovery helper

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-22 11:12:00 +08:00
Wenkai Yin(尹文开)
f5714cb636 [cherry-pick]Do not attempt restore resource with no available GVK in cluster (#7336)
* Specify the Kind explicitly in the API resource

Specify the Kind explicitly in the API resource to avoid wrong Kind conversion


* Do not attempt restore resource with no available GVK in cluster (#7322)

Check for GVK before attempting restore.


---------

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-authored-by: Tiger Kaovilai <tkaovila@redhat.com>
2024-01-22 10:51:36 +08:00
Wenkai Yin(尹文开)
5ffa12189b Merge pull request #7328 from ywk253100/240118_release_node
Add release note for the informer cache memory consumption
2024-01-18 15:27:43 +08:00
Wenkai Yin(尹文开)
1882be763e Add release note for the informer cache memory consumption
Add release note for the informer cache memory consumption

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-18 13:47:34 +08:00
Wenkai Yin(尹文开)
42bbf87197 Merge pull request #7325 from ywk253100/240116_informer
Create an informer per resource to avoid huge memory consumption
2024-01-18 10:44:15 +08:00
Wenkai Yin(尹文开)
8aa6a8e59d Create an informer per resource to avoid huge memory consumption
Create an informer per resource to avoid huge memory consumption

Fixes #7323

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-17 22:37:49 +08:00
Xun Jiang/Bruce Jiang
fdb29819b4 Merge pull request #7304 from blackpiglet/fix_7268_release_1.13
Add detail for parameter s3ForcePathStyle in MinIO page.
2024-01-15 13:31:30 +08:00
Xun Jiang
74f225037c Add detail for parameter s3ForcePathStyle in MinIO page.
Signed-off-by: Xun Jiang <blackpigletbruce@gmail.com>
2024-01-12 16:55:38 +08:00
Wenkai Yin(尹文开)
6e90e628aa Merge pull request #7303 from ywk253100/240110_pin
Pin the version of Golang and base image
2024-01-10 17:52:51 +08:00
Wenkai Yin(尹文开)
46f64f2f98 Pin the version of Golang and base image
Pin the version of Golang and base image

Signed-off-by: Wenkai Yin(尹文开) <yinw@vmware.com>
2024-01-10 17:35:28 +08:00
35 changed files with 175 additions and 117 deletions

View File

@@ -14,7 +14,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.21'
go-version: '1.21.6'
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI

View File

@@ -14,7 +14,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.21'
go-version: '1.21.6'
id: go
# Look for a CLI that's made for this PR
- name: Fetch built CLI
@@ -72,7 +72,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.21'
go-version: '1.21.6'
id: go
- name: Check out the code
uses: actions/checkout@v2

View File

@@ -10,7 +10,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.21'
go-version: '1.21.6'
id: go
- name: Check out the code
uses: actions/checkout@v2

View File

@@ -18,7 +18,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.21'
go-version: '1.21.6'
id: go
- uses: actions/checkout@v3

View File

@@ -13,7 +13,7 @@
# limitations under the License.
# Velero binary build section
FROM --platform=$BUILDPLATFORM golang:1.21-bookworm as velero-builder
FROM --platform=$BUILDPLATFORM golang:1.21.6-bookworm as velero-builder
ARG GOPROXY
ARG BIN
@@ -47,7 +47,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache
# Restic binary build section
FROM --platform=$BUILDPLATFORM golang:1.21-bookworm as restic-builder
FROM --platform=$BUILDPLATFORM golang:1.21.6-bookworm as restic-builder
ARG BIN
ARG TARGETOS
@@ -70,7 +70,7 @@ RUN mkdir -p /output/usr/bin && \
go clean -modcache -cache
# Velero image packing section
FROM paketobuildpacks/run-jammy-tiny:latest
FROM paketobuildpacks/run-jammy-tiny:0.2.19
LABEL maintainer="Xun Jiang <jxun@vmware.com>"

View File

@@ -52,7 +52,7 @@ git_sha = str(local("git rev-parse HEAD", quiet = True, echo_off = True)).strip(
tilt_helper_dockerfile_header = """
# Tilt image
FROM golang:1.21 as tilt-helper
FROM golang:1.21.6 as tilt-helper
# Support live reloading with Tilt
RUN wget --output-document /restart.sh --quiet https://raw.githubusercontent.com/windmilleng/rerun-process-wrapper/master/restart.sh && \

View File

@@ -1,3 +1,24 @@
## v1.13.1
### 2024-03-13
### Download
https://github.com/vmware-tanzu/velero/releases/tag/v1.13.1
### Container Image
`velero/velero:v1.13.1`
### Documentation
https://velero.io/docs/v1.13/
### Upgrading
https://velero.io/docs/v1.13/upgrade-to-1.13/
### All changes
* Fix issue #7308, change the data path requeue time to 5 seconds for data mover backup/restore, PVB and PVR. (#7459, @Lyndon-Li)
* BackupRepositories associated with a BSL are invalidated when BSL is (re-)created. (#7399, @kaovilai)
* Adjust the logic for the backup_last_status metric to stop incorrectly incrementing over time (#7445, @allenxu404)
## v1.13
### 2024-01-10
@@ -64,11 +85,15 @@ To fix CVEs and keep pace with Golang, Velero made changes as follows:
### Limitations/Known issues
* The backup's VolumeInfo metadata doesn't include the information updated by the async operations. This may be supported in the v1.14 release.
### Note
* Velero introduces the informer cache, which is enabled by default. The informer cache improves restore performance but may cause higher memory consumption. If you get OOM errors, increase the memory limit of the Velero pod or disable the informer cache by specifying the `--disable-informer-cache` option when installing Velero.
### Deprecation announcement
* The generated k8s clients, informers, and listers are deprecated in the Velero v1.13 release. They live in the Velero repository's pkg/generated directory. According to the n+2 support policy, the deprecated code is kept for two more releases; the pkg/generated directory is expected to be deleted in the v1.15 release.
* Now that the backup VolumeInfo metadata file is added to the backup, Velero decides how to restore the PV resource according to the VolumeInfo content. To support backups generated by older versions of Velero, the old logic is also kept. Support for backups without the VolumeInfo metadata file will be kept for two releases; that logic is expected to be deleted in the v1.15 release.
### All Changes
* Check that the resource's Group, Version, and Kind are available in the cluster before attempting restore, to prevent restores from getting stuck (#7336, @kaovilai)
* Make the "disable-informer-cache" option false (enabled) by default to keep it consistent with the help message (#7294, @ywk253100)
* Fix issue #6928, remove snapshot deletion timeout for PVB (#7282, @Lyndon-Li)
* Do not set "targetNamespace" to namespace items (#7274, @reasonerjt)

View File

@@ -66,10 +66,10 @@ func done() bool {
doneFile := filepath.Join("/restores", child.Name(), ".velero", os.Args[1])
if _, err := os.Stat(doneFile); os.IsNotExist(err) {
fmt.Printf("Not found: %s\n", doneFile)
fmt.Printf("The filesystem restore done file %s is not found yet. Retry later.\n", doneFile)
return false
} else if err != nil {
fmt.Fprintf(os.Stderr, "ERROR looking for %s: %s\n", doneFile, err)
fmt.Fprintf(os.Stderr, "ERROR looking filesystem restore done file %s: %s\n", doneFile, err)
return false
}
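The hunk above only reworks the restore helper's done-file messages. For context, the surrounding logic is a simple poll for a per-volume marker file; the sketch below reconstructs that pattern under assumed names (`checkDone`, a hard-coded marker path) rather than the helper's actual API:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// checkDone reports whether the filesystem restore marker file exists yet.
func checkDone(markerPath string) bool {
	if _, err := os.Stat(markerPath); os.IsNotExist(err) {
		fmt.Printf("The filesystem restore done file %s is not found yet. Retry later.\n", markerPath)
		return false
	} else if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR looking filesystem restore done file %s: %s\n", markerPath, err)
		return false
	}
	return true
}

func main() {
	// Illustrative path only; the real helper builds it from /restores and os.Args.
	for !checkDone("/restores/myvol/.velero/restore-1") {
		time.Sleep(time.Second)
	}
}
```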

go.mod
View File

@@ -2,7 +2,7 @@ module github.com/vmware-tanzu/velero
go 1.21
toolchain go1.21.3
toolchain go1.21.6
require (
cloud.google.com/go/storage v1.33.0

View File

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM --platform=linux/amd64 golang:1.21-bookworm
FROM --platform=linux/amd64 golang:1.21.6-bookworm
ARG GOPROXY

View File

@@ -35,8 +35,8 @@ type DynamicFactory interface {
// ClientForGroupVersionResource returns a Dynamic client for the given group/version
// and resource for the given namespace.
ClientForGroupVersionResource(gv schema.GroupVersion, resource metav1.APIResource, namespace string) (Dynamic, error)
// DynamicSharedInformerFactoryForNamespace returns a DynamicSharedInformerFactory for the given namespace.
DynamicSharedInformerFactoryForNamespace(namespace string) dynamicinformer.DynamicSharedInformerFactory
// DynamicSharedInformerFactory returns a DynamicSharedInformerFactory.
DynamicSharedInformerFactory() dynamicinformer.DynamicSharedInformerFactory
}
// dynamicFactory implements DynamicFactory.
@@ -55,8 +55,8 @@ func (f *dynamicFactory) ClientForGroupVersionResource(gv schema.GroupVersion, r
}, nil
}
func (f *dynamicFactory) DynamicSharedInformerFactoryForNamespace(namespace string) dynamicinformer.DynamicSharedInformerFactory {
return dynamicinformer.NewFilteredDynamicSharedInformerFactory(f.dynamicClient, time.Minute, namespace, nil)
func (f *dynamicFactory) DynamicSharedInformerFactory() dynamicinformer.DynamicSharedInformerFactory {
return dynamicinformer.NewDynamicSharedInformerFactory(f.dynamicClient, time.Minute)
}
// Creator creates an object.
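Both hunks above swap per-namespace informer factories for a single cluster-scoped one. A minimal sketch of the two client-go constructors involved, assuming only a reachable kubeconfig (everything else is stock `k8s.io/client-go` API):

```go
package main

import (
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Old shape: one factory per namespace.
	_ = dynamicinformer.NewFilteredDynamicSharedInformerFactory(dc, time.Minute, "velero", nil)

	// New shape: a single cluster-scoped factory shared by all resources.
	factory := dynamicinformer.NewDynamicSharedInformerFactory(dc, time.Minute)
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "pods"}
	informer := factory.ForResource(gvr)

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)              // starts all informers requested so far
	factory.WaitForCacheSync(stop)   // blocks until their caches are primed
	_ = informer.Lister()
}
```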

View File

@@ -43,9 +43,8 @@ func NewGetCommand(f client.Factory, use string) *cobra.Command {
crClient, err := f.KubebuilderClient()
cmd.CheckError(err)
var repos *api.BackupRepositoryList
repos := new(api.BackupRepositoryList)
if len(args) > 0 {
repos = new(api.BackupRepositoryList)
for _, name := range args {
repo := new(api.BackupRepository)
err := crClient.Get(context.TODO(), ctrlclient.ObjectKey{Namespace: f.Namespace(), Name: name}, repo)
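The fix above is a classic Go nil-pointer pitfall: the old `var repos *api.BackupRepositoryList` left `repos` nil on the no-arguments path of `velero repo get`, so a later dereference panicked, whereas `new(...)` always yields a usable zero value. A self-contained illustration with a stand-in struct (not Velero's actual type):

```go
package main

import "fmt"

// BackupRepositoryList stands in for the Velero API list type.
type BackupRepositoryList struct {
	Items []string
}

func main() {
	var repos *BackupRepositoryList // nil: repos.Items would panic here

	repos = new(BackupRepositoryList) // allocated: safe to populate and read
	repos.Items = append(repos.Items, "default-repo")
	fmt.Println(len(repos.Items)) // 1
}
```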

View File

@@ -74,7 +74,7 @@ func NewDescribeCommand(f client.Factory, use string) *cobra.Command {
podVolumeRestoreList := new(velerov1api.PodVolumeRestoreList)
err = kbClient.List(context.TODO(), podVolumeRestoreList, &controllerclient.ListOptions{
Namespace: f.Namespace(),
LabelSelector: labels.SelectorFromSet(map[string]string{velerov1api.BackupNameLabel: label.GetValidName(restore.Name)}),
LabelSelector: labels.SelectorFromSet(map[string]string{velerov1api.RestoreNameLabel: label.GetValidName(restore.Name)}),
})
if err != nil {
fmt.Fprintf(os.Stderr, "error getting PodVolumeRestores for restore %s: %v\n", restore.Name, err)

View File

@@ -171,6 +171,12 @@ func (o *CreateOptions) Run(c *cobra.Command, f client.Factory) error {
schedule.Spec.Template.ResourcePolicy = &v1.TypedLocalObjectReference{Kind: resourcepolicies.ConfigmapRefType, Name: o.BackupOptions.ResPoliciesConfigmap}
}
if o.BackupOptions.ParallelFilesUpload > 0 {
schedule.Spec.Template.UploaderConfig = &api.UploaderConfigForBackup{
ParallelFilesUpload: o.BackupOptions.ParallelFilesUpload,
}
}
if printed, err := output.PrintWithFormat(c, schedule); printed || err != nil {
return err
}
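The hunk above attaches an uploader config only when `ParallelFilesUpload` is positive, so schedules that don't use the option are unchanged. A self-contained sketch of that guard; the struct shapes below merely mirror the Velero API types for illustration:

```go
package main

import "fmt"

// These types mirror the shapes used in the diff, for illustration only.
type UploaderConfigForBackup struct {
	ParallelFilesUpload int
}

type BackupSpec struct {
	UploaderConfig *UploaderConfigForBackup
}

type ScheduleSpec struct {
	Template BackupSpec
}

func main() {
	parallelFilesUpload := 8 // e.g. taken from a CLI option
	schedule := ScheduleSpec{}
	// Only attach the config when the option was actually set.
	if parallelFilesUpload > 0 {
		schedule.Template.UploaderConfig = &UploaderConfigForBackup{
			ParallelFilesUpload: parallelFilesUpload,
		}
	}
	fmt.Printf("%+v\n", schedule.Template.UploaderConfig)
}
```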

View File

@@ -33,6 +33,7 @@ import (
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
corev1api "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
@@ -467,7 +468,12 @@ func setDefaultBackupLocation(ctx context.Context, client ctrlclient.Client, nam
backupLocation := &velerov1api.BackupStorageLocation{}
if err := client.Get(ctx, types.NamespacedName{Namespace: namespace, Name: defaultBackupLocation}, backupLocation); err != nil {
return errors.WithStack(err)
if apierrors.IsNotFound(err) {
logger.WithField("backupStorageLocation", defaultBackupLocation).WithError(err).Warn("Failed to set default backup storage location at server start")
return nil
} else {
return errors.WithStack(err)
}
}
if !backupLocation.Spec.Default {
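The change above makes a missing default BSL non-fatal at server start: NotFound is logged and swallowed, anything else still fails. A hedged sketch of the same tolerate-NotFound pattern using only `k8s.io/apimachinery` (the `setDefault` helper and its closure argument are illustrative, not Velero's signatures):

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func setDefault(get func() error) error {
	if err := get(); err != nil {
		if apierrors.IsNotFound(err) {
			// Warn and continue: the BSL may be created after startup.
			fmt.Println("default backup storage location not found; continuing")
			return nil
		}
		return err // anything else (RBAC, network, ...) still fails startup
	}
	return nil
}

func main() {
	notFound := apierrors.NewNotFound(
		schema.GroupResource{Group: "velero.io", Resource: "backupstoragelocations"}, "default")
	fmt.Println(setDefault(func() error { return notFound })) // <nil>
}
```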

View File

@@ -409,4 +409,13 @@ func Test_setDefaultBackupLocation(t *testing.T) {
nonDefaultLocation := &velerov1api.BackupStorageLocation{}
require.Nil(t, c.Get(context.Background(), client.ObjectKey{Namespace: "velero", Name: "non-default"}, nonDefaultLocation))
assert.False(t, nonDefaultLocation.Spec.Default)
// no default location specified
c = fake.NewClientBuilder().WithScheme(scheme).Build()
err := setDefaultBackupLocation(context.Background(), c, "velero", "", logrus.New())
assert.NoError(t, err)
// no default location created
err = setDefaultBackupLocation(context.Background(), c, "velero", "default", logrus.New())
assert.NoError(t, err)
}

View File

@@ -76,7 +76,11 @@ func (r *BackupRepoReconciler) SetupWithManager(mgr ctrl.Manager) error {
For(&velerov1api.BackupRepository{}).
Watches(s, nil).
Watches(&source.Kind{Type: &velerov1api.BackupStorageLocation{}}, kube.EnqueueRequestsFromMapUpdateFunc(r.invalidateBackupReposForBSL),
builder.WithPredicates(kube.NewUpdateEventPredicate(r.needInvalidBackupRepo))).
builder.WithPredicates(
// When BSL updates, check if the backup repositories need to be invalidated
kube.NewUpdateEventPredicate(r.needInvalidBackupRepo),
// When BSL is created, invalidate any backup repositories that reference it
kube.NewCreateEventPredicate(func(client.Object) bool { return true }))).
Complete(r)
}
@@ -90,13 +94,13 @@ func (r *BackupRepoReconciler) invalidateBackupReposForBSL(bslObj client.Object)
}).AsSelector(),
}
if err := r.List(context.TODO(), list, options); err != nil {
r.logger.WithField("BSL", bsl.Name).WithError(err).Error("unable to list BackupRepositorys")
r.logger.WithField("BSL", bsl.Name).WithError(err).Error("unable to list BackupRepositories")
return []reconcile.Request{}
}
for i := range list.Items {
r.logger.WithField("BSL", bsl.Name).Infof("Invalidating Backup Repository %s", list.Items[i].Name)
if err := r.patchBackupRepository(context.Background(), &list.Items[i], repoNotReady("re-establish on BSL change")); err != nil {
if err := r.patchBackupRepository(context.Background(), &list.Items[i], repoNotReady("re-establish on BSL change or create")); err != nil {
r.logger.WithField("BSL", bsl.Name).WithError(err).Errorf("fail to patch BackupRepository %s", list.Items[i].Name)
}
}
@@ -104,6 +108,7 @@ func (r *BackupRepoReconciler) invalidateBackupReposForBSL(bslObj client.Object)
return []reconcile.Request{}
}
// needInvalidBackupRepo returns true if the BSL's storage type, bucket, prefix, CACert, or config has changed
func (r *BackupRepoReconciler) needInvalidBackupRepo(oldObj client.Object, newObj client.Object) bool {
oldBSL := oldObj.(*velerov1api.BackupStorageLocation)
newBSL := newObj.(*velerov1api.BackupStorageLocation)
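The controller above now also reacts to BSL Create events, invalidating any BackupRepositories that reference the (re-)created BSL. A minimal sketch of equivalent wiring using stock controller-runtime predicates rather than Velero's `kube` helpers (assumption: those helpers behave like `predicate.Funcs`):

```go
package main

import (
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// needInvalidate stands in for needInvalidBackupRepo: true when the BSL's
// storage type, bucket, prefix, CACert, or config changed.
func needInvalidate(e event.UpdateEvent) bool { return true }

func main() {
	p := predicate.Funcs{
		// Always enqueue on Create, so repos are invalidated on BSL (re-)creation.
		CreateFunc: func(event.CreateEvent) bool { return true },
		// On Update, only enqueue when relevant BSL fields changed.
		UpdateFunc: needInvalidate,
	}
	fmt.Println(p.Create(event.CreateEvent{})) // true
}
```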

View File

@@ -261,7 +261,7 @@ func (r *DataDownloadReconciler) Reconcile(ctx context.Context, req ctrl.Request
if err != nil {
if err == datapath.ConcurrentLimitExceed {
log.Info("Data path instance is concurrent limited requeue later")
return ctrl.Result{Requeue: true, RequeueAfter: time.Minute}, nil
return ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5}, nil
} else {
return r.errorOut(ctx, dd, err, "error to create data path", log)
}
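This is the issue-7308 change: when the concurrent data-path limit is hit, the reconciler requeues after 5 seconds instead of a full minute so queued work starts sooner; the same one-line edit recurs in the DataUpload, PodVolumeBackup, and PodVolumeRestore reconcilers below. A minimal runnable sketch of the pattern:

```go
package main

import (
	"fmt"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// onConcurrentLimit models the branch taken when datapath.ConcurrentLimitExceed
// is returned: back off briefly, then let the work queue retry.
func onConcurrentLimit() (ctrl.Result, error) {
	// Previously: RequeueAfter: time.Minute
	return ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5}, nil
}

func main() {
	res, _ := onConcurrentLimit()
	fmt.Println(res.RequeueAfter) // 5s
}
```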

View File

@@ -219,7 +219,7 @@ func TestDataDownloadReconcile(t *testing.T) {
dataMgr: datapath.NewManager(0),
notNilExpose: true,
notMockCleanUp: true,
expectedResult: &ctrl.Result{Requeue: true, RequeueAfter: time.Minute},
expectedResult: &ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5},
},
{
name: "Error getting volume directory name for pvc in pod",
@@ -416,8 +416,8 @@ func TestDataDownloadReconcile(t *testing.T) {
require.NotNil(t, actualResult)
if test.expectedResult != nil {
assert.Equal(t, test.expectedResult.Requeue, test.expectedResult.Requeue)
assert.Equal(t, test.expectedResult.RequeueAfter, test.expectedResult.RequeueAfter)
assert.Equal(t, test.expectedResult.Requeue, actualResult.Requeue)
assert.Equal(t, test.expectedResult.RequeueAfter, actualResult.RequeueAfter)
}
dd := velerov2alpha1api.DataDownload{}

View File

@@ -269,7 +269,7 @@ func (r *DataUploadReconciler) Reconcile(ctx context.Context, req ctrl.Request)
if err != nil {
if err == datapath.ConcurrentLimitExceed {
log.Info("Data path instance is concurrent limited requeue later")
return ctrl.Result{Requeue: true, RequeueAfter: time.Minute}, nil
return ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5}, nil
} else {
return r.errorOut(ctx, du, err, "error to create data path", log)
}

View File

@@ -413,7 +413,7 @@ func TestReconcile(t *testing.T) {
du: dataUploadBuilder().Phase(velerov2alpha1api.DataUploadPhasePrepared).SnapshotType(fakeSnapshotType).Result(),
expectedProcessed: false,
expected: dataUploadBuilder().Phase(velerov2alpha1api.DataUploadPhasePrepared).Result(),
expectedRequeue: ctrl.Result{Requeue: true, RequeueAfter: time.Minute},
expectedRequeue: ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5},
},
{
name: "prepare timeout",

View File

@@ -126,7 +126,7 @@ func (r *PodVolumeBackupReconciler) Reconcile(ctx context.Context, req ctrl.Requ
fsBackup, err := r.dataPathMgr.CreateFileSystemBR(pvb.Name, pVBRRequestor, ctx, r.Client, pvb.Namespace, callbacks, log)
if err != nil {
if err == datapath.ConcurrentLimitExceed {
return ctrl.Result{Requeue: true, RequeueAfter: time.Minute}, nil
return ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5}, nil
} else {
return r.errorOut(ctx, &pvb, err, "error to create data path", log)
}

View File

@@ -383,7 +383,7 @@ var _ = Describe("PodVolumeBackup Reconciler", func() {
expected: builder.ForPodVolumeBackup(velerov1api.DefaultNamespace, "pvb-1").
Phase("").
Result(),
expectedRequeue: ctrl.Result{Requeue: true, RequeueAfter: time.Minute},
expectedRequeue: ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5},
}),
)
})

View File

@@ -122,7 +122,7 @@ func (c *PodVolumeRestoreReconciler) Reconcile(ctx context.Context, req ctrl.Req
fsRestore, err := c.dataPathMgr.CreateFileSystemBR(pvr.Name, pVBRRequestor, ctx, c.Client, pvr.Namespace, callbacks, log)
if err != nil {
if err == datapath.ConcurrentLimitExceed {
return ctrl.Result{Requeue: true, RequeueAfter: time.Minute}, nil
return ctrl.Result{Requeue: true, RequeueAfter: time.Second * 5}, nil
} else {
return c.errorOut(ctx, pvr, err, "error to create data path", log)
}

View File

@@ -468,7 +468,7 @@ func (m *ServerMetrics) InitSchedule(scheduleName string) {
c.WithLabelValues(scheduleName).Add(0)
}
if c, ok := m.metrics[backupLastStatus].(*prometheus.GaugeVec); ok {
c.WithLabelValues(scheduleName).Add(1)
c.WithLabelValues(scheduleName).Set(float64(1))
}
if c, ok := m.metrics[restoreAttemptTotal].(*prometheus.CounterVec); ok {
c.WithLabelValues(scheduleName).Add(0)
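The one-line fix above is the heart of the backup_last_status bug: `Add(1)` on a gauge accumulates across repeated `InitSchedule` calls, while `Set(1)` is idempotent. A runnable demonstration with the Prometheus Go client (the metric name is reused for illustration only):

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/testutil"
)

func main() {
	g := prometheus.NewGaugeVec(
		prometheus.GaugeOpts{Name: "backup_last_status"},
		[]string{"schedule"},
	)

	// Simulate InitSchedule being called three times for the same schedule.
	for i := 0; i < 3; i++ {
		g.WithLabelValues("daily").Set(1) // idempotent: value stays 1
		// g.WithLabelValues("daily").Add(1) // buggy variant: value would reach 3
	}

	fmt.Println(testutil.ToFloat64(g.WithLabelValues("daily"))) // 1
}
```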

View File

@@ -45,7 +45,6 @@ import (
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/dynamic/dynamicinformer"
"k8s.io/client-go/informers"
corev1 "k8s.io/client-go/kubernetes/typed/core/v1"
"k8s.io/client-go/tools/cache"
crclient "sigs.k8s.io/controller-runtime/pkg/client"
@@ -309,8 +308,6 @@ func (kr *kubernetesRestorer) RestoreWithResolvers(
resourceTerminatingTimeout: kr.resourceTerminatingTimeout,
resourceTimeout: kr.resourceTimeout,
resourceClients: make(map[resourceClientKey]client.Dynamic),
dynamicInformerFactories: make(map[string]*informerFactoryWithContext),
resourceInformers: make(map[resourceClientKey]informers.GenericInformer),
restoredItems: req.RestoredItems,
renamedPVs: make(map[string]string),
pvRenamer: kr.pvRenamer,
@@ -362,8 +359,7 @@ type restoreContext struct {
resourceTerminatingTimeout time.Duration
resourceTimeout time.Duration
resourceClients map[resourceClientKey]client.Dynamic
dynamicInformerFactories map[string]*informerFactoryWithContext
resourceInformers map[resourceClientKey]informers.GenericInformer
dynamicInformerFactory *informerFactoryWithContext
restoredItems map[itemKey]restoredItemStatus
renamedPVs map[string]string
pvRenamer func(string) (string, error)
@@ -447,11 +443,16 @@ func (ctx *restoreContext) execute() (results.Result, results.Result) {
// Need to stop all informers if enabled
if !ctx.disableInformerCache {
context, cancel := signal.NotifyContext(go_context.Background(), os.Interrupt)
ctx.dynamicInformerFactory = &informerFactoryWithContext{
factory: ctx.dynamicFactory.DynamicSharedInformerFactory(),
context: context,
cancel: cancel,
}
defer func() {
// Call the cancel func to close the channel for each started informer
for _, factory := range ctx.dynamicInformerFactories {
factory.cancel()
}
ctx.dynamicInformerFactory.cancel()
// After upgrading to client-go 0.27 or newer, also call Shutdown for each informer factory
}()
}
@@ -579,28 +580,29 @@ func (ctx *restoreContext) execute() (results.Result, results.Result) {
// initialize informer caches for selected resources if enabled
if !ctx.disableInformerCache {
// CRD informer will have already been initialized if any CRDs were created,
// but already-initialized informers aren't re-initialized because getGenericInformer
// looks for an existing one first.
factoriesToStart := make(map[string]*informerFactoryWithContext)
for _, informerResource := range selectedResourceCollection {
gr := schema.ParseGroupResource(informerResource.resource)
if informerResource.totalItems == 0 {
continue
}
version := ""
for _, items := range informerResource.selectedItemsByNamespace {
// don't use ns key since it represents original ns, not mapped ns
if len(items) == 0 {
continue
}
// use the first item in the list to initialize the informer. The rest of the list
// should share the same gvr and namespace
_, factory := ctx.getGenericInformerInternal(gr, items[0].version, items[0].targetNamespace)
if factory != nil {
factoriesToStart[items[0].targetNamespace] = factory
}
version = items[0].version
break
}
gvr := schema.ParseGroupResource(informerResource.resource).WithVersion(version)
_, _, err := ctx.discoveryHelper.ResourceFor(gvr)
if err != nil {
ctx.log.Infof("failed to create informer for %s: %v", gvr, err)
continue
}
ctx.dynamicInformerFactory.factory.ForResource(gvr)
}
for _, factoryWithContext := range factoriesToStart {
factoryWithContext.factory.WaitForCacheSync(factoryWithContext.context.Done())
}
ctx.dynamicInformerFactory.factory.Start(ctx.dynamicInformerFactory.context.Done())
ctx.log.Info("waiting informer cache sync ...")
ctx.dynamicInformerFactory.factory.WaitForCacheSync(ctx.dynamicInformerFactory.context.Done())
}
// reset processedItems and totalItems before processing full resource list
@@ -1061,47 +1063,23 @@ func (ctx *restoreContext) getResourceClient(groupResource schema.GroupResource,
return client, nil
}
// if new informer is created, non-nil factory is returned
func (ctx *restoreContext) getGenericInformerInternal(groupResource schema.GroupResource, version, namespace string) (informers.GenericInformer, *informerFactoryWithContext) {
var returnFactory *informerFactoryWithContext
func (ctx *restoreContext) getResourceLister(groupResource schema.GroupResource, obj *unstructured.Unstructured, namespace string) (cache.GenericNamespaceLister, error) {
_, _, err := ctx.discoveryHelper.KindFor(obj.GroupVersionKind())
if err != nil {
return nil, err
}
informer := ctx.dynamicInformerFactory.factory.ForResource(groupResource.WithVersion(obj.GroupVersionKind().Version))
// if the restore contains CRDs or the RIA returns new resources, need to make sure the corresponding informers are synced
if !informer.Informer().HasSynced() {
ctx.dynamicInformerFactory.factory.Start(ctx.dynamicInformerFactory.context.Done())
ctx.log.Infof("waiting informer cache sync for %s, %s/%s ...", groupResource, namespace, obj.GetName())
ctx.dynamicInformerFactory.factory.WaitForCacheSync(ctx.dynamicInformerFactory.context.Done())
}
key := getResourceClientKey(groupResource, version, namespace)
factoryWithContext, ok := ctx.dynamicInformerFactories[key.namespace]
if !ok {
factory := ctx.dynamicFactory.DynamicSharedInformerFactoryForNamespace(namespace)
informerContext, informerCancel := signal.NotifyContext(go_context.Background(), os.Interrupt)
factoryWithContext = &informerFactoryWithContext{
factory: factory,
context: informerContext,
cancel: informerCancel,
}
ctx.dynamicInformerFactories[key.namespace] = factoryWithContext
}
informer, ok := ctx.resourceInformers[key]
if !ok {
ctx.log.Infof("[debug] Creating factory for %s in namespace %s", key.resource, key.namespace)
informer = factoryWithContext.factory.ForResource(key.resource)
factoryWithContext.factory.Start(factoryWithContext.context.Done())
ctx.resourceInformers[key] = informer
returnFactory = factoryWithContext
}
return informer, returnFactory
}
func (ctx *restoreContext) getGenericInformer(groupResource schema.GroupResource, version, namespace string) informers.GenericInformer {
informer, factoryWithContext := ctx.getGenericInformerInternal(groupResource, version, namespace)
if factoryWithContext != nil {
factoryWithContext.factory.WaitForCacheSync(factoryWithContext.context.Done())
}
return informer
}
func (ctx *restoreContext) getResourceLister(groupResource schema.GroupResource, obj *unstructured.Unstructured, namespace string) cache.GenericNamespaceLister {
informer := ctx.getGenericInformer(groupResource, obj.GroupVersionKind().Version, namespace)
if namespace == "" {
return informer.Lister()
} else {
return informer.Lister().ByNamespace(namespace)
return informer.Lister(), nil
}
return informer.Lister().ByNamespace(namespace), nil
}
func getResourceID(groupResource schema.GroupResource, namespace, name string) string {
@@ -1113,7 +1091,10 @@ func getResourceID(groupResource schema.GroupResource, namespace, name string) s
}
func (ctx *restoreContext) getResource(groupResource schema.GroupResource, obj *unstructured.Unstructured, namespace, name string) (*unstructured.Unstructured, error) {
lister := ctx.getResourceLister(groupResource, obj, namespace)
lister, err := ctx.getResourceLister(groupResource, obj, namespace)
if err != nil {
return nil, errors.Wrapf(err, "Error getting lister for %s", getResourceID(groupResource, namespace, name))
}
clusterObj, err := lister.Get(name)
if err != nil {
return nil, errors.Wrapf(err, "error getting resource from lister for %s, %s/%s", groupResource, namespace, name)
@@ -1123,6 +1104,7 @@ func (ctx *restoreContext) getResource(groupResource schema.GroupResource, obj *
ctx.log.WithError(errors.WithStack(fmt.Errorf("expected *unstructured.Unstructured but got %T", u))).Error("unable to understand entry returned from client")
return nil, fmt.Errorf("expected *unstructured.Unstructured but got %T", u)
}
ctx.log.Debugf("get %s, %s/%s from informer cache", groupResource, namespace, name)
return u, nil
}
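The refactor above collapses the per-namespace factory and informer maps into one shared, cluster-scoped factory, and syncs an informer on demand when it was created after the bulk sync (e.g. for a CRD restored earlier in the same run). A hedged sketch of the resulting lister path (`listerFor` is an illustrative name, not the restore context's method):

```go
package main

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
)

// listerFor returns a lister for gvr, starting and syncing its informer
// first if it was created after the factory's initial sync.
func listerFor(
	factory dynamicinformer.DynamicSharedInformerFactory,
	gvr schema.GroupVersionResource,
	namespace string,
	stop <-chan struct{},
) cache.GenericNamespaceLister {
	informer := factory.ForResource(gvr)
	if !informer.Informer().HasSynced() {
		factory.Start(stop) // idempotent: already-started informers are skipped
		factory.WaitForCacheSync(stop)
	}
	if namespace == "" {
		return informer.Lister()
	}
	return informer.Lister().ByNamespace(namespace)
}

func main() {}
```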

View File

@@ -19,9 +19,6 @@ package test
import (
"strings"
"golang.org/x/text/cases"
"golang.org/x/text/language"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/discovery"
discoveryfake "k8s.io/client-go/discovery/fake"
@@ -76,7 +73,7 @@ func (c *DiscoveryClient) WithAPIResource(resource *APIResource) *DiscoveryClien
Namespaced: resource.Namespaced,
Group: resource.Group,
Version: resource.Version,
Kind: cases.Title(language.Und).String(strings.TrimSuffix(resource.Name, "s")),
Kind: resource.Kind,
Verbs: metav1.Verbs([]string{"list", "create", "get", "delete"}),
ShortNames: []string{resource.ShortName},
})
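The test helper previously derived the Kind by title-casing the singular resource name, which the new explicit `Kind` field replaces. This runnable snippet shows why that derivation was unreliable:

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/text/cases"
	"golang.org/x/text/language"
)

// derivedKind reproduces the removed logic: strip a trailing "s" and title-case.
func derivedKind(resource string) string {
	return cases.Title(language.Und).String(strings.TrimSuffix(resource, "s"))
}

func main() {
	fmt.Println(derivedKind("pods"))                   // Pod (correct)
	fmt.Println(derivedKind("persistentvolumeclaims")) // Persistentvolumeclaim (should be PersistentVolumeClaim)
}
```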

View File

@@ -38,8 +38,8 @@ func (df *FakeDynamicFactory) ClientForGroupVersionResource(gv schema.GroupVersi
return args.Get(0).(client.Dynamic), args.Error(1)
}
func (df *FakeDynamicFactory) DynamicSharedInformerFactoryForNamespace(namespace string) dynamicinformer.DynamicSharedInformerFactory {
args := df.Called(namespace)
func (df *FakeDynamicFactory) DynamicSharedInformerFactory() dynamicinformer.DynamicSharedInformerFactory {
args := df.Called()
return args.Get(0).(dynamicinformer.DynamicSharedInformerFactory)
}

View File

@@ -27,6 +27,7 @@ type APIResource struct {
Group string
Version string
Name string
Kind string
ShortName string
Namespaced bool
Items []metav1.Object
@@ -50,6 +51,7 @@ func Pods(items ...metav1.Object) *APIResource {
ShortName: "po",
Namespaced: true,
Items: items,
Kind: "Pod",
}
}
@@ -59,6 +61,7 @@ func PVCs(items ...metav1.Object) *APIResource {
Version: "v1",
Name: "persistentvolumeclaims",
ShortName: "pvc",
Kind: "PersistentVolumeClaim",
Namespaced: true,
Items: items,
}
@@ -70,6 +73,7 @@ func PVs(items ...metav1.Object) *APIResource {
Version: "v1",
Name: "persistentvolumes",
ShortName: "pv",
Kind: "PersistentVolume",
Namespaced: false,
Items: items,
}
@@ -81,6 +85,7 @@ func Secrets(items ...metav1.Object) *APIResource {
Version: "v1",
Name: "secrets",
ShortName: "secrets",
Kind: "Secret",
Namespaced: true,
Items: items,
}
@@ -92,6 +97,7 @@ func Deployments(items ...metav1.Object) *APIResource {
Version: "v1",
Name: "deployments",
ShortName: "deploy",
Kind: "Deployment",
Namespaced: true,
Items: items,
}
@@ -103,6 +109,7 @@ func ExtensionsDeployments(items ...metav1.Object) *APIResource {
Version: "v1",
Name: "deployments",
ShortName: "deploy",
Kind: "Deployment",
Namespaced: true,
Items: items,
}
@@ -115,6 +122,7 @@ func VeleroDeployments(items ...metav1.Object) *APIResource {
Version: "v1",
Name: "deployments",
ShortName: "deploy",
Kind: "Deployment",
Namespaced: true,
Items: items,
}
@@ -126,6 +134,7 @@ func Namespaces(items ...metav1.Object) *APIResource {
Version: "v1",
Name: "namespaces",
ShortName: "ns",
Kind: "Namespace",
Namespaced: false,
Items: items,
}
@@ -137,6 +146,7 @@ func ServiceAccounts(items ...metav1.Object) *APIResource {
Version: "v1",
Name: "serviceaccounts",
ShortName: "sa",
Kind: "ServiceAccount",
Namespaced: true,
Items: items,
}
@@ -148,6 +158,7 @@ func ConfigMaps(items ...metav1.Object) *APIResource {
Version: "v1",
Name: "configmaps",
ShortName: "cm",
Kind: "ConfigMap",
Namespaced: true,
Items: items,
}
@@ -159,6 +170,7 @@ func CRDs(items ...metav1.Object) *APIResource {
Version: "v1beta1",
Name: "customresourcedefinitions",
ShortName: "crd",
Kind: "CustomResourceDefinition",
Namespaced: false,
Items: items,
}
@@ -169,6 +181,7 @@ func VSLs(items ...metav1.Object) *APIResource {
Group: "velero.io",
Version: "v1",
Name: "volumesnapshotlocations",
Kind: "VolumeSnapshotLocation",
Namespaced: true,
Items: items,
}
@@ -179,6 +192,7 @@ func Backups(items ...metav1.Object) *APIResource {
Group: "velero.io",
Version: "v1",
Name: "backups",
Kind: "Backup",
Namespaced: true,
Items: items,
}
@@ -190,6 +204,7 @@ func Services(items ...metav1.Object) *APIResource {
Version: "v1",
Name: "services",
ShortName: "svc",
Kind: "Service",
Namespaced: true,
Items: items,
}
@@ -200,6 +215,7 @@ func DataUploads(items ...metav1.Object) *APIResource {
Group: "velero.io",
Version: "v2alpha1",
Name: "datauploads",
Kind: "DataUpload",
Namespaced: true,
Items: items,
}

View File

@@ -62,6 +62,10 @@ func LoadCredentials(config map[string]string) (map[string]string, error) {
credFile = config[credentialFile]
}
if len(credFile) == 0 {
return map[string]string{}, nil
}
// put the credential file content into a map
creds, err := godotenv.Read(credFile)
if err != nil {
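With this change, an unset credential file yields an empty map instead of an error. A hedged, self-contained sketch of the resulting behavior (`loadCredentials` mirrors, but is not, Velero's function; `github.com/joho/godotenv` is the parser the diff itself uses):

```go
package main

import (
	"fmt"

	"github.com/joho/godotenv"
)

func loadCredentials(config map[string]string) (map[string]string, error) {
	credFile := config["credentialsFile"]
	if len(credFile) == 0 {
		return map[string]string{}, nil // no file configured: not an error anymore
	}
	return godotenv.Read(credFile) // parse KEY=value pairs into a map
}

func main() {
	creds, err := loadCredentials(nil)
	fmt.Println(len(creds), err) // 0 <nil>
}
```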

View File

@@ -28,8 +28,9 @@ import (
func TestLoadCredentials(t *testing.T) {
// no credential file
_, err := LoadCredentials(nil)
require.NotNil(t, err)
credentials, err := LoadCredentials(nil)
require.Nil(t, err)
assert.NotNil(t, credentials)
// specified credential file in the config
name := filepath.Join(os.TempDir(), "credential")
@@ -43,7 +44,7 @@ func TestLoadCredentials(t *testing.T) {
config := map[string]string{
"credentialsFile": name,
}
credentials, err := LoadCredentials(config)
credentials, err = LoadCredentials(config)
require.Nil(t, err)
assert.Equal(t, "value", credentials["key"])

View File

@@ -26,10 +26,8 @@ import (
type MapUpdateFunc func(client.Object) []reconcile.Request
// EnqueueRequestsFromMapUpdateFunc is for the same purpose with EnqueueRequestsFromMapFunc.
// Merely, it is more friendly to updating the mapped objects in the MapUpdateFunc, because
// on Update event, MapUpdateFunc is called for only once with the new object, so if MapUpdateFunc
// does some update to the mapped objects, the update is done for once
// EnqueueRequestsFromMapUpdateFunc has the same purpose with handler.EnqueueRequestsFromMapFunc.
// MapUpdateFunc is simpler on Update event because mapAndEnqueue is called once with the new object. EnqueueRequestsFromMapFunc is called twice with the old and new object.
func EnqueueRequestsFromMapUpdateFunc(fn MapUpdateFunc) handler.EventHandler {
return &enqueueRequestsFromMapFunc{
toRequests: fn,

View File

@@ -81,11 +81,16 @@ These instructions start the Velero server and a Minio instance that is accessible
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
```
This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`). You may need to update AWS plugin version to one that is [compatible](https://github.com/vmware-tanzu/velero-plugin-for-aws#compatibility) with the version of Velero you are installing.
* This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`). You may need to update AWS plugin version to one that is [compatible](https://github.com/vmware-tanzu/velero-plugin-for-aws#compatibility) with the version of Velero you are installing.
Additionally, you can specify `--use-node-agent` to enable File System Backup support, and `--wait` to wait for the deployment to be ready.
* Additionally, you can specify `--use-node-agent` to enable File System Backup support, and `--wait` to wait for the deployment to be ready.
This example also assumes you have named your Minio bucket "velero".
* This example also assumes you have named your Minio bucket "velero".
* Please make sure to set the parameter `s3ForcePathStyle=true`. This parameter controls the addressing style that the AWS SDK integrated into Velero uses for data queries. There are two address styles: [virtual-host and path-style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html). If `s3ForcePathStyle=true` is not set, it defaults to false and the AWS SDK queries in virtual-host style, but the MinIO server only supports path-style addressing by default. The mismatch means Velero can upload data to MinIO but **cannot download from MinIO**. See [issue #7268](https://github.com/vmware-tanzu/velero/issues/7268) for an example of this problem.
It can be resolved in two ways:
* Set `s3ForcePathStyle=true` in the `--backup-location-config` parameter when installing Velero. This is the preferred way.
* Make the MinIO server support virtual-host-style addressing by adding the [MINIO_DOMAIN environment variable](https://min.io/docs/minio/linux/reference/minio-server/settings/core.html#id5) to the MinIO server.
1. Deploy the example nginx application:

View File

@@ -81,11 +81,16 @@ These instructions start the Velero server and a Minio instance that is accessible
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000
```
This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`). You may need to update AWS plugin version to one that is [compatible](https://github.com/vmware-tanzu/velero-plugin-for-aws#compatibility) with the version of Velero you are installing.
* This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created (`--use-volume-snapshots=false`). You may need to update AWS plugin version to one that is [compatible](https://github.com/vmware-tanzu/velero-plugin-for-aws#compatibility) with the version of Velero you are installing.
Additionally, you can specify `--use-node-agent` to enable File System Backup support, and `--wait` to wait for the deployment to be ready.
* Additionally, you can specify `--use-node-agent` to enable File System Backup support, and `--wait` to wait for the deployment to be ready.
This example also assumes you have named your Minio bucket "velero".
* This example also assumes you have named your Minio bucket "velero".
* Please make sure to set the parameter `s3ForcePathStyle=true`. This parameter controls the addressing style that the AWS SDK integrated into Velero uses for data queries. There are two address styles: [virtual-host and path-style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html). If `s3ForcePathStyle=true` is not set, it defaults to false and the AWS SDK queries in virtual-host style, but the MinIO server only supports path-style addressing by default. The mismatch means Velero can upload data to MinIO but **cannot download from MinIO**. See [issue #7268](https://github.com/vmware-tanzu/velero/issues/7268) for an example of this problem.
It can be resolved in two ways:
* Set `s3ForcePathStyle=true` in the `--backup-location-config` parameter when installing Velero. This is the preferred way.
* Make the MinIO server support virtual-host-style addressing by adding the [MINIO_DOMAIN environment variable](https://min.io/docs/minio/linux/reference/minio-server/settings/core.html#id5) to the MinIO server.
1. Deploy the example nginx application:

View File

@@ -54,7 +54,7 @@ VELERO_IMAGE ?= velero/velero:main
PLUGINS ?=
RESTORE_HELPER_IMAGE ?=
#Released version only
UPGRADE_FROM_VELERO_VERSION ?= v1.10.2,v1.11.0
UPGRADE_FROM_VELERO_VERSION ?= v1.11.0,v1.12.3
# UPGRADE_FROM_VELERO_CLI can have the same format (a comma-separated list) as UPGRADE_FROM_VELERO_VERSION
# Upgrade tests will be executed sequentially according to the list in UPGRADE_FROM_VELERO_VERSION
# So although the length of the UPGRADE_FROM_VELERO_CLI list need not equal that of UPGRADE_FROM_VELERO_VERSION
@@ -62,7 +62,7 @@ UPGRADE_FROM_VELERO_VERSION ?= v1.10.2,v1.11.0
# to the end, an empty string will be set if UPGRADE_FROM_VELERO_CLI is shorter than UPGRADE_FROM_VELERO_VERSION
UPGRADE_FROM_VELERO_CLI ?=
MIGRATE_FROM_VELERO_VERSION ?= v1.11.0,self
MIGRATE_FROM_VELERO_VERSION ?= v1.12.3,self
MIGRATE_FROM_VELERO_CLI ?=
VELERO_NAMESPACE ?= velero