Fix various typos found by codespell (#3057)

By running the following command:

codespell -S .git,*.png,*.jpg,*.woff,*.ttf,*.gif,*.ico -L \
iam,aks,ist,bridget,ue

Signed-off-by: Mateusz Gozdek <mgozdekof@gmail.com>
Author: Mateusz Gozdek <mgozdekof@gmail.com>
Date: 2020-11-10 17:48:35 +01:00
Committed by: GitHub
Parent: dc6762a895
Commit: dbc83af77b
82 changed files with 117 additions and 116 deletions


@@ -52,7 +52,7 @@ spec:
           of a Velero VolumeSnapshotLocation.
         properties:
           phase:
-            description: VolumeSnapshotLocationPhase is the lifecyle phase of
+            description: VolumeSnapshotLocationPhase is the lifecycle phase of
              a Velero VolumeSnapshotLocation.
             enum:
             - Available


@@ -84,7 +84,7 @@ If the metadata file does not exist, this is an older backup and we cannot displ
 ### Fetch backup contents archive and walkthrough to list contents
-Instead of recording new metadata about what resources have been backed up, we could simply download the backup contents archive and walkthrough it to list the contents everytime `velero backup describe <name> --details` is run.
+Instead of recording new metadata about what resources have been backed up, we could simply download the backup contents archive and walkthrough it to list the contents every time `velero backup describe <name> --details` is run.
 The advantage of this approach is that we don't need to change any backup procedures as we already have this content, and we will also be able to list resources for older backups.
 Additionally, if we wanted to expose more information about the backed up resources, we can do so without having to update what we store in the metadata.
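
For a sense of what the "walk the archive" alternative involves, here is a minimal sketch, assuming the backup is a gzipped tarball whose entry paths encode resource type, namespace, and name (the `resources/<resource>/namespaces/<ns>/<name>.json` layout); the `backup.tar.gz` filename and the `resources/` prefix filter are illustrative assumptions, not the design's final code.

```go
// Sketch: list the contents of a downloaded backup archive by walking
// its tar entries, assuming the resources/... path layout noted above.
package main

import (
	"archive/tar"
	"compress/gzip"
	"fmt"
	"io"
	"os"
	"strings"
)

func listBackupContents(r io.Reader) error {
	gz, err := gzip.NewReader(r)
	if err != nil {
		return err
	}
	defer gz.Close()

	tr := tar.NewReader(gz)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break // end of archive
		}
		if err != nil {
			return err
		}
		// Only regular files under resources/ describe backed-up items.
		if hdr.Typeflag == tar.TypeReg && strings.HasPrefix(hdr.Name, "resources/") {
			fmt.Println(strings.TrimSuffix(hdr.Name, ".json"))
		}
	}
	return nil
}

func main() {
	f, err := os.Open("backup.tar.gz") // the downloaded backup contents archive
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := listBackupContents(f); err != nil {
		panic(err)
	}
}
```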


@@ -176,7 +176,7 @@ This will allow the development to continue on the feature while it's in pre-pro
 [`BackupStore.PutBackup`][9] will receive an additional argument, `volumeSnapshots io.Reader`, that contains the JSON representation of `VolumeSnapshots`.
 This will be written to a file named `csi-snapshots.json.gz`.
-[`defaultRestorePriorities`][11] should be rewritten to the following to accomodate proper association between the CSI objects and PVCs. `CustomResourceDefinition`s are moved up because they're necessary for creating the CSI CRDs. The CSI CRDs are created before `PersistentVolume`s and `PersistentVolumeClaim`s so that they may be used as data sources.
+[`defaultRestorePriorities`][11] should be rewritten to the following to accommodate proper association between the CSI objects and PVCs. `CustomResourceDefinition`s are moved up because they're necessary for creating the CSI CRDs. The CSI CRDs are created before `PersistentVolume`s and `PersistentVolumeClaim`s so that they may be used as data sources.
 GitHub issue [1565][17] represents this work.
 ```go
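// The hunk above truncates at its opening fence, so the actual list is
// not shown here. The ordering below is an illustrative reconstruction
// implied by the prose (CRDs first, CSI snapshot resources before PVs
// and PVCs so they can serve as data sources); the real list in the
// design document may differ.
var defaultRestorePriorities = []string{
	"customresourcedefinitions",
	"namespaces",
	"storageclasses",
	"volumesnapshotclass.snapshot.storage.k8s.io",
	"volumesnapshotcontents.snapshot.storage.k8s.io",
	"volumesnapshots.snapshot.storage.k8s.io",
	"persistentvolumes",
	"persistentvolumeclaims",
	"secrets",
	"configmaps",
	"serviceaccounts",
	"limitranges",
	"pods",
	"replicasets",
}
```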
@@ -248,7 +248,7 @@ Volumes with any other `PersistentVolumeSource` set will use Velero's current Vo
 ### VolumeSnapshotLocations and VolumeSnapshotClasses
 Velero uses its own `VolumeSnapshotLocation` CRDs to specify configuration options for a given storage system.
-In Velero, this often includes topology information such as regions or availibility zones, as well as credential information.
+In Velero, this often includes topology information such as regions or availability zones, as well as credential information.
 CSI volume snapshotting has a `VolumeSnapshotClass` CRD which also contains configuration options for a given storage system, but these options are not the same as those that Velero would use.
 Since CSI volume snapshotting is operating within the same storage system that manages the volumes already, it does not need the same topology or credential information that Velero does.
@@ -269,7 +269,7 @@ Additionally, the VolumeSnapshotter plugins and CSI volume snapshot drivers over
 Thus, there's not a logical place to fit the creation of VolumeSnapshot creation in the VolumeSnapshotter interface.
 * Implement CSI logic directly in Velero core code.
-The plugins could be packaged separately, but that doesn't necessarily make sense with server and client changes being made to accomodate CSI snapshot lookup.
+The plugins could be packaged separately, but that doesn't necessarily make sense with server and client changes being made to accommodate CSI snapshot lookup.
 * Implementing the CSI logic entirely in external plugins.
 As mentioned above, the necessary plugins for `PersistentVolumeClaim`, `VolumeSnapshot`, and `VolumeSnapshotContent` could be hosted out-out-of-tree from Velero.


@@ -19,7 +19,7 @@ This design seeks to provide the missing extension point.
 ## Non Goals
-- Specific implementations of hte DeleteItemAction API beyond test cases.
+- Specific implementations of the DeleteItemAction API beyond test cases.
 - Rollback of DeleteItemAction execution.
 ## High-Level Design
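
For orientation, here is a hedged sketch of what the extension point under discussion could look like; the type and field names (`ResourceSelector`, `DeleteItemActionExecuteInput`) are assumptions for illustration, not the final API.

```go
// Sketch of a plugin interface invoked for matching items when a
// backup is deleted; names are illustrative assumptions.
package framework

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// ResourceSelector limits which items an action is invoked for.
type ResourceSelector struct {
	IncludedResources []string
	LabelSelector     string
}

// DeleteItemActionExecuteInput carries the item being deleted.
type DeleteItemActionExecuteInput struct {
	Item *unstructured.Unstructured
}

// DeleteItemAction is the extension point: plugins say what they apply
// to, and are executed once per matching item in the backup.
type DeleteItemAction interface {
	AppliesTo() (ResourceSelector, error)
	Execute(input *DeleteItemActionExecuteInput) error
}
```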


@@ -45,7 +45,7 @@ Currently, the Velero repository sits under the Heptio GitHub organization. With
 ### Notes/How-Tos
-#### Transfering the GH repository
+#### Transferring the GH repository
 All action items needed for the repo transfer are listed in the Todo list above. For details about what gets moved and other info, this is the GH documentation: https://help.github.com/en/articles/transferring-a-repository
@@ -57,7 +57,7 @@ Someone with owner permission on the new repository needs to go to their Travis
 After this, webhook notifications can be added following these instructions: https://docs.travis-ci.com/user/notifications/#configuring-webhook-notifications.
-#### Transfering ZenHub
+#### Transferring ZenHub
 Pre-requisite: A new Zenhub account must exist for a vmware or vmware-tanzu organization.


@@ -413,7 +413,7 @@ However, it can provide preference over latest supported API.
 If new fields are added without changing API version, it won't cause any problem as these resources are intended to provide information, and, there is no reconciliation on these resources.
 ### Compatibility of latest plugin with older version of Velero
-Plugin that supports this CR should handle the situation gracefully when CRDs are not installed. It can handle the errors occured during creation/updation of the CRs.
+Plugin that supports this CR should handle the situation gracefully when CRDs are not installed. It can handle the errors occurred during creation/updation of the CRs.
 ## Limitations:
@@ -432,7 +432,7 @@ But, this involves good amount of changes and needs a way for backward compatibi
 As volume plugins are mostly K8s native, its fine to go ahead with current limiation.
 ### Update Backup CR
-Instead of creating new CRs, plugins can directly update the status of Backup CR. But, this deviates from current approach of having seperate CRs like PodVolumeBackup/PodVolumeRestore to know operations progress.
+Instead of creating new CRs, plugins can directly update the status of Backup CR. But, this deviates from current approach of having separate CRs like PodVolumeBackup/PodVolumeRestore to know operations progress.
 ### Restricting on name rather than using labels
 Instead of using labels to identify the CR related to particular backup on a volume, restrictions can be placed on the name of VolumePluginBackup CR to be same as the value returned from CreateSnapshot.
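
To make the label-vs-name trade-off concrete, here is a hedged sketch of the label-based lookup using the Kubernetes dynamic client; the `volumepluginbackups` resource name, the `velero.io/backup-name` label key, and the `velero` namespace are illustrative assumptions.

```go
// Sketch: find the VolumePluginBackup CRs associated with a backup by
// label selector, under the naming assumptions noted above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical GVR and label key for the proposed VolumePluginBackup CR.
	gvr := schema.GroupVersionResource{Group: "velero.io", Version: "v1", Resource: "volumepluginbackups"}
	list, err := dyn.Resource(gvr).Namespace("velero").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "velero.io/backup-name=backup-01",
	})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println(item.GetName())
	}
}
```

With the name-based restriction instead, this list-by-label becomes a direct `Get` on the name returned by `CreateSnapshot`, at the cost of coupling CR names to snapshot IDs.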


@@ -63,7 +63,7 @@ With the `--json` flag, `restic backup` outputs single lines of JSON reporting t
 The [command factory for backup](https://github.com/heptio/velero/blob/af4b9373fc73047f843cd4bc3648603d780c8b74/pkg/restic/command_factory.go#L37) will be updated to include the `--json` flag.
 The code to run the `restic backup` command (https://github.com/heptio/velero/blob/af4b9373fc73047f843cd4bc3648603d780c8b74/pkg/controller/pod_volume_backup_controller.go#L241) will be changed to include a Goroutine that reads from the command's stdout stream.
 The implementation of this will largely follow [@jmontleon's PoC](https://github.com/fusor/velero/pull/4/files) of this.
-The Goroutine will periodically read the stream (every 10 seconds) and get the last printed status line, which will be convered to JSON.
+The Goroutine will periodically read the stream (every 10 seconds) and get the last printed status line, which will be converted to JSON.
 If `bytes_done` is empty, restic has not finished scanning the volume and hasn't calculated the `total_bytes`.
 In this case, we will not update the PodVolumeBackup and instead will wait for the next iteration.
 Once we get a non-zero value for `bytes_done`, the `bytes_done` and `total_bytes` properties will be read and the PodVolumeBackup will be patched to update `status.Progress.BytesDone` and `status.Progress.TotalBytes` respectively.
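
A hedged sketch of the loop this hunk describes, assuming restic's `--json` status lines carry `message_type`, `bytes_done`, and `total_bytes` fields; the `updateProgress` helper stands in for the PodVolumeBackup status patch and is hypothetical.

```go
// Sketch: tail restic's stdout, keep only the most recent line, and
// every 10 seconds decode it and report progress.
package progress

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io"
	"sync"
	"time"
)

type resticStatus struct {
	MessageType string `json:"message_type"`
	BytesDone   int64  `json:"bytes_done"`
	TotalBytes  int64  `json:"total_bytes"`
}

func monitorProgress(stdout io.Reader, done <-chan struct{}) {
	var mu sync.Mutex
	var lastLine []byte

	// Tail stdout, remembering only the most recently printed line.
	go func() {
		scanner := bufio.NewScanner(stdout)
		for scanner.Scan() {
			mu.Lock()
			lastLine = append(lastLine[:0], scanner.Bytes()...)
			mu.Unlock()
		}
	}()

	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			mu.Lock()
			line := append([]byte(nil), lastLine...)
			mu.Unlock()

			var s resticStatus
			if err := json.Unmarshal(line, &s); err != nil || s.MessageType != "status" {
				continue
			}
			// Until restic finishes scanning, bytes_done is zero; skip the
			// update and wait for the next tick, as the design describes.
			if s.BytesDone == 0 {
				continue
			}
			updateProgress(s.BytesDone, s.TotalBytes)
		}
	}
}

// updateProgress stands in for patching status.Progress on the PodVolumeBackup.
func updateProgress(bytesDone, totalBytes int64) {
	fmt.Printf("progress: %d/%d bytes\n", bytesDone, totalBytes)
}
```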