Compare commits


189 Commits

Author SHA1 Message Date
Nolan Brubaker
39bab5ada9 Merge pull request #1372 from skriss/v1.0.0-alpha.1-changelog
v1.0.0-alpha.1 changelog
2019-04-15 17:48:16 -04:00
Steve Kriss
316e6cc67e v1.0.0-alpha.1 changelog
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 15:41:21 -06:00
Nolan Brubaker
6f474016a6 Add velero install command (#1287)
Add velero install command

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2019-04-15 14:10:11 -07:00
Steve Kriss
bc8f07f963 Azure: add support for loading env vars from a file, $AZURE_CREDENTIALS_FILE (#1364)
* azure: load env vars from AZURE_CREDENTIALS_FILE if it exists

Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 14:05:13 -07:00
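What this adds is straightforward: if `$AZURE_CREDENTIALS_FILE` points at an existing file of `KEY=value` pairs, those pairs are loaded into the environment before the Azure clients read them. A minimal Go sketch of that behavior, assuming the `github.com/joho/godotenv` package that shows up in this compare's dependency changes (the function name is illustrative, not necessarily Velero's):

```go
package azure

import (
	"os"

	"github.com/joho/godotenv"
	"github.com/pkg/errors"
)

// loadAzureCredentials loads KEY=value pairs from the file named by
// $AZURE_CREDENTIALS_FILE into the environment, if the variable is set.
func loadAzureCredentials() error {
	credentialsFile := os.Getenv("AZURE_CREDENTIALS_FILE")
	if credentialsFile == "" {
		// Nothing to load; the AZURE_* env vars may already be set directly.
		return nil
	}

	// godotenv.Overload reads the file and sets each variable, overriding
	// any value already present in the environment.
	if err := godotenv.Overload(credentialsFile); err != nil {
		return errors.Wrapf(err, "error loading environment from %s", credentialsFile)
	}
	return nil
}
```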
Nolan Brubaker
9470983d5f Merge pull request #1365 from skriss/update-base-images
switch to debian:stretch-slim base images and go 1.12.x
2019-04-15 16:24:17 -04:00
Nolan Brubaker
94f014101d Merge pull request #1323 from skriss/v1.0-removals
v1.0 removals
2019-04-15 16:24:02 -04:00
Nolan Brubaker
c38def0849 Merge pull request #1370 from skriss/install-fixes
add some missing config to pkg/install daemonset, deployment
2019-04-15 15:45:52 -04:00
Steve Kriss
66c6d7a026 add some missing config to pkg/install daemonset, deployment
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 13:12:35 -06:00
Nolan Brubaker
9b9b4f666e Merge pull request #1369 from skriss/update-daemonset-log
update daemonset log to show version and SHA
2019-04-15 13:37:59 -04:00
Steve Kriss
373e4c9abe update daemonset log to show version and SHA
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 11:27:04 -06:00
Steve Kriss
ce374584c4 changelog
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:17:41 -06:00
Steve Kriss
c59544cb79 remove backup.status.volumeBackups and all related code
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:17:40 -06:00
Steve Kriss
6ed4e1f147 remove legacy metrics
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:17:03 -06:00
Steve Kriss
b04d6b02f3 remove support for legacy restic annotations
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:17:03 -06:00
Steve Kriss
7f36f78aee remove code that removes legacy GC finalizer from backups
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:17:03 -06:00
Steve Kriss
892673816b remove legacy restore label
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:17:03 -06:00
Steve Kriss
c8c03a38e9 remove support for legacy Azure snapshot ID format
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:17:03 -06:00
Steve Kriss
ede9a8f5b4 remove support for legacy client config file
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:17:03 -06:00
Steve Kriss
b87de94723 remove legacy hook annotation support
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:17:03 -06:00
Steve Kriss
77e648eafa remove Ark field from RestoreResult
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:17:02 -06:00
Steve Kriss
d49008dec0 remove Ark API pkg and generated code
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:15:18 -06:00
Steve Kriss
b03da3c0ed remove code referencing Ark API pkg
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:15:18 -06:00
Steve Kriss
49cb4cd5c3 switch to debian:stretch-slim base images and go 1.12.x
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-15 10:07:23 -06:00
Steve Kriss
3ed97db550 Merge pull request #1362 from nrb/include-resources-examples
Add example for restoring with --include-resources
2019-04-12 14:45:15 -06:00
Steve Kriss
02cbb77dea Merge pull request #1366 from nrb/fix-1363
Prepend velero.io to non-namespaced names
2019-04-12 14:43:17 -06:00
Nolan Brubaker
d679498c8a Add example for restoring with --include-resources
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2019-04-12 16:36:39 -04:00
Nolan Brubaker
c326f59627 Prepend velero.io to non-namespaced names
Prior to 1.0, user-provided values such as cloud provider names didn't
need to be namespaced. This change allows those values to continue
working without the user editing the fields manually.

Fixes #1363

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2019-04-12 16:31:16 -04:00
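A sketch of the idea in Go (the helper name is hypothetical, not necessarily the PR's): any user-provided name without a `/` is treated as un-namespaced and gets the `velero.io/` prefix, so a pre-1.0 value like `aws` resolves to `velero.io/aws`.

```go
package plugin

import "strings"

// fullyQualifiedName maps a possibly un-namespaced, user-provided plugin
// name (e.g. "aws" from a pre-1.0 config) to its namespaced form.
func fullyQualifiedName(name string) string {
	if strings.Contains(name, "/") {
		// Already namespaced, e.g. "example.io/custom-provider".
		return name
	}
	return "velero.io/" + name
}
```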
Nolan Brubaker
bc93b2bbac Merge pull request #1358 from skriss/restore-log-fix
instantiate plugin manager with per-restore logger so plugin logs are captured
2019-04-12 12:47:11 -04:00
Steve Kriss
3116185e5b instantiate plugin manager with per-restore logger so plugin logs are captured
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-12 10:36:20 -06:00
KubeKween
abee09aa2d Add validation for plugin name format and dups (#1339)
* Add validation for plugin name format and dups

Signed-off-by: Carlisia <carlisiac@vmware.com>

* Bug fix

Signed-off-by: Carlisia <carlisiac@vmware.com>

* Add changelog

Signed-off-by: Carlisia <carlisiac@vmware.com>

* Address code reviews

Signed-off-by: Carlisia <carlisiac@vmware.com>

* Fix code

Signed-off-by: Carlisia <carlisiac@vmware.com>

* Address code reviews

Signed-off-by: Carlisia <carlisiac@vmware.com>

* Add documentation

Signed-off-by: Carlisia <carlisiac@vmware.com>

* Update godoc

Signed-off-by: Carlisia <carlisiac@vmware.com>

* More doc

Signed-off-by: Carlisia <carlisiac@vmware.com>
2019-04-12 08:25:04 -06:00
Aman Wangde
0e0f357cef Added ability to disable controllers (#1326)
Signed-off-by: James King <james.king@emc.com>
2019-04-10 15:41:28 -04:00
Steve Kriss
23c0d3f612 Merge pull request #1352 from rohandvora/default-backup-ttl
Set default backup TTL
2019-04-09 15:34:32 -06:00
Rohan Vora
4beb8aab3c Set default backup TTL
Set default backup TTL to 30 days when TTL
is not provided in the backup yaml configuration.

Updates #138

Signed-off-by: Rohan Vora <vorar@vmware.com>
2019-04-09 14:13:29 -07:00
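A sketch of the defaulting step, assuming the TTL field is a `metav1.Duration` whose zero value means "not provided" (consistent with the description above; the function name is illustrative):

```go
package controller

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultBackupTTL is applied when the backup spec does not set a TTL.
const defaultBackupTTL = 30 * 24 * time.Hour

// applyTTLDefault fills in the 30-day default when spec.ttl was omitted
// from the backup YAML (which leaves the Duration at its zero value).
func applyTTLDefault(ttl *metav1.Duration) {
	if ttl.Duration == 0 {
		ttl.Duration = defaultBackupTTL
	}
}
```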
Imran Pochi
b444d3c2f1 Update Restic documentation for RancherOS (#1348)
Signed-off-by: Imran Pochi <pochiimran@yahoo.co.in>
2019-04-09 11:22:50 -07:00
KubeKween
13eaad0e64 Refactor protobuf (#1354)
* Update protobuffs

Signed-off-by: Carlisia <carlisiac@vmware.com>
2019-04-09 13:50:05 -04:00
Steve Kriss
956152d6e1 Merge pull request #1355 from nrb/backup-examples
Add examples to backup create command
2019-04-09 08:15:00 -06:00
Nolan Brubaker
bca21a1ec0 Add examples to backup create command
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2019-04-05 14:40:05 -04:00
Steve Kriss
2f47ca62ad always allow 'bucket' as a config key for object stores (#1349)
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-05 11:24:55 -07:00
sseago
a519547efc Adds support for allowing a RestoreItemAction to skip item restore (#1336)
* Adds support for allowing a RestoreItemAction to skip item restore

This allows a RestoreItemAction plugin to signal to velero that
the returned item should be skipped rather than restored to the
cluster.

To support this, a boolean SkipRestore attribute is added to
RestoreItemActionExecuteOutput. If restore.restoreResource finds
this set to true, any remaining actions on this item are skipped,
and restore on this item is skipped. Execution continues with
the next item of this resource type.

To signal this for a particular item, the RestoreItemAction's
Execute method should call WithoutRestore() on the
RestoreItemActionExecuteOutput before returning it.

Signed-off-by: Scott Seago <sseago@redhat.com>

* Autogenerated code to support SkipRestore

Signed-off-by: Scott Seago <sseago@redhat.com>

* Added changelog for #1336

Signed-off-by: Scott Seago <sseago@redhat.com>
2019-04-04 13:39:54 -06:00
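A sketch of a `RestoreItemAction` using this mechanism, assuming the v1.0-era plugin package layout under `github.com/heptio/velero/pkg/plugin/velero`; the skip predicate is a placeholder:

```go
package example

import (
	"github.com/heptio/velero/pkg/plugin/velero"
)

// SkipExampleAction demonstrates signaling velero to skip restoring an item.
type SkipExampleAction struct{}

func (a *SkipExampleAction) AppliesTo() (velero.ResourceSelector, error) {
	return velero.ResourceSelector{IncludedResources: []string{"pods"}}, nil
}

func (a *SkipExampleAction) Execute(input *velero.RestoreItemActionExecuteInput) (*velero.RestoreItemActionExecuteOutput, error) {
	if shouldSkip(input) {
		// WithoutRestore sets SkipRestore on the output, telling velero to
		// skip this item (and any remaining actions for it).
		return velero.NewRestoreItemActionExecuteOutput(input.Item).WithoutRestore(), nil
	}
	return velero.NewRestoreItemActionExecuteOutput(input.Item), nil
}

// shouldSkip is a placeholder predicate for illustration.
func shouldSkip(input *velero.RestoreItemActionExecuteInput) bool { return false }
```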
Fábio Franco Uechi
0167539a14 add new counter metrics for backup deletion (#1280)
* compute backup deletion metrics (attempt, success, fail)

Signed-off-by: fabito <fuechi@ciandt.com>
2019-04-04 14:25:59 -04:00
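The three counters behind that changelog line, sketched with the `prometheus/client_golang` API already in the dependency list; the metric and label names here are illustrative, not necessarily the PR's:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	deletionAttemptTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "velero_backup_deletion_attempt_total", Help: "Total number of attempted backup deletions"},
		[]string{"schedule"},
	)
	deletionSuccessTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "velero_backup_deletion_success_total", Help: "Total number of successful backup deletions"},
		[]string{"schedule"},
	)
	deletionFailureTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "velero_backup_deletion_failure_total", Help: "Total number of failed backup deletions"},
		[]string{"schedule"},
	)
)

// RegisterDeletionMetrics registers the counters; the deletion controller
// then increments attempt first, and success or failure afterwards.
func RegisterDeletionMetrics(r prometheus.Registerer) {
	r.MustRegister(deletionAttemptTotal, deletionSuccessTotal, deletionFailureTotal)
}
```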
Steve Kriss
985479094f Merge pull request #1342 from cwilkers/patch-1
Add NooBaa to support matrix
2019-04-03 07:32:28 -06:00
Chandler Wilkerson
a611658436 Add NooBaa to support matrix
Per https://github.com/heptio/velero/pull/1334#issuecomment-479190807

Signed-off-by: cwilkers <cwilkers@redhat.com>
2019-04-03 07:08:24 -05:00
Nolan Brubaker
0f442b002d Merge pull request #1341 from skriss/fix-changelogs
changelog fixes
2019-04-02 17:24:52 -04:00
Steve Kriss
a774b54ae7 changelog fixes
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-02 15:16:47 -06:00
Steve Kriss
2e3f00f64d Merge pull request #1340 from carlisia/c-copy
Fix copyright
2019-04-01 15:49:06 -06:00
Nolan Brubaker
c3a933d3e3 Merge pull request #1338 from skriss/validate-config-keys
objectstores/volumesnapshotters: check for invalid keys in config
2019-04-01 15:11:15 -04:00
Nolan Brubaker
bbd28a9fb9 Merge pull request #1337 from skriss/logs-cmd-validation
logs commands: validate item exists & is finished processing
2019-04-01 15:10:52 -04:00
Carlisia
23b1098950 Fix copyright
Signed-off-by: Carlisia <carlisiac@vmware.com>
2019-04-01 12:06:14 -07:00
Steve Kriss
1d3d66aa77 logs commands: validate backup/restore exists & is finished processing
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-01 12:39:02 -06:00
Fábio Franco Uechi
40c7fbce09 Gracefully handle failed API groups from the discovery API (#1293)
* log details and continue executing if error of type ErrGroupDiscoveryFailed is returned by discovery API

Signed-off-by: fabito <fuechi@ciandt.com>
2019-04-01 14:27:37 -04:00
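A sketch of the described handling, using the `k8s.io/client-go/discovery` helpers: `ErrGroupDiscoveryFailed` is treated as a partial failure, so the broken API groups are logged and the successfully discovered resources are kept.

```go
package velerodiscovery

import (
	"github.com/sirupsen/logrus"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/discovery"
)

// preferredResources tolerates partial discovery failures: an aggregated API
// that is down yields an ErrGroupDiscoveryFailed, but the resources that
// were discovered successfully are still returned and usable.
func preferredResources(client discovery.DiscoveryInterface, log logrus.FieldLogger) ([]*metav1.APIResourceList, error) {
	resources, err := client.ServerPreferredResources()
	if discovery.IsGroupDiscoveryFailedError(err) {
		// Log each failed group and carry on with what we have.
		for gv, groupErr := range err.(*discovery.ErrGroupDiscoveryFailed).Groups {
			log.WithError(groupErr).Warnf("failed to discover group: %s", gv)
		}
		return resources, nil
	}
	return resources, err
}
```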
Steve Kriss
6bf29e17aa objectstores/volumesnapshotters: check for invalid keys in config
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-04-01 08:15:24 -06:00
Steve Kriss
7298a4eda0 allow restic restore helper image to be specified via ConfigMap (#1311)
* allow restic restore helper image to be specified via ConfigMap

Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-29 17:11:34 -04:00
Steve Kriss
2a36cdcbf6 set backup start timestamp before patching to inprogress (#1330)
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-29 13:33:50 -07:00
Steve Kriss
dcee310745 improve handling of custom S3 URLs (#1331)
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-29 13:32:43 -07:00
Steve Kriss
a696cd09f2 remove Warning from restore item action output (#1318)
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-28 13:08:37 -07:00
Steve Kriss
be42ea782d turn down log levels in plugin server to DEBUG (#1325)
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-28 12:44:26 -07:00
Steve Kriss
9b635c0e14 add additionalItems to restore item actions (#1304)
* add additionalItems to restore item actions

Signed-off-by: Steve Kriss <krisss@vmware.com>
Co-authored-by: Andy Goldstein <goldsteina@vmware.com>
2019-03-28 12:21:56 -07:00
KubeKween
477e42286c Bump plugin client version (#1319)
* Bump plugin client version

Signed-off-by: Carlisia <carlisiac@vmware.com>
2019-03-28 11:34:43 -04:00
Nolan Brubaker
21f3169ad3 Merge pull request #1321 from skriss/rename-block-store
rename BlockStore to VolumeSnapshotter
2019-03-28 11:33:06 -04:00
Nolan Brubaker
59e0ef4524 Merge pull request #1322 from skriss/fix-grpc-streaming
don't wrap io.EOF errors during gRPC streaming
2019-03-28 11:30:54 -04:00
Steve Kriss
86293b68b3 don't wrap io.EOF errors during gRPC streaming
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-27 16:22:28 -06:00
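The pitfall being fixed: wrapping `io.EOF` (for example with `github.com/pkg/errors`) breaks the `err == io.EOF` comparison that gRPC stream consumers use to detect a clean end of stream. A minimal sketch of the pass-through pattern:

```go
package stream

import (
	"io"

	"github.com/pkg/errors"
)

// readChunk wraps unexpected errors for stack traces, but passes io.EOF
// through untouched so callers can still compare against it directly.
func readChunk(r io.Reader, buf []byte) (int, error) {
	n, err := r.Read(buf)
	if err != nil {
		if err == io.EOF {
			return n, err // do NOT wrap: callers check err == io.EOF
		}
		return n, errors.WithStack(err)
	}
	return n, nil
}
```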
Steve Kriss
e4e0ed68a6 changelog
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-27 14:58:11 -06:00
Steve Kriss
bb9c3f6a1a rename BlockStore to VolumeSnapshotter
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-27 14:55:28 -06:00
Nolan Brubaker
3f2c28f6bb Merge pull request #1301 from skriss/plugins-error-location
log error locations from plugin logger and don't overwrite in client
2019-03-27 11:21:52 -04:00
Nolan Brubaker
60460f6920 Merge pull request #1300 from skriss/plugin-error-stacks
Add stack traces to plugin errors so error location info can be logged
2019-03-27 11:21:00 -04:00
Steve Kriss
7b0d8217de send plugin error stack traces over gRPC and log error locations
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-27 08:34:03 -06:00
Matt Stump
f8baf4f4f0 Fix for #1312, use describe to determine if AWS EBS snapshot is encrypted and explicitly pass that value in EC2 CreateVolume call. (#1316)
Signed-off-by: Matt Stump <mstump@vorstella.com>
2019-03-26 17:30:27 -07:00
Steve Kriss
b1c0e9c49b update plugins to work with updated go-plugin (#1308)
* update plugins to work with updated go-plugin

Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-21 12:32:18 -07:00
Steve Kriss
4d7add1782 Merge pull request #1306 from pei0804/mustcompile
Improvement: faster regex
2019-03-21 09:07:20 -06:00
pei0804
7af9f8d74e compile only once for regexp.MustCompile
Signed-off-by: pei0804 <peeeei0804@gmail.com>
2019-03-21 21:51:48 +09:00
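The pattern behind the change: compile the expression once at package initialization instead of on every call. A minimal sketch (the expression itself is illustrative):

```go
package validation

import "regexp"

// nameRE is compiled once, at package init, and reused by every call below.
var nameRE = regexp.MustCompile(`^[a-z0-9]+(\.[a-z0-9]+)*/[a-z0-9-]+$`)

func isValidPluginName(name string) bool {
	// Previously, a regexp.MustCompile(...) inside this function would have
	// recompiled the pattern on every invocation.
	return nameRE.MatchString(name)
}
```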
Steve Kriss
ff2db31b32 log error locations from plugin logger and don't overwrite in client
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-20 16:13:37 -06:00
Steve Kriss
bd662ab613 Merge pull request #1303 from carlisia/c-dep
Update go-plugin
2019-03-20 14:10:14 -06:00
Carlisia
01f2ae76e2 Update go-plugin
Signed-off-by: Carlisia <carlisiac@vmware.com>
2019-03-20 13:02:51 -07:00
Steve Kriss
a111eed2af update license headers to Velero contributors (#1302)
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-20 12:32:48 -07:00
Steve Kriss
4c73e23ce8 Merge pull request #1291 from carlisia/c-plugins-III
Split velero plugin client into its own package
2019-03-19 17:42:22 -06:00
Carlisia
a71e43b2b7 Split velero plugin client into its own package
Signed-off-by: Carlisia <carlisiac@vmware.com>
2019-03-19 16:05:37 -07:00
Steve Kriss
1eac10ca9f Merge pull request #1288 from carlisia/c-plugins-II
Split plugin framework into its own package
2019-03-19 16:44:58 -06:00
Carlisia
7dfe58d37f Split plugin framework into its own package
Signed-off-by: Carlisia <carlisiac@vmware.com>
2019-03-19 15:36:31 -07:00
Nolan Brubaker
78bf8fb868 Merge pull request #1297 from skriss/restic-exclude-hostpath-pv
exclude hostPath PVs from restic backup
2019-03-19 11:35:33 -04:00
Steve Kriss
7d66fc31bd pkg/restic: check for & skip hostPath PVC/PVs
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-19 08:57:47 -06:00
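A sketch of the check (assumed shape, not necessarily the PR's exact code): a PV backed by `hostPath` is not supported by the restic integration, so it is detected and skipped.

```go
package restic

import (
	"github.com/sirupsen/logrus"
	corev1 "k8s.io/api/core/v1"
)

// isHostPathPV reports whether a PV is backed by a hostPath volume, which
// the restic integration does not support, so such PVs are skipped.
func isHostPathPV(pv *corev1.PersistentVolume, log logrus.FieldLogger) bool {
	if pv.Spec.HostPath != nil {
		log.Infof("skipping hostPath persistent volume %s", pv.Name)
		return true
	}
	return false
}
```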
Steve Kriss
183bea369d make resticrepositories non-restorable resources (#1296)
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-18 11:38:37 -07:00
Nolan Brubaker
de09fd7cdc Merge pull request #1294 from skriss/master-curl-redirect
add -L flag to curl commands (master branch)
2019-03-18 13:27:16 -04:00
Steve Kriss
f64b37289d add -L flag to curl commands
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-18 10:57:28 -06:00
KubeKween
73514a003b Move plugin interfaces to same package (#1264)
* Move plugin interfaces to same package

Signed-off-by: Carlisia <carlisiac@vmware.com>
2019-03-14 16:35:06 -04:00
Steve Kriss
7674332313 pass --log-level to plugins (#1278)
Plumb the log level through to plugin processes


Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-14 10:53:36 -07:00
Steve Kriss
409116fce8 add basic plugin panic handlers (#1270)
* add server-side panic handlers to all plugin methods

Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-13 14:07:52 -04:00
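A minimal sketch of a server-side panic handler for a plugin gRPC method: `recover` converts a panic into a returned error instead of killing the plugin process (type and method names are illustrative):

```go
package framework

import "github.com/pkg/errors"

type exampleServer struct{}

// handlePanic turns a panic inside a plugin gRPC method into a returned
// error, so the plugin process survives and the caller sees a real error.
func handlePanic(err *error) {
	if p := recover(); p != nil {
		if e, ok := p.(error); ok {
			*err = errors.WithStack(e)
		} else {
			*err = errors.Errorf("plugin panicked: %v", p)
		}
	}
}

// Each server-side method defers the handler over a named error return.
func (s *exampleServer) PutObject(bucket, key string) (err error) {
	defer handlePanic(&err)
	// ... actual method body; any panic here is recovered into err ...
	return nil
}
```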
Nolan Brubaker
503b112638 Add location resources and tests (#1277)
Add locations and tests to install package

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2019-03-13 11:23:00 -06:00
Steve Kriss
b286c652ec Merge pull request #1274 from MetisMachine/aws-new
AWS: Include zone in volume ID
2019-03-12 14:02:18 -06:00
tsturzl
89ca2571f3 AWS zone on volume IDs
Signed-off-by: Travis Sturzl <travis@metismachine.com>
2019-03-12 13:04:33 -06:00
Nolan Brubaker
394548afcd Merge pull request #1254 from skriss/remove-wait-for-pv
remove restore code that waits for a PV to become Available
2019-03-11 13:20:59 -04:00
KubeKween
4ee41a13a0 Merge pull request #1261 from asaf-erlich/patch-1
Update ark restore to not keep every single file open during extraction of the data
2019-03-08 14:59:09 -08:00
asaf-erlich
4041044a93 Update ark restore to not open every single file open during extraction of the data
Original error was:

```
ark -n <redacted> restore logs <redacted>
time="2019-03-06T18:31:06Z" level=info msg="Not including resource" groupResource=nodes logSource="pkg/restore/restore.go:124"
time="2019-03-06T18:31:06Z" level=info msg="Not including resource" groupResource=events logSource="pkg/restore/restore.go:124"
time="2019-03-06T18:31:06Z" level=info msg="Not including resource" groupResource=events.events.k8s.io logSource="pkg/restore/restore.go:124"
time="2019-03-06T18:31:06Z" level=info msg="Not including resource" groupResource=backups.ark.heptio.com logSource="pkg/restore/restore.go:124"
time="2019-03-06T18:31:06Z" level=info msg="Not including resource" groupResource=restores.ark.heptio.com logSource="pkg/restore/restore.go:124"
time="2019-03-06T18:31:06Z" level=info msg="Starting restore of backup backup/<redacted>" logSource="pkg/restore/restore.go:342"
time="2019-03-06T18:31:06Z" level=info msg="error unzipping and extracting: open /tmp/604421455/resources/rolebindings.rbac.authorization.k8s.io/namespaces/<redacted>/<redacted>: too many open files" logSource="pkg/restore/restore.go:346"
```

After downloading the directory from s3 and untarring it, I found 1036 files; the `ulimit -n` output says 1024. This is our team's best guess at a root cause, but the code fixed in this PR definitely was holding all the files open until the method returned: https://blog.learngoprogramming.com/gotchas-of-defer-in-go-1-8d070894cb01

Please note my go code abilities are not great and I did not test this; I just edited the file in GitHub. All I did was remove the defer and call the file's Close after the copy is done. Theoretically this should hold only one file open at a time now. Let me know if you want me to do any further steps.

Thank you,
-Asaf

Signed-off-by: Asaf Erlich <aerlich@groupon.com>
2019-03-07 11:23:35 -05:00
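A sketch of the fix described above: close each file as soon as its copy finishes instead of deferring every `Close` to the end of the extraction function:

```go
package extract

import (
	"io"
	"os"
)

// writeFile copies src to path and closes the file as soon as the copy is
// done. In the original code the Close was deferred inside a function that
// looped over every file in the archive, so every descriptor stayed open
// until extraction finished and the open-file limit was eventually hit.
func writeFile(path string, src io.Reader) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	_, copyErr := io.Copy(f, src)
	closeErr := f.Close() // close immediately, not via defer
	if copyErr != nil {
		return copyErr
	}
	return closeErr
}
```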
Nolan Brubaker
5e12a921b5 Merge pull request #1256 from carlisia/c-plugins
Add original item to restore plugin interface
2019-03-06 19:02:47 -05:00
Michal Wieczorek
1354e2b6ff Add original item to restore plugin interface
Signed-off-by: Michal Wieczorek <wieczorek-michal@wp.pl>
2019-03-05 17:09:42 -08:00
Steve Kriss
e29aa74a23 remove restore code that waits for a PV to become Available
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-05 17:04:52 -07:00
Nolan Brubaker
ce3f43e876 Merge pull request #1251 from skriss/backup-extractor
move backup extraction logic to its own type
2019-03-05 16:37:48 -05:00
Nolan Brubaker
5912fe66e5 Merge pull request #1250 from skriss/extract-pv-restorer
move pvRestorer and tests to their own files
2019-03-05 16:37:32 -05:00
KubeKween
c006d9246f Merge pull request #1248 from DheerajSShetty/describe_ouput
Improve `describe` output
2019-03-04 13:34:42 -08:00
DheerajSShetty
1b031f0cc4 Improve describe output
* Move Phase to right under Metadata (name/namespace/label/annotations)
* Move the Validation errors: section right after the Phase: section, and
  only show it if the item has a phase of FailedValidation
* For restores, move Warnings and Errors under Validation errors. Do not
  show Warnings or Errors if there are none.

Signed-off-by: DheerajSShetty <dheerajs@vmware.com>

Fixes #987
2019-03-04 13:21:18 -08:00
Steve Kriss
88e6a740f2 move pvRestorer and tests to their own files
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-01 15:07:25 -07:00
Steve Kriss
0fec56f488 move backup extraction logic to its own type
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-03-01 15:05:58 -07:00
KubeKween
e21940bee1 Merge pull request #1231 from skriss/k8s-1.12-deps
update kubernetes and azure dependencies to 1.12
2019-02-28 15:09:06 -08:00
Nolan Brubaker
421b64b1fa Merge pull request #1247 from skriss/pr-1146-changelog
add changelog for PR 1146
2019-02-28 18:08:32 -05:00
Steve Kriss
81e741ebfc add changelog for PR 1146
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-28 16:02:23 -07:00
Nolan Brubaker
fcf21813a5 Merge pull request #1246 from skriss/preserve-storageclass
when restoring a PV, don't remove its spec.storageClassName
2019-02-28 18:02:20 -05:00
Steve Kriss
8dd1cbf62b add changelog
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-28 15:58:18 -07:00
KubeKween
65f3926caa Merge pull request #1146 from skriss/replace-map-utils-final
replace ark's map_utils.go with structured types and apimachinery's unstructured helpers
2019-02-28 14:37:07 -08:00
Steve Kriss
31501b79b2 when deleting snapshot, don't error if it doesn't exist
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-28 15:33:05 -07:00
Steve Kriss
6bf837b233 address breaking changes in Azure SDK
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-28 15:33:05 -07:00
Steve Kriss
f908d5f8c0 upgrade Azure SDK to a GA version matching Kubernetes
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-28 15:33:05 -07:00
Steve Kriss
f8548e1ca1 tweak a couple of dependency versions
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-28 15:33:05 -07:00
Steve Kriss
58e471bda0 fix breaking changes
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-28 15:33:05 -07:00
Steve Kriss
61eab7dca3 update generated code using 1.12 generator
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-28 15:33:05 -07:00
Steve Kriss
efc490138c update to 1.12 dependencies
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-28 15:33:04 -07:00
Steve Kriss
80fe640b98 add changelog
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-28 14:36:38 -07:00
Steve Kriss
21c57c46b3 when restoring a PV, don't remove its spec.storageClassName
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-28 14:34:36 -07:00
KubeKween
7353294b7f Merge pull request #1236 from DheerajSShetty/squashed_feature
defer closing the ReadCloser in ObjectStoreGRPCServer.GetObject
2019-02-28 13:12:05 -08:00
Steve Kriss
7e736ab79d Merge pull request #1244 from carlisia/c-fiximages
Fix readme links
2019-02-28 13:07:49 -08:00
Carlisia
5468ccf5cb Fix readme links
Signed-off-by: Carlisia <carlisiac@vmware.com>
2019-02-28 12:33:15 -08:00
DheerajSShetty
032aaac508 defer closing the ReadCloser in ObjectStoreGRPCServer.GetObject
Signed-off-by: DheerajSShetty <dheerajs@vmware.com>

Fixes #1093
2019-02-28 11:51:10 -08:00
KubeKween
ab2fc65c02 Merge pull request #1243 from skriss/changelogs
v0.10.2 and v0.11.0 changelogs
2019-02-28 07:03:24 -08:00
KubeKween
03b8f5397f Merge pull request #1203 from skriss/v0.11-changes
v0.11 changes
2019-02-28 07:01:23 -08:00
Steve Kriss
431602e852 add v0.11.0 changelog
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-27 12:40:42 -08:00
Steve Kriss
cb0a9281f6 add v0.10.2 changelog
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-27 08:23:21 -08:00
Nolan Brubaker
783c7d850c Merge pull request #1235 from skriss/restic-race-fix
Fix bugs in restic repository ensurer
2019-02-27 11:09:47 -05:00
Steve Kriss
e3e76c2067 pkg/restic: fix concurrency bugs causing dupe repos, panics
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-25 18:30:26 -08:00
KubeKween
e4771f582b Merge pull request #1232 from skriss/install-docs-updates
add more documentation about using official releases
2019-02-22 12:04:20 -08:00
Steve Kriss
4e0b0c87bb add more documentation about using official releases
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-22 12:56:52 -07:00
KubeKween
3724af259c Merge pull request #1227 from skriss/update-logo
update to logo with name
2019-02-21 09:10:17 -08:00
Steve Kriss
522ee9ad36 update to logo with name
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-21 10:00:06 -07:00
KubeKween
1b3c444720 Merge pull request #1190 from nrb/document-velero-migration
Instructions for migrating from Ark to Velero
2019-02-20 12:26:41 -08:00
Nolan Brubaker
3d2b031ee4 Document steps for migrating from Ark to Velero
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2019-02-20 15:18:31 -05:00
Steve Kriss
8be6f03ef0 Merge pull request #1212 from nrb/list-link-fix
Update support link to the google group
2019-02-14 11:28:15 -07:00
Nolan Brubaker
e2f84a1242 Update support link to the google group
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2019-02-14 13:19:22 -05:00
Nolan Brubaker
49eeeb04f0 Merge pull request #1204 from skriss/deprecation-notices
add deprecation notices to pkg/apis/ark/v1 types
2019-02-12 16:02:47 -05:00
Steve Kriss
e1d414338c Merge pull request #1200 from nrb/fix-1183
Fix restoring GCP regional disks
2019-02-12 13:48:42 -07:00
Nolan Brubaker
0ffaeb949d Fix restoring GCP regional disks
To create a regional disk, the URLs for the zones in which the disk is
replicated must be provided to the GCP API.

Fixes #1183

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2019-02-12 15:42:50 -05:00
Steve Kriss
ed73be44fd Merge pull request #1206 from nzoueidi/master
Add UTC time to --schedule
2019-02-12 09:19:53 -07:00
Naeil ZOUEIDI
988ce573c0 Add UTC time to --schedule
Signed-off-by: Naeil Ezzoueidi <naeilzoueidi@ubuntu.com>
2019-02-12 11:04:10 -05:00
Steve Kriss
780dc4551f fix compile err & test
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 16:45:04 -07:00
Steve Kriss
32835c63f6 code review feedback
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 16:21:49 -07:00
Steve Kriss
86c5c25d13 code review: remove obsolete commented code
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 16:21:49 -07:00
Steve Kriss
250f109c41 delete pkg/util/collections/map_utils.go & tests
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 16:21:49 -07:00
Steve Kriss
d8e9b772ff move MergeMaps func into pkg/restore where it's used
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 16:21:48 -07:00
Steve Kriss
88fc6e2141 pkg/controller: remove usage of pkg/util/collections
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 16:21:24 -07:00
Steve Kriss
38ad7d71f5 pkg/restore: remove usage of pkg/util/collections
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 16:20:41 -07:00
Steve Kriss
e91c841c59 pkg/backup: remove usage of pkg/util/collections
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 16:18:51 -07:00
Steve Kriss
902c0f797f pkg/podexec: remove usage of pkg/util/collections
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 16:18:37 -07:00
Steve Kriss
296dd6617e pkg/cloudprovider: remove usage of pkg/util/collections
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 16:17:33 -07:00
Steve Kriss
4cd8170386 add deprecation notices to pkg/apis/ark/v1 types
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 15:44:52 -07:00
Steve Kriss
551aaa646d remove note about rename being WIP, mention former name
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 12:46:21 -07:00
Steve Kriss
0df30c1e89 ark->velero in issue templates & bug cmd
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-11 12:43:51 -07:00
KubeKween
378011baf6 Merge pull request #1153 from daved/fix/1142-restore_log
Clarify conditional nature of restore in log
2019-02-11 09:22:25 -08:00
Daved
b2b1ee44ea Clarify restore log when object unchanged
Signed-off-by: Daved <daved@codemodus.com>
2019-02-11 09:12:07 -08:00
Steve Kriss
4583aa7078 Merge pull request #826 from nrb/fix-691
Wait for namespace to terminate before restoring
2019-02-07 11:07:25 -07:00
Steve Kriss
b15970d3ef Merge pull request #1198 from dvhart/source-archive-docs
Document build from release archive limitations
2019-02-07 11:00:35 -07:00
Darren Hart
9df3947745 Document build from release archive limitations
Due to the assumption of building from a git repository inherent in the
Makefile targets, update the documentation to limit source code archive
builds to the `go build` commands.

Note that it may be possible to update hack/build.sh to cope with
building outside a git repository, but there are a number of questions
to answer regarding how to populate the buildinfo info, and that is a
larger discussion than ensuring we can build the binary from the sources
in the source code archive.

Signed-off-by: Darren Hart <dvhart@vmware.com>
2019-02-07 09:27:01 -08:00
Steve Kriss
2364393b7c Merge pull request #1196 from dvhart/source-archive-docs
Document building from GitHub Release Source code archive
2019-02-07 07:55:14 -07:00
Darren Hart
ee2b352489 Document build from release archive and fix ToC link
Update the documentation to include minimal instructions for building
the velero binary from the GitHub Release "Source code" archive.

Correct the Download link in the Table of Contents for
build-from-scratch.md.

Signed-off-by: Darren Hart <dvhart@vmware.com>
2019-02-06 18:08:38 -08:00
Nolan Brubaker
890202f2e4 Wait for PV/namespace to delete before restore
If a PV already exists, wait for it, its associated PVC, and its associated
namespace to be deleted before attempting to restore it.

If a namespace already exists, wait for it to be deleted before
attempting to restore it.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2019-02-06 15:43:50 -05:00
Nolan Brubaker
3c7737c8b1 Merge pull request #1166 from skriss/tweak-server-log-version
velero server: log version and git SHA at startup
2019-02-06 15:28:51 -05:00
Nolan Brubaker
ca8e951ac6 Merge pull request #1194 from skriss/fix-changelogs
add/fix changelogs for recent PRs
2019-02-06 14:50:33 -05:00
Nolan Brubaker
52ecc45ec8 Merge pull request #1167 from skriss/logging-and-misc-cleanup
Logging and misc cleanup
2019-02-06 14:45:08 -05:00
Steve Kriss
8ee406b4bd add/fix changelogs for recent PRs
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-06 12:42:29 -07:00
Nolan Brubaker
46e87661c0 Merge pull request #1171 from skriss/restic-stats
use restic stats instead of check to check repo existence
2019-02-06 14:28:07 -05:00
Steve Kriss
723cda2697 use restic stats instead of check to check repo existence
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-06 12:14:11 -07:00
Nolan Brubaker
5f0ff026b0 Merge pull request #1156 from skriss/restic-v0.9.4
upgrade to restic v0.9.4 and replace --hostname with --host
2019-02-06 14:03:44 -05:00
Steve Kriss
0a810ced54 Merge pull request #1187 from nrb/issue-template-namespace-fix
Use current info in bug template
2019-02-05 14:28:27 -07:00
Nolan Brubaker
c1a817b4e9 Use current info in bug template
The binary, deployment, and namespace have yet to change for most users,
so make sure the bug template has the correct information when filing.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2019-02-05 16:11:56 -05:00
Steve Kriss
478d12b4ff upgrade to restic v0.9.4 and replace --hostname with --host
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-05 10:15:04 -07:00
Steve Kriss
328bc361be velero server: log version and git SHA at startup
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-05 10:12:06 -07:00
Steve Kriss
7913ae1867 remove extraneous use of meta.Accessor
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-05 10:09:57 -07:00
Steve Kriss
c0a55e136b logging tweaks for clarity
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-02-05 10:08:33 -07:00
Steve Kriss
3054a38bd6 Merge pull request #1186 from nrb/issue-template-command
Update templates to use current command
2019-02-05 09:20:31 -07:00
Nolan Brubaker
381149cedf Update templates to use current command
Currently, the executable binary in the wild is `ark`, so leave it in our
GitHub issue templates.

Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2019-02-05 11:12:33 -05:00
Steve Kriss
db9dacae54 Merge pull request #1185 from ncdc/move-velero-image
Move Velero image to docs/img
2019-02-05 08:56:24 -07:00
Andy Goldstein
77327db062 Move Velero image to docs/img
Signed-off-by: Andy Goldstein <goldsteina@vmware.com>
2019-02-05 10:48:16 -05:00
Andy Goldstein
1675943f44 Merge pull request #1184 from nrb/rename-ark-to-velero
Rename Ark to Velero!!!
2019-02-05 10:41:20 -05:00
Nolan Brubaker
43714caaec Rename Ark to Velero!!!
Signed-off-by: Nolan Brubaker <brubakern@vmware.com>
2019-02-04 17:35:22 -05:00
Andy Goldstein
bbc6caf7fe Merge pull request #1180 from skriss/fix-version-mode
change mode on metadata/version to 0755
2019-02-01 10:17:13 -05:00
Steve Kriss
25299513c1 change mode on metadata/version to 0755
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-01-31 16:27:13 -07:00
Andy Goldstein
e61d3c6ca0 Merge pull request #1174 from The-smooth-operator/kube2iam_docs
extend AWS NoCredentialsProviders troubleshooting docs with kube2iam …
2019-01-28 14:56:09 -05:00
The-smooth-operator
c56e3e5af3 extend AWS NoCredentialsProviders troubleshooting docs with kube2iam case
Signed-off-by: The-smooth-operator <alberto.delbarrio.albelda@gmail.com>
2019-01-28 12:34:53 +01:00
Andy Goldstein
78cb813210 Merge pull request #1163 from apoplawski/documentation-add-info-on-protoc
Added protoc-gen-go version info
2019-01-24 12:18:43 -05:00
Andy Goldstein
f90b8f9473 Merge pull request #1116 from skriss/status-crd
add server version to `ark version` output
2019-01-23 15:01:22 -05:00
Steve Kriss
8a58b217be show server version in ark version output using ServerStatusRequest CRD
Signed-off-by: Steve Kriss <steve@heptio.com>
2019-01-23 12:51:13 -07:00
Andy Goldstein
5847dcabba Merge pull request #1117 from wwitzel3/issue-134
Add backup-version file in backup tarball
2019-01-22 09:01:34 -08:00
Artur Poplawski
ad5146b9b1 added protoc-gen-go version info
Signed-off-by: Steve Kriss <krisss@vmware.com>
2019-01-22 15:46:01 +01:00
Wayne Witzel III
d08c2e1b9c Add backup-version file in backup tarball
Signed-off-by: Wayne Witzel III <wwitzel3@vmware.com>
2019-01-09 16:36:32 -05:00
1397 changed files with 187855 additions and 54835 deletions


@@ -14,11 +14,11 @@ about: Tell us about a problem you are experiencing
**The output of the following commands will help us better understand what's going on**:
(Pasting long output into a [GitHub gist](https://gist.github.com) or other pastebin is fine.)
-* `kubectl logs deployment/ark -n heptio-ark`
-* `ark backup describe <backupname>` or `kubectl get backup/<backupname> -n heptio-ark -o yaml`
-* `ark backup logs <backupname>`
-* `ark restore describe <restorename>` or `kubectl get restore/<restorename> -n heptio-ark -o yaml`
-* `ark restore logs <restorename>`
+* `kubectl logs deployment/velero -n velero`
+* `velero backup describe <backupname>` or `kubectl get backup/<backupname> -n velero -o yaml`
+* `velero backup logs <backupname>`
+* `velero restore describe <restorename>` or `kubectl get restore/<restorename> -n velero -o yaml`
+* `velero restore logs <restorename>`
**Anything else you would like to add:**
@@ -27,7 +27,7 @@ about: Tell us about a problem you are experiencing
**Environment:**
-- Ark version (use `ark version`):
+- Velero version (use `velero version`):
- Kubernetes version (use `kubectl version`):
- Kubernetes installer & version:
- Cloud provider or hardware configuration:


@@ -14,7 +14,7 @@ about: Suggest an idea for this project
**Environment:**
-- Ark version (use `ark version`):
+- Velero version (use `velero version`):
- Kubernetes version (use `kubectl version`):
- Kubernetes installer & version:
- Cloud provider or hardware configuration:

.gitignore (2 lines changed)

@@ -27,7 +27,7 @@ _testmain.go
debug
-/ark
+/velero
.idea/
.container-*


@@ -1,4 +1,4 @@
-# Copyright 2018 the Heptio Ark contributors.
+# Copyright 2018 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -17,7 +17,7 @@ before:
hooks:
- ./hack/set-example-tags.sh
builds:
-- main: ./cmd/ark/main.go
+- main: ./cmd/velero/main.go
env:
- CGO_ENABLED=0
goos:
@@ -39,7 +39,7 @@ builds:
- goos: windows
goarch: arm64
ldflags:
-- -X "github.com/heptio/ark/pkg/buildinfo.Version={{ .Tag }}" -X "github.com/heptio/ark/pkg/buildinfo.GitSHA={{ .FullCommit }}" -X "github.com/heptio/ark/pkg/buildinfo.GitTreeState={{ .Env.GIT_TREE_STATE }}"
+- -X "github.com/heptio/velero/pkg/buildinfo.Version={{ .Tag }}" -X "github.com/heptio/velero/pkg/buildinfo.GitSHA={{ .FullCommit }}" -X "github.com/heptio/velero/pkg/buildinfo.GitTreeState={{ .Env.GIT_TREE_STATE }}"
archive:
name_template: "{{ .ProjectName }}-{{ .Tag }}-{{ .Os }}-{{ .Arch }}"
files:
@@ -50,5 +50,5 @@ checksum:
release:
github:
owner: heptio
-name: ark
+name: velero
draft: true


@@ -1,7 +1,7 @@
language: go
go:
-- 1.11.x
+- 1.12.x
sudo: required


@@ -1,21 +1,11 @@
## Development release:
-* [Unreleased Changes][0]
+* [Unreleased Changes][9]
-### Bug Fixes / Other Changes
-* add multizone/regional support to gcp (#765, @wwitzel3)
-* Delete spec.priority in pod restore action (#879, @mwieczorek)
-* Added brew reference (#1051, @omerlh)
-* Update to go 1.11 (#1069, @gliptak)
-* Initialize empty schedule metrics on server init (#1054, @cbeneke)
-* Update CHANGELOGs (#1063, @wwitzel3)
-* Remove default token from all service accounts (#1048, @ncdc)
-* Allow to use AWS Signature v1 for creating signed AWS urls (#811, @bashofmann)
## Current release:
-* [CHANGELOG-0.10.md][8]
+* [CHANGELOG-0.11.md][9]
## Older releases:
+* [CHANGELOG-0.10.md][8]
* [CHANGELOG-0.9.md][7]
* [CHANGELOG-0.8.md][6]
* [CHANGELOG-0.7.md][5]
@@ -24,12 +14,14 @@
* [CHANGELOG-0.4.md][2]
* [CHANGELOG-0.3.md][1]
-[9]: https://github.com/heptio/ark/blob/master/changelogs/unreleased
-[8]: https://github.com/heptio/ark/blob/master/changelogs/CHANGELOG-0.10.md
-[7]: https://github.com/heptio/ark/blob/master/changelogs/CHANGELOG-0.9.md
-[6]: https://github.com/heptio/ark/blob/master/changelogs/CHANGELOG-0.8.md
-[5]: https://github.com/heptio/ark/blob/master/changelogs/CHANGELOG-0.7.md
-[4]: https://github.com/heptio/ark/blob/master/changelogs/CHANGELOG-0.6.md
-[3]: https://github.com/heptio/ark/blob/master/changelogs/CHANGELOG-0.5.md
-[2]: https://github.com/heptio/ark/blob/master/changelogs/CHANGELOG-0.4.md
-[1]: https://github.com/heptio/ark/blob/master/changelogs/CHANGELOG-0.3.md
+[9]: https://github.com/heptio/velero/blob/master/changelogs/CHANGELOG-0.11.md
+[8]: https://github.com/heptio/velero/blob/master/changelogs/CHANGELOG-0.10.md
+[7]: https://github.com/heptio/velero/blob/master/changelogs/CHANGELOG-0.9.md
+[6]: https://github.com/heptio/velero/blob/master/changelogs/CHANGELOG-0.8.md
+[5]: https://github.com/heptio/velero/blob/master/changelogs/CHANGELOG-0.7.md
+[4]: https://github.com/heptio/velero/blob/master/changelogs/CHANGELOG-0.6.md
+[3]: https://github.com/heptio/velero/blob/master/changelogs/CHANGELOG-0.5.md
+[2]: https://github.com/heptio/velero/blob/master/changelogs/CHANGELOG-0.4.md
+[1]: https://github.com/heptio/velero/blob/master/changelogs/CHANGELOG-0.3.md
+[0]: https://github.com/heptio/velero/blob/master/changelogs/unreleased


@@ -1,4 +1,4 @@
-# Heptio Ark Community Code of Conduct
+# Velero Community Code of Conduct
## Contributor Code of Conduct


@@ -7,7 +7,7 @@ should be a new file created in the `changelogs/unreleased` folder. The file sho
naming convention of `pr-username` and the contents of the file should be your text for the
changelog.
-ark/changelogs/unreleased <- folder
+velero/changelogs/unreleased <- folder
000-username <- file

Dockerfile-fsfreeze-pause (new file, 19 lines)

@@ -0,0 +1,19 @@
+# Copyright 2018, 2019 the Velero contributors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+FROM debian:stretch-slim
+LABEL maintainer="Steve Kriss <krisss@vmware.com>"
+ENTRYPOINT ["/bin/bash", "-c", "while true; do sleep 10000; done"]


@@ -1,22 +0,0 @@
-# Copyright 2018 the Heptio Ark contributors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-FROM alpine:3.8
-MAINTAINER Wayne Witzel III <wayne@heptio.com>
-RUN apk add --no-cache ca-certificates
-RUN apk add --update --no-cache busybox util-linux
-ENTRYPOINT ["/bin/sh", "-c", "while true; do sleep 10000; done"]


@@ -1,4 +1,4 @@
-# Copyright 2017 the Heptio Ark contributors.
+# Copyright 2017, 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,20 +12,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-FROM alpine:3.8
+FROM debian:stretch-slim
-MAINTAINER Andy Goldstein <andy@heptio.com>
+LABEL maintainer="Steve Kriss <krisss@vmware.com>"
-RUN apk add --no-cache ca-certificates
+RUN apt-get update && \
+apt-get install -y --no-install-recommends ca-certificates wget bzip2 && \
+wget --quiet https://github.com/restic/restic/releases/download/v0.9.4/restic_0.9.4_linux_amd64.bz2 && \
+bunzip2 restic_0.9.4_linux_amd64.bz2 && \
+mv restic_0.9.4_linux_amd64 /usr/bin/restic && \
+chmod +x /usr/bin/restic && \
+apt-get remove -y wget bzip2 && \
+rm -rf /var/lib/apt/lists/*
-RUN apk add --update --no-cache bzip2 && \
-wget --quiet https://github.com/restic/restic/releases/download/v0.9.3/restic_0.9.3_linux_amd64.bz2 && \
-bunzip2 restic_0.9.3_linux_amd64.bz2 && \
-mv restic_0.9.3_linux_amd64 /usr/bin/restic && \
-chmod +x /usr/bin/restic
-ADD /bin/linux/amd64/ark /ark
+ADD /bin/linux/amd64/velero /velero
USER nobody:nobody
-ENTRYPOINT ["/ark"]
+ENTRYPOINT ["/velero"]


@@ -1,4 +1,4 @@
-# Copyright 2018 the Heptio Ark contributors.
+# Copyright 2018, 2019 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,12 +12,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-FROM alpine:3.8
+FROM debian:stretch-slim
-MAINTAINER Steve Kriss <steve@heptio.com>
+LABEL maintainer="Steve Kriss <krisss@vmware.com>"
-ADD /bin/linux/amd64/ark-restic-restore-helper .
+ADD /bin/linux/amd64/velero-restic-restore-helper .
USER nobody:nobody
-ENTRYPOINT [ "/ark-restic-restore-helper" ]
+ENTRYPOINT [ "/velero-restic-restore-helper" ]

Gopkg.lock (generated, 419 lines changed)

@@ -2,6 +2,7 @@
[[projects]]
+digest = "1:769af0c7dbdc19798e013900cfa855af9a7fda89912e019330a1dbd80a1e9a8c"
name = "cloud.google.com/go"
packages = [
"compute/metadata",
@@ -9,22 +10,27 @@
"internal",
"internal/optional",
"internal/version",
-"storage"
+"storage",
]
+pruneopts = "NUT"
revision = "44bcd0b2078ba5e7fedbeb36808d1ed893534750"
version = "v0.11.0"
[[projects]]
+digest = "1:5b71d15be52cbb93f5115f51ace93798204f6b4a3df0992d0b6da8644f505984"
name = "github.com/Azure/azure-sdk-for-go"
packages = [
-"arm/disk",
-"services/storage/mgmt/2017-10-01/storage",
-"storage"
+"services/compute/mgmt/2018-04-01/compute",
+"services/storage/mgmt/2018-02-01/storage",
+"storage",
+"version",
]
-revision = "2d1d76c9013c4feb6695a2346f0e66ea0ef77aa6"
-version = "v11.3.0-beta"
+pruneopts = "NUT"
+revision = "520918e6c8e8e1064154f51d13e02fad92b287b8"
+version = "v19.0.0"
[[projects]]
+digest = "1:b825d8578481c8877ff3b9a3654d77a48577cc33e65f33c3678d7e3f134bf73d"
name = "github.com/Azure/go-autorest"
packages = [
"autorest",
@@ -32,11 +38,15 @@
"autorest/azure",
"autorest/date",
"autorest/to",
-"autorest/validation"
+"autorest/validation",
+"version",
]
-revision = "1ff28809256a84bb6966640ff3d0371af82ccba4"
+pruneopts = "NUT"
+revision = "bca49d5b51a50dc5bb17bbf6204c711c6dbded06"
+version = "v10.14.0"
[[projects]]
+digest = "1:f41188abdb95b92995643a927f5bdd208389822a8e1aba00d85633ae51b85c85"
name = "github.com/aws/aws-sdk-go"
packages = [
"aws",
@@ -69,73 +79,92 @@
"service/s3",
"service/s3/s3iface",
"service/s3/s3manager",
-"service/sts"
+"service/sts",
]
+pruneopts = "NUT"
revision = "1f8fb9d0919e5a58992207db9512a03f76ab0274"
version = "v1.13.12"
[[projects]]
branch = "master"
+digest = "1:707ebe952a8b3d00b343c01536c79c73771d100f63ec6babeaed5c79e2b8a8dd"
name = "github.com/beorn7/perks"
packages = ["quantile"]
+pruneopts = "NUT"
revision = "3a771d992973f24aa725d07868b467d1ddfceafb"
[[projects]]
+digest = "1:a2c1d0e43bd3baaa071d1b9ed72c27d78169b2b269f71c105ac4ba34b1be4a39"
name = "github.com/davecgh/go-spew"
packages = ["spew"]
+pruneopts = "NUT"
revision = "346938d642f2ec3594ed81d874461961cd0faa76"
version = "v1.1.0"
[[projects]]
+digest = "1:7a6852b35eb5bbc184561443762d225116ae630c26a7c4d90546619f1e7d2ad2"
name = "github.com/dgrijalva/jwt-go"
packages = ["."]
+pruneopts = "NUT"
revision = "06ea1031745cb8b3dab3f6a236daf2b0aa468b7e"
version = "v3.2.0"
[[projects]]
branch = "master"
+digest = "1:da25cf063072a10461c19320e82117d85f9d60be4c95a62bc8d5a49acf7d0ca5"
name = "github.com/docker/spdystream"
packages = [
".",
-"spdy"
+"spdy",
]
+pruneopts = "NUT"
revision = "bc6354cbbc295e925e4c611ffe90c1f287ee54db"
[[projects]]
branch = "master"
+digest = "1:e8ffe2fb7368f65afaaf39769207bee2a7aeddf694e94f5bc05cffd750d4d98d"
name = "github.com/evanphx/json-patch"
packages = ["."]
+pruneopts = "NUT"
revision = "944e07253867aacae43c04b2e6a239005443f33a"
[[projects]]
+digest = "1:81466b4218bf6adddac2572a30ac733a9255919bc2f470b4827a317bd4ee1756"
name = "github.com/ghodss/yaml"
packages = ["."]
+pruneopts = "NUT"
revision = "0ca9ea5df5451ffdf184b4428c902747c2c11cd7"
version = "v1.0.0"
[[projects]]
+digest = "1:021d6ee454d87208dd1cd731cd702d3521aa8a51ad2072fa7beffbb3d677d8bb"
name = "github.com/go-ini/ini"
packages = ["."]
+pruneopts = "NUT"
revision = "20b96f641a5ea98f2f8619ff4f3e061cff4833bd"
version = "v1.28.2"
[[projects]]
+digest = "1:a6afc27b2a73a5506832f3c5a1c19a30772cb69e7bd1ced4639eb36a55db224f"
name = "github.com/gogo/protobuf"
packages = [
"proto",
-"sortkeys"
+"sortkeys",
]
+pruneopts = "NUT"
revision = "100ba4e885062801d56799d78530b73b178a78f3"
version = "v0.4"
[[projects]]
branch = "master"
+digest = "1:e2b86e41f3d669fc36b50d31d32d22c8ac656c75aa5ea89717ce7177e134ff2a"
name = "github.com/golang/glog"
packages = ["."]
+pruneopts = "NUT"
revision = "23def4e6c14b4da8ac2ed8007337bc5eb5007998"
[[projects]]
-branch = "master"
+digest = "1:a98a0b00720dc3149bf3d0c8d5726188899e5bab2f5072b9a7ef82958fbc98b2"
name = "github.com/golang/protobuf"
packages = [
"proto",
@@ -143,235 +172,331 @@
"ptypes",
"ptypes/any",
"ptypes/duration",
-"ptypes/timestamp"
+"ptypes/timestamp",
]
-revision = "ab9f9a6dab164b7d1246e0e688b0ab7b94d8553e"
+pruneopts = "NUT"
+revision = "b5d812f8a3706043e23a9cd5babf2e5423744d30"
+version = "v1.3.1"
[[projects]]
branch = "master"
+digest = "1:245bd4eb633039cd66106a5d340ae826d87f4e36a8602fcc940e14176fd26ea7"
name = "github.com/google/btree"
packages = ["."]
+pruneopts = "NUT"
revision = "e89373fe6b4a7413d7acd6da1725b83ef713e6e4"
[[projects]]
branch = "master"
+digest = "1:52c5834e2bebac9030c97cc0798ac11c3aa8a39f098aeb419f142533da6cd3cc"
name = "github.com/google/gofuzz"
packages = ["."]
+pruneopts = "NUT"
revision = "24818f796faf91cd76ec7bddd72458fbced7a6c1"
[[projects]]
branch = "master"
+digest = "1:139e03a0b4ef05098c2acb7c081b2d84d9478cae11ac777f7c1f6d550efab1ca"
name = "github.com/googleapis/gax-go"
packages = ["."]
+pruneopts = "NUT"
revision = "84ed26760e7f6f80887a2fbfb50db3cc415d2cea"
[[projects]]
+digest = "1:3d7c1446fc5c710351b246c0dc6700fae843ca27f5294d0bd9f68bab2a810c44"
name = "github.com/googleapis/gnostic"
packages = [
"OpenAPIv2",
"compiler",
-"extensions"
+"extensions",
]
+pruneopts = "NUT"
revision = "ee43cbb60db7bd22502942cccbc39059117352ab"
version = "v0.1.0"
[[projects]]
branch = "master"
+digest = "1:7fdf3223c7372d1ced0b98bf53457c5e89d89aecbad9a77ba9fcc6e01f9e5621"
name = "github.com/gregjones/httpcache"
packages = [
".",
-"diskcache"
+"diskcache",
]
+pruneopts = "NUT"
revision = "9cad4c3443a7200dd6400aef47183728de563a38"
[[projects]]
+branch = "master"
+digest = "1:32e5a56c443b5581e4bf6e74cdc78b5826d7e4c5df43883e2dc31e4d7f4ae98a"
+name = "github.com/hashicorp/go-hclog"
+packages = ["."]
+pruneopts = "NUT"
+revision = "ca137eb4b4389c9bc6f1a6d887f056bf16c00510"
[[projects]]
branch = "master"
+digest = "1:143aae8d04a6133eea9c6400b90a1f47ae1100b48a1636160aba861d1b26c5b2"
name = "github.com/hashicorp/go-plugin"
-packages = ["."]
-revision = "e2fbc6864d18d3c37b6cde4297ec9fca266d28f1"
+packages = [
+".",
+"internal/plugin",
+]
+pruneopts = "NUT"
+revision = "3f118e8ee104b6f22aeb12453fab56aed1356186"
[[projects]]
branch = "master"
+digest = "1:892e13370cbfcda090d8f7676ef67b50cb2ead5460b72f3a1c2bb1c19e9a57de"
name = "github.com/hashicorp/golang-lru"
packages = [
".",
-"simplelru"
+"simplelru",
]
+pruneopts = "NUT"
revision = "0a025b7e63adc15a622f29b0b2c4c3848243bbf6"
[[projects]]
branch = "master"
+digest = "1:73d3d2f8f2bcf510db08576eca6c1d2b87bcea348de26bf1386b291ad1b52296"
name = "github.com/hashicorp/yamux"
packages = ["."]
+pruneopts = "NUT"
revision = "f5742cb6b85602e7fa834e9d5d91a7d7fa850824"
[[projects]]
+digest = "1:36480ab1ebec17489013e8a69d15451f47d0edbf8a54a45284857d13a0ebf692"
name = "github.com/imdario/mergo"
packages = ["."]
+pruneopts = "NUT"
revision = "3e95a51e0639b4cf372f2ccf74c86749d747fbdc"
version = "0.2.2"
[[projects]]
+digest = "1:406338ad39ab2e37b7f4452906442a3dbf0eb3379dd1f06aafb5c07e769a5fbb"
name = "github.com/inconshreveable/mousetrap"
packages = ["."]
+pruneopts = "NUT"
revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
version = "v1.0"
[[projects]]
+digest = "1:ac6d01547ec4f7f673311b4663909269bfb8249952de3279799289467837c3cc"
name = "github.com/jmespath/go-jmespath"
packages = ["."]
+pruneopts = "NUT"
revision = "0b12d6b5"
[[projects]]
-name = "github.com/json-iterator/go"
+digest = "1:da62aa6632d04e080b8a8b85a59ed9ed1550842a0099a55f3ae3a20d02a3745a"
+name = "github.com/joho/godotenv"
packages = ["."]
-revision = "f2b4162afba35581b6d4a50d3b8f34e33c144682"
+pruneopts = "NUT"
+revision = "23d116af351c84513e1946b527c88823e476be13"
+version = "v1.3.0"
+[[projects]]
+digest = "1:8e36686e8b139f8fe240c1d5cf3a145bc675c22ff8e707857cdd3ae17b00d728"
+name = "github.com/json-iterator/go"
+packages = ["."]
+pruneopts = "NUT"
+revision = "1624edc4454b8682399def8740d46db5e4362ba4"
+version = "v1.1.5"
[[projects]]
+digest = "1:13ada91f079028d1b4ca88e10a16439dcfa6541d26ed2e61e770f56d06301933"
+name = "github.com/marstr/guid"
+packages = ["."]
+pruneopts = "NUT"
+revision = "8bd9a64bf37eb297b492a4101fb28e80ac0b290f"
+version = "v1.1.0"
[[projects]]
+digest = "1:5985ef4caf91ece5d54817c11ea25f182697534f8ae6521eadcd628c142ac4b6"
name = "github.com/matttproud/golang_protobuf_extensions"
packages = ["pbutil"]
+pruneopts = "NUT"
revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c"
version = "v1.0.1"
[[projects]]
+branch = "master"
+digest = "1:18b773b92ac82a451c1276bd2776c1e55ce057ee202691ab33c8d6690efcc048"
+name = "github.com/mitchellh/go-testing-interface"
+packages = ["."]
+pruneopts = "NUT"
+revision = "a61a99592b77c9ba629d254a693acffaeb4b7e28"
[[projects]]
+digest = "1:2f42fa12d6911c7b7659738758631bec870b7e9b4c6be5444f963cdcfccc191f"
name = "github.com/modern-go/concurrent"
packages = ["."]
+pruneopts = "NUT"
revision = "bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94"
version = "1.0.3"
[[projects]]
+digest = "1:c6aca19413b13dc59c220ad7430329e2ec454cc310bc6d8de2c7e2b93c18a0f6"
name = "github.com/modern-go/reflect2"
packages = ["."]
-revision = "1df9eeb2bb81f327b96228865c5687bc2194af3f"
-version = "1.0.0"
+pruneopts = "NUT"
+revision = "4b7aa43c6742a2c18fdef89dd197aaae7dac7ccd"
+version = "1.0.1"
[[projects]]
+digest = "1:3b517122f3aad1ecce45a630ea912b3092b4729f25532a911d0cb2935a1f9352"
+name = "github.com/oklog/run"
+packages = ["."]
+pruneopts = "NUT"
+revision = "4dadeb3030eda0273a12382bb2348ffc7c9d1a39"
+version = "v1.0.0"
[[projects]]
branch = "master"
+digest = "1:3bf17a6e6eaa6ad24152148a631d18662f7212e21637c2699bff3369b7f00fa2"
name = "github.com/petar/GoLLRB"
packages = ["llrb"]
+pruneopts = "NUT"
revision = "53be0d36a84c2a886ca057d34b6aa4468df9ccb4"
[[projects]]
+digest = "1:6c6d91dc326ed6778783cff869c49fb2f61303cdd2ebbcf90abe53505793f3b6"
name = "github.com/peterbourgon/diskv"
packages = ["."]
+pruneopts = "NUT"
revision = "5f041e8faa004a95c88a202771f4cc3e991971e6"
version = "v2.0.1"
[[projects]]
+digest = "1:5cf3f025cbee5951a4ee961de067c8a89fc95a5adabead774f82822efabab121"
name = "github.com/pkg/errors"
packages = ["."]
+pruneopts = "NUT"
revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
version = "v0.8.0"
[[projects]]
+digest = "1:0028cb19b2e4c3112225cd871870f2d9cf49b9b4276531f03438a88e94be86fe"
name = "github.com/pmezard/go-difflib"
packages = ["difflib"]
+pruneopts = "NUT"
revision = "792786c7400a136282c1664665ae0a8db921c6c2"
version = "v1.0.0"
[[projects]]
+digest = "1:03bca087b180bf24c4f9060775f137775550a0834e18f0bca0520a868679dbd7"
name = "github.com/prometheus/client_golang"
packages = [
"prometheus",
-"prometheus/promhttp"
+"prometheus/promhttp",
]
+pruneopts = "NUT"
revision = "c5b7fccd204277076155f10851dad72b76a49317"
version = "v0.8.0"
[[projects]]
branch = "master"
+digest = "1:32d10bdfa8f09ecf13598324dba86ab891f11db3c538b6a34d1c3b5b99d7c36b"
name = "github.com/prometheus/client_model"
packages = ["go"]
+pruneopts = "NUT"
revision = "99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c"
[[projects]]
branch = "master"
+digest = "1:768b555b86742de2f28beb37f1dedce9a75f91f871d75b5717c96399c1a78c08"
name = "github.com/prometheus/common"
packages = [
"expfmt",
"internal/bitbucket.org/ww/goautoneg",
-"model"
+"model",
]
+pruneopts = "NUT"
revision = "7600349dcfe1abd18d72d3a1770870d9800a7801"
[[projects]]
branch = "master"
+digest = "1:c4a213a8d73fbb0b13f717ba7996116602ef18ecb42b91d77405877914cb0349"
name = "github.com/prometheus/procfs"
packages = [
".",
"internal/util",
"nfs",
-"xfs"
+"xfs",
]
+pruneopts = "NUT"
revision = "94663424ae5ae9856b40a9f170762b4197024661"
[[projects]]
+digest = "1:f53493533f0689ff978122bb36801af47fe549828ce786af9166694394c3fa0d"
name = "github.com/robfig/cron"
packages = ["."]
+pruneopts = "NUT"
revision = "df38d32658d8788cd446ba74db4bb5375c4b0cb3"
[[projects]]
-name = "github.com/satori/uuid"
+digest = "1:6bc0652ea6e39e22ccd522458b8bdd8665bf23bdc5a20eec90056e4dc7e273ca"
+name = "github.com/satori/go.uuid"
packages = ["."]
-revision = "879c5887cd475cd7864858769793b2ceb0d44feb"
-version = "v1.1.0"
+pruneopts = "NUT"
+revision = "f58768cc1a7a7e77a3bd49e98cdd21419399b6a3"
+version = "v1.2.0"
[[projects]]
+digest = "1:31c5d934770c8b0698c28eb8576cb39b14e2fcf3c5f2a6e8449116884cd92e3f"
name = "github.com/sirupsen/logrus"
packages = ["."]
+pruneopts = "NUT"
revision = "f006c2ac4710855cf0f916dd6b77acf6b048dc6e"
version = "v1.0.3"
[[projects]]
branch = "master"
+digest = "1:7e6f7748181bd6004ace3f6ccd389a088bac357714364152fde0e5f9e0b588d7"
name = "github.com/spf13/afero"
packages = [
".",
-"mem"
+"mem",
]
+pruneopts = "NUT"
revision = "9be650865eab0c12963d8753212f4f9c66cdcf12"
[[projects]]
+digest = "1:343d44e06621142ab09ae0c76c1799104cdfddd3ffb445d78b1adf8dc3ffaf3d"
name = "github.com/spf13/cobra"
packages = ["."]
+pruneopts = "NUT"
revision = "ef82de70bb3f60c65fb8eebacbb2d122ef517385"
version = "v0.0.3"
[[projects]]
+digest = "1:e3707aeaccd2adc89eba6c062fec72116fe1fc1ba71097da85b4d8ae1668a675"
name = "github.com/spf13/pflag"
packages = ["."]
+pruneopts = "NUT"
revision = "9a97c102cda95a86cec2345a6f09f55a939babf5"
version = "v1.0.2"
[[projects]]
+digest = "1:60a46e2410edbf02b419f833372dd1d24d7aa1b916a990a7370e792fada1eadd"
name = "github.com/stretchr/objx"
packages = ["."]
+pruneopts = "NUT"
revision = "477a77ecc69700c7cdeb1fa9e129548e1c1c393c"
version = "v0.1.1"
[[projects]]
+digest = "1:72cea38d2957d95d18be2287ef9d4b06b89796d2e3070bc7f796bea3a4844381"
name = "github.com/stretchr/testify"
packages = [
"assert",
"mock",
-"require"
+"require",
]
+pruneopts = "NUT"
revision = "f35b8ab0b5a2cef36673838d662e249dd9c94686"
version = "v1.2.2"
[[projects]]
+digest = "1:2f977d7025e73f05091f406514f1d2cca36cc649d2af08d5f5223ebc6c475863"
name = "go.opencensus.io"
packages = [
".",
@@ -386,19 +511,23 @@
"trace",
"trace/internal",
"trace/propagation",
-"trace/tracestate"
+"trace/tracestate",
]
+pruneopts = "NUT"
revision = "79993219becaa7e29e3b60cb67f5b8e82dee11d6"
version = "v0.17.0"
[[projects]]
branch = "master"
+digest = "1:624a05c7c6ed502bf77364cd3d54631383dafc169982fddd8ee77b53c3d9cccf"
name = "golang.org/x/crypto"
packages = ["ssh/terminal"]
+pruneopts = "NUT"
revision = "eb71ad9bd329b5ac0fd0148dd99bd62e8be8e035"
[[projects]]
branch = "master"
+digest = "1:ce8a4c0642d5e3881d1970f39008477671a2a5157d051c36d1618cf6bb669556"
name = "golang.org/x/net"
packages = [
"context",
@@ -408,33 +537,39 @@
"idna",
"internal/timeseries",
"lex/httplex",
-"trace"
+"trace",
]
+pruneopts = "NUT"
revision = "1c05540f6879653db88113bc4a2b70aec4bd491f"
[[projects]]
branch = "master"
+digest = "1:b0fef33b00740f7eeb5198f67ee1642d8d2560e9b428df7fb5f69fb140f5c4d0"
name = "golang.org/x/oauth2"
packages = [
".",
"google",
"internal",
"jws",
-"jwt"
+"jwt",
]
+pruneopts = "NUT"
revision = "9dcd33a902f40452422c2367fefcb95b54f9f8f8"
[[projects]]
branch = "master"
+digest = "1:240624e43a0897823c99c74d446ec6de88134e6920b759815189be1a619113e6"
name = "golang.org/x/sys"
packages = [
"unix",
-"windows"
+"windows",
]
-revision = "43e60d72a8e2bd92ee98319ba9a384a0e9837c08"
+pruneopts = "NUT"
+revision = "6c81ef8f67ca3f42fc9cd71dfbd5f35b0c4b5771"
[[projects]]
branch = "master"
+digest = "1:0f6792185947c44cd78bc6a2f4399c44c7e85d406b3229a27d41f6cd0a8e982b"
name = "golang.org/x/text"
packages = [
"encoding",
@@ -451,18 +586,22 @@
"unicode/bidi",
"unicode/cldr",
"unicode/norm",
-"unicode/rangetable"
+"unicode/rangetable",
]
+pruneopts = "NUT"
revision = "e56139fd9c5bc7244c76116c68e500765bb6db6b"
[[projects]]
branch = "master"
+digest = "1:51a479a09b7ed06b7be5a854e27fcc328718ae0e5ad159f9ddeef12d0326c2e7"
name = "golang.org/x/time"
packages = ["rate"]
+pruneopts = "NUT"
revision = "26559e0f760e39c24d730d3224364aef164ee23f"
[[projects]]
branch = "master"
+digest = "1:a602f48d6caf1bcb3a082346459de0e0dbc6ea9aeabc3cb88255794980644001"
name = "google.golang.org/api"
packages = [
"compute/v1",
@@ -475,11 +614,13 @@
"option",
"storage/v1",
"transport/http",
-"transport/http/internal/propagation"
+"transport/http/internal/propagation",
]
+pruneopts = "NUT"
revision = "3f6e8463aa1d824abe11b439d178c02220079da5"
[[projects]]
+digest = "1:7206d98ec77c90c72ec2c405181a1dcf86965803b6dbc4f98ceab7a5047c37a9"
name = "google.golang.org/appengine"
packages = [
".",
@@ -491,58 +632,84 @@
"internal/modules",
"internal/remote_api",
"internal/urlfetch",
-"urlfetch"
+"urlfetch",
]
+pruneopts = "NUT"
revision = "150dc57a1b433e64154302bdc40b6bb8aefa313a"
version = "v1.0.0"
[[projects]]
branch = "master"
+digest = "1:a2059631b54cdc40db08f8c4dfb39d3c5ec442003506327df2c675a9384b7115"
name = "google.golang.org/genproto"
packages = [
"googleapis/api/annotations",
"googleapis/iam/v1",
-"googleapis/rpc/status"
+"googleapis/rpc/status",
]
+pruneopts = "NUT"
revision = "ee236bd376b077c7a89f260c026c4735b195e459"
[[projects]]
+digest = "1:8274473795baa9e1fc3b36fae1d8af131a03a7ae2456a8e87a6fda86af019f70"
name = "google.golang.org/grpc"
packages = [
".",
+"balancer",
+"balancer/base",
+"balancer/roundrobin",
+"binarylog/grpc_binarylog_v1",
"codes",
"connectivity",
"credentials",
-"grpclb/grpc_lb_v1",
+"credentials/internal",
+"encoding",
+"encoding/proto",
"grpclog",
+"health",
+"health/grpc_health_v1",
"internal",
+"internal/backoff",
+"internal/binarylog",
+"internal/channelz",
+"internal/envconfig",
+"internal/grpcrand",
+"internal/grpcsync",
+"internal/syscall",
+"internal/transport",
"keepalive",
"metadata",
"naming",
"peer",
+"resolver",
+"resolver/dns",
+"resolver/passthrough",
"stats",
"status",
"tap",
-"transport"
]
-revision = "b3ddf786825de56a4178401b7e174ee332173b66"
-version = "v1.5.2"
+pruneopts = "NUT"
+revision = "2fdaae294f38ed9a121193c51ec99fecd3b13eb7"
+version = "v1.19.0"
[[projects]]
+digest = "1:ef72505cf098abdd34efeea032103377bec06abb61d8a06f002d5d296a4b1185"
name = "gopkg.in/inf.v0"
packages = ["."]
+pruneopts = "NUT"
revision = "3887ee99ecf07df5b447e9b00d9c0b2adaa9f3e4"
version = "v0.9.0"
[[projects]]
branch = "v2"
+digest = "1:c85dc78b3426641ebf2a0bbf5b731b5c4613ddb5987dbe218f7e75468dcd56f5"
name = "gopkg.in/yaml.v2"
packages = ["."]
+pruneopts = "NUT"
revision = "eb3733d160e74a9c7e442f435eb3bea458e1d19f"
[[projects]]
+digest = "1:93e9a6515f47aaaf7f1c84617fc8c82db9216f7290c4d4149afeaf6936d9aa5e"
name = "k8s.io/api"
packages = [
"admission/v1beta1",
@@ -557,10 +724,12 @@
"authorization/v1beta1",
"autoscaling/v1",
"autoscaling/v2beta1",
"autoscaling/v2beta2",
"batch/v1",
"batch/v1beta1",
"batch/v2alpha1",
"certificates/v1beta1",
"coordination/v1beta1",
"core/v1",
"events/v1beta1",
"extensions/v1beta1",
@@ -575,22 +744,25 @@
"settings/v1alpha1",
"storage/v1",
"storage/v1alpha1",
"storage/v1beta1"
"storage/v1beta1",
]
revision = "072894a440bdee3a891dea811fe42902311cd2a3"
version = "kubernetes-1.11.0"
pruneopts = "NUT"
revision = "fd83cbc87e7632ccd8bbab63d2b673d4e0c631cc"
version = "kubernetes-1.12.0"
[[projects]]
branch = "master"
digest = "1:b8a1dcc5f4e559b7af185ba12dd341cb8c175ea3d36227a02699b251ae5fde05"
name = "k8s.io/apiextensions-apiserver"
packages = [
"pkg/apis/apiextensions",
"pkg/apis/apiextensions/v1beta1"
"pkg/apis/apiextensions/v1beta1",
]
revision = "07bbbb7a28a34c56bf9d1b192a88cc9b2350095e"
pruneopts = "NUT"
revision = "1748dfb29e8a4432b78514bc88a1b07937a9805a"
version = "kubernetes-1.12.0"
[[projects]]
branch = "release-1.11"
digest = "1:ca279c0bb7a72618aff5b77440d5a5e2f92857fdb7e0e4c7a1a77a7895929c49"
name = "k8s.io/apimachinery"
packages = [
"pkg/api/equality",
@@ -627,6 +799,7 @@
"pkg/util/intstr",
"pkg/util/json",
"pkg/util/mergepatch",
"pkg/util/naming",
"pkg/util/net",
"pkg/util/remotecommand",
"pkg/util/runtime",
@@ -640,11 +813,26 @@
"pkg/watch",
"third_party/forked/golang/json",
"third_party/forked/golang/netutil",
"third_party/forked/golang/reflect"
"third_party/forked/golang/reflect",
]
revision = "103fd098999dc9c0c88536f5c9ad2e5da39373ae"
pruneopts = "NUT"
revision = "6dd46049f39503a1fc8d65de4bd566829e95faff"
version = "kubernetes-1.12.0"
[[projects]]
branch = "release-1.12"
digest = "1:7991e5074de01462e0cf6ef77060895b50e9026d16152a6e925cb99b67a1f8ae"
name = "k8s.io/cli-runtime"
packages = [
"pkg/genericclioptions",
"pkg/genericclioptions/printers",
"pkg/genericclioptions/resource",
]
pruneopts = "NUT"
revision = "11047e25a94a7eaa541b92a8bbfd3e1243607219"
[[projects]]
digest = "1:5d9f76731330e62bede1e4eb9d519b282a26621a5368e5db1a18a8eb1ccda1ff"
name = "k8s.io/client-go"
packages = [
"discovery",
@@ -661,12 +849,15 @@
"informers/autoscaling",
"informers/autoscaling/v1",
"informers/autoscaling/v2beta1",
"informers/autoscaling/v2beta2",
"informers/batch",
"informers/batch/v1",
"informers/batch/v1beta1",
"informers/batch/v2alpha1",
"informers/certificates",
"informers/certificates/v1beta1",
"informers/coordination",
"informers/coordination/v1beta1",
"informers/core",
"informers/core/v1",
"informers/events",
@@ -704,10 +895,12 @@
"kubernetes/typed/authorization/v1beta1",
"kubernetes/typed/autoscaling/v1",
"kubernetes/typed/autoscaling/v2beta1",
"kubernetes/typed/autoscaling/v2beta2",
"kubernetes/typed/batch/v1",
"kubernetes/typed/batch/v1beta1",
"kubernetes/typed/batch/v2alpha1",
"kubernetes/typed/certificates/v1beta1",
"kubernetes/typed/coordination/v1beta1",
"kubernetes/typed/core/v1",
"kubernetes/typed/events/v1beta1",
"kubernetes/typed/extensions/v1beta1",
@@ -729,10 +922,12 @@
"listers/apps/v1beta2",
"listers/autoscaling/v1",
"listers/autoscaling/v2beta1",
"listers/autoscaling/v2beta2",
"listers/batch/v1",
"listers/batch/v1beta1",
"listers/batch/v2alpha1",
"listers/certificates/v1beta1",
"listers/coordination/v1beta1",
"listers/core/v1",
"listers/events/v1beta1",
"listers/extensions/v1beta1",
@@ -781,32 +976,128 @@
"util/integer",
"util/jsonpath",
"util/retry",
"util/workqueue"
"util/workqueue",
]
revision = "7d04d0e2a0a1a4d4a1cd6baa432a2301492e4e65"
version = "v8.0.0"
pruneopts = "NUT"
revision = "1638f8970cefaa404ff3a62950f88b08292b2696"
version = "v9.0.0"
[[projects]]
branch = "master"
digest = "1:a2c842a1e0aed96fd732b535514556323a6f5edfded3b63e5e0ab1bce188aa54"
name = "k8s.io/kube-openapi"
packages = ["pkg/util/proto"]
pruneopts = "NUT"
revision = "d83b052f768a50a309c692a9c271da3f3276ff88"
[[projects]]
digest = "1:8a9b1e755afd7ea778cd451a955977eb3fe0abcc4e32079644b6b7afc42d7ff8"
name = "k8s.io/kubernetes"
packages = [
"pkg/kubectl/genericclioptions",
"pkg/kubectl/genericclioptions/printers",
"pkg/kubectl/genericclioptions/resource",
"pkg/kubectl/scheme",
"pkg/printers"
"pkg/printers",
]
revision = "91e7b4fd31fcd3d5f436da26c980becec37ceefe"
version = "v1.11.0"
pruneopts = "NUT"
revision = "51dd616cdd25d6ee22c83a858773b607328a18ec"
version = "v1.12.5"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "7979aebee2c67e7fa68bddf050ef32b75a2f51145d26d00a54f6bf489af635a2"
input-imports = [
"cloud.google.com/go/storage",
"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute",
"github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2018-02-01/storage",
"github.com/Azure/azure-sdk-for-go/storage",
"github.com/Azure/go-autorest/autorest",
"github.com/Azure/go-autorest/autorest/adal",
"github.com/Azure/go-autorest/autorest/azure",
"github.com/Azure/go-autorest/autorest/to",
"github.com/aws/aws-sdk-go/aws",
"github.com/aws/aws-sdk-go/aws/awserr",
"github.com/aws/aws-sdk-go/aws/credentials",
"github.com/aws/aws-sdk-go/aws/endpoints",
"github.com/aws/aws-sdk-go/aws/request",
"github.com/aws/aws-sdk-go/aws/session",
"github.com/aws/aws-sdk-go/aws/signer/v4",
"github.com/aws/aws-sdk-go/service/ec2",
"github.com/aws/aws-sdk-go/service/s3",
"github.com/aws/aws-sdk-go/service/s3/s3manager",
"github.com/evanphx/json-patch",
"github.com/golang/glog",
"github.com/golang/protobuf/proto",
"github.com/hashicorp/go-hclog",
"github.com/hashicorp/go-plugin",
"github.com/joho/godotenv",
"github.com/pkg/errors",
"github.com/prometheus/client_golang/prometheus",
"github.com/prometheus/client_golang/prometheus/promhttp",
"github.com/robfig/cron",
"github.com/satori/go.uuid",
"github.com/sirupsen/logrus",
"github.com/spf13/afero",
"github.com/spf13/cobra",
"github.com/spf13/pflag",
"github.com/stretchr/testify/assert",
"github.com/stretchr/testify/mock",
"github.com/stretchr/testify/require",
"golang.org/x/net/context",
"golang.org/x/oauth2",
"golang.org/x/oauth2/google",
"google.golang.org/api/compute/v1",
"google.golang.org/api/googleapi",
"google.golang.org/api/iterator",
"google.golang.org/api/option",
"google.golang.org/grpc",
"google.golang.org/grpc/codes",
"google.golang.org/grpc/status",
"k8s.io/api/apps/v1",
"k8s.io/api/apps/v1beta1",
"k8s.io/api/batch/v1",
"k8s.io/api/core/v1",
"k8s.io/api/rbac/v1",
"k8s.io/api/rbac/v1beta1",
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1",
"k8s.io/apimachinery/pkg/api/equality",
"k8s.io/apimachinery/pkg/api/errors",
"k8s.io/apimachinery/pkg/api/meta",
"k8s.io/apimachinery/pkg/apis/meta/v1",
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured",
"k8s.io/apimachinery/pkg/labels",
"k8s.io/apimachinery/pkg/runtime",
"k8s.io/apimachinery/pkg/runtime/schema",
"k8s.io/apimachinery/pkg/runtime/serializer",
"k8s.io/apimachinery/pkg/types",
"k8s.io/apimachinery/pkg/util/clock",
"k8s.io/apimachinery/pkg/util/duration",
"k8s.io/apimachinery/pkg/util/errors",
"k8s.io/apimachinery/pkg/util/runtime",
"k8s.io/apimachinery/pkg/util/sets",
"k8s.io/apimachinery/pkg/util/wait",
"k8s.io/apimachinery/pkg/watch",
"k8s.io/client-go/discovery",
"k8s.io/client-go/discovery/fake",
"k8s.io/client-go/dynamic",
"k8s.io/client-go/informers",
"k8s.io/client-go/informers/core/v1",
"k8s.io/client-go/kubernetes",
"k8s.io/client-go/kubernetes/scheme",
"k8s.io/client-go/kubernetes/typed/core/v1",
"k8s.io/client-go/kubernetes/typed/rbac/v1",
"k8s.io/client-go/kubernetes/typed/rbac/v1beta1",
"k8s.io/client-go/listers/core/v1",
"k8s.io/client-go/plugin/pkg/client/auth/azure",
"k8s.io/client-go/plugin/pkg/client/auth/gcp",
"k8s.io/client-go/plugin/pkg/client/auth/oidc",
"k8s.io/client-go/rest",
"k8s.io/client-go/restmapper",
"k8s.io/client-go/testing",
"k8s.io/client-go/tools/cache",
"k8s.io/client-go/tools/clientcmd",
"k8s.io/client-go/tools/remotecommand",
"k8s.io/client-go/util/flowcontrol",
"k8s.io/client-go/util/workqueue",
"k8s.io/kubernetes/pkg/printers",
]
solver-name = "gps-cdcl"
solver-version = 1


@@ -31,31 +31,28 @@
[[constraint]]
name = "k8s.io/kubernetes"
version = "~1.11"
version = "~1.12"
[[constraint]]
name = "k8s.io/client-go"
version = "~8.0"
version = "~9.0"
[[constraint]]
name = "k8s.io/apimachinery"
version = "kubernetes-1.11.0"
version = "kubernetes-1.12.0"
[[constraint]]
name = "k8s.io/api"
version = "kubernetes-1.11.0"
version = "kubernetes-1.12.0"
# vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:104:16:
# unknown field 'CaseSensitive' in struct literal of type jsoniter.Config
[[constraint]]
name = "k8s.io/apiextensions-apiserver"
version = "kubernetes-1.12.0"
# k8s.io/client-go v9.0 uses f2b4162afba35581b6d4a50d3b8f34e33c144682 (released in v1.1.4)
[[override]]
name = "github.com/json-iterator/go"
revision = "f2b4162afba35581b6d4a50d3b8f34e33c144682"
# vendor/k8s.io/client-go/plugin/pkg/client/auth/azure/azure.go:300:25:
# cannot call non-function spt.Token (type adal.Token)
[[override]]
name = "github.com/Azure/go-autorest"
revision = "1ff28809256a84bb6966640ff3d0371af82ccba4"
version = "~1.1.4"
#
# Cloud provider packages
@@ -66,7 +63,12 @@
[[constraint]]
name = "github.com/Azure/azure-sdk-for-go"
version = "~11.3.0-beta"
version = "~19.0.0"
# k8s.io/client-go v9.0 uses bca49d5b51a50dc5bb17bbf6204c711c6dbded06 (v10.14.0)
[[constraint]]
name = "github.com/Azure/go-autorest"
version = "~10.14.0"
[[constraint]]
name = "cloud.google.com/go"
@@ -91,15 +93,9 @@
name = "github.com/robfig/cron"
revision = "df38d32658d8788cd446ba74db4bb5375c4b0cb3"
# TODO(1.0) this repo is a redirect to github.com/satori/go.uuid. Our
# current version of azure-sdk-for-go references this redirect, so
# use it so we don't get a duplicate copy of this dependency.
# Once our azure-sdk-for-go is updated to a newer version (where
# their dependency has changed to .../go.uuid), switch this to
# github.com/satori/go.uuid
[[constraint]]
name = "github.com/satori/uuid"
version = "1.1.0"
name = "github.com/satori/go.uuid"
version = "~1.2.0"
[[constraint]]
name = "github.com/spf13/afero"
@@ -119,4 +115,20 @@
[[constraint]]
name = "github.com/hashicorp/go-plugin"
revision = "3f118e8ee104b6f22aeb12453fab56aed1356186"
[[constraint]]
name = "github.com/golang/protobuf"
version = "~v1.3.1"
[[constraint]]
name = "google.golang.org/grpc"
version = "~v1.19.0"
[[constraint]]
name = "github.com/joho/godotenv"
version = "~v1.3.0"
[[override]]
name = "golang.org/x/sys"
branch = "master"


@@ -1,6 +1,6 @@
# Copyright 2016 The Kubernetes Authors.
#
# Modifications Copyright 2017 the Heptio Ark contributors.
# Modifications Copyright 2017 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,10 +15,10 @@
# limitations under the License.
# The binary to build (just the basename).
BIN ?= ark
BIN ?= velero
# This repo's root import path (under GOPATH).
PKG := github.com/heptio/ark
PKG := github.com/heptio/velero
# Where to push the docker image.
REGISTRY ?= gcr.io/heptio-images
@@ -47,7 +47,7 @@ GOARCH = $(word 2, $(platform_temp))
# TODO(ncdc): support multiple image architectures once gcr.io supports manifest lists
# Set default base image dynamically for each arch
ifeq ($(GOARCH),amd64)
DOCKERFILE ?= Dockerfile-$(BIN).alpine
DOCKERFILE ?= Dockerfile-$(BIN)
endif
#ifeq ($(GOARCH),arm)
# DOCKERFILE ?= Dockerfile.arm #armel/busybox
@@ -63,7 +63,7 @@ IMAGE = $(REGISTRY)/$(BIN)
# If you want to build AND push all containers, see the 'all-push' rule.
all:
@$(MAKE) build
@$(MAKE) build BIN=ark-restic-restore-helper
@$(MAKE) build BIN=velero-restic-restore-helper
build-%:
@$(MAKE) --no-print-directory ARCH=$* build
@@ -104,7 +104,7 @@ _output/bin/$(GOOS)/$(GOARCH)/$(BIN): build-dirs
TTY := $(shell tty -s && echo "-t")
BUILDER_IMAGE := ark-builder
BUILDER_IMAGE := velero-builder
# Example: make shell CMD="date > datefile"
shell: build-dirs build-image
@@ -146,7 +146,7 @@ endif
all-containers:
$(MAKE) container
$(MAKE) container BIN=ark-restic-restore-helper
$(MAKE) container BIN=velero-restic-restore-helper
$(MAKE) build-fsfreeze
container: verify test .container-$(DOTFILE_IMAGE) container-name
@@ -160,7 +160,7 @@ container-name:
all-push:
$(MAKE) push
$(MAKE) push BIN=ark-restic-restore-helper
$(MAKE) push BIN=velero-restic-restore-helper
$(MAKE) push-fsfreeze


@@ -1,35 +1,39 @@
# Heptio Ark
![100]
**Maintainers:** [Heptio][0]
[![Build Status][1]][2] <a href="https://zenhub.com"><img src="https://raw.githubusercontent.com/ZenHubIO/support/master/zenhub-badge.png"></a>
[![Build Status][1]][2]
## Overview
Ark gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. Ark lets you:
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. Velero lets you:
* Take backups of your cluster and restore in case of loss.
* Copy cluster resources to other clusters.
* Replicate your production environment for development and testing environments.
Ark consists of:
Velero consists of:
* A server that runs on your cluster
* A command-line client that runs locally
You can run Ark in clusters on a cloud provider or on-premises. For detailed information, see [Compatible Storage Providers][99].
You can run Velero in clusters on a cloud provider or on-premises. For detailed information, see [Compatible Storage Providers][99].
## Breaking changes
## Installation
Ark version 0.10.0 introduces a number of breaking changes. Before you upgrade to version 0.10.0, make sure to read [the documentation on upgrading][98].
We strongly recommend that you use an [official release][6] of Velero. The tarballs for each release contain the
command-line client **and** version-specific sample YAML files for deploying Velero to your cluster.
Follow the instructions under the **Install** section of [our documentation][29] to get started.
_The code and sample YAML files in the master branch of the Velero repository are under active development and are not guaranteed to be stable. Use them at your own risk!_
## More information
[The documentation][29] provides a getting started guide, plus information about building from source, architecture, extending Ark, and more.
[The documentation][29] provides a getting started guide, plus information about building from source, architecture, extending Velero, and more.
Please use the version selector at the top of the site to ensure you are using the appropriate documentation for your version of Velero.
## Troubleshooting
If you encounter issues, review the [troubleshooting docs][30], [file an issue][4], or talk to us on the [#ark-dr channel][25] on the Kubernetes Slack server.
If you encounter issues, review the [troubleshooting docs][30], [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
## Contributing
@@ -51,29 +55,27 @@ Feedback and discussion are available on [the mailing list][24].
See [the list of releases][6] to find out about feature changes.
[0]: https://github.com/heptio
[1]: https://travis-ci.org/heptio/ark.svg?branch=master
[2]: https://travis-ci.org/heptio/ark
[1]: https://travis-ci.org/heptio/velero.svg?branch=master
[2]: https://travis-ci.org/heptio/velero
[4]: https://github.com/heptio/ark/issues
[5]: https://github.com/heptio/ark/blob/master/CONTRIBUTING.md
[6]: https://github.com/heptio/ark/releases
[4]: https://github.com/heptio/velero/issues
[5]: https://github.com/heptio/velero/blob/master/CONTRIBUTING.md
[6]: https://github.com/heptio/velero/releases
[8]: https://github.com/heptio/ark/blob/master/CODE_OF_CONDUCT.md
[8]: https://github.com/heptio/velero/blob/master/CODE_OF_CONDUCT.md
[9]: https://kubernetes.io/docs/setup/
[10]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-with-homebrew-on-macos
[11]: https://kubernetes.io/docs/tasks/tools/install-kubectl/#tabset-1
[12]: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/README.md
[14]: https://github.com/kubernetes/kubernetes
[24]: http://j.hept.io/ark-list
[25]: https://kubernetes.slack.com/messages/ark-dr
[26]: https://github.com/heptio/ark/blob/master/docs/zenhub.md
[24]: https://groups.google.com/forum/#!forum/projectvelero
[25]: https://kubernetes.slack.com/messages/velero
[26]: https://github.com/heptio/velero/blob/master/docs/zenhub.md
[29]: https://heptio.github.io/ark/
[29]: https://heptio.github.io/velero/
[30]: /docs/troubleshooting.md
[98]: /docs/upgrading-to-v0.10.md
[99]: /docs/support-matrix.md
[100]: /docs/img/velero.png


@@ -1,5 +1,5 @@
# Ark Support
# Velero Support
Thanks for trying out Ark! We welcome all feedback, please consider joining our mailing list:
Thanks for trying out Velero! We welcome all feedback; please consider joining our mailing list:
- [Mailing List](http://j.hept.io/ark-list)
- [Mailing List](https://groups.google.com/forum/#!forum/velero)


@@ -1,6 +1,18 @@
- [v0.10.2](#v0102)
- [v0.10.1](#v0101)
- [v0.10.0](#v0100)
## v0.10.2
#### 2019-02-28
### Download
- https://github.com/heptio/ark/releases/tag/v0.10.2
### Changes
* upgrade restic to v0.9.4 & replace --hostname flag with --host (#1156, @skriss)
* use 'restic stats' instead of 'restic check' to determine if repo exists (#1171, @skriss)
* Fix concurrency bug in code ensuring restic repository exists (#1235, @skriss)
## v0.10.1
#### 2019-01-10
@@ -245,5 +257,5 @@ need to be updated for v0.10.
- [eabef085](https://github.com/heptio/ark/commit/eabef085) Update generated Ark code based on the 1.11 k8s.io/code-generator script
- [f5eac0b4](https://github.com/heptio/ark/commit/f5eac0b4) Update vendored library code for Kubernetes 1.11
[1]: https://github.com/heptio/ark/blob/master/docs/upgrading-to-v0.10.md
[1]: https://heptio.github.io/velero/v0.10.0/upgrading-to-v0.10
[2]: locations.md


@@ -0,0 +1,25 @@
- [v0.11.0](#v0110)
## v0.11.0
#### 2019-02-28
### Download
- https://github.com/heptio/velero/releases/tag/v0.11.0
### Highlights
* Heptio Ark is now Velero! This release is the first one to use the new name. For details on the changes and how to migrate to v0.11, see the [migration instructions][1]. **Please follow the instructions to ensure a successful upgrade to v0.11.**
* Restic has been upgraded to v0.9.4, which brings significantly faster restores thanks to a new multi-threaded restorer.
* Velero now waits for terminating namespaces and persistent volumes to delete before attempting to restore them, rather than trying and failing to restore them while they're being deleted.
### All Changes
* Fix concurrency bug in code ensuring restic repository exists (#1235, @skriss)
* Wait for PVs and namespaces to delete before attempting to restore them. (#826, @nrb)
* Set the zones for GCP regional disks on restore. This requires the `compute.zones.get` permission on the GCP serviceaccount in order to work correctly. (#1200, @nrb)
* Renamed Heptio Ark to Velero. Changed internal imports, environment variables, and binary name. (#1184, @nrb)
* use 'restic stats' instead of 'restic check' to determine if repo exists (#1171, @skriss)
* upgrade restic to v0.9.4 & replace --hostname flag with --host (#1156, @skriss)
* Clarify restore log when object unchanged (#1153, @daved)
* Add backup-version file in backup tarball. (#1117, @wwitzel3)
* add ServerStatusRequest CRD and show server version in `ark version` output (#1116, @skriss)
[1]: https://heptio.github.io/velero/v0.11.0/migrating-to-velero


@@ -77,9 +77,9 @@
here are the steps you can take to upgrade:
1. Execute the steps from the **Credentials and configuration** section for your cloud:
* [AWS](https://heptio.github.io/ark/v0.8.0/aws-config#credentials-and-configuration)
* [Azure](https://heptio.github.io/ark/v0.8.0/azure-config#credentials-and-configuration)
* [GCP](https://heptio.github.io/ark/v0.8.0/gcp-config#credentials-and-configuration)
* [AWS](https://heptio.github.io/velero/v0.8.0/aws-config#credentials-and-configuration)
* [Azure](https://heptio.github.io/velero/v0.8.0/azure-config#credentials-and-configuration)
* [GCP](https://heptio.github.io/velero/v0.8.0/gcp-config#credentials-and-configuration)
When you get to the secret creation step, if you don't have your `credentials-ark` file handy,
you can copy the existing secret from your `heptio-ark-server` namespace into the `heptio-ark` namespace:
@@ -95,6 +95,6 @@
```
3. Execute the commands from the **Start the server** section for your cloud:
* [AWS](https://heptio.github.io/ark/v0.8.0/aws-config#start-the-server)
* [Azure](https://heptio.github.io/ark/v0.8.0/azure-config#start-the-server)
* [GCP](https://heptio.github.io/ark/v0.8.0/gcp-config#start-the-server)
* [AWS](https://heptio.github.io/velero/v0.8.0/aws-config#start-the-server)
* [Azure](https://heptio.github.io/velero/v0.8.0/azure-config#start-the-server)
* [GCP](https://heptio.github.io/velero/v0.8.0/gcp-config#start-the-server)

changelogs/CHANGELOG-1.0.md (new file, 114 lines)

@@ -0,0 +1,114 @@
## v1.0.0-alpha.1
#### 2019-04-15
### Download
- https://github.com/heptio/velero/releases/tag/v1.0.0-alpha.1
### Highlights
We're excited to release our first alpha for v1.0! Please take it for a spin in your non-critical environments. Although we've finished the majority of the planned development work for v1.0, we are still working on a handful of items, so don't consider this alpha release to be fully feature-complete. Here's a quick rundown of the major changes in this release:
- We've added a new command, `velero install`, to make it easier to get up and running with Velero (see the sketch after this list)
- We've made a bunch of improvements to the plugin framework:
- we've reorganized the relevant packages to minimize the import surface for plugin authors
- all plugins are now wrapped in panic handlers that will report information on panics back to Velero
- Velero's `--log-level` flag is now passed to plugin implementations
- Errors logged within plugins are now annotated with the file/line of where the error occurred
- Restore item actions can now optionally return a list of additional related items that should be restored
- Restore item actions can now indicate that an item *should not* be restored
- The restic restore helper image used by Velero can now optionally be overridden via config map
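To give a flavor of the new command, here is a minimal sketch of installing Velero on AWS with it. The exact flags (`--provider`, `--bucket`, `--secret-file`) are assumptions about this alpha's CLI and may differ:

```bash
# Minimal sketch (assumed flags): install the Velero server components into
# the cluster, pointing at an existing S3 bucket and an IAM credentials file.
velero install \
    --provider aws \
    --bucket velero-backups \
    --secret-file ./credentials-velero
```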
### Breaking & Notable Changes
#### API
* All legacy Ark data types and pre-1.0 compatibility code have been removed. Users should migrate any backups created pre-v0.11.0 with the v0.11.1 migration command (not yet released)
#### Azure
* During installation, the `cloud-credentials` secret can now be created from a file, whose contents look like the following:
```
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
```
When using this method, the `cloud-credentials` secret should be mounted as a volume into the Velero deployment and daemon set, at the path `/credentials`. Additionally, the `$AZURE_CREDENTIALS_FILE` environment variable should be set to `/credentials/cloud` (the location of the file within the Velero pods). Note that `velero install` always uses this method of providing credentials for Azure.
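A minimal sketch of wiring this up, assuming the env file is saved locally as `credentials-velero` and the server runs in the `velero` namespace:

```bash
# Create the cloud-credentials secret from the env file. Keying the file as
# "cloud" makes it appear at /credentials/cloud once the secret is mounted
# as a volume at /credentials in the Velero deployment and daemon set.
kubectl create secret generic cloud-credentials \
    --namespace velero \
    --from-file cloud=credentials-velero
```

The deployment and daemon set then only need `AZURE_CREDENTIALS_FILE=/credentials/cloud` in their environment.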
#### Image
* The base container image has been switched to `debian:stretch-slim`
#### Plugin Development
* `BlockStore` plugins are now named `VolumeSnapshotter` plugins
* Plugin APIs have moved to reduce the import surface:
* Plugin gRPC servers live in `github.com/heptio/velero/pkg/plugin/framework`
* Plugin interface types live in `github.com/heptio/velero/pkg/plugin/velero`
* RestoreItemAction interface now takes the original item from the backup as a parameter
* RestoreItemAction plugins can now return additional items to restore
* RestoreItemAction plugins can now skip restoring an item
* Plugins may now send stack traces with errors to the Velero server, so that the errors may be put into the server log
* Plugins must now be "namespaced," using the `example.domain.com/plugin-name` format
* For external ObjectStore and VolumeSnapshotter plugins, this name will also be the provider name in BackupStorageLocation and VolumeSnapshotLocation objects
* `--log-level` flag is now passed to all plugins
#### Validation
* Configs for Azure, AWS, and GCP are now checked for invalid or extra keys, and the server is halted if any are found
### All Changes
* change container base images to debian:stretch-slim and upgrade to go 1.12 (#1365, @skriss)
* Azure: allow credentials to be provided in a .env file (#1364, @skriss)
* remove deprecated code in preparation for v1.0 release:
- remove ark.heptio.com API group
- remove support for reading ark-backup.json files from object storage
- remove Ark field from RestoreResult type
- remove support for "hook.backup.ark.heptio.com/..." annotations for specifying hooks
- remove support for $HOME/.config/ark/ client config directory
- remove support for restoring Azure snapshots using short snapshot ID formats in backup metadata
- stop applying "velero-restore" label to restored resources and remove it from the API pkg
- remove code that strips the "gc.ark.heptio.com" finalizer from backups
- remove support for "backup.ark.heptio.com/..." annotations for requesting restic backups
- remove "ark"-prefixed prometheus metrics
- remove VolumeBackups field and related code from Backup's status (#1323, @skriss)
* Add velero install command for basic use cases. (#1287, @nrb)
* Support non-namespaced names for built-in plugins (#1366, @nrb)
* instantiate the plugin manager with the per-restore logger so plugin logs are captured in the per-restore log (#1358, @skriss)
* Validate that there can't be any duplicate plugin name, and that the name format is `example.io/name`. (#1339, @carlisia)
* Added ability to dynamically disable controllers (#1326, @amanw)
* set default TTL for backups (#1352, @vorar)
* aws/azure/gcp: fail fast if unsupported keys are provided in BackupStorageLocation/VolumeSnapshotLocation config (#1338, @skriss)
* velero backup logs & velero restore logs: show helpful error message if backup/restore does not exist or is not finished processing (#1337, @skriss)
* Add support for allowing a RestoreItemAction to skip item restore. (#1336, @sseago)
* Improve error message around invalid S3 URLs, and gracefully handle trailing backslashes. (#1331, @skriss)
* set backup's start timestamp before patching it to InProgress so start times display in `velero backup get` while in progress (#1330, @skriss)
* rename BlockStore plugin to VolumeSnapshotter (#1321, @skriss)
* Bump plugin ProtocolVersion to version 2 (#1319, @carlisia)
* remove Warning field from restore item action output (#1318, @skriss)
* Fix for #1312, use describe to determine if AWS EBS snapshot is encrypted and explicitly pass that value in EC2 CreateVolume call. (#1316, @mstump)
* Allow restic restore helper image name to be optionally specified via ConfigMap (#1311, @skriss)
* compile only once to lower the initialization cost for regexp.MustCompile. (#1306, @pei0804)
* enable restore item actions to return additional related items to be restored; have pods return PVCs and PVCs return PVs (#1304, @skriss)
* log error locations from plugin logger, and don't overwrite them in the client logger if they exist already (#1301, @skriss)
* Send stack traces from plugin errors to Velero via gRPC so error location info can be logged (#1300, @skriss)
* check for and exclude hostPath-based persistent volumes from restic backup (#1297, @skriss)
* make resticrepositories non-restorable resources (#1296, @skriss)
* gracefully handle failed API groups from the discovery API (#1293, @fabito)
* Collect 3 new metrics: backup_deletion_{attempt|failure|success}_total (#1280, @fabito)
* Pass --log-level flag to internal/external plugins, matching Velero server's log level (#1278, @skriss)
* AWS EBS Volume IDs now contain AZ (#1274, @tsturzl)
* add panic handlers to all server-side plugin methods (#1270, @skriss)
* Move all the interfaces and associated types necessary to implement all of the Velero plugins under the new package `pkg/plugin/velero`. (#1264, @carlisia)
* Update velero restore to not keep every single file open during extraction of the data (#1261, @asaf)
* remove restore code that waits for a PV to become Available (#1254, @skriss)
* Improve `describe` output:
* Move Phase to right under Metadata (name/namespace/label/annotations)
* Move Validation errors: section right after Phase: section and only show it if the item has a phase of FailedValidation
* For restores move Warnings and Errors under Validation errors. Leave their display as is. (#1248, @DheerajSShetty)
* don't remove storageclass from a persistent volume when restoring it (#1246, @skriss)
* Need to defer closing the ReadCloser in ObjectStoreGRPCServer.GetObject (#1236, @DheerajSShetty)
* update Kubernetes dependencies to match v1.12, and update Azure SDK to v19.0.0 (GA) (#1231, @skriss)
* remove pkg/util/collections/map_utils.go, replace with structured API types and apimachinery's unstructured helpers (#1146, @skriss)
* Add original resource (from backup) to restore item action interface (#1123, @mwieczorek)
### Coming in Future Alpha/Beta Releases:
- backup & restore phases will be modified to more clearly indicate successes, failures, and partial failures
- additional safety checks to ensure backups are never overwritten in object storage
- revised installation documentation that takes advantage of the `velero install` command
- as many additional stability and UX issues as we can get to


@@ -0,0 +1 @@
Add original resource (from backup) to restore item action interface


@@ -0,0 +1 @@
remove pkg/util/collections/map_utils.go, replace with structured API types and apimachinery's unstructured helpers


@@ -0,0 +1 @@
update Kubernetes dependencies to match v1.12, and update Azure SDK to v19.0.0 (GA)


@@ -0,0 +1 @@
Need to defer closing the ReadCloser in ObjectStoreGRPCServer.GetObject


@@ -0,0 +1 @@
don't remove storageclass from a persistent volume when restoring it


@@ -0,0 +1,6 @@
Improve `describe` output
* Move Phase to right under Metadata (name/namespace/label/annotations)
* Move Validation errors: section right after Phase: section and only
show it if the item has a phase of FailedValidation
* For restores move Warnings and Errors under Validation errors. Leave
their display as is.


@@ -0,0 +1 @@
remove restore code that waits for a PV to become Available


@@ -0,0 +1 @@
Update velero restore to not keep every single file open during extraction of the data


@@ -0,0 +1 @@
Move all the interfaces and associated types necessary to implement all of the Velero plugins under the new package `velero`.


@@ -0,0 +1 @@
add panic handlers to all server-side plugin methods


@@ -0,0 +1 @@
AWS EBS Volume IDs now contain AZ


@@ -0,0 +1 @@
Pass --log-level flag to internal/external plugins, matching Velero server's log level


@@ -0,0 +1 @@
Collect 3 new metrics: backup_deletion_{attempt|failure|success}_total


@@ -0,0 +1 @@
Add velero install command for basic use cases.


@@ -0,0 +1 @@
gracefully handle failed API groups from the discovery API


@@ -0,0 +1 @@
make resticrepositories non-restorable resources


@@ -0,0 +1 @@
check for and exclude hostPath-based persistent volumes from restic backup


@@ -0,0 +1 @@
Send stack traces from plugin errors to Velero via gRPC so error location info can be logged


@@ -0,0 +1 @@
log error locations from plugin logger, and don't overwrite them in the client logger if they exist already


@@ -0,0 +1 @@
enable restore item actions to return additional related items to be restored; have pods return PVCs and PVCs return PVs


@@ -0,0 +1 @@
compile only once to lower the initialization cost for regexp.MustCompile.


@@ -0,0 +1 @@
Allow restic restore helper image name to be optionally specified via ConfigMap


@@ -0,0 +1 @@
Fix for #1312, use describe to determine if AWS EBS snapshot is encrypted and explicitly pass that value in EC2 CreateVolume call.


@@ -0,0 +1 @@
remove Warning field from restore item action output


@@ -0,0 +1 @@
Bump plugin ProtocolVersion to version 2


@@ -0,0 +1 @@
rename BlockStore plugin to VolumeSnapshotter


@@ -0,0 +1,12 @@
remove deprecated code in preparation for v1.0 release:
- remove ark.heptio.com API group
- remove support for reading ark-backup.json files from object storage
- remove Ark field from RestoreResult type
- remove support for "hook.backup.ark.heptio.com/..." annotations for specifying hooks
- remove support for $HOME/.config/ark/ client config directory
- remove support for restoring Azure snapshots using short snapshot ID formats in backup metadata
- stop applying "velero-restore" label to restored resources and remove it from the API pkg
- remove code that strips the "gc.ark.heptio.com" finalizer from backups
- remove support for "backup.ark.heptio.com/..." annotations for requesting restic backups
- remove "ark"-prefixed prometheus metrics
- remove VolumeBackups field and related code from Backup's status

View File

@@ -0,0 +1 @@
Added ability to dynamically disable controllers


@@ -0,0 +1 @@
set backup's start timestamp before patching it to InProgress so start times display in `velero backup get` while in progress


@@ -0,0 +1 @@
Improve error message around invalid S3 URLs, and gracefully handle trailing backslashes.


@@ -0,0 +1 @@
Add support for allowing a RestoreItemAction to skip item restore.


@@ -0,0 +1 @@
velero backup logs & velero restore logs: show helpful error message if backup/restore does not exist or is not finished processing


@@ -0,0 +1 @@
aws/azure/gcp: fail fast if unsupported keys are provided in BackupStorageLocation/VolumeSnapshotLocation config


@@ -0,0 +1 @@
Validate that there can't be any duplicate plugin name, and that the name format is `example.io/name`.


@@ -0,0 +1 @@
set default TTL for backups


@@ -0,0 +1 @@
instantiate the plugin manager with the per-restore logger so plugin logs are captured in the per-restore log


@@ -0,0 +1,7 @@
Azure: allow credentials to be provided in a .env file (path specified by $AZURE_CREDENTIALS_FILE), formatted like:
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}


@@ -0,0 +1 @@
change container base images to debian:stretch-slim and upgrade to go 1.12


@@ -0,0 +1 @@
Support non-namespaced names for built-in plugins


@@ -1,5 +1,5 @@
/*
Copyright 2018 the Heptio Ark contributors.
Copyright 2018 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -45,7 +45,7 @@ func main() {
}
// done returns true if for each directory under /restores, a file exists
// within the .ark/ subdirectory whose name is equal to os.Args[1], or
// within the .velero/ subdirectory whose name is equal to os.Args[1], or
// false otherwise
func done() bool {
children, err := ioutil.ReadDir("/restores")
@@ -60,7 +60,7 @@ func done() bool {
continue
}
doneFile := filepath.Join("/restores", child.Name(), ".ark", os.Args[1])
doneFile := filepath.Join("/restores", child.Name(), ".velero", os.Args[1])
if _, err := os.Stat(doneFile); os.IsNotExist(err) {
fmt.Printf("Not found: %s\n", doneFile)


@@ -1,5 +1,5 @@
/*
Copyright 2017 the Heptio Ark contributors.
Copyright 2017 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -22,8 +22,8 @@ import (
"github.com/golang/glog"
"github.com/heptio/ark/pkg/cmd"
"github.com/heptio/ark/pkg/cmd/ark"
"github.com/heptio/velero/pkg/cmd"
"github.com/heptio/velero/pkg/cmd/velero"
)
func main() {
@@ -31,6 +31,6 @@ func main() {
baseName := filepath.Base(os.Args[0])
err := ark.NewCommand(baseName).Execute()
err := velero.NewCommand(baseName).Execute()
cmd.CheckError(err)
}


@@ -1,10 +1,10 @@
# How Ark Works
# How Velero Works
Each Ark operation -- on-demand backup, scheduled backup, restore -- is a custom resource, defined with a Kubernetes [Custom Resource Definition (CRD)][20] and stored in [etcd][22]. Ark also includes controllers that process the custom resources to perform backups, restores, and all related operations.
Each Velero operation -- on-demand backup, scheduled backup, restore -- is a custom resource, defined with a Kubernetes [Custom Resource Definition (CRD)][20] and stored in [etcd][22]. Velero also includes controllers that process the custom resources to perform backups, restores, and all related operations.
You can back up or restore all objects in your cluster, or you can filter objects by type, namespace, and/or label.
Ark is ideal for the disaster recovery use case, as well as for snapshotting your application state, prior to performing system operations on your cluster (e.g. upgrades).
Velero is ideal for the disaster recovery use case, as well as for snapshotting your application state, prior to performing system operations on your cluster (e.g. upgrades).
## On-demand backups
@@ -27,17 +27,17 @@ Scheduled backups are saved with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `
## Restores
The **restore** operation allows you to restore all of the objects and persistent volumes from a previously created backup. You can also restore only a filtered subset of objects and persistent volumes. Ark supports multiple namespace remapping--for example, in a single restore, objects in namespace "abc" can be recreated under namespace "def", and the objects in namespace "123" under "456".
The **restore** operation allows you to restore all of the objects and persistent volumes from a previously created backup. You can also restore only a filtered subset of objects and persistent volumes. Velero supports multiple namespace remapping--for example, in a single restore, objects in namespace "abc" can be recreated under namespace "def", and the objects in namespace "123" under "456".
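For example, a remapped restore might look like the following sketch (the `--from-backup` and `--namespace-mappings` flags are assumed from the client's restore options):

```bash
# Sketch: restore from backup "test-backup", recreating objects from
# namespace "abc" under "def" and objects from "123" under "456".
velero restore create --from-backup test-backup \
    --namespace-mappings abc:def,123:456
```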
The default name of a restore is `<BACKUP NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*. You can also specify a custom name. A restored object also includes a label with key `ark.heptio.com/restore-name` and value `<RESTORE NAME>`.
The default name of a restore is `<BACKUP NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as *YYYYMMDDhhmmss*. You can also specify a custom name. A restored object also includes a label with key `velero.io/restore-name` and value `<RESTORE NAME>`.
You can also run the Ark server in restore-only mode, which disables backup, schedule, and garbage collection functionality during disaster recovery.
You can also run the Velero server in restore-only mode, which disables backup, schedule, and garbage collection functionality during disaster recovery.
## Backup workflow
When you run `ark backup create test-backup`:
When you run `velero backup create test-backup`:
1. The Ark client makes a call to the Kubernetes API server to create a `Backup` object.
1. The Velero client makes a call to the Kubernetes API server to create a `Backup` object.
1. The `BackupController` notices the new `Backup` object and performs validation.
@@ -45,19 +45,19 @@ When you run `ark backup create test-backup`:
1. The `BackupController` makes a call to the object storage service -- for example, AWS S3 -- to upload the backup file.
By default, `ark backup create` makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags. Run `ark backup create --help` to see available flags. Snapshots can be disabled with the option `--snapshot-volumes=false`.
By default, `velero backup create` makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags. Run `velero backup create --help` to see available flags. Snapshots can be disabled with the option `--snapshot-volumes=false`.
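For instance, a minimal sketch of a one-off backup with volume snapshots disabled:

```bash
# Back up the cluster without taking disk snapshots of persistent volumes.
velero backup create test-backup --snapshot-volumes=false
```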
![19]
## Backed-up API versions
Ark backs up resources using the Kubernetes API server's *preferred version* for each group/resource. When restoring a resource, this same API group/version must exist in the target cluster in order for the restore to be successful.
Velero backs up resources using the Kubernetes API server's *preferred version* for each group/resource. When restoring a resource, this same API group/version must exist in the target cluster in order for the restore to be successful.
For example, if the cluster being backed up has a `gizmos` resource in the `things` API group, with group/versions `things/v1alpha1`, `things/v1beta1`, and `things/v1`, and the server's preferred group/version is `things/v1`, then all `gizmos` will be backed up from the `things/v1` API endpoint. When backups from this cluster are restored, the target cluster **must** have the `things/v1` endpoint in order for `gizmos` to be restored. Note that `things/v1` **does not** need to be the preferred version in the target cluster; it just needs to exist.
## Set a backup to expire
When you create a backup, you can specify a TTL by adding the flag `--ttl <DURATION>`. If Ark sees that an existing backup resource is expired, it removes:
When you create a backup, you can specify a TTL by adding the flag `--ttl <DURATION>`. If Velero sees that an existing backup resource is expired, it removes:
* The backup resource
* The backup file from cloud object storage
@@ -66,7 +66,7 @@ When you create a backup, you can specify a TTL by adding the flag `--ttl <DURAT
## Object storage sync
Heptio Ark treats object storage as the source of truth. It continuously checks to see that the correct backup resources are always present. If there is a properly formatted backup file in the storage bucket, but no corresponding backup resource in the Kubernetes API, Ark synchronizes the information from object storage to Kubernetes.
Velero treats object storage as the source of truth. It continuously checks to see that the correct backup resources are always present. If there is a properly formatted backup file in the storage bucket, but no corresponding backup resource in the Kubernetes API, Velero synchronizes the information from object storage to Kubernetes.
This allows restore functionality to work in a cluster migration scenario, where the original backup objects do not exist in the new cluster.


@@ -2,7 +2,7 @@
## API types
Here we list the API types that have some functionality that you can only configure via json/yaml vs the `ark` cli
Here we list the API types that have some functionality that you can only configure via json/yaml rather than the `velero` cli
(hooks)
* [Backup][1]


@@ -2,12 +2,12 @@
## Use
The `Backup` API type is used as a request for the Ark Server to perform a backup. Once created, the
Ark Server immediately starts the backup process.
The `Backup` API type is used as a request for the Velero Server to perform a backup. Once created, the
Velero Server immediately starts the backup process.
## API GroupVersion
Backup belongs to the API group version `ark.heptio.com/v1`.
Backup belongs to the API group version `velero.io/v1`.
## Definition
@@ -15,15 +15,15 @@ Here is a sample `Backup` object with each of the fields documented:
```yaml
# Standard Kubernetes API Version declaration. Required.
apiVersion: ark.heptio.com/v1
apiVersion: velero.io/v1
# Standard Kubernetes Kind declaration. Required.
kind: Backup
# Standard Kubernetes metadata. Required.
metadata:
# Backup name. May be any valid Kubernetes object name. Required.
name: a
# Backup namespace. Required. In version 0.7.0 and later, can be any string. Must be the namespace of the Ark server.
namespace: heptio-ark
# Backup namespace. Required. In version 0.7.0 and later, can be any string. Must be the namespace of the Velero server.
namespace: velero
# Parameters about the backup. Required.
spec:
# Array of namespaces to include in the backup. If unspecified, all namespaces are included.
@@ -54,11 +54,11 @@ spec:
# Individual objects must match this label selector to be included in the backup. Optional.
labelSelector:
matchLabels:
app: ark
app: velero
component: server
# Whether or not to snapshot volumes. This only applies to PersistentVolumes for Azure, GCE, and
# AWS. Valid values are true, false, and null/unset. If unset, Ark performs snapshots as long as
# a persistent volume provider is configured for Ark.
# AWS. Valid values are true, false, and null/unset. If unset, Velero performs snapshots as long as
# a persistent volume provider is configured for Velero.
snapshotVolumes: null
# Where to store the tarball and logs.
storageLocation: aws-primary
@@ -66,7 +66,9 @@ spec:
volumeSnapshotLocations:
- aws-primary
- gcp-primary
# The amount of time before this backup is eligible for garbage collection.
# The amount of time before this backup is eligible for garbage collection. If not specified,
# a default value of 30 days will be used. The default can be configured on the velero server
# by passing the flag --default-backup-ttl.
ttl: 24h0m0s
# Actions to perform at different times during a backup. The only hook currently supported is
# executing a command in a container in a pod using the pod exec API. Optional.
@@ -92,7 +94,7 @@ spec:
# This hook only applies to objects matching this label selector. Optional.
labelSelector:
matchLabels:
app: ark
app: velero
component: server
# An array of hooks to run before executing custom actions. Currently only "exec" hooks are supported.
# DEPRECATED. Use pre instead.


@@ -1,21 +1,21 @@
# Ark Backup Storage Locations
# Velero Backup Storage Locations
## Backup Storage Location
Ark can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.
Velero can store backups in a number of locations. These are represented in the cluster via the `BackupStorageLocation` CRD.
Ark must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`, however the name can be changed by specifying `--default-backup-storage-location` on `ark server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.
Velero must have at least one `BackupStorageLocation`. By default, this is expected to be named `default`; however, the name can be changed by specifying `--default-backup-storage-location` on `velero server`. Backups that do not explicitly specify a storage location will be saved to this `BackupStorageLocation`.
> *NOTE*: `BackupStorageLocation` takes the place of the `Config.backupStorageProvider` key as of v0.10.0
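A backup can also explicitly target a non-default location. A minimal sketch, assuming a `--storage-location` flag on the client:

```bash
# Sketch: store this backup in the "aws-primary" BackupStorageLocation
# instead of the location named "default".
velero backup create test-backup --storage-location aws-primary
```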
A sample YAML `BackupStorageLocation` looks like the following:
```yaml
apiVersion: ark.heptio.com/v1
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
name: default
namespace: heptio-ark
namespace: velero
spec:
provider: aws
objectStorage:
@@ -32,7 +32,7 @@ The configurable parameters are as follows:
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String (Ark natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.)| Required Field | The name for whichever cloud provider will be used to actually store the backups. |
| `provider` | String (Velero natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.)| Required Field | The name for whichever cloud provider will be used to actually store the backups. |
| `objectStorage` | ObjectStorageLocation | Required Field | Specification of the object storage for the given provider. |
| `objectStorage/bucket` | String | Required Field | The storage bucket where backups are to be uploaded. |
| `objectStorage/prefix` | String | Optional Field | The directory inside a storage bucket where backups are to be uploaded. |
@@ -48,10 +48,10 @@ The configurable parameters are as follows:
| --- | --- | --- | --- |
| `region` | string | Empty | *Example*: "us-east-1"<br><br>See [AWS documentation][3] for the full list.<br><br>Queried from the AWS S3 API if not provided. |
| `s3ForcePathStyle` | bool | `false` | Set this to `true` if you are using a local storage service like Minio. |
| `s3Url` | string | Required field for non-AWS-hosted storage| *Example*: http://minio:9000<br><br>You can specify the AWS S3 URL here for explicitness, but Ark can already generate it from `region`, and `bucket`. This field is primarily for local storage services like Minio.|
| `s3Url` | string | Required field for non-AWS-hosted storage | *Example*: http://minio:9000<br><br>You can specify the AWS S3 URL here for explicitness, but Velero can already generate it from `region` and `bucket`. This field is primarily for local storage services like Minio.|
| `publicUrl` | string | Empty | *Example*: https://minio.mycluster.com<br><br>If specified, use this instead of `s3Url` when generating download URLs (e.g., for logs). This field is primarily for local storage services like Minio.|
| `kmsKeyId` | string | Empty | *Example*: "502b409c-4da1-419f-a16e-eif453b3i49f" or "alias/`<KMS-Key-Alias-Name>`"<br><br>Specify an [AWS KMS key][10] id or alias to enable encryption of the backups stored in S3. Only works with AWS S3 and may require explicitly granting key usage rights.|
| `signatureVersion` | string | `"4"` | Version of the signature algorithm used to create signed URLs that are used by ark cli to download backups or fetch logs. Possible versions are "1" and "4". Usually the default version 4 is correct, but some S3-compatible providers like Quobyte only support version 1.|
| `signatureVersion` | string | `"4"` | Version of the signature algorithm used to create signed URLs that are used by the velero CLI to download backups or fetch logs. Possible versions are "1" and "4". Usually the default version 4 is correct, but some S3-compatible providers like Quobyte only support version 1.|
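Putting the Minio-related keys together, here is a sketch of a location backed by a local Minio service; all names and endpoints are illustrative:

```bash
# Sketch (illustrative names): an AWS-provider BackupStorageLocation that
# reaches a local Minio service via path-style addressing and an explicit
# s3Url.
cat <<EOF | kubectl apply -f -
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: minio-local
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero-backups
  config:
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio:9000
EOF
```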
#### Azure


@@ -1,21 +1,21 @@
# Ark Volume Snapshot Location
# Velero Volume Snapshot Location
## Volume Snapshot Location
A volume snapshot location is the location in which to store the volume snapshots created for a backup.
Ark can be configured to take snapshots of volumes from multiple providers. Ark also allows you to configure multiple possible `VolumeSnapshotLocation` per provider, although you can only select one location per provider at backup time.
Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple possible `VolumeSnapshotLocation` per provider, although you can only select one location per provider at backup time.
Each VolumeSnapshotLocation describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Ark must have at least one `VolumeSnapshotLocation` per cloud provider.
Each VolumeSnapshotLocation describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider.
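For example, selecting one location per provider for a given backup might look like this sketch (the `--volume-snapshot-locations` flag is assumed from the client options; location names are illustrative):

```bash
# Sketch: snapshot AWS volumes to "aws-default" and GCP volumes to
# "gcp-primary" for this backup only.
velero backup create test-backup \
    --volume-snapshot-locations aws-default,gcp-primary
```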
A sample YAML `VolumeSnapshotLocation` looks like the following:
```yaml
apiVersion: ark.heptio.com/v1
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
name: aws-default
namespace: heptio-ark
namespace: velero
spec:
provider: aws
config:
@@ -30,7 +30,7 @@ The configurable parameters are as follows:
| Key | Type | Default | Meaning |
| --- | --- | --- | --- |
| `provider` | String (Ark natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.)| Required Field | The name for whichever cloud provider will be used to actually store the volume. |
| `provider` | String (Velero natively supports `aws`, `gcp`, and `azure`. Other providers may be available via external plugins.)| Required Field | The name for whichever cloud provider will be used to actually store the volume. |
| `config` | | | See the corresponding [AWS][0], [GCP][1], and [Azure][2]-specific configs or your provider's documentation. |
#### AWS


@@ -1,17 +1,34 @@
# Run Ark on AWS
# Run Velero on AWS
To set up Ark on AWS, you:
To set up Velero on AWS, you:
* Download an official release of Velero
* Create your S3 bucket
* Create an AWS IAM user for Ark
* Create an AWS IAM user for Velero
* Configure the server
* Create a Secret for your credentials
If you do not have the `aws` CLI locally installed, follow the [user guide][5] to set it up.
## Download Velero
1. Download the [latest release's](https://github.com/heptio/velero/releases) tarball for your client platform.
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
_We strongly recommend that you use an [official release](https://github.com/heptio/velero/releases) of Velero. The tarballs for each release contain the
`velero` command-line client **and** version-specific sample YAML files for deploying Velero to your cluster. The code and sample YAML files in the master
branch of the Velero repository are under active development and are not guaranteed to be stable. Use them at your own risk!_
## Create S3 bucket
Heptio Ark requires an object storage bucket to store backups in, preferrably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create an S3 bucket, replacing placeholders appropriately:
Velero requires an object storage bucket to store backups in, preferably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create an S3 bucket, replacing placeholders appropriately:
```bash
aws s3api create-bucket \
@@ -34,16 +51,16 @@ For more information, see [the AWS documentation on IAM users][14].
1. Create the IAM user:
```bash
aws iam create-user --user-name heptio-ark
aws iam create-user --user-name velero
```
> If you'll be using Ark to backup multiple clusters with multiple S3 buckets, it may be desirable to create a unique username per cluster rather than the default `heptio-ark`.
> If you'll be using Velero to back up multiple clusters with multiple S3 buckets, it may be desirable to create a unique username per cluster rather than the default `velero`.
2. Attach policies to give `heptio-ark` the necessary permissions:
2. Attach policies to give `velero` the necessary permissions:
```bash
BUCKET=<YOUR_BUCKET>
cat > heptio-ark-policy.json <<EOF
cat > velero-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
@@ -86,15 +103,15 @@ For more information, see [the AWS documentation on IAM users][14].
EOF
aws iam put-user-policy \
--user-name heptio-ark \
--policy-name heptio-ark \
--policy-document file://heptio-ark-policy.json
--user-name velero \
--policy-name velero \
--policy-document file://velero-policy.json
```
3. Create an access key for the user:
```bash
aws iam create-access-key --user-name heptio-ark
aws iam create-access-key --user-name velero
```
The result should look like:
@@ -102,7 +119,7 @@ For more information, see [the AWS documentation on IAM users][14].
```json
{
"AccessKey": {
"UserName": "heptio-ark",
"UserName": "velero",
"Status": "Active",
"CreateDate": "2017-07-31T22:24:41.576Z",
"SecretAccessKey": <AWS_SECRET_ACCESS_KEY>,
@@ -111,7 +128,7 @@ For more information, see [the AWS documentation on IAM users][14].
}
```
4. Create an Ark-specific credentials file (`credentials-ark`) in your local directory:
4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
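# The remaining keys are a sketch, mirroring the format shown in the
# troubleshooting section later in this document:
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>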
@@ -123,7 +140,7 @@ For more information, see [the AWS documentation on IAM users][14].
## Credentials and configuration
In the Ark directory (i.e. where you extracted the release tarball), run the following to first set up namespaces, RBAC, and other scaffolding. To run in a custom namespace, make sure that you have edited the YAML files to specify the namespace. See [Run in custom namespace][0].
In the Velero directory (i.e. where you extracted the release tarball), run the following to first set up namespaces, RBAC, and other scaffolding. To run in a custom namespace, make sure that you have edited the YAML files to specify the namespace. See [Run in custom namespace][0].
```bash
kubectl apply -f config/common/00-prereqs.yaml
@@ -133,17 +150,17 @@ Create a Secret. In the directory of the credentials file you just created, run:
```bash
kubectl create secret generic cloud-credentials \
--namespace <ARK_NAMESPACE> \
--from-file cloud=credentials-ark
--namespace <VELERO_NAMESPACE> \
--from-file cloud=credentials-velero
```
Specify the following values in the example files:
* In `config/aws/05-ark-backupstoragelocation.yaml`:
* In `config/aws/05-backupstoragelocation.yaml`:
* Replace `<YOUR_BUCKET>` and `<YOUR_REGION>` (for S3 backup storage, region is optional and will be queried from the AWS S3 API if not provided). See the [BackupStorageLocation definition][21] for details.
* In `config/aws/06-ark-volumesnapshotlocation.yaml`:
* In `config/aws/06-volumesnapshotlocation.yaml`:
* Replace `<YOUR_REGION>`. See the [VolumeSnapshotLocation definition][6] for details.
@@ -157,7 +174,7 @@ Specify the following values in the example files:
* (Optional) If you have multiple clusters and you want to support migration of resources between them, in file `config/aws/10-deployment.yaml`:
* Uncomment the environment variable `AWS_CLUSTER_NAME` and replace `<YOUR_CLUSTER_NAME>` with the current cluster's name. When restoring backup, it will make Ark (and cluster it's running on) claim ownership of AWS volumes created from snapshots taken on different cluster.
* Uncomment the environment variable `AWS_CLUSTER_NAME` and replace `<YOUR_CLUSTER_NAME>` with the current cluster's name. When restoring a backup, this makes Velero (and the cluster it's running on) claim ownership of AWS volumes created from snapshots taken on a different cluster.
The best way to get the current cluster's name is either to check it with the deployment tool you used or to read it directly from the EC2 instance tags.
The following listing shows how to get the cluster's nodes' EC2 tags. First, get the nodes' external IDs (EC2 IDs):
@@ -182,11 +199,11 @@ Specify the following values in the example files:
## Start the server
In the root of your Ark directory, run:
In the root of your Velero directory, run:
```bash
kubectl apply -f config/aws/05-ark-backupstoragelocation.yaml
kubectl apply -f config/aws/06-ark-volumesnapshotlocation.yaml
kubectl apply -f config/aws/05-backupstoragelocation.yaml
kubectl apply -f config/aws/06-volumesnapshotlocation.yaml
kubectl apply -f config/aws/10-deployment.yaml
```
@@ -196,12 +213,12 @@ In the root of your Ark directory, run:
> This path assumes you have `kube2iam` already running in your Kubernetes cluster. If that is not the case, please install it first, following the docs here: [https://github.com/jtblin/kube2iam](https://github.com/jtblin/kube2iam)
It can be set up for Ark by creating a role that will have required permissions, and later by adding the permissions annotation on the ark deployment to define which role it should use internally.
It can be set up for Velero by creating a role that has the required permissions, and then adding the permissions annotation on the velero deployment to define which role it should use internally.
1. Create a Trust Policy document to allow the role being used for EC2 management & assume kube2iam role:
```bash
cat > heptio-ark-trust-policy.json <<EOF
cat > velero-trust-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
@@ -227,14 +244,14 @@ It can be set up for Ark by creating a role that will have required permissions,
2. Create the IAM role:
```bash
aws iam create-role --role-name heptio-ark --assume-role-policy-document file://./heptio-ark-trust-policy.json
aws iam create-role --role-name velero --assume-role-policy-document file://./velero-trust-policy.json
```
3. Attach policies to give `heptio-ark` the necessary permissions:
3. Attach policies to give `velero` the necessary permissions:
```bash
BUCKET=<YOUR_BUCKET>
cat > heptio-ark-policy.json <<EOF
cat > velero-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
@@ -277,31 +294,31 @@ It can be set up for Ark by creating a role that will have required permissions,
EOF
aws iam put-role-policy \
--role-name heptio-ark \
--policy-name heptio-ark-policy \
--policy-document file://./heptio-ark-policy.json
--role-name velero \
--policy-name velero-policy \
--policy-document file://./velero-policy.json
```
4. Update `AWS_ACCOUNT_ID` & `HEPTIO_ARK_ROLE_NAME` in the file `config/aws/10-deployment-kube2iam.yaml`:
4. Update `AWS_ACCOUNT_ID` & `VELERO_ROLE_NAME` in the file `config/aws/10-deployment-kube2iam.yaml`:
```
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
namespace: heptio-ark
name: ark
namespace: velero
name: velero
spec:
replicas: 1
template:
metadata:
labels:
component: ark
component: velero
annotations:
iam.amazonaws.com/role: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<HEPTIO_ARK_ROLE_NAME>
iam.amazonaws.com/role: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<VELERO_ROLE_NAME>
...
```
5. Run Ark deployment using the file `config/aws/10-deployment-kube2iam.yaml`.
5. Run Velero deployment using the file `config/aws/10-deployment-kube2iam.yaml`.
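As with the other deployment files in these docs, this is a plain `kubectl apply` (a minimal sketch, assuming the file path shown above):

```bash
kubectl apply -f config/aws/10-deployment-kube2iam.yaml
```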
[0]: namespace.md
[5]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html

View File

@@ -1,9 +1,10 @@
# Run Ark on Azure
# Run Velero on Azure
To configure Ark on Azure, you:
To configure Velero on Azure, you:
* Download an official release of Velero
* Create your Azure storage account and blob container
* Create Azure service principal for Ark
* Create Azure service principal for Velero
* Configure the server
* Create a Secret for your credentials
@@ -20,13 +21,29 @@ az login
Ensure that the VMs for your agent pool allow Managed Disks. If I/O performance is critical,
consider using Premium Managed Disks, which are SSD backed.
## Download Velero
1. Download the [latest release's](https://github.com/heptio/velero/releases) tarball for your client platform.
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
_We strongly recommend that you use an [official release](https://github.com/heptio/velero/releases) of Velero. The tarballs for each release contain the
`velero` command-line client **and** version-specific sample YAML files for deploying Velero to your cluster. The code and sample YAML files in the master
branch of the Velero repository are under active development and are not guaranteed to be stable. Use them at your own risk!_
## Create Azure storage account and blob container
Heptio Ark requires a storage account and blob container in which to store backups.
Velero requires a storage account and blob container in which to store backups.
The storage account can be created in the same Resource Group as your Kubernetes cluster or
separated into its own Resource Group. The example below shows the storage account created in a
separate `Ark_Backups` Resource Group.
separate `Velero_Backups` Resource Group.
The storage account needs to be created with a globally unique ID since this is used for DNS. In
the sample script below, we're generating a random name using `uuidgen`, but you can come up with
@@ -36,11 +53,11 @@ configured to only allow access via https.
```bash
# Create a resource group for the backups storage account. Change the location as needed.
AZURE_BACKUP_RESOURCE_GROUP=Ark_Backups
AZURE_BACKUP_RESOURCE_GROUP=Velero_Backups
az group create -n $AZURE_BACKUP_RESOURCE_GROUP --location WestUS
# Create the storage account
AZURE_STORAGE_ACCOUNT_ID="ark$(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')"
AZURE_STORAGE_ACCOUNT_ID="velero$(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')"
az storage account create \
--name $AZURE_STORAGE_ACCOUNT_ID \
--resource-group $AZURE_BACKUP_RESOURCE_GROUP \
@@ -51,10 +68,10 @@ az storage account create \
--access-tier Hot
```
Create the blob container named `ark`. Feel free to use a different name, preferably unique to a single Kubernetes cluster. See the [FAQ][20] for more details.
Create the blob container named `velero`. Feel free to use a different name, preferably unique to a single Kubernetes cluster. See the [FAQ][20] for more details.
```bash
az storage container create -n ark --public-access off --account-name $AZURE_STORAGE_ACCOUNT_ID
az storage container create -n velero --public-access off --account-name $AZURE_STORAGE_ACCOUNT_ID
```
## Get resource group for persistent volume snapshots
@@ -78,7 +95,7 @@ az storage container create -n ark --public-access off --account-name $AZURE_STO
## Create service principal
To integrate Ark with Azure, you must create an Ark-specific [service principal][17].
To integrate Velero with Azure, you must create a Velero-specific [service principal][17].
1. Obtain your Azure Account Subscription ID and Tenant ID:
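   A hedged sketch of one way to capture these values (assuming you are logged in via `az login` and the target subscription is your default):

```bash
AZURE_SUBSCRIPTION_ID=$(az account list --query '[?isDefault].id' -o tsv)
AZURE_TENANT_ID=$(az account list --query '[?isDefault].tenantId' -o tsv)
```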
@@ -89,23 +106,23 @@ To integrate Ark with Azure, you must create an Ark-specific [service principal]
1. Create a service principal with `Contributor` role. This will have subscription-wide access, so protect this credential. You can specify a password or let the `az ad sp create-for-rbac` command create one for you.
> If you'll be using Ark to backup multiple clusters with multiple blob containers, it may be desirable to create a unique username per cluster rather than the default `heptio-ark`.
> If you'll be using Velero to back up multiple clusters with multiple blob containers, it may be desirable to create a unique username per cluster rather than the default `velero`.
```bash
# Create service principal and specify your own password
AZURE_CLIENT_SECRET=super_secret_and_high_entropy_password_replace_me_with_your_own
az ad sp create-for-rbac --name "heptio-ark" --role "Contributor" --password $AZURE_CLIENT_SECRET
az ad sp create-for-rbac --name "velero" --role "Contributor" --password $AZURE_CLIENT_SECRET
# Or create service principal and let the CLI generate a password for you. Make sure to capture the password.
AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "heptio-ark" --role "Contributor" --query 'password' -o tsv`
AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv`
# After creating the service principal, obtain the client id
AZURE_CLIENT_ID=`az ad sp list --display-name "heptio-ark" --query '[0].appId' -o tsv`
AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`
```
## Credentials and configuration
In the Ark directory (i.e. where you extracted the release tarball), run the following to first set up namespaces, RBAC, and other scaffolding. To run in a custom namespace, make sure that you have edited the YAML file to specify the namespace. See [Run in custom namespace][0].
In the Velero directory (i.e. where you extracted the release tarball), run the following to first set up namespaces, RBAC, and other scaffolding. To run in a custom namespace, make sure that you have edited the YAML file to specify the namespace. See [Run in custom namespace][0].
```bash
kubectl apply -f config/common/00-prereqs.yaml
@@ -115,7 +132,7 @@ Now you need to create a Secret that contains all the environment variables you
```bash
kubectl create secret generic cloud-credentials \
--namespace <ARK_NAMESPACE> \
--namespace <VELERO_NAMESPACE> \
--from-literal AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID} \
--from-literal AZURE_TENANT_ID=${AZURE_TENANT_ID} \
--from-literal AZURE_CLIENT_ID=${AZURE_CLIENT_ID} \
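  --from-literal AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET} \
  --from-literal AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
# The two literals above are a sketch of what the hunk elides, based on the
# environment variables created earlier in this document.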
@@ -125,21 +142,21 @@ kubectl create secret generic cloud-credentials \
Now that you have your Azure credentials stored in a Secret, you need to replace some placeholder values in the template files. Specifically, you need to change the following:
* In file `config/azure/05-ark-backupstoragelocation.yaml`:
* In file `config/azure/05-backupstoragelocation.yaml`:
* Replace `<YOUR_BLOB_CONTAINER>`, `<YOUR_STORAGE_RESOURCE_GROUP>`, and `<YOUR_STORAGE_ACCOUNT>`. See the [BackupStorageLocation definition][21] for details.
* In file `config/azure/06-ark-volumesnapshotlocation.yaml`:
* In file `config/azure/06-volumesnapshotlocation.yaml`:
* Replace `<YOUR_TIMEOUT>`. See the [VolumeSnapshotLocation definition][8] for details.
* (Optional, use only if you need to specify multiple volume snapshot locations) In `config/azure/00-ark-deployment.yaml`:
* (Optional, use only if you need to specify multiple volume snapshot locations) In `config/azure/00-deployment.yaml`:
* Uncomment the `--default-volume-snapshot-locations` and replace provider locations with the values for your environment.
## Start the server
In the root of your Ark directory, run:
In the root of your Velero directory, run:
```bash
kubectl apply -f config/azure/

View File

@@ -1,7 +1,7 @@
# Build from source
* [Prerequisites][1]
* [Download][2]
* [Getting the source][2]
* [Build][3]
* [Test][12]
* [Run][7]
@@ -9,31 +9,37 @@
## Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later. Version 1.7.5 or later is required to run `ark backup delete`.
* Access to a Kubernetes cluster, version 1.7 or later. Version 1.7.5 or later is required to run `velero backup delete`.
* A DNS server on the cluster
* `kubectl` installed
* [Go][5] installed (minimum version 1.8)
## Getting the source
### Option 1) Get latest (recommended)
```bash
mkdir $HOME/go
export GOPATH=$HOME/go
go get github.com/heptio/ark
go get github.com/heptio/velero
```
Where `$HOME/go` is your [import path][4] for Go.
For Go development, it is recommended to add the Go import path (`$HOME/go` in this example) to your path.
### Option 2) Release archive
Download the archive named `Source code` from the [release page][22] and extract it in your Go import path as `src/github.com/heptio/velero`.
Note that the Makefile targets assume building from a git repository. When building from an archive, you will be limited to the `go build` commands described below.
## Build
You can build your Ark image locally on the machine where you run your cluster, or you can push it to a private registry. This section covers both workflows.
You can build your Velero image locally on the machine where you run your cluster, or you can push it to a private registry. This section covers both workflows.
Set the `$REGISTRY` environment variable (used in the `Makefile`) to push the Heptio Ark images to your own registry. This allows any node in your cluster to pull your locally built image.
Set the `$REGISTRY` environment variable (used in the `Makefile`) to push the Velero images to your own registry. This allows any node in your cluster to pull your locally built image.
In the Ark root directory, to build your container with the tag `$REGISTRY/ark:$VERSION`, run:
In the Velero root directory, to build your container with the tag `$REGISTRY/velero:$VERSION`, run:
```
make container
@@ -41,6 +47,11 @@ make container
To push your image to a registry, use `make push`.
To build only the `velero` binary, run:
```
go build ./cmd/velero
```
### Update generated files
The following files are automatically generated from the source code:
@@ -59,16 +70,16 @@ Run `make update` to regenerate files if you make the following changes:
Run [generate-proto.sh][13] to regenerate files if you make the following changes:
* Add/edit/remove protobuf message or service definitions. These changes require the [proto compiler][14].
* Add/edit/remove protobuf message or service definitions. These changes require the [proto compiler][14] and compiler plugin `protoc-gen-go` version v1.0.0.
### Cross compiling
By default, `make build` builds an `ark` binary for `linux-amd64`.
By default, `make build` builds a `velero` binary for `linux-amd64`.
To build for another platform, run `make build-<GOOS>-<GOARCH>`.
For example, to build for the Mac, run `make build-darwin-amd64`.
All binaries are placed in `_output/bin/<GOOS>/<GOARCH>`-- for example, `_output/bin/darwin/amd64/ark`.
All binaries are placed in `_output/bin/<GOOS>/<GOARCH>`-- for example, `_output/bin/darwin/amd64/velero`.
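For example, a hedged cross-compile of the Mac client followed by a check of the documented output path:

```bash
make build-darwin-amd64
ls -l _output/bin/darwin/amd64/velero
```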
Ark's `Makefile` has a convenience target, `all-build`, that builds the following platforms:
Velero's `Makefile` has a convenience target, `all-build`, that builds the following platforms:
* linux-amd64
* linux-arm
@@ -85,7 +96,7 @@ files (clientset, listers, shared informers, docs) are up to date.
### Prerequisites
When running Heptio Ark, you will need to account for the following (all of which are handled in the [`/examples`][6] manifests):
When running Velero, you will need to account for the following (all of which are handled in the [`/examples`][6] manifests):
* Appropriate RBAC permissions in the cluster
* Read access for all data from the source cluster and namespaces
@@ -93,8 +104,8 @@ When running Heptio Ark, you will need to account for the following (all of whic
* Cloud provider credentials
* Read/write access to volumes
* Read/write access to object storage for backup data
* A [BackupStorageLocation][20] object definition for the Ark server
* (Optional) A [VolumeSnapshotLocation][21] object definition for the Ark server, to take PV snapshots
* A [BackupStorageLocation][20] object definition for the Velero server
* (Optional) A [VolumeSnapshotLocation][21] object definition for the Velero server, to take PV snapshots
### Create a cluster
@@ -104,9 +115,9 @@ To provision a cluster on AWS using Amazons official CloudFormation templates
* eksctl - [a CLI for Amazon EKS][18]
### Option 1: Run your Ark server locally
### Option 1: Run your Velero server locally
Running the Ark server locally can speed up iterative development. This eliminates the need to rebuild the Ark server
Running the Velero server locally can speed up iterative development. This eliminates the need to rebuild the Velero server
image and redeploy it to the cluster with each change.
#### 1. Set environment variables
@@ -139,64 +150,64 @@ You may create resources on a cluster using our [example configurations][19].
##### Example
Here is how to setup using an existing cluster in AWS: At the root of the Ark repo:
Here is how to set up using an existing cluster in AWS. At the root of the Velero repo:
- Edit `examples/aws/05-ark-backupstoragelocation.yaml` to point to your AWS S3 bucket and region. Note: you can run `aws s3api list-buckets` to get the name of all your buckets.
- Edit `examples/aws/05-backupstoragelocation.yaml` to point to your AWS S3 bucket and region. Note: you can run `aws s3api list-buckets` to get the names of all your buckets.
- (Optional) Edit `examples/aws/06-ark-volumesnapshotlocation.yaml` to point to your AWS region.
- (Optional) Edit `examples/aws/06-volumesnapshotlocation.yaml` to point to your AWS region.
Then run the commands below.
`00-prereqs.yaml` contains all our CustomResourceDefinitions (CRDs) that allow us to perform CRUD operations on backups, restores, schedules, etc. it also contains the `heptio-ark` namespace, the `ark` ServiceAccount, and a cluster role binding to grant the `ark` ServiceAccount the cluster-admin role:
`00-prereqs.yaml` contains all our CustomResourceDefinitions (CRDs) that allow us to perform CRUD operations on backups, restores, schedules, etc. It also contains the `velero` namespace, the `velero` ServiceAccount, and a cluster role binding to grant the `velero` ServiceAccount the cluster-admin role:
```bash
kubectl apply -f examples/common/00-prereqs.yaml
```
`10-deployment.yaml` is a sample Ark config resource for AWS:
`10-deployment.yaml` is a sample Velero config resource for AWS:
```bash
kubectl apply -f examples/aws/10-deployment.yaml
```
And `05-ark-backupstoragelocation.yaml` specifies the location of your backup storage, together with the optional `06-ark-volumesnapshotlocation.yaml`:
And `05-backupstoragelocation.yaml` specifies the location of your backup storage, together with the optional `06-volumesnapshotlocation.yaml`:
```bash
kubectl apply -f examples/aws/05-ark-backupstoragelocation.yaml
kubectl apply -f examples/aws/05-backupstoragelocation.yaml
```
or
```bash
kubectl apply -f examples/aws/05-ark-backupstoragelocation.yaml examples/aws/06-ark-volumesnapshotlocation.yaml
kubectl apply -f examples/aws/05-backupstoragelocation.yaml examples/aws/06-volumesnapshotlocation.yaml
```
### 3. Start the Ark server
### 3. Start the Velero server
* Make sure `ark` is in your `PATH` or specify the full path.
* Make sure `velero` is in your `PATH` or specify the full path.
* Set variable for Ark as needed. The variables below can be exported as environment variables or passed as CLI cmd flags:
* `--kubeconfig`: set the path to the kubeconfig file the Ark server uses to talk to the Kubernetes apiserver
* `--namespace`: the set namespace where the Ark server should look for backups, schedules, restores
* `--log-level`: set the Ark server's log level
* `--plugin-dir`: set the directory where the Ark server looks for plugins
* Set variables for Velero as needed. The variables below can be exported as environment variables or passed as CLI command flags:
* `--kubeconfig`: set the path to the kubeconfig file the Velero server uses to talk to the Kubernetes apiserver
* `--namespace`: the namespace where the Velero server should look for backups, schedules, restores
* `--log-level`: set the Velero server's log level
* `--plugin-dir`: set the directory where the Velero server looks for plugins
* `--metrics-address`: set the bind address and port where Prometheus metrics are exposed
* Start the server: `ark server`
* Start the server: `velero server`
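For instance, a hedged local invocation combining several of the flags listed above (the kubeconfig path and log level are placeholders):

```bash
velero server \
  --kubeconfig ~/.kube/config \
  --namespace velero \
  --log-level debug
```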
### Option 2: Run your Ark server in a deployment
### Option 2: Run your Velero server in a deployment
1. Install Ark using a deployment:
1. Install Velero using a deployment:
We have examples of deployments for different cloud providers in `examples/<cloud-provider>/10-deployment.yaml`.
2. Replace the deployment's default Ark image with the image that you built. Run:
2. Replace the deployment's default Velero image with the image that you built. Run:
```
kubectl --namespace=heptio-ark set image deployment/ark ark=$REGISTRY/ark:$VERSION
kubectl --namespace=velero set image deployment/velero velero=$REGISTRY/velero:$VERSION
```
where `$REGISTRY` and `$VERSION` are the values that you built Ark with.
where `$REGISTRY` and `$VERSION` are the values that you built Velero with.
## 5. Vendoring dependencies
@@ -204,17 +215,17 @@ If you need to add or update the vendored dependencies, see [Vendoring dependenc
[0]: ../README.md
[1]: #prerequisites
[2]: #download
[2]: #getting-the-source
[3]: #build
[4]: https://blog.golang.org/organizing-go-code
[5]: https://golang.org/doc/install
[6]: https://github.com/heptio/ark/tree/master/examples
[6]: https://github.com/heptio/velero/tree/master/examples
[7]: #run
[8]: config-definition.md
[10]: #vendoring-dependencies
[11]: vendoring-dependencies.md
[12]: #test
[13]: https://github.com/heptio/ark/blob/master/hack/generate-proto.sh
[13]: https://github.com/heptio/velero/blob/master/hack/generate-proto.sh
[14]: https://grpc.io/docs/quickstart/go.html#install-protocol-buffers-v3
[15]: https://docs.aws.amazon.com/cli/latest/topic/config-vars.html#the-shared-credentials-file
[16]: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable
@@ -223,3 +234,4 @@ If you need to add or update the vendored dependencies, see [Vendoring dependenc
[19]: ../examples/README.md
[20]: api-types/backupstoragelocation.md
[21]: api-types/volumesnapshotlocation.md
[22]: https://github.com/heptio/velero/releases

View File

@@ -3,57 +3,64 @@
## General
### `invalid configuration: no configuration has been provided`
This typically means that no `kubeconfig` file can be found for the Ark client to use. Ark looks for a kubeconfig in the
This typically means that no `kubeconfig` file can be found for the Velero client to use. Velero looks for a kubeconfig in the
following locations:
* the path specified by the `--kubeconfig` flag, if any
* the path specified by the `$KUBECONFIG` environment variable, if any
* `~/.kube/config`
### Backups or restores stuck in `New` phase
This means that the Ark controllers are not processing the backups/restores, which usually happens because the Ark server is not running. Check the pod description and logs for errors:
This means that the Velero controllers are not processing the backups/restores, which usually happens because the Velero server is not running. Check the pod description and logs for errors:
```
kubectl -n heptio-ark describe pods
kubectl -n heptio-ark logs deployment/ark
kubectl -n velero describe pods
kubectl -n velero logs deployment/velero
```
## AWS
### `NoCredentialProviders: no valid providers in chain`
This means that the secret containing the AWS IAM user credentials for Ark has not been created/mounted properly
into the Ark server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Ark server's namespace
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-ark` file
* The `credentials-ark` file is formatted properly and has the correct values:
#### Using credentials
This means that the secret containing the AWS IAM user credentials for Velero has not been created/mounted properly
into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
* The `credentials-velero` file is formatted properly and has the correct values:
```
[default]
aws_access_key_id=<your AWS access key ID>
aws_secret_access_key=<your AWS secret access key>
```
* The `cloud-credentials` secret is defined as a volume for the Ark deployment
* The `cloud-credentials` secret is being mounted into the Ark server pod at `/credentials`
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
#### Using kube2iam
This means that Velero can't read the content of the S3 bucket. Ensure the following:
* There is a Trust Policy document allowing the role used by kube2iam to assume Velero's role, as stated in the AWS config documentation.
* The new Velero role has all the permissions listed in the documentation regarding S3.
## Azure
### `Failed to refresh the Token` or `adal: Refresh request failed`
This means that the secrets containing the Azure service principal credentials for Ark has not been created/mounted
properly into the Ark server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Ark server's namespace
This means that the secret containing the Azure service principal credentials for Velero has not been created/mounted
properly into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has all of the expected keys and each one has the correct value (see [setup instructions][0])
* The `cloud-credentials` secret is defined as a volume for the Ark deployment
* The `cloud-credentials` secret is being mounted into the Ark server pod at `/credentials`
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
## GCE/GKE
### `open credentials/cloud: no such file or directory`
This means that the secret containing the GCE service account credentials for Ark has not been created/mounted properly
into the Ark server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Ark server's namespace
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-ark` file
* The `cloud-credentials` secret is defined as a volume for the Ark deployment
* The `cloud-credentials` secret is being mounted into the Ark server pod at `/credentials`
This means that the secret containing the GCE service account credentials for Velero has not been created/mounted properly
into the Velero server pod. Ensure the following:
* The `cloud-credentials` secret exists in the Velero server's namespace
* The `cloud-credentials` secret has a single key, `cloud`, whose value is the contents of the `credentials-velero` file
* The `cloud-credentials` secret is defined as a volume for the Velero deployment
* The `cloud-credentials` secret is being mounted into the Velero server pod at `/credentials`
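A hedged way to double-check the mounted secret's contents (assuming the default secret name and namespace used above):

```bash
kubectl -n velero get secret cloud-credentials \
  -o jsonpath='{.data.cloud}' | base64 --decode | head
```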
[0]: azure-config#credentials-and-configuration

View File

@@ -5,7 +5,7 @@
## Example
When Heptio Ark finishes a Restore, its status changes to "Completed" regardless of whether or not there are issues during the process. The number of warnings and errors are indicated in the output columns from `ark restore get`:
When Velero finishes a Restore, its status changes to "Completed" regardless of whether or not there are issues during the process. The number of warnings and errors are indicated in the output columns from `velero restore get`:
```
NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR
@@ -15,14 +15,14 @@ backup-test-2-20170726180514 backup-test-2 Completed 0 0 2
backup-test-2-20170726180515 backup-test-2 Completed 0 1 2017-07-26 13:32:59 -0400 EDT <none>
```
To delve into the warnings and errors into more detail, you can use `ark restore describe`:
To delve into the warnings and errors in more detail, you can use `velero restore describe`:
```
ark restore describe backup-test-20170726180512
velero restore describe backup-test-20170726180512
```
The output looks like this:
```
Name: backup-test-20170726180512
Namespace: heptio-ark
Namespace: velero
Labels: <none>
Annotations: <none>
@@ -48,10 +48,10 @@ Phase: Completed
Validation errors: <none>
Warnings:
Ark: <none>
Velero: <none>
Cluster: <none>
Namespaces:
heptio-ark: serviceaccounts "ark" already exists
velero: serviceaccounts "velero" already exists
serviceaccounts "default" already exists
kube-public: serviceaccounts "default" already exists
kube-system: serviceaccounts "attachdetach-controller" already exists
@@ -80,7 +80,7 @@ Warnings:
default: serviceaccounts "default" already exists
Errors:
Ark: <none>
Velero: <none>
Cluster: <none>
Namespaces: <none>
```
@@ -93,7 +93,7 @@ of them may have been pre-existing).
Both errors and warnings are structured in the same way:
* `Ark`: A list of system-related issues encountered by the Ark server (e.g. couldn't read directory).
* `Velero`: A list of system-related issues encountered by the Velero server (e.g. couldn't read directory).
* `Cluster`: A list of issues related to the restore of cluster-scoped resources.

View File

@@ -2,22 +2,22 @@
*Using Schedules and Restore-Only Mode*
If you periodically back up your cluster's resources, you are able to return to a previous state in case of some unexpected mishap, such as a service outage. Doing so with Heptio Ark looks like the following:
If you periodically back up your cluster's resources, you are able to return to a previous state in case of some unexpected mishap, such as a service outage. Doing so with Velero looks like the following:
1. After you first run the Ark server on your cluster, set up a daily backup (replacing `<SCHEDULE NAME>` in the command as desired):
1. After you first run the Velero server on your cluster, set up a daily backup (replacing `<SCHEDULE NAME>` in the command as desired):
```
ark schedule create <SCHEDULE NAME> --schedule "0 7 * * *"
velero schedule create <SCHEDULE NAME> --schedule "0 7 * * *"
```
This creates a Backup object with the name `<SCHEDULE NAME>-<TIMESTAMP>`.
1. A disaster happens and you need to recreate your resources.
1. Update the Ark server deployment, adding the argument for the `server` command flag `restore-only` set to `true`. This prevents Backup objects from being created or deleted during your Restore process.
1. Update the Velero server deployment, adding the argument for the `server` command flag `restore-only` set to `true`. This prevents Backup objects from being created or deleted during your Restore process. (A hedged sketch of one way to do this follows these steps.)
1. Create a restore with your most recent Ark Backup:
1. Create a restore with your most recent Velero Backup:
```
ark restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
```
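One hedged way to add the `restore-only` flag (assuming the default `velero` deployment name and namespace, and that the server container is first in the pod spec):

```bash
kubectl -n velero patch deployment/velero --type json \
  -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--restore-only"}]'
```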

View File

@@ -1,17 +1,17 @@
# Expose Minio outside your cluster
When you run commands to get logs or describe a backup, the Ark server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster -- that is, from your Ark client -- you need to make Minio available outside the cluster. You can:
When you run commands to get logs or describe a backup, the Velero server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster -- that is, from your Velero client -- you need to make Minio available outside the cluster. You can:
- Change the Minio Service type from `ClusterIP` to `NodePort`.
- Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`.
In Ark 0.10, you can also specify the value of a new `publicUrl` field for the pre-signed URL in your backup storage config.
In Velero 0.10, you can also specify the value of a new `publicUrl` field for the pre-signed URL in your backup storage config.
For basic instructions on how to install the Ark server and client, see [the getting started example][1].
For basic instructions on how to install the Velero server and client, see [the getting started example][1].
## Expose Minio with Service of type NodePort
The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Ark client.
The Minio deployment by default specifies a Service of type `ClusterIP`. You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client.
You must also get the Minio URL, which you can then specify as the value of the new `publicUrl` field in your backup storage config.
@@ -22,29 +22,29 @@ You must also get the Minio URL, which you can then specify as the value of the
- if you're running Minikube:
```shell
minikube service minio --namespace=heptio-ark --url
minikube service minio --namespace=velero --url
```
- in any other environment:
1. Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Ark client.
1. Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Velero client.
1. Append the value of the NodePort to get a complete URL. You can get this value by running:
```shell
kubectl -n heptio-ark get svc/minio -o jsonpath='{.spec.ports[0].nodePort}'
kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}'
```
1. In `examples/minio/05-ark-backupstoragelocation.yaml`, uncomment the `publicUrl` line and provide this Minio URL as the value of the `publicUrl` field. You must include the `http://` or `https://` prefix.
1. In `examples/minio/05-backupstoragelocation.yaml`, uncomment the `publicUrl` line and provide this Minio URL as the value of the `publicUrl` field. You must include the `http://` or `https://` prefix.
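Putting the NodePort steps together, a hedged sketch of constructing the URL (`$NODE_ADDRESS` is a hypothetical variable holding the node IP or DNS name from step 1):

```bash
NODE_PORT=$(kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}')
echo "publicUrl: http://${NODE_ADDRESS}:${NODE_PORT}"
```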
## Work with Ingress
Configuring Ingress for your cluster is out of scope for the Ark documentation. If you have already set up Ingress, however, it makes sense to continue with it while you run the example Ark configuration with Minio.
Configuring Ingress for your cluster is out of scope for the Velero documentation. If you have already set up Ingress, however, it makes sense to continue with it while you run the example Velero configuration with Minio.
In this case:
1. Keep the Service type as `ClusterIP`.
1. In `examples/minio/05-ark-backupstoragelocation.yaml`, uncomment the `publicUrl` line and provide the URL and port of your Ingress as the value of the `publicUrl` field.
1. In `examples/minio/05-backupstoragelocation.yaml`, uncomment the `publicUrl` line and provide the URL and port of your Ingress as the value of the `publicUrl` field.
[1]: get-started.md

View File

@@ -1,9 +1,9 @@
# Extend Ark
# Extend Velero
Ark includes mechanisms for extending the core functionality to meet your individual backup/restore needs:
Velero includes mechanisms for extending the core functionality to meet your individual backup/restore needs:
* [Hooks][27] allow you to specify commands to be executed within running pods during a backup. This is useful if you need to run a workload-specific command prior to taking a backup (for example, to flush disk buffers or to freeze a database).
* [Plugins][28] allow you to develop custom object/block storage back-ends or per-item backup/restore actions that can execute arbitrary logic, including modifying the items being backed up/restored. Plugins can be used by Ark without needing to be compiled into the core Ark binary.
* [Plugins][28] allow you to develop custom object/block storage back-ends or per-item backup/restore actions that can execute arbitrary logic, including modifying the items being backed up/restored. Plugins can be used by Velero without needing to be compiled into the core Velero binary.
[27]: hooks.md
[28]: plugins.md

View File

@@ -1,15 +1,15 @@
# FAQ
## When is it appropriate to use Ark instead of etcd's built in backup/restore?
## When is it appropriate to use Velero instead of etcd's built-in backup/restore?
Etcd's backup/restore tooling is good for recovering from data loss in a single etcd cluster. For
example, it is a good idea to take a backup of etcd prior to upgrading etcd itself. For more
sophisticated management of your Kubernetes cluster backups and restores, we feel that Ark is
sophisticated management of your Kubernetes cluster backups and restores, we feel that Velero is
generally a better approach. It gives you the ability to throw away an unstable cluster and restore
your Kubernetes resources and data into a new cluster, which you can't do easily just by backing up
and restoring etcd.
Examples of cases where Ark is useful:
Examples of cases where Velero is useful:
* you don't have access to etcd (e.g. you're running on GKE)
* backing up both Kubernetes resources and persistent volume state
@@ -18,20 +18,20 @@ Examples of cases where Ark is useful:
* backing up Kubernetes resources that are stored across multiple etcd clusters (for example if you
run a custom apiserver)
## Will Ark restore my Kubernetes resources exactly the way they were before?
## Will Velero restore my Kubernetes resources exactly the way they were before?
Yes, with some exceptions. For example, when Ark restores pods it deletes the `nodeName` from the
Yes, with some exceptions. For example, when Velero restores pods it deletes the `nodeName` from the
pod so that it can be scheduled onto a new node. You can see some more examples of the differences
in [pod_action.go](https://github.com/heptio/ark/blob/master/pkg/restore/pod_action.go)
in [pod_action.go](https://github.com/heptio/velero/blob/master/pkg/restore/pod_action.go)
## I'm using Ark in multiple clusters. Should I use the same bucket to store all of my backups?
## I'm using Velero in multiple clusters. Should I use the same bucket to store all of my backups?
We **strongly** recommend that you use a separate bucket per cluster to store backups. Sharing a bucket
across multiple Ark instances can lead to numerous problems - failed backups, overwritten backups,
inadvertently deleted backups, etc., all of which can be avoided by using a separate bucket per Ark
across multiple Velero instances can lead to numerous problems - failed backups, overwritten backups,
inadvertently deleted backups, etc., all of which can be avoided by using a separate bucket per Velero
instance.
Related to this, if you need to restore a backup from cluster A into cluster B, please use restore-only
mode in cluster B's Ark instance (via the `--restore-only` flag on the `ark server` command specified
in your Ark deployment) while it's configured to use cluster A's bucket. This will ensure no
mode in cluster B's Velero instance (via the `--restore-only` flag on the `velero server` command specified
in your Velero deployment) while it's configured to use cluster A's bucket. This will ensure no
new backups are created, and no existing backups are deleted or overwritten.

View File

@@ -1,4 +1,4 @@
# Run Ark on GCP
# Run Velero on GCP
You can run Kubernetes on Google Cloud Platform in either:
@@ -7,9 +7,25 @@ You can run Kubernetes on Google Cloud Platform in either:
If you do not have the `gcloud` and `gsutil` CLIs locally installed, follow the [user guide][16] to set them up.
## Download Velero
1. Download the [latest release's](https://github.com/heptio/velero/releases) tarball for your client platform.
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
_We strongly recommend that you use an [official release](https://github.com/heptio/velero/releases) of Velero. The tarballs for each release contain the
`velero` command-line client **and** version-specific sample YAML files for deploying Velero to your cluster. The code and sample YAML files in the master
branch of the Velero repository are under active development and are not guaranteed to be stable. Use them at your own risk!_
## Create GCS bucket
Heptio Ark requires an object storage bucket in which to store backups, preferably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create a GCS bucket, replacing the <YOUR_BUCKET> placeholder with the name of your bucket:
Velero requires an object storage bucket in which to store backups, preferably unique to a single Kubernetes cluster (see the [FAQ][20] for more details). Create a GCS bucket, replacing the <YOUR_BUCKET> placeholder with the name of your bucket:
```bash
BUCKET=<YOUR_BUCKET>
@@ -19,7 +35,7 @@ gsutil mb gs://$BUCKET/
## Create service account
To integrate Heptio Ark with GCP, create an Ark-specific [Service Account][15]:
To integrate Velero with GCP, create a Velero-specific [Service Account][15]:
1. View your current config settings:
@@ -36,13 +52,13 @@ To integrate Heptio Ark with GCP, create an Ark-specific [Service Account][15]:
2. Create a service account:
```bash
gcloud iam service-accounts create heptio-ark \
--display-name "Heptio Ark service account"
gcloud iam service-accounts create velero \
--display-name "Velero service account"
```
> If you'll be using Ark to backup multiple clusters with multiple GCS buckets, it may be desirable to create a unique username per cluster rather than the default `heptio-ark`.
> If you'll be using Velero to back up multiple clusters with multiple GCS buckets, it may be desirable to create a unique username per cluster rather than the default `velero`.
Then list all accounts and find the `heptio-ark` account you just created:
Then list all accounts and find the `velero` account you just created:
```bash
gcloud iam service-accounts list
```
@@ -51,11 +67,11 @@ To integrate Heptio Ark with GCP, create an Ark-specific [Service Account][15]:
```bash
SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
--filter="displayName:Heptio Ark service account" \
--filter="displayName:Velero service account" \
--format 'value(email)')
```
3. Attach policies to give `heptio-ark` the necessary permissions to function:
3. Attach policies to give `velero` the necessary permissions to function:
```bash
@@ -67,24 +83,25 @@ To integrate Heptio Ark with GCP, create an Ark-specific [Service Account][15]:
compute.snapshots.create
compute.snapshots.useReadOnly
compute.snapshots.delete
compute.zones.get
)
gcloud iam roles create heptio_ark.server \
gcloud iam roles create velero.server \
--project $PROJECT_ID \
--title "Heptio Ark Server" \
--title "Velero Server" \
--permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
--role projects/$PROJECT_ID/roles/heptio_ark.server
--role projects/$PROJECT_ID/roles/velero.server
gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
```
4. Create a service account key, specifying an output file (`credentials-ark`) in your local directory:
4. Create a service account key, specifying an output file (`credentials-velero`) in your local directory:
```bash
gcloud iam service-accounts keys create credentials-ark \
gcloud iam service-accounts keys create credentials-velero \
--iam-account $SERVICE_ACCOUNT_EMAIL
```
@@ -93,7 +110,7 @@ To integrate Heptio Ark with GCP, create an Ark-specific [Service Account][15]:
If you run Google Kubernetes Engine (GKE), make sure that your current IAM user is a cluster-admin. This role is required to create RBAC objects.
See [the GKE documentation][22] for more information.
In the Ark directory (i.e. where you extracted the release tarball), run the following to first set up namespaces, RBAC, and other scaffolding. To run in a custom namespace, make sure that you have edited the YAML files to specify the namespace. See [Run in custom namespace][0].
In the Velero directory (i.e. where you extracted the release tarball), run the following to first set up namespaces, RBAC, and other scaffolding. To run in a custom namespace, make sure that you have edited the YAML files to specify the namespace. See [Run in custom namespace][0].
```bash
kubectl apply -f config/common/00-prereqs.yaml
@@ -103,15 +120,15 @@ Create a Secret. In the directory of the credentials file you just created, run:
```bash
kubectl create secret generic cloud-credentials \
--namespace heptio-ark \
--from-file cloud=credentials-ark
--namespace velero \
--from-file cloud=credentials-velero
```
**Note: If you use a custom namespace, replace `heptio-ark` with the name of the custom namespace**
**Note: If you use a custom namespace, replace `velero` with the name of the custom namespace**
Specify the following values in the example files:
* In file `config/gcp/05-ark-backupstoragelocation.yaml`:
* In file `config/gcp/05-backupstoragelocation.yaml`:
* Replace `<YOUR_BUCKET>`. See the [BackupStorageLocation definition][7] for details.
@@ -125,11 +142,11 @@ Specify the following values in the example files:
## Start the server
In the root of your Ark directory, run:
In the root of your Velero directory, run:
```bash
kubectl apply -f config/gcp/05-ark-backupstoragelocation.yaml
kubectl apply -f config/gcp/06-ark-volumesnapshotlocation.yaml
kubectl apply -f config/gcp/05-backupstoragelocation.yaml
kubectl apply -f config/gcp/06-volumesnapshotlocation.yaml
kubectl apply -f config/gcp/10-deployment.yaml
```

View File

@@ -1,46 +1,50 @@
## Getting started
The following example sets up the Ark server and client, then backs up and restores a sample application.
The following example sets up the Velero server and client, then backs up and restores a sample application.
For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster.
For additional functionality with this setup, see the docs on how to [expose Minio outside your cluster][31].
**NOTE** The example lets you explore basic Ark functionality. Configuring Minio for production is out of scope.
**NOTE** The example lets you explore basic Velero functionality. Configuring Minio for production is out of scope.
See [Set up Ark on your platform][3] for how to configure Ark for a production environment.
See [Set up Velero on your platform][3] for how to configure Velero for a production environment.
If you encounter issues with installing or configuring, see [Debugging Installation Issues](debugging-install.md).
### Prerequisites
* Access to a Kubernetes cluster, version 1.7 or later. Version 1.7.5 or later is required to run `ark backup delete`.
* Access to a Kubernetes cluster, version 1.7 or later. Version 1.7.5 or later is required to run `velero backup delete`.
* A DNS server on the cluster
* `kubectl` installed
### Download
## Download Velero
1. Download the [latest release's][26] tarball for your platform.
1. Download the [latest release's](https://github.com/heptio/velero/releases) tarball for your client platform.
1. Extract the tarball:
```bash
tar -xzf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Ark directory" in subsequent steps.
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `ark` binary from the Ark directory to somewhere in your PATH.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
_We strongly recommend that you use an [official release](https://github.com/heptio/velero/releases) of Velero. The tarballs for each release contain the
`velero` command-line client **and** version-specific sample YAML files for deploying Velero to your cluster. The code and sample YAML files in the master
branch of the Velero repository are under active development and are not guaranteed to be stable. Use them at your own risk!_
#### macOS Installation
On Mac, you can use [HomeBrew](https://brew.sh) to install the `ark` client:
On Mac, you can use [Homebrew](https://brew.sh) to install the `velero` client:
```bash
brew install ark
brew install velero
```
### Set up server
These instructions start the Ark server and a Minio instance that is accessible from within the cluster only. See [Expose Minio outside your cluster][31] for information about configuring your cluster for outside access to Minio. Outside access is required to access logs and run `ark describe` commands.
These instructions start the Velero server and a Minio instance that is accessible from within the cluster only. See [Expose Minio outside your cluster][31] for information about configuring your cluster for outside access to Minio. Outside access is required to access logs and run `velero describe` commands.
1. Start the server and the local storage service. In the Ark directory, run:
1. Start the server and the local storage service. In the Velero directory, run:
```bash
kubectl apply -f config/common/00-prereqs.yaml
@@ -53,10 +57,10 @@ These instructions start the Ark server and a Minio instance that is accessible
kubectl apply -f config/nginx-app/base.yaml
```
1. Check to see that both the Ark and nginx deployments are successfully created:
1. Check to see that both the Velero and nginx deployments are successfully created:
```
kubectl get deployments -l component=ark --namespace=heptio-ark
kubectl get deployments -l component=velero --namespace=velero
kubectl get deployments --namespace=nginx-example
```
@@ -65,25 +69,25 @@ These instructions start the Ark server and a Minio instance that is accessible
1. Create a backup for any object that matches the `app=nginx` label selector:
```
ark backup create nginx-backup --selector app=nginx
velero backup create nginx-backup --selector app=nginx
```
Alternatively, if you want to back up all objects *except* those matching the label `backup=ignore`:
```
ark backup create nginx-backup --selector 'backup notin (ignore)'
velero backup create nginx-backup --selector 'backup notin (ignore)'
```
1. (Optional) Create regularly scheduled backups based on a cron expression using the `app=nginx` label selector:
```
ark schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx
velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx
```
Alternatively, you can use some non-standard shorthand cron expressions:
```
ark schedule create nginx-daily --schedule="@daily" --selector app=nginx
velero schedule create nginx-daily --schedule="@daily" --selector app=nginx
```
See the [cron package's documentation][30] for more usage examples.
@@ -111,13 +115,13 @@ These instructions start the Ark server and a Minio instance that is accessible
1. Run:
```
ark restore create --from-backup nginx-backup
velero restore create --from-backup nginx-backup
```
1. Run:
```
ark restore get
velero restore get
```
After the restore finishes, the output looks like the following:
@@ -134,7 +138,7 @@ After a successful restore, the `STATUS` column is `Completed`, and `WARNINGS` a
If there are errors or warnings, you can look at them in detail:
```
ark restore describe <RESTORE_NAME>
velero restore describe <RESTORE_NAME>
```
For more information, see [the debugging information][18].
@@ -145,21 +149,21 @@ If you want to delete any backups you created, including data in object storage
volume snapshots, you can run:
```
ark backup delete BACKUP_NAME
velero backup delete BACKUP_NAME
```
This asks the Ark server to delete all backup data associated with `BACKUP_NAME`. You need to do
this for each backup you want to permanently delete. A future version of Ark will allow you to
This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do
this for each backup you want to permanently delete. A future version of Velero will allow you to
delete multiple backups by name or label selector.
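In the meantime, a small shell loop can approximate bulk deletion by name prefix (a sketch; the `nginx-` prefix is illustrative, and `--confirm` skips the interactive prompt):
```
for name in $(velero backup get | awk 'NR>1 && $1 ~ /^nginx-/ {print $1}'); do
  velero backup delete "$name" --confirm
done
```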
Once fully removed, the backup is no longer visible when you run:
```
ark backup get BACKUP_NAME
velero backup get BACKUP_NAME
```
If you want to uninstall Ark but preserve the backup data in object storage and persistent volume
snapshots, it is safe to remove the `heptio-ark` namespace and everything else created for this
If you want to uninstall Velero but preserve the backup data in object storage and persistent volume
snapshots, it is safe to remove the `velero` namespace and everything else created for this
example:
```
@@ -171,5 +175,5 @@ kubectl delete -f config/nginx-app/base.yaml
[31]: expose-minio.md
[3]: install-overview.md
[18]: debugging-restores.md
[26]: https://github.com/heptio/ark/releases
[26]: https://github.com/heptio/velero/releases
[30]: https://godoc.org/github.com/robfig/cron


@@ -1,16 +1,16 @@
# Hooks
Heptio Ark currently supports executing commands in containers in pods during a backup.
Velero currently supports executing commands in containers in pods during a backup.
## Backup Hooks
When performing a backup, you can specify one or more commands to execute in a container in a pod
when that pod is being backed up.
Ark versions prior to v0.7.0 only support hooks that execute prior to any custom action processing
Velero versions prior to v0.7.0 only support hooks that execute prior to any custom action processing
("pre" hooks).
As of version v0.7.0, Ark also supports "post" hooks - these execute after all custom actions have
As of version v0.7.0, Velero also supports "post" hooks - these execute after all custom actions have
completed, as well as after all the additional items specified by custom actions have been backed
up.
@@ -18,28 +18,26 @@ There are two ways to specify hooks: annotations on the pod itself, and in the B
### Specifying Hooks As Pod Annotations
You can use the following annotations on a pod to make Ark execute a hook when backing up the pod:
You can use the following annotations on a pod to make Velero execute a hook when backing up the pod:
#### Pre hooks
| Annotation Name | Description |
| --- | --- |
| `pre.hook.backup.ark.heptio.com/container` | The container where the command should be executed. Defaults to the first container in the pod. Optional. |
| `pre.hook.backup.ark.heptio.com/command` | The command to execute. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]` |
| `pre.hook.backup.ark.heptio.com/on-error` | What to do if the command returns a non-zero exit code. Defaults to Fail. Valid values are Fail and Continue. Optional. |
| `pre.hook.backup.ark.heptio.com/timeout` | How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional. |
| `pre.hook.backup.velero.io/container` | The container where the command should be executed. Defaults to the first container in the pod. Optional. |
| `pre.hook.backup.velero.io/command` | The command to execute. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]` |
| `pre.hook.backup.velero.io/on-error` | What to do if the command returns a non-zero exit code. Defaults to Fail. Valid values are Fail and Continue. Optional. |
| `pre.hook.backup.velero.io/timeout` | How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional. |
Ark v0.7.0+ continues to support the original (deprecated) way to specify pre hooks - without the
`pre.` prefix in the annotation names (e.g. `hook.backup.ark.heptio.com/container`).
#### Post hooks (v0.7.0+)
| Annotation Name | Description |
| --- | --- |
| `post.hook.backup.ark.heptio.com/container` | The container where the command should be executed. Defaults to the first container in the pod. Optional. |
| `post.hook.backup.ark.heptio.com/command` | The command to execute. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]` |
| `post.hook.backup.ark.heptio.com/on-error` | What to do if the command returns a non-zero exit code. Defaults to Fail. Valid values are Fail and Continue. Optional. |
| `post.hook.backup.ark.heptio.com/timeout` | How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional. |
| `post.hook.backup.velero.io/container` | The container where the command should be executed. Defaults to the first container in the pod. Optional. |
| `post.hook.backup.velero.io/command` | The command to execute. If you need multiple arguments, specify the command as a JSON array, such as `["/usr/bin/uname", "-a"]` |
| `post.hook.backup.velero.io/on-error` | What to do if the command returns a non-zero exit code. Defaults to Fail. Valid values are Fail and Continue. Optional. |
| `post.hook.backup.velero.io/timeout` | How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults to 30s. Optional. |
### Specifying Hooks in the Backup Spec
@@ -56,25 +54,25 @@ setup this example.
### Annotations
The Ark [example/nginx-app/with-pv.yaml][2] serves as an example of adding the pre and post hook annotations directly
The Velero [example/nginx-app/with-pv.yaml][2] serves as an example of adding the pre and post hook annotations directly
to your declarative deployment. Below is an example of what updating an object in place might look like.
```shell
kubectl annotate pod -n nginx-example -l app=nginx \
pre.hook.backup.ark.heptio.com/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' \
pre.hook.backup.ark.heptio.com/container=fsfreeze \
post.hook.backup.ark.heptio.com/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]' \
post.hook.backup.ark.heptio.com/container=fsfreeze
pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/log/nginx"]' \
pre.hook.backup.velero.io/container=fsfreeze \
post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/log/nginx"]' \
post.hook.backup.velero.io/container=fsfreeze
```
Now test the pre and post hooks by creating a backup. You can use the Ark logs to verify that the pre and post
Now test the pre and post hooks by creating a backup. You can use the Velero logs to verify that the pre and post
hooks are running and exiting without error.
```shell
ark backup create nginx-hook-test
velero backup create nginx-hook-test
ark backup get nginx-hook-test
ark backup logs nginx-hook-test | grep hookCommand
velero backup get nginx-hook-test
velero backup logs nginx-hook-test | grep hookCommand
```


@@ -1,31 +1,47 @@
# Use IBM Cloud Object Storage as Ark's storage destination.
You can deploy Ark on IBM [Public][5] or [Private][4] clouds, or on any other Kubernetes cluster, and in any case you can use IBM Cloud Object Storage as a destination for Ark's backups.
# Use IBM Cloud Object Storage as Velero's storage destination.
You can deploy Velero on IBM [Public][5] or [Private][4] clouds, or on any other Kubernetes cluster, and in any case you can use IBM Cloud Object Storage as a destination for Velero's backups.
To set up IBM Cloud Object Storage (COS) as Ark's destination, you:
To set up IBM Cloud Object Storage (COS) as Velero's destination, you:
* Download an official release of Velero
* Create your COS instance
* Create an S3 bucket
* Define a service that can store data in the bucket
* Configure and start the Ark server
* Configure and start the Velero server
## Download Velero
1. Download the [latest release's](https://github.com/heptio/velero/releases) tarball for your client platform.
1. Extract the tarball:
```bash
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
```
We'll refer to the directory you extracted to as the "Velero directory" in subsequent steps.
1. Move the `velero` binary from the Velero directory to somewhere in your PATH.
_We strongly recommend that you use an [official release](https://github.com/heptio/velero/releases) of Velero. The tarballs for each release contain the
`velero` command-line client **and** version-specific sample YAML files for deploying Velero to your cluster. The code and sample YAML files in the master
branch of the Velero repository are under active development and are not guaranteed to be stable. Use them at your own risk!_
## Create COS instance
If you don't have a COS instance, you can create a new one, according to the detailed instructions in [Creating a new resource instance][1].
## Create an S3 bucket
Heptio Ark requires an object storage bucket to store backups in. See instructions in [Create some buckets to store your data][2].
Velero requires an object storage bucket to store backups in. See instructions in [Create some buckets to store your data][2].
## Define a service that can store data in the bucket.
The process of creating service credentials is described in [Service credentials][3].
Several comments:
1. The Ark service will write its backup into the bucket, so it requires the “Writer” access role.
1. The Velero service will write its backup into the bucket, so it requires the “Writer” access role.
2. Ark uses an AWS S3-compatible API, which means it authenticates using a signature created from a pair of access and secret keys (a set of HMAC credentials). You can create these HMAC credentials by specifying `{"HMAC":true}` as an optional inline parameter. See step 3 in the [Service credentials][3] guide.
2. Velero uses an AWS S3-compatible API, which means it authenticates using a signature created from a pair of access and secret keys (a set of HMAC credentials). You can create these HMAC credentials by specifying `{"HMAC":true}` as an optional inline parameter. See step 3 in the [Service credentials][3] guide.
3. After successfully creating a Service credential, you can view the JSON definition of the credential. Under the `cos_hmac_keys` entry there are `access_key_id` and `secret_access_key`. We will use them in the next step.
4. Create an Ark-specific credentials file (`credentials-ark`) in your local directory:
4. Create a Velero-specific credentials file (`credentials-velero`) in your local directory:
```
[default]
@@ -37,7 +53,7 @@ Several comments:
## Credentials and configuration
In the Ark directory (i.e. where you extracted the release tarball), run the following to first set up namespaces, RBAC, and other scaffolding. To run in a custom namespace, make sure that you have edited the YAML files to specify the namespace. See [Run in custom namespace][0].
In the Velero directory (i.e. where you extracted the release tarball), run the following to first set up namespaces, RBAC, and other scaffolding. To run in a custom namespace, make sure that you have edited the YAML files to specify the namespace. See [Run in custom namespace][0].
```bash
kubectl apply -f config/common/00-prereqs.yaml
@@ -47,13 +63,13 @@ Create a Secret. In the directory of the credentials file you just created, run:
```bash
kubectl create secret generic cloud-credentials \
--namespace <ARK_NAMESPACE> \
--from-file cloud=credentials-ark
--namespace <VELERO_NAMESPACE> \
--from-file cloud=credentials-velero
```
Specify the following values in the example files:
* In `config/ibm/05-ark-backupstoragelocation.yaml`:
* In `config/ibm/05-backupstoragelocation.yaml`:
* Replace `<YOUR_BUCKET>`, `<YOUR_REGION>` and `<YOUR_URL_ACCESS_POINT>`. See the [BackupStorageLocation definition][6] for details; a sketch of the edited file follows this list.
@@ -61,12 +77,12 @@ Specify the following values in the example files:
* Replace `<YOUR_STORAGE_CLASS_NAME>` with your `StorageClass` name.
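For reference, here is a sketch of what the relevant fields of the edited `config/ibm/05-backupstoragelocation.yaml` might look like (the angle-bracket values are placeholders; the authoritative template ships in the release tarball):
```bash
# Applied directly instead of editing the file; replace the placeholders first.
cat <<EOF | kubectl apply -f -
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: <YOUR_BUCKET>
  config:
    region: <YOUR_REGION>
    s3ForcePathStyle: "true"
    s3Url: <YOUR_URL_ACCESS_POINT>
EOF
```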
## Start the Ark server
## Start the Velero server
In the root of your Ark directory, run:
In the root of your Velero directory, run:
```bash
kubectl apply -f config/ibm/05-ark-backupstoragelocation.yaml
kubectl apply -f config/ibm/05-backupstoragelocation.yaml
kubectl apply -f config/ibm/10-deployment.yaml
```


@@ -1,21 +1,21 @@
# Image tagging policy
This document describes Ark's image tagging policy.
This document describes Velero's image tagging policy.
## Released versions
`gcr.io/heptio-images/ark:<SemVer>`
`gcr.io/heptio-images/velero:<SemVer>`
Ark follows the [Semantic Versioning](http://semver.org/) standard for releases. Each tag in the `github.com/heptio/ark` repository has a matching image, e.g. `gcr.io/heptio-images/ark:v0.8.0`.
Velero follows the [Semantic Versioning](http://semver.org/) standard for releases. Each tag in the `github.com/heptio/velero` repository has a matching image, e.g. `gcr.io/heptio-images/velero:v0.11.0`.
### Latest
`gcr.io/heptio-images/ark:latest`
`gcr.io/heptio-images/velero:latest`
The `latest` tag follows the most recently released version of Ark.
The `latest` tag follows the most recently released version of Velero.
## Development
`gcr.io/heptio-images/ark:master`
`gcr.io/heptio-images/velero:master`
The `master` tag follows the latest commit to land on the `master` branch.

BIN docs/img/velero.png (new binary file, 44 KiB; not shown)


@@ -1,42 +1,42 @@
# Set up Ark on your platform
# Set up Velero on your platform
You can run Ark with a cloud provider or on-premises. For detailed information about the platforms that Ark supports, see [Compatible Storage Providers][99].
You can run Velero with a cloud provider or on-premises. For detailed information about the platforms that Velero supports, see [Compatible Storage Providers][99].
In version 0.7.0 and later, you can run Ark in any namespace, which requires additional customization. See [Run in custom namespace][3].
In version 0.7.0 and later, you can run Velero in any namespace, which requires additional customization. See [Run in custom namespace][3].
In version 0.9.0 and later, you can use Ark's integration with restic, which requires additional setup. See [restic instructions][20].
In version 0.9.0 and later, you can use Velero's integration with restic, which requires additional setup. See [restic instructions][20].
## Customize configuration
Whether you run Ark on a cloud provider or on-premises, if you have more than one volume snapshot location for a given volume provider, you can specify its default location for backups by setting a server flag in your Ark deployment YAML.
Whether you run Velero on a cloud provider or on-premises, if you have more than one volume snapshot location for a given volume provider, you can specify its default location for backups by setting a server flag in your Velero deployment YAML.
For details, see the documentation topics for individual cloud providers.
## Cloud provider
The Ark repository includes a set of example YAML files that specify the settings for each supported cloud provider. For provider-specific instructions, see:
The Velero repository includes a set of example YAML files that specify the settings for each supported cloud provider. For provider-specific instructions, see:
* [Run Ark on AWS][0]
* [Run Ark on GCP][1]
* [Run Ark on Azure][2]
* [Use IBM Cloud Object Store as Ark's storage destination][4]
* [Run Velero on AWS][0]
* [Run Velero on GCP][1]
* [Run Velero on Azure][2]
* [Use IBM Cloud Object Store as Velero's storage destination][4]
## On-premises
You can run Ark in an on-premises cluster in different ways depending on your requirements.
You can run Velero in an on-premises cluster in different ways depending on your requirements.
First, you must select an object storage backend that Ark can use to store backup data. [Compatible Storage Providers][99] contains information on various
First, you must select an object storage backend that Velero can use to store backup data. [Compatible Storage Providers][99] contains information on various
options that are supported or have been reported to work by users. [Minio][101] is an option if you want to keep your backup data on-premises and you are
not using another storage platform that offers an S3-compatible object storage API.
Second, if you need to back up persistent volume data, you must select a volume backup solution. [Volume Snapshot Providers][100] contains information on
the supported options. For example, if you use [Portworx][102] for persistent storage, you can install their Ark plugin to get native Portworx snapshots as part
of your Ark backups. If there is no native snapshot plugin available for your storage platform, you can use Ark's [restic integration][20], which provides a
the supported options. For example, if you use [Portworx][102] for persistent storage, you can install their Velero plugin to get native Portworx snapshots as part
of your Velero backups. If there is no native snapshot plugin available for your storage platform, you can use Velero's [restic integration][20], which provides a
platform-agnostic backup solution for volume data.
## Examples
After you set up the Ark server, try these examples:
After you set up the Velero server, try these examples:
### Basic example (without PersistentVolumes)
@@ -49,7 +49,7 @@ After you set up the Ark server, try these examples:
1. Create a backup:
```bash
ark backup create nginx-backup --include-namespaces nginx-example
velero backup create nginx-backup --include-namespaces nginx-example
```
1. Simulate a disaster:
@@ -63,7 +63,7 @@ After you set up the Ark server, try these examples:
1. Restore your lost resources:
```bash
ark restore create --from-backup nginx-backup
velero restore create --from-backup nginx-backup
```
### Snapshot example (with PersistentVolumes)
@@ -79,7 +79,7 @@ After you set up the Ark server, try these examples:
1. Create a backup with PV snapshotting:
```bash
ark backup create nginx-backup --include-namespaces nginx-example
velero backup create nginx-backup --include-namespaces nginx-example
```
1. Simulate a disaster:
@@ -93,7 +93,7 @@ After you set up the Ark server, try these examples:
1. Restore your lost resources:
```bash
ark restore create --from-backup nginx-backup
velero restore create --from-backup nginx-backup
```
[0]: aws-config.md


@@ -1,5 +1,5 @@
/*
Copyright 2018 the Heptio Ark contributors.
Copyright 2018 the Velero contributors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -24,7 +24,7 @@ import (
"os"
"text/template"
"github.com/heptio/ark/pkg/cmd/cli/bug"
"github.com/heptio/velero/pkg/cmd/cli/bug"
)
func main() {
@@ -38,7 +38,7 @@ func main() {
if err != nil {
log.Fatal(err)
}
err = tmpl.Execute(outFile, bug.ArkBugInfo{})
err = tmpl.Execute(outFile, bug.VeleroBugInfo{})
if err != nil {
log.Fatal(err)
}


@@ -1,55 +1,55 @@
# Backup Storage Locations and Volume Snapshot Locations
Ark v0.10 introduces a new way of configuring where Ark backups and their associated persistent volume snapshots are stored.
Velero v0.10 introduces a new way of configuring where Velero backups and their associated persistent volume snapshots are stored.
## Motivations
In Ark versions prior to v0.10, the configuration for where to store backups & volume snapshots is specified in a `Config` custom resource. The `backupStorageProvider` section captures the place where all Ark backups should be stored. This is defined by a **provider** (e.g. `aws`, `azure`, `gcp`, `minio`, etc.), a **bucket**, and possibly some additional provider-specific settings (e.g. `region`). Similarly, the `persistentVolumeProvider` section captures the place where all persistent volume snapshots taken as part of Ark backups should be stored, and is defined by a **provider** and additional provider-specific settings (e.g. `region`).
In Velero versions prior to v0.10, the configuration for where to store backups & volume snapshots is specified in a `Config` custom resource. The `backupStorageProvider` section captures the place where all Velero backups should be stored. This is defined by a **provider** (e.g. `aws`, `azure`, `gcp`, `minio`, etc.), a **bucket**, and possibly some additional provider-specific settings (e.g. `region`). Similarly, the `persistentVolumeProvider` section captures the place where all persistent volume snapshots taken as part of Velero backups should be stored, and is defined by a **provider** and additional provider-specific settings (e.g. `region`).
There are a number of use cases that this basic design does not support, such as:
- Take snapshots of more than one kind of persistent volume in a single Ark backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
- Have some Ark backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region
- Take snapshots of more than one kind of persistent volume in a single Velero backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
- Have some Velero backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region
- For volume providers that support it (e.g. Portworx), have some snapshots be stored locally on the cluster and have others be stored in the cloud
Additionally, as we look ahead to backup replication, a major feature on our roadmap, we know that we'll need Ark to be able to support multiple possible storage locations.
Additionally, as we look ahead to backup replication, a major feature on our roadmap, we know that we'll need Velero to be able to support multiple possible storage locations.
## Overview
In Ark v0.10 we got rid of the `Config` custom resource, and replaced it with two new custom resources, `BackupStorageLocation` and `VolumeSnapshotLocation`. The new resources directly replace the legacy `backupStorageProvider` and `persistentVolumeProvider` sections of the `Config` resource, respectively.
In Velero v0.10 we got rid of the `Config` custom resource, and replaced it with two new custom resources, `BackupStorageLocation` and `VolumeSnapshotLocation`. The new resources directly replace the legacy `backupStorageProvider` and `persistentVolumeProvider` sections of the `Config` resource, respectively.
Now, the user can pre-define more than one possible `BackupStorageLocation` and more than one `VolumeSnapshotLocation`, and can select *at backup creation time* the location in which the backup and associated snapshots should be stored.
A `BackupStorageLocation` is defined as a bucket, a prefix within that bucket under which all Ark data should be stored, and a set of additional provider-specific fields (e.g. AWS region, Azure storage account, etc.) The [API documentation][1] captures the configurable parameters for each in-tree provider.
A `BackupStorageLocation` is defined as a bucket, a prefix within that bucket under which all Velero data should be stored, and a set of additional provider-specific fields (e.g. AWS region, Azure storage account, etc.) The [API documentation][1] captures the configurable parameters for each in-tree provider.
A `VolumeSnapshotLocation` is defined entirely by provider-specific fields (e.g. AWS region, Azure resource group, Portworx snapshot type, etc.) The [API documentation][2] captures the configurable parameters for each in-tree provider.
Additionally, since multiple `VolumeSnapshotLocations` can be created, the user can now configure locations for more than one volume provider, and if the cluster has volumes from multiple providers (e.g. AWS EBS and Portworx), all of them can be snapshotted in a single Ark backup.
Additionally, since multiple `VolumeSnapshotLocations` can be created, the user can now configure locations for more than one volume provider, and if the cluster has volumes from multiple providers (e.g. AWS EBS and Portworx), all of them can be snapshotted in a single Velero backup.
## Limitations / Caveats
- Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take an Ark backup using a volume snapshot location with a different region than where your cluster's volumes are, the backup will fail.
- Volume snapshots are still limited by where your provider allows you to create snapshots. For example, AWS and Azure do not allow you to create a volume snapshot in a different region than where the volume is. If you try to take a Velero backup using a volume snapshot location with a different region than where your cluster's volumes are, the backup will fail.
- Each Ark backup has one `BackupStorageLocation`, and one `VolumeSnapshotLocation` per volume provider. It is not possible (yet) to send a single Ark backup to multiple backup storage locations simultaneously, or a single volume snapshot to multiple locations simultaneously. However, you can always set up multiple scheduled backups that differ only in the storage locations used if redundancy of backups across locations is important.
- Each Velero backup has one `BackupStorageLocation`, and one `VolumeSnapshotLocation` per volume provider. It is not possible (yet) to send a single Velero backup to multiple backup storage locations simultaneously, or a single volume snapshot to multiple locations simultaneously. However, you can always set up multiple scheduled backups that differ only in the storage locations used if redundancy of backups across locations is important (see the sketch after this list).
- Cross-provider snapshots are not supported. If you have a cluster with more than one type of volume (e.g. EBS and Portworx), but you only have a `VolumeSnapshotLocation` configured for EBS, then Ark will **only** snapshot the EBS volumes.
- Cross-provider snapshots are not supported. If you have a cluster with more than one type of volume (e.g. EBS and Portworx), but you only have a `VolumeSnapshotLocation` configured for EBS, then Velero will **only** snapshot the EBS volumes.
- Restic data is now stored under a prefix/subdirectory of the main Ark bucket, and will go into the bucket corresponding to the `BackupStorageLocation` selected by the user at backup creation time.
- Restic data is now stored under a prefix/subdirectory of the main Velero bucket, and will go into the bucket corresponding to the `BackupStorageLocation` selected by the user at backup creation time.
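As noted above, redundancy across locations can be approximated with paired schedules. A sketch (assumes the `default` and `s3-alt-region` backup storage locations created in the examples below, and that `velero schedule create` accepts the same `--storage-location` flag as `velero backup create`):
```shell
# Two schedules that differ only in where the backup is stored.
velero schedule create daily-to-east --schedule="0 1 * * *" --storage-location default
velero schedule create daily-to-west --schedule="0 1 * * *" --storage-location s3-alt-region
```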
## Examples
Let's look at some examples of how we can use this new mechanism to address each of our previously unsupported use cases:
#### Take snapshots of more than one kind of persistent volume in a single Ark backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
#### Take snapshots of more than one kind of persistent volume in a single Velero backup (e.g. in a cluster with both EBS volumes and Portworx volumes)
During server configuration:
```shell
ark snapshot-location create ebs-us-east-1 \
velero snapshot-location create ebs-us-east-1 \
--provider aws \
--config region=us-east-1
ark snapshot-location create portworx-cloud \
velero snapshot-location create portworx-cloud \
--provider portworx \
--config type=cloud
```
@@ -57,43 +57,43 @@ ark snapshot-location create portworx-cloud \
During backup creation:
```shell
ark backup create full-cluster-backup \
velero backup create full-cluster-backup \
--volume-snapshot-locations ebs-us-east-1,portworx-cloud
```
Alternately, since in this example there's only one possible volume snapshot location configured for each of our two providers (`ebs-us-east-1` for `aws`, and `portworx-cloud` for `portworx`), Ark doesn't require them to be explicitly specified when creating the backup:
Alternately, since in this example there's only one possible volume snapshot location configured for each of our two providers (`ebs-us-east-1` for `aws`, and `portworx-cloud` for `portworx`), Velero doesn't require them to be explicitly specified when creating the backup:
```shell
ark backup create full-cluster-backup
velero backup create full-cluster-backup
```
#### Have some Ark backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region
#### Have some Velero backups go to a bucket in an eastern USA region, and others go to a bucket in a western USA region
During server configuration:
```shell
ark backup-location create default \
velero backup-location create default \
--provider aws \
--bucket ark-backups \
--bucket velero-backups \
--config region=us-east-1
ark backup-location create s3-alt-region \
velero backup-location create s3-alt-region \
--provider aws \
--bucket ark-backups-alt \
--bucket velero-backups-alt \
--config region=us-west-1
```
During backup creation:
```shell
# The Ark server will automatically store backups in the backup storage location named "default" if
# The Velero server will automatically store backups in the backup storage location named "default" if
# one is not specified when creating the backup. You can alter which backup storage location is used
# by default by setting the --default-backup-storage-location flag on the `ark server` command (run
# by the Ark deployment) to the name of a different backup storage location.
ark backup create full-cluster-backup
# by default by setting the --default-backup-storage-location flag on the `velero server` command (run
# by the Velero deployment) to the name of a different backup storage location.
velero backup create full-cluster-backup
```
Or:
```shell
ark backup create full-cluster-alternate-location-backup \
velero backup create full-cluster-alternate-location-backup \
--storage-location s3-alt-region
```
@@ -102,11 +102,11 @@ ark backup create full-cluster-alternate-location-backup \
During server configuration:
```shell
ark snapshot-location create portworx-local \
velero snapshot-location create portworx-local \
--provider portworx \
--config type=local
ark snapshot-location create portworx-cloud \
velero snapshot-location create portworx-cloud \
--provider portworx \
--config type=cloud
```
@@ -116,49 +116,49 @@ During backup creation:
```shell
# Note that since in this example we have two possible volume snapshot locations for the Portworx
# provider, we need to explicitly specify which one to use when creating a backup. Alternately,
# you can set the --default-volume-snapshot-locations flag on the `ark server` command (run by
# the Ark deployment) to specify which location should be used for each provider by default, in
# you can set the --default-volume-snapshot-locations flag on the `velero server` command (run by
# the Velero deployment) to specify which location should be used for each provider by default, in
# which case you don't need to specify it when creating a backup.
ark backup create local-snapshot-backup \
velero backup create local-snapshot-backup \
--volume-snapshot-locations portworx-local
```
Or:
```shell
ark backup create cloud-snapshot-backup \
velero backup create cloud-snapshot-backup \
--volume-snapshot-locations portworx-cloud
```
#### One location is still easy
If you don't have a use case for more than one location, it's still just as easy to use Ark. Let's assume you're running on AWS, in the `us-west-1` region:
If you don't have a use case for more than one location, it's still just as easy to use Velero. Let's assume you're running on AWS, in the `us-west-1` region:
During server configuration:
```shell
ark backup-location create default \
velero backup-location create default \
--provider aws \
--bucket ark-backups \
--bucket velero-backups \
--config region=us-west-1
ark snapshot-location create ebs-us-west-1 \
velero snapshot-location create ebs-us-west-1 \
--provider aws \
--config region=us-west-1
```
During backup creation:
```shell
# Ark will automatically use your configured backup storage location and volume snapshot location.
# Velero will automatically use your configured backup storage location and volume snapshot location.
# Nothing new needs to be specified when creating a backup.
ark backup create full-cluster-backup
velero backup create full-cluster-backup
```
## Additional Use Cases
1. If you're using Azure's AKS, you may want to store your volume snapshots outside of the "infrastructure" resource group that is automatically created when you create your AKS cluster. This is now possible using a `VolumeSnapshotLocation`, by specifying a `resourceGroup` under the `config` section of the snapshot location. See the [Azure volume snapshot location documentation][3] for details.
1. If you're using Azure, you may want to store your Ark backups across multiple storage accounts and/or resource groups. This is now possible using a `BackupStorageLocation`, by specifying a `storageAccount` and/or `resourceGroup`, respectively, under the `config` section of the backup location. See the [Azure backup storage location documentation][4] for details.
1. If you're using Azure, you may want to store your Velero backups across multiple storage accounts and/or resource groups. This is now possible using a `BackupStorageLocation`, by specifying a `storageAccount` and/or `resourceGroup`, respectively, under the `config` section of the backup location. See the [Azure backup storage location documentation][4] for details.


@@ -0,0 +1,82 @@
# Migrating from Heptio Ark to Velero
As of v0.11.0, Heptio Ark has become Velero. This means the following changes have been made:
* The `ark` CLI client is now `velero`.
* The default Kubernetes namespace and ServiceAccount are now named `velero` (formerly `heptio-ark`).
* The container image name is now `gcr.io/heptio-images/velero` (formerly `gcr.io/heptio-images/ark`).
* CRDs are now under the new `velero.io` API group name (formerly `ark.heptio.com`).
The following instructions will help you migrate your existing Ark installation to Velero.
# Prerequisites
* Ark v0.10.x installed. See the v0.10.x [upgrade instructions][1] to upgrade from older versions.
* `kubectl` installed.
* `cluster-admin` permissions.
# Migration process
At a high level, the migration process involves the following steps:
* Scale down the `ark` deployment, so it will not process schedules, backups, or restores during the migration period.
* Create a new namespace (named `velero` by default).
* Apply the new CRDs.
* Migrate existing Ark CRD objects, labels, and annotations to the new Velero equivalents.
* Recreate the existing cloud credentials secret(s) in the velero namespace.
* Apply the updated Kubernetes deployment and daemonset (for restic support) to use the new container images and namespace.
* Remove the existing Ark namespace (which includes the deployment), CRDs, and ClusterRoleBinding.
These steps are provided in a script here:
```bash
kubectl scale --namespace heptio-ark deployment/ark --replicas 0
OS=$(uname | tr '[:upper:]' '[:lower:]') # Determine if the OS is Linux or macOS
ARCH="amd64"
# Download and unpack the velero client/example tarball
curl -L https://github.com/heptio/velero/releases/download/v0.11.0/velero-v0.11.0-${OS}-${ARCH}.tar.gz --output velero-v0.11.0-${OS}-${ARCH}.tar.gz
tar xvf velero-v0.11.0-${OS}-${ARCH}.tar.gz
# Create the prerequisite CRDs and namespace
kubectl apply -f config/common/00-prereqs.yaml
# Download and unpack the crd-migrator tool
curl -L https://github.com/vmware/crd-migration-tool/releases/download/v1.0.0/crd-migration-tool-v1.0.0-${OS}-${ARCH}.tar.gz --output crd-migration-tool-v1.0.0-${OS}-${ARCH}.tar.gz
tar xvf crd-migration-tool-v1.0.0-${OS}-${ARCH}.tar.gz
# Run the tool against your cluster.
./crd-migrator \
--from ark.heptio.com/v1 \
--to velero.io/v1 \
--label-mappings ark.heptio.com:velero.io,ark-schedule:velero.io/schedule-name \
--annotation-mappings ark.heptio.com:velero.io \
--namespace-mappings heptio-ark:velero
# Copy the necessary secret from the ark namespace
kubectl get secret --namespace heptio-ark cloud-credentials --export -o yaml | kubectl apply --namespace velero -f -
# Apply the Velero deployment and restic DaemonSet for your platform
## GCP
#kubectl apply -f config/gcp/10-deployment.yaml
#kubectl apply -f config/gcp/20-restic-daemonset.yaml
## AWS
#kubectl apply -f config/aws/10-deployment.yaml
#kubectl apply -f config/aws/20-restic-daemonset.yaml
## Azure
#kubectl apply -f config/azure/00-deployment.yaml
#kubectl apply -f config/azure/20-restic-daemonset.yaml
# Verify your data is still present
./velero get backup
./velero get restore
# Remove old Ark data
kubectl delete namespace heptio-ark
kubectl delete crds -l component=ark
kubectl delete clusterrolebindings -l component=ark
```
[1]: https://heptio.github.io/velero/v0.10.0/upgrading-to-v0.10


@@ -2,31 +2,31 @@
*Using Backups and Restores*
Heptio Ark can help you port your resources from one cluster to another, as long as you point each Ark instance to the same cloud object storage location. In this scenario, we are also assuming that your clusters are hosted by the same cloud provider. **Note that Heptio Ark does not support the migration of persistent volumes across cloud providers.**
Velero can help you port your resources from one cluster to another, as long as you point each Velero instance to the same cloud object storage location. In this scenario, we are also assuming that your clusters are hosted by the same cloud provider. **Note that Velero does not support the migration of persistent volumes across cloud providers.**
1. *(Cluster 1)* Assuming you haven't already been checkpointing your data with the Ark `schedule` operation, you need to first back up your entire cluster (replacing `<BACKUP-NAME>` as desired):
1. *(Cluster 1)* Assuming you haven't already been checkpointing your data with the Velero `schedule` operation, you need to first back up your entire cluster (replacing `<BACKUP-NAME>` as desired):
```
ark backup create <BACKUP-NAME>
velero backup create <BACKUP-NAME>
```
The default TTL is 30 days (720 hours); you can use the `--ttl` flag to change this as necessary.
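For example, to keep the backup for one week instead (the flag takes a Go-style duration; the name is a placeholder):
```
velero backup create <BACKUP-NAME> --ttl 168h0m0s
```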
1. *(Cluster 2)* Add the `--restore-only` flag to the server spec in the Ark deployment YAML.
1. *(Cluster 2)* Add the `--restore-only` flag to the server spec in the Velero deployment YAML.
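One way to do this (a sketch; assumes the default `velero` deployment name and namespace):
```
# Open the server deployment for editing, then add "- --restore-only"
# to the container's args, directly after "- server".
kubectl -n velero edit deployment/velero
```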
1. *(Cluster 2)* Make sure that the `BackupStorageLocation` and `VolumeSnapshotLocation` CRDs match the ones from *Cluster 1*, so that your new Ark server instance points to the same bucket.
1. *(Cluster 2)* Make sure that the `BackupStorageLocation` and `VolumeSnapshotLocation` CRDs match the ones from *Cluster 1*, so that your new Velero server instance points to the same bucket.
1. *(Cluster 2)* Make sure that the Ark Backup object is created. Ark resources are synchronized with the backup files in cloud storage.
1. *(Cluster 2)* Make sure that the Velero Backup object is created. Velero resources are synchronized with the backup files in cloud storage.
```
ark backup describe <BACKUP-NAME>
velero backup describe <BACKUP-NAME>
```
**Note:** As of version 0.10, the default sync interval is 1 minute, so make sure to wait before checking. You can configure this interval with the `--backup-sync-period` flag to the Ark server.
**Note:** As of version 0.10, the default sync interval is 1 minute, so make sure to wait before checking. You can configure this interval with the `--backup-sync-period` flag to the Velero server.
1. *(Cluster 2)* Once you have confirmed that the right Backup (`<BACKUP-NAME>`) is now present, you can restore everything with:
```
ark restore create --from-backup <BACKUP-NAME>
velero restore create --from-backup <BACKUP-NAME>
```
## Verify both clusters
@@ -36,13 +36,13 @@ Check that the second cluster is behaving as expected:
1. *(Cluster 2)* Run:
```
ark restore get
velero restore get
```
1. Then run:
```
ark restore describe <RESTORE-NAME-FROM-GET-COMMAND>
velero restore describe <RESTORE-NAME-FROM-GET-COMMAND>
```
If you encounter issues, make sure that Ark is running in the same namespace in both clusters.
If you encounter issues, make sure that Velero is running in the same namespace in both clusters.


@@ -1,38 +1,38 @@
# Run in custom namespace
In Ark version 0.7.0 and later, you can run Ark in any namespace. To do so, you specify the
namespace in the YAML files that configure the Ark server. You then also specify the namespace when
you run Ark client commands.
In Velero version 0.7.0 and later, you can run Velero in any namespace. To do so, you specify the
namespace in the YAML files that configure the Velero server. You then also specify the namespace when
you run Velero client commands.
## Edit the example files
The Ark release tarballs include a set of example configs that you can use to set up your Ark server. The
examples place the server and backup/schedule/restore/etc. data in the `heptio-ark` namespace.
The Velero release tarballs include a set of example configs that you can use to set up your Velero server. The
examples place the server and backup/schedule/restore/etc. data in the `velero` namespace.
To run the server in another namespace, you edit the relevant files, changing `heptio-ark` to
To run the server in another namespace, you edit the relevant files, changing `velero` to
your desired namespace.
To store your backups, schedules, restores, and config in another namespace, you edit the relevant
files, changing `heptio-ark` to your desired namespace. You also need to create the
files, changing `velero` to your desired namespace. You also need to create the
`cloud-credentials` secret in your desired namespace.
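For example (mirroring the provider setup docs; the namespace placeholder and credentials file name are illustrative):
```
kubectl create secret generic cloud-credentials \
    --namespace <YOUR_NAMESPACE> \
    --from-file cloud=credentials-velero
```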
First, ensure you've [downloaded & extracted the latest release][0].
For all cloud providers, edit `config/common/00-prereqs.yaml`. This file defines:
* CustomResourceDefinitions for the Ark objects (backups, schedules, restores, downloadrequests, etc.)
* The namespace where the Ark server runs
* CustomResourceDefinitions for the Velero objects (backups, schedules, restores, downloadrequests, etc.)
* The namespace where the Velero server runs
* The namespace where backups, schedules, restores, etc. are stored
* The Ark service account
* The RBAC rules to grant permissions to the Ark service account
* The Velero service account
* The RBAC rules to grant permissions to the Velero service account
### AWS
For AWS, edit:
* `config/aws/05-ark-backupstoragelocation.yaml`
* `config/aws/06-ark-volumesnapshotlocation.yaml`
* `config/aws/05-backupstoragelocation.yaml`
* `config/aws/06-volumesnapshotlocation.yaml`
* `config/aws/10-deployment.yaml`
@@ -40,16 +40,16 @@ For AWS, edit:
For Azure, edit:
* `config/azure/00-ark-deployment.yaml`
* `config/azure/05-ark-backupstoragelocation.yaml`
* `config/azure/06-ark-volumesnapshotlocation.yaml`
* `config/azure/00-deployment.yaml`
* `config/azure/05-backupstoragelocation.yaml`
* `config/azure/06-volumesnapshotlocation.yaml`
### GCP
For GCP, edit:
* `config/gcp/05-ark-backupstoragelocation.yaml`
* `config/gcp/06-ark-volumesnapshotlocation.yaml`
* `config/gcp/05-backupstoragelocation.yaml`
* `config/gcp/06-volumesnapshotlocation.yaml`
* `config/gcp/10-deployment.yaml`
@@ -57,16 +57,16 @@ For GCP, edit:
For IBM, edit:
* `config/ibm/05-ark-backupstoragelocation.yaml`
* `config/ibm/05-backupstoragelocation.yaml`
* `config/ibm/10-deployment.yaml`
## Specify the namespace in client commands
To specify the namespace for all Ark client commands, run:
To specify the namespace for all Velero client commands, run:
```
ark client config set namespace=<NAMESPACE_VALUE>
velero client config set namespace=<NAMESPACE_VALUE>
```


@@ -1,15 +1,15 @@
# Output file format
A backup is a gzip-compressed tar file whose name matches the Backup API resource's `metadata.name` (what is specified during `ark backup create <NAME>`).
A backup is a gzip-compressed tar file whose name matches the Backup API resource's `metadata.name` (what is specified during `velero backup create <NAME>`).
In cloud object storage, each backup file is stored in its own subdirectory in the bucket specified in the Ark server configuration. This subdirectory includes an additional file called `ark-backup.json`. The JSON file lists all information about your associated Backup resource, including any default values. This gives you a complete historical record of the backup configuration. The JSON file also specifies `status.version`, which corresponds to the output file format.
In cloud object storage, each backup file is stored in its own subdirectory in the bucket specified in the Velero server configuration. This subdirectory includes an additional file called `velero-backup.json`. The JSON file lists all information about your associated Backup resource, including any default values. This gives you a complete historical record of the backup configuration. The JSON file also specifies `status.version`, which corresponds to the output file format.
The directory structure in your cloud storage looks something like:
```
rootBucket/
backup1234/
ark-backup.json
velero-backup.json
backup1234.tar.gz
```
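If your bucket is S3-compatible, you can inspect that metadata directly (a sketch using the AWS CLI and `jq`; the bucket and backup names match the layout above):
```
aws s3 cp s3://rootBucket/backup1234/velero-backup.json - | jq '.status.version'
```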
@@ -18,11 +18,11 @@ rootBucket/
```json
{
"kind": "Backup",
"apiVersion": "ark.heptio.com/v1",
"apiVersion": "velero.io/v1",
"metadata": {
"name": "test-backup",
"namespace": "heptio-ark",
"selfLink": "/apis/ark.heptio.com/v1/namespaces/heptio-ark/backups/testtest",
"namespace": "velero",
"selfLink": "/apis/velero.io/v1/namespaces/velero/backups/testtest",
"uid": "a12345cb-75f5-11e7-b4c2-abcdef123456",
"resourceVersion": "337075",
"creationTimestamp": "2017-07-31T13:39:15Z"


@@ -1,31 +1,45 @@
# Plugins
Heptio Ark has a plugin architecture that allows users to add their own custom functionality to Ark backups & restores
without having to modify/recompile the core Ark binary. To add custom functionality, users simply create their own binary
containing implementations of Ark's plugin kinds (described below), plus a small amount of boilerplate code to
expose the plugin implementations to Ark. This binary is added to a container image that serves as an init container for
the Ark server pod and copies the binary into a shared emptyDir volume for the Ark server to access.
Velero has a plugin architecture that allows users to add their own custom functionality to Velero backups & restores without having to modify/recompile the core Velero binary. To add custom functionality, users simply create their own binary containing implementations of Velero's plugin kinds (described below), plus a small amount of boilerplate code to expose the plugin implementations to Velero. This binary is added to a container image that serves as an init container for the Velero server pod and copies the binary into a shared emptyDir volume for the Velero server to access.
Multiple plugins, of any type, can be implemented in this binary.
A fully-functional [sample plugin repository][1] is provided to serve as a convenient starting point for plugin authors.
## Plugin Naming
When naming your plugin, keep in mind that the name needs to conform to these rules:
- have two parts separated by '/'
- neither part can be empty
- the prefix is a valid DNS subdomain name
- a plugin with the same name cannot already exist
### Some examples:
```
- example.io/azure
- 1.2.3.4/5678
- example-with-dash.io/azure
```
You will need to give your plugin(s) a name when registering them by calling the appropriate `RegisterX` function: <https://github.com/heptio/velero/blob/0e0f357cef7cf15d4c1d291d3caafff2eeb69c1e/pkg/plugin/framework/server.go#L42-L60>
## Plugin Kinds
Ark currently supports the following kinds of plugins:
Velero currently supports the following kinds of plugins:
- **Object Store** - persists and retrieves backups, backup logs and restore logs
- **Block Store** - creates volume snapshots (during backup) and restores volumes from snapshots (during restore)
- **Volume Snapshotter** - creates volume snapshots (during backup) and restores volumes from snapshots (during restore)
- **Backup Item Action** - executes arbitrary logic for individual items prior to storing them in a backup file
- **Restore Item Action** - executes arbitrary logic for individual items prior to restoring them into a cluster
## Plugin Logging
Ark provides a [logger][2] that can be used by plugins to log structured information to the main Ark server log or
per-backup/restore logs. See the [sample repository][1] for an example of how to instantiate and use the logger
within your plugin.
Velero provides a [logger][2] that can be used by plugins to log structured information to the main Velero server log or
per-backup/restore logs. It also passes a `--log-level` flag to each plugin binary, whose value is the value of the same
flag from the main Velero process. This means that if you turn on debug logging for the Velero server via `--log-level=debug`,
plugins will also emit debug-level logs. See the [sample repository][1] for an example of how to use the logger within your plugin.
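For example, one way to turn on debug logging for the server, and therefore for plugins (a sketch; assumes the default `velero` deployment and namespace):
```
kubectl -n velero patch deployment velero --type json \
  -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--log-level=debug"}]'
```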
[1]: https://github.com/heptio/ark-plugin-example
[2]: https://github.com/heptio/ark/blob/master/pkg/plugin/logger.go
[1]: https://github.com/heptio/velero-plugin-example
[2]: https://github.com/heptio/velero/blob/master/pkg/plugin/logger.go


@@ -1,6 +1,6 @@
# Run Ark more securely with restrictive RBAC settings
# Run Velero more securely with restrictive RBAC settings
By default Ark runs with an RBAC policy of ClusterRole `cluster-admin`. This is to make sure that Ark can back up or restore anything in your cluster. But `cluster-admin` access is wide open -- it gives Ark components access to everything in your cluster. Depending on your environment and your security needs, you should consider whether to configure additional RBAC policies with more restrictive access.
By default Velero runs with an RBAC policy of ClusterRole `cluster-admin`. This is to make sure that Velero can back up or restore anything in your cluster. But `cluster-admin` access is wide open -- it gives Velero components access to everything in your cluster. Depending on your environment and your security needs, you should consider whether to configure additional RBAC policies with more restrictive access.
**Note:** Roles and RoleBindings are associated with a single namespace, not with an entire cluster. PersistentVolume backups are associated only with the entire cluster. This means that any backups or restores that use a restrictive Role and RoleBinding pair can manage only the resources that belong to the namespace. You do not need a wide open RBAC policy to manage PersistentVolumes, however. You can configure a ClusterRole and ClusterRoleBinding that allow backups and restores only of PersistentVolumes, not of all objects in the cluster.
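A sketch of what such a PersistentVolume-only ClusterRole might look like (the name and verb list are illustrative; you would pair it with a matching ClusterRoleBinding):
```
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: velero-persistentvolumes
  labels:
    component: velero
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
EOF
```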
@@ -17,10 +17,10 @@ metadata:
namespace: YOUR_NAMESPACE_HERE
name: ROLE_NAME_HERE
labels:
component: ark
component: velero
rules:
- apiGroups:
- ark.heptio.com
- velero.io
verbs:
- "*"
resources:
@@ -44,4 +44,4 @@ roleRef:
[1]: https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/
[2]: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
[3]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[4]: namespace.md


@@ -1,16 +1,16 @@
# Restic Integration
As of version 0.9.0, Ark has support for backing up and restoring Kubernetes volumes using a free open-source backup tool called
As of version 0.9.0, Velero has support for backing up and restoring Kubernetes volumes using a free open-source backup tool called
[restic][1].
Ark has always allowed you to take snapshots of persistent volumes as part of your backups if you're using one of
Velero has always allowed you to take snapshots of persistent volumes as part of your backups if you're using one of
the supported cloud providers' block storage offerings (Amazon EBS Volumes, Azure Managed Disks, Google Persistent Disks).
Starting with version 0.6.0, we provide a plugin model that enables anyone to implement additional object and block storage
backends, outside the main Ark repository.
backends, outside the main Velero repository.
We integrated restic with Ark so that users have an out-of-the-box solution for backing up and restoring almost any type of Kubernetes
volume*. This is a new capability for Ark, not a replacement for existing functionality. If you're running on AWS, and
taking EBS snapshots as part of your regular Ark backups, there's no need to switch to using restic. However, if you've
We integrated restic with Velero so that users have an out-of-the-box solution for backing up and restoring almost any type of Kubernetes
volume*. This is a new capability for Velero, not a replacement for existing functionality. If you're running on AWS, and
taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using restic. However, if you've
been waiting for a snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir,
local, or any other volume type that doesn't have a native snapshot concept, restic might be for you.
@@ -23,16 +23,16 @@ cross-volume-type data migrations. Stay tuned as this evolves!
### Prerequisites
- A working install of Ark version 0.10.0 or later. See [Set up Ark][2]
- A local clone of [the latest release tag of the Ark repository][3]
- Ark's restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.10.0 and later.
- A working install of Velero version 0.10.0 or later. See [Set up Velero][2]
- A local clone of [the latest release tag of the Velero repository][3]
- Velero's restic integration requires the Kubernetes [MountPropagation feature][6], which is enabled by default in Kubernetes v1.10.0 and later.
### Instructions
1. Ensure you've [downloaded & extracted the latest release][3].
1. In the Ark directory (i.e. where you extracted the release tarball), run the following to create new custom resource definitions:
1. In the Velero directory (i.e. where you extracted the release tarball), run the following to create new custom resource definitions:
```bash
kubectl apply -f config/common/00-prereqs.yaml
@@ -40,19 +40,34 @@ cross-volume-type data migrations. Stay tuned as this evolves!
1. Run one of the following for your platform to create the daemonset:
Please note: in RancherOS, the path is not `/var/lib/kubelet/pods` but rather `/opt/rke/var/lib/kubelet/pods`, so you must modify the restic daemonset YAML before applying it. Change:
```
hostPath:
path: /var/lib/kubelet/pods
```
to
```
hostPath:
path: /opt/rke/var/lib/kubelet/pods
```
- AWS: `kubectl apply -f config/aws/20-restic-daemonset.yaml`
- Azure: `kubectl apply -f config/azure/20-restic-daemonset.yaml`
- GCP: `kubectl apply -f config/gcp/20-restic-daemonset.yaml`
- Minio: `kubectl apply -f config/minio/30-restic-daemonset.yaml`
You're now ready to use Ark with restic.
You're now ready to use Velero with restic.
## Back up
1. Run the following for each pod that contains a volume to back up:
```bash
kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.ark.heptio.com/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...
```
where the volume names are the names of the volumes in the pod spec.
@@ -84,91 +99,123 @@ You're now ready to use Ark with restic.
You'd run:
```bash
kubectl -n foo annotate pod/sample backup.ark.heptio.com/backup-volumes=pvc-volume,emptydir-volume
kubectl -n foo annotate pod/sample backup.velero.io/backup-volumes=pvc-volume,emptydir-volume
```
This annotation can also be provided in a pod template spec if you use a controller to manage your pods.
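For instance, if the `sample` pod above were managed by a Deployment of the same name (hypothetical here), you could patch the pod template instead of annotating pods directly:
```bash
kubectl -n foo patch deployment sample --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"backup.velero.io/backup-volumes":"pvc-volume,emptydir-volume"}}}}}'
```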
1. Take an Ark backup:
1. Take a Velero backup:
```bash
ark backup create NAME OPTIONS...
velero backup create NAME OPTIONS...
```
1. When the backup completes, view information about the backups:
```bash
ark backup describe YOUR_BACKUP_NAME
velero backup describe YOUR_BACKUP_NAME
kubectl -n heptio-ark get podvolumebackups -l ark.heptio.com/backup-name=YOUR_BACKUP_NAME -o yaml
kubectl -n velero get podvolumebackups -l velero.io/backup-name=YOUR_BACKUP_NAME -o yaml
```
## Restore
1. Restore from your Ark backup:
1. Restore from your Velero backup:
```bash
ark restore create --from-backup BACKUP_NAME OPTIONS...
velero restore create --from-backup BACKUP_NAME OPTIONS...
```
1. When the restore completes, view information about your pod volume restores:
```bash
ark restore describe YOUR_RESTORE_NAME
velero restore describe YOUR_RESTORE_NAME
kubectl -n heptio-ark get podvolumerestores -l ark.heptio.com/restore-name=YOUR_RESTORE_NAME -o yaml
kubectl -n velero get podvolumerestores -l velero.io/restore-name=YOUR_RESTORE_NAME -o yaml
```
## Limitations
- `hostPath` volumes are not supported. [Local persistent volumes][4] are supported.
- Those of you familiar with [restic][1] may know that it encrypts all of its data. We've decided to use a static,
common encryption key for all restic repositories created by Velero. **This means that anyone who has access to your
bucket can decrypt your restic backup data**. Make sure that you limit access to the restic bucket
appropriately. We plan to implement full Velero backup encryption, including securing the restic encryption keys, in
a future release.
## Customize Restore Helper Image
Velero uses a helper init container when performing a restic restore. By default, the image for this container is `gcr.io/heptio-images/velero-restic-restore-helper:<VERSION>`,
where `VERSION` matches the version/tag of the main Velero image. You can customize the image that is used for this helper by creating a ConfigMap in the Velero namespace with
the alternate image. The ConfigMap must look like the following:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: restic-restore-action-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap.
  labels:
    # this value-less label identifies the ConfigMap as
    # config for a plugin (i.e. the built-in restic restore
    # item action plugin)
    velero.io/plugin-config: ""
    # this label identifies the name and kind of plugin
    # that this ConfigMap is for.
    velero.io/restic: RestoreItemAction
data:
  # "image" is the only configurable key. The value can either
  # include a tag or not; if the tag is *not* included, the
  # tag from the main Velero image will automatically be used.
  image: myregistry.io/my-custom-helper-image[:OPTIONAL_TAG]
```
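Assuming the manifest above is saved to a file (the filename here is arbitrary), applying and verifying it looks like:
```bash
kubectl -n velero apply -f restic-restore-action-config.yaml
# confirm the plugin-config labels Velero matches on are present
kubectl -n velero get configmap restic-restore-action-config --show-labels
```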
## Troubleshooting
Run the following checks:
Are your Velero server and daemonset pods running?
```bash
kubectl get pods -n velero
```
Does your restic repository exist, and is it ready?
```bash
velero restic repo get
velero restic repo get REPO_NAME -o yaml
```
Are there any errors in your Velero backup/restore?
```bash
velero backup describe BACKUP_NAME
velero backup logs BACKUP_NAME
velero restore describe RESTORE_NAME
velero restore logs RESTORE_NAME
```
What is the status of your pod volume backups/restores?
```bash
kubectl -n velero get podvolumebackups -l velero.io/backup-name=BACKUP_NAME -o yaml
kubectl -n velero get podvolumerestores -l velero.io/restore-name=RESTORE_NAME -o yaml
```
Is there any useful information in the Velero server or daemon pod logs?
```bash
kubectl -n velero logs deploy/velero
kubectl -n velero logs DAEMON_POD_NAME
```
**NOTE**: You can increase the verbosity of the pod logs by adding `--log-level=debug` as an argument
to the container command in the deployment/daemonset pod template spec.
We introduced three custom resource definitions and associated controllers:
- `ResticRepository` - represents/manages the lifecycle of Velero's [restic repositories][5]. Velero creates
a restic repository per namespace when the first restic backup for a namespace is requested. The controller
for this custom resource executes restic repository lifecycle commands -- `restic init`, `restic check`,
and `restic prune`.
You can see information about your Velero restic repositories by running `velero restic repo get`.
- `PodVolumeBackup` - represents a restic backup of a volume in a pod. The main Velero backup process creates
one or more of these when it finds an annotated pod. Each node in the cluster runs a controller for this
resource (in a daemonset) that handles the `PodVolumeBackups` for pods on that node. The controller executes
`restic backup` commands to back up pod volume data.
- `PodVolumeRestore` - represents a restic restore of a pod volume. The main Velero restore process creates one
or more of these when it encounters a pod that has associated restic backups. Each node in the cluster runs a
controller for this resource (in the same daemonset as above) that handles the `PodVolumeRestores` for pods
on that node. The controller executes `restic restore` commands to restore pod volume data.
### Backup
1. The main Velero backup process checks each pod that it's backing up for the annotation specifying a restic backup
should be taken (`backup.velero.io/backup-volumes`)
1. When found, Velero first ensures a restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it
1. Velero then creates a `PodVolumeBackup` custom resource per volume listed in the pod annotation
1. The main Velero process now waits for the `PodVolumeBackup` resources to complete or fail
1. Meanwhile, each `PodVolumeBackup` is handled by the controller on the appropriate node, which:
- has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data
- finds the pod volume's subdirectory within the above volume
- runs `restic backup`
- updates the status of the custom resource to `Completed` or `Failed`
1. As each `PodVolumeBackup` finishes, the main Velero process captures its restic snapshot ID and adds it as an annotation
to the copy of the pod JSON that's stored in the Velero backup. This will be used for restores, as seen in the next section.
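To make these objects concrete, here is an illustrative `PodVolumeBackup`. The field names below are a sketch based on the flow described above, not a schema reference; inspect real ones with the `kubectl -n velero get podvolumebackups ... -o yaml` command shown earlier.
```yaml
apiVersion: velero.io/v1
kind: PodVolumeBackup
metadata:
  generateName: my-backup-   # hypothetical
  namespace: velero
spec:
  node: node-1               # node the annotated pod is scheduled on
  pod:
    kind: Pod
    name: sample
    namespace: foo
  volume: pvc-volume         # one resource per annotated volume
status:
  phase: Completed           # set by the node's controller
  snapshotID: 1a2b3c4d       # captured into the pod's annotations in the backup
```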
### Restore
1. The main Velero restore process checks each pod that it's restoring for annotations specifying a restic backup
exists for a volume in the pod (`snapshot.velero.io/<volume-name>`)
1. When found, Velero first ensures a restic repository exists for the pod's namespace, by:
- checking if a `ResticRepository` custom resource already exists
- if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it (note that
in this case, the actual repository should already exist in object storage, so the Velero controller will simply
check it for integrity)
1. Velero adds an init container to the pod, whose job is to wait for all restic restores for the pod to complete (more
on this shortly)
1. Velero creates the pod, with the added init container, by submitting it to the Kubernetes API
1. Velero creates a `PodVolumeRestore` custom resource for each volume to be restored in the pod
1. The main Velero process now waits for each `PodVolumeRestore` resource to complete or fail
1. Meanwhile, each `PodVolumeRestore` is handled by the controller on the appropriate node, which:
- has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data
- waits for the pod to be running the init container
- finds the pod volume's subdirectory within the above volume
- runs `restic restore`
- on success, writes a file into the pod volume, in a `.velero` subdirectory, whose name is the UID of the Velero restore
that this pod volume restore is for
- updates the status of the custom resource to `Completed` or `Failed`
1. The init container that was added to the pod is running a process that waits until it finds a file
within each restored volume, under `.velero`, whose name is the UID of the Velero restore being run
1. Once all such files are found, the init container's process terminates successfully and the pod moves
on to running other init containers/the main containers.
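To make the sentinel-file handshake concrete, here is a minimal sketch of the wait the init container performs, assuming the restored volumes are mounted at a known directory inside it. The real implementation is a Go binary built into Velero; the path and argument below are illustrative.
```bash
#!/bin/sh
# RESTORE_UID: UID of the Velero restore this pod belongs to (illustrative)
RESTORE_UID="$1"
for vol in /restores/*/; do
  # block until the controller writes .velero/<restore-uid> into this volume
  until [ -f "${vol}.velero/${RESTORE_UID}" ]; do
    sleep 1
  done
done
# all sentinel files found; exit 0 so the pod's other containers can start
```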
[1]: https://github.com/restic/restic
[2]: install-overview.md
[3]: https://github.com/heptio/velero/releases/
[4]: https://kubernetes.io/docs/concepts/storage/volumes/#local
[5]: http://restic.readthedocs.io/en/latest/100_references.html#terminology
[6]: https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation

# Object Storage Layout Changes in v0.10
## Overview
Ark v0.10 includes breaking changes to where data is stored in your object storage bucket. You'll need to run a [one-time migration procedure](#upgrading-to-v010)
if you're upgrading from prior versions of Ark.
## Details
Prior to v0.10, Ark stored data in an object storage bucket using the following structure:
```
<your-bucket>/
    backup-1/
        ark-backup.json
        backup-1.tar.gz
        backup-1-logs.gz
        restore-of-backup-1-logs.gz
        restore-of-backup-1-results.gz
    backup-2/
        ark-backup.json
        backup-2.tar.gz
        backup-2-logs.gz
        restore-of-backup-2-logs.gz
        restore-of-backup-2-results.gz
    ...
```
Ark also stored restic data, if applicable, in a separate object storage bucket, structured as:
```
<your-ark-restic-bucket>/[<your-optional-prefix>/]
    namespace-1/
        data/
        index/
        keys/
        snapshots/
        config
    namespace-2/
        data/
        index/
        keys/
        snapshots/
        config
    ...
```
As of v0.10, we've reorganized this layout to provide a cleaner and more extensible directory structure. The new layout looks like:
```
<your-bucket>[/<your-prefix>]/
    backups/
        backup-1/
            ark-backup.json
            backup-1.tar.gz
            backup-1-logs.gz
        backup-2/
            ark-backup.json
            backup-2.tar.gz
            backup-2-logs.gz
        ...
    restores/
        restore-of-backup-1/
            restore-of-backup-1-logs.gz
            restore-of-backup-1-results.gz
        restore-of-backup-2/
            restore-of-backup-2-logs.gz
            restore-of-backup-2-results.gz
        ...
    restic/
        namespace-1/
            data/
            index/
            keys/
            snapshots/
            config
        namespace-2/
            data/
            index/
            keys/
            snapshots/
            config
        ...
    ...
```
## Upgrading to v0.10
Before upgrading to v0.10, you'll need to run a one-time upgrade script to rearrange the contents of your existing Ark bucket(s) to be compatible with
the new layout.
Please note that the following scripts **will not** migrate existing restore logs/results into the new `restores/` subdirectory. This means that they
will not be accessible using `ark restore describe` or `ark restore logs`. They *will* remain in the relevant backup's subdirectory so they are manually
accessible, and will eventually be garbage-collected along with the backup. We've taken this approach in order to keep the migration scripts simple
and less error-prone.
### rclone-Based Script
This script uses [rclone][1], which you can download and install following the instructions [here][2].
Please read through the script carefully before starting and execute it step-by-step.
```bash
ARK_BUCKET=<your-ark-bucket>
ARK_TEMP_MIGRATION_BUCKET=<a-temp-bucket-for-migration>
# 1. This is an interactive step that configures rclone to be
# able to access your storage provider. Follow the instructions,
# and keep track of the "remote name" for the next step:
rclone config
# 2. Store the name of the rclone remote that you just set up
# in Step #1:
RCLONE_REMOTE_NAME=<your-remote-name>
# 3. Create a temporary bucket to be used as a backup of your
# current Ark bucket's contents:
rclone mkdir ${RCLONE_REMOTE_NAME}:${ARK_TEMP_MIGRATION_BUCKET}
# 4. Do a full copy of the contents of your Ark bucket into the
# temporary bucket:
rclone copy ${RCLONE_REMOTE_NAME}:${ARK_BUCKET} ${RCLONE_REMOTE_NAME}:${ARK_TEMP_MIGRATION_BUCKET}
# 5. Verify that the temporary bucket contains an exact copy of
# your Ark bucket's contents. You should see a short block
# of output stating "0 differences found":
rclone check ${RCLONE_REMOTE_NAME}:${ARK_BUCKET} ${RCLONE_REMOTE_NAME}:${ARK_TEMP_MIGRATION_BUCKET}
# 6. Delete your Ark bucket's contents (this command does not
# delete the bucket itself, only the contents):
rclone delete ${RCLONE_REMOTE_NAME}:${ARK_BUCKET}
# 7. Copy the contents of the temporary bucket into your Ark bucket,
# under the 'backups/' directory/prefix:
rclone copy ${RCLONE_REMOTE_NAME}:${ARK_TEMP_MIGRATION_BUCKET} ${RCLONE_REMOTE_NAME}:${ARK_BUCKET}/backups
# 8. Verify that the 'backups/' directory in your Ark bucket now
# contains an exact copy of the temporary bucket's contents:
rclone check ${RCLONE_REMOTE_NAME}:${ARK_BUCKET}/backups ${RCLONE_REMOTE_NAME}:${ARK_TEMP_MIGRATION_BUCKET}
# 9. OPTIONAL: If you have restic data to migrate:
# a. Copy the contents of your Ark restic location into your
# Ark bucket, under the 'restic/' directory/prefix:
ARK_RESTIC_LOCATION=<your-ark-restic-bucket[/optional-prefix]>
rclone copy ${RCLONE_REMOTE_NAME}:${ARK_RESTIC_LOCATION} ${RCLONE_REMOTE_NAME}:${ARK_BUCKET}/restic
# b. Check that the 'restic/' directory in your Ark bucket now
# contains an exact copy of your restic location:
rclone check ${RCLONE_REMOTE_NAME}:${ARK_BUCKET}/restic ${RCLONE_REMOTE_NAME}:${ARK_RESTIC_LOCATION}
# c. Delete your ResticRepository custom resources to allow Ark
# to find them in the new location:
kubectl -n heptio-ark delete resticrepositories --all
# 10. Once you've confirmed that Ark v0.10 works with your revised Ark
# bucket, you can delete the temporary migration bucket.
```
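As an optional final sanity check (these commands are a suggestion, not part of the original procedure), list the top level of the bucket and confirm the new layout:
```bash
# expect to see backups/ (and restic/ if step 9 applied); restores/ is
# created by Ark v0.10 itself as new restores are performed
rclone lsd ${RCLONE_REMOTE_NAME}:${ARK_BUCKET}
```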
[1]: https://rclone.org/
[2]: https://rclone.org/downloads/

# Compatible Storage Providers
Velero supports a variety of storage providers for different backup and snapshot operations. As of version 0.6.0, a plugin system allows anyone to add compatibility for additional backup and volume storage platforms without modifying the Velero codebase.
## Backup Storage Providers
| Provider | Owner | Contact |
|---------------------------|----------|---------------------------------|
| [AWS S3][2]               | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Azure Blob Storage][3]   | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Google Cloud Storage][4] | Velero Team | [Slack][10], [GitHub Issue][11] |
## S3-Compatible Backup Storage Providers
Velero uses [Amazon's Go SDK][12] to connect to the S3 API. Some third-party storage providers also support the S3 API, and users have reported the following providers work with Velero:
_Note that these providers are not regularly tested by the Velero team._
* [IBM Cloud][5]
* [Minio][9]
* Ceph RADOS v12.2.7
* [DigitalOcean][7]
* Quobyte
* [NooBaa][16]
_Some storage providers, like Quobyte, may need a different [signature algorithm version][15]._
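As a hedged example of what that usually means in practice, the S3-specific options live under `config` on the `BackupStorageLocation`; the bucket, region, and URL below are placeholders, and [15] has the authoritative option names:
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws                # S3-compatible providers use the aws provider
  objectStorage:
    bucket: my-bucket          # placeholder
  config:
    region: my-region                    # placeholder
    s3ForcePathStyle: "true"             # commonly needed for S3-compatible endpoints
    s3Url: https://storage.example.com   # placeholder endpoint
    signatureVersion: "1"                # only if your provider needs the older algorithm
```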
## Volume Snapshot Providers
| Provider | Owner | Contact |
|----------------------------------|-----------------|---------------------------------|
| [AWS EBS][2] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Azure Managed Disks][3] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Google Compute Engine Disks][4] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Restic][1] | Velero Team | [Slack][10], [GitHub Issue][11] |
| [Portworx][6] | Portworx | [Slack][13], [GitHub Issue][14] |
| [DigitalOcean][7] | StackPointCloud | |
After you publish your plugin, open a PR that adds your plugin to the appropriate list.
[5]: ibm-config.md
[6]: https://docs.portworx.com/scheduler/kubernetes/ark.html
[7]: https://github.com/StackPointCloud/ark-plugin-digitalocean
[8]: https://github.com/heptio/velero-plugin-example/
[9]: get-started.md
[10]: https://kubernetes.slack.com/messages/velero
[11]: https://github.com/heptio/velero/issues
[12]: https://github.com/aws/aws-sdk-go/aws
[13]: https://portworx.slack.com/messages/px-k8s
[14]: https://github.com/portworx/ark-plugin/issues
[15]: api-types/backupstoragelocation.md#aws
[16]: http://www.noobaa.com/

# Troubleshooting
These tips can help you troubleshoot known issues. If they don't help, you can [file an issue][4], or talk to us on the [#velero channel][25] on the Kubernetes Slack server.
See also:
## General troubleshooting information
In `velero` version >= `0.10.0`, you can use the `velero bug` command to open a [Github issue][4] by launching a browser window with some prepopulated values. Values included are OS, CPU architecture, `kubectl` client and server versions (if available) and the `velero` client version. This information isn't submitted to Github until you click the `Submit new issue` button in the Github UI, so feel free to add, remove or update whatever information you like.
Some general commands for troubleshooting that may be helpful:
* `velero backup describe <backupName>` - describe the details of a backup
* `velero backup logs <backupName>` - fetch the logs for this specific backup. Useful for viewing failures and warnings, including resources that could not be backed up.
* `velero restore describe <restoreName>` - describe the details of a restore
* `velero restore logs <restoreName>` - fetch the logs for this specific restore. Useful for viewing failures and warnings, including resources that could not be restored.
* `kubectl logs deployment/velero -n velero` - fetch the logs of the Velero server pod. This provides the output of the Velero server processes.
### Getting velero debug logs
You can increase the verbosity of the Velero server by editing your Velero deployment to look like this:
```
kubectl edit deployment/velero -n velero
...
containers:
  - name: velero
    image: gcr.io/heptio-images/velero:latest
    command:
      - /velero
    args:
      - server
      - --log-level  # Add this line
      - debug        # Add this line
...
```
## Known issue with restoring LoadBalancer Service
Because of how Kubernetes handles Service objects of `type=LoadBalancer`, when you restore these objects you might encounter an issue with changed values for Service UIDs. Kubernetes automatically generates the name of the cloud resource based on the Service UID, which is different when restored, resulting in a different name for the cloud load balancer. If the DNS CNAME for your application points to the DNS name of your cloud load balancer, you'll need to update the CNAME pointer when you perform a Velero restore.
Alternatively, you might be able to use the Service's `spec.loadBalancerIP` field to keep connections valid, if your cloud provider supports this value. See [the Kubernetes documentation about Services of Type LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer).
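For example, if your provider honors it, pinning the address in the Service spec looks like this (the name and address below are placeholders):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical
spec:
  type: LoadBalancer
  # pre-allocated static address, if your cloud provider supports this field
  loadBalancerIP: 203.0.113.10
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```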
## Miscellaneous issues
### Velero reports `custom resource not found` errors when starting up.
Velero's server will not start if the required Custom Resource Definitions are not found in Kubernetes. Apply
the `config/common/00-prereqs.yaml` file to create these definitions, then restart Velero.
### `velero backup logs` returns a `SignatureDoesNotMatch` error
Downloading artifacts from object storage utilizes temporary, signed URLs. In the case of S3-compatible
providers, such as Ceph, there may be differences between their implementation and the official S3 API.
[1]: debugging-restores.md
[2]: debugging-install.md
[4]: https://github.com/heptio/velero/issues
[5]: https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
[25]: https://kubernetes.slack.com/messages/velero

# Upgrading to Ark v0.10
## Overview
Ark v0.10 includes a number of breaking changes. Below, we outline what those changes are, and what steps you should take to ensure
a successful upgrade from prior versions of Ark.
## Breaking Changes
### Switch from Config to BackupStorageLocation and VolumeSnapshotLocation CRDs, and new server flags
Prior to v0.10, Ark used a `Config` CRD to capture information about your backup storage and persistent volume providers, as well
as some miscellaneous Ark settings. In v0.10, we've eliminated this CRD and replaced it with:
- A [BackupStorageLocation][1] CRD to capture information about where to store your backups
- A [VolumeSnapshotLocation][2] CRD to capture information about where to store your persistent volume snapshots
- Command-line flags for the `ark server` command (run by your Ark deployment) to capture miscellaneous Ark settings
When upgrading to v0.10, you'll need to transfer the configuration information that you currently have in the `Config` CRD
into the above. We'll cover exactly how to do this below.
For a general overview of this change, see the [Locations documentation][4].
### Reorganization of data in object storage
We've made [changes to the layout of data stored in object storage][3] for simplicity and extensibility. You'll need to
rearrange any pre-v0.10 data as part of the upgrade. We've provided a script to help with this.
## Step-by-Step Upgrade Instructions
1. Ensure you've [downloaded & extracted the latest release][5].
1. Scale down your existing Ark deployment:
```bash
kubectl scale -n heptio-ark deploy/ark --replicas 0
```
1. In the Ark directory (i.e. where you extracted the release tarball), re-apply the `00-prereqs.yaml` file to create new CRDs:
```bash
kubectl apply -f config/common/00-prereqs.yaml
```
1. Create one or more [BackupStorageLocation][1] resources based on the examples provided in the `config/` directory for your platform, using information from the existing `Config` resource as necessary.
1. If you're using Ark to take PV snapshots, create one or more [VolumeSnapshotLocation][2] resources based on the examples provided in the `config/` directory for your platform, using information from the existing `Config` resource as necessary.
1. Perform the one-time object storage migration detailed [here][3].
1. In your Ark deployment YAML (see the `config/` directory for samples), specify flags to the `ark server` command under the container's `args`:
a. The names of the `BackupStorageLocation` and `VolumeSnapshotLocation(s)` that should be used by default for backups. If defaults are set here,
users won't need to explicitly specify location names when creating backups (though they still can, if they want to store backups/snapshots in
alternate locations). If no value is specified for `--default-backup-storage-location`, the Ark server looks for a `BackupStorageLocation`
named `default` to use.
Flag | Default Value | Description | Example
---- | ------------- | ----------- | -------
`--default-backup-storage-location` | "default" | name of the backup storage location that should be used by default for backups | aws-us-east-1-bucket
`--default-volume-snapshot-locations` | [none] | name of the volume snapshot location(s) that should be used by default for PV snapshots, for each PV provider | aws:us-east-1,portworx:local
**NOTE:** the values of these flags should correspond to the names of `BackupStorageLocation` and `VolumeSnapshotLocation`
custom resources in the cluster.
b. Any non-default Ark server settings:
Flag | Default Value | Description
---- | ------------- | -----------
`--backup-sync-period` | 1m | how often to ensure all Ark backups in object storage exist as Backup API objects in the cluster
`--restic-timeout` | 1h | how long backups/restores of pod volumes should be allowed to run before timing out (previously `podVolumeOperationTimeout` in the `Config` resource in pre-v0.10 versions)
`--restore-only` | false | run in a mode where only restores are allowed; backups, schedules, and garbage-collection are all disabled
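Putting both tables together, these flags end up in the container's `args` in your deployment YAML; here is a sketch using the example values from the tables above:
```yaml
containers:
  - name: ark
    image: gcr.io/heptio-images/ark:latest
    command:
      - /ark
    args:
      - server
      - --default-backup-storage-location=aws-us-east-1-bucket
      - --default-volume-snapshot-locations=aws:us-east-1,portworx:local
      - --restic-timeout=2h   # only if you need longer than the 1h default
```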
1. If you are using any plugins, update the Ark deployment YAML to reference the latest image tag for your plugins. This can be found under the `initContainers` section of your deployment YAML.
1. Apply your updated Ark deployment YAML to your cluster and ensure the pod(s) starts up successfully.
1. If you're using Ark's restic integration, ensure the daemon set pods have been re-created with the latest Ark image (if your daemon set YAML is using the `:latest` tag, you can delete the pods so they're recreated with an updated image).
1. Once you've confirmed all of your settings have been migrated over correctly, delete the Config CRD:
```bash
kubectl delete -n heptio-ark config --all
kubectl delete crd configs.ark.heptio.com
```
[1]: api-types/backupstoragelocation.md
[2]: api-types/volumesnapshotlocation.md
[3]: storage-layout-reorg-v0.10.md
[4]: locations.md
[5]: get-started.md#download

# Upgrading Velero versions
Velero supports multiple concurrent versions. Whether you're setting up Velero for the first time or upgrading to a new version, you need to pay careful attention to versioning. This doc page is new as of version 0.10.0, and will be updated with information about subsequent releases.
## Minor versions, patch versions
Breaking changes are documented in the release notes and in the documentation.
- See [Upgrading to version 0.10.0][2]
## Velero versions and Kubernetes versions
Not all Velero versions support all versions of Kubernetes. You should be aware of the following known limitations:
- Velero version 0.9.0 requires Kubernetes version 1.8 or later. In version 0.9.1, Velero was updated to support earlier versions.
- Restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. See [Restic Integration][3].
[1]: https://github.com/heptio/velero/releases
[2]: https://heptio.github.io/velero/v0.10.0/upgrading-to-v0.10
[3]: restic.md

As an Open Source community, it is necessary for our work, communication, and collaboration to be done in the open.
GitHub provides a central repository for code, pull requests, issues, and documentation. When applicable, we will use Google Docs for design reviews, proposals, and other working documents.
While GitHub issues, milestones, and labels generally work pretty well, the Velero team has found that product planning requires some additional tooling that GitHub projects do not offer.
In our effort to minimize tooling while enabling product management insights, we have decided to use [ZenHub Open-Source](https://www.zenhub.com/blog/open-source/) to overlay product and project tracking on top of GitHub.
ZenHub is a GitHub application that provides Kanban visualization, Epic tracking, fine-grained prioritization, and more. Its primary backing storage system is existing GitHub issues, along with additional metadata stored in ZenHub's database.
If you are an Ark user or Ark Developer, you do not _need_ to use ZenHub for your regular workflow (e.g to see open bug reports or feature requests, work on pull requests). However, if you'd like to be able to visualize the high-level project goals and roadmap, you will need to use the free version of ZenHub.
If you are an Velero user or Velero Developer, you do not _need_ to use ZenHub for your regular workflow (e.g to see open bug reports or feature requests, work on pull requests). However, if you'd like to be able to visualize the high-level project goals and roadmap, you will need to use the free version of ZenHub.
## Using ZenHub
ZenHub can be integrated within the GitHub interface using their [Chrome or FireFox extensions](https://www.zenhub.com/extension). In addition, you can use their dedicated [web application](https://app.zenhub.com/workspace/o/heptio/velero/boards?filterLogic=all&repos=99143276).

# Examples
This directory contains sample YAML config files for running Velero on each core provider. Starting with v0.10, these files are packaged into [the Velero release tarballs][2], and we highly recommend that you use the packaged versions of these files to ensure compatibility with the released code.
* `common/`: Contains manifests to set up Velero. Can be used across cloud provider platforms. (Note that Azure requires its own deployment file due to its unique way of loading credentials).
* `minio/`: Used in the [Quickstart][1] to set up [Minio][0], a local S3-compatible object storage service. It provides a convenient way to test Velero without tying you to a specific cloud provider.
* `aws/`, `azure/`, `gcp/`, `ibm/`: Contains manifests specific to the given cloud provider's setup.
[0]: https://github.com/minio/minio
[1]: /README.md#quickstart
[2]: https://github.com/heptio/velero/releases

# Copyright 2018 the Velero contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
