vSphere Tutorials Take #2 (#1900)

* vSphere Tutorials

Signed-off-by: cormachogan <chogan@vmware.com>
This commit is contained in:
Cormac Hogan
2019-10-08 19:15:49 +01:00
committed by KubeKween
parent 77b8dd4a71
commit 92a3797460
9 changed files with 725 additions and 0 deletions


@@ -0,0 +1,258 @@
---
title: Velero v1.1 backing up and restoring apps on vSphere
image: /img/posts/vsphere-logo.jpg
excerpt: A How-To guide to run Velero on vSphere.
author_name: Cormac Hogan
author_avatar: /img/contributors/cormac-pic.png
categories: ['kubernetes']
# Tag should match author to drive author pages
tags: ['Velero', 'Cormac Hogan', 'how-to']
---
Velero version 1.1 provides support for backing up Kubernetes applications deployed on vSphere. This post provides detailed information on how to install and configure Velero to back up and restore a stateless application (`nginx`) running in Kubernetes on vSphere. At this time there is no vSphere plugin for snapshotting stateful applications on vSphere during a Velero backup; instead, we rely on a third-party program called `restic`. However, this post does not include an example of how to back up a stateful application. That is covered in another tutorial, which can be found [here](../Velero-v1-1-Stateful-Backup-vSphere).
## Overview of steps
* Download and extract Velero v1.1
* Deploy and Configure a Minio Object store
* Install Velero using the `velero install` command, ensuring that both `restic` support and a Minio `publicUrl` are included
* Run a test backup/restore of a stateless application that has been deployed on upstream Kubernetes
## What this post does not show
* A demonstration of how to back up/restore a stateful application (i.e. one that uses Persistent Volumes)
* The assumption is that the Kubernetes nodes in your cluster have internet access in order to pull the Velero images. This guide does not show how to add images using a local repository
## Download and extract Velero v1.1
The [Velero v1.1 binary can be found here](https://github.com/heptio/velero/releases/tag/v1.1.0). Download and extract it on the desktop from which you wish to manage your Velero backups, then copy or move the `velero` binary to somewhere in your $PATH.
## Deploy and Configure a Minio Object Store as a backup destination
Velero sends data and metadata about the Kubernetes objects being backed up to an S3 Object Store. If you do not have an S3 Object Store available, Velero provides the manifest file to create a Minio S3 Object Store on your Kubernetes cluster. This means that all Velero backups can be kept on-premises.
* Note: Stateful backups of applications deployed in Kubernetes on vSphere that use the `restic` plugin for backing up Persistent Volumes send the backup data to the same S3 Object Store.
There are a few different steps required to successfully deploy the Minio S3 Object Store.
### 1. Create a Minio credentials secret file
A simple credentials file containing the login/password (id/key) for the local on-premises Minio S3 Object Store must be created.
```bash
$ cat credentials-velero
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
```
### 2. Expose Minio Service on a NodePort
While this step is optional, it is useful for two reasons. The first is that it gives you a way to access the Minio portal through a browser and examine the backups. The second is that it enables you to specify a `publicUrl` for Minio, which in turn means that you can access backup and restore logs from the Minio S3 Object Store.
To expose the Minio Service on a NodePort, a modification of the `examples/minio/00-minio-deployment.yaml` manifest is necessary. The only change is to the `type:` field, from `ClusterIP` to `NodePort`:
```yaml
spec:
  # ClusterIP is recommended for production environments.
  # Change to NodePort if needed per documentation,
  # but only if you run Minio in a test/trial environment, for example with Minikube.
  type: NodePort
```
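Alternatively, once the Object Store has been created in the next step, the same change can be applied to the running Service with `kubectl patch`. This is a convenience sketch, assuming the Service name (`minio`) and namespace (`velero`) from the example manifest.
```bash
# Switch the existing Minio Service from ClusterIP to NodePort after deployment.
$ kubectl patch svc minio -n velero -p '{"spec": {"type": "NodePort"}}'
```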
### 3. Create the Minio Object Store
After making the changes above, simply run the following command to create the Minio Object Store.
```bash
$ kubectl apply -f examples/minio/00-minio-deployment.yaml
namespace/velero created
deployment.apps/minio created
service/minio created
job.batch/minio-setup created
```
### 4. Verify Minio Object Store has deployed successfully
Retrieve both the Kubernetes node on which the Minio Pod is running, and the port that the Minio Service has been exposed on. With this information, you can verify that Minio is working.
```bash
$ kubectl get pods -n velero
NAME READY STATUS RESTARTS AGE
minio-66dc75bb8d-95xpp 1/1 Running 0 25s
minio-setup-zpnfl 0/1 Completed 0 25s
```
```bash
$ kubectl describe pod minio-66dc75bb8d-95xpp -n velero | grep -i Node:
Node: 140ab5aa-0159-4612-b68c-df39dbea2245/192.168.192.5
```
```bash
$ kubectl get svc -n velero
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio NodePort 10.100.200.82 <none> 9000:32109/TCP 5s
```
In the above outputs, the node on which the Minio Object Store is deployed has IP address `192.168.192.5`, and the NodePort on which the Minio Service is exposed is `32109`. If we now direct a browser to that `node:port` combination, we should see the Minio Object Store web interface. You can use the credentials provided in the `credentials-velero` file earlier to log in.
![Minio Object Store](../img/vsphere-tutorial-icons/Minio.png)
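As an alternative to a browser, you can also confirm Minio is reachable from the command line. The following is a quick sketch, assuming the node IP and NodePort from the outputs above and a Minio build recent enough to expose its unauthenticated liveness endpoint.
```bash
# Expect an HTTP 200 status code if the Minio server is up and reachable.
$ curl -s -o /dev/null -w "%{http_code}\n" http://192.168.192.5:32109/minio/health/live
```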
## Install Velero
To install Velero, the `velero install` command is used. There are a few options that need to be included. Since there is no vSphere plugin at this time, we rely on a third-party program called `restic` to back up the contents of Persistent Volumes when Kubernetes is running on vSphere, so the command line must include the option to enable `restic`. As mentioned above, we have also set up a `publicUrl` for Minio, so we should include that in the command line as well.
Here is a sample command based on a default installation of Velero for Kubernetes running on vSphere, ensuring that the `credentials-velero` secret file created earlier resides in the same directory where the command is run:
```bash
$ velero install --provider aws --bucket velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--use-restic \
--backup-location-config \
region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000,publicUrl=http://192.168.192.5:32109
```
Once the command runs, you should observe various output related to the creation of the necessary Velero objects in Kubernetes. All going well, the output should finish with the following message:
```bash
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status.
```
Yes, that is a small sailboat in the output (Velero is Spanish for sailboat).
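Before deploying a test application, it is worth confirming that the Velero components are running. The quick check below is a sketch assuming the default `velero` namespace; the `restic` DaemonSet is only present because the `--use-restic` option was passed above.
```bash
# The velero Deployment and the restic DaemonSet should both show as available/ready.
$ kubectl get deployment,daemonset,pods -n velero
```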
## Deploy a sample application to backup
Velero provides a sample `nginx` application for backup testing. This nginx deployment assumes the presence of a LoadBalancer for its Service. If you do not have a Load Balancer as part of your Container Network Interface (CNI), there are some easily configured ones available to get you started. One example is MetalLB, available [here](https://metallb.universe.tf/).
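If you do go the MetalLB route, its Layer 2 mode only requires a small ConfigMap once the controller is installed. The snippet below is a minimal sketch, assuming a ConfigMap-based MetalLB release from this period and that `192.168.191.60-192.168.191.80` is an unused range on your network; substitute addresses that suit your environment.
```yaml
# Minimal MetalLB Layer 2 address pool (example values only).
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.191.60-192.168.191.80
```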
* Note: This application is stateless. It does not create any Persistent Volumes, thus the `restic` driver is not used as part of this example. To test whether restic is working correctly, you will need to back up a stateful application that uses Persistent Volumes.
To deploy the sample nginx application, run the following command:
```bash
$ kubectl apply -f examples/nginx-app/base.yaml
namespace/nginx-example created
deployment.apps/nginx-deployment created
service/my-nginx created
```
Check that the deployment was successful using the following commands:
```bash
$ kubectl get ns
NAME STATUS AGE
cassandra Active 23h
default Active 5d3h
kube-public Active 5d3h
kube-system Active 5d3h
nginx-example Active 4s
velero Active 9m40s
wavefront-collector Active 24h
```
```bash
$ kubectl get deployments --namespace=nginx-example
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 2/2 2 2 20s
```
```bash
$ kubectl get svc --namespace=nginx-example
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx LoadBalancer 10.100.200.147 100.64.0.1,192.168.191.70 80:30942/TCP 32s
```
In this example, a Load Balancer has provided the `nginx` service with an external IP address of 192.168.191.70. If I point a browser to that IP address, I get an nginx landing page identical to that shown below.
![nginx landing page](../img/vsphere-tutorial-icons/nginx.png)
We're now ready to do a backup and restore of the `nginx` application.
## Take your first Velero backup
In this example, we are going to specify on the `velero backup` command line that it should only back up applications that match the label `app=nginx`. Thus, we do not back up everything in the Kubernetes cluster, only the items specific to the `nginx` application.
```bash
$ velero backup create nginx-backup --selector app=nginx
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
$ velero backup get
NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR
nginx-backup Completed 2019-08-07 16:13:44 +0100 IST 29d default app=nginx
```
You can now log in to the Minio Object Store via a browser and verify that the backup actually exists. You should see the name of the backup under the `velero/backups` folder:
![Minio Backup Details](../img/vsphere-tutorial-icons/minio-nginx-backup.png)
## Destroy your application
Let's now go ahead and remove the `nginx-example` namespace. In the next step, we will restore the `nginx` application from our backup.
```bash
$ kubectl delete ns nginx-example
namespace "nginx-example" deleted
```
This command should also have removed the `nginx` deployment and service.
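If you want to be certain everything is gone before restoring, the quick checks below can be used; the namespace may briefly show a `Terminating` status before it disappears.
```bash
# Both commands should eventually report that no resources are found.
$ kubectl get ns nginx-example
$ kubectl get all -n nginx-example
```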
## Do your first Velero restore
Restores are also done from the command line using the `velero restore` command. You simply need to specify which backup you wish to restore.
```bash
$ velero backup get
NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR
nginx-backup Completed 2019-08-07 16:13:44 +0100 IST 29d default app=nginx
```
```bash
$ velero restore create nginx-restore --from-backup nginx-backup
Restore request "nginx-restore" submitted successfully.
Run `velero restore describe nginx-restore` or `velero restore logs nginx-restore` for more details.
```
The following command can be used to examine the restore in detail, and check to see if it has successfully completed.
```bash
$ velero restore describe nginx-restore
Name: nginx-restore
Namespace: velero
Labels: <none>
Annotations: <none>
Phase: Completed
Backup: nginx-backup
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
Cluster-scoped: auto
Namespace mappings: <none>
Label selector: <none>
Restore PVs: auto
```
## Verify that the restore succeeded
You can see that the restore has now completed. Check that the namespace, deployment, and service have been restored using the `kubectl` commands shown previously. One item to note is that the `nginx` service may be restored with a new IP address from the LoadBalancer. This is normal.
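For convenience, the relevant checks from earlier can be re-run against the restored namespace:
```bash
# Verify the restored namespace, deployment and service.
$ kubectl get ns nginx-example
$ kubectl get deployments,svc -n nginx-example
```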
Now let's see if we can successfully reach our `nginx` web server on that IP address. Yes, we can! It looks like the restore was successful.
![nginx restored](../img/vsphere-tutorial-icons/nginx-restore-new-ip.png)
Backups and Restores are now working on Kubernetes deployed on vSphere using Velero v1.1.
## Feedback and Participation
As always, we welcome feedback and participation in the development of Velero. [All information on how to contact us or become active can be found here](https://velero.io/community/).
You can find us on [Kubernetes Slack in the #velero channel](https://kubernetes.slack.com/messages/C6VCGP4MT), and follow us on Twitter at [@projectvelero](https://twitter.com/projectvelero).


@@ -0,0 +1,467 @@
---
title: Velero v1.1 backing up and restoring Stateful apps on vSphere
image: /img/posts/cassandra.gif
excerpt: This post demonstrates how Velero can be used on Kubernetes running on vSphere to backup a Stateful application. For the purposes of this example, we will backup and restore a Cassandra NoSQL database management system.
author_name: Cormac Hogan
author_avatar: /img/contributors/cormac-pic.png
categories: ['kubernetes']
# Tag should match author to drive author pages
tags: ['Velero', 'Cormac Hogan', 'how-to']
---
Velero version 1.1 provides support for backing up applications orchestrated on upstream Kubernetes running natively on vSphere. This post provides detailed information on how to use Velero v1.1 to back up and restore a stateful application (`Cassandra`) that is running in a Kubernetes cluster deployed on vSphere. At this time there is no vSphere plugin for snapshotting stateful applications during a Velero backup. In this case, we rely on a third-party program called `restic` to copy the data contents from Persistent Volumes. The data is stored in the same S3 object store where the Kubernetes object metadata is stored.
## Overview of steps
* Download and deploy Cassandra
* Create and populate a database and table in Cassandra
* Prepare Cassandra for a Velero backup by adding appropriate annotations
* Use Velero to take a backup
* Destroy the Cassandra deployment
* Use Velero to restore the Cassandra application
* Verify that the Cassandra database and table of contents have been restored
## What this post does not show
* This tutorial does not show how to deploy Velero v1.1 on vSphere. This is available in other tutorials.
* For this backup to be successful, Velero needs to be installed with the `--use-restic` flag; a sample install command is shown after this list. [More details on using restic for stateful backups can be found in the docs here](https://velero.io/docs/v1.1.0/restic/#Setup)
* The assumption is that the Kubernetes nodes in your cluster have internet access in order to pull the necessary Velero images. This guide does not show how to pull images using a local repository.
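For reference, an install command with `restic` support enabled might look like the following. This is just a sketch mirroring the stateless tutorial, assuming a Minio object store and a `credentials-velero` file in the current directory; adjust the bucket, URLs and credentials for your own environment.
```bash
# Sample install with restic enabled (values borrowed from the stateless tutorial).
$ velero install --provider aws --bucket velero \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --use-restic \
    --backup-location-config \
    region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000,publicUrl=http://192.168.192.5:32109
```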
## Download and Deploy Cassandra
For instructions on how to download and deploy a simple Cassandra StatefulSet, please refer to [this blog post](https://cormachogan.com/2019/06/12/kubernetes-storage-on-vsphere-101-statefulset/). This will show you how to deploy a Cassandra StatefulSet which we can use for our stateful application backup and restore. The manifests [available here](https://github.com/cormachogan/vsphere-storage-101/tree/master/StatefulSets) use an earlier version of Cassandra (v11) that includes the `cqlsh` tool, which we will now use to create a database and populate a table with some sample data.
If you follow the instructions above on how to deploy Cassandra on Kubernetes, you should see a similar response if you run the following command against your deployment:
```bash
$ kubectl exec -it cassandra-0 -n cassandra -- nodetool status
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.244.1.18 162.95 KiB 32 66.9% 2fc03eff-27ee-4934-b483-046e096ba116 Rack1-K8Demo
UN 10.244.1.19 174.32 KiB 32 61.4% 83867fd7-bb6f-45dd-b5ea-cdf5dcec9bad Rack1-K8Demo
UN 10.244.2.14 161.04 KiB 32 71.7% 8d88d0ec-2981-4c8b-a295-b36eee62693c Rack1-K8Demo
```
Now we will populate Cassandra with some data. Here we connect to the first Pod, `cassandra-0`, and run the `cqlsh` command, which allows us to create a keyspace and a table.
```bash
$ kubectl exec -it cassandra-0 -n cassandra -- cqlsh
Connected to K8Demo at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh> CREATE KEYSPACE demodb WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };
cqlsh> use demodb;
cqlsh:demodb> CREATE TABLE emp(emp_id int PRIMARY KEY, emp_name text, emp_city text, emp_sal varint,emp_phone varint);
cqlsh:demodb> INSERT INTO emp (emp_id, emp_name, emp_city, emp_phone, emp_sal) VALUES (100, 'Cormac', 'Cork', 999, 1000000);
cqlsh:demodb> select * from emp;
emp_id | emp_city | emp_name | emp_phone | emp_sal
--------+----------+----------+-----------+---------
100 | Cork | Cormac | 999 | 1000000
(1 rows)
cqlsh:demodb> exit
```
Now that we have populated the application with some data, let's annotate each of the Pods, back it up, destroy the application and then try to restore it using Velero v1.1.
## Prepare Cassandra for a Velero stateful backup by adding Annotations
The first step is to add annotations to each of the Pods in the StatefulSet to indicate that the contents of the persistent volume mounted at `cassandra-data` need to be backed up as well. As mentioned previously, Velero uses the `restic` program at this time for capturing state/data from Kubernetes running on vSphere.
```bash
$ kubectl -n cassandra describe pod/cassandra-0 | grep Annotations
Annotations: <none>
```
```bash
$ kubectl -n cassandra annotate pod/cassandra-0 backup.velero.io/backup-volumes=cassandra-data
pod/cassandra-0 annotated
```
```bash
$ kubectl -n cassandra describe pod/cassandra-0 | grep Annotations
Annotations: backup.velero.io/backup-volumes: cassandra-data
```
Repeat this action for the other Pods, in this example `cassandra-1` and `cassandra-2`. The annotation indicates that the persistent volume contents associated with each Pod need to be backed up.
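Rather than annotating each Pod individually, a small loop achieves the same result. This is a convenience sketch, assuming the three Pods are named `cassandra-0` through `cassandra-2` in the `cassandra` namespace:
```bash
# Annotate all three StatefulSet Pods so restic backs up their cassandra-data volumes.
$ for i in 0 1 2; do kubectl -n cassandra annotate pod/cassandra-$i backup.velero.io/backup-volumes=cassandra-data; done
```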
## Take a backup
```bash
$ velero backup create cassandra --include-namespaces cassandra
Backup request "cassandra" submitted successfully.
Run `velero backup describe cassandra` or `velero backup logs cassandra` for more details.
```
```bash
$ velero backup describe cassandra
Name: cassandra
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: <none>
Phase: InProgress
Namespaces:
Included: cassandra
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Storage Location: default
Snapshot PVs: auto
TTL: 720h0m0s
Hooks: <none>
Backup Format Version: 1
Started: 2019-09-02 15:37:19 +0100 IST
Completed: <n/a>
Expiration: 2019-10-02 15:37:19 +0100 IST
Persistent Volumes: <none included>
Restic Backups (specify --details for more information):
In Progress: 1
```
```bash
$ velero backup describe cassandra
Name: cassandra
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: <none>
Phase: Completed
Namespaces:
Included: cassandra
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Storage Location: default
Snapshot PVs: auto
TTL: 720h0m0s
Hooks: <none>
Backup Format Version: 1
Started: 2019-09-02 15:37:19 +0100 IST
Completed: 2019-09-02 15:37:34 +0100 IST
Expiration: 2019-10-02 15:37:19 +0100 IST
Persistent Volumes: <none included>
Restic Backups (specify --details for more information):
Completed: 3
```
If we add the `--details` option to the previous command, we can see the various objects that were backed up.
```bash
$ velero backup describe cassandra --details
Name: cassandra
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: <none>
Phase: Completed
Namespaces:
Included: cassandra
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Storage Location: default
Snapshot PVs: auto
TTL: 720h0m0s
Hooks: <none>
Backup Format Version: 1
Started: 2019-09-02 15:37:19 +0100 IST
Completed: 2019-09-02 15:37:34 +0100 IST
Expiration: 2019-10-02 15:37:19 +0100 IST
Resource List:
apps/v1/ControllerRevision:
- cassandra/cassandra-55b978b564
apps/v1/StatefulSet:
- cassandra/cassandra
v1/Endpoints:
- cassandra/cassandra
v1/Namespace:
- cassandra
v1/PersistentVolume:
- pvc-2b574305-ca52-11e9-80e4-005056a239d9
- pvc-51a681ad-ca52-11e9-80e4-005056a239d9
- pvc-843241b7-ca52-11e9-80e4-005056a239d9
v1/PersistentVolumeClaim:
- cassandra/cassandra-data-cassandra-0
- cassandra/cassandra-data-cassandra-1
- cassandra/cassandra-data-cassandra-2
v1/Pod:
- cassandra/cassandra-0
- cassandra/cassandra-1
- cassandra/cassandra-2
v1/Secret:
- cassandra/default-token-bzh56
v1/Service:
- cassandra/cassandra
v1/ServiceAccount:
- cassandra/default
Persistent Volumes: <none included>
Restic Backups:
Completed:
cassandra/cassandra-0: cassandra-data
cassandra/cassandra-1: cassandra-data
cassandra/cassandra-2: cassandra-data
```
The command `velero backup logs` can be used to get additional information about the backup progress.
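For example, to view the logs of the backup taken above:
```bash
$ velero backup logs cassandra
```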
## Destroy the Cassandra deployment
Now that we have successfully taken a backup, which includes the restic backups of the data, let's go ahead and destroy the Cassandra namespace, and then restore it once again.
```bash
$ kubectl delete ns cassandra
namespace "cassandra" deleted
$ kubectl get pv
No resources found.
$ kubectl get pods -n cassandra
No resources found.
$ kubectl get pvc -n cassandra
No resources found.
```
## Restore the Cassandra application via Velero
Now use Velero to restore the application and contents. The name of the backup must be specified at the command line using the `--from-backup` option. You can get the backup name from the following command:
```bash
$ velero backup get
NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR
cassandra1 Completed 2019-10-02 15:37:34 +0100 IST 31d default <none>
```
Next, initiate the restore:
```bash
$ velero restore create cassandra1 --from-backup cassandra1
Restore request "cassandra1" submitted successfully.
Run `velero restore describe cassandra1` or `velero restore logs cassandra1` for more details.
```
```bash
$ velero restore describe cassandra1
Name: cassandra1
Namespace: velero
Labels: <none>
Annotations: <none>
Phase: InProgress
Backup: cassandra1
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
Cluster-scoped: auto
Namespace mappings: <none>
Label selector: <none>
Restore PVs: auto
Restic Restores (specify --details for more information):
New: 3
```
Let's get some further information by adding the `--details` option.
```bash
$ velero restore describe cassandra1 --details
Name: cassandra1
Namespace: velero
Labels: <none>
Annotations: <none>
Phase: InProgress
Backup: cassandra1
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
Cluster-scoped: auto
Namespace mappings: <none>
Label selector: <none>
Restore PVs: auto
Restic Restores:
New:
cassandra/cassandra-0: cassandra-data
cassandra/cassandra-1: cassandra-data
cassandra/cassandra-2: cassandra-data
```
When the restore completes, the `Phase` and `Restic Restores` should change to `Completed` as shown below.
```bash
$ velero restore describe cassandra1 --details
Name: cassandra1
Namespace: velero
Labels: <none>
Annotations: <none>
Phase: Completed
Backup: cassandra1
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
Cluster-scoped: auto
Namespace mappings: <none>
Label selector: <none>
Restore PVs: auto
Restic Restores:
Completed:
cassandra/cassandra-0: cassandra-data
cassandra/cassandra-1: cassandra-data
cassandra/cassandra-2: cassandra-data
```
The `velero restore logs` command can also be used to track restore progress.
## Validate the restored application
Use some of the commands seen earlier to validate that not only the application, but also its data, has been restored.
```bash
$ kubectl get ns
NAME STATUS AGE
cassandra Active 2m35s
default Active 13d
kube-node-lease Active 13d
kube-public Active 13d
kube-system Active 13d
velero Active 35m
wavefront-collector Active 7d5h
```
```bash
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-51ae99a9-cd91-11e9-80e4-005056a239d9 1Gi RWO Delete Bound cassandra/cassandra-data-cassandra-0 cass-sc-csi 2m28s
pvc-51b15558-cd91-11e9-80e4-005056a239d9 1Gi RWO Delete Bound cassandra/cassandra-data-cassandra-1 cass-sc-csi 2m22s
pvc-51b4079c-cd91-11e9-80e4-005056a239d9 1Gi RWO Delete Bound cassandra/cassandra-data-cassandra-2 cass-sc-csi 2m27s
```
```bash
$ kubectl get pvc -n cassandra
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cassandra-data-cassandra-0 Bound pvc-51ae99a9-cd91-11e9-80e4-005056a239d9 1Gi RWO cass-sc-csi 2m49s
cassandra-data-cassandra-1 Bound pvc-51b15558-cd91-11e9-80e4-005056a239d9 1Gi RWO cass-sc-csi 2m49s
cassandra-data-cassandra-2 Bound pvc-51b4079c-cd91-11e9-80e4-005056a239d9 1Gi RWO cass-sc-csi 2m49s
```
```bash
$ kubectl exec -it cassandra-0 -n cassandra -- nodetool status
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.244.1.21 138.53 KiB 32 66.9% 2fc03eff-27ee-4934-b483-046e096ba116 Rack1-K8Demo
UN 10.244.1.22 166.45 KiB 32 71.7% 8d88d0ec-2981-4c8b-a295-b36eee62693c Rack1-K8Demo
UN 10.244.2.23 160.43 KiB 32 61.4% 83867fd7-bb6f-45dd-b5ea-cdf5dcec9bad Rack1-K8Demo
```
```bash
$ kubectl exec -it cassandra-0 -n cassandra -- cqlsh
Connected to K8Demo at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh> use demodb;
cqlsh:demodb> select * from emp;
emp_id | emp_city | emp_name | emp_phone | emp_sal
--------+----------+----------+-----------+---------
100 | Cork | Cormac | 999 | 1000000
(1 rows)
cqlsh:demodb>
```
It looks like the restore has been successful. Velero v1.1 has successfully restored the Kubernetes objects for the Cassandra application, as well as the database and table contents.
## Feedback and Participation
As always, we welcome feedback and participation in the development of Velero. [All information on how to contact us or become active can be found here](https://velero.io/community/).
You can find us on [Kubernetes Slack in the #velero channel](https://kubernetes.slack.com/messages/C6VCGP4MT), and follow us on Twitter at [@projectvelero](https://twitter.com/projectvelero).
