Velero is a convenient backup tool for Kubernetes clusters that compresses and backs up Kubernetes objects to object storage. It also takes snapshots of your cluster’s Persistent Volumes using your cloud provider’s block storage snapshot features, and can then restore your cluster’s objects and Persistent Volumes to a previous state.
The DigitalOcean Velero Plugin allows you to use DigitalOcean block storage to snapshot your Persistent Volumes, and Spaces to back up your Kubernetes objects. When running a Kubernetes cluster on DigitalOcean, this allows you to quickly back up your cluster’s state and restore it should disaster strike.
In this tutorial we’ll set up and configure the velero
command line tool on a local machine, and deploy the server component into our Kubernetes cluster. We’ll then deploy a sample Nginx app that uses a Persistent Volume for logging and then simulate a disaster recovery scenario.
Before you begin this tutorial, you should have the following available to you:
On your local computer:
- The kubectl command-line tool, configured to connect to your cluster. You can read more about installing and configuring kubectl in the official Kubernetes documentation.
- The git command-line utility. You can learn how to install git in Getting Started with Git.

In your DigitalOcean account:
- A DigitalOcean Kubernetes cluster, or a Kubernetes cluster (version 1.7.5 or later) running on DigitalOcean Droplets.
- A DigitalOcean Space that will store your backed-up Kubernetes objects, along with a set of Spaces access keys.
- A DigitalOcean personal API token. Be sure that the token you create has Read/Write permissions or snapshots will not work.

Once you have all of this set up, you're ready to begin with this guide.
The Velero backup tool consists of a client installed on your local computer and a server that runs in your Kubernetes cluster. To begin, we’ll install the local Velero client.
In your web browser, navigate to the Velero GitHub repo releases page, find the release corresponding to your OS and system architecture, and copy the link address. For the purposes of this guide, we’ll use an Ubuntu 18.04 server on an x86-64 (or AMD64) processor as our local machine, and the Velero v1.2.0
release.
Note: To follow this guide, you should download and install v1.2.0 of the Velero client.
Then, from the command line on your local computer, navigate to the temporary /tmp
directory and cd
into it:
Use wget
and the link you copied earlier to download the release tarball:
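For example, with the v1.2.0 Linux AMD64 release, the link you copied should resemble the following:

wget https://github.com/vmware-tanzu/velero/releases/download/v1.2.0/velero-v1.2.0-linux-amd64.tar.gz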
Once the download completes, extract the tarball using tar
(note the filename may differ depending on the release version and your OS):
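For the v1.2.0 Linux AMD64 release downloaded above, the command would look like this:

tar -xvzf velero-v1.2.0-linux-amd64.tar.gz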
The /tmp
directory should now contain the extracted velero-v1.2.0-linux-amd64
directory as well as the tarball you just downloaded.
Verify that you can run the velero
client by executing the binary:
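For example, from the /tmp directory:

./velero-v1.2.0-linux-amd64/velero help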
You should see the following help output:
Output
Velero is a tool for managing disaster recovery, specifically for Kubernetes
cluster resources. It provides a simple, configurable, and operationally robust
way to back up your application state and associated data.
If you're familiar with kubectl, Velero supports a similar model, allowing you to
execute commands such as 'velero get backup' and 'velero create schedule'. The same
operations can also be performed as 'velero backup get' and 'velero schedule create'.
Usage:
velero [command]
Available Commands:
backup Work with backups
backup-location Work with backup storage locations
bug Report a Velero bug
client Velero client related commands
completion Output shell completion code for the specified shell (bash or zsh)
create Create velero resources
delete Delete velero resources
describe Describe velero resources
get Get velero resources
help Help about any command
install Install Velero
plugin Work with plugins
restic Work with restic
restore Work with restores
schedule Work with schedules
snapshot-location Work with snapshot locations
version Print the velero version and associated image
. . .
At this point you should move the velero
executable out of the temporary /tmp
directory and add it to your PATH
. To add it to your PATH
on an Ubuntu system, simply copy it to /usr/local/bin
:
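For example:

sudo mv velero-v1.2.0-linux-amd64/velero /usr/local/bin/velero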
You’re now ready to configure secrets for the Velero server and then deploy it to your Kubernetes cluster.
Before setting up the server component of Velero, you will need to prepare your DigitalOcean Spaces keys and API token. Again navigate to the temporary directory /tmp
using the cd
command:
Now we’ll download a copy of the Velero plugin for DigitalOcean. Visit the plugin’s Github releases page and copy the link to the file ending in .tar.gz
.
Use wget
and the link you copied earlier to download the release tarball:
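For the v1.0.0 plugin release, the link should resemble the following (the extracted directory name below assumes this source archive):

wget https://github.com/digitalocean/velero-plugin/archive/v1.0.0.tar.gz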
Once the download completes, extract the tarball using tar
(again note that the filename may differ depending on the release version):
The /tmp
directory should now contain the extracted velero-plugin-1.0.0
directory as well as the tarball you just downloaded.
Next we’ll cd
into the velero-plugin-1.0.0
directory:
Now we can save the access keys for our DigitalOcean Space and API token for use as a Kubernetes Secret. First, open up the examples/cloud-credentials
file using your favorite editor.
The file will look like this:
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
Edit the <AWS_ACCESS_KEY_ID>
and <AWS_SECRET_ACCESS_KEY>
placeholders to use your DigitalOcean Spaces keys. Be sure to remove the <
and >
characters.
The next step is to edit the 01-velero-secret.patch.yaml file so that it includes your DigitalOcean API token. Open the file in your favorite editor:
It should look like this:
---
apiVersion: v1
kind: Secret
stringData:
digitalocean_token: <DIGITALOCEAN_API_TOKEN>
type: Opaque
Change the entire <DIGITALOCEAN_API_TOKEN>
placeholder to use your DigitalOcean personal API token. The line should look something like digitalocean_token: 18a0d730c0e0....
. Again, make sure to remove the <
and >
characters.
A Velero installation consists of a number of Kubernetes objects that all work together to create, schedule, and manage backups. The velero
executable that you just downloaded can generate and install these objects for you. The velero install
command will perform the preliminary set-up steps to get your cluster ready for backups. Specifically, it will:
- Create a velero Namespace.
- Add the velero Service Account.
- Configure role-based access control (RBAC) rules to grant permissions to the velero Service Account.
- Install Custom Resource Definitions (CRDs) for the Velero-specific resources: Backup, Schedule, Restore, Config.
- Register Velero Plugins to manage Block snapshots and Spaces storage.
We will run the velero install command with some non-default configuration options. Specifically, you will need to edit each of the following settings in the actual invocation of the command to match your Spaces configuration:
--bucket velero-backups: Change the velero-backups value to match the name of your DigitalOcean Space. For example, if you called your Space 'backup-bucket', the option would look like this: --bucket backup-bucket

--backup-location-config s3Url=https://nyc3.digitaloceanspaces.com,region=nyc3: Change the URL and region to match your Space's settings. Specifically, edit both nyc3 portions to match the region where your Space is hosted. For example, if your Space is hosted in the fra1 region, the line would look like this: --backup-location-config s3Url=https://fra1.digitaloceanspaces.com,region=fra1. The identifiers for regions are: nyc3, sfo2, sgp1, and fra1.

Once you are ready with the appropriate bucket and backup location settings, it is time to install Velero. Run the following command, substituting your values where required:
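A sketch of that command, assuming a Space called velero-backups in nyc3, the examples/cloud-credentials file prepared earlier, and the object storage plugin image velero/velero-plugin-for-aws:v1.0.0 alongside the DigitalOcean block storage plugin (adjust the bucket, URL, region, and plugin versions to your own setup):

velero install \
  --provider velero.io/aws \
  --bucket velero-backups \
  --backup-location-config s3Url=https://nyc3.digitaloceanspaces.com,region=nyc3 \
  --plugins velero/velero-plugin-for-aws:v1.0.0,digitalocean/velero-plugin:v1.0.0 \
  --use-volume-snapshots=false \
  --secret-file=./examples/cloud-credentials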
You should see the following output:
Output
CustomResourceDefinition/backups.velero.io: attempting to create resource
CustomResourceDefinition/backups.velero.io: created
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource
CustomResourceDefinition/backupstoragelocations.velero.io: created
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource
CustomResourceDefinition/deletebackuprequests.velero.io: created
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource
CustomResourceDefinition/downloadrequests.velero.io: created
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource
CustomResourceDefinition/podvolumebackups.velero.io: created
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource
CustomResourceDefinition/podvolumerestores.velero.io: created
CustomResourceDefinition/resticrepositories.velero.io: attempting to create resource
CustomResourceDefinition/resticrepositories.velero.io: created
CustomResourceDefinition/restores.velero.io: attempting to create resource
CustomResourceDefinition/restores.velero.io: created
CustomResourceDefinition/schedules.velero.io: attempting to create resource
CustomResourceDefinition/schedules.velero.io: created
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource
CustomResourceDefinition/serverstatusrequests.velero.io: created
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource
CustomResourceDefinition/volumesnapshotlocations.velero.io: created
Waiting for resources to be ready in cluster...
Namespace/velero: attempting to create resource
Namespace/velero: created
ClusterRoleBinding/velero: attempting to create resource
ClusterRoleBinding/velero: created
ServiceAccount/velero: attempting to create resource
ServiceAccount/velero: created
Secret/cloud-credentials: attempting to create resource
Secret/cloud-credentials: created
BackupStorageLocation/default: attempting to create resource
BackupStorageLocation/default: created
Deployment/velero: attempting to create resource
Deployment/velero: created
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status.
You can watch the deployment logs using the kubectl
command from the output. Once your deploy is ready, you can proceed to the next step, which is configuring the server. A successful deploy will look like this (with a different AGE column):
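To produce this listing, a command along these lines should work:

kubectl get deployments --namespace velero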
Output
NAME READY UP-TO-DATE AVAILABLE AGE
velero 1/1 1 1 2m
At this point you have installed the server component of Velero into your Kubernetes cluster as a Deployment. You have also registered your Spaces keys with Velero using a Kubernetes Secret.
Note: You can specify the kubeconfig
that the velero
command line tool should use with the --kubeconfig
flag. If you don’t use this flag, velero
will check the KUBECONFIG
environment variable and then fall back to the kubectl
default (~/.kube/config
).
When we installed the Velero server, the option --use-volume-snapshots=false
was part of the command. Since we want to take snapshots of the underlying block storage devices in our Kubernetes cluster, we need to tell Velero to use the correct plugin for DigitalOcean block storage.
Run the following command to enable the plugin and register it as the default snapshot provider:
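A sketch of that command, assuming digitalocean.com/velero is the provider name registered by the v1.0.0 plugin (check the plugin's README if your version differs):

velero snapshot-location create default --provider digitalocean.com/velero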
You will see the following output:
Output
Snapshot volume location "default" configured successfully.
In the previous steps we created the object storage and block storage location objects in the Velero server. We've registered the digitalocean/velero-plugin:v1.0.0 plugin with the server, and installed our Spaces secret keys into the cluster.
The final step is patching the cloud-credentials
Secret that we created earlier to use our DigitalOcean API token. Without this token the snapshot plugin will not be able to authenticate with the DigitalOcean API.
We could use the kubectl edit
command to modify the Velero Deployment object with a reference to the API token. However, editing complex YAML objects by hand can be tedious and error prone. Instead, we’ll use the kubectl patch
command since Kubernetes supports patching objects. Let’s take a quick look at the contents of the patch files that we’ll apply.
The first patch file is the examples/01-velero-secret.patch.yaml
file that you edited earlier. It is designed to add your API token to the secrets/cloud-credentials
Secret that already contains your Spaces keys. cat
the file:
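For example:

cat examples/01-velero-secret.patch.yaml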
It should look like this (with your token in place of the <DIGITALOCEAN_API_TOKEN>
placeholder):
. . .
---
apiVersion: v1
kind: Secret
stringData:
digitalocean_token: <DIGITALOCEAN_API_TOKEN>
type: Opaque
Now let’s look at the patch file for the Deployment:
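Assuming the plugin release names this file examples/02-velero-deployment.patch.yaml (adjust if your copy differs), you can view it with cat:

cat examples/02-velero-deployment.patch.yaml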
You should see the following YAML:
. . .
---
apiVersion: v1
kind: Deployment
spec:
template:
spec:
containers:
- args:
- server
command:
- /velero
env:
- name: DIGITALOCEAN_TOKEN
valueFrom:
secretKeyRef:
key: digitalocean_token
name: cloud-credentials
name: velero
This file indicates that we’re patching a Deployment’s Pod spec that is called velero
. Since this is a patch we do not need to specify an entire Kubernetes object spec or metadata. In this case the Velero Deployment is already configured using the cloud-credentials
secret because the velero install
command created it for us. So all that this patch needs to do is register the digitalocean_token
as an environment variable with the already deployed Velero Pod.
Let’s apply the first Secret patch using the kubectl patch
command:
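A sketch of that command, using the patch file and the velero Namespace from earlier:

kubectl patch secret/cloud-credentials -p "$(cat examples/01-velero-secret.patch.yaml)" --namespace velero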
You should see the following output:
Output
secret/cloud-credentials patched
Finally we will patch the Deployment. Run the following command:
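Assuming the Deployment patch file name shown above, the command would look like this:

kubectl patch deployment/velero -p "$(cat examples/02-velero-deployment.patch.yaml)" --namespace velero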
You will see the following if the patch is successful:
Output
deployment.apps/velero patched
Let’s verify the patched Deployment is working using kubectl get
on the velero
Namespace:
You should see the following output:
Output
NAME READY UP-TO-DATE AVAILABLE AGE
velero 1/1 1 1 12s
At this point Velero is running and fully configured, and ready to back up and restore your Kubernetes cluster objects and Persistent Volumes to DigitalOcean Spaces and Block Storage.
In the next section, we’ll run a quick test to make sure that the backup and restore functionality works as expected.
Now that we’ve successfully installed and configured Velero, we can create a test Nginx Deployment, with a Persistent Volume and Service. Once the Deployment is running we will run through a backup and restore drill to ensure that Velero is configured and working properly.
Ensure you are still working in the /tmp/velero-plugin-1.0.0
directory. The examples
directory contains a sample Nginx manifest called nginx-example.yaml
.
Open this file using your editor of choice:
You should see the following text:
Output
. . .
---
apiVersion: v1
kind: Namespace
metadata:
name: nginx-example
labels:
app: nginx
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: nginx-logs
namespace: nginx-example
labels:
app: nginx
spec:
storageClassName: do-block-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deploy
namespace: nginx-example
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
volumes:
- name: nginx-logs
persistentVolumeClaim:
claimName: nginx-logs
containers:
- image: nginx:stable
name: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: "/var/log/nginx"
name: nginx-logs
readOnly: false
---
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx
name: nginx-svc
namespace: nginx-example
spec:
ports:
- port: 80
targetPort: 80
selector:
app: nginx
type: LoadBalancer
In this file, we observe specs for:
- A Namespace called nginx-example
- A Deployment consisting of a single replica of the nginx:stable container image
- A PersistentVolumeClaim for the Nginx logs (nginx-logs), using the do-block-storage StorageClass
- A LoadBalancer Service that exposes port 80
Create the objects using kubectl apply
:
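For example, from the plugin directory:

kubectl apply -f examples/nginx-example.yaml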
You should see the following output:
Output
namespace/nginx-example created
persistentvolumeclaim/nginx-logs created
deployment.apps/nginx-deploy created
service/nginx-svc created
Check that the Deployment succeeded:
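For example:

kubectl get deployments --namespace nginx-example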
You should see the following output:
Output
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deploy 1/1 1 1 1m23s
Once Available
reaches 1, fetch the Nginx load balancer’s external IP using kubectl get
:
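For example:

kubectl get services --namespace nginx-example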
You should see both the internal CLUSTER-IP and EXTERNAL-IP for the nginx-svc Service:
Output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-svc LoadBalancer 10.245.147.61 159.203.48.191 80:30232/TCP 3m1s
Note the EXTERNAL-IP
and navigate to it using your web browser.
You should see the default Nginx welcome page, which indicates that your Nginx Deployment and Service are up and running.
Before we simulate our disaster scenario, let’s first check the Nginx access logs (stored on a Persistent Volume attached to the Nginx Pod):
Fetch the Pod’s name using kubectl get
:
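For example:

kubectl get pods --namespace nginx-example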
Output
NAME READY STATUS RESTARTS AGE
nginx-deploy-694c85cdc8-vknsk 1/1 Running 0 4m14s
Now, exec
into the running Nginx container to get a shell inside of it:
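For example, substituting your own Pod name from the output above:

kubectl exec -it nginx-deploy-694c85cdc8-vknsk --namespace nginx-example -- /bin/bash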
Once inside the Nginx container, cat
the Nginx access logs:
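For example:

cat /var/log/nginx/access.log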
You should see some Nginx access entries:
Output
10.244.0.119 - - [03/Jan/2020:04:43:04 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
10.244.0.119 - - [03/Jan/2020:04:43:04 +0000] "GET /favicon.ico HTTP/1.1" 404 153 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
Note these down (especially the timestamps), as we will use them to confirm the success of the restore procedure. Exit the pod:
We can now perform the backup procedure to copy all nginx
Kubernetes objects to Spaces and take a Snapshot of the Persistent Volume we created when deploying Nginx.
We’ll create a backup called nginx-backup
using the velero
command line client:
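For example:

velero backup create nginx-backup --selector app=nginx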
The --selector app=nginx
instructs the Velero server to only back up Kubernetes objects with the app=nginx
Label Selector.
You should see the following output:
Output
Backup request "nginx-backup" submitted successfully.
Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details.
Running velero backup describe nginx-backup --details
should provide the following output after a short delay:
Output
Name: nginx-backup
Namespace: velero
Labels: velero.io/backup=nginx-backup
velero.io/pv=pvc-6b7f63d7-752b-4537-9bb0-003bed9129ca
velero.io/storage-location=default
Annotations: <none>
Phase: Completed
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: app=nginx
Storage Location: default
Snapshot PVs: auto
TTL: 720h0m0s
Hooks: <none>
Backup Format Version: 1
Started: 2020-01-02 23:45:30 -0500 EST
Completed: 2020-01-02 23:45:34 -0500 EST
Expiration: 2020-02-01 23:45:30 -0500 EST
Resource List:
apps/v1/Deployment:
- nginx-example/nginx-deploy
apps/v1/ReplicaSet:
- nginx-example/nginx-deploy-694c85cdc8
v1/Endpoints:
- nginx-example/nginx-svc
v1/Namespace:
- nginx-example
v1/PersistentVolume:
- pvc-6b7f63d7-752b-4537-9bb0-003bed9129ca
v1/PersistentVolumeClaim:
- nginx-example/nginx-logs
v1/Pod:
- nginx-example/nginx-deploy-694c85cdc8-vknsk
v1/Service:
- nginx-example/nginx-svc
Persistent Volumes:
pvc-6b7f63d7-752b-4537-9bb0-003bed9129ca:
Snapshot ID: dfe866cc-2de3-11ea-9ec0-0a58ac14e075
Type: ext4
Availability Zone:
IOPS: <N/A>
This output indicates that nginx-backup
completed successfully. The list of resources shows each of the Kubernetes objects that was included in the backup. The final section shows that the PersistentVolume was also backed up using a block storage Volume Snapshot.
To confirm from within the DigitalOcean Cloud Control Panel, navigate to the Space containing your Kubernetes backup files.
You should see a new directory called nginx-backup
containing the Velero backup files.
Using the left-hand navigation bar, go to Images and then Snapshots. Within Snapshots, navigate to Volumes. You should see a Snapshot corresponding to the PVC listed in the above output.
We can now test the restore procedure.
Let’s first delete the nginx-example
Namespace. This will delete everything in the Namespace, including the Load Balancer and Persistent Volume:
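For example:

kubectl delete namespace nginx-example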
Verify that you can no longer access Nginx at the Load Balancer endpoint, and that the nginx-example
Deployment is no longer running:
Output
No resources found in nginx-example namespace.
We can now perform the restore procedure, once again using the velero
client:
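For example:

velero restore create --from-backup nginx-backup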
Here we use create
to create a Velero Restore
object from the nginx-backup
object.
Velero will confirm that the restore request was submitted successfully.
Check the status of the restored Deployment:
Output
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deploy 1/1 1 1 58s
Check for the creation of a Persistent Volume:
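For example:

kubectl get pvc --namespace nginx-example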
Output
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-logs Bound pvc-6b7f63d7-752b-4537-9bb0-003bed9129ca 5Gi RWO do-block-storage 75s
The restore also created a LoadBalancer. Sometimes the Service will be re-created with a new IP address. You will need to find the EXTERNAL-IP
address again:
Output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-svc LoadBalancer 10.245.15.83 159.203.48.191 80:31217/TCP 97s
Navigate to the Nginx Service’s external IP once again to confirm that Nginx is up and running.
Finally, check the logs on the restored Persistent Volume to confirm that the log history has been preserved post-restore.
To do this, once again fetch the Pod’s name using kubectl get
:
Output
NAME READY STATUS RESTARTS AGE
nginx-deploy-694c85cdc8-vknsk 1/1 Running 0 2m20s
Then exec
into it:
Once inside the Nginx container, cat
the Nginx access logs:
Output
10.244.0.119 - - [03/Jan/2020:04:43:04 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
10.244.0.119 - - [03/Jan/2020:04:43:04 +0000] "GET /favicon.ico HTTP/1.1" 404 153 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
You should see the same pre-backup access attempts (note the timestamps), confirming that the Persistent Volume restore was successful. Note that there may be additional attempts in the logs if you visited the Nginx landing page after you performed the restore.
At this point, we’ve successfully backed up our Kubernetes objects to DigitalOcean Spaces, and our Persistent Volumes using Block Storage Volume Snapshots. We simulated a disaster scenario, and restored service to the test Nginx application.
In this guide we installed and configured the Velero Kubernetes backup tool on a DigitalOcean-based Kubernetes cluster. We configured the tool to back up Kubernetes objects to DigitalOcean Spaces, and back up Persistent Volumes using Block Storage Volume Snapshots.
Velero can also be used to schedule regular backups of your Kubernetes cluster for disaster recovery. To do this, you can use the velero schedule
command. Velero can also be used to migrate resources from one cluster to another.
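For example, a hypothetical nightly schedule for the Nginx app used in this guide might look like this (the schedule name and cron expression are illustrative):

velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx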
To learn more about DigitalOcean Spaces, consult the official Spaces documentation. To learn more about Block Storage Volumes, consult the Block Storage Volume documentation.
This tutorial builds on the README found in StackPointCloud’s ark-plugin-digitalocean
GitHub repo.
It almost works but the file /var/log/nginx/access.log is empty after restore as long as I do not hit the loadbalancer.
For the people still following this guide in 2019 and running into the issue that the config object is no longer part of heptio-ark, you only have to do a few things differently. All steps are the same, however the 05-ark-backupstoragelocation.yaml file is a little different.
Use the new format supplied in the repo like this:

apiVersion: ark.heptio.com/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: heptio-ark
spec:
  provider: aws
  objectStorage:
    bucket: space_name_here
  config:
    s3Url: https://space_region_here.digitaloceanspaces.com
    region: space_region_here
You will also notice that there is a 06-ark-volumesnapshotlocation.yaml file. No changes needed to the file but make sure to also kubectl apply this file.
Hi Brecht!
Thanks for surfacing this issue and posting your solution! This guide is now pretty out of date since Ark changed to Velero and the project introduced a couple of changes. We have it on our radar to update this guide and will get to it as soon as possible.
Thanks again for contributing your solution!
Hi Hjet & Brecht,
I’ve run into the same problem, however the changes Brecht proposed don’t seem to solve it.
I get the following message after running kubectl apply -f examples/05-ark-backupstoragelocation.yaml:

error: unable to recognize "examples/05-ark-backupstoragelocation.yaml": no matches for kind "BackupStorageLocation" in version "ark.heptio.com/v1"

I also had to change the namespace to velero earlier on in the examples/credentials-ark file to get the secret to be created.
Cheers, Rob
In the meantime whilst this article is being updated, would it be possible for the author to tell us what versions of the software he used were please?
I can see that he used ark-v0.9.6-linux-amd64.tar.gz in his tar command. Would it be possible to find out what version of the digital ocean ark plugin he used please?
I’m on a bit of a deadline to get my clusters backed up unfortunately.
Thanks in advance, Rob
Hi Rob!
Thanks for your patience as we get around to updating this article. No guarantees but I think I should be able to get this done next week.
In the meantime, I can confirm that this guide uses the following release for the DO Ark plugin: https://github.com/StackPointCloud/ark-plugin-digitalocean/releases/tag/v0.1.0 and, as you mentioned, uses v0.9.6 for the Ark client and server. I'm going to pin these versions in the tutorial until it gets updated.

Hi Rob,
The guide has been updated to v0.10.0 of the Ark client & server, and v0.10.0 of the Ark DigitalOcean plugin. You can expect another update of this tutorial to v0.11.0, the current version containing the Velero renaming/rebranding, once the plugin gets updated.

Thanks for your patience as we keep this guide in sync with changes in the open source project!
when I run:
I see
does this means that velero is not starting correctly?
I tried:
the velero deployment stays at AVAILABLE 1 for like 5 sec, then it goes to 0
when I try to add the plugin:
it does not seem to make a difference
or either the new version:
when I run the backup sequence, I get an empty backup:
Not sure where the problem is.
TL;DR I switched to AWS S3 and everything worked as expected, and you have one less plugin to install.
I’m getting the same “error”. Everything seems to work, even my Spaces space is getting data but the backup sequence also gives an empty Persistent Volumes field.
What could explain this? Thank you in advance.
Hi phillgr,
Thanks for surfacing your issue!
In version v0.11.0, the Ark project was renamed to Velero.

It appears from your posted output that you're using v0.11.0 of the Velero server (the Deployment name should be ark and the Namespace heptio-ark), and not v0.10.0. The above guide applies to version v0.10.0 of the Ark/Velero server, and version v0.10.0 of StackPointCloud's DigitalOcean Ark Plugin. Per this issue, the plugin needs to be updated before Velero v0.11.0 can be used.

A previous version of this guide had notes indicating which release versions to download at the appropriate steps. I've since updated the guide to explicitly include the pinned v0.10.0 version in the wget commands. I just ran through the guide and can confirm that this is the output you should get when running kubectl get deployments --namespace=heptio-ark:

From this point, you can move on to running through the backup/restore exercise.
Thanks for your understanding and patience as we update this tutorial to reflect the changes of the open source project!
Hey there, I just switched from GKE to DO Kubernetes.
Any idea when this guide will be updated for Velero 1.0?
Or automated volume snapshots?
Thankyou!
The tutorial is updated to use Velero 1.2.0. The upstream changes in Velero to use plugins for everything should mean this guide will be up to date indefinitely. Thanks for your patience!
When trying to back up Pods that make use of NFS storage (e.g. to share volumes among Pods with an NFS provisioner backed by a DO PVC with ReadWriteMany), the volume snapshotter runs into an error:
Anyone around with an idea how to solve this?
Same problem.
This is most likely due to the fact that the method set out in the NFS tutorial uses a local NFS server that does not have true CSI support for volume snapshots, as required by Velero PV backups:

https://velero.io/docs/v1.4/csi/

Velero relies on the VolumeSnapshot functionality of the CSI to take PV backups, and since the NFS provisioner does not support VolumeSnapshots (as far as I could find in their docs: https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner), it is unlikely Velero backups of NFS will work.
I tried to back up a deployment created by the Helm MongoDb Chart. It did pull the definitions and everything, but the volume, which is the key piece, failed. I have trouble understanding the logs, but I think the two errors are:
And
Any hints to this? Is this some kind of access issue, with the volumes - or something more nasty (“nil pointer dereference”)?
The key part of the error message is 401 Unable to authenticate you - I suspect there's an issue with your DigitalOcean API key and the cloud-secrets Secret key in Kubernetes.

You could try making a request using curl against the API directly to see if it works: https://developers.digitalocean.com/documentation/v2/#list-all-block-storage-volumes
If that request works, then you can be pretty sure the issue is with the secret in Kubernetes.
Hi, sorry I did not see your reply until now.
I have an API token and am able to query the volume using that token just fine. I can read the specs and the id of the connected droplet.
So, there seems to be an access issue here, but I have no clue on how to resolve it? When I ran the Velero test I used a brand new token.
Interesting, it sounds like the secret in Kubernetes itself may be the issue. Take a look at the secret values and then base64 decode them to compare kubernetes’ version to your working version:
You should see output like this:
Now look at the cloud: and digitalocean_token: lines. Copy the base64 encoded values (including trailing = or == characters) and run them through printf or echo and pipe those results to base64 -d to decode them like this:

If those values don't match your Spaces credentials and DigitalOcean API token respectively, then that's the problem.
If they do match, then we can try enabling debug level logging for the plugin container and pore through the logs from it.
I don’t use the velero-token for anything else - so they don’t match. But I did fetch it from the secret, like you suggested, and ran a CURL query on the resource:
CURL …token,etc… “https://api.digitalocean.com/v2/volumes/dd403d04-48e5-11ea-9cc6-0a58ac14d10a”
This did work, I got the information about the volume just fine. So why doesn't this work from Velero? Might it be requiring more access rights than a simple GET?
I had the same issue and was able to resolve it by running kubectl delete pod -n velero -l component=velero.

For me, the new pod was stuck in pending after the patched credentials.
Hello, I followed this article and ran into an issue, please advise.

I get this error message when I run velero install:

"An error occurred: invalid argument "Url=https://ams3.digitaloceanspaces.com," for "--backup-location-config" flag: error parsing """

How do I resolve this issue? Thank you.
I think the problem was because of the space between the URL and the region. But then I ran into another issue.
An error occurred: open /root/examples/cloud-credentials: no such file or directory
I realize velero focuses on namespace-based restores, but is it possible to use velero to delete any resources that didn’t exist during backup?
In other words, can I use it for full-cluster restores on existing clusters?
The digitalocean/velero-plugin:v1.0.0 initContainer image is only compiled for Intel and will not work on ARM processors.