Blog

Velero - Multi-Cloud K8s Cluster Backup and Restore

CloudCover + Velero

by Nikhil Pandit

tl;dr

  • How to install the Velero client and server, and how to back up and restore Kubernetes resources with persistent volumes within the same or across different cloud providers.
  • We will test the backup and restore procedures for Kubernetes resources on each cloud provider and examine how to restore specific backup versions.
  • In the conclusion, we demonstrate how to back up and restore Kubernetes resources across different cloud providers.

Introduction

What is Velero?

Unlike other tools which directly access the Kubernetes etcd database to perform backups and restores, Velero uses the Kubernetes API to capture the state of cluster resources and to restore them when necessary. This API-driven approach has a number of key benefits:

  • High Flexibility

    Backups can capture subsets of the cluster’s resources, filtering by namespace, resource type, and/or label selector, providing a high degree of flexibility around what’s backed up and restored.

  • User Access

    Users of managed Kubernetes offerings often do not have access to the underlying etcd database, so direct backups/restores of it are not possible.

  • Easy Backup and Restore

    Resources exposed through aggregated API servers can easily be backed up and restored even if they’re stored in a separate etcd database.

Velero enables you to back up and restore your applications’ persistent data alongside their configurations, using either your storage platform’s native snapshot capability or an integrated file-level backup tool called restic.
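Restic-based, file-level backup is opted into per pod via an annotation; a minimal sketch, assuming restic was enabled at install time and a pod whose log volume is named nginx-logs (pod and volume names hypothetical here):

```
# Install Velero with restic support (adds a restic daemonset)
velero install ... --use-restic

# Opt the pod's volume into file-level backup with restic
kubectl -n nginx-example annotate pod/nginx-deploy-xxxxx \
    backup.velero.io/backup-volumes=nginx-logs
```

Annotated volumes are then backed up with restic instead of provider snapshots on the next `velero backup create`.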

Velero Lets You

  • Take backups of your cluster and restore them in case of loss

  • Migrate cluster resources to other clusters

  • Replicate your prod cluster to dev and testing clusters

Velero Consists Of

  • A server that runs on your cluster

  • A command-line client that runs locally


The high-level architecture of Velero

  1. The Velero client makes a call to the Kubernetes API server to create a Backup object.
  2. The BackupController notices the new Backup object and performs validation.
  3. The BackupController begins the backup process. It collects the data to back up by querying the API server for resources.
  4. The BackupController makes a call to the object storage service – for example, AWS S3 – to upload the backup file.

By default, velero backup create makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags. Run velero backup create --help to see available flags. Snapshots can be disabled with the option --snapshot-volumes=false.


Installation

Installation of the Velero client

Follow the documentation to install the CLI on different OS platforms.

# To install the Velero CLI on macOS
brew install velero

Prepare your AWS environment

Create an S3 bucket to store the backup data

BUCKET=<YOUR_BUCKET>
REGION=<YOUR_REGION>
aws s3api create-bucket \
    --bucket $BUCKET \
    --region $REGION \
    --create-bucket-configuration LocationConstraint=$REGION

Create a user and set up permissions for Velero

aws iam create-user --user-name velero
# Create the IAM policy document
cat > velero-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}"
            ]
        }
    ]
}
EOF
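A side note on the heredoc above: the EOF delimiter is unquoted, so the shell expands ${BUCKET} while writing the policy file. A quick check of that behavior, with a hypothetical bucket name:

```shell
# Unquoted heredoc delimiters allow variable expansion
BUCKET=my-velero-bucket
cat > /tmp/policy-snippet.json <<EOF
{"Resource": "arn:aws:s3:::${BUCKET}/*"}
EOF

# The written file contains the expanded bucket name, not the literal ${BUCKET}
cat /tmp/policy-snippet.json
```

Had the delimiter been quoted (<<'EOF'), the file would contain the literal string ${BUCKET} and the policy would not match your bucket.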

Assign the permissions to the user

aws iam put-user-policy \
  --user-name velero \
  --policy-name velero \
  --policy-document file://velero-policy.json

# Create access and secret keys
aws iam create-access-key --user-name velero

# Save the credentials inside a local file (credentials-velero)
$ cat credentials-velero
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

Install and configure Velero

velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.2.0 \
    --bucket $BUCKET \
    --backup-location-config region=$REGION \
    --snapshot-location-config region=$REGION \
    --secret-file ./credentials-velero

Validate the installation

$ kubectl get pods -n velero
NAME                      READY   STATUS    RESTARTS   AGE
velero-665dbd8677-qtm8q   1/1     Running   0          26m

Create backup

# Create a backup of a particular namespace
velero backup create aws-nginx-backup --include-namespaces nginx-example

# Create a backup containing all resources
velero backup create aws-cluster-backup

# You can exclude particular namespaces
velero backup create backup2 --exclude-namespaces velero,default

# Schedule a daily backup, then optionally trigger a one-off backup from that schedule
velero schedule create daily-backup --schedule "0 7 * * *"
velero backup create --from-schedule daily-backup

# Configure the TTL; the backup is removed from storage once the TTL expires. Default is 720h0m0s
velero backup create aws-cluster-backup --ttl 1h

# Set --snapshot-volumes=false if you don't want to snapshot persistent volumes. Default is true
velero backup create aws-nginx-backup3 --include-namespaces nginx-example --snapshot-volumes=false

# For more details, please check the help options
velero backup create --help

# Describe and check the backups
velero backup get
velero backup describe aws-nginx-backup
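The label-selector filtering mentioned in the introduction is also available here; a sketch, assuming your workloads carry an app=nginx label (label value hypothetical):

```
# Back up only resources matching a label selector
velero backup create aws-nginx-labeled-backup --selector app=nginx
```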

Check the snapshots and backups


Restore Backups

# Get the list of backups
$ velero backup get
NAME                STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
aws-nginx-backup    Completed   0        0          2021-08-10 17:50:55 +0530 IST   29d       default            <none>
aws-nginx-backup2   Completed   0        0          2021-08-10 18:17:50 +0530 IST   50m       default            <none>
aws-nginx-backup3   Completed   0        0          2021-08-10 18:19:15 +0530 IST   29d       default            <none>

# Restore the backup from the available list
velero restore create --from-backup aws-nginx-backup

# Check whether the restore has completed
$ velero restore get
NAME                              BACKUP             STATUS      STARTED                         COMPLETED                       ERRORS   WARNINGS   CREATED                         SELECTOR
aws-nginx-backup-20210810175215   aws-nginx-backup   Completed   2021-08-10 17:52:17 +0530 IST   2021-08-10 17:52:18 +0530 IST   0        0          2021-08-10 17:52:17 +0530 IST   <none>

# Verify that the resources were recreated from the backup

Delete the backups

# This deletes the backup data in the S3 bucket as well as the associated snapshots
velero backup delete aws-nginx-backup


Environment setup

Prepare your GCP environment

Create a GCS bucket to store the Kubernetes backup data

BUCKET=<YOUR_BUCKET>
gsutil mb gs://$BUCKET/

Set up permissions for Velero

PROJECT_ID=<Project ID>

# Create service account
gcloud iam service-accounts create velero \
    --display-name "Velero service account"

SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:Velero service account" \
  --format 'value(email)')

# Attach permissions to service account
ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.zones.get
)

gcloud iam roles create velero.server \
    --project $PROJECT_ID \
    --title "Velero Server" \
    --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
    --role projects/$PROJECT_ID/roles/velero.server

gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}

# Create a service account key and store it in a local file (credentials-velero)
gcloud iam service-accounts keys create credentials-velero \
    --iam-account $SERVICE_ACCOUNT_EMAIL    
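The --permissions value in the role creation above is built with a small shell idiom: setting IFS inside a command substitution so that "${ROLE_PERMISSIONS[*]}" joins the array with commas. In isolation:

```shell
# Join a bash array with commas, as done for --permissions above
ROLE_PERMISSIONS=(
    compute.disks.get
    compute.snapshots.create
)
joined=$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")
echo "$joined"   # compute.disks.get,compute.snapshots.create
```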

Install Velero

velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.2.0 \
    --bucket $BUCKET \
    --secret-file ./credentials-velero

Validate the installation

$ kubectl get pods -n velero
NAME                      READY   STATUS    RESTARTS   AGE
velero-868db548b8-4qt88   1/1     Running   0          85s

Create backup

# We take a backup of the Kubernetes cluster along with snapshots of the persistent disks.
# The example below shows how the backup and restore process works with snapshots.

# A deployment is running inside the namespace called "nginx-example"
$ kubectl get pods -n nginx-example
nginx-deploy-5df9976557-jmrcd   1/1     Running   0          16h

# All logs are stored on a persistent volume
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
pvc-82ab01b4-de2c-4547-af7a-1a64a5b20376   5Gi        RWO            Delete           Bound    nginx-example/nginx-logs   standard                19h

# Container logs from the nginx application, read from the persistent disk
root@nginx-deploy-5df9976557-jmrcd:/# cat /var/log/nginx/access.log | head -5
10.160.0.72 - - [10/Aug/2021:09:26:45 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36" "-"
10.160.0.72 - - [10/Aug/2021:09:26:45 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://34.93.193.151/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36" "-"
10.160.0.77 - - [10/Aug/2021:10:13:34 +0000] "GET / HTTP/1.1" 200 612 "-" "Linux Gnu (cow)" "-"
10.120.2.1 - - [10/Aug/2021:10:27:38 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36" "-"
10.120.2.1 - - [10/Aug/2021:10:27:38 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36" "-"
root@nginx-deploy-5df9976557-jmrcd:/#

# Take the Backup
velero backup create gcp-kube-backup --snapshot-volumes=true --exclude-namespaces=kube-system,velero

# Check the status of backup
$ velero backup get
NAME              STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
gcp-kube-backup   Completed   0        0          2021-08-11 11:28:10 +0530 IST   29d       default            <none>

# Describe the backup to check for more details
velero backup describe gcp-kube-backup

Check the snapshots and backups


Restore Backups

Delete the nginx-example namespace before restoring the backup.


# Check for the available backups 
$ velero backup get
NAME              STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
gcp-kube-backup   Completed   0        0          2021-08-11 11:28:10 +0530 IST   29d       default            <none>

# Restore the backup
velero restore create --from-backup gcp-kube-backup

# Check the restore status
velero restore get


# Check the status of the application

$ kubectl get pods -n nginx-example
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-5df9976557-jmrcd   1/1     Running   0          3m28s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
pvc-82ab01b4-de2c-4547-af7a-1a64a5b20376   5Gi        RWO            Delete           Bound    nginx-example/nginx-logs   standard                3m42s

root@nginx-deploy-5df9976557-jmrcd:/# cat /var/log/nginx/access.log | head -5
10.160.0.72 - - [10/Aug/2021:09:26:45 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36" "-"
10.160.0.72 - - [10/Aug/2021:09:26:45 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://34.93.193.151/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36" "-"
10.160.0.77 - - [10/Aug/2021:10:13:34 +0000] "GET / HTTP/1.1" 200 612 "-" "Linux Gnu (cow)" "-"
10.120.2.1 - - [10/Aug/2021:10:27:38 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36" "-"
10.120.2.1 - - [10/Aug/2021:10:27:38 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36" "-"

Delete the backups

velero backup delete gcp-kube-backup

Environment setup

Prepare your Azure environment

Install the az CLI tool by following the documentation.

Create a resource group for the backup storage account

AZURE_BACKUP_RESOURCE_GROUP=Velero_Backups
az group create -n $AZURE_BACKUP_RESOURCE_GROUP --location <location>

Create storage account

AZURE_STORAGE_ACCOUNT_ID="velerostorage"
az storage account create \
    --name $AZURE_STORAGE_ACCOUNT_ID \
    --resource-group $AZURE_BACKUP_RESOURCE_GROUP \
    --sku Standard_GRS \
    --encryption-services blob \
    --https-only true \
    --kind BlobStorage \
    --access-tier Hot

Create a blob container

BLOB_CONTAINER=velero
az storage container create -n $BLOB_CONTAINER --public-access off --account-name $AZURE_STORAGE_ACCOUNT_ID

Create a credentials file named credentials-velero on the local machine

# Obtain your Azure Account Subscription ID
AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv`

# Obtain your Azure Account Tenant ID
AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`

# Generate client secret
AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" --role "Contributor" --query 'password' -o tsv`

# Generate client ID
AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`

cat << EOF  > ./credentials-velero
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_BACKUP_RESOURCE_GROUP}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF

Install Velero on the AKS cluster

velero install \
    --provider azure \
    --plugins velero/velero-plugin-for-microsoft-azure:v1.2.0 \
    --bucket $BLOB_CONTAINER \
    --secret-file ./credentials-velero \
    --backup-location-config resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,storageAccount=$AZURE_STORAGE_ACCOUNT_ID,subscriptionId=$AZURE_SUBSCRIPTION_ID \
    --snapshot-location-config apiTimeout=5m,resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,subscriptionId=$AZURE_SUBSCRIPTION_ID

Get all resources from the “velero” namespace

kubectl get all -n velero

Take a backup of the Kubernetes cluster

velero backup create aks-nginx-example-ns-backup --include-namespaces=nginx-example

$ velero backup get
NAME                          STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
aks-nginx-example-ns-backup   Completed   0        0          2021-08-13 11:44:23 +0530 IST   29d       default            <none>

Check disk snapshots and backups


Restore Backups

$ velero restore create --from-backup aks-nginx-example-ns-backup

$ velero restore get
NAME                                         BACKUP                        STATUS      STARTED                         COMPLETED                       ERRORS   WARNINGS   CREATED                         SELECTOR
aks-nginx-example-ns-backup-20210813114926   aks-nginx-example-ns-backup   Completed   2021-08-13 11:49:29 +0530 IST   2021-08-13 11:49:56 +0530 IST   0        0          2021-08-13 11:49:27 +0530 IST   <none>


Backup & Restore - Same Cloud

Backup & Restore within the same cloud provider

Now, let’s demonstrate how to back up and restore a GKE cluster within GCP. Please follow the steps mentioned below:

Install velero on cluster-1

Take a backup of cluster-1

velero backup create full-backup-cluster1-bkp3


Install Velero on cluster-2 and configure the backup location as below

# Create secret to access the storage bucket
kubectl create secret generic -n velero bsl-credentials --from-file=gcp=<credentials.json>

# Create a new default backup location with read-only access
velero --provider gcp backup-location create gcp-storage \
   --access-mode=ReadOnly --bucket <cluster1_bucket_name>  \
   --credential=bsl-credentials=gcp \
   --default 

# Get the list of available backup locations
$ velero backup-location get
NAME          PROVIDER   BUCKET/PREFIX     PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default       gcp        nikhilpn-bucket   Available   2021-08-13 13:42:48 +0530 IST   ReadWrite
gcp-storage   gcp        nikhilpn-bucket   Available   2021-08-13 13:42:48 +0530 IST   ReadOnly      true

Restore cluster-2 from cluster-1 backups

# Get the list of backups
$ velero backup get
NAME                        STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
full-backup-cluster1        Completed   0        0          2021-08-12 11:39:19 +0530 IST   28d       default            <none>
full-backup-cluster1-bkp2   Completed   0        0          2021-08-12 18:39:01 +0530 IST   29d       default            <none>
full-backup-cluster1-bkp3   Completed   0        0          2021-08-13 12:10:36 +0530 IST   29d       default            <none>
gke-namespace-bkp1          Completed   0        0          2021-08-12 18:39:43 +0530 IST   29d       default            <none>

$ velero restore create --from-backup full-backup-cluster1-bkp3 --exclude-namespaces kube-system

# Results = Cluster1

$ kubectl get ns --context source-cluster
NAME              STATUS   AGE
cattle-system     Active   5d21h
default           Active   7d3h
demo              Active   5d21h
emojivoto         Active   3d3h
fleet-system      Active   5d21h
kube-node-lease   Active   7d3h
kube-public       Active   7d3h
kube-system       Active   7d3h
monitoring        Active   7d3h
nginx-example     Active   27h
spinnaker         Active   7d3h
velero            Active   47h

# Results = Cluster2

$ kubectl get ns --context gke-restore-cluster
NAME              STATUS   AGE
cattle-system     Active   172m
default           Active   3h47m
demo              Active   172m
emojivoto         Active   172m
fleet-system      Active   172m
kube-node-lease   Active   3h47m
kube-public       Active   3h47m
kube-system       Active   3h47m
monitoring        Active   172m
nginx-example     Active   172m
spinnaker         Active   172m
velero            Active   3h32m

Check the restore status

$ velero restore get
NAME                                       BACKUP                      STATUS            STARTED                         COMPLETED                       ERRORS   WARNINGS   CREATED                         SELECTOR
full-backup-cluster1-bkp3-20210813125131   full-backup-cluster1-bkp3   Completed         2021-08-13 12:51:32 +0530 IST   2021-08-13 12:52:07 +0530 IST   0        80         2021-08-13 12:51:32 +0530 IST   <none>


Backup & Restore - Different Clouds

Backup & Restore across different cloud providers

Now, let’s demonstrate how to back up a GKE cluster in GCP and restore it to an EKS cluster in AWS. Please follow the steps mentioned below:

Install velero on cluster-1[gke]

Take a backup of cluster-1[gke]

velero backup create full-backup-cluster1-bkp3


Install Velero on cluster-2 [eks] and configure the backup location as below

# Create secret to access the storage bucket
kubectl create secret generic -n velero bsl-credentials --from-file=gcp=<credentials.json>

# Create a new default backup location with read-only access
velero --provider gcp backup-location create gcp-storage \
   --access-mode=ReadOnly --bucket <cluster1_bucket_name>  \
   --credential=bsl-credentials=gcp \
   --default 

# Get the list of available backup locations
$ velero backup-location get
NAME          PROVIDER   BUCKET/PREFIX     PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default       gcp        nikhilpn-bucket   Available   2021-08-13 13:42:48 +0530 IST   ReadWrite
gcp-storage   gcp        nikhilpn-bucket   Available   2021-08-13 13:42:48 +0530 IST   ReadOnly      true

Restore cluster-2[eks] from cluster-1[gke] backups

# Get the list of backups
$ velero backup get
NAME                        STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
full-backup-cluster1-bkp3   Completed   0        0          2021-08-13 14:51:34 +0530 IST   29d       gcp-storage        <none>

$ velero restore create --from-backup full-backup-cluster1-bkp3 --exclude-namespaces kube-system --restore-volumes=false

# Results = Cluster1

$ kubectl get ns --context source-cluster
NAME              STATUS   AGE
cattle-system     Active   6d21h
default           Active   8d
demo              Active   6d21h
emojivoto         Active   4d3h
fleet-system      Active   6d21h
kube-node-lease   Active   8d
kube-public       Active   8d
kube-system       Active   8d
monitoring        Active   8d
nginx-example     Active   2d3h
spinnaker         Active   8d
velero            Active   3h

# Results = Cluster2[eks]

$ kubectl get ns --context eks-restore-cluster
NAME              STATUS   AGE
cattle-system     Active   11m
default           Active   8d
demo              Active   11m
emojivoto         Active   11m
fleet-system      Active   11m
kube-node-lease   Active   8d
kube-public       Active   8d
kube-system       Active   8d
monitoring        Active   11m
nginx-example     Active   39m
spinnaker         Active   11m
velero            Active   22m

Check the restore status

$ velero restore get
NAME                                       BACKUP                      STATUS      STARTED                         COMPLETED                       ERRORS   WARNINGS   CREATED                         SELECTOR
full-backup-cluster1-bkp3-20210813145413   full-backup-cluster1-bkp3   Completed   2021-08-13 14:57:52 +0530 IST   2021-08-13 14:58:29 +0530 IST   0        88         2021-08-13 14:54:14 +0530 IST   <none>


Summary

  • Cross-Cloud Migrations

    During cross-cloud migrations, persistent volume manifests are not translated automatically; you need to modify them to suit the target cloud provider.

  • Data Migration

    For data migration between cloud providers, use restic during the restore process.

  • No Overwriting

    Velero doesn’t overwrite objects in-cluster if they already exist.

  • One Set of Credentials

    It’s not yet possible to use different credentials for different object storage locations for the same provider.

  • Snapshots

    Volume snapshots are limited by where your provider allows you to create snapshots.

  • Multiple Backups

    You can set up multiple backups, manual or scheduled, that differ only in their storage locations.

  • Version Support

    When recovering, the Kubernetes version, Velero version (including container version), and Helm version have to match the original cluster exactly.

  • Restic Data

    Restic data is stored under a prefix/subdirectory of the main Velero bucket.

  • Cluster Migration

    The new cluster’s node count should be equal to or greater than that of the original cluster.
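For the manifest changes mentioned under Cross-Cloud Migrations, Velero can also remap storage classes at restore time through a plugin config map; a sketch, assuming a gp2-to-standard mapping (class names hypothetical):

```
# Map the source cluster's storage class to the target's during restore
kubectl -n velero create configmap change-storage-class-config \
    --from-literal=gp2=standard

kubectl -n velero label configmap change-storage-class-config \
    velero.io/plugin-config= \
    velero.io/change-storage-class=RestoreItemAction
```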

References

Velero Docs