Isolated (Air Gap) Environment Setup


You may sometimes need to work in an environment that is completely behind a firewall (air gapped). This section addresses the backup and restore procedures for those environments.

See Backup for restrictions and limitations.

Minio Server Setup

You need to set up a Minio server to configure an S3-compatible backup storage location. Refer to the Minio documentation to set up the Minio server.

Once the Minio server is set up, log in to your Minio server using your Minio server credentials:

  • Minio server URL

  • Minio server username

  • Minio server password
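
As a quick sanity check, you can confirm these credentials from a workstation using the MinIO client (`mc`). The following commands are only an illustrative sketch – the alias name `myminio` is a made-up placeholder, and on older `mc` releases the equivalent of `mc alias set` is `mc config host add`.

```shell
# Register the Minio server under a local alias (placeholders shown)
mc alias set myminio <your Minio server URL> <your Minio username> <your Minio password>

# List the buckets to verify that the credentials work
mc ls myminio
```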

Backup and Restore Process

The script provided as part of this process uses publicly available Velero and Minio tools to complete the manual backup and restore process in isolated environments.

To back up and restore the CloudCenter Suite data in an air gap environment, follow this procedure.

  1. Create a bucket on the Minio server and give it a meaningful name. This example uses velero. See Backup for details.

  2. Before installing Velero, annotate all the pods in your cluster by using Velero-specific annotations that are provided in the script below.

    kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...

    To make things simpler, here is a utility that does it for you. Be sure to save the following script contents to a file on your local system.

    The script
    # This utility is used to annotate pods for Velero backups
    import random
    import logging
    import string
    import os
    import time
    import datetime
    from argparse import ArgumentParser
    import sys
    import zipfile
    import shutil
    import subprocess
    import re
    from pprint import pprint as pp
    import yaml
    __copyright__ = "Copyright 2019, abmitra"
    __license__ = "Cisco Systems"
    def script_run_time(seconds):
        min, sec = divmod(seconds, 60)
        hrs, min = divmod(min, 60)
        timedatastring = "%d:%02d:%02d" % (hrs, min, sec)
        return timedatastring
    def random_char(y):
        return ''.join(random.choice(string.ascii_letters) for x in range(y))
    def border_print(symbol, msg):
        line = "    " + msg + "    "
        totalLength = len(line) + 50
        print(symbol * totalLength)
        print(line.center(totalLength, symbol))
        print(symbol * totalLength)
    def setup_custom_logger(name, tcStartTime, fileBaseName, inputName=""):
        if inputName == "" or inputName is None:
            st = datetime.datetime.fromtimestamp(tcStartTime).strftime('%Y-%m-%d-%H-%M-%S')
            filename = fileBaseName + "-" + st + '.log'
            dirName = "po-scan" + st
            dirPath = os.path.abspath(os.path.join(os.path.dirname(__file__), '.', dirName))
            logfilename = os.path.join(dirPath, filename)
            if not os.path.isdir(dirPath):
                os.makedirs(dirPath)
        else:
            logfilename = inputName
        formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
        handler = logging.FileHandler(logfilename, mode='w')
        handler.setFormatter(formatter)
        screen_handler = logging.StreamHandler(stream=sys.stdout)
        screen_handler.setFormatter(formatter)
        logger = logging.getLogger(name)
        logger.setLevel(logging.INFO)
        logger.addHandler(handler)
        logger.addHandler(screen_handler)
        return logger, logfilename
    def shell_cmd(cmd):
        logger.info("Shell cmd execution >>> '{}'".format(cmd))
        p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, universal_newlines=True)
        output = p.communicate()[0]
        p_status = p.wait()
        return output.split("\n")
    def zipdir(path, ziph):
        # ziph is a zipfile handle
        for root, dirs, files in os.walk(path):
            for file in files:
                ziph.write(os.path.join(root, file))
    def create_zip():
        st = datetime.datetime.fromtimestamp(tcStartTime).strftime('%Y-%m-%d-%H-%M-%S')
        dirName = "ccs-log" + st
        zipFileName = dirName + ".zip"
        zipFilePath = os.path.abspath(os.path.join(os.path.dirname(__file__)))
        logger.info("Generating zip file '{}' at '{}'".format(zipFileName, zipFilePath))
        zipf = zipfile.ZipFile(zipFileName, 'w', zipfile.ZIP_DEFLATED)
        zipdir(dirName, zipf)
        zipf.close()
    if __name__ == "__main__":
        fileBaseName = os.path.basename(__file__).split(".")[0]
        tcStartTime = time.time()
        timeStamp = datetime.datetime.fromtimestamp(tcStartTime).strftime('%Y%m%d%H%M%S')
        parser = ArgumentParser()
        parser.add_argument("-n", "--namespace", dest="namespace", help="Kubernetes Namespace", required=True)
        args = parser.parse_args()
        namespace = args.namespace.strip()
        logger, logFileName = setup_custom_logger("Cloudcenter K8 Debug", tcStartTime, fileBaseName)
        cmd = "kubectl get pod -n " + namespace + " | grep -v NAME | awk '{print $1}'"
        pod_name_list = shell_cmd(cmd)
        pod_pvc_dict = {}
        pod_vol_dict = {}
        for pod in pod_name_list:
            if pod != "":
                cmd = "kubectl get pod {} -n {} -o yaml > temp.yaml".format(pod, namespace)
                data = shell_cmd(cmd)
                with open('temp.yaml', 'r') as temp_file:
                    try:
                        file_contents = yaml.safe_load(temp_file)
                        for vol in file_contents['spec']['volumes']:
                            # Only volumes backed by a persistent volume claim need the annotation
                            if 'persistentVolumeClaim' in vol:
                                pvc = vol['persistentVolumeClaim']
                                pod_vol_dict[pod.strip()] = vol['name'].strip()
                    except yaml.YAMLError as exc:
                        logger.error("Error in reading YAML file.")
        border_print("+", "Applying POD annotations")
        for pod in pod_vol_dict.keys():
            # backup.velero.io/backup-volumes is the annotation read by Velero's restic integration
            cmd = "kubectl -n {} annotate --overwrite pod {} backup.velero.io/backup-volumes={}".format(namespace, pod, pod_vol_dict[pod])
            data = shell_cmd(cmd)
  3. From the directory where you saved the script, run the following command.

    # Needs Python 3
    python <saved script file name> -n cisco
  4. Install Velero Version 0.11.0 – refer to the Velero documentation for details.

  5. Create a credentials file to store your Minio credentials. The following contents are only an example – replace the placeholder values with your own credentials.

    Contents of the credentials-minio file
    [default]
    aws_access_key_id = <your Minio username>
    aws_secret_access_key = <your Minio password>

  6. On the existing Kubernetes cluster, you must deploy Velero and configure it with the AWS-compatible bucket location – in this example, the velero bucket on your Minio server.

    Velero and Minio Usage

    This process uses Velero to backup the Kubernetes data to a Minio server.

    Once you finish this task, you can configure the AWS S3 storage provider using the Minio server credentials as specified below. Configuring Minio is similar to configuring an AWS S3 environment; the difference is that you must provide the region and endpoint details when adding the Minio server as AWS S3 storage. You can verify the data from the Minio server GUI or from the Minio client command line.

    Refer to the Velero documentation for additional details.

    velero install \
        --provider aws \
        --bucket <Minio bucket name from Step 1 above> \
        --secret-file <Fully qualified path of the Minio credentials file> \
        --use-volume-snapshots=false \
        --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=<your Minio server URL> \
        --use-restic
  7. Start a backup using the following command.

    velero backup create <Minio backup name> --include-namespaces=cisco --wait
  8. Wait for the backup to complete and watch the logs. Once the backup is complete, the Minio output should look like the information displayed in the following screenshot.
  9. To restore the backup to a different cluster or a fresh cluster (assuming that the cisco namespace is not present), install Velero on that cluster using the same configuration.

    velero install \
        --provider aws \
        --bucket <Minio bucket name from Step 1 above> \
        --secret-file <Fully qualified path of the Minio credentials file> \
        --use-volume-snapshots=false \
        --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=<your Minio server URL> \
        --use-restic
  10. Start the restore process:

    velero restore create --from-backup <Minio backup name>
  11. The Minio output should look like the information displayed in the following screenshot – you will see an additional restore folder.

You have now backed up and restored the CloudCenter Suite to an isolated environment using the Minio server.
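
For reference, the core of the annotation step (Step 2) reduces to a few lines: given a mapping of pod names to the volume names that back their persistent volume claims, the utility emits one `kubectl annotate` command per pod using the `backup.velero.io/backup-volumes` key that Velero's restic integration reads. A minimal sketch, with made-up pod and volume names:

```python
def build_annotate_commands(namespace, pod_volumes):
    """Build the kubectl commands that tag each pod's volumes for a
    restic-based Velero backup via backup.velero.io/backup-volumes."""
    commands = []
    for pod, volume in sorted(pod_volumes.items()):
        commands.append(
            "kubectl -n {} annotate --overwrite pod {} "
            "backup.velero.io/backup-volumes={}".format(namespace, pod, volume)
        )
    return commands

# Example with fictional pod/volume names in the cisco namespace
for cmd in build_annotate_commands("cisco", {"suite-db-0": "data"}):
    print(cmd)
```

Pods whose volumes are not annotated this way are still backed up by Velero, but their persistent volume contents are not captured by restic, which is why the utility walks every pod in the namespace first.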

Sample Commands Using Fictional Names

The following commands are only examples and need to be run using the names that you have assigned to resources in your environment.

Deploying Velero on the Existing Cluster
velero install \
    --provider aws \
    --bucket velero \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=<your Minio server URL> \
    --use-restic
Backup Command
velero backup create minio-backup --include-namespaces=cisco --wait
Deploying Velero on the New Cluster
velero install \
    --provider aws \
    --bucket velero \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=<your Minio server URL> \
    --use-restic
Restore Command
velero restore create --from-backup minio-backup
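
To confirm that the backup and restore artifacts actually landed in the bucket, you can list its contents with the MinIO client. The alias `myminio` is a made-up placeholder (registered with `mc alias set` as shown earlier); the bucket and backup names are the fictional ones used above. Velero writes backup objects under a `backups/` prefix and restore results under a `restores/` prefix in the bucket.

```shell
# List the objects Velero wrote for the fictional minio-backup
mc ls myminio/velero/backups/minio-backup/

# After a restore, a restores/ prefix appears alongside backups/
mc ls myminio/velero/restores/
```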