Isolated (Air Gap) Environment Setup

Overview

You may sometimes need to work in an isolated environment that sits completely behind a firewall. This section describes the backup and restore procedure for those environments.

See Backup for restrictions and limitations.

Minio Server Setup

You need to set up a Minio server to provide an S3-compatible backup storage location. Refer to https://min.io/download#/macos (choose the download that matches your platform) to set up the Minio server.

Once the Minio server is set up, log in to it using your own Minio server credentials. You will need the following details later in this procedure (a minimal server startup sketch follows this list):

  • Minio server URL

  • Minio server username

  • Minio server password
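
If the Minio server is not already running, the following is a minimal startup sketch. The data directory, port, and credential values are placeholders, and the credential environment variable names depend on your Minio release (newer releases use MINIO_ROOT_USER and MINIO_ROOT_PASSWORD).

Example Minio server startup
export MINIO_ACCESS_KEY=<your Minio username>
export MINIO_SECRET_KEY=<your Minio password>
minio server /data --address :9000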

Backup and Restore Process

This process uses the publicly available Velero and Minio tools, together with the script provided below, to complete the manual backup and restore in isolated environments.

To back up and restore the CloudCenter Suite data in an air gap environment, follow this procedure.

  1. Create a bucket on the Minio server and give it a meaningful name. This example uses velero. See Backup for details. A sketch using the Minio client (mc) follows.
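
    As an example only, you can create the bucket with the Minio client; the alias name myminio is a placeholder, and newer mc releases use mc alias set instead of mc config host add. You can also create the bucket from the Minio web UI.

    mc config host add myminio <your Minio server URL> <your Minio username> <your Minio password>
    mc mb myminio/velero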

  2. Before installing Velero, annotate the pods in your cluster with the Velero-specific annotation shown below (the script that follows applies it for you).

    kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...

    To simplify this step, the following utility applies the annotations for you. Save the script contents to a file named pod_vol_restic_scan.py on your local system. The script requires Python 3 and the PyYAML package.

    The pod_vol_restic_scan.py script
    # This utility is used to annotate pods for Velero backups
    
    import random
    import logging
    import string
    import os
    import time
    import datetime
    from argparse import ArgumentParser
    import sys
    import zipfile
    import shutil
    import subprocess
    import re
    from pprint import pprint as pp
    import yaml
    
    
    __copyright__ = "Copyright 2019, abmitra"
    __license__ = "Cisco Systems"
    
    
    
    def script_run_time(seconds):
        min, sec = divmod(seconds, 60)
        hrs, min = divmod(min, 60)
        timedatastring = "%d:%02d:%02d" % (hrs, min, sec)
        return timedatastring
    
    
    def random_char(y):
        return ''.join(random.choice(string.ascii_letters) for x in range(y))
    
    
    def border_print(symbol, msg):
        line = "    " + msg + "    "
        totalLength = len(line) + 50
        logger.info("")
        logger.info(symbol * totalLength)
        logger.info(line.center(totalLength, symbol))
        logger.info(symbol * totalLength)
        logger.info("")
    
    
    
    def setup_custom_logger(name, tcStartTime, fileBaseName, inputName=""):
        if inputName == "" or inputName is None:
            st = datetime.datetime.fromtimestamp(tcStartTime).strftime('%Y-%m-%d-%H-%M-%S')
            filename = fileBaseName + "-" + st + '.log'
            dirName = "po-scan" + st
            dirPath = os.path.abspath(os.path.join(os.path.dirname(__file__), '.', dirName))
            logfilename = os.path.join(dirPath, filename)
            if not os.path.isdir(dirPath):
                os.makedirs(dirPath)
        else:
            logfilename = inputName
    
        # print(logfilename)
        formatter = logging.Formatter(fmt='%(asctime)s %(levelname)-8s %(message)s',
                                      datefmt='%Y-%m-%d %H:%M:%S')
        handler = logging.FileHandler(logfilename, mode='w')
        handler.setFormatter(formatter)
        screen_handler = logging.StreamHandler(stream=sys.stdout)
        screen_handler.setFormatter(formatter)
        logger = logging.getLogger(name)
        logger.setLevel(logging.DEBUG)
        logger.addHandler(handler)
        logger.addHandler(screen_handler)
        return logger, logfilename
    
    
    
    def shell_cmd(cmd):
        logger.info("Shell cmd execution >>> '{}'".format(cmd))
        p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, universal_newlines=True)
        output = p.communicate()[0]
        p_status = p.wait()
        return output.split("\n")
    
    
    
    def zipdir(path, ziph):
        # ziph is zipfile handle
        for root, dirs, files in os.walk(path):
            for file in files:
                # print(file)
                ziph.write(os.path.join(root, file))
    
    
    def create_zip():
        st = datetime.datetime.fromtimestamp(tcStartTime).strftime('%Y-%m-%d-%H-%M-%S')
        dirName = "ccs-log" + st
        zipFileName = dirName + ".zip"
        zipFilePath = os.path.abspath(os.path.join(os.path.dirname(__file__)))
        logger.info("Generating zip file '{}' at '{}'".format(zipFileName, zipFilePath))
        zipf = zipfile.ZipFile(zipFileName, 'w', zipfile.ZIP_DEFLATED)
        zipdir(dirName, zipf)
        zipf.close()
        shutil.rmtree(dirName)
    
    
    if __name__ == "__main__":
    
    
        fileBaseName = os.path.basename(__file__).split(".")[0]
        tcStartTime = time.time()
        timeStamp = datetime.datetime.fromtimestamp(tcStartTime).strftime('%Y%m%d%H%M%S')
    
        parser = ArgumentParser()
        parser.add_argument("-n", "--namespace",dest="namespace", help="Kubernetes Namespace", required=True)
        args = parser.parse_args()
        namespace = args.namespace.strip()
        logger, logFileName = setup_custom_logger("Cloudcenter K8 Debug", tcStartTime, fileBaseName)
    
        cmd = "kubectl get pod -n " + namespace + " | grep -v NAME | awk '{print $1}'"
        pod_name_list = shell_cmd(cmd)
        pod_vol_dict = {}
    
    
        for pod in pod_name_list:
            if pod != "":
                cmd = "kubectl get pod {} -n {} -o yaml > temp.yaml".format(pod, namespace)
                data = shell_cmd(cmd)
                with open('temp.yaml', 'r') as temp_file:
                    try:
                        file_contents = yaml.safe_load(temp_file)
                        # Collect every volume on this pod that is backed by a persistent volume claim
                        for vol in file_contents['spec']['volumes']:
                            if 'persistentVolumeClaim' in vol:
                                pod_vol_dict.setdefault(pod.strip(), []).append(vol['name'].strip())
                    except yaml.YAMLError as exc:
                        logger.error("Error in reading YAML file.")
                        logger.error(exc)
                os.remove('temp.yaml')


        border_print("+", "Applying POD annotations")
        for pod, volumes in pod_vol_dict.items():
            cmd = "kubectl -n {} annotate --overwrite pod {} backup.velero.io/backup-volumes={}".format(namespace, pod, ",".join(volumes))
            data = shell_cmd(cmd)
  3. From the directory where you saved the pod_vol_restic_scan.py script, run the following command. An optional check to confirm the annotations follows.

    # Requires Python 3
    python3 pod_vol_restic_scan.py -n cisco
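
    As an optional check (the pod name below is a placeholder), confirm that the annotation was applied:

    kubectl -n cisco describe pod <pod-name> | grep backup.velero.io/backup-volumes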
  4. Install Velero version 0.11.0 – refer to https://velero.io/docs/v0.11.0/ for details. A generic CLI installation sketch follows.
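
    The exact installation steps depend on your platform; the following is a generic sketch with an illustrative release asset name. Download the Velero CLI tarball for your platform from the Velero releases page (on a machine with internet access, if necessary), copy it to your workstation, and place the binary on your PATH.

    # Asset name is illustrative – use the tarball that matches your Velero version and platform
    tar -xzf velero-v0.11.0-linux-amd64.tar.gz
    sudo mv velero-v0.11.0-linux-amd64/velero /usr/local/bin/velero
    # Prints the client version to confirm the binary is on your PATH
    velero version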

  5. Create a credentials file to store your Minio credentials. The following content is only an example – substitute your own values.

    Contents of the credentials-minio file
    [default]
    aws_access_key_id = <your Minio username>
    aws_secret_access_key = <your Minio password>


  6. On the existing Kubernetes cluster, deploy Velero and configure it with the AWS S3-compatible bucket location – in this example, the Minio bucket created in Step 1.

    Velero and Minio Usage

    This process uses Velero to backup the Kubernetes data to a Minio server.

    Once you finish this task, you can configure the AWS S3 storage provider using the Minio server credentials as specified below. Configuring Minio is similar to configuring an AWS S3 environment; the difference is that you must provide the region and endpoint details when adding the Minio server as AWS S3 storage. After the velero install command below completes, an optional check confirms that the Velero pods are running. You can verify the backed-up data from the Minio server GUI or from the command line – an example command-line check is shown after Step 8.

    Refer to https://docs.min.io/docs/aws-cli-with-minio.html for additional details.

    velero install \
        --provider aws \
        --bucket <Minio bucket name from Step 1 above> \
        --secret-file <Fully qualified path of the Minio credentials file> \
        --use-volume-snapshots=false \
        --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=<your Minio server URL> \
        --use-restic \
        --wait
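
    As an optional check, confirm that the Velero deployment and the restic daemonset pods are running before starting a backup:

    kubectl get pods -n velero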
  7. Start a backup using the following command.

    velero backup create <Minio backup name> --include-namespaces=cisco --wait
  8. Wait for the backup to complete and watch the logs. Once the backup is complete, the backup files appear in the Minio bucket; you can verify them from the Minio browser or from the command line, as shown below.
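
    The backup name, bucket name (velero), and endpoint below are the example values used on this page, and the aws command assumes the AWS CLI is configured with the credentials from Step 5 (for example, via the AWS_SHARED_CREDENTIALS_FILE environment variable). Velero writes the backup files under the backups/ prefix in the bucket.

    velero backup describe <Minio backup name>
    velero backup logs <Minio backup name>
    aws --endpoint-url <your Minio server URL> s3 ls s3://velero/backups/ --recursive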
  9. To restore the backup to a different cluster or a fresh cluster (assuming that the cisco namespace is not present), first deploy Velero on that cluster using the same bucket and credentials.

    velero install \
        --provider aws \
        --bucket <Minio bucket name from Step 1 above> \
        --secret-file <Fully qualified path of the Minio credentials file> \
        --use-volume-snapshots=false \
        --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=<your Minio server URL> \
        --use-restic \
        --wait
  10. Start the restore process using the following command. Optional commands to monitor the restore follow.

    velero restore create --from-backup <Minio backup name>
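
    Velero generates the restore name from the backup name and a timestamp. As an optional check, list the restores and review any warnings or errors:

    velero restore get
    velero restore describe <restore name>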
  11. Verify the restore in the Minio bucket – in addition to the backups folder, you will now see a restores folder for the completed restore. You can confirm this from the Minio browser or from the command line, as shown below.
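
    For example (assuming the same AWS CLI setup as the earlier check), list the restores/ prefix in the bucket:

    aws --endpoint-url <your Minio server URL> s3 ls s3://velero/restores/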

You have now backed up and restored the CloudCenter Suite to an isolated environment using the Minio server.

Sample Commands Using Fictional Names

The following commands are only examples and need to be run using the names that you have assigned to resources in your environment.

Deploying Velero on the Existing Cluster
velero install \
    --provider aws \
    --bucket velero \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://12.16.1.1:9000 \
    --use-restic \
    --wait
Backup Command
velero backup create minio-backup --include-namespaces=cisco --wait
Deploying Velero on the New Cluster
velero install \
    --provider aws \
    --bucket velero \
    --secret-file ./credentials-velero \
    --use-volume-snapshots=false \
    --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://12.16.1.1:9000 \
    --use-restic \
    --wait
Restore Command
velero restore create --from-backup minio-backup
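
The following verification command is likewise only an example; it uses the fictional endpoint and bucket from the commands above and assumes the AWS CLI is configured with the same credentials file.

Verification Command
aws --endpoint-url http://12.16.1.1:9000 s3 ls s3://velero --recursive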