Restore without Proxy

Overview

To restore data, the CloudCenter Suite requires that you launch a new cluster.

The backup/restore feature is only available on CloudCenter Suite clusters installed using CloudCenter Suite installers and not on existing Kubernetes clusters.

Limitations

If you configured the old cluster using a DNS, be sure to map the DNS entry to the new IP address (from the restored cluster). Once you update the DNS entry for your new cluster, these services continue to work as designed.

Additionally, be aware that you may need to update the DNS for the Base URL Configuration and SSO Setup (both ADFS and SP).

Reconfiguration of the Base URL and SSO is only applicable to backup and restore functions if the source cluster was created using the CloudCenter Suite 5.0.x installer and the destination cluster is freshly created using the CloudCenter Suite 5.1.1 installer.

Requirements

Before proceeding with a restore, adhere to the following requirements:

  • The Velero tool (version 0.11.0) must be installed – refer to https://velero.io/docs/v0.11.0/ for details.

  • Launch a new cluster to restore the data.

  • You will need to execute multiple scripts as part of these procedures. Make sure to use the 755 permission to execute each script mentioned in this section.
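As a minimal sketch (the script name here is a placeholder), setting mode 755 and confirming a script is executable looks like this:

```shell
# Create a placeholder script so the example is self-contained.
printf '#!/bin/bash\necho ok\n' > demo-restore-script.sh

# 755 = owner rwx, group r-x, other r-x.
chmod 755 demo-restore-script.sh

# The script can now be executed directly.
./demo-restore-script.sh
```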

1. Launch the Target Cluster

To launch CloudCenter Suite on a new target cluster and access the Suite Admin UI for this cluster, follow this procedure.

  1. Navigate to the Suite Admin Dashboard for the new cluster.

  2. Configure the identical backup configuration that you configured in your old cluster. See Backup > Process for additional details. When you provide the credentials, the new cluster automatically connects to the cloud storage location.

    This step is REQUIRED to initiate the connection and fetch the backup(s).

  3. Wait a few minutes (at least 5, possibly more) for the Velero service in the new cluster to sync with the cloud storage location. Then return to your local command window (shell console or terminal window) to perform the remaining steps in this process.

    If both your clusters are accessible from your local machine, the scripts used in the following steps can be executed as designed.

    If either one of your clusters uses proxy access or if you cannot recover/download the KubeConfig file from your old cluster, follow the instructions provided in the Restore with Proxy section.

2. Download the KubeConfig Files

You must download the KubeConfig file from the Suite Admin Kubernetes cluster management page for your source and target clusters to your local machine via a local command window (shell console or terminal window):

  • From the source cluster, download the KubeConfig file and name it KUBECONFIG_OLD.

  • From the target cluster, download the KubeConfig file and name it KUBECONFIG_NEW.

See Kubernetes Cluster Management for additional details on accessing the KubeConfig file as displayed in the following screenshot.

3. Download Velero

The restore process requires Velero and must be performed in a local command window (shell console or terminal window).

To download Velero, use one of the following options:

  • OSX option:

    $ cd <VELERO_DIRECTORY>
    $ curl -L -O https://github.com/heptio/velero/releases/download/v0.11.0/velero-v0.11.0-darwin-amd64.tar.gz
    $ tar -xvf velero-v0.11.0-darwin-amd64.tar.gz
  • CentOS Option:

    $ mkdir -p /velero-test && cd /velero-test
    $ curl -LO https://github.com/heptio/velero/releases/download/v0.11.0/velero-v0.11.0-linux-amd64.tar.gz
    $ tar -xvf velero-v0.11.0-linux-amd64.tar.gz && rm -rf velero-v0.11.0-linux-amd64.tar.gz 
    $ cp /velero-test/velero /usr/local/bin/

After you download Velero, point the KUBECONFIG environment variable at the KubeConfig file of the target (restore) cluster:

export KUBECONFIG=<KUBECONFIG_PATH>


4. Download JQ

The restore process requires that you install JQ on your machine. Refer to https://stedolan.github.io/jq/download for additional details.

# To install jq on MacOS
$ brew install jq
 
 
# To install jq on Debian and Ubuntu
$ sudo apt-get install jq
 
 
# To install jq on CentOS
$ sudo yum install epel-release -y
$ sudo yum install jq -y
$ jq --version
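The pre-restore script below uses jq to pull the first storage class name out of `kubectl get storageclass -o json`. A quick way to see that filter in action, with a canned JSON sample standing in for the kubectl output:

```shell
# -r emits the raw string (no surrounding quotes), so no sed cleanup is needed.
echo '{"items":[{"metadata":{"name":"standard"}}]}' \
  | jq -r '.items[0].metadata.name'    # prints "standard"
```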

5. Pre-Restore Procedure

The pre-restore script creates the storage class (if it does not exist on the destination cluster) and saves the nginx-ingress-controller YAML file as well as the config maps for the following Suite Admin services:

  • The suite-k8 service

  • The suite-prod service

To execute the pre-restore script, run pre-restore.sh with the following parameters:

# Command to execute the bashscript
$ ./pre-restore.sh <ccs_installer_version> </pathTo/oldCluster/kube_config> </pathTo/targetCluster/kube_config>

# <ccs_installer_version> is the CloudCenter Suite version without any separator characters. For example, "510", "502", or "5101".
# </pathTo/oldCluster/kube_config> is the path to the OLD KubeConfig file downloaded in Step 2.
# </pathTo/targetCluster/kube_config> is the path to the NEW KubeConfig file downloaded in Step 2.

Make sure that a backup folder does not exist at ~/backup on the device on which you execute these scripts. If ~/backup exists, delete it using the following command:

rm -rf ~/backup
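A slightly more defensive variant of the same cleanup, sketched against a scratch HOME so nothing real is touched (drop the first two lines to act on your actual home directory):

```shell
HOME=$(mktemp -d)        # sandbox HOME for this sketch only
mkdir -p "$HOME/backup"  # simulate a leftover backup folder

# Delete ~/backup only if it exists, then confirm it is gone.
[ -d "$HOME/backup" ] && rm -rf "$HOME/backup"
[ -e "$HOME/backup" ] || echo "no ~/backup - safe to run pre-restore.sh"
```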

The following code block includes the pre-restore.sh script:

#!/bin/bash
INSTALLER_VERSION_OLD=$1
KUBECONFIG_OLD=$2
KUBECONFIG_NEW=$3
 
declare -A INSTALLER_STORAGECLASS
INSTALLER_STORAGECLASS["500"]="thin"
INSTALLER_STORAGECLASS["501"]="thin"
INSTALLER_STORAGECLASS["502"]="thin"
INSTALLER_STORAGECLASS["51"]="standard"
INSTALLER_STORAGECLASS["510"]="standard"
 
if [[ ( ($KUBECONFIG_OLD == "" && $INSTALLER_VERSION_OLD == "") || $KUBECONFIG_NEW == "" ) ]]; then
    echo "Missing Paths for kubeconfigs"
    echo "Quitting"
    exit 1
else
    export KUBECONFIG_SAVED=$KUBECONFIG
    export KUBECONFIG=$HOME/.kube/config
 
    mkdir $HOME/backup
    cp $HOME/.kube/config $HOME/backup/saved_config
 
    if [[ $KUBECONFIG_OLD != "" ]]; then
 
        # Fetching the storage class name for the old(backup) cluster and storing it in variable STORAGECLASS_NAME_OLD
        cp $KUBECONFIG_OLD $HOME/.kube/config
        STORAGECLASS_NAME_OLD=$(kubectl get storageclass -o json | jq '.items[0].metadata.name' | sed -e 's/^"//' -e 's/"$//') # Extracting the storage class name from the json file of old cluster
        echo "Creating storage class "${STORAGECLASS_NAME_OLD} "in the target cluster."
 
    else
        echo "Creating storage class "${INSTALLER_STORAGECLASS[$INSTALLER_VERSION_OLD]} "in the target cluster."
        STORAGECLASS_NAME_OLD=${INSTALLER_STORAGECLASS[$INSTALLER_VERSION_OLD]}
    fi
 
    # Creating a storage class with the name STORAGECLASS_NAME_OLD in the target(restore) cluster
    cp $KUBECONFIG_NEW $HOME/.kube/config
    kubectl get storageclass -o json | jq --arg inp1 $STORAGECLASS_NAME_OLD '.items[0].metadata.name=$inp1' > $HOME/backup/storageclass.json
    cat $HOME/backup/storageclass.json | kubectl create -f -
 
 
    #Scripts to backup ingress service spec, k8s and prod-mgmt configmaps on the target cluster
    mkdir -p $HOME/backup/configmap
    mkdir -p $HOME/backup/service
    mkdir -p $HOME/backup/sshkeys
 
    kubectl get svc -n cisco common-framework-nginx-ingress-controller -o json > $HOME/backup/service/ingress.json
 
    for cm in $(kubectl get configmaps -n cisco -o custom-columns=:metadata.name --no-headers=true | grep "k8s-mgmt")
    do
        kubectl get configmap $cm -n cisco -o yaml > $HOME/backup/configmap/$cm
    done
 
    for cm in $(kubectl get configmaps -n cisco -o custom-columns=:metadata.name --no-headers=true | grep "prod-mgmt")
    do
        kubectl get configmap $cm -n cisco -o yaml > $HOME/backup/configmap/$cm
    done
 
    kubectl get configmap suite.key -n cisco -o yaml > $HOME/backup/sshkeys/suite.key
    kubectl get configmap suite.pub -n cisco -o yaml > $HOME/backup/sshkeys/suite.pub
 
    cp $HOME/backup/saved_config $HOME/.kube/config
     
    export KUBECONFIG=$KUBECONFIG_SAVED
 
fi
 
 
echo 'Successful!'

6. Restore Procedure

To restore the backed up data to the target cluster, run the following Velero commands from your local machine.

  1. List available backups.

    $ ./<VELERO_DIRECTORY>/velero backup get

    Verify that the backups are listed BEFORE proceeding to the next step.

  2. Make sure the backed-up cisco namespace does not exist in the target cluster. Be sure to delete the cisco namespace, if it exists, before you restore.

    $ kubectl delete ns cisco
  3. Restore from one of the listed backups.

    $ ./velero restore create --from-backup <BACKUPNAME>

You have now restored the CloudCenter Suite data to the new cluster.

7. Post-Restore Procedure

At this stage, you must restore the config maps for the following Suite Admin services:

  • The suite-k8 service

  • The suite-prod service

If the new cluster is accessible (from the local device) using the KubeConfig file, execute the following post-restore.sh script, passing the path to the NEW KubeConfig file as its only argument (for example, ./post-restore.sh </pathTo/targetCluster/kube_config>).

With Internet Access - The post-restore.sh script
#!/bin/bash
 
KUBECONFIG_NEW=$1
 
if [[ ( $KUBECONFIG_NEW == "" ) ]]; then
    echo "Missing Paths for kubeconfig"
    echo "Quitting"
    exit 1
else
    export KUBECONFIG_SAVED=$KUBECONFIG
    export KUBECONFIG=$HOME/.kube/config
 
    cp $HOME/.kube/config $HOME/backup/saved_config
    cp $KUBECONFIG_NEW $HOME/.kube/config
 
 
    kubectl delete svc -n cisco common-framework-nginx-ingress-controller
    cat $HOME/backup/service/ingress.json | kubectl create -f -
 
    for cm in $(ls $HOME/backup/configmap)
        do
            kubectl delete configmap $cm -n cisco
        done
 
    for cm in $(ls $HOME/backup/configmap)
        do
            cat $HOME/backup/configmap/$cm | kubectl create -f -
        done
 
 
    kubectl delete configmap suite.key -n cisco
    kubectl delete configmap suite.pub -n cisco
    cat $HOME/backup/sshkeys/suite.key | kubectl create -f -
    cat $HOME/backup/sshkeys/suite.pub | kubectl create -f -
     
    cp $HOME/backup/saved_config $HOME/.kube/config
    export KUBECONFIG=$KUBECONFIG_SAVED
 
    rm -r $HOME/backup/
fi
 
 
echo 'Successful!'


8. Workload Manager-Specific Post-Restore Procedure

This migration procedure only applies to Running deployments.

Be sure to verify that you are only migrating deployments in the Running state.

The first few steps differ based on your use of private clouds or public clouds. Be sure to use the procedure applicable to your cloud environment.


8a. Understand the Workload Manager Restore Context

If you have installed the Workload Manager module, you must perform this procedure to update the DNS/IP address for the private cloud resources listed below and displayed in the following image:

  • The Worker AMQP IP

  • The Guacamole Public IP and Port

  • The Guacamole IP Address and Port for Application VMs

    As public clouds use load balancers and static IP ports, these resource details may differ accordingly. Be sure to use the resources applicable to your cloud environment.

8b. Retrieve the Port Numbers from the NEW Restored Cluster

The Kubernetes cluster contains the information that is required to update the Workload Manager UI. This section provides the commands required to retrieve this information.

As public clouds use load balancers and static IP ports, these resource details may differ accordingly. Be sure to use the resources applicable to your cloud environment.

To retrieve the port numbers from the new cluster for private clouds, follow this procedure.

  1. The port numbers for each component differ. Retrieve each port number as follows:

    1. Run the following command on the new cluster (log in using the KubeConfig of the new cluster) to locate the new port number for the Worker AMQP IP.

      kubectl get service -n cisco | grep rabbitmq-ext | awk '{print $5}' 
      
      # In the resulting response, locate the port corresponding to Port 443 and use that port number!
      
      443:26642/TCP,15672:8902/TCP
    2. Run the following command on the new cluster to retrieve the port number for the Guacamole Public IP and Port.

      kubectl get service -n cisco | grep cloudcenter-guacamole | awk '{print $5}'
      
      # In the resulting response, locate the port corresponding to Port 443 and use that port number for the Guacamole port!
      
      8080:2376/TCP,7788:25226/TCP,7789:32941/TCP,443:708/TCP
    3. Run the following command on the new cluster to retrieve the port number for the Guacamole IP Address and Port for Application VMs.

      kubectl get service -n cisco | grep cloudcenter-guacamole | awk '{print $5}'
      
      # In the resulting response, locate the port corresponding to Port 7789 and use that port number for the Guacamole port!
      
      8080:2376/TCP,7788:25226/TCP,7789:32941/TCP,443:708/TCP
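Rather than reading the port off by eye, the node port mapped to a given service port (443 or 7789) can be extracted from the PORT(S) column with a small pipeline; the sample string below is the example output shown above:

```shell
ports='8080:2376/TCP,7788:25226/TCP,7789:32941/TCP,443:708/TCP'

# Split the comma-separated PORT(S) column and keep only the node port
# mapped to service port 443.
echo "$ports" | tr ',' '\n' | sed -n 's|^443:\([0-9]*\)/TCP$|\1|p'    # prints "708"
```

Substitute `7789:` in the sed pattern to pull the Guacamole port for application VMs instead.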

8c. Retrieve the IP Address of the NEW Restored Cluster

Use the IP address of one of the masters of the NEW restored Kubernetes cluster for all the resources where the IP address needs to be replaced.

As public clouds use load balancers and static IP ports, these resource details may differ accordingly. Be sure to use the resources applicable to your cloud environment.

8d. Change the IP Address and Port Numbers for the NEW Restored Cluster

The IP addresses and port numbers are not updated automatically in the Workload Manager UI and you must explicitly update them using this procedure.

As public clouds use load balancers and static IP ports, these resource details may differ accordingly. Be sure to use the resources applicable to your cloud environment.

To configure the IP address and port number in the new cluster, follow this procedure.

  1. Access the Workload Manager module.

  2. Navigate to Clouds > Configure Cloud > Region Connectivity.

  3. Click Edit Connectivity in the Region Connectivity settings.

  4. In the Configure Region popup, change the three fields mentioned above to ensure that the IP and port details are updated to the NEW restored VM.

    DO NOT MAKE ANY OTHER CONFIGURATION CHANGES!

  5. Click OK to save your changes.

    Saving your changes may not automatically update the information in the Region Connectivity settings. Be sure to refresh the page to see the saved information.

You have now updated the DNS/IP/Port for the restored WM for this particular cloud. If you have configured other clouds in this environment, be sure to repeat this procedure for each cloud. Once you complete this procedure for all configured clouds, you can resume new deployment activities using the Workload Manager.

8e. Perform the Pre-Migrate Activities

Before you migrate the deployment details you need to ensure that you can connect to both clusters and have the required files to perform the migration.

To perform the pre-migrate activities, follow this procedure.

  1. Verify that the OLD cluster VMs can reach the NEW cluster. The remaining steps in this procedure are dependent on this connectivity in your environment.

  2. Save the contents of the following actions.json file to your local directory, using the same name and a .json file extension.

    The actions.json file
    {"repositories":[],"actions":{"resource":null,"size":2,"pageNumber":0,"totalElements":2,"totalPages":1,"actionJaxbs":[{"id":"57","resource":null,"name":"AgentReConfig_Linux","description":"","actionType":"EXECUTE_COMMAND","category":"ON_DEMAND","lastUpdatedTime":"2019-09-19 22:14:54.245","timeOut":1200,"enabled":true,"encrypted":false,"explicitShare":false,"showExplicitShareFeature":false,"deleted":false,"systemDefined":false,"bulkOperationSupported":true,"isAvailableToUser":true,"currentlyExecuting":false,"owner":1,"actionParameters":[{"paramName":"downloadFromBundle","paramValue":"true","customParam":false,"required":true,"useDefault":false,"preference":"VISIBLE_UNLOCKED"},{"paramName":"bundlePath","paramValue":"http://10.0.0.3/5.1-release/ccs-bundle-artifacts-5.1.0-20190819/agent.zip","customParam":false,"required":true,"useDefault":false,"preference":"VISIBLE_UNLOCKED"},{"paramName":"script","paramValue":"agent/agentReconfig.sh","customParam":false,"required":true,"useDefault":false,"preference":"VISIBLE_UNLOCKED"},{"paramName":"executeOnContainer","paramValue":"false","customParam":false,"required":true,"useDefault":false,"preference":"VISIBLE_UNLOCKED"},{"paramName":"rebootInstance","paramValue":"false","customParam":false,"required":true,"useDefault":false,"preference":"VISIBLE_UNLOCKED"},{"paramName":"refreshInstanceInfo","paramValue":"false","customParam":false,"required":true,"useDefault":false,"preference":"VISIBLE_UNLOCKED"}],"actionResourceMappings":[{"type":"VIRTUAL_MACHINE","actionResourceFilters":[{"cloudRegionResource":null,"serviceResource":null,"applicationProfileResource":null,"deploymentResource":null,"vmResource":{"type":"DEPLOYMENT_VM","appProfiles":["all"],"cloudRegions":["all"],"cloudAccounts":["all"],"services":["all"],"osTypes":[],"cloudFamilyNames":[],"nodeStates":[],"cloudResourceMappings":[]},"isEditable":true},{"cloudRegionResource":null,"serviceResource":null,"applicationProfileResource":null,"deploymentResource":null,"vmResource"
:{"type":"IMPORTED_VM","appProfiles":[],"cloudRegions":["all"],"cloudAccounts":["all"],"services":[],"osTypes":["all"],"cloudFamilyNames":[],"nodeStates":[],"cloudResourceMappings":[]},"isEditable":true}]}],"actionResourceMappingAncillaries":[],"actionCustomParamSpecs":[{"paramName":"brokerHost","displayName":"BrokerHost","helpText":"Ip Address or Hostname of Rabbit MQ cluster","type":"string","valueList":null,"defaultValue":"","confirmValue":"","pathSuffixValue":"","userVisible":true,"userEditable":true,"systemParam":false,"exampleValue":null,"dataUnit":null,"optional":false,"deploymentParam":false,"multiselectSupported":false,"useDefault":true,"valueConstraint":{"minValue":0,"maxValue":255,"maxLength":255,"regex":null,"allowSpaces":true,"sizeValue":0,"step":0,"calloutWorkflowName":null},"scope":null,"webserviceListParams":{"url":"","protocol":"","username":"","password":"","requestType":null,"contentType":null,"commandParams":null,"requestBody":null,"resultString":null},"secret":null,"tabularTypeData":null,"collectionList":[],"preference":"VISIBLE_UNLOCKED"},{"paramName":"brokerPort","displayName":"BrokerPort","helpText":"RabbitMQ Port 
number","type":"string","valueList":null,"defaultValue":"","confirmValue":"","pathSuffixValue":"","userVisible":true,"userEditable":true,"systemParam":false,"exampleValue":null,"dataUnit":null,"optional":false,"deploymentParam":false,"multiselectSupported":false,"useDefault":true,"valueConstraint":{"minValue":0,"maxValue":255,"maxLength":255,"regex":null,"allowSpaces":true,"sizeValue":0,"step":0,"calloutWorkflowName":null},"scope":null,"webserviceListParams":{"url":"","protocol":"","username":"","password":"","requestType":null,"contentType":null,"commandParams":null,"requestBody":null,"resultString":null},"secret":null,"tabularTypeData":null,"collectionList":[],"preference":"VISIBLE_UNLOCKED"}]},{"id":"58","resource":null,"name":"AgentReConfig_Win","description":"","actionType":"EXECUTE_COMMAND","category":"ON_DEMAND","lastUpdatedTime":"2019-09-19 22:15:02.311","timeOut":1200,"enabled":true,"encrypted":false,"explicitShare":false,"showExplicitShareFeature":false,"deleted":false,"systemDefined":false,"bulkOperationSupported":true,"isAvailableToUser":true,"currentlyExecuting":false,"owner":1,"actionParameters":[{"paramName":"downloadFromBundle","paramValue":"true","customParam":false,"required":true,"useDefault":false,"preference":"VISIBLE_UNLOCKED"},{"paramName":"bundlePath","paramValue":"http://10.0.0.3/5.1-release/ccs-bundle-artifacts-5.1.0-20190819/agent.zip","customParam":false,"required":true,"useDefault":false,"preference":"VISIBLE_UNLOCKED"},{"paramName":"script","paramValue":"agent\\agentReconfig.ps1","customParam":false,"required":true,"useDefault":false,"preference":"VISIBLE_UNLOCKED"},{"paramName":"executeOnContainer","paramValue":"false","customParam":false,"required":true,"useDefault":false,"preference":"VISIBLE_UNLOCKED"},{"paramName":"rebootInstance","paramValue":"false","customParam":false,"required":true,"useDefault":false,"preference":"VISIBLE_UNLOCKED"},{"paramName":"refreshInstanceInfo","paramValue":"false","customParam":false,"required":true,"us
eDefault":false,"preference":"VISIBLE_UNLOCKED"}],"actionResourceMappings":[{"type":"VIRTUAL_MACHINE","actionResourceFilters":[{"cloudRegionResource":null,"serviceResource":null,"applicationProfileResource":null,"deploymentResource":null,"vmResource":{"type":"DEPLOYMENT_VM","appProfiles":["all"],"cloudRegions":["all"],"cloudAccounts":["all"],"services":["all"],"osTypes":[],"cloudFamilyNames":[],"nodeStates":[],"cloudResourceMappings":[]},"isEditable":true},{"cloudRegionResource":null,"serviceResource":null,"applicationProfileResource":null,"deploymentResource":null,"vmResource":{"type":"IMPORTED_VM","appProfiles":[],"cloudRegions":["all"],"cloudAccounts":["all"],"services":[],"osTypes":["all"],"cloudFamilyNames":[],"nodeStates":[],"cloudResourceMappings":[]},"isEditable":true}]}],"actionResourceMappingAncillaries":[],"actionCustomParamSpecs":[{"paramName":"brokerHost","displayName":"BrokerHost","helpText":"Ip Address or Hostname of Rabbit MQ cluster","type":"string","valueList":null,"defaultValue":"","confirmValue":"","pathSuffixValue":"","userVisible":true,"userEditable":true,"systemParam":false,"exampleValue":null,"dataUnit":null,"optional":false,"deploymentParam":false,"multiselectSupported":false,"useDefault":true,"valueConstraint":{"minValue":0,"maxValue":255,"maxLength":255,"regex":null,"allowSpaces":true,"sizeValue":0,"step":0,"calloutWorkflowName":null},"scope":null,"webserviceListParams":{"url":"","protocol":"","username":"","password":"","requestType":null,"contentType":null,"commandParams":null,"requestBody":null,"resultString":null},"secret":null,"tabularTypeData":null,"collectionList":[],"preference":"VISIBLE_UNLOCKED"},{"paramName":"brokerPort","displayName":"BrokerPort","helpText":"RabbitMQ Port 
number","type":"string","valueList":null,"defaultValue":"","confirmValue":"","pathSuffixValue":"","userVisible":true,"userEditable":true,"systemParam":false,"exampleValue":null,"dataUnit":null,"optional":false,"deploymentParam":false,"multiselectSupported":false,"useDefault":true,"valueConstraint":{"minValue":0,"maxValue":255,"maxLength":255,"regex":null,"allowSpaces":true,"sizeValue":0,"step":0,"calloutWorkflowName":null},"scope":null,"webserviceListParams":{"url":"","protocol":"","username":"","password":"","requestType":null,"contentType":null,"commandParams":null,"requestBody":null,"resultString":null},"secret":null,"tabularTypeData":null,"collectionList":[],"preference":"VISIBLE_UNLOCKED"}]}]},"repositoriesMappingRequired":false,"actionTypesCounts":[{"key":"EXECUTE_COMMAND","value":"2"}]}
  3. Access Workload Manager in your OLD cluster and navigate to the Actions Library page.

  4. Import the actions.json file that you saved in Step 2 above. You should see two actions (AgentReConfig_Linux and AgentReConfig_Win) as displayed in the following screenshot.

  5. The actions are disabled by default (OFF) – enable both by toggling each switch to ON.

  6. Save the following script to a file in your local directory and name it agentReconfig.sh. This is the file to use for Linux environments.

    The agentReconfig.sh file
    #!/bin/bash
    
    #Write to system log as well as to terminal
    logWrite()
    {
        msg=$1
        echo "$(date) ${msg}"
        logger -t "OSMOSIX" "${msg}"
        return 0
    }
    
    
    logWrite "Starting agent migrate..."
    
    env_file="/usr/local/osmosix/etc/userenv"
    if [ -f $env_file ];
    then
        logWrite "Source the userenv file..."
        . $env_file
    fi
    
    
    if [ -z $brokerHost ];
    then
        logWrite "Broker Host / Rabbit Server Ip not passed as action parameter"
        exit 3;
    fi
    
    if [ -z $brokerPort ];
    then
        logWrite "Broker Port / Rabbit Server Port not passed as action parameter"
        exit 4
    fi
    
    replaceUserdataValue() {
        key=$1
        value=$2
    
        if [ -z $key ] || [ -z $value ];
        then
            logWrite "Command line arguments missing to update user-data file, key: $key, value:$value"
            return
        fi
    
        user_data_file="/usr/local/agentlite/etc/user-data"
        if [ -f $user_data_file ];
        then
            json_content=`cat $user_data_file`
            old_value=`echo $json_content | awk -F $key '{print $2}' | awk -F \" '{print $3}'`
            sed  -i 's@'"$old_value"'@'"$value"'@g'  $user_data_file
        fi
    
    }
    
    export AGENT_HOME="/usr/local/agentlite"
    
    logWrite "Updating the user data file"
    replaceUserdataValue "brokerClusterAddresses" "$brokerHost:$brokerPort"
    
    logWrite "Updating config.json file"
    sed -i '/AmqpAddress/c\    "AmqpAddress": "'"${brokerHost}:${brokerPort}"'",' "$AGENT_HOME/config/config.json"
    
    cd $AGENT_HOME
    echo "sleep 10" > execute.sh
    echo "/usr/local/agentlite/bin/agent-stop.sh" >> execute.sh
    echo "/usr/local/agentlite/bin/agent-start.sh" >> execute.sh
    chmod a+x execute.sh
    nohup bash execute.sh  > /dev/null 2>&1 &
    
    exit 0
    
  7. Save the following script to a file in your local directory and name it agentReconfig.ps1. This is the file to use for Windows environments.

    The agentReconfig.ps1 file
    param (
        [string]$brokerHost = "$env:brokerHost",
        [string]$brokerPort = "$env:brokerPort"
    )
    
    
    $SERVICE_NAME = "AgentService"
    $SYSTEM_DRIVE = (Get-WmiObject Win32_OperatingSystem).SystemDrive
    . "$SYSTEM_DRIVE\temp\userenv.ps1"
    
    
    if ($brokerHost -eq 0 -or $brokerHost -eq $null -or $brokerHost -eq "") {
        echo "Variable brokerHost not available in the env file"
        exit 1
    }
    
    if ($brokerPort -eq 0 -or $brokerPort -eq $null -or $brokerPort -eq "") {
        echo "Variable brokerPort not available in the env file"
        exit 2
    }
    
    $AGENTGO_PARENT_DIR = "$SYSTEM_DRIVE\opt"
    
    echo "Check if AgentGo Parent directory exists. If not create it: '$AGENTGO_PARENT_DIR'"
    if (-not (Test-Path $AGENTGO_PARENT_DIR)) {
        echo "Create $AGENTGO_PARENT_DIR..."
        mkdir $AGENTGO_PARENT_DIR
    }
    else {
        echo "$AGENTGO_PARENT_DIR already exists."
    }
    
    $AGENT_CONFIG="{0}\agentlite\config\config.json" -f $AGENTGO_PARENT_DIR
    if (Test-Path $AGENT_CONFIG) {
        echo "Changing the config.json file with the new broker host $env:brokerHost and port $env:brokerPort"
        $confJson = get-content $AGENT_CONFIG | out-string | convertfrom-json
        $confJson.AmqpAddress = "$($env:brokerHost):$($env:brokerPort)"
        $confJson | ConvertTo-Json | set-content $AGENT_CONFIG
    }
    
    $USER_DATA_FILE = "{0}\agentlite\etc\user-data" -f $AGENTGO_PARENT_DIR
    if (Test-Path $USER_DATA_FILE) {
        echo "Changing user-data file with new broker host $env:brokerHost and port $env:brokerPort"
        $userDataJson = get-content $USER_DATA_FILE | out-string | convertfrom-json
        $userDataJson.brokerClusterAddresses = "$($env:brokerHost):$($env:brokerPort)"
        $userDataJson | ConvertTo-Json | set-content $USER_DATA_FILE
    }
    
    $AGENT_SERVICE_NAME = "AgentService"
    echo "Stop-Service $AGENT_SERVICE_NAME" > $AGENTGO_PARENT_DIR\exec.ps1
    echo "sleep 10" >> $AGENTGO_PARENT_DIR\exec.ps1
    echo "Start-Service $AGENT_SERVICE_NAME" >> $AGENTGO_PARENT_DIR\exec.ps1
    
    echo "Restarting agent"
    Start-Process -filepath "powershell" -argumentlist "-executionpolicy bypass -noninteractive -file `"$AGENTGO_PARENT_DIR\exec.ps1`""
    
    echo "Agent set to restart after config changes"
    
  8. Add these two files to a folder called agent (the name is just an example) and compress the folder to create agent.zip with the same structure displayed here.

    agent

    ├── agentReconfig.ps1

    └── agentReconfig.sh

  9. Move the agent.zip file to an HTTP repository in your local environment that is accessible from both the OLD and NEW clusters.

    This procedure uses the following URL as an example:

    http://10.0.0.3/repo/agent.zip

You have now ensured cluster connectivity and saved the required files for the migration procedure.
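The user-data substitution inside agentReconfig.sh (step 6 above) can be exercised standalone; the file path and broker values below are illustrative only:

```shell
# Stand-in for /usr/local/agentlite/etc/user-data.
user_data_file=$(mktemp)
echo '{"brokerClusterAddresses":"10.0.0.9:26642","vmId":"vm-1"}' > "$user_data_file"

# Same extraction the script performs: split on the key name, then take the
# third quote-delimited field to recover the old host:port value.
json_content=$(cat "$user_data_file")
old_value=$(echo "$json_content" | awk -F 'brokerClusterAddresses' '{print $2}' | awk -F \" '{print $3}')

# Swap in the new broker host:port in place.
sed -i 's@'"$old_value"'@10.0.0.99:26700@g' "$user_data_file"
cat "$user_data_file"    # brokerClusterAddresses is now 10.0.0.99:26700
rm -f "$user_data_file"
```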

8f. Migrate Deployments from the OLD Cluster to the NEW Cluster

To migrate the deployment details from the old cluster to the new cluster, follow this procedure.

  1. Navigate to the Workload Manager Actions Library page and edit the AgentReConfig_Linux action. This procedure continues to use the Linux file going forward.

  2. Scroll to the Actions Definition section and update the URL as displayed in the following screenshot.

    The URL and Script from Bundle fields in the above screenshot are in accordance with the steps above.

  3. Scroll to the Custom Fields section and change the default value of the Broker Host to use the NEW cluster IP.

  4. Scroll down to the Broker Port and change the default to use the NEW Worker AMQP port (for example, 26642 as retrieved in Section 8b above).

  5. Click Done to save your default configuration changes in the OLD cluster.

  6. Navigate to the Virtual Machines page and locate the VM to migrate to the new cluster.

  7. Click the Actions dropdown and verify if your newly modified actions are visible under the Custom Actions section in the dropdown list as visible in the following screenshot.

  8. Click one of the actions and verify that the configured defaults are displayed in the Broker host and Broker port fields as indicated earlier.

  9. Click Submit to migrate this VM to the new cluster.

  10. Verify that the migration is complete by going to the Deployment page in your NEW cluster and confirming that the VM is listed as RUNNING (green line).

  11. Repeat Steps 6 through 10 for each VM that needs to be migrated to the NEW cluster.

You have now migrated the deployment details from the old cluster to the new cluster.

