Installing CPS vDRA

Create Installer VM in vSphere

Create the installer VM in VMware vSphere.

Download the vDRA deployer VMDKs and base image VMDKs.

Upload the VMDK File

Upload the VMDK file as shown in the following example:

ssh root@my-esxi-1.cisco.com
cd /vmfs/volumes/<datastore>
mkdir cps-images
cd /vmfs/volumes/<datastore>/cps-images
wget http://<your_host>/cps-deployer-host_<version>.vmdk

Convert CPS Deployer VMDK to ESXi Format

Convert the CPS deployer host VMDK to ESXi format as shown in the following example:

ssh root@my-esxi-1.cisco.com
cd /vmfs/volumes/<datastore>/cps-images
vmkfstools --diskformat thin -i cps-deployer-host_<version>.vmdk cps-deployer-host_<version>-esxi.vmdk

Create CPS Installer VM

Using the vSphere client, create the CPS Installer VM.

Procedure


Step 1

Log in to the vSphere Web Client and select the blade where you want to create the new cluster manager VM.

Step 2

Right-click the blade and select New Virtual Machine. The New Virtual Machine window opens.

Step 3

Select Create a new virtual machine and click Next to open the Select a name and folder window.

Step 4

Enter a name for the virtual machine (for example, CPS Cluster Manager) and select the location for the virtual machine. Click Next.

Step 5

In the Select a compute resource window, select the blade IP address and click Next to open the Select storage window.

Step 6

In the Select storage window, select the datastore name and click Next to open the Select compatibility window.

Step 7

From the Compatible with: drop-down list, select ESXi 7.0 and later and click Next to open the Select a guest OS window.

Note

 

Support for VMX11 is added only for fresh installs. For the upgrade flow (option 2/option 3), upgrading VMX is not supported.

Step 8

From the Guest OS Family: drop-down list, select Linux, and from the Guest OS Version: drop-down list, select Ubuntu Linux (64-bit).

Step 9

Click Next to open the Customize settings window.

Step 10

In Virtual Hardware tab:

  1. Select 4 CPUs.

  2. Select Memory size as 32 GB.

  3. Delete New Hard Disk (the VM will use the existing disk created earlier with the vmkfstools command).

  4. Expand New SCSI controller and from Change Type drop-down list, select VMware Paravirtual.

  5. Two NICs are required (one for eth1 as internal and a second for eth2 as management). One NIC already exists by default under New Network.

    Under New Network, ensure that Connect At Power On is selected.

  6. To add another NIC, click ADD NEW DEVICE and select Network Adapter from the list.

    Under New Network, ensure that Connect At Power On is selected.

  7. Click the VM options tab to configure additional options for the virtual machine:

    • Expand the Boot options. In the Firmware field, choose the BIOS mode from the drop-down list that will be used to boot the virtual machine.

  8. Click Next to open Ready to complete window.

Step 11

Review the settings displayed on Ready to complete window and click Finish.

Step 12

Press Ctrl+Alt+2 to go back to Hosts and Clusters and select the VM created above (CPS Cluster Manager).

  1. Right-click and select Edit Settings. The Virtual Hardware tab is displayed by default.

  2. Click ADD NEW DEVICE and select Existing Hard Disk from the list to open the Select File window.

  3. Navigate to cps-deployer-host_<version>-esxi.vmdk file created earlier with the vmkfstools command and click OK.

Step 13

Adjust hard disk size.

  1. Press Ctrl+Alt+2 to go back to Hosts and Clusters and select the VM created above (CPS Cluster Manager).

  2. Right-click and select Edit Settings. The Virtual Hardware tab is displayed by default.

  3. In the Hard disk 1 text box, enter 100 and click OK.

Step 14

Power ON the VM and open the console.


Configure Network

Procedure


Step 1

Log in to the VM console as user cps, password cisco123.

Step 2

Create the /etc/network/interfaces file using vi or using the here document syntax as shown in the example:

cps@ubuntu:~$ sudo -i
root@ubuntu:~# cat > /etc/network/interfaces <<EOF
auto lo
iface lo inet loopback
 
auto ens160
iface ens160 inet static
address 10.10.10.5
netmask 255.255.255.0
gateway 10.10.10.1
dns-nameservers 192.168.1.2
dns-search cisco.com
EOF
root@ubuntu:~#

Step 3

Restart networking as shown in the following example:

root@ubuntu:~# systemctl restart networking
root@ubuntu:~# ifdown ens160
root@ubuntu:~# ifup ens160
root@ubuntu:~# exit
cps@ubuntu:~$

What to do next

You can log in remotely using the SSH login cps/cisco123.
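For example, using the static address configured in the Configure Network example above:

ssh cps@10.10.10.5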

Binding-VNF

The process for installing the binding-vnf is the same as for the dra-vnf. Create the configuration artifacts for the binding-vnf using the same VMDK, but use the binding ISO instead of the DRA ISO. As with the dra-vnf, add a 200 GB data disk to the master and control VMs.
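The cps installer command described below lists a datadisk action. A possible invocation for attaching the data disks, assuming datadisk takes the same artifact-path and VM-name arguments as the other actions (verify against your release), is:

cps datadisk binding-vnf master-binding-0 control-binding-0 control-binding-1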

Artifacts Structure

cps@installer:/data/deployer/envs/binding-vnf$ tree
.
|-- base.env
|-- base.esxi.env
|-- user_data.yml
|-- user_data.yml.pam
`-- vms
    |-- control-0
    |   |-- control-binding-0
    |   |   |-- interfaces.esxi
    |   |   |-- user_data.yml
    |   |   |-- user_data.yml.pam
    |   |   |-- vm.env
    |   |   `-- vm.esxi.env
    |   |-- role.env
    |   `-- role.esxi.env
    |-- control-1
    |   |-- control-binding-1
    |   |   |-- interfaces.esxi
    |   |   |-- user_data.yml
    |   |   |-- user_data.yml.pam
    |   |   |-- vm.env
    |   |   `-- vm.esxi.env
    |   |-- role.env
    |   |-- role.esxi.env
    |   `-- user_data.yml.disk
    |-- master
    |   |-- master-binding-0
    |   |   |-- interfaces.esxi
    |   |   |-- user_data.yml
    |   |   |-- user_data.yml.functions
    |   |   |-- user_data.yml.pam
    |   |   |-- vm.env
    |   |   `-- vm.esxi.env
    |   |-- role.env
    |   `-- role.esxi.env
    `-- persistence-db
        |-- persistence-db-1
        |   |-- interfaces.esxi
        |   |-- vm.env
        |   `-- vm.esxi.env
        |-- persistence-db-2
        |   |-- interfaces.esxi
        |   |-- vm.env
        |   `-- vm.esxi.env
        |-- persistence-db-3
        |   |-- interfaces.esxi
        |   |-- vm.env
        |   `-- vm.esxi.env
        |-- role.env
        `-- role.esxi.env

11 directories, 38 files
cps@installer:/data/deployer/envs/binding-vnf$

CPS Installer Commands

Command Usage

Use the cps command to deploy VMs. The command is a wrapper around the docker command that is required to run the deployer container.

Example:

function cps () {
     # Mount the artifacts directory and the OVA/template export directory,
     # then run the deployer container interactively and remove it on exit.
     docker run \
         -v /data/deployer:/data/deployer \
         -v /data/vmware/:/export/ \
         -it --rm dockerhub.cisco.com/cps-docker-v2/cps-deployer/deployer:latest \
         /root/cps "$@"
}

To view the help for the command, run the following command: cps -h

cps@installer:~$ cps -h
usage: cps [-h] [--artifacts_abs_root_path ARTIFACTS_ABS_ROOT_PATH]
           [--export_dir EXPORT_DIR] [--deploy_type DEPLOY_TYPE]
           [--template_dir TEMPLATE_DIR]
           [--status_table_width STATUS_TABLE_WIDTH] [--skip_create_ova]
           [--skip_delete_ova]
           {install,delete,redeploy,list,poweroff,poweron,datadisk}
           vnf_artifacts_relative_path [vm_name [vm_name ...]]

positional arguments:
  {install,delete,redeploy,list,poweroff,poweron,datadisk}
                        Action to perform
  vnf_artifacts_relative_path
                        VNF artifacts directory relative to vnf artifacts root
                        path. Example: dra-vnf
  vm_name               name of virtual machine

optional arguments:
  -h, --help            show this help message and exit
  --artifacts_abs_root_path ARTIFACTS_ABS_ROOT_PATH
                        Absolute path to artifacts root path. Example:
                        /data/deployer/envs
  --export_dir EXPORT_DIR
                        Abosolute path to store ova files and rendered
                        templates
  --deploy_type DEPLOY_TYPE
                        esxi
  --template_dir TEMPLATE_DIR
                        Absolute path to default templates
  --status_table_width STATUS_TABLE_WIDTH
                        Number of VMs displayed per row in vm status table
  --skip_create_ova     Skip the creation of ova files. If this option is
                        used, the ova files must be pre-created. This if for
                        testing and debugging
  --skip_delete_ova     Skip the deletion of ova files. If this option is
                        used, the ova files are not deleted. This if for
                        testing and debugging

List VMs in Artifacts

Use the following command to list VMs in artifacts:

cps list example-dra-vnf

where example-dra-vnf is the VNF artifacts directory.
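The directory is resolved relative to the VNF artifacts root path (for example, /data/deployer/envs, per the command help). The root can also be given explicitly with the documented option:

cps --artifacts_abs_root_path /data/deployer/envs list example-dra-vnf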

Deploy all VMs in Parallel

Use the following command to deploy all VMs in parallel:

cps install example-dra-vnf

Deploy one or more VMs

The following example command shows how to deploy dra-director-2 and dra-worker-1:

cps install example-dra-vnf dra-director-2 dra-worker-1

Deploy all VMs with or without a Hypervisor Flag

Use the following command to install all VMs that are tagged with an ESXIHOST value matching the hypervisor name esxi-host-1 in their vm.esxi.env file:

cps install dra-vnf --hypervisor esxi-host-1 
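A minimal sketch of the corresponding line in a VM's vm.esxi.env file, assuming the usual KEY=VALUE env-file format (an assumption; the exact spelling in your artifacts may differ):

ESXIHOST=esxi-host-1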

The following cps install command allows you to perform activities on more than one artifact file, with or without the --hypervisor flag:

cps install --addartifact artifact-env-2 --hypervisor hypervisor-name

Health Checks

Using the --hypervisor option, you can perform a health check of the docker engine and consul status of other VMs before making changes on the requested VM.

For example, if you run cps install --hypervisor esxi-host-1, then any VMs that are tagged with esxi-host-1 are excluded and the remaining set of VMs from the artifact file is considered for health check.

VM Name    ESXiHOST
vm01       esxi-host-1
vm02       esxi-host-2
vm03       esxi-host-2

This is done to ensure that VMs on other blades are stable before performing the requested changes on their partner blade VMs. The health check fetches details of the master VM automatically from the artifact file and performs SSH to the master to check whether the docker engine and consul status of vm02 and vm03 are in a proper state. If the state is proper, the cps command starts the requested operation, such as install, power on, or redeploy.
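As a manual spot-check, the same state that the health check inspects can be viewed from the master VM CLI using the commands shown later in this guide (vm02 and vm03 are the generic names from the table above):

ssh -p 2024 admin@<master management ip address>
show docker engine | include vm02
show docker engine | include vm03
show docker service | tab | include consul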

Delete one or more VMs

The following command is an example for deleting dra-director-1 and dra-worker-1 VMs:


Note


VM deletion can disrupt services.


cps delete example-dra-vnf dra-director-1 dra-worker-1

Redeploy all VMs

Redeploying VMs involves deleting a VM and then redeploying it. If more than one VM is specified, the VMs are processed serially. The following command is an example for redeploying all VMs:


Note


VM deletion can disrupt services.


cps redeploy example-dra-vnf

Redeploy one or more VMs

Redeploying VMs involves deleting a VM and then redeploying it. If more than one VM is specified, the VMs are processed serially. The following command is an example for redeploying two VMs:


Note


VM deletion can disrupt services.


cps redeploy example-dra-vnf dra-director-1 control-1

Power down one or more VMs

The following command is an example for powering down two VMs:


Note


Powering down the VM can disrupt services.


cps poweroff example-dra-vnf dra-director-1 dra-worker-1

Power up one or more VMs

The following command is an example for powering up two VMs:


Note


Powering up the VM can disrupt services.


cps poweron example-dra-vnf dra-director-1 dra-worker-1

Upgrading VMs using Diagnostics and Redeployment Health Check

Diagnostics of VMs

Use the following command to perform system diagnostics on VMs from vDRA to DB VNFs.

cps diagnostics dra-vnf

Redeployment Health Check for VMs

Use the following command to perform the redeployment health check on VMs.

cps redeploy dra-vnf --healthcheck yes --sysenv dra

Ranking Details

To upgrade the VMs, create a group of specific VMs from the artifact files and place it in /data/deployer/envs/upgradelist.txt. This is a one-time creation process, and the file uses a ranking mechanism.

Based on rank, separate the entries with commas (,) as shown in the following example:

Example:

cat /data/deployer/envs/upgradelist.txt 
1,sk-master0
2,sk-control0,sk-dra-worker2
3,sk-control1,sk-dra-worker1
4,sk-dra-director1,sk-dra-director2

The pre- and post-checks for master and control VMs differ from those for other VMs.

Ranking Details

Rank 1: Master VM

Example: 1,sk-master0

If there is no master VM, remove Rank 1 (1,sk-master0) from the upgradelist.txt file so that the other ranks are not disturbed.

Ranks 2 and 3: Control VMs

Example:

2,sk-control0,sk-dra-worker2

3,sk-control1,sk-dra-worker1

  • Declare the control VMs for Ranks 2 and 3 and add one or more VMs.

  • If you do not redeploy control VMs, do not declare any values starting with Ranks 2 and 3 in the upgradelist.txt file.

Rank 4: Other VMs

Example: 4,sk-dra-director1,sk-dra-director2

Rank 4 must not contain master or control VMs.

Rank 1 (master) is differentiated from Ranks 2 and 3 (control) because the pre- and post-checks for master and control VMs also differ from each other.

Resume Redeployment

The resume option starts the VM redeployment from the last successful completion.

Consider the following scenario where the deployment completes up to site2-binding-control-0. For some reason, the VMs after site2-binding-control-0 face a problem, and the automation feature terminates the execution.

root@ubuntu:~# cat /data/deployer/envs/upgradelist.txt 
1,site2-binding-master-1
2,site2-binding-control-0,site2-persistence-db-1
3,site2-binding-control-1,site2-persistence-db-2

Use the cps redeploy /data/deployer/envs/dba-vnf/ --healthcheck yes --sysenv dba command to resume the redeployment.

Configuration and Restrictions:
  • The diagnostics and redeployment of VMs with the health check work only if the master VM is active.

  • For a proper health check, copy the cps.pem key used for connecting to the Master VM to the /data/deployer/envs folder.
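For example, assuming the key is present on the installer at a hypothetical path /root/cps.pem:

cp /root/cps.pem /data/deployer/envs/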

Validate Deployment

Use the CLI on the master VM to validate the installation.

Connect to the CLI using the default user and password (admin/admin).

ssh -p 2024 admin@<master management ip address>

show system status

Use the show system status command to display the system status.


Note


System status percent-complete should be 100%.


admin@orchestrator[master-0]# show system status
system status running     true
system status upgrade     false
system status downgrade   false
system status external-services-enabled true
system status debug       false
system status percent-complete 100.0
admin@orchestrator[master-0]#

show system diagnostics

No diagnostic messages should appear when using the following command:

admin@orchestrator[master-0]# show system diagnostics | tab | exclude pass
NODE       CHECK ID                        IDX  STATUS   MESSAGE
----------------------------------------------------------------

admin@orchestrator[master-0]#

show docker engine

All DRA-VNF VMs should be listed and in the CONNECTED state.

admin@orchestrator[master-0]# show docker engine
                              MISSED
ID                 STATUS     PINGS
--------------------------------------
control-0          CONNECTED  0
control-1          CONNECTED  0
dra-director-1     CONNECTED  0
dra-director-2     CONNECTED  0
dra-distributor-1  CONNECTED  0
dra-distributor-2  CONNECTED  0
dra-worker-1       CONNECTED  0
dra-worker-2       CONNECTED  0
master-0           CONNECTED  0

admin@orchestrator[master-0]#

show docker service

No containers should be displayed when using the exclude HEAL filter.

admin@orchestrator[master-0]# show docker service | tab | exclude HEAL
                                                             PENALTY
MODULE  INSTANCE NAME  VERSION  ENGINE  CONTAINER ID  STATE  BOX     MESSAGE
----------------------------------------------------------------------------

admin@orchestrator[master-0]#

Redeploy VMs during the ISSM Operation

To redeploy VMs during In-Service Software Migration (ISSM), use the following procedure:

Procedure


Step 1

Find the consul container that has the consul leader role:

  1. To find the consul leader, use the following command:

    # docker exec consul-1 "consul operator raft list-peers"

For example, in the following output consul-3 is the leader.

admin@orchestrator[an-master]# docker exec consul-1 "consul operator raft list-peers"
==========output from container consul-1===========
Node                  ID                                    Address           State     Voter  RaftProtocol
consul-2.weave.local  52d5b25c-77fc-1163-0304-493b117096cd  10.46.128.2:8300  follower  true   3
consul-4.weave.local  fe68543b-ef72-66a7-7830-1c0405fd06a0  10.32.128.1:8300  follower  true   3
consul-5.weave.local  21539d8a-7d55-9cdb-c3e0-7680b448b5d5  10.32.160.1:8300  follower  true   3
consul-3.weave.local  f7a87957-a129-a12e-eb44-03bc3b385ec1  10.46.160.2:8300  leader    true   3
consul-1.weave.local  2d14416d-cc22-bcbd-e686-04bdc860332d  10.32.0.3:8300    follower  true   3
consul-7.weave.local  a3b0ba51-a8d4-68b4-b899-c20ede286e09  10.47.160.1:8300  follower  true   3
consul-6.weave.local  36d06c94-2ec5-094d-7acf-7ea190b36825  10.46.224.1:8300  follower  true   3
admin@orchestrator[an-master]#

Step 2

Use the following command to find the VM in which the consul leader is running:

show docker service | tab | include consul

For example, in the following output the consul leader (consul-3) is running in the an-control-1 VM.

admin@orchestrator[an-master]# show docker service | tab | include consul
consul                1         consul-1                    23.2.0-release  an-master          consul-1                         HEALTHY  false    -      
consul                1         consul-2                    23.2.0-release  an-control-0       consul-2                         HEALTHY  false    -      
consul                1         consul-3                    23.2.0-release  an-control-1       consul-3                         HEALTHY  false    -      
consul-dra            1         consul-4                    23.2.0-release  an-dra-director-0  consul-4                         HEALTHY  false    -      
consul-dra            1         consul-5                    23.2.0-release  an-dra-director-1  consul-5                         HEALTHY  false    -      
consul-dra            1         consul-6                    23.2.0-release  an-dra-worker-0    consul-6                         HEALTHY  false    -      
consul-dra            1         consul-7                    23.2.0-release  an-dra-worker-1    consul-7                         HEALTHY  false    -      
admin@orchestrator[an-master]#

Step 3

Perform consul leader failover by stopping the consul server in the consul leader container, using the docker exec <consul-leader-container> "supervisorctl stop consul-server" command.

Example: If the consul leader VM is the same as the VM to be redeployed, stop the consul-server in the consul leader container to perform consul leader failover.
admin@orchestrator[an-master]# docker exec consul-3 "supervisorctl stop consul-server"
==========output from container consul-3===========
consul-server: stopped
admin@orchestrator[an-master]# 

Step 4

Verify the consul leader failover from another VM that will not be redeployed. Use the docker exec consul-1 "consul operator raft list-peers" command to verify the details, as shown in the sample output.

admin@orchestrator[an-master]# docker exec consul-1 "consul operator raft list-peers"
==========output from container consul-1===========
Node                  ID                                    Address           State     Voter  RaftProtocol
consul-2.weave.local  52d5b25c-77fc-1163-0304-493b117096cd  10.46.128.2:8300  follower  true   3
consul-4.weave.local  fe68543b-ef72-66a7-7830-1c0405fd06a0  10.32.128.1:8300  leader    true   3
consul-5.weave.local  21539d8a-7d55-9cdb-c3e0-7680b448b5d5  10.32.160.1:8300  follower  true   3
consul-3.weave.local  f7a87957-a129-a12e-eb44-03bc3b385ec1  10.46.160.2:8300  follower  true   3
consul-1.weave.local  2d14416d-cc22-bcbd-e686-04bdc860332d  10.32.0.3:8300    follower  true   3
consul-7.weave.local  a3b0ba51-a8d4-68b4-b899-c20ede286e09  10.47.160.1:8300  follower  true   3
consul-6.weave.local  36d06c94-2ec5-094d-7acf-7ea190b36825  10.46.224.1:8300  follower  true   3
admin@orchestrator[an-master]#

Step 5

Start the consul server in the consul container that you stopped in Step 3.
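For example, mirroring the stop command from Step 3 for the consul-3 container:

docker exec consul-3 "supervisorctl start consul-server"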

Step 6

Verify consul health using the show docker service | tab | include consul command to ensure that the consul containers are healthy after the consul leader failover.

admin@orchestrator[an-master]# show docker service | tab | include consul
consul                1         consul-1                    23.2.0-release  an-master          consul-1                         HEALTHY  false    -      
consul                1         consul-2                    23.2.0-release  an-control-0       consul-2                         HEALTHY  false    -      
consul                1         consul-3                    23.2.0-release  an-control-1       consul-3                         HEALTHY  false    -      
consul-dra            1         consul-4                    23.2.0-release  an-dra-director-0  consul-4                         HEALTHY  false    -      
consul-dra            1         consul-5                    23.2.0-release  an-dra-director-1  consul-5                         HEALTHY  false    -      
consul-dra            1         consul-6                    23.2.0-release  an-dra-worker-0    consul-6                         HEALTHY  false    -      
consul-dra            1         consul-7                    23.2.0-release  an-dra-worker-1    consul-7                         HEALTHY  false    -      
admin@orchestrator[an-master]#

Step 7

Redeploy the VM.


Redeploy VMs during the ISSM Operation with Overlay Network

ISSM Initial Configuration for Overlay Network

The Weave network enables communication between containers across VMs in the vDRA solution. When the Weave network is replaced with the Docker overlay network, you can redeploy the VMs during the ISSM operation.


Note


This procedure is applicable only to CPS 25.1.0 and later versions with the overlay network.


Procedure

To redeploy the VM in an overlay network environment, complete the following initial configuration steps:



Step 1

In the base.env file located on the deployer VM, add the following new properties:

WEAVE_ENABLED=false
OVERLAY_NETWORK=overlay
JOIN_TOKEN=SWMTKN-1-4ptz26lhvvw5y4y0hqwnbpo3n1ovxknomohutf7muepr9y8guf-btjzf1trrnu5lpnzj8mletxg4
SWARM_PORT=2377
LEADER_IP=192.168.00.00
BRIDGE_SUBNET_IPV4=172.20.0.0/16
BRIDGE_GATEWAY_IPV4=172.20.0.1
BRIDGE_SUBNET_IPV6=fd00:3984:3989::/64
BRIDGE_GATEWAY_IPV6=fd00:3984:3989::1
  • WEAVE_ENABLED=false: This option allows you to launch the VM with the overlay network. If you want to redeploy the VM in the Weave network, change it to WEAVE_ENABLED=true.

  • OVERLAY_NETWORK=overlay: This is the default name of the overlay network. During VM deployment, ensure that the OVERLAY_NETWORK property value and the OVERLAY_NW_NAME property value remain the same (a quick cross-check is sketched after this list).

Note

 

The OVERLAY_NW_NAME property is located under the /data/orchestrator/overlay-scripts path on the master VM.

  1. Run the docker swarm join-token { worker | manager } command on the master VM to get the JOIN_TOKEN, SWARM_PORT, and LEADER_IP values:

    Sample Configuration:

    cps@WPS-DRA-master:~$ docker swarm join-token manager
    docker swarm join --token SWMTKN-1-4ptz26lhvvw5y4y0hqwnbpo3n1ovxknomohutf7muepr9y8guf-8l3w5f09wi26vs0spgp6pmfcg 192.168.00.00:2377
    

    Example:

    JOIN_TOKEN=SWMTKN-1-4ptz26lhvvw5y4y0hqwnbpo3n1ovxknomohutf7muepr9y8guf-8l3w5f09wi26vs0spgp6pmfcg
    LEADER_IP=192.168.00.00
    SWARM_PORT=2377
    
  2. Get the property values from the /data/orchestrator/overlay-scripts/docker-overlay.conf file:

    BRIDGE_SUBNET_IPV4=172.00.0.0/16
    BRIDGE_GATEWAY_IPV4=172.00.0.1
    BRIDGE_SUBNET_IPV6=fd00:3984:3989::/64
    BRIDGE_GATEWAY_IPV6=fd00:3984:3989::1
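
As a quick cross-check of the OVERLAY_NETWORK/OVERLAY_NW_NAME requirement above, something like the following can be used (the base.env path assumes a dra-vnf artifacts directory; adjust it for your VNF):

grep OVERLAY_NETWORK /data/deployer/envs/dra-vnf/base.env     # on the deployer VM
grep -r OVERLAY_NW_NAME /data/orchestrator/overlay-scripts/   # on the master VM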
    

Step 2

In the user_data.yml file, find the WEAVE_PASSWORD value and add the following property values after it:


{% if WEAVE_ENABLED is defined %}"weave_enabled": "{{ WEAVE_ENABLED }}", {% endif %}
{% if OVERLAY_NETWORK is defined %}"overlay_network": "{{ OVERLAY_NETWORK }}", {% endif %}
{% if JOIN_TOKEN is defined %}"join_token": "{{ JOIN_TOKEN }}", {% endif %}
{% if SWARM_PORT is defined %}"swarm_port": "{{ SWARM_PORT }}", {% endif %}
{% if LEADER_IP is defined %}"leader_ip": "{{ LEADER_IP }}", {% endif %}
{% if BRIDGE_SUBNET_IPV4 is defined %}"bridge_subnet_ipv4": "{{ BRIDGE_SUBNET_IPV4 }}", {% endif %}
{% if BRIDGE_GATEWAY_IPV4 is defined %}"bridge_gateway_ipv4": "{{ BRIDGE_GATEWAY_IPV4 }}", {% endif %}
{% if BRIDGE_SUBNET_IPV6 is defined %}"bridge_subnet_ipv6": "{{ BRIDGE_SUBNET_IPV6 }}", {% endif %}
{% if BRIDGE_GATEWAY_IPV6 is defined %}"bridge_gateway_ipv6": "{{ BRIDGE_GATEWAY_IPV6 }}", {% endif %}

ISSM Upgrade in Overlay Setup

To perform the ISSM upgrade in an overlay network setup, maintain seven swarm manager VMs and keep the remaining VMs as swarm workers.

Before you begin
  • Run the docker node demote <list of swarm manager VM names> command to demote the swarm manager VMs before redeploying them.

  • During ISSM upgrade or redeployment, a single set must not contain VMs that are marked Leader/Reachable in the MANAGER STATUS column together with VMs whose MANAGER STATUS is empty. Put the VMs with an empty value in one set and the Leader/Reachable VMs in another set.

Procedure

Step 1

Use the docker node ls command to view the swarm setup details of the VMs:

cps@WPS-DRA-master:~$ docker node ls
ID                            HOSTNAME                    STATUS    AVAILABILITY   MANAGER STATUS   
o83ancg9iy9a1ezqgcyhsu2z2     WPS-DRA-control-0           Ready     Active         Leader           
jj0tmks06n6nemxhrbvo6x9ir     WPS-DRA-control-1           Ready     Active         Reachable        
qh3t8vk12kqcwhnd5xvn0y62p     WPS-DRA-dra-director-1      Ready     Active         Reachable        
pq9tf4bslnnhzqejjwurr9q5b     WPS-DRA-dra-director-2      Ready     Active         Reachable        
yyxvcwc1dgchz6gzbqkyavuxg     WPS-DRA-dra-distributor-1   Ready     Active                          
vnt2pdvn7jfzw3cnjxbgpv3bt     WPS-DRA-dra-worker-1        Ready     Active         Reachable        
ifg80p0jwmnwjf7wnk9d6vz61     WPS-DRA-dra-worker-2        Ready     Active         Reachable        
gjgnp3s34agpkoue83wqobhju *   WPS-DRA-master              Ready     Active         Reachable  
cps@WPS-DRA-master:~$

Step 2

Log in to the master VM and enter the docker node demote WPS-DRA-master command to demote it to a swarm worker.

cps@WPS-DRA-master:~$ docker node demote WPS-DRA-master
 

Step 3

In the deployer VM, delete the master VM using this command:

root@ubuntu:/data/deployer/envs# cps delete dra-vnf WPS-DRA-master

Step 4

Run the docker swarm join-token manager command on any one of the control VMs to get the manager token and leader IP. Update the base.env file on the deployer VM.

cps@WPS-DRA-control-0:~$ docker swarm join-token manager
result:
docker swarm join --token SWMTKN-1-4ptz26lhvvw5y4y0hqwnbpo3n1ovxknomohutf7muepr9y8guf-8l3w5f09wi26vs0spgp6pmfcg 192.168.00.00:0000

Step 5

In the deployer VM, install a new master VM using the cps install dra-vnf WPS-DRA-master command, and then remove the old master VM entry from the docker node list:

  1. Ensure that the system is healthy.

  2. Remove the old master VM entry from the docker node list using this command:

    cps@WPS-DRA-master:~$ docker node rm <old node ID that is now in Down status>

cps@WPS-DRA-master:~$ docker node ls
ID                            HOSTNAME                    STATUS    AVAILABILITY   MANAGER STATUS  
xslmmoig3oori0lsubgbazlsf     WPS-DRA-control-0           Ready     Active         Reachable       
pb5ppz6o8iop5ifabxs8bnzd0     WPS-DRA-control-1           Ready     Active         Reachable       
ow8e1li67kneu91766vsndari     WPS-DRA-dra-director-1      Ready     Active         Leader          
m2vbfprnw0ivkpdt9cpxpp96t     WPS-DRA-dra-director-2      Ready     Active         Reachable       
7uo7c7ru0hl55g7r23jlkra9y     WPS-DRA-dra-distributor-1   Ready     Active                         
k7pdag4jv50r0hws0x262nbxa     WPS-DRA-dra-worker-1        Ready     Active         Reachable       
v9hsifa9ck7tmvb6aoputieal     WPS-DRA-dra-worker-2        Ready     Active         Reachable       
gjgnp3s34agpkoue83wqobhju *   WPS-DRA-master              Ready     Active         Reachable       
q1ey4ntzs00bpwmh9whfdjku4     WPS-DRA-master              Down      Active         		  

For example, enter the following command for the ID shown above:
cps@WPS-DRA-master:~$ docker node rm q1ey4ntzs00bpwmh9whfdjku4
Here is the output after removing the old master entry:
cps@WPS-DRA-master:~$ docker node ls
ID                            HOSTNAME                    STATUS    AVAILABILITY   MANAGER STATUS   
xslmmoig3oori0lsubgbazlsf     WPS-DRA-control-0           Ready     Active         Reachable        
pb5ppz6o8iop5ifabxs8bnzd0     WPS-DRA-control-1           Ready     Active         Reachable        
ow8e1li67kneu91766vsndari     WPS-DRA-dra-director-1      Ready     Active         Leader           
m2vbfprnw0ivkpdt9cpxpp96t     WPS-DRA-dra-director-2      Ready     Active         Reachable        
7uo7c7ru0hl55g7r23jlkra9y     WPS-DRA-dra-distributor-1   Ready     Active                          
k7pdag4jv50r0hws0x262nbxa     WPS-DRA-dra-worker-1        Ready     Active         Reachable        
v9hsifa9ck7tmvb6aoputieal     WPS-DRA-dra-worker-2        Ready     Active         Reachable        
gjgnp3s34agpkoue83wqobhju *   WPS-DRA-master              Ready     Active         Reachable       

Step 6

Run the following command on the master VM to get the manager token and leader IP. Update the base.env file on the deployer VM.

cps@WPS-DRA-master:~$ docker swarm join-token manager
 docker swarm join --token SWMTKN-1-4ptz26lhvvw5y4y0hqwnbpo3n1ovxknomohutf7muepr9y8guf-8l3w5f09wi26vs0spgp6pmfcg 192.168.00.00:0001

Step 7

A single ISSM set should not contain VMs marked Leader/Reachable in the MANAGER STATUS column together with VMs whose MANAGER STATUS is empty. Run the following commands to demote and redeploy the swarm manager VMs:


cps@WPS-DRA-master:~$ docker node demote WPS-DRA-control-0 WPS-DRA-dra-director-1 WPS-DRA-dra-worker-1

root@ubuntu:/data/deployer/envs# cps delete dra-vnf WPS-DRA-control-0 WPS-DRA-dra-director-1 WPS-DRA-dra-worker-1
root@ubuntu:/data/deployer/envs# cps install dra-vnf WPS-DRA-control-0 WPS-DRA-dra-director-1 WPS-DRA-dra-worker-1

cps@WPS-DRA-master:~$ docker node ls
ID                            HOSTNAME                    STATUS    AVAILABILITY   MANAGER STATUS   
o83ancg9iy9a1ezqgcyhsu2z2     WPS-DRA-control-0           Ready     Active         Reachable        
xslmmoig3oori0lsubgbazlsf     WPS-DRA-control-0           Down      Active         		
pb5ppz6o8iop5ifabxs8bnzd0     WPS-DRA-control-1           Ready     Active         Reachable        
ow8e1li67kneu91766vsndari     WPS-DRA-dra-director-1      Down      Active         		
qh3t8vk12kqcwhnd5xvn0y62p     WPS-DRA-dra-director-1      Ready     Active         Reachable        
m2vbfprnw0ivkpdt9cpxpp96t     WPS-DRA-dra-director-2      Ready     Active         Leader           
7uo7c7ru0hl55g7r23jlkra9y     WPS-DRA-dra-distributor-1   Ready     Active                          
k7pdag4jv50r0hws0x262nbxa     WPS-DRA-dra-worker-1        Down      Active         		
vnt2pdvn7jfzw3cnjxbgpv3bt     WPS-DRA-dra-worker-1        Ready     Active         Reachable        
v9hsifa9ck7tmvb6aoputieal     WPS-DRA-dra-worker-2        Ready     Active         Reachable        
gjgnp3s34agpkoue83wqobhju *   WPS-DRA-master              Ready     Active         Reachable        
For example, enter this command for the given set of IDs:
cps@WPS-DRA-master:~$ docker node rm xslmmoig3oori0lsubgbazlsf ow8e1li67kneu91766vsndari k7pdag4jv50r0hws0x262nbxa
Here is the sample output:
cps@WPS-DRA-master:~$ docker node ls
ID                            HOSTNAME                    STATUS    AVAILABILITY   MANAGER STATUS   
o83ancg9iy9a1ezqgcyhsu2z2     WPS-DRA-control-0           Ready     Active         Reachable        
pb5ppz6o8iop5ifabxs8bnzd0     WPS-DRA-control-1           Ready     Active         Reachable        
qh3t8vk12kqcwhnd5xvn0y62p     WPS-DRA-dra-director-1      Ready     Active         Reachable        
m2vbfprnw0ivkpdt9cpxpp96t     WPS-DRA-dra-director-2      Ready     Active         Leader           
7uo7c7ru0hl55g7r23jlkra9y     WPS-DRA-dra-distributor-1   Ready     Active                          
vnt2pdvn7jfzw3cnjxbgpv3bt     WPS-DRA-dra-worker-1        Ready     Active         Reachable        
v9hsifa9ck7tmvb6aoputieal     WPS-DRA-dra-worker-2        Ready     Active         Reachable        
gjgnp3s34agpkoue83wqobhju *   WPS-DRA-master              Ready     Active         Reachable      

Step 8

It is important that the ISSM does not have VMs marked Leader/Reachable and VMs with an empty MANAGER STATUS column in the same set. Ensure that the system is healthy and start the upgrade of another set of ISSM manager VMs.

cps@WPS-DRA-master:~$ docker node demote WPS-DRA-control-1 WPS-DRA-dra-director-2 WPS-DRA-dra-worker-2

root@ubuntu:/data/deployer/envs# cps delete dra-vnf WPS-DRA-control-1 WPS-DRA-dra-director-2 WPS-DRA-dra-worker-2
root@ubuntu:/data/deployer/envs# cps install dra-vnf WPS-DRA-control-1 WPS-DRA-dra-director-2 WPS-DRA-dra-worker-2
cps@WPS-DRA-master:~$ docker node ls
ID                            HOSTNAME                    STATUS    AVAILABILITY   MANAGER STATUS
o83ancg9iy9a1ezqgcyhsu2z2     WPS-DRA-control-0           Ready     Active         Leader         
pb5ppz6o8iop5ifabxs8bnzd0     WPS-DRA-control-1           Down      Active         		  
jj0tmks06n6nemxhrbvo6x9ir     WPS-DRA-control-1           Ready     Active         Reachable      
qh3t8vk12kqcwhnd5xvn0y62p     WPS-DRA-dra-director-1      Ready     Active         Reachable       
m2vbfprnw0ivkpdt9cpxpp96t     WPS-DRA-dra-director-2      Down      Active                         
pq9tf4bslnnhzqejjwurr9q5b     WPS-DRA-dra-director-2      Ready     Active         Reachable       
7uo7c7ru0hl55g7r23jlkra9y     WPS-DRA-dra-distributor-1   Ready     Active                         
vnt2pdvn7jfzw3cnjxbgpv3bt     WPS-DRA-dra-worker-1        Ready     Active         Reachable       
v9hsifa9ck7tmvb6aoputieal     WPS-DRA-dra-worker-2        Down      Active         	          
ifg80p0jwmnwjf7wnk9d6vz61     WPS-DRA-dra-worker-2        Ready     Active         Reachable	 
gjgnp3s34agpkoue83wqobhju *   WPS-DRA-master              Ready     Active         Reachable      

Remove the old VM entries that are in down status:

cps@WPS-DRA-master:~$ docker node rm pb5ppz6o8iop5ifabxs8bnzd0 m2vbfprnw0ivkpdt9cpxpp96t v9hsifa9ck7tmvb6aoputieal
 

Here is the output:

cps@WPS-DRA-master:~$ docker node ls
ID                            HOSTNAME                    STATUS    AVAILABILITY   MANAGER STATUS   
o83ancg9iy9a1ezqgcyhsu2z2     WPS-DRA-control-0           Ready     Active         Leader           
jj0tmks06n6nemxhrbvo6x9ir     WPS-DRA-control-1           Ready     Active         Reachable        
qh3t8vk12kqcwhnd5xvn0y62p     WPS-DRA-dra-director-1      Ready     Active         Reachable        
pq9tf4bslnnhzqejjwurr9q5b     WPS-DRA-dra-director-2      Ready     Active         Reachable        
7uo7c7ru0hl55g7r23jlkra9y     WPS-DRA-dra-distributor-1   Ready     Active                          
vnt2pdvn7jfzw3cnjxbgpv3bt     WPS-DRA-dra-worker-1        Ready     Active         Reachable        
ifg80p0jwmnwjf7wnk9d6vz61     WPS-DRA-dra-worker-2        Ready     Active         Reachable	  
gjgnp3s34agpkoue83wqobhju *   WPS-DRA-master              Ready     Active         Reachable

Step 9

Ensure that the system is healthy and start the upgrade of another set of ISSM swarm worker VMs. For this ISSM upgrade, use the worker join-token for the non-swarm-manager VMs. To get the worker token, run the docker swarm join-token worker command on the master and update the base.env file:

cps@WPS-DRA-master:~$ docker swarm join --token SWMTKN-1-4ptz26lhvvw5y4y0hqwnbpo3n1ovxknomohutf7muepr9y8guf-btjzf1trrnu5lpnzj8mletxg4 192.168.00.00:0002

root@ubuntu:/data/deployer/envs# cps delete dra-vnf WPS-DRA-dra-distributor-1
root@ubuntu:/data/deployer/envs# cps install dra-vnf WPS-DRA-dra-distributor-1

Note

 

The sample setup has one swarm worker VM, but other setups can have multiple swarm worker VMs; add them as required.

cps@WPS-DRA-master:~$ docker node ls
ID                            HOSTNAME                    STATUS    AVAILABILITY   MANAGER STATUS
o83ancg9iy9a1ezqgcyhsu2z2     WPS-DRA-control-0           Ready     Active         Leader         
jj0tmks06n6nemxhrbvo6x9ir     WPS-DRA-control-1           Ready     Active         Reachable      
qh3t8vk12kqcwhnd5xvn0y62p     WPS-DRA-dra-director-1      Ready     Active         Reachable       
pq9tf4bslnnhzqejjwurr9q5b     WPS-DRA-dra-director-2      Ready     Active         Reachable       
7uo7c7ru0hl55g7r23jlkra9y     WPS-DRA-dra-distributor-1   Down      Active                         
yyxvcwc1dgchz6gzbqkyavuxg     WPS-DRA-dra-distributor-1   Ready     Active                         
vnt2pdvn7jfzw3cnjxbgpv3bt     WPS-DRA-dra-worker-1        Ready     Active         Reachable       
ifg80p0jwmnwjf7wnk9d6vz61     WPS-DRA-dra-worker-2        Ready     Active         Reachable       
gjgnp3s34agpkoue83wqobhju *   WPS-DRA-master              Ready     Active         Reachable      
Remove the old VM entries that are in down status:
cps@WPS-DRA-master:~$ docker node rm 7uo7c7ru0hl55g7r23jlkra9y
 

Here is the output:

cps@WPS-DRA-master:~$ docker node ls
ID                            HOSTNAME                    STATUS    AVAILABILITY   MANAGER STATUS 
o83ancg9iy9a1ezqgcyhsu2z2     WPS-DRA-control-0           Ready     Active         Leader        
jj0tmks06n6nemxhrbvo6x9ir     WPS-DRA-control-1           Ready     Active         Reachable       
qh3t8vk12kqcwhnd5xvn0y62p     WPS-DRA-dra-director-1      Ready     Active         Reachable       
pq9tf4bslnnhzqejjwurr9q5b     WPS-DRA-dra-director-2      Ready     Active         Reachable       
yyxvcwc1dgchz6gzbqkyavuxg     WPS-DRA-dra-distributor-1   Ready     Active                         
vnt2pdvn7jfzw3cnjxbgpv3bt     WPS-DRA-dra-worker-1        Ready     Active         Reachable       
ifg80p0jwmnwjf7wnk9d6vz61     WPS-DRA-dra-worker-2        Ready     Active         Reachable      
gjgnp3s34agpkoue83wqobhju *   WPS-DRA-master              Ready     Active         Reachable