Step 1 |
Bring down the VPN tunnel. Log in to the CSR VPN using SSH on the passive data center node, and then perform the steps that follow:
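For example, a minimal login sketch (the user name and address are deployment-specific placeholders, not values from this guide):
ssh <admin_user>@<csr_vpn_ip>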
|
Step 2 |
Power off the CSR VPN and CSR HUB VMs on the active site. Powering off both VMs automatically breaks the tunnel.
|
Step 3 |
Source the OpenStack RC file that was copied to the passive data center node:
source vms-backup/infra/openrc-passive
|
Step 4 |
Change directory to the /msx-<version>/ansible folder inside the container:
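For example, for an MSX 3.8.0 deployment (the release directory matches the path shown in the note under Step 12):
cd /msx-3.8.0/ansible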
|
Step 5 |
Switch to the passive data center:
ansible-playbook dualdc-switch-dc.yml --extra-vars '{dc: passive}'
|
Step 6 |
Stop consul replication on the passive data center:
ansible -m command -a "kubectl delete -f /etc/kube-manifests/consul-replicate-rc.yml" kube-master[0]
|
Step 7 |
Switch NSO to become the primary instance on the passive data center:
ansible -m command -a "curl -X PUT -d 'Active' -k -H 'X-CONSUL-TOKEN:<consul_root_token>' https://consul.service.consul:8500/v1/kv/private/dc_status" kube-master[0]
The default consul_root_token: 73b72724-4570-4000-b720-de18c30fabe2
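To confirm that the key was updated, you can optionally read it back through Consul's standard KV API; the raw parameter returns the stored value directly, which should now be Active:
ansible -m command -a "curl -k -H 'X-CONSUL-TOKEN:<consul_root_token>' 'https://consul.service.consul:8500/v1/kv/private/dc_status?raw'" kube-master[0]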
|
Step 8 |
Stop platform microservices on the passive data center:
ansible -m command -a "kubectl delete -f /etc/kube-manifests/administrationservice-rc.yml -f /etc/kube-manifests/consumeservice-rc.yml -f /etc/kube-manifests/devicemanagerservice-rc.yml -f /etc/kube-manifests/manageservice-rc.yml -f /etc/kube-manifests/monitorservice-rc.yml -f /etc/kube-manifests/serviceextensionservice-rc.yml -f /etc/kube-manifests/usermanagementservice-rc.yml -f /etc/kube-manifests/orchestrationservice-rc.yml -f /etc/kube-manifests/billingservice-rc.yml -f /etc/kube-manifests/notificationservice-rc.yml" kube-master[0]
|
Step 9 |
Start user management on the passive data center and verify that it is up and running (orchestration needs it to read ncs streams):
ansible -m command -a "kubectl create -f /etc/kube-manifests/usermanagementservice-rc.yml" kube-master[0]
|
Step 10 |
Ensure that the user management service is up:
ansible -m command -a "curl -X GET http://routerservice.service.consul:8765/idm/admin/health" kube-master[0]
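If the service is still coming up, you can poll the endpoint until it responds successfully. This retry loop is a sketch only; the 30 attempts at 10-second intervals are arbitrary values:
ansible -m shell -a 'for i in $(seq 1 30); do curl -sf http://routerservice.service.consul:8765/idm/admin/health && break; sleep 10; done' kube-master[0]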
|
Step 11 |
Start the platform microservices on the passive data center:
ansible -m command -a "kubectl create -f /etc/kube-manifests/administrationservice-rc.yml -f /etc/kube-manifests/consumeservice-rc.yml -f /etc/kube-manifests/devicemanagerservice-rc.yml -f /etc/kube-manifests/manageservice-rc.yml -f /etc/kube-manifests/monitorservice-rc.yml -f /etc/kube-manifests/serviceextensionservice-rc.yml -f /etc/kube-manifests/orchestrationservice-rc.yml -f /etc/kube-manifests/billingservice-rc.yml -f /etc/kube-manifests/notificationservice-rc.yml" kube-master[0]
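Optionally, confirm that the platform pods have come up before continuing (the vms namespace matches the pod check shown later in this procedure):
ansible -m shell -a "kubectl -n vms get pod" kube-master[0]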
|
Step 12 |
Restore the AO backup to the passive side:
ansible-playbook backup-restore-ao.yml --extra-vars "{ BR_mode: restore, backup_tag: (tag from vms-backup dir) }"
Note
|
The backup tag (`tag from vms-backup dir`) is determined by the admin and is located in the `/msx-3.8.0/ansible/vms-backup/`
directory.
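For example, a quick way to list the available tags:
ls /msx-3.8.0/ansible/vms-backup/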
|
|
Step 13 |
Stop the data platform probes on the passive data center:
ansible -m command -a "kubectl delete -f /etc/kube-manifests/heartbeat-cm.yml -f /etc/kube-manifests/sshbeat-cm.yml -f /etc/kube-manifests/snmpbeat-cm.yml -f /etc/kube-manifests/heartbeat-ps.yml -f /etc/kube-manifests/sshbeat-ps.yml -f /etc/kube-manifests/snmpbeat-ps.yml" kube-master[0]
|
Step 14 |
Start the data platform probes on the passive data center:
ansible -m command -a "kubectl create -f /etc/kube-manifests/heartbeat-cm.yml -f /etc/kube-manifests/sshbeat-cm.yml -f /etc/kube-manifests/snmpbeat-cm.yml -f /etc/kube-manifests/heartbeat-ps.yml -f /etc/kube-manifests/sshbeat-ps.yml -f /etc/kube-manifests/snmpbeat-ps.yml" kube-master[0]
|
Step 15 |
As a superuser, get the IDM token on the passive data center:
ansible -m command -a "curl -X POST http://routerservice.service.consul:8765/idm/api/v1/accesstoken -H 'accept: application/json' -H 'cache-control: no-cache' -H 'content-type: application/json' -d '{\"granttype\": \"password\", \"password\": \"<superuser_password>\", \"scope\": \"write\", \"username\": \"superuser\"}'" kube-master[0]
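The response is JSON. If jq is available on the node, you can extract the token directly (a sketch only; the token field name is an assumption and may differ in your release):
ansible -m shell -a 'curl -s -X POST http://routerservice.service.consul:8765/idm/api/v1/accesstoken -H "content-type: application/json" -d "{\"granttype\": \"password\", \"password\": \"<superuser_password>\", \"scope\": \"write\", \"username\": \"superuser\"}" | jq -r .token' kube-master[0]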
|
Step 16 |
Push the probe configuration on the passive data center:
ansible -m command -a "curl -X POST -H 'authorization: Bearer <token from previous step>' http://devicemanagerservice.service.consul:9104/devicemanagerservice/v1/admin/devicemetrics/start" kube-master[0]
|
Step 17 |
For the Cisco MSX SD-Branch service pack, run the following command:
ansible -m command -a "kubectl delete -f /etc/kube-manifests/nso-vbranch-ps.yml" kube-master[0]
|
Step 18 |
For the Cisco MSX Managed Device service pack, run the following command:
ansible -m command -a "kubectl delete -f /etc/kube-manifests/nso-manageddevice-ps.yml" kube-master[0]
|
Step 19 |
For the SD-Branch service pack, run the following command to set the NSO replica count in the manifest to 2:
ansible -m command -a "sed -i 's/replicas: 1/replicas: 2/g' /etc/kube-manifests/nso-vbranch-ps.yml" kube-master
|
Step 20 |
For the Cisco MSX Managed Device service pack, run the following command to set the NSO replica count in the manifest to 2:
ansible -m command -a "sed -i 's/replicas: 1/replicas: 2/g' /etc/kube-manifests/nso-manageddevice-ps.yml" kube-master
|
Step 21 |
In the following playbook, specify the service packs that are installed:
ansible-playbook deploy-nso-consul-cleanup.yml --extra-vars '{servicepack_list: ["vbranch", "manageddevice", "cloudutd"]}'
|
Step 22 |
Deploy SD-Branch ncs if the SD-Branch service pack is installed:
ansible -m command -a "kubectl create -f /etc/kube-manifests/nso-vbranch-ps.yml" kube-master[0]
ansible -m command -a "sed -i 's/replicas: 1/replicas: 2/g" kube-master
|
Step 23 |
Deploy Managed Device ncs if the Managed Device service pack is installed:
ansible -m command -a "kubectl create -f /etc/kube-manifests/nso-manageddevice-ps.yml" kube-master[0]
|
Step 24 |
Stop the SD-Branch service pack microservices if SD-Branch is installed:
ansible -m command -a "kubectl delete -f /etc/kube-manifests/statemachineservice-rc.yml -f /etc/kube-manifests/vbranchservice-rc.yml" kube-master[0]
|
Step 25 |
Stop the Managed Device service pack microservice if Managed Device is installed:
ansible -m command -a "kubectl delete -f /etc/kube-manifests/manageddeviceservice-rc.yml" kube-master[0]
|
Step 26 |
Stop the SD-WAN service pack microservice if SD-WAN is installed:
ansible -m command -a "kubectl delete -f /etc/kube-manifests/sdwanservice-rc.yml" kube-master[0]
|
Step 27 |
Start the SD-Branch service pack microservices if SD-Branch is installed:
ansible -m command -a "kubectl create -f /etc/kube-manifests/statemachineservice-rc.yml -f /etc/kube-manifests/vbranchservice-rc.yml" kube-master[0]
|
Step 28 |
Start the Managed Device service pack microservice if Managed Device is installed:
ansible -m command -a "kubectl create -f /etc/kube-manifests/manageddeviceservice-rc.yml" kube-master[0]
|
Step 29 |
Start the SD-WAN service pack microservice if SD-WAN is installed:
ansible -m command -a "kubectl create -f /etc/kube-manifests/sdwanservice-rc.yml" kube-master[0]
|
Step 30 |
Run the following playbook to update the DNS entries:
ansible-playbook create-infra.yml --tags route53,server
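After the playbook completes, you can verify the updated records with a DNS lookup (the FQDN is a deployment-specific placeholder):
nslookup <msx_portal_fqdn>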
After the failover, execute the following steps before proceeding with service operations:
-
Update the PNP hosts of all ENCS devices to point to the Edge Node IP address for the passive data center. Log in to the ENCS NFVIS system, and provide the PNP server IP address or FQDN for the passive data center in the Host Plug-n-Play settings.
Note
|
This step is required only when the NFVIS version used is lower than 3.8.1 and the IP address is provided for PNP.
|
If you are using NFVIS 3.8.1, which has FQDN support, the FQDN can be provided directly (for example, orange-customer.com)
and does not need to be updated after the failover.
-
Verify that the new passive data center NAT IP addresses have been opened out for vOrchestrator connectivity before creating
control planes.
-
For Managed Device:
After the failover, perform the following step on NSO through ncs_cli:
-
Verify that the devices are in sync:
request devices check-sync
-
If the devices are out-of-sync, run this command:
request devices sync-from
Note
|
After the last step is performed, ensure that the ncs for SD-Branch, Managed Device, or both is ready. To check, enter this
command:
|
ansible -m shell -a "kubectl -n vms get pod | grep nso" kube-master[0]
If ncs is ready, the output is similar to the following:
nso-manageddevice-0 3/3 Running
nso-vbranch-0 3/3 Running
|
Step 31 |
Source the OpenStack RC file for the active data center that was copied to the passive data center node:
source vms-backup/infra/openrc-active
|
Step 32 |
Switch to the active data center:
ansible-playbook dualdc-switch-dc.yml --extra-vars '{dc: active}'
|
Step 33 |
Stop the SD-Branch service pack microservices if SD-Branch is installed:
ansible -m command -a "kubectl delete -f /etc/kube-manifests/statemachineservice-rc.yml -f /etc/kube-manifests/vbranchservice-rc.yml" kube-master[0]
|
Step 34 |
Stop the Managed Device service pack microservice if Managed Device is installed:
ansible -m command -a "kubectl delete -f /etc/kube-manifests/manageddeviceservice-rc.yml" kube-master[0]
|
Step 35 |
Scale the SD-Branch NSO replica count to 0 on the inactive (passive) data center if the SD-Branch service pack is installed:
ansible -m command -a "kubectl scale --replicas=0 -f /etc/kube-manifests/nso-vbranch-ps.yml" kube-master[0]
|
Step 36 |
Scale the Managed Device NSO replica count to 0 on the inactive (passive) data center if the Managed Device service pack is installed:
ansible -m command -a "kubectl scale --replicas=0 -f /etc/kube-manifests/nso-manageddevice-ps.yml" kube-master[0]
|
Step 37 |
Log in to the MSX Portal and create a tenant. For more information, see ‘Logging in to the Portal’ and ‘Managing Tenants’
in the Cisco Managed Services Accelerator (MSX) Platform User Guide.
|