Cisco Elastic Services Controller supports two types of upgrades: a backup and restore upgrade, and an in-service upgrade.
You can upgrade the ESC instance as a standalone instance or as a high availability pair. The upgrade procedure is different for a standalone instance and a high availability pair.
IMPORTANT NOTES
ESC portal now displays the notification data that was present in the database, even after the upgrade. This feature is supported only from ESC 2.1. If you are upgrading from 1.1 to 2.1 or later, you will not be able to see the notifications from the 1.1 release on the ESC portal as this data was not present in the database.
After upgrading to the new ESC version, ESC service will manage the life cycle of all VNFs deployed in the previous release. To apply any new features (with new data models) to the existing VNFs, you must undeploy and redeploy these VNFs.
To upgrade a standalone ESC instance, perform the following tasks:
Back up the ESC database. For more information, see Backup the Database for ESC Standalone Instance.
Redeploy the ESC instance. For more information, see the below section, Deploy the ESC for Upgrade.
Restore the ESC database on the new ESC instance. For more information, see the below section, Restoring the ESC Database.
Note: In OpenStack, if the old ESC VM was assigned a floating IP address, the new ESC VM should be associated with the same floating IP address after the installation.
Restore the ESC database on the new ESC instance, using the following procedure:
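A minimal sketch of that restore flow on the new standalone VM, assuming the backup archive has already been copied to /tmp/db.tar.bz2 (the commands are the same ones used in the HA procedure below; the exact sequence can vary by release):
$ sudo bash
$ escadm stop
$ escadm status
$ escadm restore --file /tmp/db.tar.bz2
$ escadm restart
$ escadm status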
To upgrade ESC HA nodes, perform the following tasks:
Back up the database from an old ESC HA primary instance. For more information, see Backup the Database from the ESC HA Instances.
Deploy new ESC HA nodes based on new ESC version. For more information, see the below section, Deploy the ESC HA nodes for Upgrade.
Restore the database on the primary ESC instance (the standby ESC instance will sync with the primary ESC instance). For more information, see the below section, Restoring the ESC Database on New Master and Standby Instances.
Step 1 | Connect to the standby ESC instance using SSH.
$ ssh USERNAME@ESC_STANDBY_IP
Step 2 | Verify that the ESC instance is standby and note the name of the standby ESC HA instance:
$ escadm status
If the output value shows "BACKUP", the node is the standby ESC node.
Step 3 | Shut down the standby ESC instance through OpenStack Kilo/Horizon using the Nova command. For ESC VM instances based in VMware vSphere, shut down the standby instance through the VMware client dashboard. An example of shutting down the standby ESC instance in OpenStack is shown below:
$ nova stop NEW_ESC_STANDBY_ID
Step 4 | Connect to the primary ESC instance using SSH.
$ ssh USERNAME@ESC_MASTER_IP
Step 5 | Switch to the root user.
$ sudo bash
Step 6 | Verify that the ESC instance is primary.
$ escadm status
If the output value shows 'MASTER', the node is the master ESC node.
Step 7 | Stop the ESC services on the master node and verify the status to ensure the services are stopped.
$ escadm stop
$ escadm status
Step 8 | Restore the database files.
$ escadm restore --file /tmp/db.tar.bz2
Depending on your release, the backup file can also be given as a remote location of the form scp://<username>:<password>@<backup_ip>:<filename>.
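If you prefer to stage the backup archive on the local VM first, a plain scp copy from the backup host works (a sketch; the remote path is a placeholder):
$ scp <username>@<backup_ip>:<filename> /tmp/db.tar.bz2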
Step 9 | Reboot the VM to restart the full ESC service:
$ escadm restart
Step 10 | Use the escadm status command to check the status of the ESC service.
Step 11 | Start the standby ESC node.
Power on the standby ESC node through OpenStack Nova/Horizon or VMware client. After starting the standby node, the ESC HA upgrade process should be complete.
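For example, on OpenStack you can start the standby instance noted in Step 2:
$ nova start NEW_ESC_STANDBY_ID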
Step 12 | Delete the old HA instances through OpenStack Nova/Horizon or VMware client. An example of deleting the VMs on OpenStack is shown below:
$ nova delete OLD_ESC_MASTER_RENAMED OLD_ESC_STANDBY_RENAMED
In ESC 2.1 and earlier, mapping the actions and metrics defined in the datamodel to the valid actions and metrics available in the monitoring agent is enabled using the dynamic_mappings.xml file. The file is stored in the ESC VM and can be modified using a text editor. ESC 2.2 and later do not have an esc-dynamic-mapping directory and dynamic_mappings.xml file; the CRUD operations for mapping the actions and the metrics are available through the REST API.
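For illustration only, listing the current action and metric mappings over the REST API might look like the following; the URL below is a placeholder, not the documented resource path, so consult the ESC REST API reference for the exact endpoint and port:
$ curl -k -u <username>:<password> https://<esc_vm_ip>:<port>/<dynamic_mapping_resource>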
To upgrade the VNF monitoring rules, you must back up the dynamic_mappings.xml file and then restore the file in the upgraded ESC VM. For more information, see the backup and restore procedures. For upgrade of HA instance, see Upgrading HA instance. For upgrade of the standalone instance, see Upgrading Standalone instance.
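For example, the file can be copied off the old VM before the upgrade and copied back afterwards (a sketch; the path shown follows the esc-dynamic-mapping directory mentioned above and may differ in your installation):
$ scp <username>@<old_esc_vm_ip>:/opt/cisco/esc/esc-dynamic-mapping/dynamic_mappings.xml .
$ scp dynamic_mappings.xml <username>@<new_esc_vm_ip>:/opt/cisco/esc/esc-dynamic-mapping/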
Use this procedure to upgrade the ESC high-availability nodes one node at a time with a minimum service interruption. This process leverages the ESC HA replication and failover capability to smoothly move ESC service to the new upgraded node without the manual database restore.
Step 1 | Back up the ESC database and log files.
Step 2 | Log in to the ESC HA secondary VM and stop the services.
$ sudo escadm stop
Step 3 | Make sure the secondary ESC VM is in the STOP state.
$ escadm status --v
If ESC status=0, ESC HA is stopped.
Step 4 | Copy the rpm file to the secondary ESC VM and execute the rpm command for the upgrade:
$ sudo rpm -Uvh /home/admin/cisco-esc-2.2.9-50.rpm
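To confirm the package version after the upgrade, you can query the installed RPM (a generic check; the grep pattern is an assumption about the package name):
$ rpm -qa | grep -i esc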
Step 5 | Log in to the primary instance and set the ESC primary node into maintenance mode.
$ escadm op_mode set --mode=maintenance
Make sure there are no in-flight transactions and no new transactions during the upgrade. From ESC 2.3, you may use the following command to check for in-flight transactions:
$ escadm ip_trans
For builds older than ESC 2.3, check the escmanager log and make sure no new transactions are recorded in this log file. The log file is located at /var/log/esc/escmanager.log.
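One simple way to watch for new activity in the escmanager log during the maintenance window (a generic check, not an ESC-specific command):
$ tail -f /var/log/esc/escmanager.log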
Step 6 | In the OpenStack controller, power off the ESC primary node and make sure it is completely shut down by OpenStack.
$ nova stop <primary_vm_name>
$ nova show <primary_vm_name>
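For example, to confirm the power state from the command output (grep is used here only to filter the status field):
$ nova show <primary_vm_name> | grep status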
Step 7 | Log in to the upgraded ESC instance (the previous secondary one), and start the ESC service. The upgraded VM will take over the primary role and provide ESC service.
$ escadm restart
Step 8 | Check the ESC version on the new primary instance to verify the upgraded version is correct. Once it is in the Primary state, make sure the ESC service is running properly in the new Primary VM.
$ escadm status
Expected output: 0 ESC status=0 ESC Master Healthy
$ esc_version
Step 9 | Power on the old primary instance. In the OpenStack controller, execute the following commands:
$ nova start <primary_vm_name>
$ nova show <primary_vm_name>
Step 10 | Log in to the VM that still has the old ESC version and repeat steps 2, 3, 4, and 7 in that VM.
Step 1 | Back up the ESC database and log files.
Step 2 | Redeploy the secondary ESC instance. Register the new ESC image on the secondary instance, and wait for the data to be synchronized.
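You can watch the synchronization progress with the DRBD status check used later in this procedure; wait until both sides report UpToDate:
$ drbd-overview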
Step 3 | Stop the keepalived service on the secondary instance, power off the primary instance, and then start the keepalived service on the secondary instance.
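A sketch of that sequence, assuming keepalived is managed as an ordinary system service on the ESC VM (the exact service-management command may differ by release):
On the secondary instance: $ sudo service keepalived stop
On the OpenStack controller: $ nova stop <primary_vm_name>
Back on the secondary instance: $ sudo service keepalived start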
Step 4 | Check the ESC version on the new primary instance to verify the version is upgraded correctly.
$ escadm status (check ha status)
Expected output: 0 ESC status=0 ESC Master Healthy
$ esc_version (check esc version)
version : 3.x.x
release : xxx
Step 5 | Re-deploy the old primary instance with the new ESC image.
Delete the old primary instance and redeploy it by using the new ESC package (bootvm.py and the newly registered image).
Step 6 | Log back in to the first upgraded primary instance and check the health and keepalived state.
$ drbd-overview
Expected output: 1:esc/0 Connected Primary/Secondary UpToDate/UpToDate /opt/cisco/esc/esc_database ext4 2.9G 52M 2.7G 2%
$ escadm status (check ha status)
Expected output: 0 ESC status=0 ESC Master Healthy
$ esc_version (check esc version)
Expected output: version : 3.x.x release : xxx
$ health.sh (check esc health)
Expected output: ESC HEALTH PASSED
Use this procedure to upgrade ESC high-availability nodes with a minimum service interruption on a Kernel-based virtual machine.
Step 1 | Back up the ESC database and log files.
Step 2 | Log in to the ESC HA secondary VM and stop the ESC service.
$ escadm stop
Step 3 | Make sure the secondary ESC VM is in the STOP state.
$ escadm status --v
If ESC status=0, ESC HA is stopped.
Step 4 | In the secondary VM, execute the rpm command for the upgrade:
$ sudo rpm -Uvh /home/admin/cisco-esc-<latest rpm filename>.rpm
Step 5 | Log in to the primary instance and set the ESC primary node into maintenance mode.
$ escadm op_mode set --mode=maintenance
Make sure there are no in-flight transactions and no new transactions during the upgrade. From ESC 2.3, you may use the following command to check for in-flight transactions:
$ escadm ip_trans
For any build older than ESC 2.3, check the escmanager log for transactions at /var/log/esc/escmanager.log.
Step 6 | Power off the ESC primary node and make sure it is completely shut down. In the KVM ESC controller, execute the following commands:
$ virsh destroy <primary_vm_name>
$ virsh list --all
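You can also confirm the domain state directly with virsh:
$ virsh domstate <primary_vm_name>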
Step 7 | Log in to the upgraded ESC instance (the previous secondary one) and start the ESC service. The upgraded VM will take over the primary role and provide ESC service.
$ escadm restart
$ start esc_monitor
Step 8 | Check the ESC version on the new primary instance to verify the upgraded version is correct. Once it is in the Primary state, make sure the ESC service is running properly in the new Primary VM.
$ escadm status
Expected output: 0 ESC status=0 ESC Master Healthy
$ esc_version
$ health.sh
Expected output: ESC HEALTH PASSED
Step 9 | Power on the old primary instance. In the KVM ESC controller, execute the following command:
$ virsh start <primary_vm_name>
Step 10 | Log in to the VM that still has the old ESC version and repeat steps 2, 3, 4, and 7 in that VM.
Step 1 | Back up the ESC database and log files.
Step 2 | Redeploy the secondary ESC instance. Register the new ESC image on the secondary instance.
Step 3 | Stop the keepalived service on the secondary instance, power off the primary instance, and then start the keepalived service on the secondary instance.
Step 4 | Check the ESC version on the new primary instance to verify the version is upgraded correctly.
$ escadm status (check ha status)
Expected output: 0 ESC status=0 ESC Master Healthy
$ esc_version (check esc version)
version : 3.x.x
release : xxx
$ health.sh (check esc health)
Expected output: ESC HEALTH PASSED
Step 5 | Re-deploy the old primary instance with the new ESC image.
Delete the old primary instance and redeploy it by using the new ESC package (bootvm.py and the newly registered image). All other installation parameters must be the same as in the old ESC VM deployment. For example, hostname, ip address, gateway_ip, ha_node_list, kad_vip, and kad_vif must have the same values.
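For illustration, the redeployment command might look roughly like the following; the option names are placeholders derived from the parameters listed above, so check the bootvm.py help for the exact flags in your release:
$ ./bootvm.py <esc_ha_vm_name> --image <new_esc_image> --ha_node_list <node1_ip> <node2_ip> --kad_vip <vip_ip> --kad_vif <interface> ... (remaining network and gateway options identical to the original deployment)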
Step 6 | Log back in to the first upgraded primary instance and check the health and keepalived state.
$ drbd-overview
Expected output: 1:esc/0 Connected Primary/Secondary UpToDate/UpToDate /opt/cisco/esc/esc_database ext4 2.9G 52M 2.7G 2%
$ escadm status (check ha status)
Expected output: 0 ESC status=0 ESC Master Healthy
$ esc_version (check esc version)
Expected output: version : 3.x.x release : xxx
$ health.sh (check esc health)
Expected output: ESC HEALTH PASSED
Use this procedure to upgrade the ESC high-availability nodes one node at a time with a minimum service interruption. This process leverages the ESC HA replication and failover capability to smoothly move ESC service to the new upgraded node without the manual database restore.
Step 1 | Back up the ESC database and log files.
Step 2 | Log in to the ESC HA secondary VM and stop the keepalived service.
$ sudo escadm stop
Step 3 | Make sure the secondary ESC VM is in the STOP state.
$ escadm status --v
If ESC status=0, ESC HA is stopped.
Step 4 | In the secondary VM, execute the rpm command for the upgrade:
$ sudo rpm -Uvh /home/admin/cisco-esc-2.2.9-50.rpm
Step 5 | Log in to the primary instance and set the ESC primary node into maintenance mode.
$ escadm op_mode set --mode=maintenance
Make sure there are no in-flight transactions and no new transactions during the upgrade. From ESC 2.3, you may use the following command to check for in-flight transactions:
$ escadm ip_trans
For builds older than ESC 2.3, check the escmanager log and make sure no new transactions are recorded in this log file. The log file is located at /var/log/esc/escmanager.log.
Step 6 | Power off the ESC primary node. In the VMware vSphere Client, select Home > Inventory > VMs and Templates, right-click the primary instance name in the left panel, and select Power > Power Off.
Step 7 | Log in to the upgraded ESC instance (the previous secondary one), and start the keepalived service. The upgraded VM will take over the primary role and provide ESC service.
$ escadm restart
Step 8 | Check the ESC version on the new primary instance to verify the upgraded version is correct. Once it is in the Primary state, make sure the ESC service is running properly in the new Primary VM.
$ escadm status
Expected output: 0 ESC status=0 ESC Master Healthy
$ esc_version
$ health.sh
Expected output: ESC HEALTH PASSED
Step 9 | Power on the old primary instance. In the VMware vSphere Client, select Home > Inventory > VMs and Templates, right-click the primary instance name in the left panel, then select Power > Power On.
Step 10 | Log in to the VM that still has the old ESC version and repeat steps 2, 3, 4, and 7 in that VM.
Step 1 | Back up the ESC database and log files.
Step 2 | Redeploy the secondary ESC instance. Register the new ESC image on the secondary instance, and wait for the data to be synchronized.
Step 3 | Stop the keepalived service on the secondary instance, power off the primary instance, and then start the keepalived service on the secondary instance.
Step 4 | Check the ESC version on the new primary instance to verify the version is upgraded correctly.
$ escadm status --v (check ha status)
Expected output: 0 ESC status=0 ESC Master Healthy
$ esc_version (check esc version)
version : 3.x.x
release : xxx
$ health.sh (check esc health)
Expected output: ESC HEALTH PASSED
Step 5 | Re-deploy the old primary instance with the new ESC image.
Delete the old primary instance and redeploy it by using the new ESC package (bootvm.py and the newly registered image). All other installation parameters must be the same as in the old ESC VM deployment. For example, hostname, ip address, gateway_ip, ha_node_list, kad_vip, and kad_vif must have the same values. To delete, in the VMware vSphere Client, select Home > Inventory > VMs and Templates, right-click the instance name in the left panel, then select Delete from Disk.
Step 6 | Log back in to the first upgraded primary instance and check the health and keepalived state.
$ drbd-overview
Expected output: 1:esc/0 Connected Primary/Secondary UpToDate/UpToDate /opt/cisco/esc/esc_database ext4 2.9G 52M 2.7G 2%
$ escadm status (check ha status)
Expected output: 0 ESC status=0 ESC Master Healthy
$ esc_version (check esc version)
Expected output: version : 3.x.x release : xxx
$ health.sh (check esc health)
Expected output: ESC HEALTH PASSED