This chapter describes how to upgrade a VMR deployment that is integrated with V2PC.
A rolling upgrade enables you to upgrade the VMR components with minimal downtime in the VMR service, or no downtime if the database versions are compatible. A rolling upgrade is performed using the Kubernetes Deployment resource type. Deployments are a higher-level abstraction that create ReplicaSet resources. ReplicaSets watch over the pods and ensure that the correct number of replicas is always running. To update a pod, you modify the Deployment manifest; this creates a new ReplicaSet, which is scaled up while the previous ReplicaSet is scaled down.
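As a minimal illustration of this mechanism (the deployment, container, registry, and tag names below are placeholders, not actual VMR component names), changing a Deployment's image triggers Kubernetes to roll the pods over to a new ReplicaSet:
kubectl set image deployment/<component> <container>=<registry>/<image>:<new-tag>
kubectl rollout status deployment/<component>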
Note: If VMR is installed in Standalone Mode, refer to Upgrading VMR Standalone for instructions about upgrading your deployment.
The following considerations and prerequisites must be met prior to upgrading VMR:
The VMR AIC and image versions must be compatible: the image version must correspond to the leading digits of the AIC version. For example, if the VMR AIC version is 1.3.100601, you must upgrade to image version 1.3.1006.
VMR must be installed and fully configured (V2PC, VMR AIC and CDVR MFC), with all applications having a status of In Service.
The existing database must be upgraded to the new schema if the existing version and the new version are different. See Update Database Schema for instructions.
The impact of the database upgrade on VMR microservices, and the migration strategy, must be evaluated for every upgrade version. If the newer database schema is not backward compatible with the older schema (for example, if table columns are removed or indices are altered), there will be downtime during the rolling upgrade. If the newer schema is backward compatible with the older version of the microservice, there is no downtime during the rolling upgrade.
Prior to performing a hitless upgrade of OpenShift 1.4 to 1.5, you must run a script that relaxes the security context rules for the zookeeper and fluentd pods, which depend on hostPath access. The script is available at the following location in the AIC: platform/resources/config/securitycontext/vmr-scc.yaml
Note: If the script is not available in this location, contact Cisco support and request the vmr-scc.yaml script. Be sure the KUBECONFIG environment variable has been set prior to running the script.
Use the following commands to run the script:
oc project vmr
kubectl apply -f vmr-scc.yaml
Perform the following procedures to upgrade VMR from version 1.3.1_017 to later versions.
This upgrade assumes changes to the database schema.
Perform the following procedure to log in to the V2PC master node, set the KUBECONFIG variable, and confirm that the kubectl client and server versions match.
Step 1 | Log in to the V2PC Launcher node using the root user:
ssh root@<v2p-launcher-ip>
Step 2 | From the V2P Launcher, SSH into the V2PC master node that is running the AICM process leader:
ssh -i /root/.ssh/v2pcssh.key v2pc@<v2p-master-ip>
If necessary, use one of the following methods to find the V2PC AICM master:
- Consul UI
- Command line (see the sketch following this list)
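One possible command-line check, assuming the V2PC masters run Consul with its HTTP API reachable on the default port (8500); the service name aicm is an assumption and may differ in your deployment:
curl -s http://<v2p-master-ip>:8500/v1/health/service/aicm?passing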
Step 3 | On the V2PC Master node, set the KUBECONFIG variable using the appropriate Unmanaged Platform Instance Controller (upic) instance name and version:
Example:
ls /var/opt/cisco/v2p/v2pc/picm/controllers/cisco-k8s-upic/E2E_UMK8S_OS/1.0.29
export KUBECONFIG=/var/opt/cisco/v2p/v2pc/picm/controllers/cisco-k8s-upic/E2E_UMK8S_OS/1.0.29/kubeconfig
Step 4 | Run the following command to return the kubectl versions and confirm that the major and minor versions match:
kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:16:57Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0+776c994", GitCommit:"a9e9cf3", GitTreeState:"clean", BuildDate:"2017-01-25T21:55:19Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}
Step 5 | If the kubectl major and minor versions do not match, replace the client binary with a matching version (a sketch follows).
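The following is a hedged sketch, not a mandated procedure: it downloads a kubectl client matching the server version reported above (v1.4.0 in the example) and moves the old binary aside. The binary path and download URL pattern are assumptions; verify them for your environment.
mv /usr/bin/kubectl /usr/bin/kubectl.old
curl -Lo /usr/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.4.0/bin/linux/amd64/kubectl
chmod +x /usr/bin/kubectl
kubectl version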
Perform the following procedure to import the VMR application instance controller (AIC) package:
Step 1 | On the V2PC Repo node, create a temporary directory to hold the VMR AIC files:
mkdir cisco-vmr-<release-version>
Example:
mkdir cisco-vmr-1.3.200201
Step 2 | Copy the VMR AIC package (cisco-vmr-<release-version>.tgz) to the newly created temporary directory:
cp <path-to-file>/cisco-vmr-<release-version>.tgz cisco-vmr-<release-version>/ |
Step 3 | Import the package:
/opt/cisco/v2p/v2pc/python/v2pPkgMgr.py --import --pkgtype aic --sourcepath <path-to-dir>/cisco-vmr-<release-version>/
If successful, a message similar to the following is displayed:
Successfully imported NPM package from /home/v2pc/cisco-vmr-1.3.200201/
Archiving application package cisco-vmr version 1.3.200201 ...
Archived application package to baseurl --> http://<v2pc-repo-ip>/cisco/aic/cisco-vmr/1.3.200201
Perform the following procedure to import the media flow controller (MFC) package:
Step 1 | On the V2PC Repo node, create a temporary directory to hold the MFC files:
mkdir cdvr-mfc-<release-version> |
Step 2 | Copy the MFC package (cdvr-mfc-<release-version>.tgz) to the newly created directory:
cp <path-to-file>/cdvr-mfc-<release-version>.tgz cdvr-mfc-<release-version>/ |
Step 3 | Import the package:
/opt/cisco/v2p/v2pc/python/v2pPkgMgr.py --import --pkgtype mfc --sourcepath <path-to-dir>/cdvr-mfc-<release-version>/
If successful, a message similar to the following is displayed:
Successfully imported NPM package from /home/v2pc/cdvr-mfc-1.3.200201/
Archiving application package cdvr-mfc version 1.3.200201 ...
Archived application package to baseurl --> http://<v2pc-repo-ip>/cisco/mfc/cdvr-mfc/1.3.200201
Step 4 | Exit the V2PC repo node:
exit |
A valid docker registry running on another external node is required. Perform the following procedure to upload the new VMR container images to the docker registry.
Step 1 | From a machine with access to the VMR registry, create a directory to hold the temporary files required to load the docker images, download the container images and the load_to_registry.sh script, and then navigate to the directory.
Replace <cisco-release-tag> with the version of VMR to which you are upgrading (for example, 1.3.2_010), and <VMR registry> with the appropriate value.
mkdir cisco-vmr-<cisco-release-tag>
cd cisco-vmr-<cisco-release-tag>/
wget -r -np -nH --cut-dirs=2 --no-parent --exclude-directories=.svn --reject "index.html*, vmr_release*" http://<VMR registry>/vmr-releases/vmr-cisco-<cisco-release-tag>/rio-docker-images/
cd rio-docker-images/
wget http://<VMR registry>/vmr-releases/vmr-cisco-<cisco-release-tag>/scripts/load_to_registry.sh
cd ../../
Step 2 | Copy the downloaded docker files to any one OpenShift master node. Replace <ssh key> with the appropriate value.
scp -r -i <ssh key> cisco-vmr-<cisco-release-tag>/* root@<Openshift master ip>:/tmp/ |
Step 3 | Grant execution permissions on the script. Replace <ssh key> with the appropriate value.
ssh -i <ssh key> root@<Openshift master ip>
cd /tmp/rio-docker-images/
chmod +x load_to_registry.sh
Step 4 | Load the docker images into the local docker registry. Replace <docker-registry> with the appropriate value. The <cisco-release-tag> is the version to which you are upgrading (for example, cisco-1.3.2_010).
./load_to_registry.sh <docker-registry> <cisco-release-tag> |
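As an optional check (an addition here, not part of the original procedure), you can list the repositories in the registry through the Docker Registry v2 API to confirm that the images were pushed:
curl -s http://<docker-registry>/v2/_catalog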
A V2P upgrade docker container is required to perform the AIC and MFC upgrades. This container resides on the V2P launcher node.
If this container already exists, log in to the container and proceed to Step 6 in this section.
If the container does not exist, perform all of the steps in this section.
Step 1 | From the V2PC launcher, navigate to the directory containing the V2P upgrade files. These are the files used to create the v2p upgrade docker container on the V2P launcher node.
cd /<path-to-dir>/upgrade/ |
Step 2 | List the contents of the directory.
ls -lrt
The following files should be present:
v2p-repo-<version>.iso
v2p-upgrade-docker-<version>.tar.gz
Step 3 | Extract the contents of the v2p-upgrade-docker-<version>.tar.gz file, if you have not already done so.
tar -zxvf v2p-upgrade-docker-<version>.tar.gz |
Step 4 | Navigate to the directory containing the upgrade files:
cd v2p-upgrade-docker/ |
Step 5 | Bring up the docker container:
./setup.sh ../v2p-repo-<version>.iso /root/.ssh/v2pcssh.key <v2pc-master-ip>
If successful, the following message is displayed and you are logged in to the docker container:
Loaded image: <name>:5007/v2p-launcher:<version>
REPOSITORY TAG IMAGE ID CREATED SIZE
<name>:5007/v2p-launcher <tag> <image> <months> ago <size> MB
run docker image v2p-launcher:<version>
Step 6 | Within the v2p upgrade docker container, navigate to the directory containing the upgrade files:
cd /root/data/ |
Perform the following procedure to upgrade the VMR Application Instance Controller (AIC) instance running in the system.
Step 1 | In the v2p upgrade docker container, open the appcontroller_upgrade_apps_by_appid.json file in a text editor:
vi appcontroller_upgrade_apps_by_appid.json |
Step 2 | Modify the file with the appropriate values for uihost, pkgtype, appid and version. The updated file should be similar to the following:
cat appcontroller_upgrade_apps_by_appid.json
{
  "isoPath": "/root/data/v2p-repo-<currently-running-version>.iso",
  "type": "appcontroller",
  "ssh_key": "/root/data/v2pcssh.key",
  "action": "upgrade",
  "uihost": "<v2pc-master-ip>",
  "region": "region-0",
  "pkgtype": "aic",
  "appid": "cisco-vmr",
  "version": "1.3.200201"
}
The version value is the version to which you are upgrading.
Step 3 | Upgrade the VMR AIC:
./v2p_upgrade.sh appcontroller_upgrade_apps_by_appid.json
If successful, the following message is displayed:
All application instances of type aic/cisco-vmr update to version 1.3.200201 finished successfully
Perform the following procedure to upgrade the cDVR Media Flow Controller (MFC) instance running in the system.
Step 1 | In the v2p upgrade docker container, open the appcontroller_upgrade_apps_by_appid.json file in a text editor:
vi appcontroller_upgrade_apps_by_appid.json |
Step 2 | Modify the file with the appropriate values for uihost, pkgtype, appid, and version. The updated file should be similar to the following:
cat appcontroller_upgrade_apps_by_appid.json
{
  "isoPath": "/root/data/v2p-repo-<currently-running-version>.iso",
  "type": "appcontroller",
  "ssh_key": "/root/data/v2pcssh.key",
  "action": "upgrade",
  "uihost": "<v2pc-master-ip>",
  "region": "region-0",
  "pkgtype": "mfc",
  "appid": "cdvr-mfc",
  "version": "1.3.200201"
}
The version value is the version to which you are upgrading.
Step 3 | Upgrade the MFC:
./v2p_upgrade.sh appcontroller_upgrade_apps_by_appid.json
If successful, the following message is displayed:
All application instances of type mfc/cdvr-mfc update to version 1.3.200201 finished successfully
Step 4 | Exit the docker container:
exit |
The existing database must be upgraded to the new schema if the existing version and the new version are different. Perform the following procedure to upgrade the database schema.
Certain components may need to be scaled down prior to the upgrade. Because there may be service interruptions during the upgrade, evaluate the components to be brought down on a case-by-case basis.
Step 1 | SSH into the V2PC Launcher node as the root user:
ssh root@<v2p-launcher-ip>
Step 2 | From the V2P Launcher, SSH into the V2PC master node that is running the AICM process leader:
ssh -i /root/.ssh/v2pcssh.key v2pc@<v2p-master-ip>
If the V2P Master is not running in HA mode, use the single V2P Master IP.
Step 3 | On the V2PC Master node, set the KUBECONFIG variable as shown in the following example. Use the appropriate Unmanaged Platform Instance Controller (upic) instance name and version in the path. In the example, E2E_UMK8S_OS is the upic instance name and 1.0.29 is the upic version.
Example:
ls /var/opt/cisco/v2p/v2pc/picm/controllers/cisco-k8s-upic/E2E_UMK8S_OS/1.0.29
export KUBECONFIG=/var/opt/cisco/v2p/v2pc/picm/controllers/cisco-k8s-upic/E2E_UMK8S_OS/1.0.29/kubeconfig
Step 4 | On the V2PC Master node, determine the name and version of the VMR AIC that was created, and substitute them for <vmr-aic-name> and <vmr-aic-version> in the following path:
cd /var/opt/cisco/v2p/v2pc/aic/controllers/cisco-vmr/<vmr-aic-name>/<vmr-aic-version>/node_modules/cisco-vmr/platform/resources/config/
This location contains all the configuration files that VMR requires for setting up pods on Kubernetes.
Step 5 | From the V2P Launcher, SSH into the V2PC repo node:
ssh -i /root/.ssh/v2pcssh.key v2pc@<v2p-repo-ip>
Step 6 | Confirm that the major and minor versions of kubectl match:
kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:16:57Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0+776c994", GitCommit:"a9e9cf3", GitTreeState:"clean", BuildDate:"2017-01-25T21:55:19Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}
If the versions do not match, align the kubectl client version as described in Step 5 of the login procedure earlier in this chapter.
Step 7 | Verify the status of archive-agent Pods, Replica Sets and Deployment:
kubectl get pods | grep archive-agent
kubectl get rs | grep archive-agent
kubectl get deployment | grep archive-agent
Step 8 | Stop (scale down) all archive-agent pods:
kubectl scale --replicas=0 deployment/archive-agent
deployment "archive-agent" scaled
Step 9 | Verify the status of the scale down:
kubectl get pods | grep archive-agent
kubectl get rs | grep archive-agent
kubectl get deployments | grep archive-agent
Step 10 | Update the database schema: |
Prior to performing the VMR rolling upgrade, you must back up and apply configuration changes, and port the configuration changes to Config Maps.
If you are upgrading from VMR version 1.3.1_xxx to version 1.3.3_xxx, you need to back up and apply VMR configuration changes as described in this section.
Step 1 | Run the configSettingsBackUp.sh script to get a backup of the VMR configuration files. A report file will be generated at the provided location.
./configSettingsBackUp.sh -d /home/example_directory/reports |
Step 2 | Apply the configuration changes from the 1.3.1_xxx directory that need to be retained to the 1.3.3_xxx directory.
Prior to performing a rolling upgrade from 1.3.1_xxx to 1.3.3_016 and later, you must port configuration changes to config maps. This procedure should be executed after the VMR AIC has been upgraded, and before a manual upgrade is performed.
Some configurations in v1.3.1_xxx have been moved from the Deployment templates (*.json.template) to the corresponding Config Map templates (*-configmap.yaml) in v1.3.3_016. These include BULK_DELETE, A8_SUBSCRIBE_URL, and IDLE_CONNS, among others.
If the values for any of these configurations have been modified and are to be retained post upgrade, you must manually modify the Config Map templates to reflect the new values. Otherwise, the values from V2PC will be used in the Config Map templates. These will then be persisted by the initialization scripts during the upgrade.
During an upgrade, the corresponding value from V2PC is used only if the value for a key in a Config Map template is empty; otherwise, the value already present in the Config Map template is retained. The keys in the Config Map templates that are configured by V2PC are empty by default.
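For example, a fragment of the Archive Agent Config Map template after a manual edit might look like the following; the keys come from the table below, while the surrounding YAML structure is an assumption for illustration only:
data:
  bulkdelete: "true"    # non-empty value: retained across the upgrade
  storagetimeout: ""    # empty value: populated from V2PC during the upgrade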
The changes that need to be completed are listed in the following table.
Note: The values listed in the table are for reference only.
Component | Config Map Template | Configuration to be Modified | Configuration Key | Configuration Value
---|---|---|---|---
A8 Updater | a8-updater-configmap.yaml | A8_SUBSCRIBE_URL | a8subscribeurl | http://<vsrm>:80/a8/subscribe
Archive Agent | archive-agent-configmap.yaml | RECGC_WORKER_TIMEOUT | recgcworkertimeout | 24h
Archive Agent | archive-agent-configmap.yaml | BULK_DELETE | bulkdelete | true
Archive Agent | archive-agent-configmap.yaml | STORAGE_TIMEOUT | storagetimeout | 30s
Archive Agent | archive-agent-configmap.yaml | RUN_ARCHIVE_GC | runarchivegc | false
Archive Agent | archive-agent-configmap.yaml | RUN_REARCHIVE | runrearchive | false
Archive Agent | archive-agent-configmap.yaml | RUN_METADATA_ARCHIVE | runmetadataarchive | false
Archive Agent | archive-agent-configmap.yaml | RUN_PRIVATE_GC | runprivategc | false
DASH Origin | dash-origin-configmap.yaml | IDLE_CONNS | idleconns | 500
Manifest Agent | manifest-agent-configmap.yaml | IDLE_CONNS | idleconns | 500
Manifest Agent | manifest-agent-configmap.yaml | IDLE_CONNS | idleconns | 800
Manifest Agent | manifest-agent-configmap.yaml | MA_SEG_RETRY_LIMIT | masegretrylimit | 3
Recorder Manager | recorder-manager-configmap.yaml | A8_SUBSCRIBE_URL | a8subscribeurl | http://<vsrm>:80/a8/subscribe
Perform the following steps to initiate the rolling upgrade.
Step 1 | Navigate to the directory containing the files for the latest version of VMR (for example, 1.3.3_xxx):
cd /var/opt/cisco/v2p/v2pc/aic/controllers/cisco-vmr/<vmr-instance-name>/<release-version>/node_modules/cisco-vmr/scripts/
Step 2 | Initiate the rolling upgrade:
./manual_upgrade_vmr.sh /var/opt/cisco/v2p/v2pc/aic/controllers/cisco-vmr/<vmr-instance-name>/<release-version>/node_modules/cisco-vmr <pic-instance-name> /var/opt/cisco/v2p/v2pc/picm/controllers/cisco-k8s-upic/<pic-instance-name>/<pic-instance-version>/kubeconfig null vmr_release v2pc /home/v2pc/upgrade_<release-version>.log
+ echo '2017-09-03 10:45:49 Upgrade Success'
2017-09-03 10:45:49 Upgrade Success
+ exit 0
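While the script runs, you can optionally monitor the rollout of individual deployments (this monitoring step is an addition, not part of the original procedure; a8-updater is one of the VMR deployments shown later in this chapter):
kubectl rollout status deployment/a8-updater
kubectl get pods -w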
After the upgrade script completes, perform the following procedure to verify that the upgrade was successful.
Step 1 | Get the list of pods:
kubectl get pods |
Step 2 | Describe one of the pods:
kubectl describe pod <pod-name>
The output should be similar to the following:
Containers:
  a8-updater:
    Container ID: docker://<some-hash>
    Image: <v2pc-repo-ip>:5000/vmr_release/a8-updater:cisco-1.3.2_002
    Image ID: <some-id>
The Image field includes the version to which VMR was upgraded (1.3.2_002 in this example).
Step 3 | Verify status of the Replica Sets:
kubectl get rs
NAME                    DESIRED   CURRENT   READY   AGE
a8-updater-1420506392   0         0         0       22d
a8-updater-2230793500   0         0         0       1d
a8-updater-2433365277   0         0         0       1d
a8-updater-901549769    4         4         4       7m
In the example, the last entry, with non-zero values for DESIRED, CURRENT, and READY, is the ReplicaSet created by the upgrade.
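Optionally (an addition, not part of the original procedure), you can correlate these ReplicaSets with Deployment revisions using the rollout history:
kubectl rollout history deployment/a8-updater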
Step 4 | Verify status of the deployments:
kubectl get deployment
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
a8-updater      4         4         4            4           3h
api             1         1         1            1           3h
archive-agent   4         4         4            4           3h
This completes the rolling upgrade.
A rollback script can be executed to revert a VMR upgrade to an earlier working version if there are issues with the upgraded version.
This procedure rolls back only the VMR docker images, not the AIC and MFC versions. To roll back the AIC and MFC versions, refer to Upgrade VMR AIC and Upgrade cDVR MFC.
There is no need to roll back the database, as VMR is compatible with older database versions.
Step 1 | Log in to the V2PC master and navigate to latest VMR AIC scripts folder:
cd /var/opt/cisco/v2p/v2pc/aic/controllers/cisco-vmr/<vmr-instance-name>/<vmr-aic-version>/node_modules/cisco-vmr/scripts/
Step 2 | Run the following script to roll back the upgrade:
./manual_rollback_vmr.sh /var/opt/cisco/v2p/v2pc/aic/controllers/cisco-vmr/<cisco-vmr>/1.3.1xxxxx/node_modules/cisco-vmr <pic-instance-name> /var/opt/cisco/v2p/v2pc/picm/controllers/cisco-k8s-upic/<pic-instance-name>/<pic-instance-version>/kubeconfig "a8-updater api <space-separated list of components>"
Replace the quoted argument with the space-separated list of components to be rolled back.
+ echo '2018-01-15 04:06:14 Rollback Success'
2018-01-15 04:06:14 Rollback Success
+ exit 0
Step 3 | Verify that the rollback is successful: |
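A verification sketch, mirroring the upgrade verification checks earlier in this chapter; confirm that the Image field of each rolled-back pod shows the earlier version:
kubectl get pods
kubectl describe pod <pod-name>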
Perform the following procedure to apply a patch release for any individual VMR service using rolling upgrade:
Step 1 | Load the VMR docker images of the target version to the V2PC REPO server or Docker Registry:
./download_release_files.sh <from Server ip> <releaseTag> <sshkey> <k8s master ip> <docker registry> |
Step 2 | Navigate to the VMR AIC config directory for the VMR service to be upgraded. For example:
/var/opt/cisco/v2p/v2pc/aic/controllers/cisco-vmr/vmr/<release-number>/node_modules/cisco-vmr/platform/resources/config/segment-recorder/
Step 3 | Open the configuration file for the service (for example, segment-recorder-dp.json) and change the image version to the patch version (an illustrative fragment follows).
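For example, if the Image field format shown earlier in this chapter applies, the edited entry might look like the following fragment (illustrative only; the registry address and tag are placeholders):
"image": "<v2pc-repo-ip>:5000/vmr_release/segment-recorder:cisco-<patch-version>"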
Step 4 | Save the file. |
Step 5 | Run the following command:
kubectl apply -f segment-recorder-dp.json |
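Optionally (an addition, not part of the original procedure), watch the rolling update of the patched service; the deployment name segment-recorder is an assumption based on the configuration file name:
kubectl rollout status deployment/segment-recorder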
Step 6 | Run the following command to verify the upgrade was successful:
check_deployment_status.sh <kubernetes cluster instance> <kubeconfig path> <optional: "deploymentName1 deploymentName2 ...">
Example:
./check_deployment_status.sh man-k8 /var/opt/cisco/v2p/v2pc/picm/controllers/cisco-k8s/man-k8/1.1.68/kubeconfig
Step 7 | Run the following command to verify the upgraded image version number:
kubectl describe pods <podname> |