The following section describes how to restore the Cluster Manager VM using the OVF template.
Note | Before restoring the Cluster Manager, configure the ESXi server to have enough memory and CPU available. Confirm that the network port group is configured for the internal network. |
Log in to the ESXi server using the vSphere Web Client.
Right-click on the blade where you want to restore the Cluster Manager and select Deploy OVF Template. The Deploy OVF Template wizard opens.
Click Browse... to select all the files associated with an OVF template file. This includes files such as .ovf, .vmdk, and .iso. If you do not select all the required files, a warning message is displayed.
Click Next.
In the name and location window, do the following:
Specify the name that the virtual machine will have when it is deployed at the target location.
The name defaults to the selected template. If you change the default name, it must be unique within each vCenter Server virtual machine folder.
Select or search for a datacenter or folder for the virtual machine.
The default location is based on where you started the wizard. For example, if you started the wizard from a datastore, that datastore is preselected.
Click Next.
Note | If deploying the OVF template to the selected location might cause compatibility problems, the problems appear at the bottom of the window. |
Note | If some details are not as per your requirements, click Back and repeat the steps. |
Select the virtual disk format to store the files for the deployed template and click Next.
Format | Description
---|---
Thick Provision Lazy Zeroed | Creates a virtual disk in a default thick format. Space required for the virtual disk is allocated when the virtual disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time, on first write from the virtual machine.
Thick Provision Eager Zeroed | A type of thick virtual disk that supports clustering features such as Fault Tolerance. Space required for the virtual disk is allocated at creation time. In contrast to the lazy zeroed format, the data remaining on the physical device is zeroed out when the virtual disk is created. It might take much longer to create disks in this format than to create other types of disks.
Thin Provision | Use this format to save storage space. For the thin disk, you provision as much datastore space as the disk would require based on the value that you enter for the disk size. However, the thin disk starts small and, at first, uses only as much datastore space as the disk needs for its initial operations.
Select the network (map the networks used in OVF template to the network in your inventory) and click Next.
Verify the settings from Ready to Complete window and click Finish.
After the OVF template is successfully deployed, power on the VM. The Cluster Manager VM is now restored and available.
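The wizard steps above can also be scripted with VMware's ovftool CLI. The sketch below is a dry run only: the host name, VM name, datastore, and network label are placeholder values, and the exact flags can vary by ovftool version.

```shell
# Placeholder values - substitute your ESXi host, datastore, and port group.
ESXI_HOST="esxi01.example.com"
VM_NAME="cluman"
DATASTORE="datastore1"
NETWORK="Internal"

# Assemble the ovftool invocation (dry run: only print the command here).
CMD="ovftool --name=${VM_NAME} --datastore=${DATASTORE} --network=${NETWORK} --diskMode=thin --powerOn cluman.ovf vi://root@${ESXI_HOST}"
echo "$CMD"
```

Remove the `echo` indirection and run the assembled command once the values match your environment.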
The Cluster Manager VM is the cluster deployment host that maintains all the necessary CPS software and deployment configurations. If a VM in the CPS cluster becomes corrupted, the VM can be recreated. For more information, see the CPS Installation Guide for OpenStack.
Note | Because of its role in the cluster, the Cluster Manager cannot be redeployed using these steps. To restore the Cluster Manager VM, refer to one of the previous two sections. |
The following sections describe how to restore or redeploy a specific VM in the CPS cluster (other than the Cluster Manager).
To redeploy the pcrfclient01 VM:
Step 1 | Log in to the Cluster Manager VM as the root user. |
Step 2 | Note the UUID of the SVN repository using the following command:
svn info http://pcrfclient02/repos | grep UUID
The command outputs the UUID of the repository. For example:
Repository UUID: ea50bbd2-5726-46b8-b807-10f4a7424f0e |
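For scripting, the UUID can be captured from the command output. The sketch below parses a saved sample line; on a live system you would pipe the real `svn info` command instead of using the hardcoded sample.

```shell
# Sample output of: svn info http://pcrfclient02/repos | grep UUID
SVN_INFO="Repository UUID: ea50bbd2-5726-46b8-b807-10f4a7424f0e"

# Keep only the UUID value (the third whitespace-separated field).
UUID=$(printf '%s\n' "$SVN_INFO" | awk '{print $3}')
echo "$UUID"
```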
Step 3 | Import the backup Policy Builder configuration data on the Cluster Manager, as shown in the following example:
config_br.py -a import --etc-oam --svn --stats --grafanadb --auth-htpasswd --users /mnt/backup/oam_backup_27102016.tar.gz |
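Before running the import, it can be worth confirming that the backup archive exists and is a readable tarball. A minimal sketch (the throwaway archive below is created only so the check has something to run against; in practice point BACKUP at your real /mnt/backup file):

```shell
# Create a throwaway archive so the check below has something to test against.
TMPDIR=$(mktemp -d)
echo "demo" > "$TMPDIR/file.txt"
BACKUP="$TMPDIR/backup_demo.tar.gz"
tar -czf "$BACKUP" -C "$TMPDIR" file.txt

# The actual check: a corrupt or truncated tarball fails 'tar -t'.
if tar -tzf "$BACKUP" >/dev/null 2>&1; then
    RESULT="ok"
else
    RESULT="corrupt"
fi
echo "archive check: $RESULT"
rm -rf "$TMPDIR"
```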
Step 4 | To generate the VM archive files on the Cluster Manager using the latest configurations, execute the following command:
/var/qps/install/current/scripts/build/build_svn.sh |
Step 5 | To deploy the pcrfclient01 VM, perform one of the following: |
Step 6 | Re-establish SVN master/slave synchronization between pcrfclient01 and pcrfclient02, with pcrfclient01 as the master, by executing the following series of commands. |
Step 7 | If pcrfclient01 is also the arbiter VM, execute the following steps: |
To redeploy the pcrfclient02 VM:
To redeploy a sessionmgr VM:
To redeploy the Policy Director (Load Balancer) VM:
Step 1 | Log in to the Cluster Manager VM as the root user. |
Step 2 | To import the backup Policy Builder configuration data on the Cluster Manager, execute the following command:
config_br.py -a import --network --haproxy --users /mnt/backup/lb_backup_27102016.tar.gz |
Step 3 | To generate the VM archive files on the Cluster Manager using the latest configurations, execute the following command:
/var/qps/install/current/scripts/build/build_svn.sh |
Step 4 | To deploy the lb01 VM, perform one of the following: |
To redeploy the Policy Server (QNS) VM:
Step 1 | Log in to the Cluster Manager VM as the root user. |
Step 2 | Import the backup Policy Builder configuration data on the Cluster Manager, as shown in the following example:
config_br.py -a import --users /mnt/backup/qns_backup_27102016.tar.gz |
Step 3 | To generate the VM archive files on the Cluster Manager using the latest configurations, execute the following command:
/var/qps/install/current/scripts/build/build_svn.sh |
Step 4 | To deploy the qns VM, perform one of the following: |
To restore databases in a production environment that uses replica sets (with or without sharding), a maintenance window is required because the CPS software on all the processing nodes and the sessionmgr nodes must be stopped. A database restore is needed after an outage or a problem with the system and/or its hardware. In that case service has already been impacted, and to properly fix the situation service must be impacted again. From a database perspective, the main processing nodes must be stopped so that the system is not processing incoming requests while the databases are stopped and restored. If replica sets are used, with or without sharding, all the database instances must be stopped to properly restore the data and allow the replica set to synchronize from the primary to the secondary database nodes.
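At a high level, the maintenance window described above follows a stop, restore, restart sequence. The sketch below is a dry run that only prints the steps; stopall.sh and restartall.sh are the standard CPS control scripts, but the exact paths are assumptions to verify against your installation.

```shell
# Dry run of the maintenance-window sequence; remove 'echo' to execute.
STOP="/var/qps/bin/control/stopall.sh"
RESTORE="config_br.py -a import --mongo-all /mnt/backup/backup_\$date.tar.gz"
RESTART="/var/qps/bin/control/restartall.sh"

echo "1. $STOP       # stop CPS on all processing and sessionmgr nodes"
echo "2. $RESTORE    # restore all databases from backup"
echo "3. $RESTART    # restart CPS and resume traffic"
```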
The following SNMP Notifications (Alarms) are indicators of issues with the CPS databases.
All DB Member of a replica set Down: CPS is unable to connect to any member of the replica set.
No Primary DB Member Found: CPS is unable to find the primary member for a replica set.
Secondary DB Member Down: CPS is unable to connect to a secondary member of a replica set.
To determine the status of the databases, run the following command:
diagnostics.sh --get_replica_status
If the mongod process is stopped on any VM, try to manually start it using the following command (where XXXXX is the DB port number):
/etc/init.d/sessionmgr-XXXXX start
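The check-and-start step can be looped over every sessionmgr port. The port list below is an example; substitute the ports actually used in your deployment.

```shell
# Example port list - adjust to your deployment.
PORTS="27717 27718 27719 27720"
DOWN=0

for PORT in $PORTS; do
    # [m]ongod prevents the grep process from matching itself.
    if ps -ef | grep "[m]ongod" | grep -q -- "$PORT"; then
        echo "mongod on port $PORT is running"
    else
        echo "mongod on port $PORT is down"
        DOWN=$((DOWN + 1))
        if [ -x "/etc/init.d/sessionmgr-$PORT" ]; then
            "/etc/init.d/sessionmgr-$PORT" start
        fi
    fi
done
echo "$DOWN port(s) needed a start"
```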
The following steps describe how to import data from a previous backup (as described in General Procedure for Database Backup).
If the database is damaged, refer to Repair a Damaged Database, or Rebuild a Damaged Database, before proceeding with these database restoration steps.
Step 1 | Execute the following command to restore the database:
config_br.py -a import --mongo-all /mnt/backup/backup_$date.tar.gz
where $date is the timestamp when the export was made. For example:
config_br.py -a import --mongo-all /mnt/backup/backup_27092016.tgz |
Step 2 | Log in to the database and verify whether it is running and is accessible: |
After an outage, the database may be in a state where the data is present but damaged. When you try to start the database process (mongod), it will start, and then stop immediately. You can also observe a “repair required” message in the /var/log/mongodb log file.
If this occurs, you can attempt to repair the database using the following commands:
Note | Because the session database (session_cache - 27717) stores only transient session data of active network sessions, you should not try to repair this database. If the session database is damaged, refer to Rebuild a Damaged Database to rebuild it. |
Run the following commands:
/etc/init.d/sessionmgr-$port stop
/etc/init.d/sessionmgr-$port repair
Verify that the mongod process is running on the VM:
ps -ef | grep mongo | grep $port
If it is not running, then start the mongod process:
/etc/init.d/sessionmgr-$port start
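The stop, repair, verify, and start steps above can be wrapped into a single helper. This is a sketch (function definition only); call it with the damaged database's port number, and never use it on the session database (27717).

```shell
# Sketch: repair one damaged sessionmgr database, identified by port.
repair_db() {
    port=$1
    "/etc/init.d/sessionmgr-$port" stop
    "/etc/init.d/sessionmgr-$port" repair
    # Verify mongod came back after the repair; start it explicitly if not.
    if ps -ef | grep "[m]ongo" | grep -q -- "$port"; then
        echo "mongod on port $port is running"
    else
        "/etc/init.d/sessionmgr-$port" start
    fi
}
```

For example, `repair_db 27718` would repair the database listening on port 27718 (a hypothetical port number).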
After repairing the database, you can proceed to import the most recent data using config_br.py as described in General Procedure for Database Backup.
If the existing data in the database is damaged and cannot be repaired/recovered (using the steps in Repair a Damaged Database), the database must be rebuilt.
Step 1 | Secure shell to the pcrfclient01 VM as the root user:
ssh pcrfclient01 |
Step 2 | To rebuild the failed database, change to the following directory:
cd /var/qps/bin/support/mongo |
Step 3 | To rebuild a specific replica set, execute the following command:
build_set.sh --$db_name --create
where $db_name is the database name. |
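As a concrete, dry-run example of the build_set.sh invocation, with `session` as a placeholder database name:

```shell
# Placeholder database name; use the name of the replica set being rebuilt.
DB_NAME="session"
CMD="build_set.sh --${DB_NAME} --create"
echo "$CMD"
```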
Step 4 | After rebuilding the database, you can proceed to import the most recent data using config_br.py as described in General Procedure for Database Backup. |
To restore the Policy Builder Configuration Data from a backup, execute the following command:
config_br.py -a import --svn /mnt/backup/backup_$date.tgz
where $date is the date when the cron created the backup file.
After restoring the data, verify the working system by executing the following command:
/var/qps/bin/diag/diagnostics.sh
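After a restore, the diagnostics check can be wrapped so that scripts can act on its result. A sketch (function definition only; the diagnostics.sh path is the one shown above):

```shell
# Sketch: run post-restore diagnostics and surface pass/fail to the caller.
verify_restore() {
    if /var/qps/bin/diag/diagnostics.sh; then
        echo "diagnostics passed"
    else
        echo "diagnostics reported failures" >&2
        return 1
    fi
}
```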
You can restore the Grafana dashboard using the following command:
config_br.py -a import --grafanadb /mnt/backup/<backup_filename>