This section describes how to start and stop Cisco Policy Server nodes, VMs, and services.
The following sections describe the commands to shut down the Cisco Policy Server nodes:
CPS is composed of a cluster of nodes and services. This section describes how to restart the different services running on various CPS nodes.
Each database port and configuration is defined in the /etc/broadhop/mongoConfig.cfg file.
The scripts that start/stop the database services can be found in the /etc/init.d/ directory on the CPS nodes.
To stop and start a database, log into each Session Manager VM and execute the commands as shown below. For example, to restart the sessionmgr 27717 database, execute:
service sessionmgr-27717 stop
service sessionmgr-27717 start
or:
service sessionmgr-27717 restart
Note | It is important not to stop and start all of the databases in the same replica set at the same time. As a best practice, stop and start databases one at a time to avoid service interruption. |
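Applied to the one-at-a-time guidance above, a wrapper might look like the following sketch. It assumes passwordless SSH to each Session Manager VM; the function name and VM names are illustrative, not part of CPS:

```shell
#!/bin/sh
# Sketch: restart one replica-set member at a time. The first argument is the
# command used to reach each VM ("ssh" on a real deployment, "echo" for a dry
# run); the remaining arguments are the Session Manager hostnames.
restart_one_at_a_time() {
  run=$1; shift
  for vm in "$@"; do
    # Stop on the first failure so the rest of the replica set stays up
    $run "$vm" service sessionmgr-27717 restart || return 1
  done
}

# Dry run: print the commands instead of executing them
restart_one_at_a_time echo sessionmgr01 sessionmgr02
```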
If the Policy Server (QNS) VM was previously powered off, power it on only during a maintenance window or a period of low traffic. If the VM is powered on during high traffic, the qns java process starts taking load as soon as it comes up. As a result, the VM can report timeouts and high CPU for approximately 60 seconds while the JVM HotSpot warmup completes. Once the JVM warmup phase is complete, the VM can handle traffic smoothly.
To restart all Policy Server (QNS) services on all VMs, execute the following from the Cluster Manager:
/var/qps/bin/control/restartall.sh
Note | This script only restarts the Policy Server (QNS) services. It does not restart any other services. |
Use summaryall.sh or statusall.sh to see details about these services.
To restart all Policy Server (QNS) services on a single CPS VM, execute the following from the Cluster Manager:
/var/qps/bin/control/restartqns.sh <hostname>
where <hostname> is the CPS node name of the VM (qns01, qns02, lb01, pcrfclient01, and so on).
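For multi-node clusters, the single-host restart above can be wrapped in a rolling loop. A minimal sketch (the rolling_restart helper and the PAUSE variable are illustrative, not part of CPS):

```shell
#!/bin/sh
# Sketch: restart Policy Server services on each VM in turn, pausing between
# nodes so the remaining nodes keep serving traffic during JVM warmup.
rolling_restart() {
  restart_cmd=$1; shift        # e.g. /var/qps/bin/control/restartqns.sh
  for host in "$@"; do
    "$restart_cmd" "$host" || return 1
    sleep "${PAUSE:-60}"       # PAUSE overrides the default 60-second wait
  done
}

# Example:
# rolling_restart /var/qps/bin/control/restartqns.sh qns01 qns02 qns03 qns04
```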
The Monit service manages many of the services on each CPS VM.
To see a list of services managed by monit on a VM, log in to the specific VM and execute:
monit summary
To stop and start all services managed by monit, log in to the specific VM and execute the following commands:
monit stop all
monit start all
To stop and start a specific service managed by Monit, log in to the specific VM and execute the following commands:
monit stop <service_name>
monit start <service_name>
where <service_name> is the name as shown in the output of the monit summary command.
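The service names accepted by these commands come from the monit summary output. As an illustration, the following sketch (assuming the usual "Process 'name' Status" layout of that output) lists managed processes that are not running:

```shell
#!/bin/sh
# Sketch: filter "monit summary" output to show only processes that are not in
# the "Running" state. Assumes process lines of the form:
#   Process 'qns-1'    Running
list_not_running() {
  awk '/^Process/ && $NF != "Running" { gsub(/'"'"'/, "", $2); print $2 }'
}

# Example: monit summary | list_not_running
```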
To restart Subversion (SVN) on OAM (pcrfclient) nodes, execute:
service httpd restart
To restart Policy Builder on OAM (pcrfclient) nodes (pcrfclient01/pcrfclient02), execute:
monit stop qns-2
monit start qns-2
To restart Control Center on OAM (pcrfclient) nodes (pcrfclient01/pcrfclient02), execute:
monit stop qns-1
monit start qns-1
The following commands are used to restart the services on the Policy Director (lb) nodes only (lb01 and lb02).
If there is a controlled or uncontrolled power outage, the following power on procedures should be followed to bring the system up properly.
Due to the operational inter-dependencies within the CPS, it is necessary for some CPS services and components to become active before others.
CPS can monitor the state of the cluster through the various stages of startup. It also includes functionality to allow the system to gracefully recover from unexpected outages.
CPS can monitor the state of the services and components of the cluster from the OAM (pcrfclient) VMs. By default, this functionality is disabled.
This functionality can be enabled by setting the cluster_state_monitor option to true in the CPS Deployment Template (Excel spreadsheet).
To update an existing deployment to support this functionality, modify this setting in your CPS Deployment Template and redeploy the csv files as described in the CPS Installation Guide for VMware.
This monitoring system reports the state of the system as an integer value as described in the following table:
Value | Cluster State | Description
---|---|---
0 | Unknown/pre-inspection state | The system reports '0' until both conditions for state '1' are met: lbvip02 is UP and the databases are accessible. Various subsystems can still be coming online while '0' is reported, so this state does not automatically indicate an error. Even if the system cannot proceed to state '1', the Policy Builder and Control Center UIs should be available in order to manage or troubleshoot the system.
1 | lbvip02 is alive and all databases in /etc/broadhop/mongoConfig.cfg have an accessible primary | All backend databases must be available and the lbvip02 interface must be UP for the system to report this state.
2 | lbvip02 port 61616 is accepting TCP connections | Backend Policy Server (QNS) processes access lbvip02 on this port. When this port is activated, Policy Server (QNS) processes can proceed to start.
3 | At least 50% of backend Policy Server (QNS) processes are alive | Once sufficient capacity is available from the backend processes, the Diameter protocol endpoint processes are allowed to start.
The current cluster state is reported in the following file on the OAM (pcrfclient): /var/run/broadhop.cluster_state
The determine_cluster_state command logs output of the cluster state monitoring process into /var/log/broadhop/determine_cluster_state.log.
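A script that needs to wait for a particular startup stage can poll this file. A sketch (the helper name is illustrative; the file path parameter defaults to the path above):

```shell
#!/bin/sh
# Sketch: block until the reported cluster state reaches a given stage.
wait_for_cluster_state() {
  target=$1
  state_file=${2:-/var/run/broadhop.cluster_state}
  while :; do
    state=$(cat "$state_file" 2>/dev/null)
    # Treat a missing or empty file as state 0
    [ "${state:-0}" -ge "$target" ] 2>/dev/null && return 0
    sleep 10
  done
}

# Example: wait until Policy Server processes may start (state 2)
# wait_for_cluster_state 2
```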
In addition to the monitoring functionality, CPS can also use the cluster state to regulate the startup of some of the CPS services pending the appropriate state of the cluster.
By default this functionality is disabled. It can be enabled for the entire CPS cluster, or for troubleshooting purposes can be enabled or disabled on a per-VM basis.
Note | Cluster State Monitoring must be enabled for Controlled Startup to function. |
The Controlled Startup functionality is enabled by the presence of the /etc/broadhop/cluster_state file.
To enable this feature on all CPS VMs in the cluster, execute the following commands on the Cluster Manager VM to create this file and to use the syncconfig.sh script to push those changes out to the other VMs.
touch /etc/broadhop/cluster_state
syncconfig.sh
To disable this feature on all VMs in the cluster, remove the cluster_state file on the Cluster Manager VM and sync the configuration:
rm /etc/broadhop/cluster_state
syncconfig.sh
To enable this feature on a specific VM, create a /etc/broadhop/cluster_state file on the VM:
touch /etc/broadhop/cluster_state
To disable this feature again on a specific VM, delete the /etc/broadhop/cluster_state file on the VM:
rm /etc/broadhop/cluster_state
Note | This is temporary measure and should only be used for diagnostic purposes. Local modifications to a VM can be overwritten under various circumstances, such as running syncconfig.sh. |
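The per-VM enable/disable steps above can be combined into one helper. A sketch for a single VM (the function is illustrative; cluster-wide changes still require syncconfig.sh from the Cluster Manager):

```shell
#!/bin/sh
# Sketch: toggle Controlled Startup on the local VM by creating or removing
# the marker file. The path parameter exists only for illustration; the real
# marker is /etc/broadhop/cluster_state.
controlled_startup() {
  marker=${2:-/etc/broadhop/cluster_state}
  case $1 in
    enable)  touch "$marker" ;;
    disable) rm -f "$marker" ;;
    *)       echo "usage: controlled_startup enable|disable [marker]" >&2
             return 1 ;;
  esac
}
```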
In CPS, the active and standby strategy applies only to the Policy Directors (lb). The following are the two Policy Directors in the system:
Step 1 | Log in to the active Policy Director (lb) VM. See Determining the Active Policy Director for details to determine which Policy Director is active. |
Step 2 | Restart the Heartbeat service using the following command: monit restart corosync. This command forces a failover of the VIP from the active Policy Director to the standby Policy Director. |
Step 3 | To confirm the switchover, SSH to the other Policy Director VM and run the following command to determine whether the VIP is now associated with this VM: ifconfig -a. If the eth0:0 or eth1:0 interface appears in the list and is marked UP, that VM is the active Policy Director. |
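The interface check in Step 3 can be scripted. A sketch that inspects ifconfig -a output (the helper name is illustrative):

```shell
#!/bin/sh
# Sketch: report whether this Policy Director holds the VIP, based on the
# presence of the eth0:0 or eth1:0 sub-interface in ifconfig output.
is_active_director() {
  # Reads ifconfig output on stdin; exit status 0 means the VIP is present
  grep -E '^(eth0|eth1):0[[:space:]]' >/dev/null
}

# Example: ifconfig -a | is_active_director && echo "this node is active"
```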
As a part of routine operations, it is important to make backups so that if there are any failures, the system can be restored. Do not store backups on system nodes.
For detailed information about backup and restore procedures, see the CPS Backup and Restore Guide.
Hardware replacement is usually performed by the hardware vendor with whom your company holds a support contract.
Hardware support is not provided by Cisco. The contact persons and scheduling for replacing hardware is made by your company.
Before replacing hardware, always make a backup. See the CPS Backup and Restore Guide.
Unless you have a readily available backup solution, use VMware Data Recovery. This solution, provided by VMware under a separate license, is easily integrated into your CPS environment.
The templates you download from the Cisco repository are partially pre-configured but require further configuration. Your Cisco technical representative can provide you with detailed instructions.
Note | You can download the VMware software and documentation from the following location: |
You can export and import service configurations for the migration and replication of data. You can use the export/import functions to back up both configuration and environmental data or system-specific information from the configuration for lab-to-production migration.
You can import the binary in the following two ways:
Import the binary produced by export - All existing exported configuration is removed before the import (if the environment is included in the export, the existing environment is also removed; if it is excluded, the environment is left untouched). The file passed to this import is the one created by the export API.
Additive Import - Import a package created manually by adding configuration. The new configurations are added to the server without impacting the existing configurations. The import is allowed only if the running CPS version is greater than or equal to the imported package version specified in the configuration.