If there is a large number of VMs in your CPS deployment, it is recommended to perform a Manual Deployment for one VM (for test purposes). After the first VM deploys successfully, all remaining VMs can be deployed using the Automatic Deployment process.
Note | During the VM deployment, do not perform any vCenter operations on the blades and VMs installed on them. |
Before deploying the VMs, build the VM images by executing the following command from the Cluster Manager VM:
/var/qps/install/current/scripts/build_all.sh
Building /etc/broadhop...
Copying to /var/qps/images/etc.tar.gz...
...
Copying wispr.war to /var/qps/images/wispr.war
Output images to /var/qps/images/
[root@hostname]#
This section describes the steps to deploy each VM in the CPS deployment individually. To deploy all of the VMs in parallel using a single command, refer to Automatic Deployment of All CPS VMs in Parallel. To deploy a selective list of VMs in parallel using a single command, refer to Automatic Deployment of Selective CPS VMs in Parallel.
Note | Before proceeding, refer to License Generation and Installation to confirm you have installed the license correctly. |
For each host that is defined in the Hosts tab of the CPS Deployment Template spreadsheet, execute the following:
Note | The following command uses the short alias name (qns01, qns02, and so on) as defined in the Hosts tab of the CPS Deployment Template. It will not work if you enter the full hostname. |
/var/qps/install/current/scripts/deployer/deploy.sh $host
where $host is the short alias name, not the full hostname.
For example,
./deploy.sh qns01 < === passed
./deploy.sh NDC2BSND2QNS01 < === failed
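If you prefer to script the per-host deployment, the following is a minimal sketch; it assumes a plain-text file (hosts.txt, a hypothetical name) listing one short alias per line, exactly as defined in the Hosts tab:
# Deploy each VM listed in hosts.txt (hypothetical file), one at a time.
# Each entry must be the short alias (for example, qns01), not the full hostname.
while read -r host; do
  echo "Deploying ${host}..."
  /var/qps/install/current/scripts/deployer/deploy.sh "${host}"
done < hosts.txt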
This section describes the steps to deploy all VMs in parallel in the CPS deployment.
Note | Before proceeding, refer to License Generation and Installation to confirm you have installed the license correctly. |
Execute the following command:
python /var/qps/install/current/scripts/deployer/support/deploy_all.py
The order in which VMs are deployed is managed internally.
Note | The amount of time needed to complete the entire deployment process depends on the number of VMs being deployed as well as the hardware on which it is being deployed. |
The following is a sample list of VM hosts deployed. The list varies according to the type of CPS deployment as well as the information you entered in the CPS Deployment Template.
Note | To install the VMs using shared or single storage, you must use the /var/qps/install/current/scripts/deployer/deploy.sh $host command. For more information, refer to Manual Deployment. |
This section describes the steps to deploy a selective list of VMs in parallel in the CPS deployment.
Note | Before proceeding, refer to License Generation and Installation to confirm you have installed the license correctly. |
Execute the following command:
python /var/qps/install/current/scripts/deployer/support/deploy_all.py --vms <filename-of-vms>
where <filename-of-vms> is the name of the file containing the list of VMs, one per line, such as the following (a usage sketch follows the list):
pcrfclient01
lb01
qns01
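For example, the following sketch creates such a file (vms.txt is a hypothetical name) and passes it to the deployment script:
cat > vms.txt <<EOF
pcrfclient01
lb01
qns01
EOF
python /var/qps/install/current/scripts/deployer/support/deploy_all.py --vms vms.txt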
Note | The amount of time needed to complete the entire deployment process depends on the number of VMs being deployed as well as the hardware on which it is being deployed. |
The passwords for the users in an HA or GR deployment are not set by default. Before you can access the deployed VMs or CPS web interfaces, you must set these passwords.
Step 1 | Log into the Cluster Manager VM as the root user. The default credentials are root/cisco123. |
Step 2 | Execute the change_passwd.sh script to set the password. |
/var/qps/bin/support/change_passwd.sh
Step 3 | When prompted, enter qns. |
Enter username whose password needs to be changed: qns
Step 4 | When prompted, enter and reconfirm the desired password for the qns user. |
Enter new password:
Re-enter new password:
Changing password on $host...
Connection to $host closed.
Password for qns changed successfully on $host
Step 5 | Repeat Step 2 to Step 4 to set or change the passwords for the root and qns-svn users. For more information about this and other CPS administrative commands, refer to the CPS Operations Guide. |
After the VMs are deployed, execute the following script from the pcrfclient01 VM:
/var/qps/bin/support/start_svn_sync.sh
This command synchronizes the master/slave Policy Builder subversion repositories.
Note | You do not need to perform this step for AIO deployments. |
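To confirm afterward that the repositories are in sync, you can run the SVN check from the diagnostics script (the --svn option is listed in the diagnostics.sh options later in this document):
/var/qps/bin/diag/diagnostics.sh --svn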
The following table lists the services and ports that CPS makes available to external users and applications. It is recommended that connectivity to these ports be granted only from the appropriate networks that require access to the services listed below.
Service | Common Port (For HA Environment) | Deprecated Port (For HA Environment) | Port (For All-in-One Environment)
---|---|---|---
Control Center | 443 | 443 | 8090
Policy Builder | 443 | 7443 | 7070
Grafana | 443 | 9443 | 80
Unified API | 443 | 8443 | 8080
Custom Reference Data REST API | 443 | 8443 | 8080
HAProxy Status | 5540 | 5540 | Not Applicable
For a full list of ports used for various services in CPS, refer to the CPS Architecture Guide, which is available by request from your Cisco Representative.
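As a quick way to confirm that the required connectivity is in place from a given client network, the following sketch probes the common HA ports from the table above (it assumes the nc utility is available and that lbvip01 resolves from the client):
# Check that the CPS service ports are reachable from this network (sketch only).
for port in 443 5540; do
  nc -z -w 3 lbvip01 "$port" && echo "port $port reachable" || echo "port $port not reachable"
done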
To avoid a performance impact, you must reserve all allocated memory for each CPS virtual machine. For more information, refer to Reserving Memory on the Virtual Machines (VMs).
Before service configuration can be done for the CPS system, the Session Managers in the cluster must be configured. The CPS software requires the databases to be available before it can function.
Note | The steps mentioned in the following sections must be performed in the Cluster Manager. |
The standard definition for each supported replica-set is in the mongo configuration file. This configuration file is self-explanatory and contains the replica-set name, hostname, port number, data file path, and so on.
Location: /etc/broadhop/mongoConfig.cfg
While choosing mongo ports for replica-sets, consider the following:
The port must not be in use by any other application. To check this, log in to the VM on which the replica-set is to be created and execute the following command:
netstat -lnp | grep <port_no>
If no process is using the same port, the port can be chosen for the replica-set to bind to (a combined check is sketched after these guidelines).
The port number used should be greater than 1024 and must not be in the ephemeral port range, that is, not within the following range:
net.ipv4.ip_local_port_range = 32768 to 61000
While configuring mongo ports in a GR environment, there should be a difference of 100 ports between two respective sites. For example, consider there are two sites: Site1 and Site2. For Site1, if the port number used is 27717, then you can configure 27817 as the port number for Site2. This helps to identify a mongo member's site: by looking at the first three digits, one can tell which site the mongo member belongs to. However, this is just a guideline. You should avoid having the mongo ports of two different sites too close to each other (for example, 27717 on Site1 and 27718 on Site2).
Reason: When you create shards on a site (for example, Site1), the build_set.sh script calculates the highest port number in the mongoConfig on that site. If the port numbers of the two sites are too close, the port it allocates can overlap with a port already used in the mongoConfig of the other site (for example, Site2), which creates a clash between the replica-sets on both sites and causes the script to fail. This is why there should be some gap in the port numbers allocated between the two sites.
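A minimal sketch of the two port checks described above, run on the VM where the replica-set member will be created; PORT is a hypothetical candidate port:
PORT=27720
# Verify that no process is already listening on the candidate port.
netstat -lnp | grep ":${PORT} " || echo "port ${PORT} is free"
# Confirm the candidate port is outside the kernel's ephemeral range.
sysctl net.ipv4.ip_local_port_range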
Currently, the replica-set script supports creation of replica-sets for the following databases:
Note | All the replica-set members and required information, such as the host name and port number and the arbiter host name and port number, must be defined in the /etc/broadhop/mongoConfig.cfg file. |
Note | Make sure all the replica set ports defined in the mongoConfig.cfg file are outside the range 32768 to 61000. For more information on the port range, refer to http://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html. |
The following example shows replica-set set04:
[SPR-SET1] | [Beginning Set Name-Set No]
SETNAME=rep_set04 | Set name, that is, rep_set04
ARBITER=pcrfclient01:27720 | Arbiter VM host with port number
ARBITER_DATA_PATH=/var/data/sessions.4 | Arbiter data directory
MEMBER1=sessionmgr01:27720 | Primary Site Member1
MEMBER2=sessionmgr02:27720 | Primary Site Member2
DATA_PATH=/var/data/sessions.4 | Data directory path for members
[SPR-SET1-END] | [Closing Set Name-Set No]
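For reference, the same set would appear in /etc/broadhop/mongoConfig.cfg in plain key=value form, roughly as follows (values taken from the example above):
[SPR-SET1]
SETNAME=rep_set04
ARBITER=pcrfclient01:27720
ARBITER_DATA_PATH=/var/data/sessions.4
MEMBER1=sessionmgr01:27720
MEMBER2=sessionmgr02:27720
DATA_PATH=/var/data/sessions.4
[SPR-SET1-END]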
Run the /var/qps/bin/support/mongo/build_set.sh script from the Cluster Manager.
Script Usage: /var/qps/bin/support/mongo/build_set.sh --help
The following convention must be used while creating cross site replica-set for the session database:
You must create the session database replica-set members on the same VM and the same port on both sites. For example, among four replica-set members (except the arbiter), if sessionmgr01:27717 and sessionmgr02:27717 are two members of the replica-set from SITE1, then choose sessionmgr01:27717 and sessionmgr02:27717 of SITE2 as the other two replica-set members, as shown in the following example:
[SESSION-SET]
SETNAME=set01
OPLOG_SIZE=5120
ARBITER=SITE-ARB-sessionmgr05:27717
ARBITER_DATA_PATH=/var/data/sessions.1/set1
PRIMARY-MEMBERS
MEMBER1=SITE1-sessionmgr01:27717
MEMBER2=SITE1-sessionmgr02:27717
SECONDARY-MEMBERS
MEMBER1=SITE2-sessionmgr01:27717
MEMBER2=SITE2-sessionmgr02:27717
DATA_PATH=/var/data/sessions.1/set1
[SESSION-SET-END]
Create replica-sets for session:
Note | Sharding for the Session Cache is done through a separate process (Create Session Shards) and must not be done using the build_set.sh script. |
/var/qps/bin/support/mongo/build_set.sh --session --create
Starting Replica-Set Creation
Please select your choice: replica sets sharded (1) or non-sharded (2):
2
Create replica-sets for SPR:
Note | SPR (USum) supports mongo hashed sharding. |
/var/qps/bin/support/mongo/build_set.sh --spr --create
Starting Replica-Set Creation
Please select your choice: replica sets sharded (1) or non-sharded (2):
2
Note | The installation log is generated in the appropriate directory (/var/log/broadhop/scripts/) for debugging or troubleshooting purposes. |
Create replica-sets for Balance:
/var/qps/bin/support/mongo/build_set.sh --balance --create
Starting Replica-Set Creation
Please select your choice: replica sets sharded (1) or non-sharded (2):
2
Note | The installation log is generated in the appropriate directory (/var/log/broadhop/scripts/) for debugging or troubleshooting purposes. |
Create replica-sets for Reporting:
/var/qps/bin/support/mongo/build_set.sh --report --create
Starting Replica-Set Creation
Please select your choice: replica sets sharded (1) or non-sharded (2):
2
Note | The installation log is generated in the appropriate directory (/var/log/broadhop/scripts/) for debugging or troubleshooting purposes. |
Create replica-sets for Audit:
/var/qps/bin/support/mongo/build_set.sh --audit --create
Starting Replica-Set Creation
Please select your choice: replica sets sharded (1) or non-sharded (2):
2
Note | The installation log is generated in the appropriate directory (/var/log/broadhop/scripts/) for debugging or troubleshooting purposes. |
The ADMIN database holds information related to licensing, diameter end-points and sharding for runtime use.
To create the ADMIN database:
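A minimal sketch, assuming build_set.sh accepts an --admin option analogous to the other database types shown above (verify the exact option against build_set.sh --help for your release):
/var/qps/bin/support/mongo/build_set.sh --admin --create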
Here are some examples for replica-sets:
After the replica-sets are created, you need to configure the priorities for the replica-set members using the set_priority.sh command. For more information on set_priority.sh, refer to the CPS Operations Guide.
The session cache can be scaled by adding an additional sessionmgr VM (an additional session replica-set). You must create a separate administration database, and its hostname and port must be defined in Policy Builder (cluster) as described in the following sections:
After the mongo configuration is completed successfully (the build_set.sh script reports the status of the mongo configuration when it finishes), run the /var/qps/bin/control/restartall.sh script from the Cluster Manager.
After you modify the mongoConfig.cfg file, run the syncconfig.sh script to rebuild the etc.tar.gz image and trigger each VM to pull and extract it.
/var/qps/bin/update/syncconfig.sh
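Putting the last two steps together, one possible flow after editing the replica-set definitions is sketched below; whether a full restart is required depends on what was changed, so treat this only as an outline of the commands named in this section:
# Edit the replica-set definitions, then rebuild and distribute the /etc configuration.
vi /etc/broadhop/mongoConfig.cfg
/var/qps/bin/update/syncconfig.sh
# Restart CPS services from the Cluster Manager so they pick up the new database layout.
/var/qps/bin/control/restartall.sh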
From Cluster Manager, run the /var/qps/bin/diag/diagnostics.sh script.
To verify that the lbvip01 and lbvip02 are successfully configured in lb01 and lb02, perform the following steps:
From Cluster Manager, verify that you are able to ping all the hosts in the /etc/hosts file.
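A minimal sketch of such a check, assuming the standard /etc/hosts layout on the Cluster Manager (IP address in the first column, hostname in the second):
# Ping every host listed in /etc/hosts once, skipping comments and localhost entries.
awk '!/^#/ && $2 != "" && $2 != "localhost" {print $2}' /etc/hosts | while read -r h; do
  ping -c 1 -W 2 "$h" > /dev/null && echo "$h OK" || echo "$h UNREACHABLE"
done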
The following commands can be used to verify whether the installation was successful or not:
Note | For more information on other CPS administrative commands, refer to CPS Operations Guide. |
This command runs a set of diagnostics and displays the current state of the system. If any components are not running, red failure messages are displayed.
/var/qps/install/current/scripts/upgrade/reinit.sh
This command prompts for a reboot choice. Select Y and proceed.
/var/qps/bin/diag/diagnostics.sh -h
Usage: /var/qps/bin/diag/diagnostics.sh [options]
This script runs checks (i.e. diagnostics) against the various access, monitoring, and configuration points of a running CPS system.
In HA/GR environments, the script always does a ping check for all VMs prior to any other checks and adds any that fail the ping test to the IGNORED_HOSTS variable. This helps reduce the possibility for script function errors.
NOTE: See /var/qps/bin/diag/diagnostics.ini to disable certain checks for the HA/GR env persistently. The use of a flag will override the diagnostics.ini value.
Examples:
/var/qps/bin/diag/diagnostics.sh -q
/var/qps/bin/diag/diagnostics.sh --basic_ports --clock_skew -v --ignored_hosts='portal01,portal02'
Options:
--basic_ports : Run basic port checks
    For AIO: 80, 11211, 27017, 27749, 7070, 8080, 8090, 8182, 9091, 9092
    For HA/GR: 80, 11211, 7070, 8080, 8081, 8090, 8182, 9091, 9092, and Mongo DB ports based on /etc/broadhop/mongoConfig.cfg
--clock_skew : Check clock skew between lb01 and all vms (Multi-Node Environment only)
--diskspace : Check diskspace
--get_replica_status : Get the status of the replica-sets present in environment. (Multi-Node Environment only)
--get_shard_health : Get the status of the sharded database information present in environment. (Multi-Node Environment only)
--get_sharded_replica_status : Get the status of the shards present in environment. (Multi-Node Environment only)
--ha_proxy : Connect to HAProxy to check operation and performance statistics, and ports (Multi-Node Environment only)
    http://lbvip01:5540/haproxy?stats
    http://lbvip01:5540//haproxy-diam?stats
--help -h : Help - displays this help
--hostnames : Check hostnames are valid (no underscores, resolvable, in /etc/broadhop/servers) (AIO only)
--ignored_hosts : Ignore the comma separated list of hosts. For example --ignored_hosts='portal01,portal02'
    Default is 'portal01,portal02,portallb01,portallb02' (Multi-Node Environment only)
--ping_check : Check ping status for all VM
--qns_diagnostics : Retrieve diagnostics from CPS java processes
--qns_login : Check qns user passwordless login
--quiet -q : Quiet output - display only failed diagnostics
--redis : Run redis specific checks
--svn : Check svn sync status between pcrfclient01 & pcrfclient02 (Multi-Node Environment only)
--tacacs : Check Tacacs server reachability
--swapspace : Check swap space
--verbose -v : Verbose output - display *all* diagnostics (by default, some are grouped for readability)
--virtual_ips : Ensure Virtual IP Addresses are operational (Multi-Node Environment only)
--vm_allocation : Ensure VM Memory and CPUs have been allocated according to recommendations
[root@pcrfclient01 ~]# diagnostics.sh
QNS Diagnostics
Checking basic ports (80, 7070, 27017, 27717-27720, 27749, 8080, 9091)...[PASS]
Checking qns passwordless logins on all boxes...[PASS]
Validating hostnames...[PASS]
Checking disk space for all VMs...[PASS]
Checking swap space for all VMs...[PASS]
Checking for clock skew...[PASS]
Retrieving QNS diagnostics from qns01:9045...[PASS]
Retrieving QNS diagnostics from qns02:9045...[PASS]
Checking HAProxy status...[PASS]
Checking VM CPU and memory allocation for all VMs...[PASS]
Checking Virtual IPs are up...[PASS]
[root@pcrfclient01 ~]#
This command displays core patch and feature version information and URLs to the various interfaces and APIs for the deployment.
This command can be executed from Cluster Manager or OAM (PCRFCLIENT).
/var/qps/bin/diag/about.sh [-h]
This command displays the features and versions of the features that are installed on each VM in the environment.
/var/qps/bin/diag/list_installed_features.sh
This command displays whether the monit service and CPS services are stopped or running on all VMs. This script can be executed from Cluster Manager or OAM (PCRFCLIENT).
/var/qps/bin/control/statusall.sh
Note | Refer to CPS Operations Guide for more details about the output of this command. |
To verify that the CPS web interfaces are running, navigate to the following URLs, where <lbvip01> is the virtual IP address you defined for the lb01 VM.
Note | Run the about.sh command from the Cluster Manager to display the actual addresses as configured in your deployment. |
Policy Builder: https://<lbvip01>:7443/pb
Default credentials: qns-svn/cisco123
Control Center: https://<lbvip01>:443
Default credentials: qns/cisco123
Grafana: https://<lbvip01>:9443/grafana
Default credentials: —
Note | You must create at least one Grafana user to access the web interface. Refer to the Graphite and Grafana chapter of the CPS Operations Guide for steps to configure User Authentication for Grafana. |
Unified API: http://<lbvip01>:8443/ua/soap
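As a quick sanity check from any machine that can reach lbvip01, you can probe each interface with curl; this is a sketch only, and -k skips certificate validation, which is assumed acceptable here only because the default certificates are self-signed:
# Probe each CPS web interface and print the HTTP status code (sketch only).
for url in https://lbvip01:7443/pb https://lbvip01:443 https://lbvip01:9443/grafana http://lbvip01:8443/ua/soap; do
  code=$(curl -k -s -o /dev/null -w '%{http_code}' "$url")
  echo "$url -> $code"
done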
For more information related to CPS interfaces, refer to CPS Operations Guide.