This section describes several start and stop tasks for the Cisco Policy Server.
CPS is composed of a cluster of nodes and services. This section describes how to restart the different services running on various CPS nodes.
Each database port and configuration is defined in the /etc/broadhop/mongoConfig.cfg file.
The scripts that start/stop the database services can be found in the /etc/init.d/ directory on the CPS nodes.
To stop and start a database, log into each Session Manager VM and execute the commands as shown below. For example, to restart the sessionmgr 27717 database, execute:
service sessionmgr-27717 stop
service sessionmgr-27717 start
or:
service sessionmgr-27717 restart
Note: It is important not to stop and start all of the databases in the same replica set at the same time. As a best practice, stop and start databases one at a time to avoid service interruption.
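Before restarting a database, you can optionally confirm that the other members of its replica set are healthy so the set keeps a primary while one member is down. The following is a minimal sketch only; the host and port are examples and should match the set defined in /etc/broadhop/mongoConfig.cfg:
# Check replica set member health for the set listening on port 27717 (example values)
mongo --host sessionmgr01 --port 27717 --eval 'rs.status().members.forEach(function(m){print(m.name + " " + m.stateStr)})'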
To restart all Policy Server (QNS) services on all VMs, execute the following from the Cluster Manager:
/var/qps/bin/control/restartall.sh
Note: This script only restarts the Policy Server (QNS) services. It does not restart any other services.
Use summaryall.sh or statusall.sh to see details about these services.
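For example, assuming these scripts reside in the same control directory as restartall.sh (verify the path in your deployment):
/var/qps/bin/control/statusall.sh     # per-VM status of the Policy Server (QNS) services
/var/qps/bin/control/summaryall.sh    # condensed summary across the cluster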
To restart all Policy Server (QNS) services on a single CPS VM, execute the following from the Cluster Manager:
/var/qps/bin/control/restartqns.sh <hostname>
where <hostname> is the CPS node name of the VM (qns01, qns02, lb01, pcrfclient01, and so on).
The Monit service manages many of the services on each CPS VM.
To see a list of services managed by monit on a VM, log in to the specific VM and execute:
monit summary
To stop and start all services managed by monit, log in to the specific VM and execute the following commands:
monit stop all
monit start all
To stop and start a specific service managed by Monit, log in to the specific VM and execute the following commands:
monit stop <service_name>
monit start <service_name>
where <service_name> is the name as shown in the output of the monit summary command.
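For example, to restart the collectd service referenced later in this guide (service names vary by VM, so check monit summary first):
monit summary          # list the services monit manages on this VM
monit stop collectd    # stop a single monit-managed service
monit start collectd   # start it again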
To restart Subversion (SVN) on OAM (pcrfclient) nodes, execute:
service httpd restart
To restart Policy Builder on OAM (pcrfclient) nodes (pcrfclient01/pcrfclient02), execute:
monit stop qns-2
monit start qns-2
To restart Control Center on OAM (pcrfclient) nodes (pcrfclient01/pcrfclient02), execute:
monit stop qns-1
monit start qns-1
The following commands are used to restart the services on the Policy Director (lb) nodes only (lb01 and lb02).
The following sections describe the commands to shut down the Cisco Policy Server nodes:
In the event of a controlled or uncontrolled power outage, the following power on procedures should be followed to bring the system up properly.
Due to the operational inter-dependencies within the CPS, it is necessary for some CPS services and components to become active before others.
CPS can monitor the state of the cluster through the various stages of startup. It also includes functionality to allow the system to gracefully recover from unexpected outages.
CPS can monitor the state of the services and components of the cluster from the OAM (pcrfclient) VMs. By default, this functionality is disabled.
This functionality can be enabled by setting the cluster_state_monitor option to true in the CPS Deployment Template (Excel spreadsheet).
To update an existing deployment to support this functionality, modify this setting in your CPS Deployment Template and redeploy the csv files as described in the CPS Installation Guide for VMware.
This monitoring system reports the state of the system as an integer value as described in the following table:
Value | Cluster State | Description
---|---|---
0 | Unknown state/pre-inspection state | The system reports '0' until both conditions under '1' have been met: lbvip02 is UP and the databases are accessible. Various systems can still be coming online while a '0' state is being reported, so a '0' state does not automatically indicate an error. Even if the system cannot proceed to the '1' state, the Policy Builder and Control Center UIs should be available in order to manage or troubleshoot the system.
1 | lbvip02 is alive and all databases in /etc/broadhop/mongoConfig.cfg have an accessible primary | All backend databases must be available and the lbvip02 interface must be UP for the system to report this state.
2 | lbvip02 port 61616 is accepting TCP connections | Backend Policy Server (QNS) processes access lbvip02 on this port. When this port is activated, it indicates that Policy Server (QNS) processes can proceed to start.
3 | At least 50% of backend Policy Server (QNS) processes are alive | Once sufficient capacity is available from the backend processes, the Diameter protocol endpoint processes are allowed to start.
The current cluster state is reported in the following file on the OAM (pcrfclient): /var/run/broadhop.cluster_state
The determine_cluster_state command logs output of the cluster state monitoring process into /var/log/broadhop/determine_cluster_state.log.
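To check the reported state on an OAM (pcrfclient) VM, you can read the state file and follow the monitoring log; this is a simple sketch using the paths documented above:
cat /var/run/broadhop.cluster_state                        # current cluster state value (0-3)
tail -f /var/log/broadhop/determine_cluster_state.log      # follow the cluster state monitoring output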
In addition to the monitoring functionality, CPS can also use the cluster state to regulate the startup of some of the CPS services pending the appropriate state of the cluster.
By default this functionality is disabled. It can be enabled for the entire CPS cluster, or for troubleshooting purposes can be enabled or disabled on a per-VM basis.
Note: Cluster State Monitoring must be enabled for Controlled Startup to function.
The Controlled Startup functionality is enabled by the presence of the /etc/broadhop/cluster_state file.
To enable this feature on all CPS VMs in the cluster, execute the following commands on the Cluster Manager VM to create this file and to use the syncconfig.sh script to push those changes out to the other VMs.
touch /etc/broadhop/cluster_state
syncconfig.sh
To disable this feature on all VMs in the cluster, remove the cluster_state file on the Cluster Manager VM and sync the configuration:
rm /etc/broadhop/cluster_state
syncconfig.sh
To enable this feature on a specific VM, create a /etc/broadhop/cluster_state file on the VM:
touch /etc/broadhop/cluster_state
To disable this feature again on a specific VM, delete the /etc/broadhop/cluster_state file on the VM:
rm /etc/broadhop/cluster_state
Note: This is a temporary measure and should only be used for diagnostic purposes. Local modifications to a VM can be overwritten under various circumstances, such as running syncconfig.sh.
In CPS, the active and standby strategy applies only to the Policy Directors (lb). The two Policy Directors in the system are lb01 and lb02.
Step 1 | Log in to the active Policy Director (lb) VM. See Determining the Active Policy Director for details on how to determine which Policy Director is active.
Step 2 | Restart the Heartbeat service using the following command:
service corosync restart
This command forces the failover of the VIP from the active Policy Director to the standby Policy Director.
Step 3 | To confirm the switchover, SSH to the other Policy Director VM and run the following command to determine whether the VIP is now associated with this VM:
ifconfig -a
If the eth0:0 or eth1:0 interface appears in the list and is marked as "UP", that VM is the active Policy Director.
Multiple users can be logged into Policy Builder at the same time.
If two users attempt to make changes on the same screen and one user saves their changes to the client repository, the other user may receive errors. In such cases, the user must return to the login page, revert the configuration, and repeat their changes.
This section covers the following topics:
Step 1 | Log in to the Cluster Manager.
Step 2 | Add a user to CPS by executing:
adduser.sh
Step 3 | When prompted for the user's group, set 'qns-svn' for read-write permissions or 'qns-ro' for read-only permissions.
Refer to CPS Commands for more information about these commands.
The user can revert the configuration if changes since the last publish/save to client repository are not wanted.
This can also be necessary in the case of an 'svn conflict' error, which can occur when pcrfclient01 and pcrfclient02 are in use at the same time by different users and changes to the same file are published/saved to the client repository. The effect of reverting changes is that all changes since the last publish/save to the client repository are undone.
After the installation is complete, you need to configure the Control Center access. This is designed to give the customer a customized Control Center username.
This section describes updating Control Center mapping of read-write/read-only to user groups (Default: qns and qns-ro respectively).
Step 1 | Log in to the Cluster Manager VM.
Step 2 | Update /etc/broadhop/authentication-provider.xml to include the group mapping for the group you want to use.
Step 3 | Run syncconfig.sh to put this file on all VMs.
Step 4 | Restart the CPS system so that the changes are reflected in the VMs:
restartall.sh
To add a new user to Control Center and specify the group you specified in the configuration file above, refer to Add a Control Center User.
CPS Control Center supports session limits per user. If the user exceeds the configured session limit, they are not allowed to log in. CPS also provides notifications to the user when other users are already logged in.
When a user logs in to Control Center, a Welcome message displays at the top of the screen. A session counter is shown next to the username. This represents the number of login sessions for this user. In the following example, this user is logged in only once ( [1] ).
The user can click the session counter ([1]) link to view details for the session(s), as shown below.
When another user is already logged in with the same username, a notification displays for the second user in the bottom right corner of the screen, as shown below.
The first user also receives a notification, as shown, and the session counter is updated to [2].
These notifications are not displayed in real time; CPS updates this status every 30 seconds.
The session limit can be configured with the following runtime argument in the qns.conf file:
-Dcc.user.session.limit=3 (default value is 5)
The default session timeout can be changed by editing the following file on the Policy Server (QNS) instance:
./opt/broadhop/qns-1/plugins/com.broadhop.ui_3.5.0.release/war/WEB-INF/web.xml

<!-- timeout after 15 mins of inactivity -->
<session-config>
    <session-timeout>15</session-timeout>
    <cookie-config>
        <http-only>true</http-only>
    </cookie-config>
</session-config>
Note: The same timeout value must be entered on all Policy Server (QNS) instances. When the number of sessions of the user exceeds the session limit, the user is not allowed to log in and receives the message "Max session limit per user exceed!"
If a user does not log out and then closes their browser, the session remains alive on the server until the session times out. When the session timeout occurs, the session is deleted from the memcached server. The default session timeout is 15 minutes. This is the idle time after which the session is automatically deleted.
When a Policy Server (QNS) instance is restarted, all user/session details are cleared.
When the memcached server is restarted without also restarting the Policy Server (QNS) instance, all http sessions on the Policy Server (QNS) instance are invalidated. In this case the user is asked to log in again and after that, the new session is created.
As a part of routine operations, it is important to make backups so that in the event of failures, the system can be restored. Do not store backups on system nodes.
For detailed information about backup and restore procedures, see the CPS Backup and Restore Guide.
Hardware replacement is usually performed by the hardware vendor with whom your company holds a support contract.
Hardware support is not provided by Cisco. The contact persons and scheduling for replacing hardware is made by your company.
Before replacing hardware, always make a backup. See the CPS Backup and Restore Guide.
Unless you have a readily available backup solution, use VMware Data Recovery. This solution, provided by VMware under a separate license, is easily integrated into your CPS environment.
The templates you download from the Cisco repository are partially pre-configured but require further configuration. Your Cisco technical representative can provide you with detailed instructions.
Note: You can download the VMware software and documentation from the following location:
This section describes the procedures needed to add a new disk to a VM.
All the VMs were created using the deployment process.
This procedure assumes that the datastore that will host the virtual disk has sufficient space to add the virtual disk.
This procedure assumes the datastore has been mounted to the VMware ESX server, regardless of the backend NAS device (SAN or iSCSI, etc).
Step 1 | Log in to the ESX server shell, and make sure the datastore has enough space:
vmkfstools -c 4g /vmfs/volumes/datastore_name/VMNAME/xxxx.vmdk -d thin
Step 2 | Execute vim-cmd vmsvc/getallvms to get the vmid of the VM where the disk needs to be added.
Vmid Name File Guest OS Version Annotation
173 vminstaller-AIO [datastore5] vminstaller-AIO/vminstaller-AIO.vmx centos64Guest vmx-08
Step 3 | Assign the disk to the VM.
xxxx is the disk name, and 0 and 1 indicate the SCSI device number. In this example, this is the second disk:
vim-cmd vmsvc/device.diskaddexisting vmid /vmfs/volumes/path to xxxx.vmdk 0 1
Step 1 | Log in as the root user on your Linux virtual machine.
Step 2 | Open a terminal session.
Step 3 | Execute the df command to examine the current disks that are mounted and accessible.
Step 4 | Create an ext4 file system on the new disk:
mkfs -t ext4 /dev/sdb
Step 5 | Execute the following command to verify the existence of the disk you created:
# fdisk -l
Step 6 | Execute the following command to create a mount point for the new disk:
# mkdir /<NewDirectoryName>
Step 7 | Execute the following command to display the current /etc/fstab:
# cat /etc/fstab
Step 8 | Execute the following command to add the disk to /etc/fstab so that it is available across reboots:
/dev/sdb /<NewDirectoryName> ext4 defaults 1 3
Step 9 | Reboot the VM:
shutdown -r now
Step 10 | Execute the df command to check that the file system is mounted and the new directory is available.
After the disk is added successfully, collectd can use the new disk to store the KPIs.
Step 1 | SSH into pcrfclient01/pcrfclient02.
Step 2 | Execute the following command to open the logback.xml file for editing:
vi /etc/collectd.d/logback.xml
Step 3 | Update the file element <file> with the new directory that was added in /etc/fstab.
Step 4 | Execute the following command to restart collectd:
monit restart collectd
This section describes publishing Cisco Policy Builder data to the Cisco Policy Server. Publishing data occurs in the Cisco Policy Builder client interface, but affects the Cisco Policy Server. Refer to the CPS Mobile Configuration Guide for steps to publish data to the server.
Cisco Policy Builder manages data stored in two areas:
The Client Repository stores data captured from the Policy Builder GUI in Subversion. This is a place where trial configurations can be developed and saved without affecting the operation of the Cisco Policy Builder server data.
The default URL is http://pcrfclient01/repos/configuration.
The Server Repository is where a copy of the client repository is created/updated and where the CPS picks up changes. This is done on Publish from Policy Builder.
Note: Publishing also performs a Save to Client Repository to ensure the Policy Builder and Server configurations are not out of sync.
The default URL is http://pcrfclient01/repos/run.
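As a quick sanity check, you can list either repository over its default URL. This sketch assumes the svn client is available on the OAM node and that the default repository URLs shown above are in use:
svn ls http://pcrfclient01/repos/configuration    # client repository contents
svn ls http://pcrfclient01/repos/run              # server repository contents (picked up by CPS)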
You can export and import service configurations for the migration and replication of data. You can use the export/import functions to back up both configuration and environmental data or system-specific information from the configuration for lab-to-production migration.
You can import the binary in the following two ways:
Import the binary produced by export - All exported configuration is removed (if the environment is included, only the environment is removed; if the environment is excluded, the environment is not removed). The file passed is created from the export API.
Additive Import - Import a package created manually by adding configuration. The new configurations are added to the server without impacting the existing configurations. The import is allowed only if the CPS running version is greater than or equal to the imported package version specified in the configuration.
Step 1 | In a browser, navigate to the export/import page, available at the following URLs:
HA/GR: https://<lbvip01>:7443/doc/import.html
All-In-One (AIO): http://<ip>:7070/doc/import.html
Step 2 | Enter the API credentials.
Step 3 | Select the file to be imported/exported.
The following table describes the export/import options:
Step 4 | Select Import or Export. CPS displays response messages that indicate the status of the export/import.
HAProxy is an open-source load balancer used in High Availability (HA) and Geographic Redundancy (GR) CPS deployments. It is used by the CPS Policy Directors (lb) to forward IP traffic from lb01/lb02 to other CPS nodes. HAProxy runs on the active Policy Director VM.
Documentation for HAProxy is available at http://www.haproxy.org/#docs.
For a general diagnostics check of the HAProxy service, run the following command from any VM in the cluster (except sessionmgr):
diagnostics.sh --ha_proxy
QPS Diagnostics
Multi-Node Environment
---------------------------
Ping Check for qns01...[PASS]
Ping Check for qns02...[PASS]
Ping Check for qns03...[PASS]
Ping Check for qns04...[PASS]
Ping Check for lb01...[PASS]
Ping Check for lb02...[PASS]
Ping Check for sessionmgr01...[PASS]
Ping Check for sessionmgr02...[PASS]
Ping Check for sessionmgr03...[PASS]
Ping Check for sessionmgr04...[PASS]
Ping Check for pcrfclient01...[PASS]
Ping Check for pcrfclient02...[PASS]
HA Multi-Node Environment
---------------------------
Checking HAProxy status...[PASS]
The following commands must be issued from the lb01 or lb02 VM.
To check the status of the HAProxy services, run the following command:
service haproxy status
[root@host-lb01 ~]# service haproxy status
haproxy (pid 10005) is running...
To stop the HAProxy service, run the following command:
service haproxy stop
To restart the HAProxy service, run the following command:
service haproxy restart
To view HAProxy statistics, open a browser and navigate to the following URL:
http://<lbvip01>:5540/haproxy?stats
To change HAProxy log level in your CPS deployment, you must make changes to the HAProxy configuration files on the Cluster Manager and then push the changes out to the Policy Director (lb) VMs.
Once deployed, the HAProxy configuration files are stored locally on the Policy Director VMs at /etc/haproxy/haproxy.cfg.
Note: Whenever you upgrade with the latest ISO, the log level is reset to the default level (err).
Step 1 | Log in to the Cluster Manager.
Step 2 | Create a backup of the HAProxy configuration file before continuing:
cp /var/qps/install/current/puppet/modules/qps/templates/etc/haproxy/haproxy.cfg /var/qps/install/current/puppet/modules/qps/templates/etc/haproxy/haproxy.cfg-bak-<date>
Step 3 | Edit the following file as needed:
/var/qps/install/current/puppet/modules/qps/templates/etc/haproxy/haproxy.cfg
By default, the logging level is set as error (err), as shown in the following line:
log 127.0.0.1 local1 err
The log level can be adjusted to any of the following levels as needed: emerg alert crit err warning notice info debug
Step 4 | Run build_all.sh to rebuild the CPS VM packages.
Step 5 | Run reinit.sh to trigger all VMs to download the latest software and configuration from the Cluster Manager.
By default, the CPS Unified API does not require username and password authentication. To enable authentication, refer to Enable Authentication for Unified API.
There are two options to include a username and password in an API request:
Include the username and password directly in the request. For example:
https://<username>:<password>@<lbvip02>:8443/ua/soap
Add an authentication header to the request:
Authorization: Basic <base64 encoded value of username:password>
For example:
wget -d -O - --header="Authorization: Basic cG9ydGFXNjbzEyMwo=" https://lbvip02:8443/ua/soap/keepalive
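An equivalent check can be made with curl, which builds the Basic Authorization header from the supplied credentials. This is a sketch only; the -k flag is shown in case the default self-signed certificates are still in place:
curl -k -u <username>:<password> https://lbvip02:8443/ua/soap/keepalive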
HAProxy is used to secure and balance calls to the CPS Unified API.
Step 1 | Back up the /etc/haproxy/haproxy.cfg file before making modifications in the following steps.
Step 2 | Edit /etc/haproxy/haproxy.cfg on lb01/lb02 and add a userlist with at least one username and password.
Use the following syntax:
userlist <userlist name>
user <username1> password <encrypted password>
user <username2> insecure-password <plain text password>
For example:
userlist L1
user apiuser password $6$eC8mFOWMcRnQo7FQ$C053tv5T2mPlmGAta0ukH87MpK9aLPtWgCEK
Step 3 | Run the following command to generate an encrypted password:
/sbin/grub-crypt --sha-512
[root@host ~]# /sbin/grub-crypt --sha-512
Password:
Retype password:
<encrypted password output>
Step 4 | Edit /etc/haproxy/haproxy.cfg on lb01/lb02 to configure HAProxy to require authentication. Add the following four lines to the haproxy.cfg file, where <userlist name> is the userlist defined in Step 2:
acl validateAuth http_auth(<userlist name>)
acl unifiedAPI path_beg -i /ua/soap
http-request allow if !unifiedAPI
http-request auth unless validateAuth
For example:
frontend https-api
description Unified API
bind lbvip01:8443 ssl crt /etc/ssl/certs/quantum.pem
default_backend api_servers
reqadd X-Forwarded-Proto:\ https if { ssl_fc }
backend api_servers
mode http
balance roundrobin
option httpclose
option abortonclose
option httpchk GET /ua/soap/keepalive
server qns01_A qns01:8080 check inter 30s
server qns02_A qns02:8080 check inter 30s
server qns03_A qns03:8080 check inter 30s
server qns04_A qns04:8080 check inter 30s
acl validateAuth http_auth(L1)
acl unifiedAPI path_beg -i /ua/soap
http-request allow if !unifiedAPI
http-request auth unless validateAuth
The configuration above applies authentication on context /ua/soap, which is the URL path of the Unified API.
In order to access the Unified API WSDL while using authentication change the following line:
acl unifiedAPI path_beg -i /ua/soap
to
acl unifiedAPI path_beg -i /ua/.
The default address for the WSDL is https://<lbvip01>:8443/ua/wsdl/UnifiedApi.wsdl
The Unified API contains full documentation in an html format that is compatible with all major browsers.
The default address is https://<HA-server-IP>:8443/ua/wsdl/UnifiedApi.xsd
Note: Run the about.sh command from the Cluster Manager to display the actual addresses as configured in your deployment.
CPS 7.x onward uses HTTPS on port 8443 for Unified API access. To enable HTTP support (like pre-7.0) on port 8080, perform the following steps:
Note: Make sure to open port 8080 if a firewall is used on the setup.
Step 1 | Create the following directories (ignore the "File exists" error) on the Cluster Manager:
/bin/mkdir -p /var/qps/env_config/modules/custom/templates/etc/haproxy
/bin/mkdir -p /var/qps/env_config/modules/custom/templates/etc/monit.d
/bin/mkdir -p /var/qps/env_config/nodes
Step 2 | Create the file /var/qps/env_config/modules/custom/templates/etc/haproxy/haproxy-soaphttp.erb with the following contents on the Cluster Manager, where XXXX is the Unified API interface hostname or IP:
global
daemon
nbproc 1 # number of processing cores
stats socket /tmp/haproxy-soaphttp
defaults
timeout client 60000ms # maximum inactivity time on the client side
timeout server 180000ms # maximum inactivity time on the server side
timeout connect 60000ms # maximum time to wait for a connection attempt to a server to succeed
log 127.0.0.1 local1 err
listen pcrf_proxy XXXX:8080
mode http
balance roundrobin
option httpclose
option abortonclose
option httpchk GET /ua/soap/KeepAlive
server qns01_A qns01:8080 check inter 30s
server qns02_A qns02:8080 check inter 30s
server qns03_A qns03:8080 check inter 30s
server qns04_A qns04:8080 check inter 30s
server qns05_A qns05:8080 check inter 30s
server qns06_A qns06:8080 check inter 30s
server qns07_A qns07:8080 check inter 30s
server qns08_A qns08:8080 check inter 30s
server qns09_A qns09:8080 check inter 30s
server qns10_A qns10:8080 check inter 30s
Step 3 | Create the file /var/qps/env_config/modules/custom/templates/etc/monit.d/haproxy-soaphttp with the following contents on the Cluster Manager:
check process haproxy-soaphttp with pidfile /var/run/haproxy-soaphttp.pid
start = "/etc/init.d/haproxy-soaphttp start"
stop = "/etc/init.d/haproxy-soaphttp stop"
Step 4 | Create or modify the /var/qps/env_config/nodes/lb.yaml file with the following contents on the Cluster Manager. If the file already exists, just add custom::soap_http: to it.
classes:
  qps::roles::lb:
  custom::soap_http:
Step 5 | Create the file /var/qps/env_config/modules/custom/manifests/soap_http.pp with the following contents on the Cluster Manager. Change ethX to the Unified API IP interface, such as eth0/eth1/eth2.
class custom::soap_http(
  $haproxytype = "-soaphttp",
) {
  service { "haproxy-soaphttp":
    enable => false,
    require => [Package["haproxy"], File["/etc/haproxy/haproxy-soaphttp.cfg"], File['/etc/init.d/haproxy-soaphttp'], Exec["sysctl_refresh"]],
  }
  file { "/etc/init.d/haproxy-soaphttp":
    owner => "root",
    group => "root",
    content => template('qps/etc/init.d/haproxy'),
    require => Package["haproxy"],
    notify => Service['haproxy-soaphttp'],
    mode => 0744
  }
  file { "/etc/haproxy/haproxy-soaphttp.cfg":
    owner => "root",
    group => "root",
    content => template('custom/etc/haproxy/haproxy-soaphttp.erb'),
    require => Package["haproxy"],
    notify => Service['haproxy-soaphttp'],
  }
  file { "/etc/monit.d/haproxy-soaphttp":
    content => template("custom/etc/monit.d/haproxy-soaphttp"),
    notify => Service["monit"],
  }
  exec { "remove ckconfig for haproxy-soaphttp":
    command => "/sbin/chkconfig --del haproxy-soaphttp",
    require => [Service['haproxy-soaphttp']],
  }
  firewall { '100 allow soap http':
    port => 8080,
    iniface => "ethX",
    proto => tcp,
    action => accept,
  }
}
Step 6 | Validate the syntax of your newly created puppet script on the Cluster Manager:
/usr/bin/puppet parser validate /var/qps/env_config/modules/custom/manifests/soap_http.pp
Step 7 | Rebuild your Environment Configuration on the Cluster Manager:
/var/qps/install/current/scripts/build/build_env_config.sh
Step 8 | Reinitialize your lb01/lb02 environments on the Cluster Manager. The following commands take a few minutes to complete:
ssh lb01 /etc/init.d/vm-init
ssh lb02 /etc/init.d/vm-init
Step 9 | Validate a SOAP request over HTTP.
Update the HAProxy configuration to enable the authentication and authorization mechanism in the CRD API module.
Step 1 | Back up the /etc/haproxy/haproxy.cfg file before making modifications in the following steps.
Step 2 | Edit /etc/haproxy/haproxy.cfg on lb01/lb02 and add a userlist with at least one username and password.
Use the following syntax:
userlist <userlist name>
user <username1> password <encrypted password>
For example:
userlist cps_user_list
user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
Run the following command to generate an encrypted password:
/sbin/grub-crypt --sha-512
[root@host ~]# /sbin/grub-crypt --sha-512
Password:
Retype password:
<encrypted password output>
Step 3 | Add the following lines in frontend https-api to enable authentication and authorization for the CRD REST API, and create a new backend server called crd_api_servers to intercept CRD REST API requests:
mode http
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api
backend crd_api_servers
mode http
balance roundrobin
option httpclose
option abortonclose
server qns01_A qns01:8080 check inter 30s
server qns02_A qns02:8080 check inter 30s
Step 4 | Update frontend https_all_servers by replacing api_servers with crd_api_servers for the CRD API as follows:
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api
Step 5 | To enable authentication, edit /etc/haproxy/haproxy.cfg on lb01/lb02 and add the following lines in the backend crd_api_servers, where <userlist name> is the userlist created in Step 2:
acl validateAuth http_auth(<userlist name>)
http-request auth unless validateAuth
Step 6 | To enable authorization, add at least one group with the user in the userlist created in Step 2 as follows:
group qns-ro users readonly
For example:
userlist cps_user_list
group qns-ro users readonly
user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
Step 7 | Add the following in the backend crd_api_servers to set read-only permission (the GET HTTP operation) for a group of users, where <userlist name> and <group-name> match the userlist and group created above:
acl authoriseUsers http_auth_group(<userlist name>) <group-name>
http-request deny if !METH_GET authoriseUsers
HAProxy Configuration Example:
userlist cps_user_list
group qns-ro users readonly
user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
frontend https-api
description API
bind lbvip01:8443 ssl crt /etc/ssl/certs/quantum.pem
mode http
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api
default_backend api_servers
reqadd X-Forwarded-Proto:\ https if { ssl_fc }
frontend https_all_servers
description Unified API,CC,PB,Grafana,CRD-API,PB-API
bind lbvip01:443 ssl crt /etc/ssl/certs/quantum.pem no-sslv3 no-tlsv10 ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
mode http
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api
backend crd_api_servers
mode http
balance roundrobin
option httpclose
option abortonclose
server qns01_A qns01:8080 check inter 30s
server qns02_A qns02:8080 check inter 30s
acl validateAuth http_auth(cps_user_list)
acl authoriseUsers http_auth_group(cps_user_list) qns-ro
http-request auth unless validateAuth
http-request deny if !METH_GET authoriseUsers
You can mount all of the members of the Replication set to tmpfs, or you can mount specific members to tmpfs. These scenarios are described in the following sections.
Step 1 | Modify mongoConfig.cfg using the vi editor on the Cluster Manager. Change the DATA_PATH directory for the SPR replication set that needs to be put on tmpfs.
[SPR-SET1]
SETNAME=set06
OPLOG_SIZE=5120
ARBITER=pcrfclient01a:27720
ARBITER_DATA_PATH=/var/data/sessions.6
MEMBER1=sessionmgr04a:27720
MEMBER2=sessionmgr03a:27720
MEMBER3=sessionmgr04b:27720
MEMBER4=sessionmgr03b:27720
DATA_PATH=/var/data/sessions.4
[SPR-SET1-END]
The following example shows the contents of mongoConfig.cfg after modification:
[SPR-SET1]
SETNAME=set06
OPLOG_SIZE=5120
ARBITER=pcrfclient01a:27720
ARBITER_DATA_PATH=/var/data/sessions.6
MEMBER1=sessionmgr04a:27720
MEMBER2=sessionmgr03a:27720
MEMBER3=sessionmgr04b:27720
MEMBER4=sessionmgr03b:27720
DATA_PATH=/var/data/sessions.1/set06
[SPR-SET1-END]
Step 2 | Run build_set to generate new mongoDB startup scripts. It generates new mongod startup scripts for all the SPR replication sets:
build_set.sh --spr --create-scripts
In this example, we are generating new mongoDB startup scripts for the SPR database. Use balance/session depending on your activity.
Step 3 | If you need to generate new mongoDB scripts for a specific setname, run the following command:
build_set.sh --spr --create-scripts --setname set06
Step 4 | Verify that the new mongo script is generated. SSH to one of the Session Manager servers and run the following command. The DBPATH should match what you modified in Step 1. For example:
grep /var/data sessionmgr-27720
You should see the following output:
DBPATH=/var/data/sessions.1/set06
Step 5 | Copy the mongoConfig.cfg to all nodes using the following command:
copytoall /etc/broadhop/mongoConfig.cfg /etc/broadhop/mongoConfig.cfg
Step 6 | Run build_etc.sh to update the puppet files, which retains the updated mongoConfig.cfg after reboot.
Step 7 | Stop and start the mongo databases one by one.
Step 8 | Run diagnostics.sh.
Step 9 | If this is an Active/Active GEOHA setup, scp the mongoConfig.cfg to the Site-B Cluster Manager and do the following:
Step 1 | SSH to the respective Session Manager.
Step 2 | Edit the mongoDB startup file using the vi editor. In this example we are modifying the SPR member:
[root@sessionmgr01 init.d]# vi /etc/init.d/sessionmgr-27720
Step 3 | Change the DBPATH directory from DBPATH=/var/data/sessions.4 to DBPATH=/var/data/sessions.1/set06.
Step 4 | Save and exit the file (using :wq).
Step 5 | Enter the following commands to stop and start the SPR DB member:
/etc/init.d/sessionmgr-27720 stop   (this might fail, but continue to the next step)
/etc/init.d/sessionmgr-27720 start
Step 6 | Wait for the recovery to finish.
CPS uses encryption on all appropriate communication channels in HA deployments. No additional configuration is required.
Default SSL certificates are provided with CPS but it is recommended that you replace these with your own SSL certificates. Refer to Replace SSL Certificates in the CPS Installation Guide for VMware for more information.
The Audit History is a way to track usage of the various GUIs and APIs that CPS provides to the customer.
If enabled, each request is submitted to the Audit History database for historical and security purposes. The user who made the request, the entire contents of the request and if it is subscriber related (meaning that there is a networkId value), all networkIds are also stored in a searchable field.
By default, the Audit History uses a 1 GB capped collection in MongoDB. The capped collection automatically removes documents when the size restriction threshold is hit. The oldest document is removed as each new document is added. For customers who want more than 1 GB of audit data, contact the assigned Cisco Advanced Services Engineer to get more information.
Configuration in Policy Builder is done in GB increments. It is possible to enter decimals, for example, 9.5 will set the capped collection to 9.5 GB.
When using a capped collection, MongoDB places a restriction on the database and does not allow the deletion of data from the collection. Therefore, the entire collection must be dropped and re-created. This means that the PurgeAuditHistory queries have no impact on capped collections.
As a consequence of the XSS defense changes to the API standard operation, any XML data sent in an AuditRequest must be properly escaped even if inside CDATA tags.
For example, <ExampleRequest>...</ExampleRequest> must be sent as &lt;ExampleRequest&gt;...&lt;/ExampleRequest&gt;
For more information on AuditType, refer to Cisco Policy Suite Unified API 2.3.0 Guide.
By default, Audit History is ON but it can be turned OFF.
There are three parts to the Audit History:
Step 1 | Start the Policy Builder with the following property:
-Dua.client.submit.audit=false (set in /etc/broadhop/pb/pb.conf)
Step 2 | Add and configure the appropriate plug-in configurations for Audit History and Unified API.
Step 3 | Publish the Policy Builder configuration.
Step 4 | Start the CPS servers.
Step 5 | Restart the Policy Builder with the following property:
-Dua.client.submit.audit=true
-Dua.client.server.url=https://lbvip02:8443/ua/soap
or
-Dua.client.server.url=http://lbvip02:8080/ua/soap
The Audit History does not log read requests by default.
GetRefDataBalance
GetRefDataServices
GetSubscriber
GetSubscriberCount
QueryAuditHistory
QueryBalance
QuerySession
QueryVoucher
SearchSubscribers
The Unified API also has a Policy Builder configuration option to log read requests which is set to false by default.
All APIs are automatically logged into the Audit Logging History database, except for QueryAuditHistory and KeepAlive. All Unified API requests have an added Audit element that should be populated to provide proper audit history.
The query is very flexible: it uses regex automatically for the id and dataid, and only one of the following is required: id, dataid, or request. The dataid element will typically be the networkId (Credential) value of a subscriber.
Note: Disable Regex. The use of regular expressions for queries can be turned off in the Policy Builder configuration.
The id element is the person or application who made the API request. For example, if a CSR logs into Control Center and queries a subscriber balance, the id will be that CSR's username.
The dataid element is typically the subscriber's username. For example, if a CSR logs into Control Center and queries a subscriber, the id will be that CSR's username, and the dataid will be the subscriber's credential (networkId value). For queries, the dataid value is checked for spaces and then tokenized and each word is used as a search parameter. For example, “networkId1 networkId2” is interpreted as two values to check.
The fromDate represents the date in the past from which to start the purge or query. If the date is null, the api starts at the oldest entry in the history.
The toDate represents the date in the past to which the purge or query of data includes. If the date is null, the api includes the most recent entry in the purge or query.
By default, the Audit History database is capped at 1 GB. Mongo provides a mechanism to do this and then the oldest data is purged as new data is added to the repository. There is also a PurgeAuditHistory request which can purge data from the repository. It uses the same search parameters as the QueryAuditHistory and therefore is very flexible in how much or how little data is matched for the purge.
Note: Regex Queries! Be very careful when purging records from the Audit History database. If a value is given for dataid, the server uses regex to match on the dataid value and therefore will match many more records than expected. Use the QueryAuditHistory API to test the query.
Each purge request is logged after the purge operation completes. This ensures that if the entire repo is destroyed, the purge action that destroyed the repo will be logged.
The Control Center version 2.0 automatically logs all requests.
This API purges the Audit History.
The query is very flexible: it uses regex automatically for the id and dataid, and only one of the following is required: id, dataid, or request. The dataid element will typically be the networkId (Credential) value of a subscriber.
The id element is the person or application who made the API request. For example, if a CSR logs into Control Center and queries a subscriber balance, the id will be that CSR's username.
The dataid element is typically the subscriber's username. For example, if a CSR logs into Control Center and queries a subscriber, the id will be that CSR's username, and the dataid will be the subscriber's credential (networkId value). For queries, the dataid value is checked for spaces and then tokenized and each word is used as a search parameter. For example, “networkId1 networkId2” is interpreted as two values to check.
The fromDate represents the date in the past from which to start the purge or query. If the date is null, the api starts at the oldest entry in the history.
The toDate represents the date in the past to which the purge or query of data includes. If the date is null, the api includes the most recent entry in the purge or query.
Note: Size-Capped Database. If the database is capped by size, then the purge request ignores the request key values and drops the entire database due to restrictions of the database software.
Schema
<PurgeAuditHistoryRequest> <key> AuditKeyType </key> [1] </PurgeAuditHistoryRequest>
Example
<se:Envelope xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"> <se:Body> <PurgeAuditHistoryRequest xmlns="http://broadhop.com/unifiedapi/soap/types"> <key> <id>username</id> <dataid>subscriber</dataid> <request>API Name</request> <fromDate>2011-01-01T00:00:00Z</fromDate> <toDate>2011-01-01T00:00:00Z</toDate> </key> </PurgeAuditHistoryRequest> </se:Body> </se:Envelope>
To purge all CreateSubscriberRequest:
<se:Envelope xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"> <se:Body> <PurgeAuditHistoryRequest xmlns="http://broadhop.com/unifiedapi/soap/types"> <key> <request>CreateSubscriberRequest</request> </key> </PurgeAuditHistoryRequest> </se:Body> </se:Envelope>
To purge all CreateSubscriberRequest by CSR:
<se:Envelope xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"> <se:Body> <PurgeAuditHistoryRequest xmlns="http://broadhop.com/unifiedapi/soap/types"> <key> <id>csrusername</id> <request>CreateSubscriberRequest</request> </key> </PurgeAuditHistoryRequest> </se:Body> </se:Envelope>
To purge all actions by CSR for a given subscriber for a date range:
<se:Envelope xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"> <se:Body> <PurgeAuditHistoryRequest xmlns="http://broadhop.com/unifiedapi/soap/types"> <key> <id>csrusername</id> <dataid>subscriber@gmail.com</dataid> <fromDate>2010-01-01T00:00:00Z</fromDate> <toDate>2012-11-01T00:00:00Z</toDate> </key> </PurgeAuditHistoryRequest> </se:Body> </se:Envelope>
This API queries the Audit History.
The query is very flexible: it uses regex automatically for the id and dataid, and only one of the following is required: id, dataid, or request. The dataid element will typically be the networkId (Credential) value of a subscriber.
The id element is the person or application who made the API request. For example, if a CSR logs into Control Center and queries a subscriber balance, the id will be that CSR's username.
The dataid element is typically the subscriber's username. For example, if a CSR logs into Control Center and queries a subscriber, the id will be that CSR's username, and the dataid will be the subscriber's credential (networkId value). For queries, the dataid value is checked for spaces and then tokenized and each word is used as a search parameter. For example, "networkId1 networkId2" is interpreted as two values to check.
The fromDate represents the date in the past from which to start the purge or query. If the date is null, the api starts at the oldest entry in the history.
The toDate represents the date in the past to which the purge or query of data includes. If the date is null, the api includes the most recent entry in the purge or query.
Schema:
<QueryAuditHistoryRequest> <key> AuditKeyType </key> [1] </QueryAuditHistoryRequest>
Example:
<se:Envelope xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"> <se:Body> <QueryAuditHistoryRequest xmlns="http://broadhop.com/unifiedapi/soap/types"> <key> <id>username</id> <dataid>subscriber</dataid> <request>API Name</request> <fromDate>2011-01-01T00:00:00Z</fromDate> <toDate>2011-01-01T00:00:00Z</toDate> </key> </QueryAuditHistoryRequest> </se:Body> </se:Envelope>
To find all CreateSubscriberRequest:
<se:Envelope xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"> <se:Body> <QueryAuditHistoryRequest xmlns="http://broadhop.com/unifiedapi/soap/types"> <key> <request>CreateSubscriberRequest</request> </key> </QueryAuditHistoryRequest> </se:Body> </se:Envelope>
To find all CreateSubscriberRequest by CSR:
<se:Envelope xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"> <se:Body> <QueryAuditHistoryRequest xmlns="http://broadhop.com/unifiedapi/soap/types"> <key> <id>csrusername</id> <request>CreateSubscriberRequest</request> </key> </QueryAuditHistoryRequest> </se:Body> </se:Envelope>
To find all actions by CSR for a given subscriber for a date range:
<se:Envelope xmlns:se="http://schemas.xmlsoap.org/soap/envelope/"> <se:Body> <QueryAuditHistoryRequest xmlns="http://broadhop.com/unifiedapi/soap/types"> <key> <id>csrusername</id> <dataid>subscriber@gmail.com</dataid> <fromDate>2010-01-01T00:00:00Z</fromDate> <toDate>2012-11-01T00:00:00Z</toDate> </key> </QueryAuditHistoryRequest> </se:Body> </se:Envelope>
The Policy Builder automatically logs all save operations (Publish and Save to Client) to the Audit History database and also to a log file.
Policy Builder Publish submits an entry to the Audit Logging Server (goes to database).
Policy Builder Save to Client Repository submits an entry to the Audit Logging Server (goes to database).
Whenever a screen is saved locally (Save button) XML is generated and logged for that user in /var/log/broadhop/qns-pb.log.
Example log in qns-pb.log from Local Save in Policy Builder:
2013-02-06 11:57:01,214 [UIThread [vt75cjqhk7v4noguyc9c7shp]] DEBUG c.b.c.r.BroadhopResourceSetAudit - Audit: Local file change made by: broadhop. Updated File: file:/var/broadhop/pb/workspace/tmp-ITC2/checkout/ConfiguredExtensionPoint-43730cd7-b238-4b29-a828-d9b4 47e5a64f-33851.xmi
XML Representation of changed screen:
<?xml version="1.0" encoding="UTF-8"?> <policy:ConfiguredExtensionPoint xmlns:policy="http://broadhop.com/policy" id="43730cd7-b238-4b29-a828-d9b447e5a64f-33851"> <extensionPoint href="virtual:URI#_vxG4swK1Ed-M48DL9vicxQ"/> <policies href="Policy-default-_sY__4L_REeGCdakzuzzlAg.xmi#_sY__4L_REeGCdakzuzzlAg"/> </policy:ConfiguredExtensionPoint>
Controlling Local Save output:
In the logback.xml file that controls Policy Builder logging, add com.broadhop.client.resourceset.BroadhopResourceSetAudit as a category and set it to the desired level.
For reporting purposes the following is the database structure in Mongo:
{
"_id" : ObjectId("5097d75be4b0d5f7ab0d90fe"),
"_id_key" : "username",
"comment_key" : "comment",
"data_id_key" : [ "networkId11921" ],
"timestamp_key" : ISODate("2012-11-05T15:12:27.673Z"),
"request_key" : "DeleteQuotaRequest",
"data_key" :
"<DeleteQuotaRequest><audit><id>username</id></audit><networkId><![CDATA [networkId11921]]></networkId><balanceCode>DATA</balanceCode><code>Recurring</code> <hardDelete>false</hardDelete></DeleteQuotaRequest>
"}
Field | Description
---|---
_id | The database unique identifier.
_id_key | The username of the person who performed the action. In the example above, the CSR who issued the debit request.
comment_key | A description of the audit action.
data_id_key | The credential(s) of the subscriber. This is a list, so if the subscriber has multiple credentials they all appear in this list. Note that it is derived from the request data, so for a CreateSubscriber request there may be multiple credentials sent in the request and each is saved in the data_id_key list. In the DebitRequest case only the one credential is listed because the request has only the single networkId field.
timestamp_key | The time the request was logged. If the timestamp value is null in the request, the Audit module automatically populates this value.
request_key | The name of the request. This provides a way to search on the type of API request.
data_key | The actual request XML.
Step 1 | Click the Reference Data tab, and then click .
Step 2 | Click Audit Configuration in the right pane to open the Audit Configuration dialog box.
Step 3 | Under Audit Configuration there are different panes: General Configuration, Queue Submission Configuration, Database Configuration, and Shard Configuration. An example configuration is provided in the following figures:
[Figures: example Audit Configuration settings]
The following parameters are used to size and manage the internal queue that aids in the processing of Audit messages. The application offloads message processing to a queue to speed up the response time from the API.
According to your network requirements, configure the parameters in Audit Configuration and save the configuration.
The Linux Audit system provides a way to track security-relevant information on your system. Based on pre-configured rules, Audit generates log entries to record as much information about the events that are happening on your system as possible.
In the /usr/share/doc/audit-version/ directory, the audit package provides a set of pre-configured rules files.
To use these pre-configured rule files, create a backup of your original /etc/audit/audit.rules file and copy the configuration file of your choice over the /etc/audit/audit.rules file:
cp /etc/audit/audit.rules /etc/audit/audit.rules_backup
cp /usr/share/doc/audit-version/stig.rules /etc/audit/audit.rules
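After swapping in a pre-configured rules file, the audit daemon typically needs to be restarted for the new rules to take effect; a minimal sketch, assuming the standard auditd service name on RHEL-based systems:
service auditd restart    # reload the audit daemon so the new rules in /etc/audit/audit.rules are applied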
For more information on auditd process, refer to the link.
Cisco Policy Server comes with a set of utilities to actively monitor and trace policy execution. These utilities interact with the core policy server and the mongo database to trigger and store traces for specific conditions.
The policy tracing and execution analyzer has a 3-tier architecture:
All commands are located on the Control Center virtual machine within /var/qps/bin/control directory. There are two main scripts which can be used for tracing: trace_ids.sh and trace.sh.
The trace_ids.sh script maintains all rules for activating and deactivating traces within the system.
The trace.sh script allows for the real time or historical retrieval of traces.
Before running trace_ids.sh and trace.sh, confirm which database you are using for traces. For more information, refer to Policy Trace Database. If no database has been configured, the scripts connect to the primary database member of SPR-SET1 by default.
Running trace_ids.sh with the -h argument produces help text describing the capabilities of the script.
/var/qps/bin/control/trace_ids.sh -h
Usage:
/var/qps/bin/control/trace_ids.sh -i <specific id> -d sessionmgr01:27719/policy_trace
/var/qps/bin/control/trace_ids.sh -r <specific id> -d sessionmgr01:27719/policy_trace
/var/qps/bin/control/trace_ids.sh -x -d sessionmgr01:27719/policy_trace
/var/qps/bin/control/trace_ids.sh -l -d sessionmgr01:27719/policy_trace
Note: By default, if the -d option is not provided, the script connects to the primary database member of SPR-SET1. If you are not using the SPR database, you need to find out which database you are using; refer to Policy Trace Database. Make sure to update the commands mentioned in Step 1 to Step 4 accordingly.
This script starts a selective trace and outputs it to standard out.
Step 1 | Specific audit ID tracing:
/var/qps/bin/control/trace_ids.sh -i <specific id>
Step 2 | Remove trace for a specific audit ID:
/var/qps/bin/control/trace_ids.sh -r <specific id>
Step 3 | Remove trace for all IDs:
/var/qps/bin/control/trace_ids.sh -x
Step 4 | List all IDs under trace:
/var/qps/bin/control/trace_ids.sh -l
Adding a specific audit ID for tracing requires running the command with the -i argument and passing in a specific ID. The policy server matches the incoming session with the ID provided and compares it against the following network session attributes: If an exact match is found, the transaction is traced. Spaces and special characters are not supported in the audit IDs.
Usage with SPR-SET as database:
#./trace_ids.sh -l MongoDB shell version: 2.6.3 connecting to: sessionmgr01:27720/policy_trace 112345 MongoDB shell version: 2.6.3 connecting to: sessionmgr01:27720/policy_trace null
Usage with -d option:
#./trace_ids.sh -l -d sessionmgr01:27717/policy_trace MongoDB shell version: 2.6.3 connecting to: sessionmgr01:27717/policy_trace 874838 MongoDB shell version: 2.6.3 connecting to: sessionmgr01:27717/policy_trace null
The following criteria cause the system to generate a trace regardless of whether the id is present in the trace database or not:
If there is an AVP with the code: audit_id, audit-id, auditid. In this case, the traces are stored in the database with the value of the AVP.
If there is a subscriber attribute (USuM AVP) with a code of audit-policy and a value of “true”. In this case, the traces are stored using the credentials stored for the subscriber.
If an error is triggered internally.
Note: An error is defined as an internal processing error (for example, a database failure or other failure) and is not a failure message code.
Running trace.sh with the -h argument produces help text describing the capabilities of the script:
/var/qps/bin/control/trace.sh -h
Usage:
/var/qps/bin/control/trace.sh -i <specific id> -d sessionmgr01:27719/policy_trace
/var/qps/bin/control/trace.sh -x <specific id> -d sessionmgr01:27719/policy_trace
/var/qps/bin/control/trace.sh -a -d sessionmgr01:27719/policy_trace
/var/qps/bin/control/trace.sh -e -d sessionmgr01:27719/policy_trace
Note: By default, if the -d option is not provided, the script connects to the primary database member of SPR-SET1. If you are not using the SPR database, you need to find out which database you are using; refer to Policy Trace Database. Make sure to update the commands mentioned in Step 1 to Step 4 accordingly.
This script starts a selective trace and outputs it to standard out.
Step 1 | Specific audit ID tracing:
/var/qps/bin/control/trace.sh -i <specific id>
Specifying the -i argument for a specific ID causes a real time policy trace to be generated while the script is running. Users can redirect this to a specific output file using standard Linux commands.
Step 2 | Dump all traces for a specific audit ID:
/var/qps/bin/control/trace.sh -x <specific id>
Specifying the -x argument with a specific ID dumps all historical traces for a given ID. Users can redirect this to a specific output file using standard Linux commands.
Step 3 | Trace all:
/var/qps/bin/control/trace.sh -a
Specifying the -a argument causes all traces to output in real time policy trace while the script is running. Users can redirect this to a specific output file using standard Linux commands.
Step 4 | Trace all errors:
/var/qps/bin/control/trace.sh -e
Specifying the -e argument causes all traces triggered by an error to output in real time policy trace while the script is running. Users can redirect this to a specific output file using standard Linux commands.
The default location of the policy trace database is the administrative database and can be optionally specified in the trace database fields. These fields are defined at the cluster level in the system configurations.
Note: Make sure to run all trace utility scripts from the /var/qps/bin/control directory only.
Step 1 | Log in to the Policy Builder.
Step 2 | From the left pane, open up the name of your system and select the required cluster.
Step 3 | From the right pane, select the check box for Trace Database.
The following table provides the parameter descriptions under the Trace Database check box:
This section covers the following topics:
Cisco Policy Suite (CPS) is built around a distributed system that runs on a large number of virtualized nodes. Previous versions of the CPS software allowed operators to add custom accounts to each of these virtual machines (VM), but management of these disparate systems introduced a large amount of administrative overhead.
CPS has been designed to leverage the Terminal Access Controller Access Control System Plus (TACACS+) to facilitate centralized management of users. Leveraging TACACS+, the system is able to provide system-wide authentication, authorization, and accounting (AAA) for the CPS system.
Further the system allows users to gain different entitlements based on user role. These can be centrally managed based on the attribute-value pairs (AVP) returned on TACACS+ authorization queries.
To provide sufficient information for the Linux-based operating system running on the VM nodes, there are several attribute-value pairs (AVP) that must be associated with the user on the ACS server used by the deployment. User records on Unix-like systems need to have a valid “passwd” record for the system to operate correctly. Several of these fields can be inferred during the time of user authentication, but the remaining fields must be provided by the ACS server.
A standard “passwd” entry on a Unix-like system takes the following form:
<username>:<password>:<uid>:<gid>:<gecos>:<home>:<shell>
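For illustration only, a hypothetical entry for a TACACS+ authenticated user might look like the following (every value shown is a placeholder):
jdoe:x:1001:504:TACACS+ user:/home/qns-admin:/bin/bash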
When authenticating the user via TACACS+, the software can assume values for the username, password, and gecos fields, but the others must be provided by the ACS server. To meet this need, the system depends on the ACS server providing the following AVPs when responding to a TACACS+ authorization query for a given username:
uid
A unique integer value greater than or equal to 501 that serves as the numeric user identifier for the TACACS+ authenticated user on the VM nodes. It is outside the scope of the CPS software to ensure uniqueness of these values.
gid
The group identifier of the TACACS+ authenticated user on the VM nodes. This value should reflect the role assigned to a given user, based on the following values:
gid=501 (qns-su)
This group identifier should be used for users that are entitled to attain super-user (or 'root') access on the CPS VM nodes.
gid=504 (qns-admin)
This group identifier should be used for users that are entitled to perform administrative maintenance on the CPS VM nodes.
![]() Note | To stop or start the Policy Server (QNS) process on a node, the qns-admin user should use monit: |
For example:
sudo monit stop qns-1
sudo monit start qns-1
gid=505 (qns-ro)
This group identifier should be used for users that are entitled to read-only access to the CPS VM nodes.
home
The user's home directory on the CPS VM nodes. To enable simpler management of these systems, users should be configured with a pre-deployed, shared home directory based on the role they are assigned via the gid.
shell
The system-level login shell of the user. This can be any of the installed shells on the CPS VM nodes, which can be determined by reviewing the contents of /etc/shells on one of the CPS VM nodes. Typically, this set of shells is available in a CPS deployment:
The /usr/bin/sudosh shell can be used to audit a user's activity on the system.
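Putting these attributes together, the authorization reply for a hypothetical qns-admin user could carry AVPs similar to the following (all values are placeholders, and the exact attribute syntax depends on the ACS or TACACS+ server in use):
uid=1001
gid=504
home=/home/qns-admin
shell=/bin/bash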
The user environment of the Linux-based VMs needs to be able to look up a user's passwd entry via different columns in that record at different times. However, the TACACS+ NSS module provided as part of the CPS solution is only able to query the Access Control Server (ACS) for this data using the username. For this reason, the system relies upon the Name Service Cache Daemon (NSCD) to provide this facility locally after a user has been authorized to use a service by the ACS server.
More details on the operations of NSCD can be found in the online help for the software (nscd --help) or in its man page (nscd(8)). Within the CPS solution, it provides a capability for the system to look up a user's passwd entry via their uid as well as by their username.
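For example, once an authorized user's record has been cached, both of the following lookups should return the same passwd entry (the username and uid shown are placeholders):
getent passwd jdoe
getent passwd 1001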
To avoid cache coherence issues with the data provided by the ACS server, the NSCD package has a mechanism for expiring cached information.
The default NSCD package configuration on the CPS VM nodes has the following characteristics:
Valid responses from the ACS server are cached for 600 seconds (10 minutes)
Invalid responses from the ACS server (user unknown) are cached for 20 seconds
Cached valid responses are reloaded from the ACS server 5 times before the entry is completely removed from the running set -- approximately 3000 seconds (50 minutes)
The cache is persisted locally so that it survives restarts of the NSCD process or the server
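As a rough sketch only, the passwd-related settings in /etc/nscd.conf that correspond to the characteristics above would look similar to the following (confirm against the file actually deployed on your VMs):
enable-cache passwd yes
positive-time-to-live passwd 600
negative-time-to-live passwd 20
reload-count 5
persistent passwd yes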
It is possible for an operator to explicitly expire the cache from the command line. To do so, the administrator needs shell access to the target VM and must execute the following command as the root user:
# nscd -i passwd
The above command will invalidate all entries in the passwd cache and force the VM to consult with the ACS server for future queries.
TACACS+ authenticated users connected to the system may see unexpected behavior in their user environment when their cache entries are removed from NSCD. The user can correct this by logging out of the system and logging back in, or by issuing the following command, which forces the system to query the ACS server:
# id -a "$USER"
This section describes how to port the Policy Builder configuration from an All-In-One (AIO) environment to a High Availability (HA) environment.
All the VMs were created using the deployment process.
This procedure assumes that the datastore that will hold the virtual disk has sufficient space to add the virtual disk.
This procedure assumes the datastore has been mounted to the VMware ESX server, regardless of the back-end NAS device (SAN, iSCSI, and so on).
Policy Builder configuration can be reused between environments; however, the configuration for Systems and Policy Enforcement Points is environment-specific and should not be moved from one environment to another.
The following instructions do not overwrite the environment-specific configuration. Because the Systems tab and Policy Enforcement Points data are not moved, the HA system must already have these items configured and running properly (as stated above).
The following steps describe the process to port a configuration from an AIO environment to an HA environment.
Step 1 | If the HA environment is currently in use, ensure that SVN backups are up to date. |
Step 2 | Find the URL that Policy Builder is using to load the configuration that you want to use.
You can find this by clicking Edit on the initial page in Policy Builder. The URL is listed in the URL field.
For the purpose of these instructions, the following URL will be used for exporting the configuration from the AIO environment and importing the configuration to the HA environment: http://pcrfclient01/repos/configuration |
Step 3 | On the AIO, export the Policy Builder configuration by entering the following commands:
cd /var/tmp
svn export http://pcrfclient01/repos/configuration aio_configuration
This creates a directory /var/tmp/aio_configuration. |
Step 4 | Remove the system configuration by entering the following commands:
cd aio_configuration
rm -f System* *Configuration* DiameterStack* VoucherSettings* Cluster* Instance* |
Step 5 | Move the /var/tmp/aio_configuration directory to /var/tmp on your Cluster Manager (using scp, zip, etc.). |
Step 6 | SSH into pcrfclient01.
The following steps assume you will replace the existing default Policy Builder configuration located at http://pcrfclient01/repos/configuration on your HA environment. If you would like to retain access to your old configuration, copy it to a new location. For example:
svn cp http://pcrfclient01/repos/configuration http://pcrfclient01/repos/configuration_date
Then set up a new repository in the HA Policy Builder to access the old configuration. |
Step 7 | Check out the old configuration (http://pcrfclient01/repos/configuration):
svn co http://pcrfclient01/repos/configuration /var/tmp/ha_configuration |
Step 8 | Remove the non-system configuration from the checked-out working copy:
cd /var/tmp/ha_configuration
svn rm `ls | egrep -v '(System|Configuration|DiameterStack|VoucherSettings|Cluster|Instance)'` |
Step 9 | Copy in the AIO configuration files:
/bin/cp -f /var/tmp/aio_configuration/* .
svn add * |
Step 10 | Commit the configuration:
svn ci . -m 'commit configuration moved from AIO' |
Step 11 | If you are already logged into Policy Builder, reload the Policy Builder URL in your browser to access the new configuration. |
Step 12 | Check for errors in Policy Builder. Errors often indicate a software mismatch.
Errors are shown with an (x) next to the navigation icons in the left pane of Policy Builder. |
Step 13 | Publish the configuration. Refer to the CPS Mobile Configuration Guide for detailed steps. |
CPS supports a network cutter utility, which monitors the Policy Server (QNS) VMs for failures. When any of the Policy Server VMs are down, the utility cuts the unnecessary connections to avoid sending traffic to the failed VMs, which also avoids timeouts.
This utility is started by monit on the Policy Director (lb) VMs and continuously monitors the Policy Server VMs for failures.
The utility writes its log to the /var/log/broadhop/network-cutter.log file.
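For example, to follow the network cutter log in real time:
tail -f /var/log/broadhop/network-cutter.log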
You can verify the status of the network cutter utility on the lb01/lb02 VMs using the monit summary and network-cutter status commands:
monit summary | grep cutter
Process 'cutter' Running
service network-cutter status
network-cutter (pid 3735) is running
You can verify whether the network cutter utility has started by using the ps -ef | grep cutter command:
ps -ef | grep cutter
root 6496 1 0 Feb18 ? 00:16:22 /usr/java/default/bin/java -jar /var/broadhop/images/network-cutter.jar