CPS includes southbound interfaces to various policy control enforcement functions (PCEFs) in the network, and northbound interfaces to OSS/BSS and subscriber applications, IMSs, and web applications.
Cisco Control Center enables you to perform these tasks:
HA: https://<lbvip01>:443
AIO: http://<ip>:8090
HTTPS/HTTP
There are two levels of administrative roles supported for Control Center: Full Privilege and View Only. The logins and passwords for these two roles are configurable in LDAP or in /etc/broadhop/authentication-password.xml.
Full Privilege Admin Users: These users can view, edit, and delete information and can perform all tasks. Admin users have access to all screens in Control Center.
View Only Admin Users: These users can view information in Control Center, but cannot edit or change information. View only administrators have access to a subset of screens in the interface.
The Custom Reference Data (CRD) REST API enables you to query, create, delete, and update CRD table data without accessing the Control Center GUI. The CRD APIs are available over an HTTP REST interface. The specific APIs are outlined in a later section of this guide.
HA: https://<lbvip01>:443/custrefdata
AIO: http://<ip>:8080/custrefdata
A validation URL is:
HA: https://<lbvip01>:8443/custrefdata
AIO: http://<ip>:8080/custrefdata
HTTPS/HTTP
Security and account management is accomplished using the HAProxy mechanism on the platform Policy Director (LB), where user lists, user groups, and specific users are defined.
On Cluster Manager: /etc/puppet/modules/qps/templates/etc/haproxy/haproxy.cfg
Back up the /etc/haproxy/haproxy.cfg file.
Edit /etc/haproxy/haproxy.cfg on lb01/lb02 and add a userlist with at least one username and password as shown:
userlist <userlist name>
  user <username1> password <encrypted password>
For example:
userlist cps_user_list
user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
Run the following command to generate an encrypted password:
/sbin/grub-crypt --sha-512
For example:
[root@host ~]# /sbin/grub-crypt --sha-512
Password:
Retype password:
<encrypted password output>
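If /sbin/grub-crypt is not available on the system, an equivalent SHA-512 crypt hash can be produced with Python; a minimal sketch, assuming a Python 3 interpreter with the crypt module is installed (the resulting $6$... string is used the same way in the userlist):

# Prompt for a password and print its SHA-512 crypt hash
python3 -c 'import crypt, getpass; print(crypt.crypt(getpass.getpass(), crypt.mksalt(crypt.METHOD_SHA512)))'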
Add the following lines in frontend https-api to enable authentication and authorization for the CRD REST API, and create a new backend crd_api_servers to intercept CRD REST API requests:

mode http
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api

backend crd_api_servers
  mode http
  balance roundrobin
  option httpclose
  option abortonclose
  server qns01_A qns01:8080 check inter 30s
  server qns02_A qns02:8080 check inter 30s
Update frontend https_all_servers by replacing api_servers with crd_api_servers for CRD API as follows:
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api
Add at least one group with a user from the userlist created in Step 2, as follows:
group qns-ro users readonly
group qns users apiuser
Add the following lines to the backend crd_api_servers:
acl authoriseUsers http_auth_group(<cps-user-list>) <user-group>
http-request auth realm CiscoApiAuth if !authoriseUsers
Map the group created in Step 5 with the acl as follows:
acl authoriseUsers http_auth_group(<cps-user-list>) <user-group>
Add the following in the backend crd_api_servers to set read-only permission (GET HTTP operation) for a group of users:
http-request deny if !METH_GET authoriseUsers
HAProxy Configuration Example
userlist cps_user_list
  group qns-ro users readonly
  group qns users apiuser
  user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
  user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1

frontend https-api
  description API
  bind lbvip01:8443 ssl crt /etc/ssl/certs/quantum.pem
  mode http
  acl crd_api path_beg -i /custrefdata/
  use_backend crd_api_servers if crd_api
  default_backend api_servers
  reqadd X-Forwarded-Proto:\ https if { ssl_fc }

frontend https_all_servers
  description Unified API,CC,PB,Grafana,CRD-API,PB-API
  bind lbvip01:443 ssl crt /etc/ssl/certs/quantum.pem no-sslv3 no-tlsv10 ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
  mode http
  acl crd_api path_beg -i /custrefdata/
  use_backend crd_api_servers if crd_api

backend crd_api_servers
  mode http
  balance roundrobin
  option httpclose
  option abortonclose
  server qns01_A qns01:8080 check inter 30s
  server qns02_A qns02:8080 check inter 30s
  acl authoriseReadonlyUsers http_auth_group(cps_user_list) qns-ro
  acl authoriseAdminUsers http_auth_group(cps_user_list) qns
  http-request auth realm CiscoApiAuth if !authoriseReadonlyUsers !authoriseAdminUsers
  http-request deny if !METH_GET authoriseReadonlyUsers
Note: The haproxy.cfg file is generated by the puppet tool. Any manual changes to the file on lb01/lb02 are reverted if the pupdate or vm-init scripts are run.
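Once the configuration is reloaded, the behavior can be spot-checked with curl; a sketch using the example userlist above (-k skips certificate verification, and the password is whatever was hashed in Step 2): a request without credentials should be challenged, a GET by the read-only user should succeed, and a non-GET by the read-only user should be denied.

curl -k "https://lbvip01:8443/custrefdata/test/_query"                        # expect 401 challenge
curl -k -u readonly:<password> "https://lbvip01:8443/custrefdata/test/_query" # expect 200
curl -k -u readonly:<password> -X POST "https://lbvip01:8443/custrefdata/test/_delete"  # expect 403 deny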
Grafana is a metrics dashboard and graph editor used to display graphical representations of system and application KPIs and bulkstats of various CPS components.
HA: https://<lbvip01>:9443/grafana
AIO: http://<ip>:443/grafana
HTTPS/HTTP
In CPS 7.5 and higher, at least one Grafana user account must be created to access the Grafana web interface.
In CPS 8.1 and higher, an administrative user account must be used to add, modify, or delete Grafana dashboards or perform other administrative actions.
Refer to the Graphite and Grafana chapter in this guide for details on adding or deleting these user accounts.
HAProxy is a frontend IP traffic proxy process on lb01/lb02 that routes IP traffic for other applications in CPS. The individual ports that HAProxy forwards are described in the other sections of this guide.
Documentation for HAProxy is available at: http://www.haproxy.org/#docs
To view HAProxy statistics, open a browser and navigate to the following URL:
http://<lbvip01>:5540/haproxy?stats
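The statistics can also be pulled non-interactively for scripting; a sketch (appending ;csv to the stats URI is standard HAProxy behavior):

# Fetch the HAProxy statistics page in CSV form
curl "http://<lbvip01>:5540/haproxy?stats;csv"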
Not applicable.
The Java Management Extensions (JMX) interface can be used for managing and monitoring applications and system objects.
Resources to be managed or monitored are represented by objects called managed beans (MBeans). An MBean represents a resource running in the JVM, and external applications can interact with MBeans through JMX connectors and protocol adapters to collect statistics (pull), get and set application configuration (push/pull), and be notified of events such as faults or state changes (push).
External applications can be configured to monitor the application over JMX. In addition, the application provides scripts that connect to it over JMX and retrieve the required statistics and information.
pcrfclient01/pcrfclient02:
lb01/lb02:
qns01/qns02/qns... : 9045
Ports should be blocked using a firewall to prevent access from outside the CPS system.
Not applicable.
Logstash is a process that consolidates the log events from CPS nodes onto pcrfclient01/pcrfclient02 for logging and alarms. The logs are forwarded to the CPS application to raise the necessary alarms.
There is no specific CLI interface for logstash.
TCP and UDP
TCP: 5544, 5545, 7546, 6514
UDP: 6514
Account and role management is not applicable.
MongoDB is used to manage session storage efficiently and to address key requirements: low-latency reads and writes, high availability, multi-key access, and so on.
CPS supports different MongoDB models depending on the CPS deployment type, such as AIO, HA, or geo-redundancy. Not all of the databases listed below may be used in your CPS deployment.
To rotate the MongoDB logs on the Session Manager VM, open the MongoDB logrotate file by executing the following command:
cat /etc/logrotate.d/mongodb
The output is similar to the following:

{
    daily
    rotate 5
    copytruncate
    create 640 root root
    sharedscripts
    postrotate
    endscript
}
With this configuration, the MongoDB logs are rotated daily, and the latest five backups of the log files are retained.
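The rotation can be exercised without waiting for the daily cron run; a sketch using standard logrotate flags:

# Dry-run the configuration (debug mode, no changes), then force an immediate rotation
logrotate -d /etc/logrotate.d/mongodb
logrotate -f /etc/logrotate.d/mongodb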
The standard definition for each supported replica set is defined in a configuration file. The configuration file is self-explanatory and contains the replica set name, host names, port numbers, data file paths, and so on.
Location: /etc/broadhop/mongoConfig.cfg
| Database Name | Port Number | Primary DB Host | Secondary DB Host | Arbiter | Purpose |
|---|---|---|---|---|---|
| session_cache | 27717 | sessionmgr01 | sessionmgr02 | pcrfclient01 | Session database |
| balance_mgmt | 27718 | sessionmgr01 | sessionmgr02 | pcrfclient01 | Quota/Balance database |
| audit | 27725 | sessionmgr01 | sessionmgr02 | pcrfclient01 | Reporting database |
| spr | 27720 | sessionmgr01 | sessionmgr02 | pcrfclient01 | USuM database |
| cust_ref_data | 27717 | sessionmgr01 | sessionmgr02 | pcrfclient01 | Custom Reference Data |
Note: The port number configuration is based on what is configured in each of the respective Policy Builder plug-ins. Refer to the Plug-in Configuration chapter of the CPS Mobile Configuration Guide for the correct port numbers and the ports defined in the mongo configuration file.
The All-in-One deployment mongo database runs on ports 27017 and 27729.
| Database Name | Port Number | Purpose |
|---|---|---|
| All | 27017 | This port is used for all the databases. |
The port must not be in use by any other application. To verify this, log in to the VM on which the replica set is to be created and execute the following command:
netstat -lnp | grep <port_no>
If no process is using the same port, the port can be chosen for the replica set to bind to.
The port number used should be greater than 1024 and not in the ephemeral port range, that is, not within the following range:
net.ipv4.ip_local_port_range = 32768 to 61000
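The active ephemeral range can be confirmed on the VM before selecting a port; for example:

# Display the kernel's current ephemeral port range
sysctl net.ipv4.ip_local_port_range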
Use the following commands to access the MongoDB CLI:
HA:
Log in to pcrfclient01 or pcrfclient02 and run: diagnostics.sh --get_replica_status
This command will output information about the databases configured in the CPS cluster.
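An individual replica-set member can also be reached directly with the mongo shell, using a host and port pair from the table above; a sketch:

# Connect to the session database member and check replica-set health
mongo sessionmgr01:27717
> rs.status()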
AIO:
mongo --port 27017

Not applicable.
Not applicable.
Restrict MongoDB access for read-only users: If the firewall is enabled on the system, then on all VMs, for all read-only users, an iptables rule is created for outgoing connections to reject outgoing traffic to the MongoDB replica sets.
For example, a rule similar to the following is created:
REJECT tcp -- anywhere sessionmgr01 tcp dpt:27718 owner GID match qns-ro reject-with icmp-port-unreachable
With this rule, the qns-ro user has restricted MongoDB access on sessionmgr01 on port 27718. Such rules are added for all read-only users who are part of the qns-ro group, for all replica sets.
CPS is based on the Open Service Gateway initiative (OSGi), and the OSGi console is a command-line shell that can be used for analyzing problems at the OSGi layer of the application.
Use the following command to access the OSGi console:
telnet <ip> <port>
The following commands can be executed on the OSGi console:
ss: Lists the status of installed bundles.
start <bundle-id>: Starts the bundle.
stop <bundle-id>: Stops the bundle.
diag <bundle-id>: Diagnoses the bundle.
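A typical console session looks like the following; a sketch, where qns01 and port 9091 match the ports listed later in this section and the bundle id is illustrative:

telnet qns01 9091
osgi> ss          # list installed bundles and their states
osgi> diag 84     # diagnose bundle 84
osgi> disconnect  # leave the console without stopping the framework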
Use the following OSGi commands to add or remove shards:
| Command | Description |
|---|---|
| listshards | Lists all the shards. |
| removeshard <shard id> | Marks the shard for removal. If the shard is not a backup, a rebalance is required for the shard to be removed fully. If the shard is a backup, it does not require a rebalance of sessions and is removed immediately. |
| rebalance <rate limit> | Rebalances the buckets and migrates sessions with a rate limit. The rate limit is optional; if passed, it is applied during the rebalance. |
| rebalancebg <rate limit> | Rebalances the buckets and schedules a background task to migrate sessions. The rate limit is optional; if passed, it is applied during the rebalance. |
| rebalancestatus | Displays the current rebalance status. |
| rebuildAllSkRings | In order for CPS to distinguish a stale session from the latest session, the secondary key mapping for each site stores the primary key in addition to the bucket ID and the site ID, that is, Secondary Key = <Bucket Id>; <Site Id>; <Primary Key>. To enable this feature, add the flag -Dcache.config.version=1 in the /etc/broadhop/qns.conf file. Enabling this flag and running rebuildAllSkRings starts the data migration for the new version so that CPS can load the latest version of the session. |
| skRingRebuildStatus | Displays the status of the migration and the current cache version. |
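These commands are entered at the same OSGi console; an illustrative remove-and-rebalance sequence (shard id 3 is hypothetical):

osgi> listshards
osgi> removeshard 3    # mark shard 3 for removal
osgi> rebalance        # migrate sessions off the removed shard
osgi> rebalancestatus  # poll until the rebalance completes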
pcrfclientXX:
lbXX:
qnsXX: 9091
Ports should be blocked using a firewall to prevent access from outside the CPS cluster.
Not applicable.
Policy Builder is the web-based client interface for the configuration of policies in Cisco Policy Suite.
HA: https://<lbvip01>:7443/pb
AIO: http://<ip>:7070/pb
HTTPS/HTTP
Initial accounts are created during the software installation. Refer to the CPS Operations Guide for commands to add users and change passwords.
Allows initial investigation into a proof-of-concept API for managing a CPS system and related Custom Reference Data through an HTTPS-accessible JSON API.
This is an HTTPS/Web interface and has no Command Line Interface.
API: http://<Cluster Manager IP>:8458
Documentation: http://<Cluster Manager IP>:7070/doc/index.html
Initial accounts are created during the software installation. Refer to the CPS Operations Guide for commands to add users and change passwords.
Enhanced log processing is provided using Rsyslog.
Rsyslog logs Operating System (OS) data locally (/var/log/messages etc.) using the /etc/rsyslog.conf and /etc/rsyslog.d/*conf configuration files.
Rsyslog outputs all WARN-level logs on CPS VMs to the /var/log/warn.log file.
On all nodes, Rsyslog forwards the OS system log data to lbvip02 via UDP over the port defined in the logback_syslog_daemon_port variable as set in the CPS deployment template (Excel spreadsheet). To download the most current CPS Deployment Template (/var/qps/install/current/scripts/deployer/templates/QPS_deployment_config_template.xlsm), refer to the CPS Installation Guide for VMware or CPS Release Notes for this release.
Additional information is available in the Logging chapter of the CPS Troubleshooting Guide. Refer also to http://www.rsyslog.com/doc/ for the Rsyslog documentation.
Not applicable.
UDP
6514
Account and role management is not applicable.
CPS provides the ability to configure forwarding of consolidated syslogs from rsyslog-proxy on the Policy Director VMs to remote syslog servers (refer to the CPS Installation Guide for VMware). However, if additional customizations are made to the rsyslog configuration to forward logs to external syslog servers in the customer's network for monitoring purposes, such forwarding must be performed via dedicated action queues in rsyslog. In the absence of dedicated action queues, when rsyslog is unable to deliver a message to the remote server, its main message queue can fill up, which can lead to severe issues, such as preventing SSH logging, which in turn can prevent SSH access to the VM.
Sample configuration for dedicated action queues is available in the Logging chapter of the CPS Troubleshooting Guide. Refer to rsyslog documentation on http://www.rsyslog.com/doc/v5-stable/concepts/queues.html for more details about action queues.
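A dedicated action queue decouples remote forwarding from the main queue, so an unreachable server only backs up the dedicated queue; a minimal sketch in the legacy (v5-style) syntax referenced above, with an assumed remote server address:

# /etc/rsyslog.d/remote-forward.conf (illustrative)
$ActionQueueType LinkedList        # in-memory linked-list queue for this action
$ActionQueueFileName remotefwd     # enables disk assistance for the queue
$ActionQueueMaxDiskSpace 1g        # cap on-disk queue size
$ActionQueueSaveOnShutdown on      # preserve queued messages across restarts
$ActionResumeRetryCount -1         # retry forever instead of dropping messages
*.* @@203.0.113.10:514             # @@ = forward via TCP to the remote server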
Apache™ Subversion (SVN) is the versioning and revision control system used within CPS. It maintains all the CPS policy configurations and has repositories in which files can be created, updated, and deleted. SVN maintains the file difference each time any change is made to a file on the server and generates a revision number for each change.
In general, most interactions with SVN are performed via Policy Builder.
Use the following commands to access SVN:
Get all files from the server:
svn checkout --username <username> --password <password> <SVN Repository URL> <Local Path>
Example:
svn checkout --username broadhop --password broadhop http://pcrfclient01/repos/configuration/root/configuration
If <Local Path> is not provided, files are checked out to the current directory.
Store/check-in the changed files to the server:
svn commit --username <username> --password <password> <Local Path> -m "modified config"
Example:
svn commit --username broadhop --password broadhop /root/configuration -m "modified config"
Update local copy to latest from SVN:
svn update <Local Path>
Example:
svn update /root/configuration/
Check current revision of files:
svn info <Local Path>
Example:
svn info /root/configuration/
Note: Use svn --help for a list of other commands.
HTTP
80
From the pcrfclient01 VM, run adduser.sh to create a new user.
/var/qps/bin/support/adduser.sh
Note: This command can also be run from the Cluster Manager VM, but you must include the OAM (PCRFCLIENT) option: /var/qps/bin/support/adduser.sh pcrfclient
Example:
[root@pcrfclient01 /]# /var/qps/bin/support/adduser.sh
Enter username: <username>
Enter group for the user: <any group>
Enter password:
Re-enter password:
By default, the adduser.sh script creates a new user with read-only permissions. For read-write permission, you must assign the user to the qns-svn group and then run the vm-init command.
From the pcrfclient01 VM, run the adduser.sh script to create the new user.
Run the following command on both pcrfclient01 and pcrfclient02 VMs:
/etc/init.d/vm-init
You can now login and commit changes as the newly created user.
From the pcrfclient01 VM, run the change_passwd.sh script to change the password of a user.
/var/qps/bin/support/change_passwd.sh
Example:
[root@pcrfclient01 /]# /var/qps/bin/support/change_passwd.sh
Enter username whose password needs to be changed: user1
Enter new password:
Re-enter new password:
Perform all of the following commands on both the pcrfclient01 and pcrfclient02 VMs.
Use the htpasswd utility to add a new user:
htpasswd -mb /var/www/svn/.htpasswd <username> <password>
Example:
htpasswd -mb /var/www/svn/.htpasswd user1 password
In some versions, the password file is /var/www/svn/password
Update the user role file /var/www/svn/users-access-file and add the username under admins (for read/write permissions) or nonadmins (for read-only permissions). For example:
[groups]
admins = broadhop
nonadmins = read-only, user1

[/]
@admins = rw
@nonadmins = r
Use the htpasswd utility to change passwords.
htpasswd -mb /var/www/svn/.htpasswd <username> <password>
Example:
htpasswd -mb /var/www/svn/.htpasswd user1 password
CPS 7.0 and above has been designed to leverage the Terminal Access Controller Access Control System Plus (TACACS+) to facilitate centralized management of users. Leveraging TACACS+, the system is able to provide system-wide authentication, authorization, and accounting (AAA) for the CPS system.
Further the system allows users to gain different entitlements based on user role. These can be centrally managed based on the attribute-value pairs (AVP) returned on TACACS+ authorization queries.
No CLI is provided.
CPS communicates to the AAA backend using IP address/port combinations configured by the operator.
Configuration is managed by the Cluster Management VM which deploys the '/etc/tacplus.conf' and various PAM configuration files to the application VMs.
To provide sufficient information for the Linux-based operating system running on the VM nodes, there are several attribute-value pairs (AVP) that must be associated with the user on the ACS server used by the deployment. User records on Unix-like systems need to have a valid “passwd” record for the system to operate correctly. Several of these fields can be inferred during the time of user authentication, but the remaining fields must be provided by the ACS server.
A standard “passwd” entry on a Unix-like system takes the following form:
<username>:<password>:<uid>:<gid>:<gecos>:<home>:<shell>
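For illustration, a fully resolved entry for a hypothetical operator 'jdoe' in the qns-admin role could look like this (the uid, gid, and home values follow the role definitions described below):

jdoe:x:1001:504:John Doe:/home/qns-admin:/bin/bash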
When authenticating the user via TACACS+, the software can assume values for the 'username', 'password', and 'gecos' fields, but the others must be provided by the ACS server. To facilitate this, the system depends on the ACS server providing these AVPs when responding to a TACACS+ authorization query for a given 'username':
uid
A unique integer value greater than or equal to 501 that serves as the numeric user identifier for the TACACS+ authenticated user on the VM nodes. It is outside the scope of the CPS software to ensure uniqueness of these values.
gid
The group identifier of the TACACS+ authenticated user on the VM nodes. This value should reflect the role assigned to a given user, based on the following values:
gid=501 (qns-su)
This group identifier should be used for users that are entitled to attain super-user (or 'root') access on the CPS VM nodes.
gid=504 (qns-admin)
This group identifier should be used for users that are entitled to perform administrative maintenance on the CPS VM nodes.
Note: For stopping/starting the Policy Server (QNS) process on a node, the qns-admin user should use monit:

sudo monit stop qns-1
sudo monit start qns-1

To execute administrative scripts as a qns-admin user, prefix the command with sudo, for example: sudo stopall.sh
gid=505 (qns-ro)
This group identifier should be used for users that are entitled to read-only access to the CPS VM nodes.
home
The user's home directory on the CPS VM nodes. To enable simpler management of these systems, the users should be configured with a pre-deployed shared home directory based on the role they are assigned with the gid.
home=/home/qns-su should be used for users in the 'qns-su' group (gid=501)
home=/home/qns-admin should be used for users in the 'qns-admin' group (gid=504)
home=/home/qns-ro should be used for users in the 'qns-ro' group (gid=505)
shell
The system-level login shell of the user. This can be any of the installed shells on the CPS VM nodes, which can be determined by reviewing the contents of /etc/shells on one of the CPS VM nodes.
Typically, this set of shells is available in a CPS deployment:
The /usr/bin/sudosh shell can be used to audit a user's activity on the system.
For more information about TACACS+, refer to the following links:
TACACS+ Protocol Draft: http://tools.ietf.org/html/draft-grant-tacacs-02
Portions of the solution reuse software from the open source pam_tacplus project hosted at: https://github.com/jeroennijhof/pam_tacplus
Unified APIs are used to reference customer data table values.
HA: https://<lbvip01>:8443/ua/soap
AIO: http://<ip>:8080/ua/soap
HTTPS/HTTP
Currently, there is no authorization for this API.
Multiple users can be logged into Policy Builder at the same time.
In the event that two users attempt to make changes on the same screen and one user saves their changes to the client repository, the other user may receive errors. In such cases, the user must return to the login page, revert the configuration, and repeat their changes.
This section covers the following topics:
Step 1: Log in to the Cluster Manager.
Step 2: Add a user to CPS by executing: adduser.sh
Step 3: When prompted for the user's group, set 'qns-svn' for read-write permissions or 'qns-ro' for read-only permissions.
Refer to CPS Commands for more information about these commands.
The user can revert the configuration if changes since the last publish/save to client repository are not wanted.
This can also be necessary in the case of an 'svn conflict' error, which occurs when both pcrfclient01 and pcrfclient02 are in use at the same time by different users and publish/save to the client repository changes to the same file. The effect of reverting changes is that all changes since the last publish/save to the client repository are undone.
This section describes publishing Cisco Policy Builder data to the Cisco Policy Server. Publishing data occurs in the Cisco Policy Builder client interface, but affects the Cisco Policy Server. Refer to the CPS Mobile Configuration Guide for steps to publish data to the server.
Cisco Policy Builder manages data stored in two areas:
The Client Repository stores data captured from the Policy Builder GUI in Subversion. This is a place where trial configurations can be developed and saved without affecting the operation of the Cisco Policy Builder server data.
The default URL is http://pcrfclient01/repos/configuration.
The Server Repository is where a copy of the client repository is created/updated and where the CPS picks up changes. This is done on Publish from Policy Builder.
Note: Publishing also performs a Save to Client Repository to ensure that the Policy Builder and server configurations are not out of sync.
The default URL is http://pcrfclient01/repos/run.
After the installation is complete, you need to configure the Control Center access. This is designed to give the customer a customized Control Center username.
This section describes updating Control Center mapping of read-write/read-only to user groups (Default: qns and qns-ro respectively).
Step 1: Log in to the Cluster Manager VM.
Step 2: Update /etc/broadhop/authentication-provider.xml to include the group mapping for the group you want to use.
Step 3: Run syncconfig.sh to put this file on all VMs.
Step 4: Restart the CPS system so that the changes are reflected in the VMs: restartall.sh
To add a new user to Control Center and specify the group you've specified in the configuration file above, refer to Add a Control Center User.
CPS Control Center supports session limits per user. If the user exceeds the configured session limit, they are not allowed to log in. CPS also provides notifications to the user when other users are already logged in.
When a user logs in to Control Center, a Welcome message displays at the top of the screen. A session counter is shown next to the username. This represents the number of login sessions for this user. In the following example, this user is logged in only once ([1]).
The user can click the session counter ([1]) link to view details for the session(s), as shown below.
When another user is already logged in with the same username, a notification displays for the second user in the bottom right corner of the screen, as shown below.
The first user also receives a notification, as shown, and the session counter is updated to [2].
These notifications are not displayed in real time; CPS updates this status every 30 seconds.
The session limit can be configured with a runtime argument in the qns.conf file:
-Dcc.user.session.limit=3 (default value is 5)
The default session timeout can be changed by editing the following file on the Policy Server (QNS) instance:
/opt/broadhop/qns-1/plugins/com.broadhop.ui_3.5.0.release/war/WEB-INF/web.xml

<!-- timeout after 15 mins of inactivity -->
<session-config>
  <session-timeout>15</session-timeout>
  <cookie-config>
    <http-only>true</http-only>
  </cookie-config>
</session-config>
Note: The same timeout value must be entered on all Policy Server (QNS) instances. When the number of sessions of the user exceeds the session limit, the user is not allowed to log in and receives the message "Max session limit per user exceed!"
If a user does not log out and then closes their browser, the session remains alive on the server until the session times out. When the session timeout occurs, the session is deleted from the memcached server. The default session timeout is 15 minutes. This is the idle time after which the session is automatically deleted.
When a Policy Server (QNS) instance is restarted, all user/session details are cleared.
When the memcached server is restarted without also restarting the Policy Server (QNS) instance, all http sessions on the Policy Server (QNS) instance are invalidated. In this case the user is asked to log in again and after that, the new session is created.
Update the HAProxy configuration to enable the authentication and authorization mechanism in the CRD API module.
Step 1: Back up the /etc/haproxy/haproxy.cfg file before making modifications in the following steps.

Step 2: Edit /etc/haproxy/haproxy.cfg on lb01/lb02 and add a userlist with at least one username and password. Use the following syntax:

userlist <userlist name>
  user <username1> password <encrypted password>

For example:

userlist cps_user_list
  user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
  user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1

Run the following command to generate an encrypted password:

/sbin/grub-crypt --sha-512

[root@host ~]# /sbin/grub-crypt --sha-512
Password:
Retype password:
<encrypted password output>

Step 3: Add the following lines in frontend https-api to enable authentication and authorization for the CRD REST API, and create a new backend crd_api_servers to intercept CRD REST API requests:

mode http
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api

backend crd_api_servers
  mode http
  balance roundrobin
  option httpclose
  option abortonclose
  server qns01_A qns01:8080 check inter 30s
  server qns02_A qns02:8080 check inter 30s

Step 4: Update frontend https_all_servers by replacing api_servers with crd_api_servers for the CRD API as follows:

acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api

Step 5: To enable authentication, edit /etc/haproxy/haproxy.cfg on lb01/lb02 and add the following lines in the backend crd_api_servers, where <userlist name> is the userlist created in Step 2 (for example, cps_user_list):

acl validateAuth http_auth(<userlist name>)
http-request auth unless validateAuth

Step 6: To enable authorization, add at least one group with a user from the userlist created in Step 2, as follows:

group qns-ro users readonly

For example:

userlist cps_user_list
  group qns-ro users readonly
  user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
  user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1

Step 7: Add the following in the backend crd_api_servers to set read-only permission (GET HTTP operation) for a group of users:

acl authoriseUsers http_auth_group(<userlist name>) <group-name>
http-request deny if !METH_GET authoriseUsers

HAProxy Configuration Example:

userlist cps_user_list
  group qns-ro users readonly
  user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
  user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1

frontend https-api
  description API
  bind lbvip01:8443 ssl crt /etc/ssl/certs/quantum.pem
  mode http
  acl crd_api path_beg -i /custrefdata/
  use_backend crd_api_servers if crd_api
  default_backend api_servers
  reqadd X-Forwarded-Proto:\ https if { ssl_fc }

frontend https_all_servers
  description Unified API,CC,PB,Grafana,CRD-API,PB-API
  bind lbvip01:443 ssl crt /etc/ssl/certs/quantum.pem no-sslv3 no-tlsv10 ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
  mode http
  acl crd_api path_beg -i /custrefdata/
  use_backend crd_api_servers if crd_api

backend crd_api_servers
  mode http
  balance roundrobin
  option httpclose
  option abortonclose
  server qns01_A qns01:8080 check inter 30s
  server qns02_A qns02:8080 check inter 30s
  acl validateAuth http_auth(cps_user_list)
  acl authoriseUsers http_auth_group(cps_user_list) qns-ro
  http-request auth unless validateAuth
  http-request deny if !METH_GET authoriseUsers
By default, the CPS Unified API does not require username and password authentication. To enable authentication, refer to Enable Authentication for Unified API.
There are two options to include a username and password in an API request:
Include the username and password directly in the request. For example:
https://<username>:<password>@<lbvip02>:8443/ua/soap
Add an authentication header to the request:
Authorization: Basic <base64 encoded value of username:password>
For example:
wget -d -O - --header="Authorization: Basic cG9ydGFXNjbzEyMwo=" https://lbvip02:8443/ua/soap/keepalive
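The base64 value for the Authorization header can be generated from the shell; a sketch with illustrative credentials:

# Encode username:password for the Authorization header (-n avoids a trailing newline)
echo -n 'apiuser:password' | base64
# YXBpdXNlcjpwYXNzd29yZA==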
HAProxy is used to secure and balance calls to the CPS Unified API.
Step 1: Back up the /etc/haproxy/haproxy.cfg file before making modifications in the following steps.

Step 2: Edit /etc/haproxy/haproxy.cfg on lb01/lb02 and add a userlist with at least one username and password. Use the following syntax:

userlist <userlist name>
  user <username1> password <encrypted password>
  user <username2> insecure-password <plain text password>

For example:

userlist L1
  user apiuser password $6$eC8mFOWMcRnQo7FQ$C053tv5T2mPlmGAta0ukH87MpK9aLPtWgCEK

Step 3: Run the following command to generate an encrypted password:

/sbin/grub-crypt --sha-512

[root@host ~]# /sbin/grub-crypt --sha-512
Password:
Retype password:
<encrypted password output>

Step 4: Edit /etc/haproxy/haproxy.cfg on lb01/lb02 to configure HAProxy to require authentication. Add the following four lines to the haproxy.cfg file:

acl validateAuth http_auth(<userlist name>)
acl unifiedAPI path_beg -i /ua/soap
http-request allow if !unifiedAPI
http-request auth unless validateAuth

For example:

frontend https-api
  description Unified API
  bind lbvip01:8443 ssl crt /etc/ssl/certs/quantum.pem
  default_backend api_servers
  reqadd X-Forwarded-Proto:\ https if { ssl_fc }

backend api_servers
  mode http
  balance roundrobin
  option httpclose
  option abortonclose
  option httpchk GET /ua/soap/keepalive
  server qns01_A qns01:8080 check inter 30s
  server qns02_A qns02:8080 check inter 30s
  server qns03_A qns03:8080 check inter 30s
  server qns04_A qns04:8080 check inter 30s
  acl validateAuth http_auth(L1)
  acl unifiedAPI path_beg -i /ua/soap
  http-request allow if !unifiedAPI
  http-request auth unless validateAuth

The configuration above applies authentication to the context /ua/soap, which is the URL path of the Unified API.
In order to access the Unified API WSDL while using authentication, change the following line:
acl unifiedAPI path_beg -i /ua/soap
to
acl unifiedAPI path_beg -i /ua/.
The default address for the WSDL is https://<lbvip01>:8443/ua/wsdl/UnifiedApi.wsdl
The Unified API contains full documentation in an html format that is compatible with all major browsers.
The default address is https://<HA-server-IP>:8443/ua/wsdl/UnifiedApi.xsd
Note: Run the about.sh command from the Cluster Manager to display the actual addresses as configured in your deployment.
CPS 7.x onward uses HTTPS on port 8443 for Unified API access. To enable HTTP support (like pre-7.0) on port 8080, perform the following steps:
Note: Make sure to open port 8080 if a firewall is used on the setup.
Step 1: Create the following directories (ignore the "File exists" error) on the Cluster Manager:

/bin/mkdir -p /var/qps/env_config/modules/custom/templates/etc/haproxy
/bin/mkdir -p /var/qps/env_config/modules/custom/templates/etc/monit.d
/bin/mkdir -p /var/qps/env_config/nodes
Step 2: Create the file /var/qps/env_config/modules/custom/templates/etc/haproxy/haproxy-soaphttp.erb with the following contents on the Cluster Manager, where XXXX is the Unified API interface hostname or IP:

global
  daemon
  nbproc 1 # number of processing cores
  stats socket /tmp/haproxy-soaphttp

defaults
  timeout client 60000ms # maximum inactivity time on the client side
  timeout server 180000ms # maximum inactivity time on the server side
  timeout connect 60000ms # maximum time to wait for a connection attempt to a server to succeed
  log 127.0.0.1 local1 err

listen pcrf_proxy XXXX:8080
  mode http
  balance roundrobin
  option httpclose
  option abortonclose
  option httpchk GET /ua/soap/KeepAlive
  server qns01_A qns01:8080 check inter 30s
  server qns02_A qns02:8080 check inter 30s
  server qns03_A qns03:8080 check inter 30s
  server qns04_A qns04:8080 check inter 30s
  server qns05_A qns05:8080 check inter 30s
  server qns06_A qns06:8080 check inter 30s
  server qns07_A qns07:8080 check inter 30s
  server qns08_A qns08:8080 check inter 30s
  server qns09_A qns09:8080 check inter 30s
  server qns10_A qns10:8080 check inter 30s
Step 3: Create the file /var/qps/env_config/modules/custom/templates/etc/monit.d/haproxy-soaphttp with the following contents on the Cluster Manager:

check process haproxy-soaphttp with pidfile /var/run/haproxy-soaphttp.pid
  start = "/etc/init.d/haproxy-soaphttp start"
  stop = "/etc/init.d/haproxy-soaphttp stop"
Step 4: Create or modify the /var/qps/env_config/nodes/lb.yaml file with the following contents on the Cluster Manager (if the file already exists, just add the custom::soap_http: line):

classes:
  qps::roles::lb:
  custom::soap_http:
Step 5: Create the file /var/qps/env_config/modules/custom/manifests/soap_http.pp with the following contents on the Cluster Manager. Change ethX to the Unified API IP interface, such as eth0/eth1/eth2.

class custom::soap_http(
  $haproxytype = "-soaphttp",
) {
  service { "haproxy-soaphttp":
    enable => false,
    require => [Package["haproxy"], File["/etc/haproxy/haproxy-soaphttp.cfg"],
                File['/etc/init.d/haproxy-soaphttp'], Exec["sysctl_refresh"]],
  }

  file { "/etc/init.d/haproxy-soaphttp":
    owner => "root",
    group => "root",
    content => template('qps/etc/init.d/haproxy'),
    require => Package["haproxy"],
    notify => Service['haproxy-soaphttp'],
    mode => 0744
  }

  file { "/etc/haproxy/haproxy-soaphttp.cfg":
    owner => "root",
    group => "root",
    content => template('custom/etc/haproxy/haproxy-soaphttp.erb'),
    require => Package["haproxy"],
    notify => Service['haproxy-soaphttp'],
  }

  file { "/etc/monit.d/haproxy-soaphttp":
    content => template("custom/etc/monit.d/haproxy-soaphttp"),
    notify => Service["monit"],
  }

  exec { "remove ckconfig for haproxy-soaphttp":
    command => "/sbin/chkconfig --del haproxy-soaphttp",
    require => [Service['haproxy-soaphttp']],
  }

  firewall { '100 allow soap http':
    port => 8080,
    iniface => "ethX",
    proto => tcp,
    action => accept,
  }
}
Step 6: Validate the syntax of your newly created puppet script on the Cluster Manager:

/usr/bin/puppet parser validate /var/qps/env_config/modules/custom/manifests/soap_http.pp
Step 7: Rebuild your Environment Configuration on the Cluster Manager:

/var/qps/install/current/scripts/build/build_env_config.sh
Step 8: Reinitialize your lb01/lb02 environments from the Cluster Manager. The following commands take a few minutes to complete:

ssh lb01 /etc/init.d/vm-init
ssh lb02 /etc/init.d/vm-init
Step 9: Validate a SOAP request over HTTP.
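As a quick validation, the keepalive URL used by the health check in the configuration above can be requested over the new HTTP port; a sketch, where <api-host> is the Unified API interface configured in Step 2:

# Expect an HTTP 200 response once lb01/lb02 are reinitialized
curl -v http://<api-host>:8080/ua/soap/KeepAlive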
This section covers the following topics:
Cisco Policy Suite (CPS) is built around a distributed system that runs on a large number of virtualized nodes. Previous versions of the CPS software allowed operators to add custom accounts to each of these virtual machines (VM), but management of these disparate systems introduced a large amount of administrative overhead.
CPS has been designed to leverage the Terminal Access Controller Access Control System Plus (TACACS+) to facilitate centralized management of users. Leveraging TACACS+, the system is able to provide system-wide authentication, authorization, and accounting (AAA) for the CPS system.
Further the system allows users to gain different entitlements based on user role. These can be centrally managed based on the attribute-value pairs (AVP) returned on TACACS+ authorization queries.
To provide sufficient information for the Linux-based operating system running on the VM nodes, there are several attribute-value pairs (AVP) that must be associated with the user on the ACS server used by the deployment. User records on Unix-like systems need to have a valid “passwd” record for the system to operate correctly. Several of these fields can be inferred during the time of user authentication, but the remaining fields must be provided by the ACS server.
A standard “passwd” entry on a Unix-like system takes the following form:
<username>:<password>:<uid>:<gid>:<gecos>:<home>:<shell>
When authenticating the user via TACACS+, the software can assume values for the username, password, and gecos fields, but the others must be provided by the ACS server. To facilitate this, the system depends on the ACS server providing these AVPs when responding to a TACACS+ authorization query for a given username:
uid
A unique integer value greater than or equal to 501 that serves as the numeric user identifier for the TACACS+ authenticated user on the VM nodes. It is outside the scope of the CPS software to ensure uniqueness of these values.
gid
The group identifier of the TACACS+ authenticated user on the VM nodes. This value should reflect the role assigned to a given user, based on the following values:
gid=501 (qns-su)
This group identifier should be used for users that are entitled to attain super-user (or 'root') access on the CPS VM nodes.
gid=504 (qns-admin)
This group identifier should be used for users that are entitled to perform administrative maintenance on the CPS VM nodes.
Note: For stopping/starting the Policy Server (QNS) process on a node, the qns-admin user should use monit:
For example,
sudo monit stop qns-1
sudo monit start qns-1
gid=505 (qns-ro)
This group identifier should be used for users that are entitled to read-only access to the CPS VM nodes.
home
The user's home directory on the CPS VM nodes. To enable simpler management of these systems, the users should be configured with a pre-deployed shared home directory based on the role they are assigned with the gid.
shell
The system-level login shell of the user. This can be any of the installed shells on the CPS VM nodes, which can be determined by reviewing the contents of /etc/shells on one of the CPS VM nodes. Typically, this set of shells is available in a CPS deployment:
The /usr/bin/sudosh shell can be used to audit a user's activity on the system.
The user environment of the Linux-based VMs needs to be able to look up a user's passwd entry via different columns in that record at different times. However, the TACACS+ NSS module provided as part of the CPS solution is only able to query the Access Control Server (ACS) for this data using the username. For this reason, the system relies upon the Name Service Cache Daemon (NSCD) to provide this facility locally after a user has been authorized to use a service of the ACS server.
More details on the operations of NSCD can be found in the online help for the software (nscd --help) or in its man page (nscd(8)). Within the CPS solution, it provides a capability for the system to look up a user's passwd entry via their uid as well as by their username.
To avoid cache coherence issues with the data provided by the ACS server, the NSCD package has a mechanism for expiring cached information.
The default NSCD package configuration on the CPS VM nodes has the following characteristics:
Valid responses from the ACS server are cached for 600 seconds (10 minutes)
Invalid responses from the ACS server (user unknown) are cached for 20 seconds
Cached valid responses are reloaded from the ACS server 5 times before the entry is completely removed from the running set -- approximately 3000 seconds (50 minutes)
The cache is persisted locally, so it survives a restart of the NSCD process or the server
It is possible for an operator to explicitly expire the cache from the command line. To do so, the administrator needs shell access to the target VM and must execute the following command as the root user:
# nscd -i passwd
The above command invalidates all entries in the passwd cache and forces the VM to consult the ACS server for future queries.
There may be some unexpected behaviors of the user environment for TACACS+ authenticated users connected to the system when their cache entries are removed from NSCD. This can be corrected by the user by logging out of the system and logging back into it or by issuing the following command, which forces the system to query the ACS server:
# id -a “$USER”
Only qns-ro and qns-admin users are allowed to view log files at specific paths, according to their role and maintenance requirements. Access to logs is allowed only using the following paths:
Commands such as cat, less, more, and find cannot be executed using sudo in CPS 10.0.0 or higher releases.
To read any file, execute the following script using sudo:
$ sudo /var/qps/bin/support/logReader.py -r h -n 2 -f /var/log/puppet.log
where,
-r: Corresponds to tail (t), tailf (tf), and head (h) respectively.
-n: Determines the number of lines to be read. It works with the -r option. This is an optional parameter.
-f: Determines the complete file path to be read.
You use Custom Reference Data (CRD) APIs to query, create, delete, and update CRD table data without the need to utilize the Control Center interface. The CRD APIs are available via a REST interface.
These APIs allow maintenance of the actual data rows in the table. They do not allow the creation of new tables or the addition of new columns. Table creation and changes to the table structure must be completed via the Policy Builder application.
Table names must be all in lowercase alphanumeric to utilize these APIs. Neither spaces nor special characters are allowed in the table name.
The feature com.broadhop.custrefdata.service.feature needs to be installed on the Policy Server.
In a High Availability (HA)/Distributed CPS deployment, this feature should be installed on the QNS0x nodes.
The feature com.broadhop.client.feature.custrefdata needs to be installed in Policy Builder.
Step 1: Log in to Policy Builder.
Step 2: Select the Reference Data tab.
Step 3: From the left pane, select Systems.
Step 4: Select and expand your system name.
Step 5: Select Plugin Configurations (or a sub cluster or instance), where a Custom Reference Data Configuration plugin configuration is defined.
Step 6: In the Reference Data tab > Custom Reference Data Tables, at least one Custom Reference Data table must be defined.
The MongoDB database containing the CRD tables and the data is located in the MongoDB instance specified in the CRD plugin configuration.
The database is named cust_ref_data.
Two system collections exist in that database and do not actually contain CRD data:
system.indexes — used by MongoDB. These are indices set on the database.
crdversion — contains a document indicating the version of all the CRD tables you have defined. The version field increments by 1 every time you make a change or add data to any of your CRD tables.
A collection is created for each CRD table defined in Policy Builder.
This collection contains a document for each row you define in the CRD table.
Each document contains a field for each column you define in the CRD table.
The field contains the value specified for the column for that row in the table.
Additionally, there is an _id field, which contains the internal key used by MongoDB, and a _version field, which is used by CPS to provide optimistic locking protection on the document, essentially to avoid two threads overwriting each other's updates.
Setting Cache Results to true (checked) is the default and recommended setting in most cases, as it yields the best performance. Use of the cached copy also removes the dependency on the availability of the CRD database, so if there is an outage or performance issue, policy decisions utilizing the CRD data are not impacted.
The cached copy of the table is refreshed on CPS restart and whenever the API writes a change to the CRD table, otherwise the cached copy is used and the database is not accessed.
The URL used to access the CRD API are different depending on the type of deployment (High Availability or All-in-One):
High Availability (HA): https://<lbvip01>:8443/custrefdata/<tablename>/_<operation>
All-In-One (AIO): http://<ip>:8080/custrefdata/<tablename>/_<operation>
The examples in the following sections refer to the HA URL.
Returns all rows currently defined in the specified table.
GET
https://<lbvip01>:8443/custrefdata/test/_query
https://<lbvip01>:8443/custrefdata/test/_query?key1=Platinum
None, although parameters can be specified on the URL for filtering.
Success returns code 200 Ok; XML indicating rows defined is returned. If the table does not exist, code 400 Bad Request is returned.
<rows>
  <row>
    <field code="field1" value="1004"/>
    <field code="field2" value="testee"/>
    <field code="key1" value="Platinum"/>
  </row>
  <row>
    <field code="field1" value="1004"/>
    <field code="field2" value="testee"/>
    <field code="key1" value="Platinum99"/>
  </row>
  <row>
    <field code="field1" value="field1example1"/>
    <field code="field2" value="field2example1"/>
    <field code="key1" value="key1example1"/>
  </row>
  <row>
    <field code="field1" value="field1example2"/>
    <field code="field2" value="field2example2"/>
    <field code="key1" value="key1example2"/>
  </row>
</rows>
<rows>
  <row>
    <field code="field1" value="1004"/>
    <field code="field2" value="testee"/>
    <field code="key1" value="Platinum"/>
  </row>
</rows>
The response returns keys with the tag "field code". If you want to use the output of Query as input to one of the other APIs, the tag needs to be changed to "key code". Currently, using "field code" for a key returns code 404 Bad Request and a java.lang.NullPointerException.
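A query can be issued from the command line with curl; a sketch against the example table (-k skips certificate verification; credentials are only needed if authentication has been enabled as described earlier):

# All rows, then rows filtered by key
curl -k "https://<lbvip01>:8443/custrefdata/test/_query"
curl -k "https://<lbvip01>:8443/custrefdata/test/_query?key1=Platinum"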
Create a new row in the specified table.
POST
https://<lbvip01>:8443/custrefdata/test/_create
<row>
  <key code="key1" value="Platinum"/>
  <field code="field1" value="1004"/>
  <field code="field2" value="testee"/>
</row>
Success returns code 200 Ok; no data is returned. The key cannot already exist for another row; submission of a duplicate key returns code 400 Bad Request.
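Because Create is a POST, the payload and content type must be supplied explicitly; a curl sketch (the same pattern applies to _update and _delete; the apiuser credentials are illustrative and only needed if authentication is enabled):

# Create a row in the 'test' table; expect 200 Ok
curl -k -u apiuser:<password> -X POST \
  -H "Content-Type: application/xml" \
  -d '<row><key code="key1" value="Platinum"/><field code="field1" value="1004"/><field code="field2" value="testee"/></row>' \
  "https://<lbvip01>:8443/custrefdata/test/_create"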
Updates the row indicated by the key code in the table with the values specified for the field codes.
POST
https://<lbvip01>:8443/custrefdata/test/_update
<row>
  <key code="key1" value="Platinum"/>
  <field code="field1" value="1005"/>
  <field code="field2" value="tester"/>
</row>
Success returns code 200 Ok; no data is returned. The key cannot be changed. Any attempt to change the key returns code 404 Not Found.
Removes the row indicated by the key code from the table.
POST
https://<lbvip01>:8443/custrefdata/test/_delete
<row>
  <key code="key1" value="Platinum"/>
</row>
Success returns code 200 Ok; no data is returned. If the row to delete does not exist, code 404 Not Found is returned.
Determines whether the same CRD table data content is being used at different data centers.
tableName: Returns the checksum of the specified CRD table tableName, indicating whether there is any change in that table. If the value returned is the same on different servers, there is no change in the configuration and content of that table.
includeCrdversion: The total database checksum is a combination of the checksums of all CRD tables configured in Policy Builder. If this parameter is passed as true, the total database checksum also includes the checksum of the "crdversion" table. Default value is false.
orderSensitive: Calculates the checksum of the table by utilizing the order of the CRD table content. By default, it does not sort the row checksums of the table and returns an order-sensitive checksum for every CRD table. Default value is true.
Database-level API which indicates whether there is a change in any of the CRD tables. If the value returned is the same on different servers, there is no change in the configuration and content of any CRD table configured in Policy Builder.
GET
https://<lbvip01>:8443/custrefdata/_checksum
<response>
  <checksum><all-tables-checksum></checksum>
  <tables>
    <table name="<table-1-name>" checksum="<checksum-of-table-1>"/>
    <table name="<table-2-name>" checksum="<checksum-of-table-2>"/>
    <table name="<table-n-name>" checksum="<checksum-of-table-n>"/>
  </tables>
</response>
Table-specific API which indicates whether there is any change in the specified table. If the value returned is the same on different servers, there is no change in the configuration and content of that table.
GET
https://<lbvip01>:8443/custrefdata/_checksum?tableName=<user-provided-table-name>
<response>
  <tables>
    <table name="<user-provided-table-name>" checksum="<checksum-of-specified-table>"/>
  </tables>
</response>
Drops a custom reference table from MongoDB to avoid multiple stale tables in the system.
If a CRD table does not exist in Policy Builder but exists in the database, the API can be used to delete the table from the database.
If a CRD table exists in Policy Builder and the database, the API cannot delete the table from the database. If this is attempted, the API returns the error: "Not permitted to drop this table as it exists in Policy Builder".
If a CRD table does not exist in either Policy Builder or the database, the API returns the error: No table found:<tablename>.
GET
https://<lbvip01>:8443/custrefdata/<table_name>/_drop
Exports single or multiple CRD tables and their data.
Exports a single CRD table and its data.
Returns an archived file containing a csv file with the information of the specified CRD table table_name.
GET
https://<lbvip01>:8443/custrefdata/_export?tableName=<table_name>
Exports all CRD tables and their data.
Returns an archived file containing a csv file with the information for each CRD table.
GET
https://<lbvip01>:8443/custrefdata/_export
Imports CRD tables and their data.
It takes an archived file as input, which contains one or more csv files with CRD table information.
POST
Creates a snapshot of the CRD tables on the system. The created snapshot contains CRD table data, policy configuration, and checksum information for all CRD tables.
POST
https://<lbvip01>:8443/custrefdata/_snapshot?userId=<user_id>&userComments=<user_comments>
userComments
Enables you to get the list of all valid snapshots in the system.
GET
https://<lbvip01>:8443/custrefdata/_snapshot
<snapshots>
  <snapshot>
    <name><date-and-time>_<user-id></name>
    <snapshotPath>/var/broadhop/snapshot/20160620011825306_qns</snapshotPath>
    <creationDateAndTime>20/06/2016 01:18:25:306</creationDateAndTime>
    <comments>snapshot-1 june</comments>
    <policyVersion>903</policyVersion>
    <checksum checksum="60f51dfd4cd4554910da44a776c66db1">
      <table name=<table-name-1> checksum="<table-checksum-1>"/>
      …
      <table name=<table-name-n> checksum="<table-checksum-n>"/>
    </checksum>
  </snapshot>
  <snapshot>
    …
  </snapshot>
</snapshots>
Enables you to revert the CRD data to a specific snapshot. If a snapshot name is not provided, the API reverts to the latest snapshot.
POST
https://<lbvip01>:8443/custrefdata/_revert?snapshotName=<snapshot_name>
snapshotName
The Query API is a GET operation which is the default operation that occurs when entering a URL into a typical web browser.
The POST operations, Create, Update, and Delete, require the use of a REST client so that the payload and content type can be specified in addition to the URL. REST clients are available for most web browsers as plug-ins or as part of web service tools, such as SoapUI. The content type when using these clients should be specified as application/xml or the equivalent in the chosen tool.
You can view the API logs in the OAM (pcrfclient) VM at the following location:
/var/log/broadhop/consolidated-qns.log