CPS includes southbound interfaces to various policy control enforcement functions (PCEFs) in the network, and northbound interfaces to OSS/BSS and subscriber applications, IMSs, and web applications.
Cisco Control Center enables you to do these tasks:
HA: https://<lbvip01>:443
AIO: http://<ip>:8090
HTTPS/HTTP
There are two levels of administrative roles supported for Control Center: Full Privilege and View Only. The logins and passwords for these two roles are configurable in LDAP or in /etc/broadhop/authentication-password.xml.
Full Privilege Admin Users: These users can view, edit, and delete information and can perform all tasks. Admin users have access to all screens in Control Center.
View Only Admin Users: These users can view information in Control Center, but cannot edit or change information. View only administrators have access to a subset of screens in the interface.
The Custom Reference Data (CRD) REST API enables you to query, create, delete, and update CRD table data without accessing the Control Center GUI. The CRD APIs are available through an HTTP REST interface. The specific APIs are outlined in a later section of this guide.
HA: https://<lbvip01>:443/custrefdata
AIO: http://<ip>:8080/custrefdata
The validation URLs are:
HA: https://<lbvip01>:8443/custrefdata
AIO: http://<ip>:8080/custrefdata
HTTPS/HTTP
Security and account management are accomplished using the haproxy mechanism on the platform Policy Director (LB) by defining user lists, user groups, and specific users.
On Cluster Manager: /etc/puppet/modules/qps/templates/etc/haproxy/haproxy.cfg
Back up the /etc/haproxy/haproxy.cfg file.
Edit /etc/haproxy/haproxy.cfg on lb01/lb02 and add a userlist with at least one username and password as shown:
userlist <userlist name> user <username1> password <encrypted password>
For example:
userlist cps_user_list
user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
Run the following command to generate an encrypted password:
/sbin/grub-crypt --sha-512
For example:
[root@host ~]# /sbin/grub-crypt --sha-512
Password:
Retype password:
<encrypted password output>
Add the following line in frontend https-api to enable Authentication and Authorization for CRD REST API and create a new backend server as crd_api_servers to intercept CRD REST API requests:
mode http
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api

backend crd_api_servers
  mode http
  balance roundrobin
  option httpclose
  option abortonclose
  server qns01_A qns01:8080 check inter 30s
  server qns02_A qns02:8080 check inter 30s
Update frontend https_all_servers by replacing api_servers with crd_api_servers for CRD API as follows:
acl crd_api path_beg -i /custrefdata/
use_backend crd_api_servers if crd_api
Add at least one group with a user from the userlist created in Step 2, as follows:
group qns-ro users readonly
group qns users apiuser
Add the following lines to the backend crd_api_servers:
acl authoriseUsers http_auth_group(<cps-user-list>) <user-group>
http-request auth realm CiscoApiAuth if !authoriseUsers
Map the group created in Step 5 with the acl as follows:
acl authoriseUsers http_auth_group(<cps-user-list>) <user-group>
Add the following in the backend crd_api_servers to set read-only permission (GET HTTP operation) for a group of users:
http-request deny if !METH_GET authoriseUsers
HAProxy Configuration Example
userlist cps_user_list
  group qns-ro users readonly
  group qns users apiuser
  user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1
  user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1

frontend https-api
  description API
  bind lbvip01:8443 ssl crt /etc/ssl/certs/quantum.pem
  mode http
  acl crd_api path_beg -i /custrefdata/
  use_backend crd_api_servers if crd_api
  default_backend api_servers
  reqadd X-Forwarded-Proto:\ https if { ssl_fc }

frontend https_all_servers
  description Unified API,CC,PB,Grafana,CRD-API,PB-API
  bind lbvip01:443 ssl crt /etc/ssl/certs/quantum.pem no-sslv3 no-tlsv10 ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
  mode http
  acl crd_api path_beg -i /custrefdata/
  use_backend crd_api_servers if crd_api

backend crd_api_servers
  mode http
  balance roundrobin
  option httpclose
  option abortonclose
  server qns01_A qns01:8080 check inter 30s
  server qns02_A qns02:8080 check inter 30s
  acl authoriseReadonlyUsers http_auth_group(cps_user_list) qns-ro
  acl authoriseAdminUsers http_auth_group(cps_user_list) qns
  http-request auth realm CiscoApiAuth if !authoriseReadonlyUsers !authoriseAdminUsers
  http-request deny if !METH_GET authoriseReadonlyUsers
Note: The haproxy.cfg file is generated by the Puppet tool. Any manual changes to the file in lb01/lb02 would be reverted if the pupdate or vm-init scripts are run.
Grafana is a metrics dashboard and graph editor used to display graphical representations of system and application KPIs and bulkstats of various CPS components.
HA: https://<lbvip01>:9443/grafana
AIO: http://<ip>:443/grafana
HTTPS/HTTP
In CPS 7.5 and higher, at least one Grafana user account must be created to access the Grafana web interface.
In CPS 8.1 and higher, an administrative user account must be used to add, modify, or delete Grafana dashboards or perform other administrative actions.
Refer to the Graphite and Grafana chapter in this guide for details on adding or deleting these user accounts.
HAProxy is a frontend IP traffic proxy process on lb01/lb02 that routes the IP traffic for other applications in CPS. The individual ports that HAProxy forwards are described in other sections of this document.
Depending on the Diameter configuration, the haproxy-diameter statistics bind to one of the configured endpoints, and that URL is displayed in the about.sh output. For the various Diameter configuration options, refer to the Diameter Related Configuration section in the CPS Installation Guide for VMware.
More information about HAProxy is available in the HAProxy documentation at: http://www.haproxy.org/#docs
To view statistics, open a browser and navigate to the following URL:
Not applicable.
The Java Management Extensions (JMX) interface can be used for managing and monitoring applications and system objects.
Resources to be managed or monitored are represented by objects called managed beans (MBeans). An MBean represents a resource running in the JVM, and external applications can interact with MBeans through JMX connectors and protocol adapters to collect statistics (pull), to get or set application configuration (push/pull), and to receive notification of events such as faults or state changes (push).
External applications can be configured to monitor the application over JMX. In addition, the application provides scripts that connect to it over JMX and report the required statistics and information.
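For example, a standard JMX client can be pointed at the JMX port of a Policy Server VM to browse its MBeans. The host name below is a placeholder; 9045 is the qns JMX port listed below:

# Attach the JDK's graphical JMX client to a Policy Server (qns) VM
jconsole qns01:9045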
pcrfclient01/pcrfclient02:
lb01/lb02:
qns01/qns02/qns... : 9045
Ports should be blocked using a firewall to prevent access from outside the CPS system.
Not applicable.
Logstash is a process that consolidates the log events from CPS nodes into pcrfclient01/pcrfclient02 for logging and alarms. The logs are forwarded to the CPS application to raise the necessary alarms, and the logs are stored in /var/log/logstash/logstash.log.
There is no specific CLI interface for logstash.
TCP and UDP
TCP: 5544, 5545, 7546, 6514
UDP: 6514
Account and role management is not applicable.
In CPS 14.0.0 and higher releases, SSSD-based authentication is supported, allowing users to authenticate against an external LDAP server and gain access to the CPS CLI. The SSSD RPMs and the default sssd.conf file are installed on each CPS VM when you perform a new installation or upgrade of CPS.
For more information, refer to the CPS Installation Guide for VMware.
The /etc/monit.d/sssd file has been added with the following content so that SSSD is monitored by monit:
check process sssd with pidfile /var/run/sssd.pid
  start program = "/etc/init.d/sssd start" with timeout 30 seconds
  stop program = "/etc/init.d/sssd stop" with timeout 30 seconds
The /etc/logrotate.d/sssd file has also been added to rotate the SSSD log files. Here is the default configuration:
/var/log/sssd/*.log {
    daily
    missingok
    notifempty
    sharedscripts
    nodateext
    rotate 5
    size 100M
    compress
    delaycompress
    postrotate
        /bin/kill -HUP `cat /var/run/sssd.pid 2>/dev/null` 2> /dev/null || true
    endscript
}
Use the monit summary command to view the list of services managed by monit. Here is an example:
monit summary
The Monit daemon 5.17.1 uptime: 4d 2h 22m

Process 'whisper'                 Running
Process 'sssd'                    Running
Process 'snmptrapd'               Running
Process 'snmpd'                   Running
Program 'vip_trap'                Status ok
Program 'gr_site_status_trap'     Status ok
Process 'redis'                   Running
Process 'qns-4'                   Running
Process 'qns-3'                   Running
Process 'qns-2'                   Running
Process 'qns-1'                   Running
File 'monitor-qns-4'              Accessible
File 'monitor-qns-3'              Accessible
File 'monitor-qns-2'              Accessible
File 'monitor-qns-1'              Accessible
Process 'memcached'               Running
Process 'irqbalance'              Running
Process 'haproxy-diameter'        Running
Process 'haproxy'                 Running
Process 'cutter'                  Running
Process 'corosync'                Running
Program 'cpu_load_monitor'        Status ok
Program 'cpu_load_trap'           Status ok
Program 'gen_low_mem_trap'        Status ok
Process 'collectd'                Running
Process 'auditrpms.sh'            Running
System 'lb01'                     Running
Configuring other files to support LDAP-based authentication, and the changes required in the sssd.conf file for a particular customer deployment, are outside the scope of this document. For more information, consult your Cisco Technical Representative.
Grafana supports LDAP authentication over httpd and does not use the SSSD feature. As a result, if the LDAP server is down, Grafana is not accessible to LDAP users.
No CLI is provided.
Port number is not required.
Step 1 | Bypass the first level authentication by updating the /etc/httpd/conf.d/grafana-proxy.conf file as follows:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so

# Set root to <ip address>/grafana
ProxyPass /grafana http://127.0.0.1:3000
ProxyPassReverse /grafana http://127.0.0.1:3000

# Set authentication for Grafana
# 1) Use httpd authentication as a front-end to Grafana
# 2) Remove header since Grafana is configured for anonymous
#    authentication and will fail with a pass-thru header
#
# Notice: scope of authentication and header is limited to Grafana
# to avoid conflicts with other applications. Apache configuration
# in this file is global unless contained in the directive below.
<Location "/grafana">
    LoadModule headers_module modules/mod_headers.so
    Header set Access-Control-Allow-Origin "*"
    Header set Access-Control-Allow-Methods "GET, OPTIONS"
    Header set Access-Control-Allow-Headers "origin, authorization, accept"
    Header set Access-Control-Allow-Credentials true
    # Do not pass credentials to Grafana's anonymous authorization
    RequestHeader unset Authorization
    Satisfy Any
    #AuthName "Authentication Required"
    #AuthUserFile "/var/broadhop/.htpasswd"
    #Require valid-user
    #Order allow,deny
    # This is used for local calls to the API during puppet bring up
    Allow from 127.0.0.1
    #Satisfy Any
</Location>
Step 2 | Restart httpd by running the following command:
/etc/init.d/httpd restart
If port already in use error is displayed, execute the following steps: |
Step 3 | Update /etc/grafana/grafana.ini file to point to LDAP authentication instead of Basic Auth as follows:
#################################### Basic Auth ##########################
[auth.basic]
# For CPS, trusted API requests come here and need local authentication
;enabled = true

#################################### Auth LDAP ##########################
[auth.ldap]
enabled = true
config_file = /etc/grafana/ldap.toml
Step 4 | Modify /etc/grafana/ldap.toml file to provide LDAP details (for example, search base dn, bind dn, group search base dn, member_of attribute) as follows:
# Set to true to log user information returned from LDAP verbose_logging = true [[servers]] # Ldap server host (specify multiple hosts space separated) host = "ldap_l.cisco.com" # Default port is 389 or 636 if use_ssl = true port = 10648 # Set to true if ldap server supports TLS use_ssl = true # set to true if you want to skip ssl cert validation ssl_skip_verify = true # set to the path to your root CA certificate or leave unset to use system defaults #root_ca_cert = "/etc/openldap/certs/ldap_local.cer" # Search user bind dn bind_dn = "uid=admin,ou=system" # Search user bind password bind_password = 'secret' # User search filter, for example "(cn=%s)" or "(sAMAccountName=%s)" or "(uid=%s)" search_filter = "(uid=%s)" # An array of base dns to search through search_base_dns = ["ou=users,dc=sprint,dc=com"] #search_base_dns = ["ou=groups,dc=sprint,dc=com"] # In POSIX LDAP schemas, without memberOf attribute a secondary query must be made for groups. # This is done by enabling group_search_filter below. You must also set member_of= "cn" # in [servers.attributes] below. # Users with nested/recursive group membership and an LDAP server that supports LDAP_MATCHING_RULE_IN_CHAIN # can set group_search_filter, group_search_filter_user_attribute, group_search_base_dns and member_of # below in such a way that the user's recursive group membership is considered. # # Nested Groups + Active Directory (AD) Example: # # AD groups store the Distinguished Names (DNs) of members, so your filter must # recursively search your groups for the authenticating user's DN. For example: # # group_search_filter = "(member:1.2.840.113556.1.4.1941:=%s)" # group_search_filter_user_attribute = "distinguishedName" # group_search_base_dns = ["ou=groups,dc=grafana,dc=org"] # # [servers.attributes] # ... # member_of = "distinguishedName" ## Group search filter, to retrieve the groups of which the user is a member (only set if memberOf attribute is not available) #group_search_filter = "(cn=%s)" #group_search_filter = "(&(objectClass=*)(cn=%s))" ## Group search filter user attribute defines what user attribute gets substituted for %s in group_search_filter. ## Defaults to the value of username in [server.attributes] ## Valid options are any of your values in [servers.attributes] ## If you are using nested groups you probably want to set this and member_of in ## [servers.attributes] to "distinguishedName" group_search_filter_user_attribute = "cn" ## An array of the base DNs to search through for groups. Typically uses ou=groups group_search_base_dns = ["ou=groups,dc=sprint,dc=com"] #group_search_base_dns = ["cn=Roles,ou=groups,dc=sprint,dc=com"] # Specify names of the ldap attributes your ldap uses [servers.attributes] name = "cn" surname = "sn" username = "uid" member_of = "cn" email = "email" # Map ldap groups to grafana org roles [[servers.group_mappings]] group_dn = "cn=Admin,ou=groups,dc=sprint,dc=com" org_role = "Admin" # The Grafana organization database id, optional, if left out the default org (id 1) will be used # org_id = 1 [[servers.group_mappings]] group_dn = "cn=User,ou=groups,dc=sprint,dc=com" org_role = "Editor" #[[servers.group_mappings]] # If you want to match all (or no ldap groups) then you can use wildcard #group_dn = "*" #org_role = "Viewer" |
Step 5 | Restart Grafana server by running the following command: service grafana-server restart |
Step 6 | Log in to Grafana using LDAP user credentials. |
MongoDB is used to manage session storage efficiently and to address key requirements: low-latency reads and writes, high availability, multi-key access, and so on.
CPS supports different MongoDB deployment models depending on the CPS deployment type, such as AIO, HA, or Geo-Redundancy. Not all of the databases listed below may be used in your CPS deployment.
To see how the MongoDB logs are rotated on the Session Manager VM, open the MongoDB logrotate file by executing the following command:
cat /etc/logrotate.d/mongodb
The output is similar to the following:
{
    daily
    rotate 5
    copytruncate
    create 640 root root
    sharedscripts
    postrotate
    endscript
}
In the above configuration, the MongoDB logs are rotated daily, and the latest five backups of the log files are retained.
The standard definition for the supported replica sets is in the configuration file. This configuration file is self-explanatory and contains the replica set, set name, hostname, port number, data file path, and so on.
Location: /etc/broadhop/mongoConfig.cfg
Database Name | Port Number | Primary DB Host | Secondary DB Host | Arbiter | Purpose
---|---|---|---|---|---
session_cache | 27717 | sessionmgr01 | sessionmgr02 | pcrfclient01 | Session database
balance_mgmt | 27718 | sessionmgr01 | sessionmgr02 | pcrfclient01 | Quota/Balance database
audit | 27725 | sessionmgr01 | sessionmgr02 | pcrfclient01 | Reporting database
spr | 27720 | sessionmgr01 | sessionmgr02 | pcrfclient01 | USuM database
cust_ref_data | 27717 | sessionmgr01 | sessionmgr02 | pcrfclient01 | Custom Reference Data
Note: The port number configuration is based on what is configured in each of the respective Policy Builder plug-ins. Refer to the Plug-in Configuration chapter of the CPS Mobile Configuration Guide for the correct port numbers and the ports defined in the mongo configuration file.
The All-in-One deployment mongo database runs on ports 27017 and 27729.
Database Name | Port Number | Purpose
---|---|---
All | 27017 | This port is used for all the databases.
Ensure that the port is not in use by any other application. To check, log in to the VM on which the replica set is to be created and execute the following command:
netstat -lnp | grep <port_no>
If no process is using the same port, the port can be chosen for the replica set to bind to.
The port number used should be greater than 1024 and not in the ephemeral port range, that is, not within the following range:
net.ipv4.ip_local_port_range = 32768 to 61000
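For example, you can confirm the configured ephemeral range and verify that a candidate replica-set port is free before using it (the port 27730 below is only an illustration):

# Show the ephemeral port range configured on the VM
sysctl net.ipv4.ip_local_port_range

# Verify that the candidate port is not already bound; no output means it is free
netstat -lnp | grep 27730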
Use the following commands to access the mongoDB CLI:
HA:
Log in to pcrfclient01 or pcrfclient02 and run: diagnostics.sh --get_replica_status
This command outputs information about the databases configured in the CPS cluster.
AIO:
mongo --port 27017
Not applicable.
Not applicable.
Restrict MongoDB Access for Read-only Users: If the firewall is enabled on the system, then on all VMs an iptables rule is created for all read-only users to reject outgoing connections to the MongoDB replica sets.
For example, a rule similar to the following is created:
REJECT tcp -- anywhere sessionmgr01 tcp dpt:27718 owner GID match qns-ro reject-with icmp-port-unreachable
With this rule, the qns-ro user has restricted MongoDB access on sessionmgr01 on port 27718. Such rules are added for all read-only users who are part of the qns-ro group, for all replica sets.
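To confirm that the restriction is in place on a given VM, you can list the outgoing iptables rules and filter for the replica-set port (27718 matches the example rule above):

# Requires root; shows the reject rule created for the qns-ro group, if present
iptables -L OUTPUT -n -v | grep 27718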
CPS is based on the Open Service Gateway initiative (OSGi), and the OSGi console is a command-line shell that can be used for analyzing problems at the OSGi layer of the application.
Use the following command to access the OSGi console:
telnet <ip> <port>
The following commands can be executed on the OSGi console:
ss : List installed bundle status.
start <bundle-id> : Start the bundle.
stop <bundle-id> : Stop the bundle.
diag <bundle-id> : Diagnose the bundle.
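A typical interactive session might look like the following; the host name is a placeholder, 9091 is the qns OSGi port listed later in this section, and the bundle ID is only an example:

telnet qns01 9091
osgi> ss              # list installed bundles and their states
osgi> diag 143        # print diagnostic information for bundle 143
osgi> stop 143        # stop the bundle
osgi> start 143       # start it again
osgi> disconnect      # leave the console without stopping the framework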
Use the following OSGi commands to add or remove shards:
Command | Description
---|---
listshards | Lists all the shards.
removeshard <shard id> | Marks the shard for removal. If the shard is a non-backup shard, a rebalance is required for the shard to be removed fully. If the shard is a backup shard, a session rebalance is not required and the shard is removed immediately.
rebalance <rate limit> | Rebalances the buckets and migrates sessions with the specified rate limit. The rate limit is optional; if it is passed, it is applied during the rebalance.
rebalancebg <rate limit> | Rebalances the buckets and schedules a background task to migrate sessions. The rate limit is optional; if it is passed, it is applied during the rebalance.
rebalancestatus | Displays the current rebalance status.
rebuildAllSkRings | In order for CPS to identify a stale session from the latest session, the secondary key mapping for each site stores the primary key in addition to the bucket ID and the site ID, that is, Secondary Key = <Bucket Id>; <Site Id>; <Primary Key>. To enable this feature, add the flag -Dcache.config.version=1 in the /etc/broadhop/qns.conf file. Enabling this flag and running rebuildAllSkRings starts the data migration for the new version so that CPS can load the latest version of the session.
skRingRebuildStatus | Displays the status of the migration and the current cache version.
Use the following OSGi command to get the information related to open application alarms in CPS:
Command | Description
---|---
listalarms | Lists the open/active application alarms since the last restart of the Policy Server (QNS) process on the pcrfclient01/02 VM.
Example:
osgi> listalarms
Active Application Alarms
id=1000 sub_id=3001 event_host=lb02 status=down date=2017-11-22,10:47:34,051+0000
msg="3001:Host: site-host-gx Realm: site-gx-client.com is down"
id=1000 sub_id=3001 event_host=lb02 status=down date=2017-11-22,10:47:34,048+0000
msg="3001:Host: site-host-sd Realm: site-sd-client.com is down"
id=1000 sub_id=3001 event_host=lb01 status=down date=2017-11-22,10:45:17,927+0000
msg="3001:Host: site-server Realm: site-server.com is down"
id=1000 sub_id=3001 event_host=lb02 status=down date=2017-11-22,10:47:34,091+0000
msg="3001:Host: site-host-rx Realm: site-rx-client.com is down"
id=1000 sub_id=3002 event_host=lb02 status=down date=2017-11-22,10:47:34,111+0000
msg="3002:Realm: site-server.com:applicationId: 7:all peers are down"
pcrfclientXX:
lbXX:
qnsXX: 9091
Ports should be blocked using a firewall to prevent access from outside the CPS cluster.
Not applicable.
Policy Builder is the web-based client interface for the configuration of policies in Cisco Policy Suite.
HA: https://<lbvip01>:7443/pb
AIO: http://<ip>:7070/pb
HTTPS/HTTP
Initial accounts are created during the software installation. Refer to the CPS Operations Guide for commands to add users and change passwords.
Allows initial investigation into a Proof of Concept API for managing a CPS system and related Custom Reference Data through an HTTPS-accessible JSON API.
This is an HTTPS/Web interface and has no Command Line Interface.
API: http://<Cluster Manager IP>:8458
Documentation: http://<Cluster Manager IP>:7070/doc/index.html
Initial accounts are created during the software installation. Refer to the CPS Operations Guide for commands to add users and change passwords.
Enhanced log processing is provided using Rsyslog.
Rsyslog logs Operating System (OS) data locally (/var/log/messages etc.) using the /etc/rsyslog.conf and /etc/rsyslog.d/*conf configuration files.
Rsyslog outputs all WARN level logs on CPS VMs to the /var/log/warn.log file.
On all nodes, Rsyslog forwards the OS system log data to lbvip02 via UDP over the port defined in the logback_syslog_daemon_port variable as set in the CPS deployment template (Excel spreadsheet). To download the most current CPS Deployment Template (/var/qps/install/current/scripts/deployer/templates/QPS_deployment_config_template.xlsm), refer to the CPS Installation Guide for VMware or CPS Release Notes for this release.
Additional information is available in the Logging chapter of the CPS Troubleshooting Guide. Refer also to http://www.rsyslog.com/doc/ for the Rsyslog documentation.
Not applicable.
UDP
6514
Account and role management is not applicable.
CPS provides the ability to configure forwarding of consolidated syslogs from rsyslog-proxy on Policy Director VMs to remote syslog servers (refer to CPS Installation Guide for VMware). However, if additional customizations are made to rsyslog configuration to forward logs to external syslog servers in customer's network for monitoring purposes, such forwarding must be performed via dedicated action queues in rsyslog. In the absence of dedicated action queues, when rsyslog is unable to deliver a message to the remote server, its main message queue can fill up which can lead to severe issues, such as, preventing SSH logging, which in turn can prevent SSH access to the VM.
Sample configuration for dedicated action queues is available in the Logging chapter of the CPS Troubleshooting Guide. Refer to rsyslog documentation on http://www.rsyslog.com/doc/v5-stable/concepts/queues.html for more details about action queues.
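As an illustration only (the remote server address and queue sizes are placeholders, and the authoritative sample is in the CPS Troubleshooting Guide), a dedicated action queue in rsyslog's legacy directive syntax could look like this:

# Forward consolidated syslog to an external collector through a dedicated,
# disk-assisted action queue so an unreachable remote cannot fill the main queue.
$ActionQueueType LinkedList          # in-memory linked-list queue for this action
$ActionQueueFileName remoteFwd       # spool file prefix; enables disk assistance
$ActionQueueMaxDiskSpace 1g          # cap spool usage on disk
$ActionQueueSaveOnShutdown on        # persist queued messages across restarts
$ActionResumeRetryCount -1           # retry forever instead of discarding the action
*.* @@203.0.113.10:514               # @@ = forward over TCP to the external server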
Apache™ Subversion (SVN) is the versioning and revision control system used within CPS. It maintains all the CPS policy configurations and has repositories in which files can be created, updated, and deleted. SVN maintains the file differences each time a change is made to a file on the server, and for each change it generates a revision number.
In general, most interactions with SVN are performed via Policy Builder.
Use the following commands to access SVN:
From a remote machine with the SVN client installed, use the following commands to access SVN:
Get all files from the server:
svn checkout --username <username> --password <password> <SVN Repository URL> <Local Path>
Example:
svn checkout --username broadhop --password broadhop http://pcrfclient01/repos/configuration/root/configuration
If <Local Path> is not provided, files are checked out to the current directory.
Store/check-in the changed files to the server:
svn commit --username <username> --password <password> <Local Path> -m "modified config"
Example:
svn commit --username broadhop --password broadhop /root/configuration -m "modified config"
Update local copy to latest from SVN:
svn update <Local Path>
Example:
svn update /root/configuration/
Check current revision of files:
svn info <Local Path>
Example:
svn info /root/configuration/
Note: Use svn --help for a list of other commands.
HTTP
80
From the pcrfclient01 VM, run adduser.sh to create a new user.
/var/qps/bin/support/adduser.sh
Note: This command can also be run from the Cluster Manager VM, but you must include the OAM (PCRFCLIENT) option: /var/qps/bin/support/adduser.sh pcrfclient
Example:
[root@pcrfclient01 /]# /var/qps/bin/support/adduser.sh
Enter username: <username>
Enter group for the user: <any group>
Enter password:
Re-enter password:
By default, the adduser.sh script creates a new user with read-only permissions. For read-write permission, you must assign the user to the qns-svn group and then run the vm-init command.
From the pcrfclient01 VM, run the adduser.sh script to create the new user.
Run the following command on both pcrfclient01 and pcrfclient02 VMs:
/etc/init.d/vm-init
You can now login and commit changes as the newly created user.
From the pcrfclient01 VM, run the change_passwd.sh script to change the password of a user.
/var/qps/bin/support/change_passwd.sh
Example:
[root@pcrfclient01 /]# /var/qps/bin/support/change_passwd.sh
Enter username whose password needs to be changed: user1
Enter new password:
Re-enter new password:
Perform all of the following commands on both the pcrfclient01 and pcrfclient02 VMs.
Use the htpasswd utility to add a new user
htpasswd -mb /var/www/svn/.htpasswd <username> <password>
Example:
htpasswd -mb /var/www/svn/.htpasswd user1 password
In some versions, the password file is /var/www/svn/password
Update the user role file /var/www/svn/users-access-file and add the username under admins (for read/writer permissions) or nonadmins (for read-only permissions). For example:
[groups]
admins = broadhop
nonadmins = read-only, user1

[/]
@admins = rw
@nonadmins = r
Use the htpasswd utility to change passwords.
htpasswd -mb /var/www/svn/.htpasswd <username> <password>
Example:
htpasswd -mb /var/www/svn/.htpasswd user1 password
CPS 7.0 and above has been designed to leverage the Terminal Access Controller Access Control System Plus (TACACS+) to facilitate centralized management of users. Leveraging TACACS+, the system is able to provide system-wide authentication, authorization, and accounting (AAA) for the CPS system.
Further the system allows users to gain different entitlements based on user role. These can be centrally managed based on the attribute-value pairs (AVP) returned on TACACS+ authorization queries.
No CLI is provided.
CPS communicates to the AAA backend using IP address/port combinations configured by the operator.
Configuration is managed by the Cluster Management VM which deploys the /etc/tacplus.conf and various PAM configuration files to the application VMs. For more account management information, refer to TACACS+ Service Requirements.
For more information about TACACS+, refer to the following links:
TACACS+ Protocol Draft: http://tools.ietf.org/html/draft-grant-tacacs-02
Portions of the solution reuse software from the open source pam_tacplus project hosted at: https://github.com/jeroennijhof/pam_tacplus
For information on CLI commands, refer to Accessing the CPS CLI.
Unified APIs are used to reference customer data table values.
HA: https://<lbvip01>:8443/ua/soap
AIO: http://<ip>:8080/ua/soap
HTTPS/HTTP
Currently there is no authorization for this API
sudo supports a plugin architecture for security policies and input/output logging. The default security policy is sudoers, which is configured via the file /etc/sudoers and contains the rules that users must follow when using the sudo command.
sudo allows a system administrator to delegate authority to give certain users (or groups of users) the ability to run some (or all) commands as root or another user while providing an audit trail of the commands and their arguments.
For example: %adm ALL=(ALL) NOPASSWD: ALL
This means that any user in the administrator group on any host may run any command as any user without a password. The first ALL refers to hosts, the second to target users, and the last to allowed commands.
When an authenticated user has one of the above group permissions, they can access the CPS CLI and run predefined commands available to that user role. A list of commands available after authentication can be viewed using the sudo -l command (-l for list), or any user with root privileges can use sudo -l -U <qns-role> to see the available command for a specific Policy Server (qns) role.
The /etc/sudoers file contains user specifications that define the commands that users may execute. When sudo is invoked, these specifications are checked in order, and the last match is used. A user specification looks like this at its most basic:
User Host = (Runas) Command
Read this as "User may run Command as the Runas user on Host". Any or all of the above may be the special keyword ALL, which always matches. User and Runas may be usernames, group names prefixed with %, numeric UIDs prefixed with #, or numeric GIDs prefixed with %#. Host may be a hostname, IP address, or a whole network (for example, 192.0.2.0/24), but not 127.0.0.1.
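For illustration only (the username, hosts, and command are hypothetical), a user specification that lets an operator run a single command as root on the load balancer VMs could look like this:

# jsmith may run only the haproxy init script, as root, on lb01 and lb02
jsmith lb01,lb02 = (root) /etc/init.d/haproxy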
The group identifier of the TACACS+ authenticated user on the VM nodes. This value should reflect the role assigned to a given user, based on the following values:
group id=500 (qns)
The group identifier used by Policy Server (qns) user in application.
group id=501 (qns-su)
This group identifier should be used for users that are entitled to attain superuser (or 'root') access on the CPS VM nodes.
group id=504 (qns-admin)
This group identifier should be used for users that are entitled to perform administrative maintenance on the CPS VM nodes.
Note: To execute administrative scripts from qns-admin, prefix the command with sudo. For example: sudo stopall.sh
group id=505 (qns-ro)
This group identifier should be used for users that are entitled to read-only access to the CPS VM nodes.
When an authenticated user has one of the above group permissions, they can access the CPS CLI and run predefined commands available to that user role. A list of commands available after authentication can be viewed using the sudo -l command (-l for list), or any user with root privileges can use sudo -l -U <qns-role> to see the available command for a specific Policy Server (qns) role.
For more information, refer to https://www.sudo.ws/intro.html.
The user's home directory on the CPS VM nodes. To enable simpler management of these systems, the users should be configured with a pre-deployed shared home directory based on the role they are assigned with the gid.
Multiple users can be logged into Policy Builder at the same time.
In the event that two users attempt to make changes on the same screen and one user saves their changes to the client repository, the other user may receive errors. In such cases, the user must return to the login page, revert the configuration, and repeat their changes.
This section covers the following topics:
Step 1 | Log in to the Cluster Manager. |
Step 2 | Add a user to CPS by executing: adduser.sh |
Step 3 | When prompted for the user's group, set 'qns-svn' for read-write permissions or 'qns-ro' for read-only permissions. Refer to CPS Commands for more information about these commands. |
The user can revert the configuration if changes since the last publish/save to client repository are not wanted.
This can also be necessary in the case of an 'svn conflict' error, where both pcrfclient01 and pcrfclient02 are in use at the same time by different users and publish/save to client repository changes to the same file. The effect of reverting changes is that all changes since the last publish/save to client repository will be undone.
Step 1 | On the Policy Builder login screen, verify the user for which changes need to be reverted is correct. This can be done by clicking Edit and verifying that the Username and Password fields are correct. |
Step 2 | Click Revert. The following confirmation dialog opens. |
Step 3 | Click OK to revert back to the earlier configuration. The following dialog confirms that the changes are reverted successfully. |
This section describes publishing Cisco Policy Builder data to the Cisco Policy Server. Publishing data occurs in the Cisco Policy Builder client interface, but affects the Cisco Policy Server. Refer to the CPS Mobile Configuration Guide for steps to publish data to the server.
Cisco Policy Builder manages data stored in two areas:
The Client Repository stores data captured from the Policy Builder GUI in Subversion. This is a place where trial configurations can be developed and saved without affecting the operation of the Cisco Policy Builder server data.
The default URL is http://pcrfclient01/repos/configuration.
The Server Repository is where a copy of the client repository is created/updated and where the CPS picks up changes. This is done on Publish from Policy Builder.
Note: Publishing will also do a Save to Client Repository to ensure the Policy Builder and Server configurations are not out of sync.
The default URL is http://pcrfclient01/repos/run.
After the installation is complete, you need to configure the Control Center access. This is designed to give the customer a customized Control Center username.
Step 1 | Login to the Cluster Manager VM. | ||
Step 2 | Execute the following script to add a Control Center user:
/var/qps/bin/support/adduser.sh
Example:
/var/qps/bin/support/adduser.sh
Enter username: username
Enter group for the user: groupname
Enter password: password
Re-enter password: password
This example adds username to all the VMs in the cluster. |
This section describes how to update the Control Center mapping of read-write/read-only permissions to user groups (default: qns and qns-ro, respectively).
Step 1 | Login to the Cluster Manager VM. | ||
Step 2 | Update /etc/broadhop/authentication-provider.xml to include the group mapping for the group you want to use. |
Step 3 | Run syncconfig.sh to put this file on all VMs. | ||
Step 4 | Restart the CPS system so that the changes done above are reflected in the VMs:
restartall.sh
To add a new user to Control Center and specify the group you have specified in the configuration file above, refer to Add a Control Center User. |
CPS Control Center supports session limits per user. If the user exceeds the configured session limit, they are not allowed to log in. CPS also provides notifications to the user when other users are already logged in.
When a user logs in to Control Center, a Welcome message displays at the top of the screen. A session counter is shown next to the username. This represents the number of login sessions for this user. In the following example, this user is logged in only once ( [1] ).
The user can click the session counter ([1]) link to view details for the session(s), as shown below.
When another user is already logged in with the same username, a notification displays for the second user in the bottom right corner of the screen, as shown below.
The first user also receives a notification, as shown, and the session counter is updated to [2].
These notifications are not displayed in real time; CPS updates this status every 30 seconds.
The session limit can be configured with a runtime argument in the qns.conf file.
-Dcc.user.session.limit=3 (default value is 5)
The default session timeout can be changed by editing the following file on the Policy Server (QNS) instance:
./opt/broadhop/qns-1/plugins/com.broadhop.ui_3.5.0.release/war/WEB-INF/web.xml

<!-- timeout after 15 mins of inactivity -->
<session-config>
    <session-timeout>15</session-timeout>
    <cookie-config>
        <http-only>true</http-only>
    </cookie-config>
</session-config>
Note: The same timeout value must be entered on all Policy Server (QNS) instances. When the number of sessions of the user exceeds the session limit, the user is not allowed to log in and receives the message "Max session limit per user exceed!"
If a user does not log out and then closes their browser, the session remains alive on the server until the session times out. When the session timeout occurs, the session is deleted from the memcached server. The default session timeout is 15 minutes. This is the idle time after which the session is automatically deleted.
When a Policy Server (QNS) instance is restarted, all user/session details are cleared.
When the memcached server is restarted without also restarting the Policy Server (QNS) instance, all http sessions on the Policy Server (QNS) instance are invalidated. In this case the user is asked to log in again and after that, the new session is created.
Update the HAProxy configuration to enable the authentication and authorization mechanism in the CRD API module.
Step 1 | Back up the /etc/haproxy/haproxy.cfg file before making modifications in the following steps. | ||
Step 2 | Edit /etc/haproxy/haproxy.cfg on lb01/lb02 and add a userlist with at least one username and password.
Use the following syntax: userlist <userlist name> user <username1> password <encrypted password> For example: userlist cps_user_list user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1 user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1 Run the following command to generate an encrypted password: /sbin/grub-crypt --sha-512 [root@host ~]# /sbin/grub-crypt --sha-512 Password: Retype password: <encrypted password output> | ||
Step 3 | Add the following line in frontend https-api to enable Authentication and Authorization for CRD REST API and create a new backend server as crd_api_servers to intercept CRD REST API requests:
mode http acl crd_api path_beg -i /custrefdata/ use_backend crd_api_servers if crd_api backend crd_api_servers mode http balance roundrobin option httpclose option abortonclose server qns01_A qns01:8080 check inter 30s server qns02_A qns02:8080 check inter 30s | ||
Step 4 | Update frontend https_all_servers by replacing api_servers with crd_api_servers for CRD API as follows:
acl crd_api path_beg -i /custrefdata/ use_backend crd_api_servers if crd_api | ||
Step 5 | To enable the authentication, edit /etc/haproxy/haproxy.cfg on lb01/lb02 and add the following lines in the backend crd_api_servers:
acl validateAuth http_auth(<userlist_name>) http-request auth unless validateAuth acl validateAuth http_auth(<userlist name>) | ||
Step 6 | To enable the authorization, add at least one group with the user in the userlist created in Step 2 as follows:
group qns-ro users readonly userlist cps_user_list group qns-ro users readonly user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1 user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrmeAUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1 | ||
Step 7 | Add the following in the backend crd_api_servers to set read-only permission (GET HTTP operation) for a group of users:
acl authoriseUsers http_auth_group(<user-list-name>) <group-name> http-request deny if !METH_GET authoriseUsers acl authorizeUsers http_auth_group(<userlist name>) <group-name> Example:HAProxy Configuration Example userlist cps_user_list group qns-ro users readonly user readonly password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrme AUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1 user apiuser password $6$xRtThhVpS0w4lOoS$pyEM6VYpVaUAxO0Pjb61Z5eZrme AUUdCMF7D75BXKbs4dhNCbXjgChVE0ckfLDp4T2CsUzzNkoqLRdn7RbAAU1 frontend https-api description API bind lbvip01:8443 ssl crt /etc/ssl/certs/quantum.pem mode http acl crd_api path_beg -i /custrefdata/ use_backend crd_api_servers if crd_api default_backend api_servers reqadd X-Forwarded-Proto:\ https if { ssl_fc } frontend https_all_servers description Unified API,CC,PB,Grafana,CRD-API,PB-API bind lbvip01:443 ssl crt /etc/ssl/certs/quantum.pem no-sslv3 no-tlsv10 ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:! aNULL:!eNULL:!LOW:! 3DES:!MD5:!EXP:!PSK:!SRP:!DSS mode http acl crd_api path_beg -i /custrefdata/ use_backend crd_api_servers if crd_api backend crd_api_servers mode http balance roundrobin option httpclose option abortonclose server qns01_A qns01:8080 check inter 30s server qns02_A qns02:8080 check inter 30s acl validateAuth http_auth(cps_user_list) acl authoriseUsers http_auth_group(cps_user_list) qns-ro http-request auth unless validateAuth http-request deny if !METH_GET authoriseUsers
|
By default, the CPS Unified API does not require username and password authentication. To enable authentication, refer to Enable Authentication for Unified API.
There are two options to include a username and password in an API request:
Include the username and password directly in the request. For example:
https://<username>:<password>@<lbvip02>:8443/ua/soap
Add an authentication header to the request:
Authorization: Basic <base64 encoded value of username:password>
For example:
wget -d -O - --header="Authorization: Basic cG9ydGFXNjbzEyMwo=" https://lbvip02:8443/ua/soap/keepalive
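The base64 value in the Authorization header is simply the encoding of username:password. For example (the credentials are placeholders):

# Generate the value for the "Authorization: Basic" header
echo -n 'apiuser:cisco123' | base64
# Output: YXBpdXNlcjpjaXNjbzEyMw==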
HAProxy is used to secure and balance calls to the CPS Unified API.
Step 1 | Back up the /etc/haproxy/haproxy.cfg file before making modifications in the following steps. | ||
Step 2 | Edit /etc/haproxy/haproxy.cfg on lb01/lb02 and add a userlist with at least one username and password.
Use the following syntax: userlist <userlist name> user <username1> password <encrypted password> user <username2> insecure-password <plain text password> For example: userlist L1 user apiuser password $6$eC8mFOWMcRnQo7FQ$C053tv5T2mPlmGAta0ukH87MpK9aLPtWgCEK | ||
Step 3 | Run the following command to generate an encrypted password:
/sbin/grub-crypt --sha-512 [root@host ~]# /sbin/grub-crypt --sha-512 Password: Retype password: <encrypted password output> | ||
Step 4 | Edit /etc/haproxy/haproxy.cfg on lb01/lb02 to configure HAProxy to require authentication. Add the following 4 lines to the haproxy.cfg file:
acl validateAuth http_auth(<userlist_name>) acl unifiedAPI path_beg -i /ua/soap http-request allow if !unifiedAPI http-request auth unless validateAuth acl validateAuth http_auth(<userlist name>) frontend https-api description Unified API bind lbvip01:8443 ssl crt /etc/ssl/certs/quantum.pem default_backend api_servers reqadd X-Forwarded-Proto:\ https if { ssl_fc } backend api_servers mode http balance roundrobin option httpclose option abortonclose option httpchk GET /ua/soap/keepalive server qns01_A qns01:8080 check inter 30s server qns02_A qns02:8080 check inter 30s server qns03_A qns03:8080 check inter 30s server qns04_A qns04:8080 check inter 30s acl validateAuth http_auth(L1) acl unifiedAPI path_beg -i /ua/soap http-request allow if !unifiedAPI http-request auth unless validateAuth The configuration above applies authentication on context /ua/soap, which is the URL path of the Unified API.
|
In order to access the Unified API WSDL while using authentication change the following line:
acl unifiedAPI path_beg -i /ua/soap
to
acl unifiedAPI path_beg -i /ua/.
The default address for the WSDL is https://<lbvip01>:8443/ua/wsdl/UnifiedApi.wsdl
The Unified API contains full documentation in an html format that is compatible with all major browsers.
The default address is https://<HA-server-IP>:8443/ua/wsdl/UnifiedApi.xsd
Note: Run the about.sh command from the Cluster Manager to display the actual addresses as configured in your deployment.
CPS 7.x onward uses HTTPS on port 8443 for Unified API access. To enable HTTP support (like pre-7.0) on port 8080, perform the following steps:
Note: Make sure to open port 8080 if a firewall is used on the setup.
Step 1 | Create the following directories (ignore File exists error), on Cluster Manager:
/bin/mkdir -p /var/qps/env_config/modules/custom/templates/etc/haproxy /bin/mkdir -p /var/qps/env_config/modules/custom/templates/etc/monit.d /bin/mkdir -p /var/qps/env_config/nodes |
Step 2 | Create the file /var/qps/env_config/modules/custom/templates/etc/haproxy/haproxy-soaphttp.erb with the following contents on Cluster Manager:
global daemon nbproc 1 # number of processing cores stats socket /tmp/haproxy-soaphttp defaults timeout client 60000ms # maximum inactivity time on the client side timeout server 180000ms # maximum inactivity time on the server side timeout connect 60000ms # maximum time to wait for a connection attempt to a server to succeed log 127.0.0.1 local1 err listen pcrf_proxy XXXX:8080 ----------- > where, XXXX, is Unified API interface hostname or IP mode http balance roundrobin option httpclose option abortonclose option httpchk GET /ua/soap/KeepAlive server qns01_A qns01:8080 check inter 30s server qns02_A qns02:8080 check inter 30s server qns03_A qns03:8080 check inter 30s server qns04_A qns04:8080 check inter 30s server qns05_A qns05:8080 check inter 30s server qns06_A qns06:8080 check inter 30s server qns07_A qns07:8080 check inter 30s server qns08_A qns08:8080 check inter 30s server qns09_A qns09:8080 check inter 30s server qns10_A qns10:8080 check inter 30s |
Step 3 | Create the file /var/qps/env_config/modules/custom/templates/etc/monit.d/haproxy-soaphttp with the following contents on Cluster Manager:
check process haproxy-soaphttp with pidfile /var/run/haproxy-soaphttp.pid start = "/etc/init.d/haproxy-soaphttp start" stop = "/etc/init.d/haproxy-soaphttp stop" |
Step 4 | Create or modify the /var/qps/env_config/nodes/lb.yaml file with the following contents on Cluster Manager:
If the file exists then just add custom::soap_http:
classes:
  qps::roles::lb:
  custom::soap_http: |
Step 5 | Create the file /var/qps/env_config/modules/custom/manifests/soap_http.pp with the following contents on Cluster Manager.
Change ethX with the Unified API IP interface like eth0/eth1/eth2. class custom::soap_http( $haproxytype = "-soaphttp", ) { service { "haproxy-soaphttp": enable => false, require => [Package [ "haproxy" ],File ["/etc/haproxy/haproxy-soaphttp.cfg"], File['/etc/init.d/haproxy-soaphttp'], Exec["sysctl_refresh"]], } file { "/etc/init.d/haproxy-soaphttp": owner => "root", group => "root", content => template('qps/etc/init.d/haproxy'), require => Package [ "haproxy" ], notify => Service['haproxy-soaphttp'], mode => 0744 } file { "/etc/haproxy/haproxy-soaphttp.cfg": owner => "root", group => "root", content => template('custom/etc/haproxy/haproxy-soaphttp.erb'), require => Package [ "haproxy" ], notify => Service['haproxy-soaphttp'], } file { "/etc/monit.d/haproxy-soaphttp": content => template("custom/etc/monit.d/haproxy-soaphttp"), notify => Service["monit"], } exec { "remove ckconfig for haproxy-soaphttp": command => "/sbin/chkconfig --del haproxy-soaphttp", require => [Service['haproxy-soaphttp']], } firewall { '100 allow soap http': port => 8080, iniface => "ethX", proto => tcp, action => accept, } } |
Step 6 | Validate the syntax of your newly created Puppet script on Cluster Manager:
/usr/bin/puppet parser validate /var/qps/env_config/modules/custom/manifests/soap_http.pp |
Step 7 | Rebuild your Environment Configuration on Cluster Manager:
/var/qps/install/current/scripts/build/build_env_config.sh |
Step 8 | Reinitialize your lb01/02 environments on Cluster Manager:
The following commands take a few minutes to complete.
ssh lb01 /etc/init.d/vm-init
ssh lb02 /etc/init.d/vm-init |
Step 9 | Validate SOAP request on http:
|
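The original example for this step is not reproduced here; a check along the following lines (the address is a placeholder for the Unified API interface configured in Step 2) exercises the keepalive URL over the new HTTP listener and should return HTTP 200:

curl -i http://<Unified API interface IP>:8080/ua/soap/keepalive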
This section covers the following topics:
Cisco Policy Suite (CPS) is built around a distributed system that runs on a large number of virtualized nodes. Previous versions of the CPS software allowed operators to add custom accounts to each of these virtual machines (VM), but management of these disparate systems introduced a large amount of administrative overhead.
CPS has been designed to leverage the Terminal Access Controller Access Control System Plus (TACACS+) to facilitate centralized management of users. Leveraging TACACS+, the system is able to provide system-wide authentication, authorization, and accounting (AAA) for the CPS system.
Further the system allows users to gain different entitlements based on user role. These can be centrally managed based on the attribute-value pairs (AVP) returned on TACACS+ authorization queries.
To provide sufficient information for the Linux-based operating system running on the VM nodes, there are several attribute-value pairs (AVP) that must be associated with the user on the ACS server used by the deployment. User records on Unix-like systems need to have a valid “passwd” record for the system to operate correctly. Several of these fields can be inferred during the time of user authentication, but the remaining fields must be provided by the ACS server.
A standard “passwd” entry on a Unix-like system takes the following form:
<username>:<password>:<uid>:<gid>:<gecos>:<home>:<shell>
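For example, a fully populated entry for a hypothetical operator assigned to the qns-admin role (gid 504) might look like the following; the username, uid, gecos, and home directory are illustrative only:

jsmith:x:1001:504:TACACS+ user:/home/qns-admin:/bin/bash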
When authenticating the user via TACACS+, the software can assume values for the username, password, and gecos fields, but the others must be provided by the ACS server. To facilitate this need, the system depends on the ACS server providing these AVPs when responding to a TACACS+ authorization query for a given username:
uid
A unique integer value greater than or equal to 501 that serves as the numeric user identifier for the TACACS+ authenticated user on the VM nodes. It is outside the scope of the CPS software to ensure uniqueness of these values.
gid
The group identifier of the TACACS+ authenticated user on the VM nodes. This value should reflect the role assigned to a given user, based on the following values:
gid=501 (qns-su)
This group identifier should be used for users that are entitled to attain superuser (or 'root') access on the CPS VM nodes.
gid=504 (qns-admin)
This group identifier should be used for users that are entitled to perform administrative maintenance on the CPS VM nodes.
Note: For stopping/starting the Policy Server (QNS) process on a node, the qns-admin user should use monit:
For example,
sudo monit stop qns-1
sudo monit start qns-1
gid=505 (qns-ro)
This group identifier should be used for users that are entitled to read-only access to the CPS VM nodes.
home
The user's home directory on the CPS VM nodes. To enable simpler management of these systems, the users should be configured with a pre-deployed shared home directory based on the role they are assigned with the gid.
shell
The system-level login shell of the user. This can be any of the installed shells on the CPS VM nodes, which can be determined by reviewing the contents of /etc/shells on one of the CPS VM nodes. Typically, this set of shells is available in a CPS deployment:
The /usr/bin/sudosh shell can be used to audit a user's activity on the system.
The user environment of the Linux-based VMs needs to be able to look up a user's passwd entry via different columns in that record at different times. The TACACS+ NSS module provided as part of the CPS solution, however, is only able to query the Access Control Server (ACS) for this data using the username. For this reason, the system relies on the Name Service Cache Daemon (NSCD) to provide this facility locally after a user has been authorized to use a service of the ACS server.
More details on the operations of NSCD can be found by referring to online help for the software (nscd --help) or in its man page (nscd(8)). Within the CPS solution it provides a capability for the system to lookup a user's passwd entry via their uid as well as by their username.
To avoid cache coherence issues with the data provided by the ACS server the NSCD package has a mechanism for expiring cached information.
The default NSCD package configuration on the CPS VM nodes has the following characteristics:
Valid responses from the ACS server are cached for 600 seconds (10 minutes)
Invalid responses from the ACS server (user unknown) are cached for 20 seconds
Cached valid responses are reloaded from the ACS server 5 times before the entry is completely removed from the running set -- approximately 3000 seconds (50 minutes)
The cache is persisted locally so it survives a restart of the NSCD process or the server
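These characteristics correspond to the passwd service settings in /etc/nscd.conf. The following is a sketch of the relevant directives; the values mirror the defaults described above, but verify them against the file on your VMs:

# /etc/nscd.conf excerpt (passwd service)
reload-count            5            # reload a cached entry at most 5 times before dropping it
positive-time-to-live   passwd  600  # cache valid ACS answers for 10 minutes
negative-time-to-live   passwd  20   # cache "user unknown" answers for 20 seconds
persistent              passwd  yes  # keep the cache across nscd or server restarts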
It is possible for an operator to explicitly expire the cache from the command line. To do so, the administrator needs shell access to the target VM and must execute the following command as the root user:
# nscd -i passwd
The above command will invalidate all entries in the passwd cache and force the VM to consult with the ACS server for future queries.
There may be some unexpected behaviors of the user environment for TACACS+ authenticated users connected to the system when their cache entries are removed from NSCD. This can be corrected by the user by logging out of the system and logging back into it or by issuing the following command, which forces the system to query the ACS server:
# id -a “$USER”
Only qns-ro and qns-admin users are allowed to view log files at specific paths, according to their role and maintenance requirements. Access to logs is allowed only using the following paths:
Commands such as cat, less, more, and find cannot be executed using sudo in CPS 10.0.0 or higher releases.
To read any file, execute the following script using sudo:
$ sudo /var/qps/bin/support/logReader.py -r h -n 2 -f /var/log/puppet.log
where,
-r: Corresponds to tail (t), tailf (tf), and head (h), respectively.
-n: Determines the number of lines to be read. It works with the -r option. This is an optional parameter.
-f: Determines the complete file path to be read.
You use Custom Reference Data (CRD) APIs to query, create, delete, and update CRD table data without the need to utilize the Control Center interface. The CRD APIs are available via a REST interface.
These APIs allow maintenance of the actual data rows in the table. They do not allow the creation of new tables or the addition of new columns. Table creation and changes to the table structure must be completed via the Policy Builder application.
Table names must be all in lowercase alphanumeric to utilize these APIs. Neither spaces nor special characters are allowed in the table name.
The feature com.broadhop.custrefdata.service.feature needs to be installed on the Policy Server.
In a High Availability (HA)/Distributed CPS deployment, this feature should be installed on the QNS0x nodes.
The feature com.broadhop.client.feature.custrefdata needs to be installed in Policy Builder.
Step 1: Log in to Policy Builder.
Step 2: Select the Reference Data tab.
Step 3: From the left pane, select Systems.
Step 4: Select and expand your system name.
Step 5: Select Plugin Configurations (or a sub cluster or instance); a Custom Reference Data Configuration plugin configuration must be defined.
Step 6: In Reference Data tab > Custom Reference Data Tables, at least one Custom Reference Data Table must be defined.
The MongoDB database containing the CRD tables and the data is located in the MongoDB instance specified in the CRD plugin configuration.
The database is named cust_ref_data.
Two system collections exist in that database and do not actually contain CRD data:
system.indexes — used by MongoDB. These are indices set on the database.
crdversion — contains a document indicating the version of all the CRD tables you have defined. The version field increments by 1 every time you make a change or add data to any of your CRD tables.
A collection is created for each CRD table defined in Policy Builder.
This collection contains a document for each row you define in the CRD table.
Each document contains a field for each column you define in the CRD table.
The field contains the value specified for the column for that row in the table.
Additionally, each document contains an _id field, which holds the internal key used by MongoDB, and a _version field, which CPS uses to provide optimistic locking protection on the document, essentially to avoid two threads overwriting each other's update.
An example is shown below:
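As an illustrative sketch only, assume a hypothetical CRD table named test with a key column key1 and field columns field1 and field2 (the same sample table used in the API examples later in this section); the field values below are arbitrary. Viewed from the mongo shell, a document in the corresponding collection might look similar to:
> use cust_ref_data
> db.test.find().pretty()
{
  "_id" : ObjectId("..."),
  "_version" : 1,
  "key1" : "Platinum",
  "field1" : "1004",
  "field2" : "testee"
}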
Setting Cache Results to true (checked) is the default and recommended setting in most cases, as it yields the best performance. Use of the cached copy also removes the dependency on the availability of the CRD database, so policy decisions that use the CRD data are not impacted by a database outage or performance issue.
The cached copy of the table is refreshed on CPS restart and whenever the API writes a change to the CRD table; otherwise, the cached copy is used and the database is not accessed.
The URLs used to access the CRD APIs differ depending on the type of deployment (High Availability or All-in-One):
High Availability (HA): https://<lbvip01>:8443/custrefdata/<tablename>/_<operation>
All-In-One (AIO): http://<ip>:8080/custrefdata/<tablename>/_<operation>
The examples in the following sections refer to the HA URL.
Returns all rows currently defined in the specified table.
GET
https://<lbvip01>:8443/custrefdata/test/_query
https://<master or control ip>:8443/custrefdata/test/_query
https://<lbvip01>:8443/custrefdata/test/_query?key1=Platinum
https://<master or control ip>:8443/custrefdata/test/_query?key1=Platinum
None, although parameters can be specified on the URL for filtering.
Success returns code 200 Ok, and XML describing the rows defined is returned. If there are no records in the table, 200 Ok is returned with an empty rows element.
If the table does not exist, code 400 Bad Request is returned.
Example response to an unfiltered query:
<rows>
  <row>
    <field code="field1" value="1004"/>
    <field code="field2" value="testee"/>
    <field code="key1" value="Platinum"/>
  </row>
  <row>
    <field code="field1" value="1004"/>
    <field code="field2" value="testee"/>
    <field code="key1" value="Platinum99"/>
  </row>
  <row>
    <field code="field1" value="field1example1"/>
    <field code="field2" value="field2example1"/>
    <field code="key1" value="key1example1"/>
  </row>
  <row>
    <field code="field1" value="field1example2"/>
    <field code="field2" value="field2example2"/>
    <field code="key1" value="key1example2"/>
  </row>
</rows>
Example response to a query filtered with key1=Platinum:
<rows>
  <row>
    <field code="field1" value="1004"/>
    <field code="field2" value="testee"/>
    <field code="key1" value="Platinum"/>
  </row>
</rows>
The response returns keys with the tag "field code". If you want to use the output of Query as input to one of the other APIs, the tag needs to be changed to "key code". Currently, using "field code" for a key returns code 400 Bad Request and a java.lang.NullPointerException.
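For illustration only, assuming the sample table test used above and that no additional authentication is required for the CRD API in your deployment, the query can be issued with a command-line REST client such as curl (-k skips certificate verification for self-signed certificates):
curl -k "https://<lbvip01>:8443/custrefdata/test/_query"
curl -k "https://<lbvip01>:8443/custrefdata/test/_query?key1=Platinum"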
Creates a new row in the specified table.
POST
https://<lbvip01>:8443/custrefdata/test/_create
https://<master or control ip>:8443/custrefdata/test/_create
<row> <key code="key1" value="Platinum"/> <field code="field1" value="1004"/> <field code="field2" value="testee"/> </row>
Success returns code 200 Ok; no data is returned. The key cannot already exist for another row; submission of a duplicate key returns code 400 Bad Request.
If creating a row fails, the API returns 400 Bad Request.
Note: The Create API does not support SVN CRD table operations. When the Svn Crd Data checkbox is enabled in the CRD table configuration, it displays the following error message: Create operation is not allowed for subversion table
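For illustration only, a curl sketch of the Create call using the sample payload above; the content type application/xml follows the REST client guidance later in this section, and authentication or certificate options may need to be adjusted for your deployment:
curl -k -X POST -H "Content-Type: application/xml" \
  -d '<row> <key code="key1" value="Platinum"/> <field code="field1" value="1004"/> <field code="field2" value="testee"/> </row>' \
  "https://<lbvip01>:8443/custrefdata/test/_create"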
Updates the row indicated by the key code in the table with the values specified for the field codes.
POST
https://<lbvip01>:8443/custrefdata/test/_update
https://<master or control ip>:8443/custrefdata/test/_update
<row> <key code="key1" value="Platinum"/> <field code="field1" value="1005"/> <field code="field2" value="tester"/> </row>
Success returns code 200 Ok; no data is returned. The key cannot be changed. Any attempt to change the key returns code 404 Not Found.
If updating a row fails, the API returns 400 Bad Request.
Note: The Update API does not support SVN CRD table operations. When the Svn Crd Data checkbox is enabled in the CRD table configuration, it displays the following error message: Update operation is not allowed for subversion table
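For illustration only, a curl sketch of the Update call using the sample payload above (same assumptions as the Create example):
curl -k -X POST -H "Content-Type: application/xml" \
  -d '<row> <key code="key1" value="Platinum"/> <field code="field1" value="1005"/> <field code="field2" value="tester"/> </row>' \
  "https://<lbvip01>:8443/custrefdata/test/_update"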
Removes the row indicated by the key code from the table.
POST
https://<lbvip01>:8443/custrefdata/test/_delete
https://<master or control ip>:8443/custrefdata/test/_delete
<row> <key code="key1" value="Platinum"/> </row>
Success returns code 200 Ok; no data is returned. If the row to delete does not exist, code 404 Not Found is returned.
If deleting a row fails, the API returns 400 Bad Request.
Note: The Delete API does not support SVN CRD table operations. When the Svn Crd Data checkbox is enabled in the CRD table configuration, it displays the following error message: Delete operation is not allowed for subversion table
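For illustration only, a curl sketch of the Delete call using the sample payload above (same assumptions as the Create example):
curl -k -X POST -H "Content-Type: application/xml" \
  -d '<row> <key code="key1" value="Platinum"/> </row>' \
  "https://<lbvip01>:8443/custrefdata/test/_delete"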
Determines whether the same CRD table data content is being used at different data centers.
The following three optional parameters can be provided to the API:
tableName: Returns the checksum of the specified CRD table tableName, indicating whether there is any change in that table. If the value returned is the same on different servers, there is no change in the configuration and content of that table.
includeCrdversion: The total database checksum is a combination of the checksums of all CRD tables configured in Policy Builder. If this parameter is passed as true, the total database checksum also includes the checksum of the crdversion table. The default value is false.
orderSensitive: Calculates the checksum of the table using the order of the CRD table content. By default, it does not sort the row checksums of the table and returns an order-sensitive checksum for every CRD table. The default value is true.
The database-level Checksum API returns checksum details for all the CRD tables and for the database. If the value returned is the same on different servers, there is no change in the configuration and content of any CRD table configured in Policy Builder.
GET
https://<lbvip01>:8443/custrefdata/_checksum
https://<master or control ip>:8443/custrefdata/_checksum
<response>
  <checksum><all-tables-checksum></checksum>
  <tables>
    <table name="<table-1-name>" checksum="<checksum-of-table-1>"/>
    <table name="<table-2-name>" checksum="<checksum-of-table-2>"/>
    <table name="<table-n-name>" checksum="<checksum-of-table-n>"/>
  </tables>
</response>
The table-specific Checksum API returns the checksum details for the specified CRD table. If the value returned is the same on different servers, there is no change in the configuration and content of that table.
GET
https://<lbvip01>:8443/custrefdata/_checksum?tableName=<user-provided-table-name>
https://<master or control ip>:8443/custrefdata/_checksum?tableName=<user-provided-table-name>
<response>
  <tables>
    <table name="<user-provided-table-name>" checksum="<checksum-of-specified-table>"/>
  </tables>
</response>
Note: The table-specific Checksum API does not support SVN CRD table operations. When the Svn Crd Data checkbox is enabled in the CRD table configuration, it displays the following error message: Checksum operation is not allowed for subversion table
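For illustration only, curl sketches of the database-level and table-specific Checksum calls (the table name test is the hypothetical sample table; adjust authentication and certificate options to your deployment):
curl -k "https://<lbvip01>:8443/custrefdata/_checksum?includeCrdversion=true"
curl -k "https://<lbvip01>:8443/custrefdata/_checksum?tableName=test"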
Drops a custom reference table from MongoDB to avoid multiple stale tables in the system.
The Table Drop API is used in the following scenarios:
If a CRD table does not exist in Policy Builder but exists in the database, the API can be used to delete the table from the database.
If a CRD table exists in both Policy Builder and the database, the API cannot delete the table from the database. If this is attempted, the API returns the error: "Not permitted to drop this table as it exists in Policy Builder".
If a CRD table exists in neither Policy Builder nor the database, the API returns the error: No table found:<tablename>.
POST
https://<lbvip01>:8443/custrefdata/<table_name>/_drop
https://<master or control ip>:8443/custrefdata/<table_name>/_drop
Note: The Drop API does not support SVN CRD table operations. When the Svn Crd Data checkbox is enabled in the CRD table configuration, it displays the following error message: Drop operation is not allowed for subversion table
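For illustration only, a curl sketch of the Drop call; replace <table_name> with the stale table to be dropped (adjust authentication and certificate options to your deployment):
curl -k -X POST "https://<lbvip01>:8443/custrefdata/<table_name>/_drop"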
Exports one or more CRD tables and their data.
Exports a single CRD table and its data.
Returns an archived file containing a csv file with the information of the specified CRD table table_name.
GET
https://<lbvip01>:8443/custrefdata/_export?tableName=<table_name>
https://<master or control ip>:8443/custrefdata/_export?tableName=<table_name>
Exports all CRD tables and their data.
Returns an archived file containing a csv file with information for each CRD table.
GET
https://<lbvip01>:8443/custrefdata/_export
https://<master or control ip>:8443/custrefdata/_export
Note: The Export API does not support Svn CRD tables and displays the following warning message in the Export-Warning response header: Datasource for tables [table1, table2,...] is subversion. Response will not contain data for these tables. Svn CRD tables are skipped and are not included in the archive.
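For illustration only, curl sketches of the table-specific and full Export calls; the output file name crd_export_archive is arbitrary, and authentication or certificate options may need to be adjusted for your deployment:
curl -k -o crd_export_archive "https://<lbvip01>:8443/custrefdata/_export?tableName=test"
curl -k -o crd_export_archive "https://<lbvip01>:8443/custrefdata/_export"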
Imports CRD tables and their data.
It takes an archived file as input, which contains one or more csv files with CRD table information.
POST
https://<lbvip01>:8443/custrefdata/_import
https://<master or control ip>:8443/custrefdata/_import
https://<lbvip01>:8443/custrefdata/_import?batchOperation=true
https://<lbvip01>:8443/custrefdata/_import?batchOperation=false&duplicateValidation=true
Note: The Import API supports SVN CRD table operations in the following scenarios:
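The following is a hedged sketch only; it assumes the exported archive is sent as the raw request body with content type application/octet-stream, which may not match the upload mechanism expected in your deployment, so verify before use (the archive name crd_export_archive is arbitrary):
curl -k -X POST -H "Content-Type: application/octet-stream" \
  --data-binary @crd_export_archive \
  "https://<lbvip01>:8443/custrefdata/_import?batchOperation=true"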
Imports bulk CRD data by sending any supported file in the API request.
Supports only CSV and XLS file formats.
POST
https://<lbvip01>:8443/custrefdata/_importsinglefile
Creates a snapshot of the CRD tables on the system. The created snapshot contains the CRD table data, policy configuration, and checksum information for all CRD tables.
POST
https://<lbvip01>:8443/custrefdata/_snapshot?userId=<user_id>&userComments=<user_comments>
https://<master or control ip>:8443/custrefdata/_snapshot?userId=<user_id>&userComments=<user_comments>
Query parameters: userId and userComments.
Note: The Snapshot POST API does not support export of the contents of Svn CRD tables. The API returns the following warning message if any Svn CRD tables are present while creating the snapshot: Datasource for tables [table_1, table_2…] is subversion. Data for these tables will not come from database (mongodb)
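For illustration only, a curl sketch of the snapshot creation call; the userId value admin and the comment are illustrative (avoid spaces in the comment or URL-encode them):
curl -k -X POST "https://<lbvip01>:8443/custrefdata/_snapshot?userId=admin&userComments=pre-maintenance-snapshot"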
Enables you to get the list of all valid snapshots in the system.
The following information is available in the list of snapshots:
GET
https://<lbvip01>:8443/custrefdata/_snapshot
https://<master or control ip>:8443/custrefdata/_snapshot
<snapshots>
  <snapshot>
    <name><date-and-time>_<user-id></name>
    <snapshotPath>/var/broadhop/snapshot/20160620011825306_qns</snapshotPath>
    <creationDateAndTime>20/06/2016 01:18:25:306</creationDateAndTime>
    <comments>snapshot-1 june</comments>
    <policyVersion>903</policyVersion>
    <checksum checksum="60f51dfd4cd4554910da44a776c66db1">
      <table name="<table-name-1>" checksum="<table-checksum-1>"/>
      …
      <table name="<table-name-n>" checksum="<table-checksum-n>"/>
    </checksum>
  </snapshot>
  <snapshot>
    …
  </snapshot>
</snapshots>
Note: The Snapshot GET API does not return checksum information of Svn CRD tables as they are not part of created snapshots.
Enables you to revert the CRD data to a specific snapshot. If a specific snapshot name is not provided, the API reverts to the latest snapshot.
POST
https://<lbvip01>:8443/custrefdata/_revert?snapshotName=<snapshot_name>
https://<master or control ip>:8443/custrefdata/_revert?snapshotName=<snapshot_name>
Query parameter: snapshotName (optional).
Note: The Revert API does not support reverting of CRD data for Svn CRD tables. For Svn CRD tables, it clears the mongodb table and displays the following warning message: Datasource for tables [table_1, table_2…] is subversion. Data for these tables will be reverted using svn datasource not from database (mongodb)
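For illustration only, curl sketches of the Revert call, with and without a specific snapshot name (the second form reverts to the latest snapshot):
curl -k -X POST "https://<lbvip01>:8443/custrefdata/_revert?snapshotName=<snapshot_name>"
curl -k -X POST "https://<lbvip01>:8443/custrefdata/_revert"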
The Query API is a GET operation which is the default operation that occurs when entering a URL into a typical web browser.
The POST operations, Create, Update, and Delete, require the use of a REST client so that the payload and content type can be specified in addition to the URL. REST clients are available for most web browsers as plug-ins or as part of web service tools, such as SoapUI. The content type when using these clients should be specified as application/xml or the equivalent in the chosen tool.
You can view the API logs in the OAM (pcrfclient) VM at the following location:
/var/log/broadhop/consolidated-qns.log
You can view the API logs with the following commands: