This section describes how to port the Policy Builder configuration from an All-In-One (AIO) environment to a High Availability (HA) environment.
This procedure assumes that all VMs were created using the standard deployment process.
This procedure assumes that the datastore that will hold the virtual disk has sufficient free space to add the disk.
This procedure assumes that the datastore has been mounted to the VMware ESX server, regardless of the backend storage type (NAS, SAN, iSCSI, and so on).
Policy Builder configuration can be reused between environments; however, the configuration for Systems and Policy Enforcement Points is environment-specific and should not be moved from one environment to another.
The following instructions do not overwrite the environment-specific configuration. Note that because the Systems tab and Policy Enforcement Points data are not moved, the HA system must already have these items configured and running properly (as stated above).
The following steps describe the process to port a configuration from an AIO environment to an HA environment.
HAProxy is an open-source load balancer used in High Availability (HA) and Geographic Redundancy (GR) CPS deployments. It is used by the CPS Policy Directors (lbs) to forward IP traffic from lb01/lb02 to other CPS nodes. HAProxy runs on the active Policy Director VM.
Documentation for HAProxy is available at http://www.haproxy.org/#docs.
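As a rough illustration of how HAProxy forwards traffic from a Policy Director to other CPS nodes, a minimal configuration might look like the following. This is a hypothetical sketch only; the frontend name, ports, and balancing algorithm are assumptions for illustration, not the actual CPS-generated configuration:

```
frontend policy_api
    bind *:8443
    default_backend qns_servers

backend qns_servers
    # Round-robin across the Policy Server (qns) nodes; 'check' enables
    # health checks so that failed nodes are taken out of rotation.
    balance roundrobin
    server qns01 qns01:8080 check
    server qns02 qns02:8080 check
```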
For a general diagnostics check of the HAProxy service, run the following command from any VM in the cluster (except sessionmgr):
diagnostics.sh --ha_proxy
QPS Diagnostics
Multi-Node Environment
---------------------------
Ping Check for qns01...[PASS]
Ping Check for qns02...[PASS]
Ping Check for qns03...[PASS]
Ping Check for qns04...[PASS]
Ping Check for lb01...[PASS]
Ping Check for lb02...[PASS]
Ping Check for sessionmgr01...[PASS]
Ping Check for sessionmgr02...[PASS]
Ping Check for sessionmgr03...[PASS]
Ping Check for sessionmgr04...[PASS]
Ping Check for pcrfclient01...[PASS]
Ping Check for pcrfclient02...[PASS]
HA Multi-Node Environment
---------------------------
Checking HAProxy status...[PASS]
The following commands must be issued from the lb01 or lb02 VM.
To check the status of the HAProxy services, run the following command:
monit status haproxy
[root@host-lb01 ~]# service haproxy status
haproxy (pid 10005) is running...
To stop the HAProxy service, run the following command:
monit stop haproxy
To restart the HAProxy service, run the following command:
monit restart haproxy
To view statistics, open a browser and navigate to the following URL:
To change HAProxy log level in your CPS deployment, you must make changes to the HAProxy configuration files on the Cluster Manager and then push the changes out to the Policy Director (lb) VMs.
Once deployed, the HAProxy configuration files are stored locally on the Policy Director VMs at /etc/haproxy/haproxy.cfg.erb and /etc/haproxy/haproxy-diameter.erb.
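To raise the log level, you typically edit the `log` directive in the global section of the HAProxy configuration on the Cluster Manager. The address and facility shown below are illustrative, not taken from an actual CPS deployment; keep whatever values are already present in your configuration and change only the level field:

```
global
    # HAProxy log directive syntax: log <address> <facility> [<max level>]
    # Changing the last field from the default 'err' to 'debug' increases
    # log verbosity (address and facility shown here are illustrative).
    log 127.0.0.1 local1 debug
```

After editing the configuration on the Cluster Manager, push the change to the Policy Director (lb) VMs by rebuilding and redeploying the configuration as described in your CPS installation documentation.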
Note | Whenever you upgrade with the latest ISO, the log level is reset to the default level (err). |
For future installations and network upgrades, this section proposes what hardware and components you should consider as you grow your network. The CPS solution is a robust and scalable software-based solution that can be expanded by adding additional hardware and software components. The following sections explain typical scenarios of when to expand the hardware and software to effect such growth.
Your network may grow for the following reasons:
The subscriber base has grown or will grow beyond the initial installation specifications.
In this case, the number of active or non-active subscribers becomes larger than the initial deployment was sized for. This can cause one or more components to reach capacity. New components must be added to accommodate the growth.
The services or subscriber scenarios have changed, or new services have been introduced, and the transactions per second on a component no longer meet requirements.
When a new service or scenario occurs, often there is a change in the overall Transactions Per Second (TPS), or in the TPS on a specific component. When this occurs, new components are necessary to handle the new load.
The operator notices that there are factors outside of the initial design that are causing either the overall system or a specific component to have a high resource load.
This may cause one or more components to reach their capacity for TPS. When this occurs, new components are necessary to handle the new factors.
Adding a new component may require adding additional hardware. However, the addition of more hardware depends on the physical resources already available, plus what is needed for the new component.
If the number of subscribers exceeds 10 million, then the customer needs to Clone and Repartition sessionmgr Disks. See Manage Disks to Accommodate Increased Subscriber Load.
When adding more hardware, the design must take into consideration the high availability (HA) needs of the system. The HA design for a single-site system is N+1 at the hardware and application level. As a result, adding a new blade incrementally increases the HA capacity of the system.
For example, in a basic installation there are 2 Cisco Policy Server blades handling the traffic. The solution is designed so that if one of the blades fails, the other blade can handle the entire capacity of the system. When adding a third blade for capacity expansion, there are now 2 blades to handle the system load if one of the blades fails. This allows a more linear scaling approach, because each additional blade can run closer to its full capacity while still preserving N+1 redundancy.
Note | When adding new blades to a cluster, the blades in the cluster must be co-located to achieve the proper throughput between other components. |
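The N+1 sizing described above can be sketched as a quick calculation. The function below is an illustrative aid, not part of CPS: in an N+1 design, the surviving N-1 blades must absorb the full load if one blade fails, so the safe per-blade utilization is (N-1)/N, which grows toward 100% as blades are added:

```python
def max_safe_utilization(n_blades: int) -> float:
    """Maximum safe per-blade utilization in an N+1 design: if one blade
    fails, the remaining n-1 blades must carry the full system load."""
    if n_blades < 2:
        raise ValueError("N+1 redundancy requires at least 2 blades")
    return (n_blades - 1) / n_blades

# 2 blades: each blade may carry at most 50% of its capacity.
print(max_safe_utilization(2))   # 0.5
# 3 blades: each blade may carry up to ~67%, so the third blade adds
# more usable capacity than a simple one-third increase.
print(max_safe_utilization(3))
```

This is why the document describes scaling as "more linear" with each added blade: the redundancy overhead is amortized across more nodes.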
The most common components to be expanded are on the Cisco Policy Servers. As your system begins to scale up, you will need to add more CPS nodes and more SessionMgrs. Expansion for other components can follow the same pattern as described here. The next sections discuss the configurations needed for those specific components to be active in the system.
CPS uses encryption on all appropriate communication channels in HA deployments. No additional configuration is required.
Default SSL certificates are provided with CPS but we recommend that you replace these with your own SSL certificates. Refer to Replace SSL Certificates in the CPS Installation Guide for VMware for more information.