Planning the CPS Deployment
CPS Dimensioning Evaluation
With assistance from Cisco Technical Representatives, a dimensioning evaluation must be performed for each CPS deployment. This dimensioning evaluation uses customer-specific information such as call model, product features to be used, and traffic profiles to determine the specific requirements for your deployment, including:
- Hardware specifications (number and type of blades, memory, etc.)
- VM information (number, type, and resource allocation)
The requirements established in the dimensioning evaluation must be met or exceeded.
The following sections, Hardware Requirements and Virtual Machine Requirements, provide minimum guidelines for a typical CPS deployment.
Hardware Requirements
CPS is optimized for standard Commercial Off-The-Shelf (COTS) blade servers.
The following table provides a summary of the minimum requirements for a typical single-site High Availability (HA) CPS deployment.
Minimum Hardware Requirements (Blade Server)

| Component | Minimum Requirement |
|---|---|
| Memory | The total memory of a blade server must be sufficient for all the Virtual Machines (VMs) installed on the blade. Refer to the Virtual Machine Requirements section for the amount of memory needed for each VM. Also account for the memory needed by the hypervisor; for VMware 5.x, reserving 8 GB of memory is recommended. |
| Storage | Two (2) 400 GB Enterprise Performance SSD drives supporting hardware RAID 1 with write-back cache |
| Interconnect | Dual Gigabit Ethernet ports |
| Virtualization | Must be listed in the VMware Compatibility Guide at: https://www.vmware.com/resources/compatibility/search.php |
Minimum Hardware Requirements (Chassis)

| Component | Minimum Requirement |
|---|---|
| Device Bays | A minimum of 4 is required for HA deployments |
| Interconnect | Redundant interconnect support |
| Power | Redundant AC or DC power supplies (as required by the service provider) |
| Cooling | Redundant cooling support |
Virtual Machine Requirements
High Availability Deployment
The following table provides the minimum CPU, RAM, and disk space requirements for each type of CPS virtual machine (VM) in a typical deployment (4-blade, single-site High Availability).
Important: The requirements in the following tables are based on the configurations listed in the Configuration column.
| Physical Cores / Blade | VM Type | Memory (in GB) | Hard Disk (in GB) | vCPU | Configuration |
|---|---|---|---|---|---|
| Blade with 16 CPUs | Policy Server VMs (QNS) | 16 | 100 | 12 | Threading = 200; Mongo per host = 10; Criss-cross Mongo for Session Cache = 2 on each VM |
| Blade with 16 CPUs | Session Manager VMs | 128 | 100 | 6 | |
| Blade with 16 CPUs | Control Center (OAM) VMs | 16 | 100 | 6 | |
| Blade with 16 CPUs | Policy Director VMs (LB) | 32 | 100 | 12 | |
| Blade with 16 CPUs | Cluster Manager | 12 | - | 2 | |
| Blade with 24 CPUs | Policy Server VMs (QNS) | 16 | 100 | 10 | Threading = 100; Mongo per host = 10; Criss-cross Mongo for Session Cache = 2 on each VM |
| Blade with 24 CPUs | Session Manager VMs | 80 | 100 | 8 | |
| Blade with 24 CPUs | Control Center (OAM) VMs | 16 | 100 | 12 | |
| Blade with 24 CPUs | Policy Director VMs (LB) | 32 | 100 | 12 | |
| Blade with 24 CPUs | Cluster Manager | 12 | - | 2 | |
| Physical Cores / Blade | VM Type | Memory (in GB) | Hard Disk (in GB) | vCPU | Configuration |
|---|---|---|---|---|---|
| Blade with 16 CPUs | Policy Server VMs | 16 | 100 | 12+ | Threading = 200; Mongo per host = 10; Criss-cross Mongo for Session Cache = 2 on each VM |
| Blade with 16 CPUs | Session Manager VMs | 128 | 100 | 6+ | |
| Blade with 16 CPUs | Control Center (OAM) VMs | 16 | 100 | 6+ | |
| Blade with 16 CPUs | Policy Director VMs | 32 | 100 | 8+ | |
| Blade with 16 CPUs | Cluster Manager | 12 | - | 2+ | |
| Blade with 24 CPUs | Policy Server VMs | 16 | 100 | 10+ | Threading = 100; Mongo per host = 10; Criss-cross Mongo for Session Cache = 2 on each VM |
| Blade with 24 CPUs | Session Manager VMs | 80 | 100 | 8+ | |
| Blade with 24 CPUs | Control Center (OAM) VMs | 16 | 100 | 12+ | |
| Blade with 24 CPUs | Policy Director VMs | 32 | 100 | 12+ | |
| Blade with 24 CPUs | Cluster Manager | 12 | - | 2+ | |
Note: For large scale deployments with more than 35 Policy Server (qns) VMs, more than 20 Session Manager (sessionmgr) VMs, and more than 2 Policy Director (lb) VMs, the recommended RAM for OAM (pcrfclient) VMs is 64 GB.

Note: For large scale deployments with more than 32 Policy Server (qns) VMs, more than 16 Session Manager (sessionmgr) VMs, and more than 2 Policy Director (lb) VMs, the recommended vCPU for OAM (pcrfclient) VMs is 12+.

Note: If CPS is deployed in a cloud environment where over-allocation is possible, it is recommended to enable hyper-threading and double the number of vCPUs.

Note: The hard disk size of all VMs is fixed at 100 GB (thin provisioned). Contact your Cisco Technical Representative if you need to reduce this setting.
The /var/data/sessions.1 directory on each sessionmgr VM is sized at 60% of that VM's allocated RAM; the directory is mounted on a tmpfs file system and used for the session replica set. To change the size of the /var/data/sessions.1 directory, you must update (increase or decrease) the RAM size of the VM and redeploy it.

For example, if 24 GB of RAM is allocated to a Session Manager VM, 14.4 GB is allocated to the /var/data/sessions.1 directory on tmpfs.

If you need to update the sessions.1 directory settings, consult your Cisco Technical Representative.
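As a quick sanity check, the tmpfs sizing rule above can be sketched in a few lines. The 60% ratio comes from this document; the function name is illustrative, not a CPS utility:

```python
def sessions_tmpfs_gb(vm_ram_gb: float, ratio: float = 0.60) -> float:
    """Size (GB) of the /var/data/sessions.1 tmpfs mount for a sessionmgr VM.

    Per the sizing rule above, the directory is 60% of the VM's allocated
    RAM; changing the directory size means changing the VM's RAM and
    redeploying the VM.
    """
    return round(vm_ram_gb * ratio, 2)

# A Session Manager VM with 24 GB of RAM gets a 14.4 GB tmpfs mount:
print(sessions_tmpfs_gb(24))  # 14.4
```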
Considerations
- Each blade should have at least 2 CPUs reserved for the hypervisor.
- When supported by the hypervisor, deployments must enable CPU and memory reservation.
- For VMware environments, hardware must be ESX/ESXi compatible.
- The total number of VM CPU cores allocated should be 2 less than the total number of CPU cores per blade.
- The CPU must be a high-performance Intel x86 64-bit chipset.

Note: BIOS settings should be set to high-performance values rather than energy-saving, hibernating, or speed-stepping values (contact your hardware vendor for specific values).

- CPU benchmark of at least 13,000 rating per chip and 1,365 rating per thread.
- Monitor the CPU STEAL statistic. This statistic should not exceed 2% for more than 1 minute.

Note: A high CPU STEAL value indicates the application is waiting for CPU, and is usually the result of CPU over-allocation or missing CPU pinning. CPS performance cannot be guaranteed in an environment with high CPU STEAL.

- Scaling and higher performance are achieved by adding more VMs, not by adding more system resources to existing VMs.
- For deployments that cannot scale by adding more VMs, Cisco will support the allocation of additional CPUs above the recommendation, but does not guarantee a linear performance increase from this allocation.
- Cisco will not support performance SLAs for CPS implementations with less than the recommended CPU allocation.
- Cisco will not support performance SLAs for CPS implementations with CPU over-allocation (assigning more vCPUs than are available on the blade, or sharing CPUs).
- RAM latency should be lower than 15 ns.
- RAM should be error-correcting (ECC) memory.
- Disk storage performance must support less than 2 ms average latency.
- Disk storage performance must support greater than 5000 input/output operations per second (IOPS) per CPS VM.
- Disk storage must provide redundancy and speed, such as RAID 0+1.
- Cisco does not validate its CPS solution on external storage (SAN storage, shared block storage, shared file systems).
- Hardware must support 1 Gbps ports/links for each VM network interface.
- Hardware and hardware design must be configured for better than 99.999% availability.
- For HA deployments, Cisco requires that customer designs comply with the Cisco CPS HA design guidelines, such as:
  - At least two of each CPS VM type (PD, PS, SM, CC) for each platform.
  - VMs of the same CPS VM type (PD, PS, SM, CC) must not share a common hardware zone.
- VMware memory (RAM) reservation must be enabled at the maximum for each CPS VM (no over-subscription of RAM).
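The vCPU budgeting rule above (leave 2 physical cores per blade for the hypervisor) can be sketched as a minimal check. The function name and the example layout are illustrative, not a prescribed design:

```python
def fits_on_blade(blade_cores: int, vm_vcpus: list[int],
                  hypervisor_reserve: int = 2) -> bool:
    """True if the VMs' combined vCPU allocation leaves the hypervisor its
    reserved cores, i.e. total vCPUs <= (blade cores - 2)."""
    return sum(vm_vcpus) <= blade_cores - hypervisor_reserve

# 16-core blade: a 12-vCPU Policy Director plus a 2-vCPU Cluster Manager fits...
print(fits_on_blade(16, [12, 2]))     # True
# ...but adding a 6-vCPU Control Center VM would over-allocate the blade:
print(fits_on_blade(16, [12, 2, 6]))  # False
```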
Deployment Examples
High Availability (HA) Deployment Example
Note: The session replica-set for mongo port 27717 must always be built using sessionmgr01 and sessionmgr02. If you build the session replica-set for mongo port 27717 with session managers other than sessionmgr01 and sessionmgr02, the Policy Server (qns) process does not come up. Using a 2-blade or 4-blade layout for production is not recommended.
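For illustration, a session replica-set definition of roughly this shape (in the style of CPS's /etc/broadhop/mongoConfig.cfg) keeps the port 27717 set on sessionmgr01 and sessionmgr02. The set name, arbiter host, oplog size, and paths below are placeholder assumptions; adapt them to your deployment and verify against your CPS release's installation guide:

```ini
; Hypothetical sketch of a session replica-set entry -- values are placeholders
[SESSION-SET1]
SETNAME=set01
OPLOG_SIZE=1024
ARBITER1=pcrfclient01:27717
ARBITER_DATA_PATH=/var/data/sessions.1
MEMBER1=sessionmgr01:27717
MEMBER2=sessionmgr02:27717
DATA_PATH=/var/data/sessions.1
[SESSION-SET1-END]
```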
| Blade | VM Type | Replica-sets |
|---|---|---|
| 1 | CC 6, LB 8, QNS 8, SM 6, CM 2 | SM: ADMIN, Balance, Session, SPR, Reporting |
| 2 | CC 6, LB 8, QNS 8, SM 6 | SM: ADMIN, Balance, Session, SPR, Reporting |
| Blade | VM Type | Replica-sets |
|---|---|---|
| 1 | CC 6, LB 12, 2 x QNS 8, SM 8, CM 4 | SM: ADMIN, Balance, Session, SPR, Reporting |
| 2 | CC 6, LB 12, 2 x QNS 8, SM 8 | SM: ADMIN, Balance, Session, SPR, Reporting |
| Blade | VM Type | Replica-sets |
|---|---|---|
| 1 | CM 4, CC 8, LB 8, QNS 8 | - |
| 2 | CC 8, LB 8, QNS 8 | - |
| 3 | 2 x QNS 8, SM 8 | SM: ADMIN, Session RS1,2, Balance RS1, SPR RS1, Reporting RS1 |
| 4 | 2 x QNS 8, SM 8 | SM: ADMIN, Session RS1,2, Balance RS1, SPR RS1, Reporting RS1 |
| Blade | VM Type | Replica-sets |
|---|---|---|
| 1 | CM 4, CC 8, LB 8, QNS 10, SM 8, HSF 8 | SM: ADMIN, Session (Backup), Balance (Backup), SPR |
| 2 | CC 8, LB 8, QNS 10, SM 8, HSF 8 | SM: ADMIN, Session (Backup), Balance (Backup), SPR |
| 3 | 3 x QNS 10, 2 x SM 8 | SM: Session RS1,2, Balance RS1 |
| 4 | 3 x QNS 10, 2 x SM 8 | SM: Session RS1,2, Balance RS1 |
| Blade | VM Type | Replica-sets |
|---|---|---|
| 1 | CM 4, CC 6, LB 12, HSF 6 | SM: ADMIN, Session (Backup), Balance (Backup) |
| 2 | CC 6, LB 12, HSF 6 | SM: ADMIN, Session (Backup), Balance (Backup) |
| 3 | 2 x QNS 12, SM 6 | SM: Session RS1,2, Balance RS1 |
| 4 | 2 x QNS 12, SM 6 | SM: Session RS2,1, Balance RS2 |
| 5 | 2 x QNS 12, SM 6 | SM: Session RS3,4, SPR RS1 |
| 6 | 2 x QNS 12, SM 6 | SM: Session RS4,3, SPR RS2 |
| 7 | 2 x QNS 12, SM 6 | SM: Session RS5,6, Reporting RS1 |
| 8 | 2 x QNS 12, SM 6 | SM: Session RS6,5, Reporting RS2 |
| Blade | VM Type | Replica-sets |
|---|---|---|
| 1 | CC 12, 2 x LB 12, HSF 8 | SM: ADMIN, Session (Backup), Balance (Backup) |
| 2 | CC 12, 2 x LB 12, HSF 8 | SM: ADMIN, Session (Backup), Balance (Backup) |
| 3 | 3 x QNS 10, 2 x SM 8 | SM: Session RS1,2,7,8, Balance RS1 |
| 4 | 3 x QNS 10, 2 x SM 8 | SM: Session RS2,1,8,7, Balance RS2 |
| 5 | 3 x QNS 10, 2 x SM 8 | SM: Session RS3,4,9,10, SPR RS1 |
| 6 | 3 x QNS 10, 2 x SM 8 | SM: Session RS4,3,10,9, SPR RS2 |
| 7 | 3 x QNS 10, 2 x SM 8 | SM: Session RS5,6,11,12, Reporting RS1 |
| 8 | 3 x QNS 10, 2 x SM 8 | SM: Session RS6,5,12,11, Reporting RS2 |
| 9 | CM 4 | CM: Cluster Manager |
Platform WSP File Sizing Calculation
Note: The following section is for reference purposes only. For deployment-specific calculations, contact your Cisco Account representative.

For calculation purposes, assume a standard deployment of 10 VMs:
- pcrfclient (OAM): 2
- Policy Server (qns): 4
- Policy Director (lb): 2
- Session Manager (sessionmgr): 2
The following table provides the statistics sizing details for the different VM types in a standard deployment. The per-VM columns give the size in MB for each VM of that type; the last column gives the total disk usage across all 10 VMs.

Note: The size of one WSP file is 1.59 MB.
| Statistics Type | pcrfclient (OAM) | Policy Server (qns) | Policy Director (lb) | Session Manager | Total Disk Usage (MB) |
|---|---|---|---|---|---|
| cpu | 76.32 | 76.32 | 101.76 | 76.32 | 814.08 |
| disk | 76.32 | 76.32 | 73.14 | 73.14 | 750.48 |
| memory | 11.13 | 11.13 | 11.13 | 11.13 | 111.30 |
| interface | 38.16 | 25.44 | 38.16 | 25.44 | 305.28 |
| fhcount | 4.77 | 4.77 | 4.77 | 4.77 | 47.70 |
| df | 57.24 | 57.24 | 57.24 | 62.01 | 581.94 |
| process | 305.28 | 57.24 | 267.12 | 171.72 | 1717.20 |
| set | 1.59 | 0 | 0 | 0 | 3.18 |
| collectd | 4.77 | 0 | 0 | 0 | 9.54 |
| swap | 7.95 | 7.95 | 7.95 | 7.95 | 79.50 |
| load | 4.77 | 4.77 | 4.77 | 4.77 | 47.70 |
| tcpconns | 17.49 | 17.49 | 17.49 | 17.49 | 174.90 |
| db | 0 | 0 | 0 | 213.06 | 426.12 |
| Total Size in MB | 605.79 | 338.67 | 583.53 | 667.80 | 5068.92 |
| Total Size in GB | | | | | 4.950117188 |
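The grand total above is simply each VM type's per-VM size multiplied by its VM count. A sketch of the arithmetic, with the counts and "Total Size in MB" values taken from the standard 10-VM deployment:

```python
MB_PER_GB = 1024

# VM type: (VM count in the standard deployment, total WSP size in MB per VM)
per_vm = {
    "pcrfclient (OAM)":     (2, 605.79),
    "Policy Server (qns)":  (4, 338.67),
    "Policy Director (lb)": (2, 583.53),
    "Session Manager":      (2, 667.80),
}

total_mb = sum(count * size_mb for count, size_mb in per_vm.values())
print(round(total_mb, 2))              # 5068.92
print(round(total_mb / MB_PER_GB, 2))  # 4.95
```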
Sample Customer Deployment
Consider a customer deployment of the following 28 VMs:

- pcrfclient (OAM): 2
- Policy Server (qns): 12
- Policy Director (lb): 2
- Session Manager (sessionmgr): 12

Note: The size of one WSP file is 1.59 MB.
| Statistics Type | pcrfclient (OAM) | Policy Server (qns) | Policy Director (lb) | Session Manager | Total Disk Usage (MB) |
|---|---|---|---|---|---|
| cpu | 152 | 152 | 102 | 102 | 3052.8 |
| disk | 91 | 73 | 76 | 73 | 2121.06 |
| memory | 11 | 11 | 11 | 11 | 311.64 |
| interface | 38 | 115 | 25 | 25 | 915.84 |
| fhcount | 5 | 5 | 5 | 5 | 133.56 |
| df | 110 | 81 | 72 | 62 | 1984.32 |
| process | 496 | 267 | 57 | 114 | 3587.04 |
| set | 16 | 0 | 0 | 0 | 31.8 |
| collectd | 5 | 0 | 0 | 0 | 9.54 |
| swap | 8 | 8 | 8 | 8 | 222.6 |
| load | 5 | 5 | 5 | 5 | 133.56 |
| tcpconns | 18 | 18 | 18 | 18 | 489.72 |
| db | 0 | 0 | 0 | 145 | 1729.92 |
| Total Size in MB | 955 | 735 | 379 | 568 | 14723.4 |
| Total Size in GB | | | | | 14.37832031 |
Application KPI Metrics Sizing Calculation
Note: The following section is for reference purposes only. For deployment-specific calculations, contact your Cisco Account representative.

For calculation purposes, Node2 through Node4 are Diameter endpoints; the number of nodes can be increased based on the endpoints that are configured. The calculations are based on data received from the customer site. If any new interfaces, such as Sd, are configured, more statistics are generated, which increases the number of WSP files.
Setup details:

- pcrfclient (OAM): 2
- Policy Director (lb): 2
- Policy Server (qns): 4
| Node/Each VM | Policy Director (lb) | Policy Server (qns) | Total |
|---|---|---|---|
| Node1 | 30 | 494 | 524 |
| Node2 | 696 | 0 | 696 |
| Node3 | 696 | 0 | 696 |
| Node4 | 696 | 0 | 696 |
| No. of WSP Files per VM | 2118 | 494 | 2612 |
| No. of VMs | 2 | 4 | 6 |
| Total No. of WSP Files on all VMs | 4236 | 1976 | 6212 |
Disk space used per VM (in MB, at 1.59 MB per WSP file):

| | Policy Director (lb) | Policy Server (qns) | Total |
|---|---|---|---|
| All possible conditions | 3367.62 | 785.46 | 4153.08 |
Disk space used on all VMs (in MB):

| | Policy Director (lb) | Policy Server (qns) | Total |
|---|---|---|---|
| All possible conditions | 6735.24 | 3141.84 | 9877.08 |
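The disk figures above follow directly from the WSP file counts: each file is 1.59 MB, so per-VM usage is files × 1.59 MB, and fleet-wide usage multiplies that by the VM count. A sketch of the arithmetic, with counts taken from the node table above:

```python
WSP_FILE_MB = 1.59  # size of one WSP file, per this document

# VM type: (WSP files per VM, number of VMs) -- from the node table above
vms = {
    "Policy Director (lb)": (2118, 2),
    "Policy Server (qns)":  (494, 4),
}

usage = {
    vm: (round(files * WSP_FILE_MB, 2),           # MB per VM
         round(files * count * WSP_FILE_MB, 2))   # MB across all VMs
    for vm, (files, count) in vms.items()
}
print(usage["Policy Director (lb)"])  # (3367.62, 6735.24)
print(usage["Policy Server (qns)"])   # (785.46, 3141.84)
```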
Sample Customer Deployment
The calculations in this section are based on data received from the customer site. Node2 through Node4 are Diameter endpoints; the number of nodes can be increased based on the endpoints that have been configured. If any new interfaces, such as Sd, are configured, more statistics are generated, which increases the number of WSP files.

Setup details:

- pcrfclient (OAM): 2
- Policy Director (lb): 2
- Policy Server (qns): 12
- Session Manager (sessionmgr): 12
Number of WSP files (all possible conditions):

| Node/Each VM | Policy Director (lb) | Policy Server (qns) | Total |
|---|---|---|---|
| Node1 | 54 | 2998 | 3052 |
| Node2 | 28267 | 0 | 28267 |
| Node3 | 28267 | 0 | 28267 |
| Node4 | 28267 | 0 | 28267 |
| No. of WSP Files per VM | 84855 | 2998 | 87853 |
| No. of VMs | 2 | 12 | 14 |
| Total No. of WSP Files on all VMs | 169710 | 35976 | 205686 |
Number of WSP files (as per data gathered from the customer site):

| Node/Each VM | Policy Director (lb) | Policy Server (qns) | Total |
|---|---|---|---|
| Node1 | 54 | 1627 | 1681 |
| Node2 | 1143 | 0 | 1143 |
| Node3 | 1143 | 0 | 1143 |
| Node4 | 1143 | 0 | 1143 |
| No. of WSP Files per VM | 3483 | 1627 | 5110 |
| No. of VMs | 2 | 12 | 14 |
| Total No. of WSP Files on all VMs | 6966 | 19524 | 26490 |
Disk space used per VM (in MB, at 1.59 MB per WSP file):

| | Policy Director (lb) | Policy Server (qns) | Total |
|---|---|---|---|
| As per data gathered from customer site | 5537.97 | 2586.93 | 8124.9 |
| All possible conditions for customer site | 134919.45 | 4766.82 | 139686.27 |
Disk space used on all VMs (in MB):

| | Policy Director (lb) | Policy Server (qns) | Total |
|---|---|---|---|
| As per data gathered from customer site | 11075.94 | 31043.16 | 42119.1 |
| All possible conditions for customer site | 269838.9 | 57201.84 | 327040.74 |