Cisco Policy Suite offers a carrier-grade, high-capacity, high-performance, virtualized software solution capable of running on VMware, OpenStack/KVM hypervisors, or cloud infrastructures. To meet stringent performance, capacity, and availability demands, the Cisco software requires that all allocated hardware system resources be 100% available when needed, and not oversubscribed or shared across separate VMs.
The following steps outline the basic process for a new installation of CPS:
Review virtual machine requirements
Orchestration Requirements
Install OpenStack
CPU Pinning
Configure OpenStack Users and Networks
Define Availability Zones
Download the required CPS images
Import images to Glance
Create Cinder Volumes
Verify or update Default Quotas
Create Flavors
Set up Access and Security
For customers operating a cloud infrastructure, the infrastructure must be configured to guarantee CPU, memory, network, and I/O availability for each CPS VM. Oversubscription of system resources will reduce the performance and capacity of the platform, and may compromise availability and response times. CPU core requirements are listed as pCPUs (physical cores), not vCPUs (hyper-threaded virtual cores).
In addition, the CPS carrier-grade platform requires:
RAM reservation is enabled for all memory allocated to the CPS VM.
CPU Hyperthreading must be ENABLED. To prevent over-subscription of CPU cores, CPU pinning should be ENABLED.
A CPU benchmark rating of at least 13,000 per chip and 1,365 per thread.
The total number of VM CPU cores allocated should be 2 less than the total number of CPU cores per blade.
Note | A high CPU steal value indicates that the application is waiting for CPU, and is usually the result of CPU over-allocation or no CPU pinning. CPS performance cannot be guaranteed in an environment with high CPU steal. |
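As a quick spot check on a running CPS VM, the steal value can be read with standard Linux utilities (these tools are generic and not CPS-specific):

```
# "st" is the rightmost CPU column in vmstat output; a persistently
# non-zero value points to CPU over-allocation or missing CPU pinning.
vmstat 5 3

# The same figure appears as the "st" field in top's CPU summary line.
top -bn1 | grep "Cpu(s)"
```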
Note | BIOS settings should be set to high-performance values, rather than energy saving, hibernating, or speed stepping (contact hardware vendor for specific values). |
For deployments that cannot scale by adding more VMs, Cisco will support the allocation of additional CPUs above the recommendation to a single VM, but does not guarantee a linear performance increase.
Cisco will not support performance SLAs for CPS implementations with less than the recommended CPU allocation.
Cisco will not support performance SLAs for CPS implementations with CPU over-allocation (assigning more vCPUs than are available on the blade, or sharing CPUs).
Scaling and higher performance are achieved by adding more VMs, not by adding more system resources to existing VMs.
RAM latency should be lower than 15 nanoseconds.
RAM should be error-correcting (ECC) memory.
Disk storage performance should provide less than 2 milliseconds average latency.
Disk storage must support greater than 5000 input/output operations per second (IOPS) per CPS VM.
Disk storage must provide redundancy and speed, such as RAID 0+1.
Hardware and hardware design must be configured for better than 99.999% availability.
The number of CPU cores, memory, NICs, and storage allocated per CPS VM must meet or exceed the requirements.
The following tables provide information related to vCPU requirements based on:
Hyper-threading: Enabled (Default)
CPU Pinning: Enabled
CPU Reservation: Yes (if allowed by hypervisor)
Memory Reservation: Yes (if allowed)
Hard Disk (in GB): 100
| Physical Cores / Blade | VM Type | Memory (in GB) | Hard Disk (in GB) | vCPU | Configuration |
|---|---|---|---|---|---|
| Blade with 16 CPUs | Policy Server VMs (QNS) | 16 | 100 | 12 | Threading = 200; Mongo per host = 10; Criss-cross Mongo for Session Cache = 2 on each VM |
| Blade with 16 CPUs | Session Manager VMs | 128 | 100 | 6 | |
| Blade with 16 CPUs | Control Center (OAM) VMs | 16 | 100 | 6 | |
| Blade with 16 CPUs | Policy Director VMs (LB) | 32 | 100 | 12 | |
| Blade with 24 CPUs | Policy Server VMs (QNS) | 16 | 100 | 10 | Threading = 100; Mongo per host = 10; Criss-cross Mongo for Session Cache = 2 on each VM; Hyper-threading = Default (Enabled) |
| Blade with 24 CPUs | Session Manager VMs | 80 | 100 | 8 | |
| Blade with 24 CPUs | Control Center (OAM) VMs | 16 | 100 | 8 | |
| Blade with 24 CPUs | Policy Director VMs (LB) | 32 | 100 | 12 | |
The second table lists the same VM types with minimum vCPU values (a plus sign indicates that value or more):

| Physical Cores / Blade | VM Type | Memory (in GB) | Hard Disk (in GB) | vCPU | Configuration |
|---|---|---|---|---|---|
| Blade with 16 CPUs | Policy Server VMs (QNS) | 16 | 100 | 12+ | Threading = 200; Mongo per host = 10; Criss-cross Mongo for Session Cache = 2 on each VM |
| Blade with 16 CPUs | Session Manager VMs | 128 | 100 | 6+ | |
| Blade with 16 CPUs | Control Center (OAM) VMs | 16 | 100 | 6+ | |
| Blade with 16 CPUs | Policy Director VMs (LB) | 32 | 100 | 8+ | |
| Blade with 24 CPUs | Policy Server VMs (QNS) | 16 | 100 | 10+ | Threading = 100; Mongo per host = 10; Criss-cross Mongo for Session Cache = 2 on each VM |
| Blade with 24 CPUs | Session Manager VMs | 80 | 100 | 8+ | |
| Blade with 24 CPUs | Control Center (OAM) VMs | 16 | 100 | 8+ | |
| Blade with 24 CPUs | Policy Director VMs (LB) | 32 | 100 | 12+ | |
Note | For large-scale deployments with more than 35 Policy Server (qns) VMs, more than 20 Session Manager (sessionmgr) VMs, or more than 2 Policy Director (lb) VMs, the recommended RAM for OAM (pcrfclient) VMs is 64 GB. |
The following orchestration capabilities are required to administer a CPS deployment in OpenStack. Each CPS VM is identified by its node type (for example, NODE_TYPE=pcrfclient01). The CPS node types are:
Policy Server (qns)
Policy Director (lb and iomanager)
OAM (pcrfclient)
Session Manager (sessionmgr)
During a rollback, half of the Session Manager (SM) VMs are deleted. As a result, replica sets must be configured so that no replica set has all of its members in one half or the other; distribute replica set members across the two upgrade sets so that no replica set is deleted entirely. Refer to the CPS Migration and Upgrade Guide for more details.
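As one way to review how replica set members are spread across Session Manager hosts before an upgrade, the replica set configuration can be queried with the standard mongo shell (the host name here is illustrative; port 27717 is the first of the replica set ports used elsewhere in this guide):

```
# Print the host:port of every member of the replica set; the hosts
# should span both upgrade sets, not fall entirely in one half.
mongo --host sessionmgr01 --port 27717 --quiet --eval \
  'rs.conf().members.forEach(function(m) { print(m.host); })'
```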
CPS is supported on OpenStack Liberty or Newton.
CPS can also be installed on Cisco distributed platforms: Ultra B1.0 or Mercury 2.2.8
For more information about installing OpenStack and Cisco distributed platforms, refer to:
OpenStack Liberty: http://docs.openstack.org/liberty/
OpenStack Newton: https://docs.openstack.org/newton/
Ultra B1.0: https://www.cisco.com/c/en/us/solutions/service-provider/virtualized-packet-core/index.html
Mercury 2.2.8:
After you install OpenStack, you must perform some prerequisite tasks. The following sections describe these prerequisite tasks.
Note | The example commands in the following sections are related to OpenStack Liberty. For commands related to other supported platforms, refer to the corresponding platform documentation. |
CPU pinning is supported and recommended in OpenStack deployments where hyperthreading is enabled. This enables CPS VMs to be pinned to dedicated physical CPU cores.
Prerequisites:
OpenStack Liberty (OSP 7.2), OpenStack Newton, Ultra B1.0, or Mercury 2.2.8
numactl must be installed on control and compute nodes.
Refer to the following link for general instructions to enable CPU pinning for guest VMs: http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
The numactl package provides a command to examine the NUMA layout of the blades. Install this package on compute nodes to help determine which CPUs to set aside for pinning.
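For example, on a RHEL-based compute node the package can be installed and the NUMA layout displayed as follows (output varies by hardware):

```
# Install the numactl package, then show each NUMA node and the
# physical CPUs that belong to it.
yum install -y numactl
numactl --hardware
```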
To configure the hypervisor so that it will not use the CPUs identified for CPU Pinning:
Step 1 | Follow the instructions in the post referenced in the Prerequisites to create a host aggregate, add compute hosts to the aggregate, and set the CPU pinning metadata. |
Step 2 | Update flavors that are NOT used for CPU pinning (non-CPS VM flavors) with the following command:
nova flavor-key <id> set "aggregate_instance_extra_specs:pinned"="false" |
Step 3 | Update the CPS VM flavors (or a subset, if you plan to bring up only certain CPS VMs with CPU pinning) with the following commands:
nova flavor-key <id> set hw:cpu_policy=dedicated
nova flavor-key <id> set aggregate_instance_extra_specs:pinned=true |
Step 4 | Launch a CPS VM with the performance-enhanced flavor. Note the host on which the instance is created and the instance name.
nova show <id> shows the host on which the VM is created in the field OS-EXT-SRV-ATTR:host, and the virsh instance name in the field OS-EXT-SRV-ATTR:instance_name. |
Step 5 | To verify that vCPUs were pinned to the reserved physical CPUs, log in to the compute node on which the VM is created and run the following command:
virsh dumpxml <instance_name>
The relevant portion of the output shows the pinning, for example:
<vcpu placement='static'>4</vcpu>
<cputune>
<shares>4096</shares>
<vcpupin vcpu='0' cpuset='11'/>
<vcpupin vcpu='1' cpuset='3'/>
<vcpupin vcpu='2' cpuset='2'/>
<vcpupin vcpu='3' cpuset='10'/>
<emulatorpin cpuset='2-3,10-11'/>
</cputune> |
For more information about keystone commands, refer to the keystone command reference at: http://docs.openstack.org/cli-reference/index.html
Step 1 | A core user must have the administrator role to launch VMs on specific hosts in OpenStack. Add the administrator role to the core user in the core tenant with the following command:
keystone user-role-add --user "core" --role admin --tenant "core" |
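To confirm that the role was applied, the assignment can be listed back with the keystone v2 client:

```
# Shows the roles held by the core user in the core tenant.
keystone user-role-list --user "core" --tenant "core"
```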
Step 2 | You must define at least one availability zone in OpenStack. The nova hypervisor-list command shows the list of available hypervisors. For example:
[root@os24-control]# nova hypervisor-list
+----+--------------------------+
| ID | Hypervisor hostname      |
+----+--------------------------+
| 1  | os24-compute-2.cisco.com |
| 2  | os24-compute-1.cisco.com |
+----+--------------------------+ |
Step 3 | Create availability zones specific to your deployment. The following commands provide an example of how to create the availability zones:
nova aggregate-create osXX-compute-1 az-1
nova aggregate-add-host osXX-compute-1 osXX-compute-1.cisco.com
nova aggregate-create osXX-compute-2 az-2
nova aggregate-add-host osXX-compute-2 osXX-compute-2.cisco.com |
Step 4 | Configure the compute nodes to create volumes on availability zones: edit the /etc/cinder/cinder.conf file to add the storage_availability_zone parameter below the [DEFAULT] line. For example:
ssh root@os24-compute-1.cisco.com
[DEFAULT]
storage_availability_zone=az-1:os24-compute-1.cisco.com
After adding the storage availability zone lines in the cinder.conf file, restart the cinder volume service with the following command:
systemctl restart openstack-cinder-volume
Repeat Step 4 for the other compute nodes. |
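To confirm that each cinder-volume service reports the expected zone after the restart:

```
# The "Zone" column for each cinder-volume service should match the
# storage_availability_zone configured on that compute node.
cinder service-list
```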
Download the CPS ISO image file (CPS_x.x.x.release.iso) for the release from software.cisco.com and load it on the OpenStack control node.
CPS supports the QCOW2 image format for OpenStack installations. The QCOW2 base image is available to download as a separate file, and is not packaged inside the ISO.
Download the CPS QCOW2 base image file and extract it as shown in the following command:
tar -zxvf CPS_x.x.x_Base.qcow2.release.tar.gz
Locate the base image; it is the root disk used by the Cisco Policy Suite VM.
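Optionally, the extracted file can be inspected before import (the file name here is illustrative; use the name produced by the extraction):

```
# Confirm the file is a valid QCOW2 image and check its virtual size.
qemu-img info CPS_x.x.x_Base.release.qcow2
```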
Note | The commands mentioned in this section are specific to OpenStack Liberty. For other OpenStack release specific commands, refer to https://releases.openstack.org/. |
Import the Cisco Policy Suite base QCOW2 or VMDK image into the OpenStack glance repository.
To import the QCOW2 image, enter the following:
source /root/keystonerc_admin
glance image-create --name "<base vm name>" --visibility "<visibility>" --disk-format "qcow2" --container "bare" --file <path of base qcow2>
To import the VMDK image, enter the following:
source /root/keystonerc_admin
glance image-create --name " <base vm name> " --visibility "<visibility>" --disk-format "vmdk" --container "bare" --file <path of base vmdk>
Import the ISO image by running the following command:
source /root/keystonerc_admin
glance image-create --name "CPS_x.x.x.release.iso" --visibility "public" --disk-format "iso" --container "bare" --file <path to iso file>
For more information on glance commands, refer to http://docs.openstack.org/cli-reference/glance.html.
Create a cinder volume to map the glance image to the volume. This ensures that the cinder volume (and also the ISO that you imported to glance) can be automatically attached to the Cluster Manager VM when it is launched.
In the core tenant, create and format the following cinder volumes (svn01, svn02, mongo01, mongo02, and the CPS ISO volume) to be attached to various VMs:
It is recommended you work with Cisco AS to determine the size of each volume.
Note | For mongo01 and mongo02, the minimum recommended size is 60 GB. |
The following commands illustrate how to create the cinder volumes:
source /root/keystonerc_user
cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name svn01 --availability-zone az-1:os24-compute-1.cisco.com 2
cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name svn02 --availability-zone az-2:os24-compute-2.cisco.com 2
cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name mongo01 --availability-zone az-1:os24-compute-1.cisco.com 60
cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name mongo02 --availability-zone az-2:os24-compute-2.cisco.com 60
cps_iso_id=$(glance image-list | grep $cps_iso_name | awk '{print $2}')
cinder create --display-name $cps_iso_name --image-id $cps_iso_id --availability-zone az-1:os24-compute-1.cisco.com 3
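After creation, the volumes can be checked for readiness:

```
# Each volume should reach status "available" before it is attached.
cinder list
```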
OpenStack must have enough Default Quotas (that is, size of RAM, number of vCPUs, number of instances) to spin up all the VMs.
Update the Default Quotas from the OpenStack dashboard.
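The quotas can also be inspected and raised from the CLI with the Liberty-era nova client (the values below are placeholders; size them for your deployment):

```
# Show the current quotas for the tenant.
nova quota-show --tenant <tenant_id>

# Raise the instance, vCPU, and RAM (MB) quotas to fit all CPS VMs.
nova quota-update --instances 30 --cores 100 --ram 512000 <tenant_id>
```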
OpenStack flavors are virtual hardware templates that define the sizes for RAM, disk, vCPUs, and so on.
To create the flavors for your CPS deployment, run the following commands, replacing the appropriate values for your CPS deployment.
source /root/keystonerc_admin
nova flavor-create --ephemeral 0 pcrfclient01 auto 16384 0 2
nova flavor-create --ephemeral 0 pcrfclient02 auto 16384 0 2
nova flavor-create --ephemeral 0 cluman auto 8192 0 4
nova flavor-create --ephemeral 0 qps auto 10240 0 4
nova flavor-create --ephemeral 0 sm auto 16384 0 4
nova flavor-create --ephemeral 0 lb01 auto 8192 0 6
nova flavor-create --ephemeral 0 lb02 auto 8192 0 6
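To confirm that the flavors were created with the intended sizes:

```
# Lists each flavor with its memory (MB), disk (GB), and vCPU values.
nova flavor-list
```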
Allow access to the following TCP and UDP ports from the OpenStack dashboard, or from the CLI as shown in the following example:
source /root/keystonerc_user
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default tcp 53 53 0.0.0.0/0
nova secgroup-add-rule default udp 53 53 0.0.0.0/0
nova secgroup-add-rule default tcp 80 80 0.0.0.0/0
nova secgroup-add-rule default tcp 443 443 0.0.0.0/0
nova secgroup-add-rule default tcp 7443 7443 0.0.0.0/0
nova secgroup-add-rule default tcp 8443 8443 0.0.0.0/0
nova secgroup-add-rule default tcp 9443 9443 0.0.0.0/0
nova secgroup-add-rule default tcp 5540 5540 0.0.0.0/0
nova secgroup-add-rule default tcp 1553 1553 0.0.0.0/0
nova secgroup-add-rule default tcp 3868 3868 0.0.0.0/0
nova secgroup-add-rule default tcp 9160 9160 0.0.0.0/0
nova secgroup-add-rule default tcp 27717 27720 0.0.0.0/0
nova secgroup-add-rule default tcp 5432 5432 0.0.0.0/0
nova secgroup-add-rule default tcp 61616 61616 0.0.0.0/0
nova secgroup-add-rule default tcp 9443 9450 0.0.0.0/0
nova secgroup-add-rule default tcp 8280 8290 0.0.0.0/0
nova secgroup-add-rule default tcp 7070 7070 0.0.0.0/0
nova secgroup-add-rule default tcp 8080 8080 0.0.0.0/0
nova secgroup-add-rule default tcp 8090 8090 0.0.0.0/0
nova secgroup-add-rule default tcp 7611 7611 0.0.0.0/0
nova secgroup-add-rule default tcp 7711 7711 0.0.0.0/0
nova secgroup-add-rule default udp 694 694 0.0.0.0/0
nova secgroup-add-rule default tcp 10080 10080 0.0.0.0/0
nova secgroup-add-rule default tcp 11211 11211 0.0.0.0/0
nova secgroup-add-rule default tcp 111 111 0.0.0.0/0
nova secgroup-add-rule default udp 111 111 0.0.0.0/0
nova secgroup-add-rule default tcp 2049 2049 0.0.0.0/0
nova secgroup-add-rule default udp 2049 2049 0.0.0.0/0
nova secgroup-add-rule default tcp 32767 32767 0.0.0.0/0
nova secgroup-add-rule default udp 32767 32767 0.0.0.0/0
nova secgroup-add-rule default tcp 9763 9763 0.0.0.0/0
nova secgroup-add-rule default tcp 8140 8140 0.0.0.0/0
nova secgroup-add-rule default tcp 8161 8161 0.0.0.0/0
nova secgroup-add-rule default tcp 12712 12712 0.0.0.0/0
nova secgroup-add-rule default tcp 9200 9200 0.0.0.0/0
nova secgroup-add-rule default tcp 5060 5060 0.0.0.0/0
nova secgroup-add-rule default udp 5060 5060 0.0.0.0/0
nova secgroup-add-rule default tcp 8458 8458 0.0.0.0/0
nova secgroup-add-rule default udp 8458 8458 0.0.0.0/0
Where: default is the name of the security group.
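To review the rules after adding them:

```
# Lists every rule currently attached to the "default" security group.
nova secgroup-list-rules default
```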