The Cisco Policy Suite offers a carrier-grade, high capacity, high performance, virtualized software solution, capable of running on VMware, OpenStack/KVM hypervisors, or cloud infrastructures. To meet the stringent performance, capacity, and availability demands, the Cisco software requires that all allocated hardware system resources be 100% available when needed, and not oversubscribed or shared across separate VMs.
When operating a cloud infrastructure, the infrastructure must be configured to guarantee CPU, memory, network, and I/O availability for each CPS VM. Oversubscription of system resources reduces the performance and capacity of the platform, and may compromise availability and response times. CPU core requirements are listed as pCPUs (physical cores), not vCPUs (hyper-threaded virtual cores).
In addition, the CPS carrier-grade platform requires:

- RAM reservation is enabled for all memory allocated to the CPS VM.
- CPU Hyperthreading must be ENABLED. To prevent over-subscription of CPU cores, CPU pinning should be ENABLED.
- CPU benchmark of at least 13,000 rating per chip and 1,365 rating per thread.
- The total number of VM CPU cores allocated should be 2 less than the total number of CPU cores available on the blade.
- Monitor the CPU STEAL statistic. This statistic should not cross 2% for more than 1 minute (see the example after this list). A high CPU STEAL value indicates the application is waiting for CPU, and is usually the result of CPU over-allocation or no CPU pinning. CPS performance cannot be guaranteed in an environment with high CPU STEAL.
- CPU must be a high performance Intel x86 64-bit chipset.
- BIOS settings should be set to high-performance values, rather than energy saving, hibernating, or speed stepping (contact the hardware vendor for specific values).
- For deployments which cannot scale by adding more VMs, Cisco will support the allocation of additional CPUs above the recommendation to a single VM, but does not guarantee a linear performance increase.
- Cisco will not support performance SLAs for CPS implementations with less than the recommended CPU allocation.
- Cisco will not support performance SLAs for CPS implementations with CPU over-allocation (assigning more vCPUs than are available on the blade, or sharing CPUs).
- Higher performance can be achieved by adding more VMs, not by adding more system resources to VMs. For deployments which cannot scale in this way, Cisco will support the allocation of additional CPUs above the recommendation, but does not guarantee a linear performance increase.
- RAM latency should be lower than 15 ns.
- RAM should be error-correcting ECC memory.
- Disk storage performance should be less than 2 millisecond average latency.
- Disk storage performance needs to support greater than 5000 input/output operations per second (IOPS) per CPS VM.
- Disk storage must provide redundancy and speed, such as RAID 0+1.
- The hardware and hardware design must be configured for better than 99.999% availability.
- For HA deployments, Cisco requires that customer designs comply with the Cisco CPS HA design guidelines; for example, at least two of each CPS VM type must be deployed: Policy Server (qns), Policy Director (lb), OAM (pcrfclient), Session Manager (sessionmgr).
- Each CPS VM type must not share a common HW zone with the same CPS VM type.
- The number of CPU cores, memory, NICs, and storage allocated per CPS VM must meet or exceed the minimum requirements.
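As a quick way to watch the CPU STEAL statistic mentioned in the list above (a sketch, not a CPS-mandated procedure; the sampling interval is an arbitrary choice), the "st" column reported by vmstat on a CPS VM shows the percentage of CPU time stolen by the hypervisor:

# Sample CPU statistics every 5 seconds for one minute; the "st" (steal)
# column should stay at or below 2%.
vmstat 5 12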
The following table provides information related to vCPU requirements based on:

- CPU pinning: Yes (if allowed by hypervisor)
- Reservation: Yes (if allowed)
Table 1  Resource Allocation Recommendations for New Deployments in Virtualized Environments

Memory: 24 GB, plus 600 MB for every 100,000 data sessions, 400 MB for every 100,000 SPR subscribers, and 400 MB for every 100,000 OCS subscribers
Center (OAM - PCRFCLIENT)
For deployments larger than 24 VMs, contact Cisco Advanced Services.

The following orchestration capabilities are required to administer a CPS deployment in OpenStack:
- Ability to independently create/delete/re-create the Cluster Manager VM.
- Ability to snapshot the Cluster Manager VM and restore the Cluster Manager VM from snapshot.
- Ability to attach and detach the ISO cinder volume to/from the Cluster Manager VM.
Cisco recommends that the CPS software ISO be mapped to a cinder volume. In deployments where this
recommendation is used, prior to installation or upgrade, the ISO cinder volume
must be attached to the Cluster Manager so that the ISO can be mounted inside
the Cluster Manager. The sample HEAT template provided in this document
demonstrates how to automate mounting the ISO inside Cluster Manager. In
deployments where this recommendation is not used, the CPS software ISO must be
made available inside Cluster Manager VM and mounted using the method
implemented by the customer.
The Config drive must be used to pass in files such as userdata, and the Config drive must be mounted to the CPS VM in the 'iso9660' format.
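A minimal sketch of a boot command that satisfies this requirement; the image, flavor, network, VM name, and user-data file names are placeholders, not values taken from this guide:

# Boot a CPS VM with the config drive enabled (presented to the guest as iso9660)
# and pass the userdata file in through it.
nova boot --config-drive true --user-data my-userdata.txt \
  --image <image_id> --flavor <flavor_name> \
  --nic net-id=<internal_net_id> <vm_name>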
Any cinder volume required by the product code must be attached to the VM first, and any customer environment-specific cinder volumes should be attached after. One exception is the ISO cinder volume attached to the Cluster Manager VM. In cases where the ISO cinder volume is attached in a different order, the API to mount the ISO needs to be supplied with the right device name in the API payload.
eth0 needs to be on the
'internal' network for inter-VM communication.
On all CPS VMs, the Cluster
Manager IP needs to be injected in /etc/hosts to ensure connectivity between
each host and the Cluster Manager.
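For example, an orchestration script might append a line like the following on each CPS VM; the IP address and the host alias shown here are assumptions for illustration only:

# Make the Cluster Manager reachable by name from this host.
echo "192.168.2.200  cluman" >> /etc/hosts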
Each CPS VM's role needs to be defined in /etc/broadhop/.profile, for example:
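The original example is not preserved in this text; a sketch of what such an entry could look like, where the variable name NODE_TYPE and its value are assumptions for illustration:

# /etc/broadhop/.profile -- identifies this VM's CPS role (illustrative)
NODE_TYPE=pcrfclient01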
For upgrades and upgrade rollbacks, the orchestrator must have the ability to independently create/delete/re-create half/all of the following CPS VMs:

- Policy Director (lb and iomanager)
During an upgrade rollback, half of the Session Manager (SM) VMs must be deleted during the rollback procedure. As a result, the replica sets must be configured such that not all members of a replica set are in one half or the other. Replica set members must be distributed across the two upgrade sets so that a replica set is never entirely deleted. Refer to the CPS Upgrade Guide for more details.
For scaling, the
orchestrator must have the ability to independently create/delete/re-create
half/all of the following CPS VMs in each scaling unit:
CPS is supported on OpenStack Kilo.
For more information about installing OpenStack, see the following URL:
Identify the pool of CPUs you want to set aside for pinning. At least 2 CPUs should be set aside for the hypervisor in each node if it is a compute-only blade. If the blade is operating as both a control and compute node, set aside more CPUs for OpenStack services. Use the remaining CPUs for pinning.
In the above example, the following CPUs could be selected for pinning: 2, 3, 6, 7, ...
Prevent Hypervisor from Using CPUs Set Aside for Pinning
To configure the
hypervisor so that it will not use the CPUs identified for CPU Pinning:
Open the KVM
console for the Compute node.
Run the following command to update the kernel boot parameters:
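The command itself is not preserved in this text; one common way to do this (a sketch, assuming the CPUs 2, 3, 6, and 7 chosen in the earlier example and a grubby-managed boot configuration) is:

# Keep the kernel scheduler off the CPUs reserved for pinning, then reboot
# so the new boot parameter takes effect.
grubby --update-kernel=ALL --args="isolcpus=2,3,6,7"
reboot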
Edit the /etc/nova/nova.conf file on that blade and set the vcpu_pin_set value to a list or range of physical CPU cores to reserve for virtual machine processes. For example:
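Continuing the assumption that CPUs 2, 3, 6, and 7 were reserved:

# /etc/nova/nova.conf
vcpu_pin_set=2,3,6,7

The nova-compute service on that blade typically needs to be restarted for the change to take effect.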
After Linux has finished the boot process, enter the following command to verify the above kernel boot options; the options you defined will be displayed in the output, for example:
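A sketch of the check, again assuming the isolcpus example above:

cat /proc/cmdline
# Expect the argument you added to appear in the output, e.g.:
#   BOOT_IMAGE=... ro quiet isolcpus=2,3,6,7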
Follow the rest of the instructions in the above blog post to create the host aggregate, add compute hosts to the aggregate, and set the CPU pinning metadata. Then update the flavors which are NOT used for CPU pinning (non-CPS VM flavors) with the following command:

nova flavor-key <id> set aggregate_instance_extra_specs:pinned=false
Update the flavors which are CPS VM flavors (or a sub-set, if you are planning to bring up only certain VMs in CPS with CPU pinning) with the following commands:
nova flavor-key <id> set hw:cpu_policy=dedicated
nova flavor-key <id> set aggregate_instance_extra_specs:pinned=true
Launch a CPS VM with the performance-enhanced flavor. Note the host on which the instance is created and the instance name.

nova show <id> will show the host on which the VM is created in the field OS-EXT-SRV-ATTR:host.

nova show <id> will show the virsh instance name in the field OS-EXT-SRV-ATTR:instance_name.
To verify that vCPUs were pinned to the reserved physical CPUs, log in to the Compute node on which the VM is created and run the following command:

virsh dumpxml <instance_name>

The following section of the output will show the physical CPUs in the cpuset field, taken from the list of CPUs that were set aside earlier. For example:
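A sketch of the relevant portion of the virsh dumpxml output, assuming the four reserved CPUs from the earlier example:

<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='6'/>
  <vcpupin vcpu='3' cpuset='7'/>
</cputune>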
The administrator role for the core user is not required if
you do not intend to launch the VM on a specific host and if you prefer to
allow nova to select the host for the VM.
You must define at least one availability zone in OpenStack. The nova hypervisor-list command shows the list of available hypervisors. For example:

[root@os24-control]# nova hypervisor-list
+----+--------------------------+
| ID | Hypervisor hostname      |
+----+--------------------------+
| 1  | os24-compute-2.cisco.com |
| 2  | os24-compute-1.cisco.com |
+----+--------------------------+
Create availability zones specific to your deployment. The following commands provide an example of how to create the availability zones:

nova aggregate-create osXX-compute-1 az-1
nova aggregate-add-host osXX-compute-1 osXX-compute-1.cisco.com
nova aggregate-create osXX-compute-2 az-2
nova aggregate-add-host osXX-compute-2 osXX-compute-2.cisco.com

The above commands create two availability zones, az-1 and az-2.
You need to specify the zones az-1 or az-2 using Nova boot commands (see
Create CPS VMs using Nova Boot Commands),
or in the Heat environment files (see
Create CPS VMs using Heat).
You can also put more than one compute node in an availability zone. You could create az-1 with both blades, or in a 6-blade system, put three blades in each zone, and then use az-1:osXX-compute-2.cisco.com to lock that VM onto that blade.
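For example, a boot command locked to a specific blade might look like this (the image, flavor, network, and VM name values are placeholders):

# Place the VM on a specific compute node inside zone az-1.
nova boot --image <image_id> --flavor <flavor_name> \
  --nic net-id=<internal_net_id> \
  --availability-zone az-1:osXX-compute-2.cisco.com <vm_name>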
The availability zone for the svn01 volume should be the same as that of pcrfclient01, the svn02 volume the same as pcrfclient02, and similarly for mongo01 and sessionmgr01, and mongo02 and sessionmgr02. The same concept applies to cluman: the ISO volume and the Cluster Manager (cluman) should be in the same zone.
Configure the compute nodes to create volumes on availability
zones: Edit the
/etc/cinder/cinder.conf file to add the
storage_availability_zone parameter below the
[DEFAULT] line. For example:
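A sketch of the setting, assuming this compute node belongs to the az-1 zone created earlier (use your own zone name):

[DEFAULT]
storage_availability_zone=az-1

The cinder volume service on that node typically needs to be restarted for the change to take effect.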
Import the ISO image by running the following command:

glance image-create --name "CPS_x.x.x.release.iso" --is-public "True" --disk-format "iso" --container-format "bare" --file <name of iso file>
Create a cinder volume
to map the glance image to the volume. This ensures that the cinder volume (and
also the ISO that you imported to glance) can be automatically attached to the
Cluster Manager VM when it is launched.
In the core tenant,
create and format the following cinder volumes to be attached to various VMs:
It is recommended you
work with Cisco AS to determine the size of each volume.
For mongo01 and
mongo02, the minimum recommended size is 60 GB.
The following commands illustrate how to create the cinder volumes. Replace $cps_iso_name with the ISO filename. For example:
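The original command listing is not preserved in this text; the following is a sketch of the general form, assuming the az-1/az-2 zones and the svn01/svn02 and mongo01/mongo02 volumes discussed above (the metadata values and all sizes other than the 60 GB mongo minimum are illustrative assumptions):

# Data volumes, placed in the same availability zone as the VMs that use them.
cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name svn01 --availability-zone az-1 2
cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name svn02 --availability-zone az-2 2
cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name mongo01 --availability-zone az-1 60
cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name mongo02 --availability-zone az-2 60

# Map the CPS ISO glance image to a cinder volume; $cps_iso_name is the ISO filename.
cps_iso_id=$(glance image-list | grep "$cps_iso_name" | awk '{print $2}')
cinder create --display-name "$cps_iso_name" --image-id "$cps_iso_id" --availability-zone az-1 3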
If any host
in the availability zone may be used, then only the zone needs to be specified.
Currently, the recommendation only specifies
Verify or Update Default Quotas
OpenStack must have
enough Default Quotas (that is, size of RAM, number of vCPUs, number of
instances) to spin up all the VMs.
Update the Default
Quotas in the following page of the OpenStack dashboard:
Admin > Defaults > Update Defaults
OpenStack flavors define the virtual hardware templates, specifying sizes for RAM, disk, vCPUs, and so on.
To create the flavors for your CPS deployment, run the following commands, replacing the values as appropriate for your deployment.
nova flavor-create --ephemeral 0 pcrfclient01 auto 16384 0 2
nova flavor-create --ephemeral 0 pcrfclient02 auto 16384 0 2
nova flavor-create --ephemeral 0 cluman auto 8192 0 4
nova flavor-create --ephemeral 0 qps auto 10240 0 4
nova flavor-create --ephemeral 0 sm auto 16384 0 4
nova flavor-create --ephemeral 0 lb01 auto 8192 0 6
nova flavor-create --ephemeral 0 lb02 auto 8192 0 6
Set up Access and Security

Allow access to the following TCP and UDP ports from the OpenStack dashboard (Project > Access & Security > default / Manage Rules) or from the CLI, as shown in the following example:
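The port list itself is not reproduced in this text; as an illustration of the CLI form only (port 22 is an example value, not part of the CPS port list), a rule is added to the default security group like this:

# Allow TCP port 22 from any source in the "default" security group.
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# UDP rules use the same form, for example:
# nova secgroup-add-rule default udp 161 161 0.0.0.0/0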