Installation Overview
Cisco Policy Suite VMs are deployed using either Nova boot commands or Heat templates.
| Step 1 | Create cloud configuration files for each VM to be deployed (xxx-cloud.cfg). These configurations define the OpenStack parameters for each CPS VM. |
| Step 2 | Run the following command on the control node: |
| Step 3 | Deploy each CPS VM with the following nova boot command. The following example shows the nova boot commands to deploy a Cluster Manager (cluman), two OAMs (pcrfclients), two sessionmgrs, two Policy Directors (load balancers), and four Policy Server (qns) VMs. (An illustrative nova boot sketch also appears after this procedure.) |
| Step 4 | Update the ports to allow address pairing on the Neutron ports (a hedged example appears after this procedure): |
| Step 5 | Wait approximately 10 minutes for the Cluster Manager VM to be deployed, then check the readiness status of the Cluster Manager VM using the following API: GET http://<Cluster Manager IP>:8458/api/system/status/cluman Refer to /api/system/status/cluman for more information. When this API response indicates that the Cluster Manager VM is in a ready state, continue with the installation. Refer also to the /var/log/cloud-init-output.log on the Cluster Manager VM for deployment details. |
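The nova boot arguments for Step 3 depend on your environment; the following is only an illustrative sketch, assuming the sample networks, IPs, and cluman-cloud.cfg file shown later in this chapter. Substitute your own image, flavor, network UUIDs, and volume IDs.

# Hypothetical example only -- adjust names, UUIDs, and IPs for your deployment.
nova boot --config-drive true \
  --user-data=cluman-cloud.cfg \
  --image "base_vm" --flavor "cluman" \
  --nic net-id="<internal-net-uuid>,v4-fixed-ip=172.16.2.19" \
  --nic net-id="<management-net-uuid>,v4-fixed-ip=172.18.11.151" \
  --block-device-mapping "/dev/vdb=<cps-iso-volume-uuid>:::0" \
  --availability-zone "az-1" cluman

For Step 4, a hedged example of allowing the load balancer VIPs on the Neutron ports follows; the port ID and VIP address are placeholders taken from the sample environment file. Either the legacy neutron client or the unified openstack client can be used, depending on which is installed.

# Hypothetical example -- replace the port ID and VIP with your values.
neutron port-update <lb01-internal-port-id> \
  --allowed-address-pairs type=dict list=true ip_address=172.16.2.200
# Equivalent with the unified OpenStack client:
openstack port set --allowed-address ip-address=172.16.2.200 <lb01-internal-port-id>

For Step 5, the readiness API can be queried with curl from any host that can reach the Cluster Manager management IP (example address from this guide):

curl -s http://172.18.11.151:8458/api/system/status/cluman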
For nova boot installation of CPS, you must create a cloud configuration file for each CPS VM to be deployed.
The following sections show an example Cluster Manager cloud configuration (cluman-cloud.cfg) and a pcrfclient01 cloud configuration (pcrfclient01-cloud.cfg).
These files must be placed in the directory in which you execute the nova launch commands, typically /root/cps-install/.
|  Note | Use  For Cluman/Arbiter VM, include  | 
#cloud-config
write_files:
 - path: /etc/sysconfig/network-scripts/ifcfg-eth0
   encoding: ascii
   content: |
     DEVICE=eth0
     BOOTPROTO=none
     NM_CONTROLLED=no
     IPADDR=172.16.2.19    <---- Internal IP to access via private IP
     NETMASK=255.255.255.0
     NETWORK=172.16.2.0    <------ Internal network
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth1
   encoding: ascii
   content: |
     DEVICE=eth1
     BOOTPROTO=none
     NM_CONTROLLED=no
     IPADDR=172.18.11.101   <---- Management IP to access via public IP
     NETMASK=255.255.255.0
     GATEWAY=172.18.11.1
     NETWORK=172.18.11.0
   owner: root:root
   permissions: '0644'
 - path: /var/lib/cloud/instance/payload/launch-params
   encoding: ascii
   owner: root:root
   permissions: '0644'
 - path: /root/.autoinstall.sh
   encoding: ascii
   content: |
     #!/bin/bash
     if [[ -d /mnt/iso ]] && [[ -f /mnt/iso/install.sh ]]; then
       /mnt/iso/install.sh << EOF
     mobile
     y
     1
     EOF
     fi
   permissions: '0755'
mounts:
 - [ /dev/vdb, /mnt/iso, iso9660, "auto,ro", 0, 0 ]
runcmd:
 - ifdown eth0
 - ifdown eth1
 - echo 172.16.2.19 installer >> /etc/hosts   <---- Internal/private IP of cluman 
 - ifup eth0
 - ifup eth1
 - echo ifdown eth0 >> /etc/rc.d/rc.local
 - echo ifup eth0 >> /etc/rc.d/rc.local
 - echo ifdown eth1 >> /etc/rc.d/rc.local
 - echo ifup eth1 >> /etc/rc.d/rc.local
 - chmod +x /etc/rc.d/rc.local
 - /root/.autoinstall.sh
|  Note | If the actual hostname of the Cluster Manager VM is other than 'installer', modify the installer/cluman entry in /etc/hosts accordingly. |
The following example configuration file is for pcrfclient01. You must create separate configuration files for each CPS VM to be deployed.
For each file, modify the NODE_TYPE and network settings (IPADDR, GATEWAY, NETWORK) accordingly.
A typical CPS deployment would require the following files:
pcrfclient01-cloud.cfg
pcrfclient02-cloud.cfg
lb01-cloud.cfg
lb02-cloud.cfg
sessionmgr01-cloud.cfg
sessionmgr02-cloud.cfg
qns01-cloud.cfg
qns02-cloud.cfg
qns03-cloud.cfg
qns04-cloud.cfg
Modify IPADDR to the IP address used in the nova boot command for that interface.
Set NETMASK, GATEWAY, and NETWORK according to your environment.
#cloud-config
# hostname: pcrfclient01
fqdn: pcrfclient01 
write_files:
 - path: /etc/sysconfig/network-scripts/ifcfg-eth0
   encoding: ascii
   content: |
     DEVICE=eth0
     BOOTPROTO=none
     NM_CONTROLLED=no
     IPADDR=172.16.2.20
     NETMASK=255.255.255.0
     NETWORK=172.16.2.0
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth1
   encoding: ascii
   content: |
     DEVICE=eth1
     BOOTPROTO=none
     NM_CONTROLLED=no
     IPADDR=172.18.11.152
     NETMASK=255.255.255.0
     GATEWAY=172.18.11.1
     NETWORK=172.18.11.0
   owner: root:root
   permissions: '0644'
 - path: /var/lib/cloud/instance/payload/launch-params
   encoding: ascii
   owner: root:root
   permissions: '0644'
 - path: /etc/broadhop.profile
   encoding: ascii
   content: "NODE_TYPE=pcrfclient01\n"
   owner: root:root
   permissions: '0644'
runcmd:
 - ifdown eth0
 - ifdown eth1
 - echo 172.16.2.19 installer >> /etc/hosts
 - ifup eth0
 - ifup eth1
 - sed -i '/^HOSTNAME=/d' /etc/sysconfig/network && echo HOSTNAME=pcrfclient01 >> /etc/sysconfig/network
 - echo pcrfclient01 > /etc/hostname
 - hostname pcrfclient01
To create the CPS VMs using OpenStack Heat, you must first create an environment file and a Heat template containing information for your deployment.
These files include information about the ISO, base image, availability zones, management IPs, and volumes. Modify the sample files provided below with information for your deployment.
After populating these files, continue with Create Heat Stack.
|  Note | Update the network/VLAN names, internal and management IPs, VIPs, and volumes for your environment. Also update the heat template (hot-cps.yaml) with your availability zone variables. |
# cat hot-cps.env
# This is an example environment file
parameters:
  cps_iso_image_name: CPS_9.0.0.release.iso
  base_vm_image_name: CPS_9.0.0_Base.release
  cps_az_1: az-1
  cps_az_2: az-2
  internal_net_name: internal
  internal_net_cidr: 172.16.2.0/24
  management_net_name: management
  management_net_cidr: 172.18.11.0/24
  management_net_gateway: 172.18.11.1
  gx_net_name: gx
  gx_net_cidr: 192.168.2.0/24
  cluman_flavor_name: cluman
  cluman_internal_ip: 172.16.2.19
  cluman_management_ip: 172.18.11.151
  lb_internal_vip: 172.16.2.200
  lb_management_vip: 172.18.11.156
  lb_gx_vip: 192.168.2.200
  lb01_flavor_name: lb01
  lb01_internal_ip: 172.16.2.201
  lb01_management_ip: 172.18.11.154
  lb01_gx_ip: 192.168.2.201
  lb02_flavor_name: lb02
  lb02_internal_ip: 172.16.2.202
  lb02_management_ip: 172.18.11.155
  lb02_gx_ip: 192.168.2.202
  pcrfclient01_flavor_name: pcrfclient01
  pcrfclient01_internal_ip: 172.16.2.20
  pcrfclient01_management_ip: 172.18.11.152
  pcrfclient02_flavor_name: pcrfclient02
  pcrfclient02_internal_ip: 172.16.2.21
  pcrfclient02_management_ip: 172.18.11.153
  qns01_internal_ip: 172.16.2.24
  qns02_internal_ip: 172.16.2.25
  qns03_internal_ip: 172.16.2.26
  qns04_internal_ip: 172.16.2.27
  sessionmgr01_internal_ip: 172.16.2.22
  sessionmgr01_management_ip: 172.18.11.157
  sessionmgr02_internal_ip: 172.16.2.23
  sessionmgr02_management_ip: 172.18.11.158
  mongo01_volume_id: "54789405-f683-401b-8194-c354d8937ecb"
  mongo02_volume_id: "9694ab92-8ddd-407e-8520-8b0280f5db03"
  svn01_volume_id: "5b6d7263-40d1-4748-b45c-d1af698d71f7"
  svn02_volume_id: "b501f834-eff9-4044-90c3-a24378f3734d"
  cps_iso_volume_id: "ef52f944-411b-42b1-b86a-500950f5b398"
|  Note | 
 | 
#cat hot-cps.yaml
heat_template_version: 2014-10-16
description: A minimal CPS deployment for big bang deployment
parameters:
#=========================
# Global Parameters
#=========================
  base_vm_image_name:
    type: string
    label: base vm image name
    description: name of the base vm as imported into glance
  cps_iso_image_name:
    type: string
    label: cps iso image name
    description: name of the cps iso as imported into glance
  cps_install_type:
    type: string
    label: cps installation type (mobile|mog|pats|arbiter|andsf|escef)
    description: cps installation type (mobile|mog|pats|arbiter|andsf|escef)
    default: mobile
  cps_az_1:
    type: string
    label: first availability zone
    description: az for "first half" of cluster
    default: nova
  cps_az_2:
    type: string
    label: second availability zone
    description: az for "second half" of cluster
    default: nova
#=========================
# Network Parameters
#=========================
  internal_net_name:
    type: string
    label: internal network name
    description: name of the internal network
  internal_net_cidr:
    type: string
    label: cps internal cidr
    description: cidr of internal subnet
  management_net_name:
    type: string
    label: management network name
    description: name of the management network
  management_net_cidr:
    type: string
    label: cps management cidr
    description: cidr of management subnet
  management_net_gateway:
    type: string
    label: management network gateway
    description: gateway on management network
    default: ""
  gx_net_name:
    type: string
    label: gx network name
    description: name of the gx network
  gx_net_cidr:
    type: string
    label: cps gx cidr
    description: cidr of gx subnet
  gx_net_gateway:
    type: string
    label: gx network gateway
    description: gateway on gx network
    default: ""
  cps_secgroup_name:
    type: string
    label: cps secgroup name
    description: name of cps security group
    default: cps_secgroup
#=========================
# Volume Parameters
#=========================
  mongo01_volume_id:
    type: string
    label: mongo01 volume id
    description: uuid of the mongo01 volume
  mongo02_volume_id:
    type: string
    label: mongo02 volume id
    description: uuid of the mongo02 volume
  svn01_volume_id:
    type: string
    label: svn01 volume id
    description: uuid of the svn01 volume
  svn02_volume_id:
    type: string
    label: svn02 volume id
    description: uuid of the svn02 volume
  cps_iso_volume_id:
    type: string
    label: cps iso volume id
    description: uuid of the cps iso volume
#=========================
# Instance Parameters
#=========================
  cluman_flavor_name:
    type: string
    label: cluman flavor name
    description: flavor cluman vm will use
    default: cluman
  cluman_internal_ip:
    type: string
    label: internal ip of cluster manager
    description: internal ip of cluster manager
  cluman_management_ip:
    type: string
    label: management ip of cluster manager
    description: management ip of cluster manager
  lb_internal_vip:
    type: string
    label: internal vip of load balancer
    description: internal vip of load balancer
  lb_management_vip:
    type: string
    label: management vip of load balancer
    description: management vip of load balancer
  lb_gx_vip:
    type: string
    label: gx ip of load balancer
    description: gx vip of load balancer 
  lb01_flavor_name:
    type: string
    label: lb01 flavor name
    description: flavor lb01 vms will use
    default: lb01
  lb01_internal_ip:
    type: string
    label: internal ip of load balancer
    description: internal ip of load balancer
  lb01_management_ip:
    type: string
    label: management ip of load balancer
    description: management ip of load balancer
  lb01_gx_ip:
    type: string
    label: gx ip of load balancer
    description: gx ip of load balancer
  lb02_flavor_name:
    type: string
    label: lb02 flavor name
    description: flavor lb02 vms will use
    default: lb02
  lb02_internal_ip:
    type: string
    label: internal ip of load balancer
    description: internal ip of load balancer
  lb02_management_ip:
    type: string
    label: management ip of load balancer
    description: management ip of load balancer
  lb02_gx_ip:
    type: string
    label: gx ip of load balancer
    description: gx ip of load balancer
  pcrfclient01_flavor_name:
    type: string
    label: pcrfclient01 flavor name
    description: flavor pcrfclient01 vm will use
    default: pcrfclient01
  pcrfclient01_internal_ip:
    type: string
    label: internal ip of pcrfclient01
    description: internal ip of pcrfclient01
  pcrfclient01_management_ip:
    type: string
    label: management ip of pcrfclient01
    description: management ip of pcrfclient01
  pcrfclient02_flavor_name:
    type: string
    label: pcrfclient02 flavor name
    description: flavor pcrfclient02 vm will use
    default: pcrfclient02
  pcrfclient02_internal_ip:
    type: string
    label: internal ip of pcrfclient02
    description: internal ip of pcrfclient02
  pcrfclient02_management_ip:
    type: string
    label: management ip of pcrfclient02
    description: management ip of pcrfclient02
  qns_flavor_name:
    type: string
    label: qns flavor name
    description: flavor qns vms will use
    default: qps
  qns01_internal_ip:
    type: string
    label: internal ip of qns01
    description: internal ip of qns01
  qns02_internal_ip:
    type: string
    label: internal ip of qns02
    description: internal ip of qns02
  qns03_internal_ip:
    type: string
    label: internal ip of qns03
    description: internal ip of qns03
  qns04_internal_ip:
    type: string
    label: internal ip of qns04
    description: internal ip of qns04
  sessionmgr_flavor_name:
    type: string
    label: sessionmgr flavor name
    description: flavor sessionmgr vms will use
    default: sm
  sessionmgr01_internal_ip:
    type: string
    label: internal ip of sessionmgr01
    description: internal ip of sessionmgr01
  sessionmgr01_management_ip:
    type: string
    label: management ip of sessionmgr01
    description: management ip of sessionmgr01
  sessionmgr02_internal_ip:
    type: string
    label: internal ip of sessionmgr02
    description: internal ip of sessionmgr02
  sessionmgr02_management_ip:
    type: string
    label: management ip of sessionmgr02
    description: management ip of sessionmgr02
resources:
#=========================
# Instances
#=========================
  cluman:
    type: OS::Nova::Server
    properties:
      availability_zone: { get_param: cps_az_1 }
      config_drive: "True"
      image: { get_param: base_vm_image_name }
      flavor: { get_param: cluman_flavor_name }
      networks:
        - port: { get_resource: cluman_internal_port }
        - port: { get_resource: cluman_management_port }
      block_device_mapping:
        - device_name: vdb
          volume_id: { get_param: cps_iso_volume_id }
      user_data_format: RAW
      user_data: { get_resource: cluman_config }
  cluman_internal_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: internal_net_name }
      fixed_ips: [{ ip_address: { get_param: cluman_internal_ip }}]
  cluman_management_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: management_net_name }
      fixed_ips: [{ ip_address: { get_param: cluman_management_ip }}]
  cluman_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /var/lib/cloud/instance/payload/launch-params
            permissions: "0644"
          - path: /etc/sysconfig/network-scripts/ifcfg-eth0
            permissions: "0644"
            content:
              str_replace:
                template: |
                  DEVICE=eth0
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                params:
                  $ip: { get_param: cluman_internal_ip }
          - path: /etc/sysconfig/network-scripts/ifcfg-eth1
            permissions: "0644"
            content:
              str_replace:
                template: |
                  DEVICE=eth1
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                  GATEWAY=$gateway
                params:
                  $ip: { get_param: cluman_management_ip }
                  $gateway: { get_param: management_net_gateway }
          - path: /root/.autoinstall.sh
            permissions: "0755"
            content:
              str_replace:
                template: |
                  #!/bin/bash
                  if [[ -d /mnt/iso ]] && [[ -f /mnt/iso/install.sh ]]; then
                  /mnt/iso/install.sh << EOF
                  $install_type
                  y
                  1
                  EOF
                  fi
                params:
                  $install_type: { get_param: cps_install_type }
        mounts:
          - [ /dev/vdb, /mnt/iso, iso9660, "auto,ro", 0, 0 ]
        runcmd:
          - str_replace:
              template: echo $ip installer >> /etc/hosts
              params:
                $ip: { get_param: cluman_internal_ip }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth0
              params:
                $cidr: { get_param: internal_net_cidr }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth1
              params:
                $cidr: { get_param: management_net_cidr }
          - ifdown eth0 && ifup eth0
          - ifdown eth1 && ifup eth1
          - echo HOSTNAME=cluman >> /etc/sysconfig/network
          - echo cluman > /etc/hostname
          - hostname cluman
          - /root/.autoinstall.sh
  lb01:
    type: OS::Nova::Server
    properties:
      availability_zone: { get_param: cps_az_1 }
      config_drive: "True"
      image: { get_param: base_vm_image_name }
      flavor: { get_param: lb01_flavor_name }
      networks:
        - port: { get_resource: lb01_internal_port }
        - port: { get_resource: lb01_management_port }
        - port: { get_resource: lb01_gx_port }
      user_data_format: RAW
      user_data: { get_resource: lb01_config }
  lb01_internal_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: internal_net_name }
      fixed_ips: [{ ip_address: { get_param: lb01_internal_ip }}]
      allowed_address_pairs:
        - ip_address: { get_param: lb_internal_vip }
  lb01_management_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: management_net_name }
      fixed_ips: [{ ip_address: { get_param: lb01_management_ip }}]
      allowed_address_pairs:
        - ip_address: { get_param: lb_management_vip }
  lb01_gx_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: gx_net_name }
      fixed_ips: [{ ip_address: { get_param: lb01_gx_ip }}]
      allowed_address_pairs:
        - ip_address: { get_param: lb_gx_vip }
  lb01_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /var/lib/cloud/instance/payload/launch-params
          - path: /etc/broadhop.profile
            content: "NODE_TYPE=lb01\n"
          - path: /etc/sysconfig/network-scripts/ifcfg-eth0
            content:
              str_replace:
                template: |
                  DEVICE=eth0
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                params:
                  $ip: { get_param: lb01_internal_ip }
          - path: /etc/sysconfig/network-scripts/ifcfg-eth1
            content:
              str_replace:
                template: |
                  DEVICE=eth1
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                  GATEWAY=$gateway
                params:
                  $ip: { get_param: lb01_management_ip }
                  $gateway: { get_param: management_net_gateway }
          - path: /etc/sysconfig/network-scripts/ifcfg-eth2
            content:
              str_replace:
                template: |
                  DEVICE=eth2
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                  GATEWAY=$gateway
                params:
                  $ip: { get_param: lb01_gx_ip }
                  $gateway: { get_param: gx_net_gateway }
        runcmd:
          - str_replace:
              template: echo $ip installer >> /etc/hosts
              params:
                $ip: { get_param: cluman_internal_ip }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth0
              params:
                $cidr: { get_param: internal_net_cidr }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth1
              params:
                $cidr: { get_param: management_net_cidr }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth2
              params:
                $cidr: { get_param: gx_net_cidr }
          - ifdown eth0 && ifup eth0
          - ifdown eth1 && ifup eth1
          - ifdown eth2 && ifup eth2
          - echo HOSTNAME=lb01 >> /etc/sysconfig/network
          - echo lb01 > /etc/hostname
          - hostname lb01
  lb02:
    type: OS::Nova::Server
    properties:
      availability_zone: { get_param: cps_az_2 }
      config_drive: "True"
      image: { get_param: base_vm_image_name }
      flavor: { get_param: lb02_flavor_name }
      networks:
        - port: { get_resource: lb02_internal_port }
        - port: { get_resource: lb02_management_port }
        - port: { get_resource: lb02_gx_port }
      user_data_format: RAW
      user_data: { get_resource: lb02_config }
  lb02_internal_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: internal_net_name }
      fixed_ips: [{ ip_address: { get_param: lb02_internal_ip }}]
      allowed_address_pairs:
        - ip_address: { get_param: lb_internal_vip }
  lb02_management_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: management_net_name }
      fixed_ips: [{ ip_address: { get_param: lb02_management_ip }}]
      allowed_address_pairs:
        - ip_address: { get_param: lb_management_vip }
  lb02_gx_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: gx_net_name }
      fixed_ips: [{ ip_address: { get_param: lb02_gx_ip }}]
      allowed_address_pairs:
        - ip_address: { get_param: lb_gx_vip }
  lb02_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /var/lib/cloud/instance/payload/launch-params
          - path: /etc/broadhop.profile
            content: "NODE_TYPE=lb02\n"
          - path: /etc/sysconfig/network-scripts/ifcfg-eth0
            content:
              str_replace:
                template: |
                  DEVICE=eth0
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                params:
                  $ip: { get_param: lb02_internal_ip }
          - path: /etc/sysconfig/network-scripts/ifcfg-eth1
            content:
              str_replace:
                template: |
                  DEVICE=eth1
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                  GATEWAY=$gateway
                params:
                  $ip: { get_param: lb02_management_ip }
                  $gateway: { get_param: management_net_gateway }
          - path: /etc/sysconfig/network-scripts/ifcfg-eth2
            content:
              str_replace:
                template: |
                  DEVICE=eth2
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                  GATEWAY=$gateway
                params:
                  $ip: { get_param: lb02_gx_ip }
                  $gateway: { get_param: gx_net_gateway }
        runcmd:
          - str_replace:
              template: echo $ip installer >> /etc/hosts
              params:
                $ip: { get_param: cluman_internal_ip }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth0
              params:
                $cidr: { get_param: internal_net_cidr }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth1
              params:
                $cidr: { get_param: management_net_cidr }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth2
              params:
                $cidr: { get_param: gx_net_cidr }
          - ifdown eth0 && ifup eth0
          - ifdown eth1 && ifup eth1
          - ifdown eth2 && ifup eth2
          - echo HOSTNAME=lb02 >> /etc/sysconfig/network
          - echo lb02 > /etc/hostname
          - hostname lb02
  pcrfclient01:
    type: OS::Nova::Server
    properties:
      availability_zone: { get_param: cps_az_1 }
      config_drive: "True"
      image: { get_param: base_vm_image_name }
      flavor: { get_param: pcrfclient01_flavor_name }
      networks:
        - port: { get_resource: pcrfclient01_internal_port }
        - port: { get_resource: pcrfclient01_management_port }
      block_device_mapping:
        - device_name: vdb
          volume_id: { get_param: svn01_volume_id }
      user_data_format: RAW
      user_data: { get_resource: pcrfclient01_config }
  pcrfclient01_internal_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: internal_net_name }
      fixed_ips: [{ ip_address: { get_param: pcrfclient01_internal_ip }}]
  pcrfclient01_management_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: management_net_name }
      fixed_ips: [{ ip_address: { get_param: pcrfclient01_management_ip }}]
  pcrfclient01_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /var/lib/cloud/instance/payload/launch-params
          - path: /etc/broadhop.profile
            content: "NODE_TYPE=pcrfclient01\n"
          - path: /etc/sysconfig/network-scripts/ifcfg-eth0
            content:
              str_replace:
                template: |
                  DEVICE=eth0
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                params:
                  $ip: { get_param: pcrfclient01_internal_ip }
          - path: /etc/sysconfig/network-scripts/ifcfg-eth1
            content:
              str_replace:
                template: |
                  DEVICE=eth1
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                  GATEWAY=$gateway
                params:
                  $ip: { get_param: pcrfclient01_management_ip }
                  $gateway: { get_param: management_net_gateway }
        runcmd:
          - str_replace:
              template: echo $ip installer >> /etc/hosts
              params:
                $ip: { get_param: cluman_internal_ip }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth0
              params:
                $cidr: { get_param: internal_net_cidr }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth1
              params:
                $cidr: { get_param: management_net_cidr }
          - ifdown eth0 && ifup eth0
          - ifdown eth1 && ifup eth1
          - echo HOSTNAME=pcrfclient01 >> /etc/sysconfig/network
          - echo pcrfclient01 > /etc/hostname
          - hostname pcrfclient01
  pcrfclient02:
    type: OS::Nova::Server
    properties:
      availability_zone: { get_param: cps_az_2 }
      config_drive: "True"
      image: { get_param: base_vm_image_name }
      flavor: { get_param: pcrfclient02_flavor_name }
      networks:
        - port: { get_resource: pcrfclient02_internal_port }
        - port: { get_resource: pcrfclient02_management_port }
      block_device_mapping:
        - device_name: vdb
          volume_id: { get_param: svn02_volume_id }
      user_data_format: RAW
      user_data: { get_resource: pcrfclient02_config }
  pcrfclient02_internal_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: internal_net_name }
      fixed_ips: [{ ip_address: { get_param: pcrfclient02_internal_ip }}]
  pcrfclient02_management_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: management_net_name }
      fixed_ips: [{ ip_address: { get_param: pcrfclient02_management_ip }}]
  pcrfclient02_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /var/lib/cloud/instance/payload/launch-params
          - path: /etc/broadhop.profile
            content: "NODE_TYPE=pcrfclient02\n"
          - path: /etc/sysconfig/network-scripts/ifcfg-eth0
            content:
              str_replace:
                template: |
                  DEVICE=eth0
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                params:
                  $ip: { get_param: pcrfclient02_internal_ip }
          - path: /etc/sysconfig/network-scripts/ifcfg-eth1
            content:
              str_replace:
                template: |
                  DEVICE=eth1
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                  GATEWAY=$gateway
                params:
                  $ip: { get_param: pcrfclient02_management_ip }
                  $gateway: { get_param: management_net_gateway }
        runcmd:
          - str_replace:
              template: echo $ip installer >> /etc/hosts
              params:
                $ip: { get_param: cluman_internal_ip }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth0
              params:
                $cidr: { get_param: internal_net_cidr }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth1
              params:
                $cidr: { get_param: management_net_cidr }
          - ifdown eth0 && ifup eth0
          - ifdown eth1 && ifup eth1
          - echo HOSTNAME=pcrfclient02 >> /etc/sysconfig/network
          - echo pcrfclient02 > /etc/hostname
          - hostname pcrfclient02
  qns01:
    type: OS::Nova::Server
    properties:
      availability_zone: { get_param: cps_az_1 }
      config_drive: "True"
      image: { get_param: base_vm_image_name }
      flavor: { get_param: qns_flavor_name }
      networks:
        - port: { get_resource: qns01_internal_port }
      user_data_format: RAW
      user_data: { get_resource: qns01_config }
  qns01_internal_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: internal_net_name }
      fixed_ips: [{ ip_address: { get_param: qns01_internal_ip }}]
  qns01_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /var/lib/cloud/instance/payload/launch-params
          - path: /etc/broadhop.profile
            content: "NODE_TYPE=qns01\n"
          - path: /etc/sysconfig/network-scripts/ifcfg-eth0
            content:
              str_replace:
                template: |
                  DEVICE=eth0
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                params:
                  $ip: { get_param: qns01_internal_ip }
        runcmd:
          - str_replace:
              template: echo $ip installer >> /etc/hosts
              params:
                $ip: { get_param: cluman_internal_ip }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth0
              params:
                $cidr: { get_param: internal_net_cidr }
          - ifdown eth0 && ifup eth0
          - echo HOSTNAME=qns01 >> /etc/sysconfig/network
          - echo qns01 > /etc/hostname
          - hostname qns01
  qns02:
    type: OS::Nova::Server
    properties:
      availability_zone: { get_param: cps_az_1 }
      config_drive: "True"
      image: { get_param: base_vm_image_name }
      flavor: { get_param: qns_flavor_name }
      networks:
        - port: { get_resource: qns02_internal_port }
      user_data_format: RAW
      user_data: { get_resource: qns02_config }
  qns02_internal_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: internal_net_name }
      fixed_ips: [{ ip_address: { get_param: qns02_internal_ip }}]
  qns02_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /var/lib/cloud/instance/payload/launch-params
          - path: /etc/broadhop.profile
            content: "NODE_TYPE=qns02\n"
          - path: /etc/sysconfig/network-scripts/ifcfg-eth0
            content:
              str_replace:
                template: |
                  DEVICE=eth0
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                params:
                  $ip: { get_param: qns02_internal_ip }
        runcmd:
          - str_replace:
              template: echo $ip installer >> /etc/hosts
              params:
                $ip: { get_param: cluman_internal_ip }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth0
              params:
                $cidr: { get_param: internal_net_cidr }
          - ifdown eth0 && ifup eth0
          - echo HOSTNAME=qns02 >> /etc/sysconfig/network
          - echo qns02 > /etc/hostname
          - hostname qns02
  qns03:
    type: OS::Nova::Server
    properties:
      availability_zone: { get_param: cps_az_2 }
      config_drive: "True"
      image: { get_param: base_vm_image_name }
      flavor: { get_param: qns_flavor_name }
      networks:
        - port: { get_resource: qns03_internal_port }
      user_data_format: RAW
      user_data: { get_resource: qns03_config }
  qns03_internal_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: internal_net_name }
      fixed_ips: [{ ip_address: { get_param: qns03_internal_ip }}]
  qns03_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /var/lib/cloud/instance/payload/launch-params
          - path: /etc/broadhop.profile
            content: "NODE_TYPE=qns03\n"
          - path: /etc/sysconfig/network-scripts/ifcfg-eth0
            content:
              str_replace:
                template: |
                  DEVICE=eth0
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                params:
                  $ip: { get_param: qns03_internal_ip }
        runcmd:
          - str_replace:
              template: echo $ip installer >> /etc/hosts
              params:
                $ip: { get_param: cluman_internal_ip }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth0
              params:
                $cidr: { get_param: internal_net_cidr }
          - ifdown eth0 && ifup eth0
          - echo HOSTNAME=qns03 >> /etc/sysconfig/network
          - echo qns03 > /etc/hostname
          - hostname qns03
  qns04:
    type: OS::Nova::Server
    properties:
      availability_zone: { get_param: cps_az_2 }
      config_drive: "True"
      image: { get_param: base_vm_image_name }
      flavor: { get_param: qns_flavor_name }
      networks:
        - port: { get_resource: qns04_internal_port }
      user_data_format: RAW
      user_data: { get_resource: qns04_config }
  qns04_internal_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: internal_net_name }
      fixed_ips: [{ ip_address: { get_param: qns04_internal_ip }}]
  qns04_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /var/lib/cloud/instance/payload/launch-params
          - path: /etc/broadhop.profile
            content: "NODE_TYPE=qns04\n"
          - path: /etc/sysconfig/network-scripts/ifcfg-eth0
            content:
              str_replace:
                template: |
                  DEVICE=eth0
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                params:
                  $ip: { get_param: qns04_internal_ip }
        runcmd:
          - str_replace:
              template: echo $ip installer >> /etc/hosts
              params:
                $ip: { get_param: cluman_internal_ip }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth0
              params:
                $cidr: { get_param: internal_net_cidr }
          - ifdown eth0 && ifup eth0
          - echo HOSTNAME=qns04 >> /etc/sysconfig/network
          - echo qns04 > /etc/hostname
          - hostname qns04
  sessionmgr01:
    type: OS::Nova::Server
    properties:
      availability_zone: { get_param: cps_az_1 }
      config_drive: "True"
      image: { get_param: base_vm_image_name }
      flavor: { get_param: sessionmgr_flavor_name }
      networks:
        - port: { get_resource: sessionmgr01_internal_port }
        - port: { get_resource: sessionmgr01_management_port }
      block_device_mapping:
        - device_name: vdb
          volume_id: { get_param: mongo01_volume_id }
      user_data_format: RAW
      user_data: { get_resource: sessionmgr01_config }
  sessionmgr01_internal_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: internal_net_name }
      fixed_ips: [{ ip_address: { get_param: sessionmgr01_internal_ip }}]
  sessionmgr01_management_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: management_net_name }
      fixed_ips: [{ ip_address: { get_param: sessionmgr01_management_ip }}]
  sessionmgr01_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /var/lib/cloud/instance/payload/launch-params
          - path: /etc/broadhop.profile
            content: "NODE_TYPE=sessionmgr01\n"
          - path: /etc/sysconfig/network-scripts/ifcfg-eth0
            content:
              str_replace:
                template: |
                  DEVICE=eth0
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                params:
                  $ip: { get_param: sessionmgr01_internal_ip }
          - path: /etc/sysconfig/network-scripts/ifcfg-eth1
            content:
              str_replace:
                template: |
                  DEVICE=eth1
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                  GATEWAY=$gateway
                params:
                  $ip: { get_param: sessionmgr01_management_ip }
                  $gateway: { get_param: management_net_gateway }
        runcmd:
          - str_replace:
              template: echo $ip installer >> /etc/hosts
              params:
                $ip: { get_param: cluman_internal_ip }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth0
              params:
                $cidr: { get_param: internal_net_cidr }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth1
              params:
                $cidr: { get_param: management_net_cidr }
          - ifdown eth0 && ifup eth0
          - ifdown eth1 && ifup eth1
          - echo HOSTNAME=sessionmgr01 >> /etc/sysconfig/network
          - echo sessionmgr01 > /etc/hostname
          - hostname sessionmgr01
  sessionmgr02:
    type: OS::Nova::Server
    properties:
      availability_zone: { get_param: cps_az_2 }
      config_drive: "True"
      image: { get_param: base_vm_image_name }
      flavor: { get_param: sessionmgr_flavor_name }
      networks:
        - port: { get_resource: sessionmgr02_internal_port }
        - port: { get_resource: sessionmgr02_management_port }
      block_device_mapping:
        - device_name: vdb
          volume_id: { get_param: mongo02_volume_id }
      user_data_format: RAW
      user_data: { get_resource: sessionmgr02_config }
  sessionmgr02_internal_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: internal_net_name }
      fixed_ips: [{ ip_address: { get_param: sessionmgr02_internal_ip }}]
  sessionmgr02_management_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: management_net_name }
      fixed_ips: [{ ip_address: { get_param: sessionmgr02_management_ip }}]
  sessionmgr02_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /var/lib/cloud/instance/payload/launch-params
          - path: /etc/broadhop.profile
            content: "NODE_TYPE=sessionmgr02\n"
          - path: /etc/sysconfig/network-scripts/ifcfg-eth0
            content:
              str_replace:
                template: |
                  DEVICE=eth0
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                params:
                  $ip: { get_param: sessionmgr02_internal_ip }
          - path: /etc/sysconfig/network-scripts/ifcfg-eth1
            content:
              str_replace:
                template: |
                  DEVICE=eth1
                  BOOTPROTO=none
                  NM_CONTROLLED=no
                  IPADDR=$ip
                  GATEWAY=$gateway
                params:
                  $ip: { get_param: sessionmgr02_management_ip }
                  $gateway: { get_param: management_net_gateway }
        runcmd:
          - str_replace:
              template: echo $ip installer >> /etc/hosts
              params:
                $ip: { get_param: cluman_internal_ip }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth0
              params:
                $cidr: { get_param: internal_net_cidr }
          - str_replace:
              template: ipcalc -m $cidr >> /etc/sysconfig/network-scripts/ifcfg-eth1
              params:
                $cidr: { get_param: management_net_cidr }
          - ifdown eth0 && ifup eth0
          - ifdown eth1 && ifup eth1
          - echo HOSTNAME=sessionmgr02 >> /etc/sysconfig/network
          - echo sessionmgr02 > /etc/hostname
          - hostname sessionmgr02
Before beginning, verify you have populated your information in the environment (.env) file and heat template (.yaml) file and loaded both files on the control node.
| Step 1 | Run the following command on the control node at the location where your environment and heat template files are located: |
| Step 2 | Add/assign the heat stack owner to the core tenant user: |
| Step 3 | Verify that no existing CPS stack is present:  | 
| Step 4 | Create the stack using the heat template (hot-cps.yaml) and environment file (hot-cps.env) you populated earlier.  | 
| Step 5 | Check the status of the heat stack; CREATE_COMPLETE is reported when the stack is finished. (A hedged command sketch appears after this procedure.) |
| Step 6 | Wait approximately 10 minutes for the Cluster Manager VM to be deployed, then check the readiness status of the Cluster Manager VM using the following API: GET http://<Cluster Manager IP>:8458/api/system/status/cluman Refer to /api/system/status/cluman for more information. When this API responds that the Cluster Manager VM is in a ready state, continue with the installation. Refer also to the /var/log/cloud-init-output.log on the Cluster Manager VM for deployment details. |
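A minimal sketch of Steps 3 through 5, assuming the OpenStack (or heat) client is installed on the control node and that hot-cps.env and hot-cps.yaml are in the current directory; the stack name cps is only an example.

# Hypothetical example -- the stack name "cps" is illustrative.
openstack stack list                                        # verify that no CPS stack exists
openstack stack create -e hot-cps.env -t hot-cps.yaml cps   # create the stack
openstack stack list                                        # repeat until the status shows CREATE_COMPLETE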
The following steps outline how to create a consolidated CPS configuration file and use the CPS platform orchestration APIs to deploy the CPS VMs on OpenStack:
| Step 1 | Create a consolidated CPS configuration file. This file contains all the information necessary to deploy VMs in the CPS cluster, including a valid CPS license key. Contact your Cisco representative to receive the CPS license key for your deployment. |
| Step 2 | Load the consolidated configuration file you created in Step 1 using the following API: POST http://<Cluster Manager IP>:8458/api/system/config/ Refer to /api/system/config/ for more information. (A hedged curl example appears after this procedure.) |
| Step 3 | (Optional) To confirm the configuration was loaded properly onto the Cluster Manager VM, perform a GET with the same API: GET http://<Cluster Manager IP>:8458/api/system/config/ |
| Step 4 | Apply the configuration using the following API: POST http://<Cluster Manager IP>:8458/api/system/config/action/apply Refer to /api/system/config/ for more information. This API applies the CPS configuration file, triggers the Cluster Manager VM to deploy and bring up all CPS VMs, and performs all post-installation steps. (A hedged curl example appears after this procedure.) |
| Step 5 | Run  |
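Hedged curl examples for Steps 2 and 4; the configuration file name CPS_config_yaml, the Content-Type header, and the management IP are assumptions to be replaced with your own values.

# Step 2: load the consolidated configuration file (file name is an example).
curl -v -X POST -H "Content-Type: application/yaml" \
  --data-binary @CPS_config_yaml \
  http://172.18.11.151:8458/api/system/config/

# Step 4: apply the loaded configuration.
curl -v -X POST http://172.18.11.151:8458/api/system/config/action/apply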
To enable the Disable Root SSH Login feature, check whether a user with UID 1000 exists on the Cluster Manager.
Use the following command to check whether a user with UID 1000 exists:
cat /etc/passwd | grep x:1000
If a user with uid 1000 exists on the Cluster Manager, change the uid on the Cluster Manager by executing the following command:
usermod -u <new-uid> <user-name-with-uid-as-1000>
This is done because the Disable Root SSH Login feature creates a user with UID 1000.
| Step 1 | To monitor the status of the deployment, use the following API: GET http://<Cluster Manager IP>:8458/api/system/config/status Refer to /api/system/config/status for more information. | 
| Step 2 | After the deployment has completed, verify the readiness of the entire CPS cluster using the following API: GET http://<Cluster Manager IP>:8458/api/system/status/cps Refer to /api/system/status/cps for more information. | 
| Step 3 | Connect to the Cluster Manager and issue the following command to run a set of diagnostics and display the current state of the system. |
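Hedged curl examples for the status checks in Steps 1 and 2; the management IP is the sample address used throughout this guide, and the polling interval is arbitrary.

# Step 1: poll the deployment status until it reports completion.
watch -n 30 'curl -s http://172.18.11.151:8458/api/system/config/status'

# Step 2: after deployment, check overall cluster readiness.
curl -s http://172.18.11.151:8458/api/system/status/cps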
|  Important | After the validation is complete, take a backup of the Cluster Manager configuration. For more information on taking the backup, refer to the CPS Backup and Restore Guide. If the Cluster Manager becomes corrupted, this backup can be used to recover it. |
CPS clusters deployed using the orchestration APIs report the following licensing errors in /var/log/broadhop/qns.log on the OAM (pcrfclient) VMs:
[LicenseManagerTimer] ERROR c.b.licensing.impl.LicenseManager - Unable to load the license file.  Server is not licensed!
This error can be ignored.
CPS supports single root I/O virtualization (SR-IOV) on Intel NIC adapters.
CPS also supports bonding of SR-IOV sub-interfaces for seamless traffic switchover.
The Intel SR-IOV implementation includes anti-spoofing support that will not allow MAC addresses other than the one configured in the VF to communicate. As a result, the active failover mac policy is used.
To support seamless failover of interfaces, the VLAN interfaces must be created directly on top of the VF interfaces (for example, eth0.1003 and eth1.1003) and the interfaces are bonded (bond01003). If VLAN interfaces are created on top of a bond, their MAC addresses will not follow the bond when a failover occurs and the old MAC will be used for the new active interface.
If all the guest VM interfaces are SR-IOV interfaces, then ifrename.yaml is not required.
If multiple drivers are used, then the ifrename.yaml file must be updated with the corresponding driver. For example, i40evf for XL710.
Bonding can be created on two different virtual functions. The virtual functions can be created from the same physical function or from different physical functions on the host, based on the requirements.
|  Note | Before deploying the VM, validate the YAML file format and content with a YAML validator. |
The following sample configuration shows the bonding of two interfaces using a single IP address:
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
cat /proc/net/bonding/bond01003
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup) (fail_over_mac active)
Primary Slave: None
Currently Active Slave: eth2.1003
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth2.1003
MII Status: up
Speed: 40000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: fa:16:3e:c0:eb:0f
Slave queue ID: 0
Slave Interface: eth21.1003
MII Status: up
Speed: 40000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: fa:16:3e:77:30:2d
Slave queue ID: 0
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
cat /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
USRCTL=no
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::
cat /etc/sysconfig/network-scripts/ifcfg-eth21
DEVICE=eth21
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
USRCTL=no
::::::::::::::::::::::::::::::::::::::::::::::::::::::::
cat /etc/sysconfig/network-scripts/ifcfg-eth2.1003
DEVICE=eth2.1003
ONBOOT=yes
MASTER=bond01003
BOOTPROTO=none
NM_CONTROLLED=no
USRCTL=no
SLAVE=yes
VLAN=yes
PHYSDEV=eth2
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
cat /etc/sysconfig/network-scripts/ifcfg-eth21.1003
DEVICE=eth21.1003
ONBOOT=yes
MASTER=bond01003
BOOTPROTO=none
NM_CONTROLLED=no
USRCTL=no
SLAVE=yes
VLAN=yes
PHYSDEV=eth21
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
cat /etc/sysconfig/network-scripts/ifcfg-bond01003
DEVICE=bond01003
BONDING_OPTS="mode=active-backup miimon=100 fail_over_mac=1"
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
NM_CONTROLLED=no
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV6INIT=no
IPADDR=172.X.X.X
NETMASK=255.255.255.X
NETWORK=172.X.X.X
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
ONBOOT=yes
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::
CPS instances require that network interfaces be assigned IP addresses statically. The names of network interfaces (eth0, eth1, and so on) are assumed to reflect the network interfaces representing neutron ports passed to OpenStack nova boot or the heat template, in that order. In this case, eth0 is assumed to reflect the first neutron port, eth1 the second, and so on.
For CPS deployments on OpenStack which use SR-IOV, often two or more network drivers are used. When more than one network driver is used, network interface names can become unpredictable and can change based on the order in which the network drivers are loaded into the kernel.
|  Note | Before deploying the VM, validate the YAML file format and content with a YAML validator. |
The following section describes how to map a network interface of a given network driver type to its correct expected name in the guest OS.
Requirements:
Correct IP address assignment requires that the names used in the network interfaces file match the names of the network interfaces in the guest OS.
The order of neutron ports of a given type (non-SR-IOV or SR-IOV) in nova-boot or heat template directly maps to the order of the PCI device slot of the associated network interfaces in the guest OS.
The mapping between the network interface of a given network driver type and network driver name are passed during the creation of an instance through the cloud-init configuration.
The expected network interface name configuration is passed into the CPS instance's guest OS using a YAML-format configuration file located at /var/lib/cloud/instance/payload/ifrename.yaml.
The file should have a section for each driver type and list the interfaces for that driver type with the following information:
Rank order (0, 1, 2…) for the interface among other interfaces of the same driver type, as is specified in the nova boot command/heat template
Expected name of the interface (eth0, eth1, eth2 etc.)
path: /var/lib/cloud/instance/payload/ifrename.yaml
  encoding: ascii
  owner: root:root
  permissions: '0644'
  content: |
    ---
      virtio_net:
        0 : eth0
        1 : eth1
      i40evf:
        0 : eth2
        1 : eth3
Driver names for SR-IOV ports can be determined by checking the interface card vendor documentation. For regular virtio ports, the driver name is 'virtio_net'.
This ifrename.yaml file must be added to the existing write_files: section of the cloud-init configuration for each CPS VM.
The configuration file above instructs cloud-init to create the file ifrename.yaml at /var/lib/cloud/instance/payload, owned by root, with permissions of 644, and with contents as specified in the "content:" section. In this example:
the first SR-IOV neutron port (managed by the 'i40evf' driver) is mapped to eth2.
the first non-SR-IOV port (managed by the 'virtio_net' driver) is mapped to eth0.
the second non-SR-IOV port (managed by the 'virtio_net' driver) is mapped to eth1.
Regardless of the order in which neutron ports are passed, or order in which network drivers are loaded, this configuration file specifies which network interface name should go to which network interface.
Using the following steps, you can check and verify the host system configuration for SR-IOV.
| Step 1 | From the control node, verify that the SR-IOV NIC agent is running on the compute node. openstack network agent show 1956f725-b1fc-4e95-837c-d61e701d72e0 |
| Step 2 | Use the following command to find out how many XL710 interfaces are available.  | 
| Step 3 | Find the interface name corresponding to the PCIe address listed by the above command, and its MAC address. Interface Name: enp94s0f0  |
| Step 4 | Find out the driver of the interface.  | 
| Step 5 | Find the total number of VFs supported and configured.  |
| Step 6 | List the virtual functions that belong to the particular interface. (Example commands for Steps 2 through 6 appear after this procedure.)  |
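The commands behind Steps 2 through 6 are not shown above; the following are standard Linux commands that can be used for these checks (the PCIe address and the interface name enp94s0f0 are the examples from Step 3).

# Step 2: count the XL710 interfaces visible on the PCIe bus.
lspci | grep -i XL710

# Step 3: map a PCIe address to its interface name, then show the MAC address.
ls /sys/bus/pci/devices/0000:5e:00.0/net/
ip link show enp94s0f0

# Step 4: identify the driver bound to the interface.
ethtool -i enp94s0f0

# Step 5: virtual functions supported and currently configured.
cat /sys/class/net/enp94s0f0/device/sriov_totalvfs
cat /sys/class/net/enp94s0f0/device/sriov_numvfs

# Step 6: list the VFs that belong to this physical interface.
ip link show enp94s0f0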
|  Note | The IP addresses used here are just examples. Based on your requirements and environment, you can configure IP addresses and network names accordingly. |
| Step 1 | Create the SR-IOV network. |
| Step 2 | Create subnets under the SR-IOV network. |
| Step 3 | Create ports to attach to the VM instance. For example, cm-port-int1 and cm-port-int2 for the Internal network: cm-port-int1 (sriov_net-1), cm-port-int2 (sriov_net-2). (A hedged sketch of these commands appears after this procedure.) |
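A hedged sketch of Steps 1 through 3 using the unified OpenStack client; the physical network name physnet_sriov, VLAN ID, CIDR, and IP address are placeholders that must match your environment.

# Hypothetical example -- provider network, VLAN, CIDR, and IPs are placeholders.
openstack network create --provider-network-type vlan \
  --provider-physical-network physnet_sriov --provider-segment 1003 sriov_net-1

openstack subnet create --network sriov_net-1 \
  --subnet-range 172.16.182.0/24 --no-dhcp sriov_subnet-1

# SR-IOV ports must be created with vnic-type "direct".
openstack port create --network sriov_net-1 --vnic-type direct \
  --fixed-ip subnet=sriov_subnet-1,ip-address=172.16.182.21 cm-port-int1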
| Step 1 | Create cloud configuration files for SR-IOV for each VM to be deployed (xxx-cloud.cfg). These configurations are used to define the OpenStack parameters for each CPS VM. |
| Step 2 | Deploy each CPS VM with the following nova boot command. The following example shows the nova boot commands to deploy a Cluster Manager (cluman), two OAMs (pcrfclients), two sessionmgrs, two Policy Directors (load balancers), and four Policy Server (qns) VMs. (An illustrative sketch using pre-created SR-IOV ports appears after this procedure.) |
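When pre-created SR-IOV ports are used, each VM is typically booted with --nic port-id references rather than net-id values; the following cluman invocation is a sketch only, reusing the cm-port-int1 and cm-port-int2 ports from the previous section.

# Hypothetical example -- port UUIDs and the ISO volume UUID are placeholders.
nova boot --config-drive true \
  --user-data=cluman-cloud.cfg \
  --image "base_vm" --flavor "cluman" \
  --nic port-id="<cm-port-int1-uuid>" \
  --nic port-id="<cm-port-int2-uuid>" \
  --block-device-mapping "/dev/vdb=<cps-iso-volume-uuid>:::0" cluman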
The following parameters must be configured in the /etc/broadhop/qns.conf file when the internal network is enabled with SR-IOV and bonding.
networkguard.tcp.local: This parameter is used to bring up the Diameter stack on the Policy Director (LB) VMs.
com.broadhop.q.if: This parameter is used to create the ZMQ connection between the Policy Server (QNS) and Policy Director (LB) VMs.
clusterLBIF: For GR deployments, this parameter is used to create the ZMQ connection between the local and remote Policy Director (LB) VMs.
For more information on qns.conf file, contact your Cisco Account representative.
Example: If bond01003 is the internal network bond, then the qns.conf file needs to be updated with the following information:
-Dnetworkguard.tcp.local=bond01003
-Dcom.broadhop.q.if=bond01003
-DclusterLBIF=bond01003

The cloud configuration file needs to be created for each VM separately, based on its interface and network configuration. The sample configuration files for the load balancer and qns VMs are as follows:
The following sections show an example Cluster Manager cloud configuration (cluman-cloud.cfg) with SRIOV and bonding.
                              
|  Note | Before deploying the VM, validate the YAML file format and content with a YAML validator. |
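For example, if PyYAML is available on the machine where you prepare the files, a quick local syntax check can be done with a one-liner such as the following (the file name is a placeholder):

python -c "import yaml, sys; yaml.safe_load(open(sys.argv[1]))" lb01-cloud-config.cfg && echo "YAML OK"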
|  Note | Use  For Cluman/Arbiter VM, include  | 
Lb01-cloud-config.cfg
#cloud-config
write_files:
 - path: /etc/sysconfig/network-scripts/ifcfg-eth0
   encoding: ascii
   content: |
     DEVICE=eth0
     BOOTPROTO=none
     TYPE=Ethernet
     NM_CONTROLLED=no
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth1
   encoding: ascii
   content: |
     DEVICE=eth1
     BOOTPROTO=none
     TYPE=Ethernet
     NM_CONTROLLED=no
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth0.1003
   encoding: ascii
   content: |
     DEVICE=eth0.1003
     ONBOOT=yes
     MASTER=bond01003
     BOOTPROTO=none
     NM_CONTROLLED=no
     USERCTL=no
     SLAVE=yes
     VLAN=yes
     PHYSDEV=eth0
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth1.1003
   encoding: ascii
   content: |
     DEVICE=eth1.1003
     ONBOOT=yes
     MASTER=bond01003
     BOOTPROTO=none
     NM_CONTROLLED=no
     USERCTL=no
     SLAVE=yes
     VLAN=yes
     PHYSDEV=eth1
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-bond01003
   encoding: ascii
   content: |
     DEVICE=bond01003
     BONDING_OPTS="mode=active-backup miimon=100 fail_over_mac=1"
     TYPE=Bond
     BONDING_MASTER=yes
     BOOTPROTO=none
     NM_CONTROLLED=no
     DEFROUTE=yes
     PEERDNS=yes
     PEERROUTES=yes
     IPV6INIT=no
     IPADDR=172.16.182.23
     NETMASK=255.255.255.0
     NETWORK=172.16.182.0
     IPV4_FAILURE_FATAL=no
     IPV6INIT=no
     IPV6_AUTOCONF=yes
     IPV6_DEFROUTE=yes
     IPV6_PEERDNS=yes
     IPV6_PEERROUTES=yes
     IPV6_FAILURE_FATAL=no
     ONBOOT=yes
   owner: root:root
 - path: /etc/sysconfig/network-scripts/ifcfg-eth2
   encoding: ascii
   content: |
     DEVICE=eth2
     BOOTPROTO=none
     TYPE=Ethernet
     NM_CONTROLLED=no
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth3
   encoding: ascii
   content: |
     DEVICE=eth3
     BOOTPROTO=none
     TYPE=Ethernet
     NM_CONTROLLED=no
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth2.3168
   encoding: ascii
   content: |
     DEVICE=eth2.3168
     ONBOOT=yes
     MASTER=bond03168
     BOOTPROTO=none
     NM_CONTROLLED=no
     USERCTL=no
     SLAVE=yes
     VLAN=yes
     PHYSDEV=eth2
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth3.3168
   encoding: ascii
   content: |
     DEVICE=eth3.3168
     ONBOOT=yes
     MASTER=bond03168
     BOOTPROTO=none
     NM_CONTROLLED=no
     USERCTL=no
     SLAVE=yes
     VLAN=yes
     PHYSDEV=eth3
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-bond03168
   encoding: ascii
   content: |
     DEVICE=bond03168
     BONDING_OPTS="mode=active-backup miimon=100 fail_over_mac=1"
     TYPE=Bond
     BONDING_MASTER=yes
     BOOTPROTO=none
     NM_CONTROLLED=no
     DEFROUTE=yes
     PEERDNS=yes
     PEERROUTES=yes
     IPV6INIT=no
     IPADDR=10.81.68.168
     NETMASK=255.255.255.0
     GATEWAY=10.81.68.1
     NETWORK=10.81.68.0
     IPV4_FAILURE_FATAL=no
     IPV6INIT=no
     IPV6_AUTOCONF=yes
     IPV6_DEFROUTE=yes
     IPV6_PEERDNS=yes
     IPV6_PEERROUTES=yes
     IPV6_FAILURE_FATAL=no
     ONBOOT=yes
   owner: root:root
 - path: /etc/sysconfig/network-scripts/ifcfg-eth4
   encoding: ascii
   content: |
     DEVICE=eth4
     BOOTPROTO=none
     TYPE=Ethernet
     NM_CONTROLLED=no
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth5
   encoding: ascii
   content: |
     DEVICE=eth5
     BOOTPROTO=none
     TYPE=Ethernet
     NM_CONTROLLED=no
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth4.1004
   encoding: ascii
   content: |
     DEVICE=eth4.1004
     ONBOOT=yes
     MASTER=bond01004
     BOOTPROTO=none
     NM_CONTROLLED=no
     USERCTL=no
     SLAVE=yes
     VLAN=yes
     PHYSDEV=eth4
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth5.1004
   encoding: ascii
   content: |
     DEVICE=eth5.1004
     ONBOOT=yes
     MASTER=bond01004
     BOOTPROTO=none
     NM_CONTROLLED=no
     USERCTL=no
     SLAVE=yes
     VLAN=yes
     PHYSDEV=eth5
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-bond01004
   encoding: ascii
   content: |
     DEVICE=bond01004
     BONDING_OPTS="mode=active-backup miimon=100 fail_over_mac=1"
     TYPE=Bond
     BONDING_MASTER=yes
     BOOTPROTO=none
     NM_CONTROLLED=no
     DEFROUTE=yes
     PEERDNS=yes
     PEERROUTES=yes
     IPV6INIT=no
     IPADDR=172.16.183.22
     NETMASK=255.255.255.0
     NETWORK=172.16.183.0
     IPV4_FAILURE_FATAL=no
     IPV6INIT=no
     IPV6_AUTOCONF=yes
     IPV6_DEFROUTE=yes
     IPV6_PEERDNS=yes
     IPV6_PEERROUTES=yes
     IPV6_FAILURE_FATAL=no
     ONBOOT=yes
   owner: root:root
 - path: /var/lib/cloud/instance/payload/launch-params
   encoding: ascii
   owner: root:root
   permissions: '0644'
 - path: /etc/broadhop.profile
   encoding: ascii
   owner: root:root
   permissions: '0644'
   content: "NODE_TYPE=lb01\n"
runcmd:
 - ifdown eth0
 - ifdown eth1
 - ifdown eth2
 - ifdown eth3
 - ifdown eth4
 - ifdown eth5
 - ifdown eth0.1003
 - ifdown eth1.1003
 - ifdown eth2.3168
 - ifdown eth3.3168
 - ifdown eth4.1004
 - ifdown eth5.1004
 - ifdown bond01003
 - ifdown bond03168
 - ifdown bond01004
 - echo 172.16.182.22 installer >> /etc/hosts
 - ifup eth0
 - ifup eth1
 - ifup eth2
 - ifup eth3
 - ifup eth4
 - ifup eth5
 - ifup eth0.1003
 - ifup eth1.1003
 - ifup eth2.3168
 - ifup eth3.3168
 - ifup eth4.1004
 - ifup eth5.1004
 - ifup bond01003
 - ifup bond03168
 - ifup bond01004
 - sed -i '/^HOSTNAME=/d' /etc/sysconfig/network && echo HOSTNAME=lb01 >> /etc/sysconfig/network
 - echo lb01 > /etc/hostname
 - hostname lb01
Qns01-cloud-config.cfg
#cloud-config
write_files:
 - path: /etc/sysconfig/network-scripts/ifcfg-eth0
   encoding: ascii
   content: |
     DEVICE=eth0
     BOOTPROTO=none
     TYPE=Ethernet
     NM_CONTROLLED=no
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth1
   encoding: ascii
   content: |
     DEVICE=eth1
     BOOTPROTO=none
     TYPE=Ethernet
     NM_CONTROLLED=no
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth0.1003
   encoding: ascii
   content: |
     DEVICE=eth0.1003
     ONBOOT=yes
     MASTER=bond01003
     BOOTPROTO=none
     NM_CONTROLLED=no
     USERCTL=no
     SLAVE=yes
     VLAN=yes
     PHYSDEV=eth0
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-eth1.1003
   encoding: ascii
   content: |
     DEVICE=eth1.1003
     ONBOOT=yes
     MASTER=bond01003
     BOOTPROTO=none
     NM_CONTROLLED=no
     USERCTL=no
     SLAVE=yes
     VLAN=yes
     PHYSDEV=eth1
   owner: root:root
   permissions: '0644'
 - path: /etc/sysconfig/network-scripts/ifcfg-bond01003
   encoding: ascii
   content: |
     DEVICE=bond01003
     BONDING_OPTS="mode=active-backup miimon=100 fail_over_mac=1"
     TYPE=Bond
     BONDING_MASTER=yes
     BOOTPROTO=none
     NM_CONTROLLED=no
     DEFROUTE=yes
     PEERDNS=yes
     PEERROUTES=yes
     IPV6INIT=no
     IPADDR=172.16.182.29
     NETMASK=255.255.255.0
     NETWORK=172.16.182.0
     GATEWAY=172.16.182.1
     IPV4_FAILURE_FATAL=no
     IPV6INIT=no
     IPV6_AUTOCONF=yes
     IPV6_DEFROUTE=yes
     IPV6_PEERDNS=yes
     IPV6_PEERROUTES=yes
     IPV6_FAILURE_FATAL=no
     ONBOOT=yes
   owner: root:root
 - path: /var/lib/cloud/instance/payload/launch-params
   encoding: ascii
   owner: root:root
   permissions: '0644'
 - path: /etc/broadhop.profile
   encoding: ascii
   owner: root:root
   permissions: '0644'
   content: "NODE_TYPE=qns01\n"
runcmd:
 - ifdown eth0
 - ifdown eth1
 - ifdown eth0.1003
 - ifdown eth1.1003
 - ifdown bond01003
 - echo 172.16.182.22 installer >> /etc/hosts
 - ifup eth0
 - ifup eth1
 - ifup eth0.1003
 - ifup eth1.1003
 - ifup bond01003
 - sed -i '/^HOSTNAME=/d' /etc/sysconfig/network && echo HOSTNAME=qns01 >> /etc/sysconfig/network
 - echo qns01 > /etc/hostname
 - hostname qns01
Some customers may need to customize the configuration for their deployment. When customizing the CPS configuration, it is important to make the customization in a way that does not impact the normal behavior for VM deployment and redeployment, upgrades/migration, and rollbacks.
For this reason, customizations should be placed in the /etc/puppet/env_config directory. Files within this directory are given special treatment for VM deployment, upgrade, migrations, and rollback operations.
|  Note | If system configurations are manually changed in the VM itself after the VM has been deployed, these configurations will be overridden if that VM is redeployed. | 
The following section describes the steps necessary to make changes to the puppet installer.
Customizations of the CPS deployment are dependent on the requirements of the change. Examples of customizations include:
deploying a specific facility on a node (VM)
overriding a default configuration.
To explain the process, this example modifies all VMs built from an installer, using the Policy Server (QNS) node definition.
For this example, add custom routes via the example42-network Puppet module. (For more information on the module, refer to https://forge.puppetlabs.com/example42/network.)
|  Attention | In CPS 20.2.0, puppet is upgraded from version 3.6.2-3 to version 5.5.19. Puppet code has been modified to adapt to this change. Previous release puppet code is not compatible with the current puppet version (5.5.19). Customer-specific puppet code must be adapted to the current puppet version (5.5.19) before applying it to CPS 20.2.0. |
| Step 1 | Make sure that the proper paths are available: 
 | ||
| Step 2 | Install the necessary Puppet module. For example: 
 | ||
| Step 3 | Copy the existing node definition into the env_config nodes: 
 
 | ||
| Step 4 | Add a reference to your custom Puppet manifest: 
 
 | ||
| Step 5 | Create your new manifest for static routes (an example sketch follows these steps). |
| Step 6 | Validate the syntax of your newly created puppet script(s): 
 
 | ||
| Step 7 | Rebuild your Environment Configuration: 
 | ||
| Step 8 | Reinitialize your environment: 
 At this point your new manifest is applied across the deployment. For more details, refer to the installer image in the /etc/puppet/env_config/README. | 
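As an illustration of Step 5, the following is a minimal sketch of a static-route manifest. The class name, file path, destination subnet, and gateway are hypothetical; you can instead use the route definitions provided by the example42-network module referenced above.

# /etc/puppet/env_config/modules/custom/manifests/static_routes.pp (hypothetical path and class name)
class custom::static_routes {
  # Write a persistent static route for eth1 (example destination and gateway)
  file { '/etc/sysconfig/network-scripts/route-eth1':
    ensure  => file,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => "192.168.50.0/24 via 172.18.11.1 dev eth1\n",
  }
}

The syntax can then be checked as in Step 6, for example with puppet parser validate static_routes.pp.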
It is recommended that version control is used to track changes to these Puppet customizations.
For example, to use 'git', perform the following steps:
Initialize the directory as a repository:
# git init
Initialized empty Git repository in /var/qps/env_config/.git/.
Add everything:
# git add .
Commit your initial check-in:
# git commit -m 'initial commit of env_config'
If you are making more changes and customizations, make sure you create new revisions for those:
# git add .
# git commit -m 'updated static routes'
By default, the Orchestration API service starts with the HTTP mode on Cluster Manager.
You can change the mode to start with HTTPS self-signed certificate by setting the api_https=one_way_ssl facter value in the /etc/facter/facts.d/cluman_facts.yaml configuration file in Cluster Manager. This ensures that the API server starts by using the pre-loaded self-signed SSL certificates.
|  Important | You cannot upload certificates using the API. | 
To configure the Orchestration API server to start with the HTTPS self-signed certificate mode, make the following changes to the Heat template. These changes create the /etc/facter/facts.d/cluman_facts.yaml file and also set the puppet facter value to api_https=one_way_ssl in the configuration file in Cluster Manager.
cluman_api_name:
 type: string
 label: cluman orch api
 description: cluman orch
 default: one_way_ssl
# This will set the default value to one_way_ssl
- path: /etc/facter/facts.d/cluman_facts.yaml
  permissions: "0755"
  content:
    str_replace:
      template: |
        api_https: $kval
      params:
        $kval: { get_param: cluman_api_name }

Sample YAML configuration to run the Orchestration API server:
Using self-signed certificates (one_way_ssl):
cat /etc/facter/facts.d/cluman_facts.yaml
 api_https: "one_way_ssl"

Using trusted certificates (one_way_ssl):
cat /etc/facter/facts.d/cluman_facts.yaml
 api_https: "one_way_ssl"
 api_keystore_path: "/var/certs/keystore.jks"
 api_keystore_password: "yoursecret"
 api_keystore_type: "JKS"
 api_cert_alias: "server-tls"
 api_tls_version: "TLSv1.2"
 api_validate_certs: "false"
 api_validate_peers: "false"

Using mutual authentication (two_way_ssl):
cat /etc/facter/facts.d/cluman_facts.yaml
 api_https: "two_way_ssl"
 api_keystore_path: "/var/certs/keystore.jks"
 api_keystore_password: "yoursecret"
 api_keystore_type: "JKS"
 api_cert_alias: "server-tls"
 api_tls_version: "TLSv1.2"
 api_truststore_path: "/var/certs/truststore.jks"
 api_truststore_password: "yoursecret"
 api_truststore_type: "JKS"
 api_validate_certs: "true"
 api_validate_peers: "true"
 api_enable_crldp: "true"

|  Note | 
 | 
After Cluster Manager is deployed, you can reconfigure the API server to run on HTTP (default) or HTTPS mode. The prerequisites to configure the HTTPS mode are as follows:
For self-signed certificates, set api_https=one_way_ssl in the /etc/facter/facts.d/cluman_facts.yaml configuration file. 
                                 
For trusted certificates:
Install the certificates on Cluster Manager.
Import the certificates into the keystore and the truststore.
Set api_https value to one_way_ssl or two_way_ssl (mutual authentication) in the /etc/facter/facts.d/cluman_facts.yaml configuration file. 
                                       
To apply the configuration, run the following puppet commands on Cluster Manager. These commands reconfigure Cluster Manager only.
cd /opt/cluman
CLUMAN_DIR="/opt/cluman";
puppet apply --logdest /var/log/cluman/puppet-run.log --modulepath=${CLUMAN_DIR}/puppet/modules --config ${CLUMAN_DIR}/puppet/puppet.conf ${CLUMAN_DIR}/puppet/nodes/node_repo.pp
|  Note | 
 | 
Upgrade CPS to run the Orchestration API server on HTTP or HTTPS. To change the behavior, configuration parameters must be configured before triggering the upgrade.
Follow the steps below to upgrade CPS:
For self-signed certificates, set api_https=one_way_ssl in the /etc/facter/facts.d/cluman_facts.yaml configuration file and then trigger the upgrade. 
                                 
For trusted certificates:
Install the certificates on Cluster Manager.
Import the certificates into the keystore and the truststore.
Set api_https value to one_way_ssl or two_way_ssl (mutual authentication) in the /etc/facter/facts.d/cluman_facts.yaml configuration file. 
                                       
Trigger the upgrade.
|  Note | To roll back the configuration to default, that is HTTP mode, do the following: 
 | 
A keystore contains private keys and certificates used by TLS and SSL servers to authenticate themselves to TLS and SSL clients. When used as a truststore, the file contains certificates of trusted TLS and SSL servers or of certificate authorities; a truststore contains no private keys.
|  Note | Your trusted certificates and keystores or truststores should not be located at /opt/orchestration_api_server/ | 
| Step 1 | Create the PKCS12 file for key and certificate chains. 
 For example: 
                                             				 | 
| Step 2 | Create the Java KeyStore on the server. 
  
                                             				 | 
| Step 3 | Import the root certificate or CA certificate into the truststore. You must remember the keystore password; it needs to be updated in the /etc/facter/facts.d/cluman_facts.yaml file. A sketch of typical commands for these steps is provided below. |
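As a rough sketch of Steps 1 through 3 (certificate and key file names, aliases, and passwords are placeholders; the keystore and truststore paths match the sample cluman_facts.yaml shown earlier):

# Step 1: bundle the server key and certificate chain into a PKCS12 file
openssl pkcs12 -export -in server.crt -inkey server.key -certfile ca-chain.crt -name server-tls -out server.p12

# Step 2: convert the PKCS12 file into a Java keystore (JKS)
keytool -importkeystore -srckeystore server.p12 -srcstoretype PKCS12 -destkeystore /var/certs/keystore.jks -deststoretype JKS -srcalias server-tls -destalias server-tls

# Step 3: import the root or CA certificate into the truststore
keytool -importcert -trustcacerts -alias ca-root -file ca-root.crt -keystore /var/certs/truststore.jks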
The following parameters can be defined in the /etc/facter/facts.d/cluman_facts.yaml configuration file. This file is loaded only onto the Cluster Manager VM. All parameters and values are case sensitive.
|  Note | Before loading the configuration file to the Cluster Manager VM, verify that the YAML file uses the proper syntax. There are many publicly-available Websites that you can use to validate your YAML configuration file. | 
| Parameter | Description | 
|---|---|
| api_https | Runs the application with or without HTTPS (one way or mutual authentication). Valid options: |
| api_tls_version | List of protocols that are supported. Valid options: |
| api_keystore_path | Path to the Java keystore which contains the host certificate and private key. Required for one_way_ssl and two_way_ssl. |
| api_keystore_type | Type of keystore. Valid options: Required for one_way_ssl and two_way_ssl. |
| api_keystore_password | Password used to access the keystore. Required for one_way_ssl and two_way_ssl. |
| api_cert_alias | Alias of the certificate to use. Required for one_way_ssl and two_way_ssl. |
| api_truststore_path | Path to the Java keystore which contains the CA certificates used to establish trust. Required for two_way_ssl. |
| api_truststore_type | The type of keystore. Valid options: Required for two_way_ssl. |
| api_truststore_password | Password used to access the truststore. Required for two_way_ssl. |
| api_validate_certs | Decides whether or not to validate TLS certificates before starting. If enabled, the wizard refuses to start with expired or otherwise invalid certificates. Valid options: Required for one_way_ssl and two_way_ssl. |
| api_validate_peers | Decides whether or not to validate TLS peer certificates. Valid options: Required for one_way_ssl and two_way_ssl. |
|  | Decides whether or not client authentication is required. Valid options: Required for one_way_ssl and two_way_ssl. |
| api_enable_crldp | Decides whether or not CRL Distribution Points (CRLDP) support is enabled. Valid options: Required for two_way_ssl. |
|  Note | The values entered must be in lower case and should be within quotes. For example, "false". | 
Perform the following steps to install platform scripts for MongoDB health monitoring for write operations on an OpenStack setup.
| Step 1 | Log in to the Cluster Manager or installer as a root user. | 
| Step 2 | Update the required key and value in /var/qps/config/deploy/json/Configuration.js. | 
| Step 3 | Execute the following scripts to make sure the changes are applied on all the required VMs. 
 
 
 | 
| Step 4 | Execute the following command to validate if the parameter is applied. 
 Sample Output when parameter is configured:  | 
| Step 5 | Execute the following steps on each Policy Server (QNS) VM. |