A VNF deployment is initiated as a service request through the northbound interface or the ESC portal. The service request comprises templates consisting of XML payloads and deployment parameters. Deployment parameters are rules, policies, or day 0 configuration that determine the properties of the VNF and its lifecycle. The table below lists the deployment parameters and how they apply on OpenStack and VMware vCenter:
| Deployment Parameter | OpenStack | VMware vCenter |
---|---|---|
| Day 0 Configuration | Day 0 configuration is done in one of several ways. | Day 0 configuration is done in one of several ways. |
| Deploying VNFs | Configuration of individual and composite VNFs is done in one of several ways. | Configuration of individual and composite VNFs is done in one of several ways. |
| Undeploy Virtual Network Functions | Undeploying VNFs is done in one of several ways. | Undeploying VNFs is done in one of several ways. |
| Affinity and Anti-Affinity Rules | | |
| VNF Operations | For more information, see the Elastic Services Controller Portal. | For more information, see the Elastic Services Controller Portal. |
| Multi Cluster | Not applicable | For more information, see Deploying Virtual Network Functions in ESC Portal (VMware vCenter Only). |
| Multiple Virtual Datacenter (Multi VDC) | Not applicable | Multiple Virtual Datacenter selection is done in one of several ways. |
| Hardware Acceleration | Hardware Acceleration is supported in one of several ways. For more information, see Hardware Acceleration Support (OpenStack Only). | Not applicable |
| Single Root I/O Virtualization | | Not applicable |
This chapter describes the procedures to configure deployment customizations. For more information on VNF deployment, see Deploying Virtual Network Functions.
The initial or day 0 configuration of a VNF is based on the VM type. A VNF administrator configures the initial template for each VM type at the time of VNF deployment. The same configuration template is applied to all deployed and new VMs of that VM type. The template is processed at the time of individual VM deployment. The day 0 configuration persists, so that initial deployment, healing, and scaling of VMs all use the same day 0 template.
Some of the day 0 configuration tasks include bringing up interfaces, managing networks, support for static or dynamic IP addressing (DHCP, IPAM), SSH keys, and NETCONF-enabled configuration support on the VNF.
Note: ESC does not support day 0 configuration of interfaces added during a service update. In case of recovery with day 0 configuration, all interfaces with Network Interface Card IDs are configured.
Day 0 configuration is defined in the datamodel under the config_data tag. Each user data file and configuration drive file is defined under a configuration tag. The contents are in the form of a template that ESC processes through the Apache Velocity Template Engine before passing it to the VM.
The config_data tag is defined for each vm_group. The same configuration template is applied to all VMs in the vm_group. The template file is retrieved and stored at deployment initialization, and template processing is applied at the time of VM deployment. The content of the config file can be supplied from a file or inline data:
<file> url </file> <data> inline config content </data>
The url specifies a file on the ESC VM file system or a file hosted on a remote HTTP server.
A destination name is assigned to the config by <dst>. User data is treated as a special case with <dst>--user-data</dst>.
A sample config_data datamodel:
<config_data>
  <configuration>
    <file>file://cisco/userdata_file.txt</file>
    <dst>--user-data</dst>
    <variable>
      <name>CUSTOM_VARIABLE_FOR_USERDATA</name>
      <val>SOME_VALUE_XXX</val>
    </variable>
  </configuration>
  <configuration>
    <file>file://cisco/config.sh</file>
    <dst>config.sh</dst>
    <variable>
      <name>CUSTOM_VARIABLE_FOR_CONFIG</name>
      <val>SOME_VALUE_XXX</val>
    </variable>
  </configuration>
</config_data>
Custom variables can be specified in the variable tag within a configuration. Zero or more variables can be included in each configuration, and each variable can have multiple values. Multiple values are only useful when creating more than one VM per vm_group, and when scale-in and scale-out add and remove VMs from the VM group, as sketched below.
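For illustration, a hedged sketch of a variable carrying one value per VM (hypothetical name and addresses; presumably the values are consumed one per VM in deployment order):

<variable>
  <name>VIP_ADDRESS</name>
  <val>10.0.0.10</val>
  <val>10.0.0.11</val>
</variable>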
The contents of <file> are a template that is processed by the Velocity Template Engine. ESC populates a set of variables for each interface before processing the configuration template:
| Variable | Description |
---|---|
| NICID_n_IP_ALLOCATION_TYPE | String containing FIXED or DHCP |
| NICID_n_NETWORK_ID | String containing the neutron network UUID |
| NICID_n_IP_ADDRESS | IPv4 or IPv6 address |
| NICID_n_MAC_ADDRESS | String |
| NICID_n_GATEWAY | IPv4 or IPv6 gateway address |
| NICID_n_CIDR_ADDRESS | IPv4 or IPv6 CIDR prefix address |
| NICID_n_CIDR_PREFIX | Integer with the prefix length |
| NICID_n_NETMASK | If an IPv4 CIDR address and prefix are present, ESC automatically calculates and populates the netmask variable. It is not substituted for an IPv6 address and should not be used in that case. |
| NICID_n_ANYCAST_ADDRESS | String with an IPv4 or IPv6 address |
| NICID_n_IPV4_OCTETS | String with the last two octets of the IP address, such as 16.66 (specific to CloudVPN) |
where n is the interface number from the datamodel, for example, 0, 1, 2, 3.
Example
NICID_0_IP_ALLOCATION_TYPE: FIXED
NICID_0_NETWORK_ID: 9f8d9a97-d873-4a1c-8e95-1a123686f038
NICID_0_IP_ADDRESS: 2a00:c31:7fe2:1d:0:0:1:1000
NICID_0_MAC_ADDRESS: null
NICID_0_GATEWAY: 2a00:c31:7fe2:1d::1
NICID_0_CIDR_ADDRESS: 2a00:c31:7fe2:1d::
NICID_0_CIDR_PREFIX: 64
NICID_0_ANYCAST_ADDRESS: null
NICID_0_IPV4_OCTETS: 16.0
NICID_1_IP_ALLOCATION_TYPE: DHCP
NICID_1_NETWORK_ID: 0c468d8e-2385-4641-b1db-9080c170cb1a
NICID_1_IP_ADDRESS: 6.0.0.2
NICID_1_MAC_ADDRESS: null
NICID_1_GATEWAY: 6.0.0.1
NICID_1_CIDR_ADDRESS: 6.0.0.0
NICID_1_CIDR_PREFIX: 24
NICID_1_ANYCAST_ADDRESS: null
NICID_1_NETMASK: 255.255.255.0
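A minimal day 0 template sketch that consumes these variables (illustrative only; it assumes nicid 0 maps to eth0 in the guest, and CUSTOM_VARIABLE_FOR_CONFIG comes from the variable tag shown earlier):

#!/bin/sh
# ESC resolves the $ references below through the Velocity engine
# before the file reaches the VM.
ip addr add $NICID_0_IP_ADDRESS/$NICID_0_CIDR_PREFIX dev eth0
ip route add default via $NICID_0_GATEWAY
echo "custom value: $CUSTOM_VARIABLE_FOR_CONFIG" >> /tmp/day0.log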
By default, ESC substitutes the $ variable in the day 0 configuration file with the actual value during deployment. In ESC Release 2.3 and later, you can enable or disable the $ variable substitution for each configuration file.
Add the following field to the configuration datamodel:
<template_engine>VELOCITY | NONE</template_engine>
If no value is set, the default option is VELOCITY and the $ variable substitution takes place. When set to NONE, the $ variable substitution does not take place.
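For example, a configuration entry that disables substitution could look like the following sketch (the structure follows the config_data sample above):

<configuration>
  <file>file://cisco/config.sh</file>
  <dst>config.sh</dst>
  <template_engine>NONE</template_engine>
</configuration>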
Follow these guidelines when processing the template through the Velocity Template Engine.
Cisco Elastic Services Controller monitors VNFs through the definition of Key Performance Indicator (KPI) metrics. Core metrics are preloaded with ESC. A programmable interface gives the end user the ability to add and remove metrics, and to define the actions to be triggered on specified conditions. These metrics and actions are defined at the time of deployment.
The ESC metrics and actions datamodel is divided into two sections:
KPI—Defines the type of monitoring, events, polling interval and other parameters. This includes the event_name, threshold and metric values. The event_name is user defined. The metric_values specify threshold conditions and other details. An event is triggered when the threshold condition is reached.
Rule—Defines the actions when the KPI monitoring events are triggered. The action element defines the actions to be performed when an event corresponding to the event_name is triggered.
The ESC object model defines, for each vm_group, a section where the end user can specify the administrative rules to be applied based on the outcome of the KPI's selected metric collector.
<rules>
  <admin_rules>
    <rule>
      <event_name>VM_ALIVE</event_name>
      <action>TRUE esc_vm_alive_notification</action>
      <action>FALSE recover autohealing</action>
    </rule>
    ...
  </admin_rules>
</rules>
As mentioned in the KPIs section, correlation between KPIs and rules is done based on the value of the <event_name> tag.
In the rules section above, if the outcome of the KPI defining event_name VM_ALIVE is TRUE for the selected metric collector, the action identified by the key TRUE esc_vm_alive_notification is selected for execution.

If the outcome is FALSE, the action identified by the key FALSE recover autohealing is selected for execution.
For information on updating KPIs and Rules, see Updating the KPIs and Rules.
The ESC Metrics and Actions (Dynamic Mapping) framework is the foundation of the kpi and rules sections. As described in the KPIs section, the metric type uniquely identifies a metric and its metadata.
A sample metric definition is as follows:
<metrics>
  <metric>
    <name>ICMPPING</name>
    <userLabel>ICMP Ping</userLabel>
    <type>MONITOR_SUCCESS_FAILURE</type>
    <metaData>
      <type>icmp_ping</type>
      <properties>
        <property>
          <name>ip_address</name>
          <value></value>
        </property>
        <property>
          <name>enable_events_after_success</name>
          <value>true</value>
        </property>
        <property>
          <name>vm_gateway_ip_address</name>
          <value></value>
        </property>
        <property>
          <name>enable_check_interface</name>
          <value>true</value>
        </property>
      </properties>
    </metaData>
  </metric>
  ...
</metrics>
The above metric is identified by its unique name ICMPPING. The <type> tag identifies the metric type.
Currently ESC supports two types of metrics:
The <metaData> section defines the attributes and properties that are processed by the monitoring engine.
The metric_collector type in the KPI shows the following behavior:
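A representative KPI definition is sketched below (the values are illustrative; the event_name and collector type match the rules and metric samples in this section):

<kpi>
  <event_name>VM_ALIVE</event_name>
  <metric_value>1</metric_value>
  <metric_cond>GT</metric_cond>
  <metric_type>UINT32</metric_type>
  <metric_collector>
    <type>ICMPPING</type>
    <nicid>0</nicid>
    <poll_frequency>3</poll_frequency>
    <polling_unit>seconds</polling_unit>
    <continuous_alarm>false</continuous_alarm>
  </metric_collector>
</kpi>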
At a regular interval of 3 seconds, the behavior associated with the ICMPPING identifier is triggered. The ICMPPING metric is of type MONITOR_SUCCESS_FAILURE; that is, the outcome of the monitoring action is either a success or a failure. In the sample above, an icmp_ping is performed using the <ip_address> field defined in the <metaData> section. In case of SUCCESS, the rule actions with the TRUE prefix are selected for execution. In case of FAILURE, the rule actions with the FALSE prefix are selected for execution.
<actions>
  <action>
    <name>TRUE servicebooted.sh esc_vm_alive_notification</name>
    <type>ESC_POST_EVENT</type>
    <metaData>
      <type>esc_post_event</type>
      <properties>
        <property>
          <name>esc_url</name>
          <value></value>
        </property>
        <property>
          <name>vm_external_id</name>
          <value></value>
        </property>
        <property>
          <name>vm_name</name>
          <value></value>
        </property>
        <property>
          <name>event_name</name>
          <value></value>
        </property>
        <property>
          <name>esc_event</name>
          <value>SERVICE_BOOTED</value>
        </property>
      </properties>
    </metaData>
  </action>
  ...
</actions>
The action sample above describes the behavior associated with the SUCCESS value. The ESC rule action name TRUE servicebooted.sh esc_vm_alive_notification specifies the action to be selected. Once selected, the action <type> ESC_POST_EVENT identifies the action that the monitoring engine executes.
Back up this file to a location outside of ESC, such as your home directory.
Create the esc-dynamic-mapping directory on your ESC VM. Ensure that read permissions are set.
--file root:root:/opt/cisco/esc/esc-dynamic-mapping/dynamic_mappings.xml:<path-to-local-copy-of-dynamic-mapping.xml>
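For context, a hedged shell sketch of the backup and directory steps (paths follow the text above; the --file argument above installs the file at ESC boot time):

# Back up the current mapping file to a location outside ESC (home directory here).
cp /opt/cisco/esc/esc-dynamic-mapping/dynamic_mappings.xml ~/dynamic_mappings.xml.bak

# Create the dynamic mapping directory on the ESC VM with read permissions.
mkdir -p /opt/cisco/esc/esc-dynamic-mapping
chmod 755 /opt/cisco/esc/esc-dynamic-mapping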
The CRUD operations for mapping the actions and the metrics are available through REST API. Refer to the API tables below for mapped metrics and actions definition.
To update an existing mapping, delete and add a new mapping through the REST API.
Note: While upgrading any earlier version of ESC to ESC 2.2 and later, to maintain the VNF monitoring rules you must back up the dynamic_mappings.xml file and then restore it in the upgraded ESC VM. For more information on upgrading monitoring rules, see the Upgrading VNF Monitoring Rules section in the Cisco Elastic Services Controller Install and Upgrade Guide. In Cisco ESC Release 2.3.2 and later, the dynamic mapping API is accessible locally only on the ESC VM.
Action APIs

| User Operation | Path | HTTP Operation | Payload | Response | Description |
---|---|---|---|---|---|
| Read | internal/dynamic_mapping/actions/<action_name> | GET | N/A | Action XML | Get action by name |
| Read All | internal/dynamic_mapping/actions | GET | N/A | Action XML | Get all defined actions |
| Write | internal/dynamic_mapping/actions | POST | Actions XML | Expected Action XML | Create one or multiple actions |
| Delete | internal/dynamic_mapping/actions/<action_name> | DELETE | N/A | N/A | Delete action by name |
| Clear All | internal/dynamic_mapping/actions | DELETE | N/A | N/A | Delete all non-core actions |
The response for the actions APIs is as follows:
<actions>
  <action>
    <name>{action name}</name>
    <type>{action type}</type>
    <metaData>
      <type>{monitoring engine action type}</type>
      <properties>
        <property>
          <name></name>
          <value></value>
        </property>
        ...
      </properties>
    </metaData>
  </action>
  ...
</actions>
Where
{action name}: Unique identifier for the action. Note that in order to be compliant with the ESC object model, for success or failure actions, the name must start with either TRUE or FALSE.
{action type}: The action type in the current release can be ESC_POST_EVENT, SCRIPT, or CUSTOM_SCRIPT.
{monitoring engine action type}: The monitoring engine types are the following: icmp_ping, icmp4_ping, icmp6_ping, esc_post_event, script, custom_script, snmp_get. See Monitoring the VNFs for more details.
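As an illustration, hedged curl sketches against the action mapping API (the port and base path follow the URIs shown later in this chapter; action names containing spaces must be percent-encoded):

# Read one action by name
curl -X GET 'http://localhost:8080/ESCManager/internal/dynamic_mapping/actions/TRUE%20ScaleOut'

# Create one or more actions from a payload file
curl -X POST -H 'Content-Type: application/xml' -d @actions.xml \
  'http://localhost:8080/ESCManager/internal/dynamic_mapping/actions'

# Delete an action by name (to update an action, delete it and re-add it)
curl -X DELETE 'http://localhost:8080/ESCManager/internal/dynamic_mapping/actions/TRUE%20ScaleOut'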
Core and Default Actions List
| Name | Type | Description |
---|---|---|
| TRUE esc_vm_alive_notification | Core | Start Service |
| TRUE servicebooted.sh | Core/Legacy | Start Service |
| FALSE recover autohealing | Core | Recover Service |
| TRUE servicescaleup.sh | Core/Legacy | Scale Out |
| TRUE esc_vm_scale_out_notification | Core | Scale Out |
| TRUE servicescaledown.sh | Core/Legacy | Scale In |
| TRUE esc_vm_scale_in_notification | Core | Scale In |
| TRUE apply_netscaler_license.py | Default | Apply NetScaler License |
Metric APIs
| User Operation | Path | HTTP Operation | Payload | Response | Description |
---|---|---|---|---|---|
| Read | internal/dynamic_mapping/metrics/<metric_name> | GET | N/A | Metric XML | Get metric by name |
| Read All | internal/dynamic_mapping/metrics | GET | N/A | Metric XML | Get all defined metrics |
| Write | internal/dynamic_mapping/metrics | POST | Metrics XML | Expected Metrics XML | Create one or multiple metrics |
| Delete | internal/dynamic_mapping/metrics/<metric_name> | DELETE | N/A | N/A | Delete metric by name |
| Clear All | internal/dynamic_mapping/metrics | DELETE | N/A | N/A | Delete all non-core metrics |
The response for the Metric APIs is as follows:
<metrics>
  <metric>
    <name>{metric name}</name>
    <type>{metric type}</type>
    <metaData>
      <type>{monitoring engine action type}</type>
      <properties>
        <property>
          <name></name>
          <value></value>
        </property>
        ...
      </properties>
    </metaData>
  </metric>
  ...
</metrics>
Where,
{metric name}: Unique identifier for the metric.
{metric type}: The metric type can be MONITOR_SUCCESS_FAILURE, MONITOR_THRESHOLD, or MONITOR_THRESHOLD_COMPUTE.
{monitoring engine action type}: The monitoring engine types are the following: icmp_ping, icmp4_ping, icmp6_ping, esc_post_event, script, custom_script, snmp_get. See Monitoring for more details.
Core and Default Metrics List
| Name | Type | Description |
---|---|---|
| ICMPPING | Core | ICMP Ping |
| MEMORY | Default | Memory compute percent usage |
| CPU | Default | CPU compute percent usage |
| CPU_LOAD_1 | Default | CPU 1-minute average load |
| CPU_LOAD_5 | Default | CPU 5-minute average load |
| CPU_LOAD_15 | Default | CPU 15-minute average load |
| PROCESSING_LOAD | Default | CSR processing load |
| OUTPUT_TOTAL_BIT_RATE | Default | CSR total bit rate |
| SUBSCRIBER_SESSION | Default | CSR subscriber session |
ESC introduces a new REST API to trigger the existing (pre-defined) actions defined through the Dynamic Mapping API, when required. For more information on the dynamic mapping APIs, see Metrics and Actions (Dynamic Mapping).
A sample pre-defined action is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<actions>
  <action>
    <name>SaidDoIt</name>
    <userlabel>My Friendly Action</userlabel>
    <type>SCRIPT</type>
    <metaData>
      <type>script</type>
      <properties>
        <property>
          <name>script_filename</name>
          <value>/opt/cisco/esc/esc-scripts/do_somethin.py</value>
        </property>
        <property>
          <name>arg1</name>
          <value>some_val</value>
        </property>
        <property>
          <name>notification</name>
          <value>true</value>
        </property>
      </properties>
    </metaData>
  </action>
</actions>
Note: A script file located on a remote server is also supported. You must provide the details in the <value> tag, for example, <value>http://myremoteserverIP:80/file_store/do_somethin.py</value>
The pre-defined action mentioned above is triggered using the trigger API.
Execute the following HTTP or HTTPS POST operation:
POST http://<IP_ADDRESS>:8080/ESCManager/v0/trigger/action/
POST https://<IP_ADDRESS>:8443/ESCManager/v0/trigger/action/
The following payload shows the actions triggered by the API, and the response received:
<triggerTarget>
  <action>SaidDoIt</action>
  <properties>
    <property>
      <name>arg1</name>
      <value>real_value</value>
    </property>
  </properties>
</triggerTarget>

The response:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<triggerResponse>
  <handle>c11be5b6-f0cc-47ff-97b4-a73cce3363a5</handle>
  <message>Action : 'SAIDDOIT' triggered</message>
</triggerResponse>
ESC accepts the request, and returns a response payload and status code.
An HTTP status code of 200 indicates that the triggered action exists and was triggered successfully. An HTTP status code of 400 or 404 indicates that the action to be triggered was not found.
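For example, a hedged curl sketch of the trigger call using the payload above:

curl -X POST 'http://<IP_ADDRESS>:8080/ESCManager/v0/trigger/action/' \
  -H 'Content-Type: application/xml' \
  -d '<triggerTarget><action>SaidDoIt</action><properties><property><name>arg1</name><value>real_value</value></property></properties></triggerTarget>'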
You can determine the status using the custom script notifications sent to NB at various lifecycle stages. For more information, see Custom Script Notification.
ESC sends the MANUAL_TRIGGERED_ACTION_UPDATE callback event to NB with a status message that describes the success or failure of the action execution.
The notification is as follows:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<esc_event xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <event_type>MANUAL_TRIGGERED_ACTION_UPDATE</event_type>
  <properties>
    <property>
      <name>handle</name>
      <value>c11be5b6-f0cc-47ff-97b4-a73cce3363a5</value>
    </property>
    <property>
      <name>message</name>
      <value>Action execution success</value>
    </property>
    <property>
      <name>exit_code</name>
      <value>0</value>
    </property>
    <property>
      <name>action_name</name>
      <value>SAIDDOIT</value>
    </property>
  </properties>
</esc_event>
Note: The script_filename property cannot be overwritten by the trigger API request. See Script Actions for more details. The trigger API must not contain any additional properties that do not exist in the pre-defined action.
The trigger API allows you to override some of the special properties of the actions, listed below:
notification—Set this to report script progress notifications at run time. The default value is false. This value can be set to true in the action or trigger payload.
wait_max_timeout—The time to wait for the script to complete execution before it is terminated. The default wait timeout is 900 seconds.
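A hedged sketch of a trigger payload that overrides these properties (this assumes the properties are accepted in the trigger payload like any other property of the pre-defined action):

<triggerTarget>
  <action>SaidDoIt</action>
  <properties>
    <property>
      <name>notification</name>
      <value>true</value>
    </property>
    <property>
      <name>wait_max_timeout</name>
      <value>300</value>
    </property>
  </properties>
</triggerTarget>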
Custom Script Metric Monitoring can be performed as follows:
The script to be executed must comply with the rules specified for a MONITOR_THRESHOLD action. Threshold crossing evaluation is based on the exit value from the script execution. In the sample script below, the return value is the number of IP sessions.
#!/usr/bin/env python
# Sample custom_script metric: reports the number of IP sessions
# as the process exit value.
import pexpect
import re
import sys

ssh_newkey = 'Are you sure you want to continue connecting'

# Functions
def get_value(key):
    # Return the command-line value that follows the given key.
    i = 0
    for arg in sys.argv:
        i = i + 1
        if arg == key:
            return sys.argv[i]
    return None

def get_ip_addr():
    device_ip = get_value("vm_ip_address")
    return device_ip

# Main
CSR_IP = get_ip_addr()
p = pexpect.spawn('ssh admin@' + CSR_IP + ' show ip nat translations total')
i = p.expect([ssh_newkey, 'assword:', pexpect.EOF])
if i == 0:
    # First connection: accept the host key, then expect the password prompt.
    p.sendline('yes')
    i = p.expect([ssh_newkey, 'assword:', pexpect.EOF])
if i == 1:
    p.sendline("admin")
    p.expect(pexpect.EOF)
elif i == 2:
    pass
n = p.before
# The first number in the command output is the session count;
# report it as the exit value for threshold evaluation.
result = re.findall(r'\d+', n)[0]
sys.exit(int(result))
The ESC monitoring and action engine processes the script exit value.
The script has to be installed into the following ESC VM directory: /opt/cisco/esc/esc-scripts/
The following payload describes a metric that uses the custom script defined above:
<!-- Demo Metric Counting Sessions -->
<metrics>
  <metric>
    <name>custom_script_count_sessions</name>
    <type>MONITOR_THRESHOLD</type>
    <metaData>
      <properties>
        <property>
          <name>script_filename</name>
          <value>/cisco/esc-scripts/countSessions.py</value>
        </property>
        <property>
          <name>for_threshold</name>
          <value>true</value>
        </property>
      </properties>
      <type>custom_script_threshold</type>
    </metaData>
  </metric>
</metrics>
The metric payload has to be added to the list of supported ESC metrics by using the Mapping APIs.
Execute an HTTP POST operation on the following URI:
http://<my_esc_ip>:8080/ESCManager/internal/dynamic_mapping/metrics
The following payload describes custom actions that can be added to the list of supported ESC actions by using the Mapping APIs.
<actions>
  <action>
    <name>TRUE ScaleOut</name>
    <type>ESC_POST_EVENT</type>
    <metaData>
      <type>esc_post_event</type>
      <properties>
        <property>
          <name>esc_url</name>
          <value></value>
        </property>
        <property>
          <name>vm_external_id</name>
          <value></value>
        </property>
        <property>
          <name>vm_name</name>
          <value></value>
        </property>
        <property>
          <name>event_name</name>
          <value></value>
        </property>
        <property>
          <name>esc_event</name>
          <value>VM_SCALE_OUT</value>
        </property>
        <property>
          <name>esc_config_data</name>
          <value></value>
        </property>
      </properties>
    </metaData>
  </action>
  <action>
    <name>TRUE ScaleIn</name>
    <type>ESC_POST_EVENT</type>
    <metaData>
      <type>esc_post_event</type>
      <properties>
        <property>
          <name>esc_url</name>
          <value></value>
        </property>
        <property>
          <name>vm_external_id</name>
          <value></value>
        </property>
        <property>
          <name>vm_name</name>
          <value></value>
        </property>
        <property>
          <name>event_name</name>
          <value></value>
        </property>
        <property>
          <name>esc_event</name>
          <value>VM_SCALE_IN</value>
        </property>
      </properties>
    </metaData>
  </action>
</actions>
Execute an HTTP POST operation on the following URI:
http://<IP_ADDRESS>:8080/ESCManager/internal/dynamic_mapping/actions
The KPI section defines the new KPI using the monitoring metrics.
<kpi>
  <event_name>DEMO_SCRIPT_SCALE_OUT</event_name>
  <metric_value>20</metric_value>
  <metric_cond>GT</metric_cond>
  <metric_type>UINT32</metric_type>
  <metric_collector>
    <type>custom_script_count_sessions</type>
    <nicid>0</nicid>
    <poll_frequency>15</poll_frequency>
    <polling_unit>seconds</polling_unit>
    <continuous_alarm>false</continuous_alarm>
  </metric_collector>
</kpi>
<kpi>
  <event_name>DEMO_SCRIPT_SCALE_IN</event_name>
  <metric_value>1</metric_value>
  <metric_cond>LT</metric_cond>
  <metric_type>UINT32</metric_type>
  <metric_occurrences_true>1</metric_occurrences_true>
  <metric_occurrences_false>1</metric_occurrences_false>
  <metric_collector>
    <type>custom_script_count_sessions</type>
    <nicid>0</nicid>
    <poll_frequency>15</poll_frequency>
    <polling_unit>seconds</polling_unit>
    <continuous_alarm>false</continuous_alarm>
  </metric_collector>
</kpi>
In the first KPI section above, the metric identified by custom_script_count_sessions is executed at a regular interval of 15 seconds. If the value returned by the metric is greater than 20, the event DEMO_SCRIPT_SCALE_OUT is triggered and processed by the rules section.
In the second KPI section, the same metric is executed at a regular interval of 15 seconds. If the value returned by the metric is less than 1, the event DEMO_SCRIPT_SCALE_IN is triggered and processed by the rules section.
The rules section defines rules using the event_name values used by the KPIs. The action tag defines the action to execute when an event_name is triggered. In the example below, the action identified by TRUE ScaleOut is executed when the event DEMO_SCRIPT_SCALE_OUT is triggered.
<rule>
  <event_name>DEMO_SCRIPT_SCALE_OUT</event_name>
  <action>ALWAYS log</action>
  <action>TRUE ScaleOut</action>
</rule>
<rule>
  <event_name>DEMO_SCRIPT_SCALE_IN</event_name>
  <action>ALWAYS log</action>
  <action>TRUE ScaleIn</action>
</rule>
ESC supports sending notifications northbound about customized scripts run as part of the deployment at a certain lifecycle stage. You can also determine the progress of the executed script through this notification. To execute a custom script with notification, define the action type attribute as SCRIPT, add a property named notification, and set its value to true.
For example, in the datamodel below, the action is to run a customized script located at /opt/cisco/esc/esc-scripts/senotification.py with notification, when the deployment reaches POST_DEPLOY_ALIVE stage.
<policies>
  <policy>
    <name>PCRF_POST_DEPLOYMENT</name>
    <conditions>
      <condition>
        <name>LCS::POST_DEPLOY_ALIVE</name>
      </condition>
    </conditions>
    <actions>
      <action>
        <name>ANY_NAME</name>
        <type>SCRIPT</type>
        <properties>
          <property>
            <name>script_filename</name>
            <value>/opt/cisco/esc/esc-scripts/senotification.py</value>
          </property>
          <property>
            <name>notification</name>
            <value>true</value>
          </property>
        </properties>
      </action>
    </actions>
  </policy>
</policies>
<vm_group>
  <name>g1</name>
You can notify northbound about the script execution progress using the following outputs:
Standard JSON Output
The standard JSON output follows the MONA notification convention. MONA captures entries in this format to generate notifications.
{"esc-notification":{"items":{"properties": [{"name":"name1","value":"value1"},{"name":"name2","value":"value2"}...]}}}The items are listed in the table below.
| Name | Description |
---|---|
| type | Describes the type of notification: progress_steps, progress_percentage, log, alert, or error |
| progress | For the progress_steps type, {current_step}\|{total_steps}. For the progress_percentage type, {percentage}. |
| msg | Notification message |
Example JSON output is as follows:
{"esc-notification":{"items":{"properties": [{"name":"type", "value":"progress_percentage"},{"name":"progress","value":"25"},{"name":"msg","value":"Installation in progress."}]}}}![]() Note | If the custom script is written in Python, because standard output is buffered by default, after each notification print statement, the script is required to call sys.stdout.flush() to flush the buffer (for pre Python 3.0). Otherwise MONA cannot process the script stdout in a real-time. print '{"esc-notification":{"items":{"properties": [{"name":"type", "value":"progress_percentage"},{"name":"progress","value":"25"},{"name":"msg","value":"Installation in progress."}]}}}'sys.stdout.flush() |
REST API Call
For the REST API, the script must accept a script handle as the last parameter. The script handle can be a UUID, a MONA action, or an execution job ID. For example, if the script originally accepts 3 command-line parameters, to support MONA notification it must accept one additional parameter for the handle UUID. This helps MONA identify the notification source. For every notification, the script is responsible for constructing a POST REST call to MONA's endpoint from inside the script:

http://localhost:8090/mona/v1/actions/notification
The payload is as follows:
{ "esc-notification" : { "items" : { "properties" : [{ "name" : "type", "value" : "log", "hidden" : false }, { "name" : "msg", "value" : "Log info", "hidden" : false } ] }, "source" : { "action_handle" : "f82fe86d-6625-4b13-99f7-89d169e427ad" } } }
Note: The action_handle value is the handle UUID that MONA passes into the script.
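A minimal bash sketch of a notifying script (hypothetical; it assumes the handle UUID arrives as the last command-line argument, as described above):

#!/bin/bash
# The handle UUID MONA passes in is the last positional argument.
HANDLE="${!#}"

# Report a log-type notification back to MONA's local endpoint.
curl -s -X POST 'http://localhost:8090/mona/v1/actions/notification' \
  -H 'Content-Type: application/json' \
  -d '{"esc-notification":{"items":{"properties":[{"name":"type","value":"log","hidden":false},{"name":"msg","value":"step 1 complete","hidden":false}]},"source":{"action_handle":"'"$HANDLE"'"}}}'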
Cisco ESC Release 2.2 and later supports a policy-driven datamodel. A new <policy> section is introduced under <policies> at both the deployment and VM group levels.
Using the policy-driven datamodel, a user can perform actions based on conditions. ESC supports predefined actions, or customized scripts during a deployment based on certain Lifecycle Stage (LCS). For example, the redeployment policy uses predefined actions based on lifecycle stages (LCS) to redeploy VMs. For more information, see Redeployment Policy.
The policy datamodel consists of conditions and actions. The condition is a Lifecycle Stage (LCS) in a deployment. The action is either predefined or a custom script.
Predefined action—The action is predefined and executed when the condition is met.
In the datamodel below, when condition2 is met, Action2 is performed. The action <type> is predefined.
Custom Script—The action is a custom script, and executed when the condition is met.
<policies>
  <policy>
    <name>Name1</name>
    <conditions>
      <condition>
        <name>Condition1</name>
      </condition>
    </conditions>
    <actions>
      <action>
        <name>Action1-1</name>
        <type>SCRIPT</type>
      </action>
      <action>
        <name>Action1-2</name>
        <type>SCRIPT</type>
      </action>
    </actions>
  </policy>
  <policy>
    <name>Name2</name>
    <conditions>
      <condition>
        <name>Condition2</name>
      </condition>
    </conditions>
    <actions>
      <action>
        <name>Action2</name>
        <type>PRE-DEFINED</type>
      </action>
    </actions>
  </policy>
</policies>
For more information on Predefined actions, and scripts, see Recovery and Redeployment Policies.
The table below shows the LCS in a deployment, and its description. The recovery and redeployment policies, and VNF software upgrade policies use the policy-driven data model. For details on configuring the recovery and redeployment policies using the policy framework, see Recovery and Redeployment Policies. For details on upgrading the VNF software upgrade policies, see Upgrading the VNF Software Using the Policy Data Model.
| Condition Name | Scope | Description |
---|---|---|
| LCS::PRE_DEPLOY | Deployment | Occurs just before deploying VMs of the deployment. |
| LCS::POST_DEPLOY_ALIVE | Deployment | Occurs immediately after the deployment is active. |
| LCS::DEPLOY_ERR | Deployment | Occurs immediately after the deployment fails. |
| LCS::POST_DEPLOY::VM_RECOVERY_ERR | Deployment | Occurs immediately after the recovery of one VM fails. (This is specified at the deployment level and applies to all VM groups.) |
| LCS::POST_DEPLOY::VM_RECOVERY_REDEPLOY_ERR | Deployment | Occurs immediately after the redeployment of one VM fails. (This is specified at the deployment level and applies to all VM groups.) |
| LCS::DEPLOY_UPDATE::VM_PRE_VOLUME_DETACH | Deployment | Triggered just before ESC detaches a volume. (This is specified for a group of individual VMs under <vm_group> rather than for the entire deployment.) |
| LCS::DEPLOY_UPDATE::VM_VOLUME_ATTACHED | Deployment | Triggered immediately after ESC has attached a new volume. (This is specified for a group of individual VMs under <vm_group> rather than for the entire deployment.) |
| LCS::DEPLOY_UPDATE::VM_SOFTWARE_VERSION_UPDATED | Deployment | Triggered immediately after ESC has updated the software version of the VM. (This is specified for a group of individual VMs under <vm_group> rather than for the entire deployment.) |
Affinity and anti-affinity rules create relationship between virtual machines (VMs) and hosts. The rule can be applied to VMs, or a VM and a host. The rule either keeps the VMs and hosts together (affinity) or separated (anti-affinity).
Policies are applied during individual VM deployment. You can deploy a single VNF or multiple VNFs together through ESC portal by uploading an existing deployment datamodel or by creating a new deployment datamodel. For more information, see Cisco Elastic Services Portal.
Affinity and anti-affinity policy streamlines the deployment process.
Affinity and anti-affinity rules are created and applied on VMs at the time of deployment. VM receives the placement policies when the deploy workflow is initialized.
During a composite VNF deployment, if a couple of VMs need to communicate with each other constantly, they can be grouped together (affinity rule) and placed on the same host.
If two VMs are over-loading a network, they can be separated (anti-affinity rule) and placed on different hosts to balance the network.
Grouping or separating VMs and hosts at the time of deployment helps ESC to manage load across the VMs and hosts in the network. Recovery and scale out of these VMs do not impact the affinity and anti-affinity rules.
An anti-affinity rule can also be applied between VMs within the same group, placing them on different hosts. These VMs perform similar functions and support each other. When one host is down, the VM on the other host continues to run, preventing any loss of service.
The table shows the types of affinity and anti-affinity policies in a deployment.
| Policy | Type | VM Group | Host | Zone |
---|---|---|---|---|
| Affinity | Intra group affinity | Same VM group | Same host | Same zone |
| Affinity | Inter group affinity | Different VM group | Same host | Same zone |
| Anti-affinity | Intra group anti-affinity | Same VM group | Different host | Same zone |
| Anti-affinity | Inter group anti-affinity | Different VM group | Different host | Same zone |
Note: If the zone is not specified on OpenStack, VMs are placed on different hosts and different zones for inter and intra group anti-affinity rules.
The following sections describe affinity and anti-affinity policies with examples.
The VNFs within the same VM group can either be deployed on the same host, or into the same availability zone.
Example for Intra Group Affinity Policy:
<vm_group>
  <name>affinity-test-gp</name>
  <placement>
    <type>affinity</type>
    <enforcement>strict</enforcement>
  </placement>
...
In ESC Release 2.0 and later, the type zone_host is used to deploy VNFs on the same host or in the same availability zone.
Zone or Host Based Placement

You cannot specify both the host and zone tags; use one of them to deploy VMs on the same host or in the same availability zone.
Example for host placement:
<vm_group>
  <name>zone-host-test-gp1</name>
  <placement>
    <type>zone_host</type>
    <enforcement>strict</enforcement>
    <host>my-ucs-4</host>
  </placement>
...
Example for zone placement:
<vm_group>
  <name>zone-host-test-gp2</name>
  <placement>
    <type>zone_host</type>
    <enforcement>strict</enforcement>
    <zone>dt-zone</zone>
  </placement>
...
The VNFs within the same VM group are explicitly deployed on different hosts. For example, back-up VNFs.
Example for Intra Group anti-affinity Policy:
<vm_group>
  <name>anti-affinity-test-gp</name>
  <placement>
    <type>anti_affinity</type>
    <enforcement>strict</enforcement>
  </placement>
...
Note: You can use one or more vm_group_ref tags, a type tag, and an enforcement tag under the placement tag. The host or zone cannot be specified.
Example for Inter Group Affinity Policy:
<deployments>
  <deployment>
    <name>intergroup-affinity-dep</name>
    <policies>
      <placement>
        <target_vm_group_ref>affinity-test-gp1</target_vm_group_ref>
        <type>affinity</type>
        <vm_group_ref>affinity-test-gp2</vm_group_ref>
        <enforcement>strict</enforcement>
      </placement>
    </policies>
…
Note: You can use only one <target_vm_group_ref> tag, type tag, and enforcement tag under the placement tag. The host or zone cannot be specified. You can use multiple <vm_group_ref> tags; however, the anti-affinity policy applies only between each <vm_group_ref> and its <target_vm_group_ref>. This means that two or more <vm_group_ref> can be deployed on the same host, as long as each of them is deployed on a different host from its <target_vm_group_ref>.
Example for Inter Group anti-affinity Policy:
<deployments>
  <deployment>
    <name>intergroup-anti_affinity-dep</name>
    <policies>
      <placement>
        <target_vm_group_ref>affinity-test-gp1</target_vm_group_ref>
        <type>anti_affinity</type>
        <vm_group_ref>affinity-test-gp2</vm_group_ref>
        <enforcement>strict</enforcement>
      </placement>
    </policies>
…
A placement group tag is added under policies. Each <placement_group> contains the following:
name—name unique per deployment.
type—affinity or anti_affinity
enforcement—strict
vm_group—the content of each vm_group must be a vm group name listed under the same deployment.
The placement_group tag is placed under policies. The placement policy describes the relationship between the target VM group and the VM group members, whereas the placement_group policy describes a mutual relationship among all VM group members. The placement group policy is not applicable to the target VM group.
The datamodel is as follows:
<policies>
  <placement_group>
    <name>placement-affinity-1</name>
    <type>affinity</type>
    <enforcement>strict</enforcement>
    <vm_group>t1g1</vm_group>
    <vm_group>t1g2</vm_group>
    <vm_group>t1g7</vm_group>
  </placement_group>
  <placement_group>
    <name>placement-affinity-2</name>
    <type>affinity</type>
    <enforcement>strict</enforcement>
    <vm_group>t1g3</vm_group>
    <vm_group>t1g4</vm_group>
  </placement_group>
  <placement_group>
    <name>placement-affinity-3</name>
    <type>affinity</type>
    <enforcement>strict</enforcement>
    <vm_group>t1g5</vm_group>
    <vm_group>t1g6</vm_group>
  </placement_group>
  <placement_group>
    <name>placement-anti-affinity-1</name>
    <type>anti_affinity</type>
    <enforcement>strict</enforcement>
    <vm_group>t1g1</vm_group>
    <vm_group>t1g3</vm_group>
    <vm_group>t1g5</vm_group>
  </placement_group>
</policies>
Note: In the placement group tag under policies, <target_vm_group_ref> and <vm_group_ref> are replaced with <vm_group>. The placement group policy is applicable to inter group affinity and anti-affinity policies only. You cannot use both the placement and placement group tags together in inter group affinity and anti-affinity policies. The placement group name tag must be unique for each placement group policy.
Inter deployment anti-affinity rules define relationships between different deployments with respect to host placement. Anti-affinity between deployments means that no VM from one deployment is co-located on the same host as any VM from the other deployment.
Note: Inter deployment anti-affinity is supported on OpenStack only. Inter deployment anti-affinity does not work with host placement (affinity or anti-affinity), as the latter takes precedence over inter deployment anti-affinity rules.
In the ESC datamodel, inter deployment anti-affinity is defined using anti-affinity groups. All member deployments of an anti-affinity group have an anti-affinity relationship between them. For example, in an anti-affinity group called default-anti with three deployments dep-1, dep-2, and dep-3: dep-1 is anti-affinity to dep-2 and dep-3, dep-2 is anti-affinity to dep-1 and dep-3, and dep-3 is anti-affinity to dep-1 and dep-2. A deployment specifies its membership in an anti-affinity group by referencing all group names it belongs to, as shown below.
<deployment>
  <name>VPC-dep</name>
  <deployment_groups>
    <anti_affinity_group>VPC-ANTI-AFFINITY</anti_affinity_group>
    <anti_affinity_group>VPNAAS-ANTI-AFFINITY</anti_affinity_group>
  </deployment_groups>
  ….
</deployment>
In the above example, VPC-dep is in two anti-affinity groups; any other deployment that references one of these two groups has an anti-affinity relationship with VPC-dep.
Inter-deployment Placement Groups

An anti-affinity group is an example of a placement group. Anti-affinity groups have the following properties in ESC:
The affinity and anti-affinity rules for VMware vCenter are explained with the following examples. These rules are created for a cluster and a targeted host.
All VMware vCenter deployments must always be accompanied by a zone-host placement policy. The zone-host defines the target VM group, which is either the cluster or the host.
The VNFs with the same VM group can be deployed on the same host.
During deployment, ESC deploys the first VM as an anchor VM for affinity. All other VMs that follow the same affinity rule are deployed to the same host as the anchor VM. The anchor VM deployment helps to optimize resource usage.
Example for Intra Group Affinity Policy:
…
<vm_group>
  <name>vm-gp</name>
  …
  <placement>
    <type>zone_host</type>
    <enforcement>strict</enforcement>
    <zone>cluster1</zone>
  </placement>
  <placement>
    <type>affinity</type>
    <enforcement>strict</enforcement>
  </placement>
…
Note: In Cisco ESC Release 2.2 and later, only "strict" enforcement is supported.
Note: An affinity or anti-affinity policy combined with a host placement policy is incorrect and may cause deployment failure.
Note: Host placement alone (without an affinity or anti-affinity placement policy within a VM group) can be used to achieve intra group affinity.
The VNFs within the same VM group can be deployed on different hosts. During deployment, ESC deploys VNFs of the same VM group one after the other. At the end of each VNF deployment, ESC records its host in a list. At the beginning of each VNF deployment, ESC deploys the VNF to a computing host that is not in the list. If all available computing hosts are in the list, ESC fails the whole deployment.
Example for Intra Group Anti-Affinity Policy:
…
<vm_group>
  <name>vm-gp</name>
  …
  <placement>
    <type>zone_host</type>
    <enforcement>strict</enforcement>
    <zone>cluster1</zone>
  </placement>
  <placement>
    <type>anti_affinity</type>
    <enforcement>strict</enforcement>
  </placement>
All VMs in a VM group can be deployed to a cluster. For example, all VMs in the VM group CSR-gp1 can be deployed to the cluster ott-cluster2.
Note: The VMware vCenter cluster must be created by the administrator.
Example for cluster placement:
<name>CSR-gp1</name>
<placement>
  <type>zone_host</type>
  <enforcement>strict</enforcement>
  <zone>ott-cluster2</zone>
</placement>
All VMs in a VM group can be deployed to a host. For example, all VMs in the VM group CSR-gp1 are deployed to the host 10.2.0.2.
<name>CSR-gp1</name>
<placement>
  <type>zone_host</type>
  <enforcement>strict</enforcement>
  <host>10.2.0.2</host>
</placement>
The VMs in different VM groups can be deployed to the same host. For example, all VMs in the VM group ASA-gp1 can be deployed to the same host as the VMs in the VM group CSR-gp1.
During deployment, ESC deploys the first VM as an anchor VM. All other VMs that follow the same affinity rule are deployed to the same host as the anchor VM.
Note: To ensure that the inter group affinity rules are applied within a single cluster, verify that all VM groups in a deployment are assigned to the same cluster (<zone> in the ESC datamodel).
Example for Inter Group Affinity Policy:
<deployments>
  <deployment>
    <name>test-affinity-2groups</name>
    <policies>
      <placement>
        <target_vm_group_ref>CSR-gp1</target_vm_group_ref>
        <type>affinity</type>
        <vm_group_ref>CSR-gp2</vm_group_ref>
        <vm_group_ref>ASA-gp1</vm_group_ref>
        <enforcement>strict</enforcement>
      </placement>
    </policies>
The VNFs in the same deployment but in different VM groups can be explicitly deployed on different hosts. During deployment, ESC deploys the first VNF of the target VM group and records its host in a list. ESC then deploys the first VNF of each reference VM group, ensuring that these VNFs are not deployed to any host in the list. It then continues with the second VNF of the target VM group, the second VNF of each reference VM group, and the rest of the VNFs accordingly.
Example for Inter Group Anti-Affinity Policy:
<deployments>
  <deployment>
    <name>vm-groups</name>
    <policies>
      <placement>
        <target_vm_group_ref>CSR-gp1</target_vm_group_ref>
        <type>anti_affinity</type>
        <vm_group_ref>CSR-gp2</vm_group_ref>
        <vm_group_ref>ASA-gp1</vm_group_ref>
        <enforcement>strict</enforcement>
      </placement>
    </policies>
A placement group tag is added under policies. Each <placement_group> contains the following:
name—name unique per deployment.
type—affinity or anti_affinity
enforcement—strict
vm_group—the content of each vm_group must be a vm group name listed under the same deployment.
The placement_group tag is placed under policies. The placement policy describes the relationship between the target VM group and the VM group members, whereas the placement_group policy describes a mutual relationship among all VM group members. The placement group policy is not applicable to the target VM group.
The datamodel is as follows:
<policies>
  <placement_group>
    <name>placement-affinity-1</name>
    <type>affinity</type>
    <enforcement>strict</enforcement>
    <vm_group>t1g1</vm_group>
    <vm_group>t1g2</vm_group>
    <vm_group>t1g7</vm_group>
  </placement_group>
  <placement_group>
    <name>placement-affinity-2</name>
    <type>affinity</type>
    <enforcement>strict</enforcement>
    <vm_group>t1g3</vm_group>
    <vm_group>t1g4</vm_group>
  </placement_group>
  <placement_group>
    <name>placement-affinity-3</name>
    <type>affinity</type>
    <enforcement>strict</enforcement>
    <vm_group>t1g5</vm_group>
    <vm_group>t1g6</vm_group>
  </placement_group>
  <placement_group>
    <name>placement-anti-affinity-1</name>
    <type>anti_affinity</type>
    <enforcement>strict</enforcement>
    <vm_group>t1g1</vm_group>
    <vm_group>t1g3</vm_group>
    <vm_group>t1g5</vm_group>
  </placement_group>
</policies>
Note: In the placement group tag under policies, <target_vm_group_ref> and <vm_group_ref> are replaced with <vm_group>. The placement group policy is applicable to inter group affinity and anti-affinity policies only. You cannot use both the placement and placement group tags together in inter group affinity and anti-affinity policies. The placement group name tag must be unique for each placement group policy.
All Affinity rules defined on VMware vCenter are implemented in a cluster.
DPM, HA and vMotion must be turned off.
VM deployment and recovery are managed by ESC.
DRS must be set to manual mode if it is turned on.
To leverage DRS deployment, shared storage is required.
The supported value for the <enforcement> tag is 'strict'.
<zone_host> must be used for any VM group.
This section describes several interface configurations for Elastic Services Controller (ESC) and the procedure to configure the hardware interfaces.
Before you configure Single Root I/O Virtualization (SR-IOV) in ESC, we highly recommend that you configure the hardware and OpenStack with the correct parameters.
SR-IOV allows multiple VMs running a variety of guest operating systems to share a single PCIe network adapter within a host server. It also allows a VM to move data directly to and from the network adapter, bypassing the hypervisor for increased network throughput and lower server CPU burden.
<interfaces>
  <interface>
    <nicid>0</nicid>
    <network>esc-net</network>
    <type>direct</type>
  </interface>
</interfaces>
...
Cisco Elastic Services Controller allows you to specify the address pairs in the deployment datamodel to pass through a specified port regardless of the subnet associated with the network.
The address pair is configured in the following ways:
<interface>
  <nicid>1</nicid>
  <network>network1</network>
  <allowed_address_pairs>
    <network>
      <name>bb8c5cfb-921c-46ea-a95d-59feda61cac1</name>
    </network>
    <network>
      <name>6ae017d0-50c3-4225-be10-30e4e5c5e8e3</name>
    </network>
  </allowed_address_pairs>
</interface>
</interfaces>
<interface>
  <nicid>0</nicid>
  <network>esc-net</network>
  <allowed_address_pairs>
    <address>
      <ip_address>10.10.10.10</ip_address>
      <netmask>255.255.255.0</netmask>
    </address>
    <address>
      <ip_address>10.10.20.10</ip_address>
      <netmask>255.255.255.0</netmask>
    </address>
  </allowed_address_pairs>
</interface>
To configure security group rules, do the following:
You can configure hardware acceleration features on OpenStack using the flavor datamodel. The following hardware acceleration features can be configured:
vCPU Pinning—enables binding and unbinding of a process to a vCPU (Virtual Central Processing Unit) or a range of CPUs, so that the process executes only on the designated CPU or CPUs rather than any CPU.
OpenStack performance optimization for large pages and non-uniform memory access (NUMA)—improves system performance for large pages and NUMA, that is, the system's ability to accept a higher load and to be tuned to handle that load.
OpenStack support for PCIe Passthrough interface—enables assigning a PCI device to an instance on OpenStack.
$ cat fl.xml
<?xml version='1.0' encoding='ASCII'?>
<esc_datamodel xmlns="http://www.cisco.com/esc/esc">
  <flavors>
    <flavor>
      <name>testfl6</name>
      <vcpus>1</vcpus>
      <memory_mb>2048</memory_mb>
      <root_disk_mb>10240</root_disk_mb>
      <ephemeral_disk_mb>0</ephemeral_disk_mb>
      <swap_disk_mb>0</swap_disk_mb>
      <properties>
        <property>
          <name>pci_passthrough:alias</name>
          <value>nic1g:1</value>
        </property>
      </properties>
    </flavor>
  </flavors>
</esc_datamodel>
$ /opt/cisco/esc/esc-confd/esc-cli/esc_nc_cli edit-config ./fl.xml
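Similarly, a hedged sketch of a flavor requesting vCPU pinning and large pages; it assumes the standard OpenStack extra specs hw:cpu_policy and hw:mem_page_size are enabled on your compute hosts:

$ cat pin_fl.xml
<?xml version='1.0' encoding='ASCII'?>
<esc_datamodel xmlns="http://www.cisco.com/esc/esc">
  <flavors>
    <flavor>
      <name>pinnedfl</name>
      <vcpus>4</vcpus>
      <memory_mb>8192</memory_mb>
      <root_disk_mb>10240</root_disk_mb>
      <ephemeral_disk_mb>0</ephemeral_disk_mb>
      <swap_disk_mb>0</swap_disk_mb>
      <properties>
        <property>
          <name>hw:cpu_policy</name>
          <value>dedicated</value>
        </property>
        <property>
          <name>hw:mem_page_size</name>
          <value>large</value>
        </property>
      </properties>
    </flavor>
  </flavors>
</esc_datamodel>
$ /opt/cisco/esc/esc-confd/esc-cli/esc_nc_cli edit-config ./pin_fl.xml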
ESC supports VMware vCenter PCI or PCIe device passthrough (VMDirectPath I/O). This enables VM access to physical PCI functions on platforms with an I/O memory management unit.
Before You Begin
For the PCI / PCIe devices of a host VM to enable passthrough, the vSphere administrator must mark these devices in the vCenter.
Note: You must reboot the host after the PCI settings are configured. Put the host in maintenance mode, and power off or migrate all VMs to other hosts.
<tenants>
  <tenant>
    <name>admin</name>
    <deployments>
      <deployment>
        <name>abc-test</name>
        <vm_group>
          <name>test-g1</name>
          <image>uLinux</image>
          <bootup_time>300</bootup_time>
          <recovery_wait_time>10</recovery_wait_time>
          <interfaces>
            <interface>
              <nicid>1</nicid>
              <network>MgtNetwork</network>
              <ip_address>10.79.0.102</ip_address>
            </interface>
            <interface>
              <nicid>2</nicid>
              <network>VM Network</network>
              <type>passthru</type>
              <ip_address>172.16.0.0</ip_address>
            </interface>
            <interface>
              <nicid>3</nicid>
              <network>VM Network</network>
              <type>passthru</type>
              <ip_address>10.84.46.117</ip_address>
            </interface>
          </interfaces>
After successful deployment, the passthru value is set in the interface section of the notification as well as in the operational data.
ESC allows one or more PCI or PCIe passthrough devices to be attached to each deployment without specifying a particular PCI ID. ESC first selects a host, then selects the next available PCI or PCIe passthrough-enabled device and attaches it during the deployment. If no PCI or PCIe passthrough-enabled device is available, ESC fails the deployment. The vSphere administrator must ensure that all computing hosts within the target computing cluster have a sufficient number of PCI or PCIe passthrough-enabled devices.