Cisco Network Function Virtualization Infrastructure (Cisco NFVI) provides the virtual layer and hardware environment in which virtual network functions (VNFs) operate. VNFs provide well-defined network functions such as routing, intrusion detection, Domain Name Service (DNS), caching, Network Address Translation (NAT), and others. While these network functions previously required tight integration between network software and hardware, VNFs decouple the software from the underlying hardware.

Cisco NFVI is based on the Newton release of OpenStack, an open source cloud operating system that controls large pools of compute, storage, and networking resources. The Cisco version of OpenStack is Cisco Virtualization Infrastructure Manager (VIM). VIM manages the OpenStack compute, network, and storage services, and all Cisco NFVI build and control functions. Cisco NFVI pods perform four key roles:

  • Control (including Networking)

  • Compute

  • Storage

  • Management, logging, and monitoring

Hardware used to create the Cisco NFVI pods includes:

  • Cisco UCS® C240 M4, C240 M5, or C220 M5—Performs management and storage functions and services. Includes a dedicated Ceph (UCS C240 M4 or C240 M5) distributed object store and file system. (Only Red Hat Ceph is supported.)

  • Cisco UCS C220/240 M4 or M5—Performs control and compute services.

  • HP DL360 Gen9—Supported as a third-party compute node; the control plane still runs on Cisco UCS servers.

  • Cisco UCS B200 M4 blades—Can be used instead of the UCS C220 for compute and control services. The B200 blades and C240 Ceph server are joined with redundant Cisco Fabric Interconnects managed by UCS Manager.

  • Combinations of M5-series servers are supported in the micropod and in the VIC/NIC (40G)-based hyper-converged and micropod offerings.

The UCS C240 and C220 servers are M4/M5 Small Form Factor (SFF) models where the operating system boots from HDD for control nodes, from HDD/SSD for compute nodes, and from internal SSD for Ceph nodes. Each UCS C240, C220, and B200 server has two 10 GE Cisco UCS Virtual Interface Cards.

Software applications that manage Cisco NFVI hosts and services include:

  • Red Hat Enterprise Linux 7.4 with OpenStack Platform 10.0—Provides the core operating system with OpenStack capability. RHEL 7.4 and OSP 10.0 are installed on all Cisco NFVI UCS servers.

  • Cisco Virtual Infrastructure Manager (VIM)—An OpenStack orchestration system that helps to deploy and manage an OpenStack cloud offering, from bare-metal installation to OpenStack services, taking hardware and software redundancy, security, and monitoring into account. Cisco VIM includes the OpenStack Newton release with additional features and usability enhancements that are tested for functionality, scale, and performance.

  • Cisco Unified Management/Insight—Deploys, provisions, and manages Cisco VIM on Cisco UCS servers.

  • Cisco UCS Manager—Used to perform certain management functions when UCS B200 blades are installed.

  • Cisco Integrated Management Controller (IMC)—When installing Cisco VIM 2.4, Cisco IMC 2.0(13i) or later is supported.

    For the Cisco IMC 2.0 lineup, the recommended version information is as follows:

    UCS M4 servers: Cisco IMC 2.0(13n) or later is recommended.

    For the Cisco IMC 3.x lineup, the recommended version information is as follows:

    UCS M4 servers: Cisco IMC 3.0(3a) or later, except for 3.0(4a); Cisco IMC 3.0(4d) is recommended.

    UCS M5 servers: Cisco IMC 3.1(2b) or later.

  • Cisco Virtual Topology System (VTS)—VTS is a standards-based, open overlay management and provisioning system for data center networks. It automates data center overlay fabric provisioning for physical and virtual workloads.

  • Cisco Virtual Topology Forwarder (VTF)—Included with VTS, VTF leverages Vector Packet Processing (VPP) to provide high-performance Layer 2 and Layer 3 VXLAN packet forwarding.

Layer 2 networking protocols include:

  • VXLAN supported using Linux Bridge

  • VTS VLAN supported using ML2/VPP

  • VLAN supported using Open vSwitch (OVS) and ML2/VPP (including SRIOV with the Intel X710 NIC)

  • VLAN supported using ML2/ACI

Single Root I/O Virtualization (SRIOV) is supported on UCS B-Series pods and on C-Series pods with Intel NICs. SRIOV allows a single physical PCI Express device to be shared across multiple virtual environments, offering different virtual functions to different virtual components (for example, network adapters) on a physical server.
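As an illustration of what SRIOV exposes to the host, the sketch below reads the Linux sysfs attributes that report virtual-function counts for a NIC. The helper name and the example interface are hypothetical; the sysfs paths are standard Linux kernel conventions, not Cisco VIM specifics.

```python
# Sketch: inspecting a NIC's SR-IOV virtual functions through the
# standard Linux sysfs interface. The helper name and the "eno1"
# interface are illustrative; the sysfs attribute paths are kernel
# conventions, not Cisco VIM specifics.
from pathlib import Path
from typing import Optional, Tuple

def sriov_vf_counts(ifname: str,
                    sysfs: str = "/sys/class/net") -> Optional[Tuple[int, int]]:
    """Return (VFs currently enabled, VFs supported) for a NIC,
    or None if the device does not expose SR-IOV."""
    dev = Path(sysfs) / ifname / "device"
    total = dev / "sriov_totalvfs"
    if not total.exists():
        return None  # NIC or its driver has no SR-IOV support
    numvfs = dev / "sriov_numvfs"
    return int(numvfs.read_text()), int(total.read_text())

if __name__ == "__main__":
    info = sriov_vf_counts("eno1")  # example interface name
    print("SR-IOV not available" if info is None
          else f"VFs in use: {info[0]}, VFs supported: {info[1]}")
```

A compute with a pure Cisco VIC (see below) would simply report no SR-IOV capability through this interface.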

Any connection protocol can be used unless you install UCS B200 blades with the UCS Manager plugin, in which case, only OVS over VLAN can be used.

Features of Cisco VIM 2.4.2

Cisco VIM is the only standalone, fully automated cloud lifecycle manager offered by Cisco for the private cloud. The current version of VIM integrates with Cisco C-Series or B-Series UCS servers and Cisco or Intel NICs. This document and its accompanying administrator guide help cloud administrators set up and manage the private cloud.

The following are the features of Cisco VIM:



OpenStack Version

RHEL 7.4 with OSP 10 (Newton)

Hardware Support Matrix

  • UCS C220/B200 M4 controller or compute with Intel V3 (Haswell)

  • UCS C240/220 M4 controller or compute + Intel V4 (Broadwell)

  • UCS C240/220 M5 controller or compute + Intel Skylake CPU

  • HP DL360 Gen 9

  • UCS C220/240 M5 in a micropod environment, with an option to add up to 16 220/240-M5 computes.

  • UCS C240/220 M5 controller or compute + Intel X710 NIC and SR-IOV

NIC support

  • Cisco VIC: VIC 1227, 1240, 1340, 1380

  • Intel NIC: X710, 520, XL710

POD Type

  • Dedicated control, compute and storage (C-Series) node running on Cisco VIC (M4 only), or Intel 710 (full on)

  • Dedicated control, compute, and storage (B-Series) node running on Cisco VIC

  • MICRO POD: Integrated (AIO) control, compute, and storage (C-Series) nodes running on Cisco VIC, Intel 710X, or a VIC/NIC combo. The micropod can optionally be expanded to accommodate more computes running the same NIC type; this can be done as a day-0 or day-1 activity. HDD- or SSD-based M5 micropods are supported; an Intel NIC-based micropod supports SRIOV, with the M5-based micropod supporting the XL710 as an option for SRIOV.

  • Hyper-Converged on M4 (UMHC): Dedicated control and compute nodes, with all storage acting as compute (C-series) nodes, running on a combination of 1-Cisco VIC (1227) and 2x10GE 520 or 2x40GE 710XL Intel NIC with an option to migrate from one to another.

  • Hyper-Converged (NGENAHC): Dedicated control and compute nodes, with all storage acting as compute (C-series) nodes, running on a combination of 1-Cisco VIC (1227) for the control plane, and 1x10GE 710X (2 port) Intel NIC for the Data plane (over VPP).

  • Hyper-Converged on M5: Dedicated control and compute nodes, with all storage acting as compute (C-series) nodes, running on a combination of 1-Cisco VIC (40G) and 2x40GE 710XL Intel NIC.


In a full-on (VIC-based) or UMHC M4 pod, computes can have either a combination of 1 Cisco VIC (1227) and 2x10GE 520 or 2x40GE 710XL Intel NICs, or a single Cisco VIC (1227). A compute running a pure Cisco VIC does not run SR-IOV. In Cisco VIM 2.4, HP DL360 Gen9 is supported as a third-party compute.

Currently, we do not support a mix of computes from different vendors.

ToR and FI support

  • For VTS-based installations, use the following Nexus versions: 7.0(3)I2(2a) and 7.0(3)I2(2c).

  • For mechanism drivers other than VTS, use the following Nexus software versions: 7.0(3)I4(6) and 7.0(3)I6(1).

  • Support of Cisco NCS 5500 (with the recommended Cisco IOS XR version), with splitter cable support.

  • Nexus 9K switches running ACI 3.0 (for the mechanism driver ACI)

Install or update mode

  • Connected to the Internet or air-gapped.

  • Support of a Software Distribution Server (SDS) to mitigate the logistics problems of USB distribution for air-gapped installations.

IPv6 support for management network

  • Static IPv6 management assignment for servers.

  • Support of IPv6 for NTP, DNS, LDAP, external syslog server, and AD.

  • Support of IPv6 for the cloud API end point.

Mechanism Drivers

OVS/VLAN, Linuxbridge/VXLAN, ACI/VLAN, and VPP/VLAN (Fast Networking, Fast Data; based on the VPP fast virtual switch).

VPP with LACP is now the default configuration for the data plane.

SDN Controller Integration

VTS; ACI (ships-in-the-night or with the Unified ACI plugin), with Cisco VIC or Intel NIC on the UCS C-Series M4 platform.

Install Methodology

Fully automated online or offline.

Scale


  • Full on: Total of 60 (compute and OSD) nodes (with Ceph OSD max at 20).

  • Micropod: maximum of 16 standalone compute nodes.

    Ceph OSDs can be HDD or SSD based, but the disk type must be uniform across the pod. Computes can boot from 2x1.2 TB HDD or 2x1.6 TB SSD; in the same pod, some computes can have SSDs while others have HDDs.
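The uniformity constraint on Ceph OSD disk types can be checked mechanically before deployment. The sketch below is a hypothetical validator; the function name and the input mapping are assumptions for illustration, not part of Cisco VIM.

```python
# Sketch: validating that Ceph OSD disk types are uniform across a
# pod, per the constraint described above. The function name and the
# mapping format are illustrative, not part of Cisco VIM.
def check_osd_uniformity(osd_disk_types):
    """osd_disk_types maps a storage-node name to 'HDD' or 'SSD'.
    Raises ValueError if the pod mixes OSD disk types; returns the
    single disk type in use (or None for an empty mapping)."""
    kinds = set(osd_disk_types.values())
    if len(kinds) > 1:
        raise ValueError(f"mixed OSD disk types in pod: {sorted(kinds)}")
    return kinds.pop() if kinds else None
```

Note that the check applies only to the storage (OSD) nodes; compute boot disks may legitimately differ within the same pod.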

Automated Pod Life Cycle Management

  • Add or remove compute and Ceph nodes and replace the controller

  • Reconfiguration of passwords and selected optional services

  • Automated software update

Platform security

Secure OS, RBAC, Network isolation, TLS, Source IP filtering, Keystone v3, Bandit, CSDL-compliant, hardened OS, SELinux.

Ability to change the CIMC password after installation, for maintenance and security.

Non-root login for administrators.

Enabling of a custom policy for the VNF Manager.

EPA


NUMA, CPU pinning, huge pages, SRIOV with Intel NIC.

HA and Reliability

  • Redundancy at hardware and software level.

  • Automated backup and restore of the management node.

Unified Management Support

Single pane of glass in a single- or multi-instance (HA) mode: Supports multi-tenancy and manages multiple pods from one instance.

Central Logging

ELK integrated with an external syslog server (over IPv4 or IPv6) for log offload, with optional support of NFS with ELK snapshots.

External Syslog Servers

Support of multiple external syslog servers over IPv4 or IPv6. The minimum and maximum numbers of external syslog servers supported are 1 and 3, respectively.
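A configuration tool consuming this limit could enforce it up front. The sketch below is a hypothetical pre-flight check; the function name and input format are assumptions, not Cisco VIM's API.

```python
# Sketch: enforcing the documented limit of one to three external
# syslog servers before applying a configuration. The function name
# and input format are illustrative, not Cisco VIM's API.
def validate_syslog_servers(servers):
    """Accept a list of syslog server addresses (IPv4 or IPv6)."""
    if not 1 <= len(servers) <= 3:
        raise ValueError(
            f"expected 1 to 3 external syslog servers, got {len(servers)}")
    return list(servers)
```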

VM Migration

Cold migration and resizing.

Live migration.

Storage
  • Object store with SwiftStack, and block storage with Ceph or NetApp.

  • Option to use Ceph for Glance and SolidFire for Cinder.

Monitoring



  • Third-party integration with Zenoss (called NFVIMON).

  • Optional auto-ToR configuration of collector ToR ports, when Cisco NCS 5500 is used as ToR.

Support of External Auth System

  • LDAP

  • Active Directory (AD)

Software update

Update of the cloud software for bug fixes within the same release.

Software upgrade

Upgrade of a non-VTS cloud from release 2.2.24 to release 2.4.2.

CIMC upgrade capability

Central management tool to upgrade the CIMC bundle image of one or more servers.

Technical support for CIMC

Collection of technical support for CIMC.

Splitter cable support for Cisco NCS 5500

Automated splitter cable support for Cisco NCS 5500.

Extending auto-ToR configuration of Cisco NCS 5500

Extending auto-ToR configuration of Cisco NCS 5500 to include the NFVIMON collector.

Enable TTY logging as an option

Enables TTY logging and forwards the logs to syslog and the Kibana dashboard. Optionally forwards the logs to a remote syslog server if one is configured.

Power Management of Computes

Option to power off or on computes selectively to conserve energy.

Disk maintenance for Pod Nodes

Ability to replace faulty disk(s) on the Pod node(s) without the need for add/remove/replace node operation.

Integrated Test Tools

  • Open Source Data-plane Performance Benchmarking: VMTP (an open source data plane VM to VM performance benchmarking tool), NFVBench (NFVI data plane and a service chain performance benchmarking tool)

  • Services Health Checks Integration: Cloudpulse and Cloudsanity.

Known Caveats

The following list describes the known caveats in NFVI 2.4.2:

  • Translation of vic_slot from 7 to MLOM fails in CIMC version 2.0(13i).

  • Traffic loss of 8 to 10 seconds is seen when the active L3 agents are rebooted.

  • Volume-attach failure errors are not reported to the user.

  • When a MariaDB HA event is logged, you have to run the recovery playbook.

  • The nova-compute service is down for up to two minutes after a controller reboot.

  • Nova HA: A VM is stuck in the scheduling state while conducting HA on the Nova conductor.

  • An auto-created Layer 3 network is not cleaned up when the router or tenant is deleted.

  • Update fails if a compute is not reachable, even after the containers on the controller node are updated.

  • The Nova API is unavailable for a few minutes once the controller is down.

  • The ARP entry on the ToR does not get refreshed, which results in the failure of a Layer 3 ping to the VM FIP.

  • The Ceph cluster does not move to the error state when all the storage nodes are down.

  • Rollback is not supported for a repo update failure.

  • VMs intermittently go to the SHUTOFF state after a compute node reboot.

  • The recovery playbook needs to handle Ceph recovery after a power outage.

  • Performance issue on the IE browser.

  • Insight UI: Pod users cannot update the REST API password once it is changed.

  • When using the ACI mechanism driver, the neutron quota-update CLI is not enforced.

  • When using the ACI mechanism driver, VMs originally in an active state on the compute node are unable to acquire an IP address from DHCP.

  • The representation of the service type, such as cloud-formation, in the OpenStack endpoint needs to be changed.

  • Virtual disk creation fails due to the busy state of the physical disk.

Using the Cisco Bug Search Tool

You can use the Bug Search Tool to search for a specific bug or to search for all bugs in a release.


Step 1

Go to the Cisco Bug Search Tool.

Step 2

In the Log In screen, enter your registered username and password, and then click Log In. The Bug Search page opens.


If you do not have a username and password, you can register for them at

Step 3

To search for a specific bug, enter the bug ID in the Search For field and press Enter.

Step 4

To search for bugs in the current release:

  1. In the Search For field, enter Cisco Network Function Virtualization Infrastructure 2.0(1) and press Enter. (Leave the other fields empty.)

  2. When the search results are displayed, use the filter tools to find the types of bugs you are looking for. You can search for bugs by status, severity, modified date, and so forth.

    To export the results to a spreadsheet, click the Export Results to Excel link.     

Obtaining Documentation and Submitting a Service Request

For information on obtaining documentation, submitting a service request, and gathering additional information, see the monthly What’s New in Cisco Product Documentation, which also lists all new and revised Cisco technical documentation, at:

Subscribe to the What’s New in Cisco Product Documentation as an RSS feed and set content to be delivered directly to your desktop using a reader application. The RSS feeds are a free service. Cisco currently supports RSS Version 2.0.

External References

NFVI documentation is available at: