Cisco Network Function Virtualization Infrastructure (Cisco NFVI) provides the virtual layer and hardware environment in which virtual network functions (VNFs) can operate. VNFs provide well-defined network functions such as routing, intrusion detection, domain name service (DNS), caching, and network address translation (NAT). While these network functions required tight integration between network software and hardware in the past, VNFs decouple the software from the underlying hardware.

Cisco NFVI is based on the Newton release of OpenStack, the open source cloud operating system that controls large pools of compute, storage, and networking resources. The Cisco version of OpenStack is Cisco Virtualization Infrastructure Manager (VIM). VIM manages the OpenStack compute, network, and storage services, and all Cisco NFVI build and control functions. Cisco NFVI pods perform four key roles:

  • Control (including Networking)

  • Compute

  • Storage

  • Management, logging, and monitoring

Hardware used to create the Cisco NFVI pods includes:

  • Cisco UCS® C240 M4—Performs management and storage functions and services. Includes a dedicated Ceph (UCS C240 M4) distributed object store and file system. (Only Red Hat Ceph is supported.)

  • Cisco UCS C220/240 M4—Performs control and compute services.

  • Cisco UCS B200 M4 blades—Can be used instead of the UCS C220 for compute and control services. The B200 blades and C240 Ceph server are connected with redundant Cisco Fabric Interconnects managed by UCS Manager.

The UCS C240 and C220 servers are M4 Small Form Factor (SFF) models with Cisco FlexFlash 64 GB Secure Digital cards and two solid-state drives (SSDs). Each UCS C240, C220, and B200 server has two 10 GE Cisco UCS Virtual Interface Cards.

Software applications that manage Cisco NFVI hosts and services include:

  • Red Hat Enterprise Linux 7.3 with OpenStack Platform 10.0—Provides the core operating system with OpenStack capability. RHEL 7.3 and OSP 10.0 are installed on all Cisco NFVI UCS servers.

  • Cisco Virtual Infrastructure Manager (VIM)—An OpenStack orchestration system that helps deploy and manage an OpenStack cloud offering from bare metal installation to OpenStack services, taking into account hardware and software redundancy, security and monitoring. Cisco VIM includes the OpenStack Newton release with additional features and usability enhancements tested for functionality, scale, and performance.

  • Cisco Insight—Deploys, provisions, and manages Cisco VIM on Cisco UCS servers.

  • Cisco UCS Manager—Used to perform certain management functions when UCS B200 blades are installed.

  • Cisco Integrated Management Controller (IMC)—Provides embedded server management for Cisco UCS C-Series Rack Servers. The supported Cisco IMC firmware version for a fresh install of Cisco VIM 2.0 is 2.0(13i) or greater. Pods running the VIM 1.0 release continue to work with 2.0(3i), 2.0(6d), 2.0(6f), 2.0(8d), 2.0(8g), 2.0(9c), 2.0(9e), 2.0(10d), and 2.0(10e) as they go through the upgrade. If the server has an Intel NIC, Cisco IMC firmware version 2.0(13i) or greater is recommended. Under no circumstances can the CIMC run a 3.0 series version.
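
    CIMC version strings such as 2.0(13i) can be compared by major and minor release, build number, and build letter. A minimal sketch of such a pre-install check (the helper names and the parsing rules here are illustrative assumptions, not a Cisco tool):

    ```python
    import re

    def parse_cimc_version(version: str) -> tuple:
        """Split a CIMC version string like '2.0(13i)' into comparable parts."""
        m = re.fullmatch(r"(\d+)\.(\d+)\((\d+)([a-z]?)\)", version.strip())
        if not m:
            raise ValueError(f"unrecognized CIMC version: {version!r}")
        major, minor, build, letter = m.groups()
        return (int(major), int(minor), int(build), letter)

    def supported_for_fresh_install(version: str) -> bool:
        """Fresh Cisco VIM 2.0 installs require CIMC 2.0(13i) or greater;
        the 3.0 series is not supported under any circumstances."""
        parsed = parse_cimc_version(version)
        if parsed[0] >= 3:  # reject the 3.0 series outright
            return False
        return parsed >= (2, 0, 13, "i")
    ```

    With this sketch, 2.0(13i) and later 2.0 builds pass, while 2.0(10e) and any 3.0 release are rejected.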

  • Cisco Virtual Topology System (VTS)—A standards-based, open overlay management and provisioning system for data center networks. It automates data center overlay fabric provisioning for physical and virtual workloads.

  • Cisco Virtual Topology Forwarder (VTF)—Included with VTS, VTF leverages Vector Packet Processing (VPP) to provide high performance Layer 2 and Layer 3 VXLAN packet forwarding.

Supported Layer 2 networking protocols include:

  • Virtual extensible LAN (VXLAN) over a Linux bridge

  • Open vSwitch (OVS) over VLAN (SRIOV with Intel 710 NICs)

  • ML2/VPP over VLAN (C-Series only)
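
The VXLAN option above encapsulates tenant Layer 2 frames behind an 8-byte VXLAN header carried over UDP port 4789, with a 24-bit VXLAN Network Identifier (VNI), per RFC 7348. A minimal sketch of that header layout (illustrative only, not part of Cisco VIM):

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a 24-bit VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags, with the I bit (0x08) set to mark the VNI as valid.
    # Bytes 1-3 and byte 7 are reserved; bytes 4-6 carry the VNI.
    return struct.pack("!B", 0x08) + b"\x00" * 3 + vni.to_bytes(3, "big") + b"\x00"

# Example: the header for VNI 5000 is 08 00 00 00 00 13 88 00.
```

The 24-bit VNI is what lets VXLAN scale to roughly 16 million segments, compared with the 4094 usable IDs of a plain VLAN.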

UCS B-Series pods support Single Root Input/Output Virtualization (SRIOV). SRIOV allows a single physical PCI Express device to be shared across multiple virtual environments, presenting separate virtual functions, for example network adapters, to different virtual components on a physical server.
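On a Linux host, virtual functions are typically enabled by writing a count to the NIC's sriov_numvfs sysfs attribute, bounded by the device limit exposed in sriov_totalvfs. A minimal sketch of that interaction, assuming the standard Linux sysfs layout (the helper is a hypothetical example; Cisco VIM performs this configuration itself):

```python
def sriov_enable_command(interface: str, num_vfs: int, total_vfs: int) -> str:
    """Return the sysfs write that would enable `num_vfs` virtual functions
    on `interface`, after validating against the device limit that Linux
    exposes in .../device/sriov_totalvfs."""
    if not 0 <= num_vfs <= total_vfs:
        raise ValueError(f"{num_vfs} VFs requested, device supports {total_vfs}")
    # Writing a count to sriov_numvfs tells the PF driver to create that
    # many virtual functions, each attachable to a separate VM.
    return f"echo {num_vfs} > /sys/class/net/{interface}/device/sriov_numvfs"
```

Each virtual function created this way appears as an independent PCI device that can be passed through to a VM, bypassing the hypervisor's software switch.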

Any connection protocol can be used unless you install UCS B200 blades with the UCS Manager plugin, in which case, only OVS over VLAN can be used.

Features of Cisco VIM 2.0

Cisco VIM 2.0 is a standalone, fully automated cloud lifecycle manager from Cisco for private cloud. The current version of VIM integrates with Cisco C-Series or B-Series UCS servers and Cisco or Intel NICs.

The following table provides a summary of the feature set that is offered.

OpenStack Version

RHEL 7.3 with OSP 10 (Newton).

Hardware Support Matrix

  1. UCS C220/B200 M4 controller or compute with Intel V3 (Haswell).
  2. UCS C240 M4 controller or compute with Intel V4 (Broadwell).

NIC support

  1. Cisco VIC: VIC 1227, 1240, 1340, 1380.
  2. Intel NIC: X710.

ToR and FI support

  1. For VTS-based installations, use the following Nexus versions: 7.0(3)I2(2a) and 7.0(3)I2(2c).
  2. For mechanism drivers other than VTS, use the following Nexus software versions: 7.0(3)I4(6) and 7.0(3)I6(1).

  3. UCS-FI-6296

Mechanism Drivers

OVS/VLAN, Linuxbridge/VXLAN, and ML2/VPP over VLAN (based on the VPP fast virtual switch from the Fast Networking, Fast Data (FD.io) project).

SDN Controller Integration

VTS; ACI (ships in the night).

Install Methodology

Fully automated online or offline.


Scale

  1. Compute: 40 hosts
  2. Ceph OSD: 20 hosts

Automated Pod Life Cycle Management

  1. Add or remove compute and Ceph nodes and replace controller.
  2. Reconfiguration of passwords and selected optional services.
  3. Automated software update.

Platform security

Secure OS, RBAC, Network isolation, TLS, Source IP filtering, Keystone v3, Bandit, CSDL compliant, hardened OS, SELinux.


EPA

NUMA, CPU pinning, huge pages, SRIOV with Intel NIC.

HA and Reliability

  1. Redundancy at hardware and software level.
  2. Automated backup and restore of management node.

Unified Management Support

Single pane of glass in a single or multi instance (HA) mode: Supports multi-tenancy and manages multiple pods from one instance.

Central Logging

ELK integrated with external syslog for log offload.

VM Migration

Cold migration and resizing.


Storage

Object store with SwiftStack, block storage with Ceph.


"Collectd" (system statistics collection daemon) , or third party integration with Zenoss (Called NFVIMON).

Integrated Test Tools

  1. Open source data plane performance benchmarking: VMTP (an open source VM-to-VM data plane performance benchmarking tool) and NFVBench (an open source NFVI data plane and service chain performance benchmarking tool).
  2. Services health check integration: CloudPulse and CloudSanity (platform services integrated health check tools).

POD Type

  1. Dedicated controller, compute, and storage nodes.
  2. Micro pod: integrated controller, compute, and storage node.

Known Caveats

The following list describes the known caveats in NFVI 2.0.

  • Translation of vic_slot '7' to 'MLOM' fails in CIMC version 2.0(13i).

  • Traffic loss of 8 to 10 seconds is seen while a controller with the active L3 agent reboots.

  • Volume attach failure errors need to be reported back to the user.

  • After a MariaDB HA event, you may need to run the recovery playbook.

  • Nova compute reports that it is down for up to two minutes after a controller reboot.

  • Nova HA: A VM is stuck in the scheduling state after a Nova conductor HA event.

  • An auto-created L3 network is not cleaned up properly on router or tenant deletion.

  • Update fails if a compute node is not reachable, even though the containers on the controller node are updated.

  • The Nova API is unavailable for a few minutes after you bring down a controller.

  • The ARP entry on the ToR is intermittently not refreshed, resulting in external ping failures to the VM VIP.

  • The Ceph cluster does not move to the Error state when all storage nodes are down.

  • Rollback is not supported for a repo update failure.

  • NFVBench multi-chaining support for EXT chains with ARP does not work with TRex trunk ports.

  • Master Newton (ml2_vpp): Adding a compute node leaves the bond interfaces on VPP in a down state.

  • The Kibana dashboard intermittently displays "Visualize: unknown error" and "unknown error" messages.

  • Newton 1.9.11 (VTS/VPP): VTF does not clean up the tenant route after the Neutron router interface is deleted.

  • The recovery playbook needs to handle Ceph recovery after a power outage.

  • NFVBench in VTS fails when the NFVBench ToR is a 93180YC.

Using the Cisco Bug Search Tool

You can use the Bug Search Tool to search for a specific bug or to search for all bugs in a release.


Step 1

Go to the Cisco Bug Search Tool.

Step 2

In the Log In screen, enter your registered username and password, and then click Log In. The Bug Search page opens.


If you do not have a username and password, you can register for them at

Step 3

To search for a specific bug, enter the bug ID in the Search For field and press Enter.

Step 4

To search for bugs in the current release:

  1. In the Search For field, enter Cisco Network Function Virtualization Infrastructure 2.0(1) and press Enter. (Leave the other fields empty.)

  2. When the search results are displayed, use the filter tools to find the types of bugs you are looking for. You can search for bugs by status, severity, modified date, and so forth.

    To export the results to a spreadsheet, click the Export Results to Excel link.     

Related Documentation

The Cisco NFVI 2.0 documentation set consists of:

  • Cisco NFV Infrastructure Installation Guide

  • Cisco NFV Infrastructure Administrator Guide

  • Cisco NFV Infrastructure Release Notes

These documents will be available when Cisco NFV Infrastructure is released.

Obtaining Documentation and Submitting a Service Request

For information on obtaining documentation, submitting a service request, and gathering additional information, see the monthly What’s New in Cisco Product Documentation, which also lists all new and revised Cisco technical documentation, at:

Subscribe to the What’s New in Cisco Product Documentation as an RSS feed and set content to be delivered directly to your desktop using a reader application. The RSS feeds are a free service. Cisco currently supports RSS Version 2.0.