Introduction

Cisco Network Function Virtualization Infrastructure (Cisco NFVI) provides the virtual layer and hardware environment in which virtual network functions (VNFs) can operate. VNFs provide well-defined network functions such as routing, intrusion detection, Domain Name Service (DNS), caching, and Network Address Translation (NAT). While these network functions previously required tight integration between network software and hardware, VNFs decouple the software from the underlying hardware.

Cisco NFVI is based on the Newton release of OpenStack, the open source cloud operating system that controls large pools of compute, storage, and networking resources. The Cisco version of OpenStack is Cisco Virtualized Infrastructure Manager (VIM). VIM manages the OpenStack compute, network, and storage services, and all Cisco NFVI build and control functions. Cisco NFVI pods perform four key roles:

  • Control (including Networking)

  • Compute

  • Storage

  • Management, logging, and monitoring

Hardware used to create the Cisco NFVI pods includes:

  • Cisco UCS® C240 M4—Performs management and storage functions and services. Includes a dedicated Ceph (UCS C240 M4) distributed object store and file system. (Only Red Hat Ceph is supported.)

  • Cisco UCS C220/240 M4—Performs control and compute services.

  • HP DL360 Gen9—Supported as a third-party compute node; the control plane still runs on Cisco UCS servers.

  • Cisco UCS B200 M4 blades—Can be used instead of the UCS C220 for compute and control services. The B200 blades and the C240 Ceph server are joined with redundant Cisco Fabric Interconnects managed by UCS Manager.

The UCS C240 and C220 servers are M4 Small Form Factor (SFF) models where the operating system boots from HDD for control nodes, from HDD or SSD for compute nodes, and from internal SSD for Ceph nodes. Each UCS C240, C220, and B200 server has two 10 GE Cisco UCS Virtual Interface Cards.

Software applications that manage Cisco NFVI hosts and services include:

  • Red Hat Enterprise Linux 7.4 with OpenStack Platform 10.0—Provides the core operating system with OpenStack capability. RHEL 7.4 and OSP 10.0 are installed on all Cisco NFVI UCS servers.

  • Cisco Virtualized Infrastructure Manager (VIM)—An OpenStack orchestration system that deploys and manages an OpenStack cloud, from bare-metal installation through OpenStack services, while accounting for hardware and software redundancy, security, and monitoring. Cisco VIM includes the OpenStack Newton release with additional features and usability enhancements that are tested for functionality, scale, and performance.

  • Cisco Insight—Deploys, provisions, and manages Cisco VIM on Cisco UCS servers.

  • Cisco UCS Manager—Used to perform certain management functions when UCS B200 blades are installed.

  • Cisco Integrated Management Controller (IMC)—Provides embedded server management for Cisco UCS C-Series Rack Servers. For an install or upgrade of Cisco VIM 2.2, the supported Cisco IMC firmware version is 2.0(13i) or greater; because of security issues, we recommend CIMC 2.0(13n). Similarly, in the CIMC 3.0 lineup, you must choose a version greater than or equal to 3.0(3a). However, do not use CIMC 3.0(4a).

  • Cisco Virtual Topology System (VTS)—A standards-based, open overlay management and provisioning system for data center networks. It automates data center overlay fabric provisioning for physical and virtual workloads.

  • Cisco Virtual Topology Forwarder (VTF)—Included with VTS, VTF leverages Vector Packet Processing (VPP) to provide high performance Layer 2 and Layer 3 VXLAN packet forwarding.

Layer 2 networking protocols include:

  • VXLAN supported using Linux Bridge

  • VTS VXLAN supported using ML2/VPP

  • VLAN supported using Open vSwitch (OVS) and ML2/VPP (including SR-IOV with the Intel 710 NIC)

  • VLAN supported using ML2/ACI

Pods based on UCS B-Series servers, and pods based on C-Series servers with Intel NICs, support Single Root I/O Virtualization (SR-IOV). SR-IOV allows a single physical PCI Express device to be shared across multiple virtual environments, presenting separate virtual functions, for example network adapters, to different virtual components on a physical server.

Any of these connection protocols can be used unless you install UCS B200 blades with the UCS Manager plugin, in which case only OVS over VLAN can be used.
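
As a hedged illustration of these tenant network types, the following Python sketch (using the open source openstacksdk library, which is not part of Cisco VIM itself) creates a VLAN provider network and an SR-IOV port on a deployed OpenStack cloud. The cloud name "mycloud", the physical network label "physnet1", and VLAN ID 100 are placeholder assumptions.

    import openstack

    # Connect using a clouds.yaml entry; "mycloud" is a placeholder name.
    conn = openstack.connect(cloud="mycloud")

    # A VLAN provider network; assumes "physnet1" and VLAN 100 are valid
    # in the ML2 configuration of the deployed pod.
    network = conn.network.create_network(
        name="tenant-vlan-100",
        provider_network_type="vlan",
        provider_physical_network="physnet1",
        provider_segmentation_id=100,
    )

    conn.network.create_subnet(
        network_id=network.id,
        ip_version=4,
        cidr="192.0.2.0/24",
    )

    # An SR-IOV port: vnic_type "direct" asks Neutron to bind the port to
    # a virtual function on the NIC (valid only on SR-IOV capable pods).
    sriov_port = conn.network.create_port(
        network_id=network.id,
        binding_vnic_type="direct",
    )
    print(network.id, sriov_port.id)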

Features of Cisco VIM 2.4

Cisco VIM is the only standalone, fully automated cloud lifecycle manager that Cisco offers for the private cloud. The current version of VIM integrates with Cisco UCS C-Series or B-Series servers and Cisco or Intel NICs. This document and its accompanying administrator guide help cloud administrators set up and manage the private cloud.

The following are the features of Cisco VIM:

OpenStack Version

RHEL 7.4 with OSP 10 (Newton)

Hardware Support Matrix

  • UCS C220/B200 M4 controller or compute with Intel V3 (Haswell)

  • UCS C240/220 M4 controller or compute with Intel V4 (Broadwell)

  • HP DL360 Gen 9

  • 24 UCS C220/240 M5 in a micropod environment, with an option to add up to 16 220/240-M5 computes

NIC support

  • Cisco VIC: VIC 1227, 1240, 1340, 1380

  • Intel NIC: X710, 520, XL710

POD Type

  • Dedicated control, compute, and storage (C-Series) nodes running on Cisco VIC or Intel X710 (full-on)

  • Dedicated control, compute, and storage (B-Series) nodes running on Cisco VIC

  • MICRO POD: Integrated (AIO) control, compute, and storage (C-Series) nodes running on Cisco VIC, Intel X710, or a VIC/NIC combo. The micropod can optionally be expanded to accommodate more computes running with the same NIC type. This can be done as a day-0 or day-1 activity. Supports HDD- or SSD-based M5 micropods; the Intel NIC-based micropod supports SR-IOV, with the M5-based micropod supporting XL710 as an option for SR-IOV.

  • Hyper-Converged on M4 (UMHC): Dedicated control and compute nodes, with all storage acting as compute (C-Series) nodes, running on a combination of one Cisco VIC (1227) and 2x10GE 520 or 2x40GE 710XL Intel NICs, with an option to migrate from one to the other.

  • Hyper-Converged (NGENAHC): Dedicated control and compute nodes, with all storage acting as compute (C-Series) nodes, running on a combination of one Cisco VIC (1227) for the control plane and 1x10GE 710X (2-port) Intel NIC for the data plane (over VPP).

  • Hyper-Converged on M5: Dedicated control and compute nodes, with all storage acting as compute (C-Series) nodes, running on a combination of one Cisco VIC (40G) and 2x40GE 710XL Intel NICs.

Note 

In a full-on (VIC-based) or UMHC M4 pod, computes can have either a combination of one Cisco VIC (1227) and 2x10GE 520 or 2x40GE 710XL Intel NICs, or one Cisco VIC (1227) alone. A compute running a pure Cisco VIC does not run SR-IOV. In Cisco VIM 2.4, we support HP DL360 Gen9 as a third-party compute.

Currently, we do not support a mix of computes from different vendors.

ToR and FI support

  • For VTS-based installations, use one of the following Nexus versions: 7.0(3)I2(2a) or 7.0(3)I2(2c).

  • For mechanism drivers other than VTS, use one of the following Nexus software versions: 7.0(3)I4(6) or 7.0(3)I6(1).

  • Support of NCS-5500 (with recommended Cisco IOS XR version 6.1.33.02I)

  • Nexus 9K switches running ACI 3.0 (for the mechanism driver ACI)

IPV6 Support for Management Network

  • Static IPv6 management assignment for servers

  • Support of IPv6 for NTP, DNS, LDAP, external syslog server, and AD.

  • Support of IPv6 for the Cloud API endpoint.
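
As a small illustration of static IPv6 assignment for the management network and services, the following Python sketch validates candidate addresses with the standard-library ipaddress module. The addresses use the 2001:db8::/32 documentation prefix and are placeholders, not values from a real pod.

    import ipaddress

    # Hypothetical static IPv6 assignments for the management node and
    # IPv6-reachable services (NTP, DNS, external syslog).
    candidates = {
        "management": "2001:db8:100::10",
        "ntp": "2001:db8:100::123",
        "dns": "2001:db8:100::53",
        "syslog": "2001:db8:100::514",
    }

    for role, addr in candidates.items():
        ip = ipaddress.ip_address(addr)
        # Every entry must be IPv6 and must not be link-local.
        assert ip.version == 6 and not ip.is_link_local
        print(f"{role}: {ip.compressed}")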

Mechanism Drivers

OVS/VLAN, Linuxbridge/VXLAN, ACI/VLAN, VPP/VLAN (Fast Networking, Fast Data FD.io VPP/VLAN, based on the FD.io VPP fast virtual switch).

Note 

VPP with LACP is now the default configuration for the data plane.

SDN Controller Integration

VTS, and ACI (ships-in-the-night or with the Unified ACI Plugin), with Cisco VIC or Intel NIC on the UCS C-Series M4 platform.

Install Methodology

Fully automated online or offline.

Scale

  • Full-on: total of 60 (compute and OSD) nodes (with Ceph OSD max at 20).

  • Micropod: max of 16 standalone compute nodes.
    Note 

    Ceph OSDs can be HDD or SSD based; the type must be uniform across the pod. Computes can boot off 2x1.2TB HDD or 2x1.6TB SSD; in the same pod, some computes can have SSDs, while others can have HDDs.

Automated Pod Life Cycle Management

  • Add or remove compute and Ceph nodes and replace the controller

  • Reconfiguration of passwords and selected optional services

  • Automated software update

Platform security

Secure OS, RBAC, Network isolation, TLS, Source IP filtering, Keystone v3, Bandit, CSDL-compliant, hardened OS, SELinux.

Ability to change the CIMC password after install for maintenance and security.

Non-root login for administrators.

Enabling Custom Policy for VNF Manager.
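
As a hedged illustration of the Keystone v3 support listed above, this Python sketch authenticates against a cloud's v3 identity endpoint over TLS using the keystoneauth1 library. The endpoint URL, credentials, and domain names are placeholders.

    from keystoneauth1.identity import v3
    from keystoneauth1.session import Session

    # Placeholder endpoint and credentials; a real pod's values differ.
    auth = v3.Password(
        auth_url="https://cloud.example.com:5000/v3",
        username="admin",
        password="example-password",
        project_name="admin",
        user_domain_name="Default",
        project_domain_name="Default",
    )

    # verify=True enforces TLS certificate validation.
    sess = Session(auth=auth, verify=True)
    print("Issued Keystone v3 token:", sess.get_token()[:12], "...")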

EPA

NUMA, CPU pinning, huge pages, SRIOV with Intel NIC.

HA and Reliability

  • Redundancy at hardware and software level.

  • Automated backup and restore of the management node.

Unified Management Support

Single pane of glass in a single- or multi-instance (HA) mode: supports multi-tenancy and manages multiple pods from one instance.

Central Logging

ELK integrated with external syslog (over IPv4 or IPv6) for log offload, with optional support of NFS for ELK snapshots.

External Syslog Servers

Support of multiple external syslog servers over IPv4 or IPv6. The minimum and maximum number of external syslog servers supported is 1 and 3, respectively.
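
To illustrate the syslog offload, here is a minimal Python sketch that forwards application logs to one external syslog server using the standard-library SysLogHandler. The server address and port are placeholder assumptions; an IPv6 literal works equally well for the host field.

    import logging
    from logging.handlers import SysLogHandler

    # Placeholder external syslog server (UDP/514 by default).
    handler = SysLogHandler(address=("203.0.113.10", 514))
    handler.setFormatter(logging.Formatter("nfvi-demo: %(levelname)s %(message)s"))

    log = logging.getLogger("nfvi-demo")
    log.addHandler(handler)
    log.setLevel(logging.INFO)
    log.info("log offload test message")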

VM Migration

Cold migration and resizing; live migration.
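
The following hedged sketch shows what a resize and a cold migration look like through the openstacksdk Python API; the cloud, server, and flavor names are placeholders, and the actual operation is subject to the scheduler and available capacity.

    import openstack

    conn = openstack.connect(cloud="mycloud")  # placeholder cloud name
    server = conn.compute.find_server("demo-vm")

    # Resize: move the VM to a new flavor, then confirm the change.
    flavor = conn.compute.find_flavor("m1.large")
    conn.compute.resize_server(server, flavor.id)
    conn.compute.wait_for_server(server, status="VERIFY_RESIZE")
    conn.compute.confirm_server_resize(server)

    # Cold migration: let the scheduler pick a new host for the VM.
    conn.compute.migrate_server(server)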

Storage

Object storage with SwiftStack; block storage with Ceph or NetApp.
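
As a hedged example of the block storage support, this sketch creates a volume and attaches it to a server through openstacksdk; whether the volume lands on Ceph or NetApp depends on the pod's configured Cinder backend. Names and sizes are illustrative.

    import openstack

    conn = openstack.connect(cloud="mycloud")  # placeholder cloud name

    # Create a 10 GB volume on whatever backend Cinder provides.
    volume = conn.block_storage.create_volume(name="demo-vol", size=10)
    conn.block_storage.wait_for_status(volume, status="available")

    # Attach the volume to a running server.
    server = conn.compute.find_server("demo-vm")
    conn.compute.create_volume_attachment(server, volume_id=volume.id)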

Monitoring

Third-party integration with Zenoss (called NFVIMON).

Support of External Auth System

  • LDAP

  • Active Directory (AD)

Software Update

Update of Cloud Software for bug fixes on the same release.

Power Management of Computes

Option to power off or on computes selectively to conserve energy.

Disk maintenance for Pod Nodes

Ability to replace faulty disk(s) on the pod node(s) without the need for an add, remove, or replace node operation.

Integrated Test Tools

  • Open Source Data-plane Performance Benchmarking: VMTP (an open source VM-to-VM data-plane performance benchmarking tool) and NFVBench (an NFVI data-plane and service-chain performance benchmarking tool)

  • Services Health Checks Integration: Cloudpulse and Cloudsanity.
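
VMTP and NFVBench are the supported benchmarking tools. Purely as an illustration of the kind of VM-to-VM round-trip measurement that VMTP automates, here is a simple Python echo-based latency probe; it is not VMTP itself, and the target host and port are placeholders (the target VM must run an echo service).

    import socket
    import time

    def measure_rtt(host: str, port: int, payload: bytes = b"ping") -> float:
        """Return one TCP round-trip time, in milliseconds, to an echo server."""
        with socket.create_connection((host, port), timeout=5) as sock:
            start = time.perf_counter()
            sock.sendall(payload)
            sock.recv(len(payload))
            return (time.perf_counter() - start) * 1000.0

    # Placeholder: a VM at 192.0.2.10 running an echo service on port 7.
    print(f"RTT: {measure_rtt('192.0.2.10', 7):.2f} ms")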

Known Caveats

The following list describes the known caveats in NFVI 2.4:

CSCve39684
Translation of vic_slot 7 to MLOM fails in CIMC version 2.0(13i).
CSCva37451
Traffic loss of 8-10 seconds is seen when a controller with the active L3 agent reboots.
CSCva36943
Volume attach failure errors should be reported to the user.
CSCva36914
After a MariaDB HA event, you should run the recovery playbook.
CSCva36907
The Nova compute service reports as down for up to two minutes after a controller reboot.
CSCva36782
Nova HA: VM is stuck in scheduling state after Nova conductor HA.
CSCva32195
Auto-created L3 network not cleaned up with the router/tenant deletion.
CSCva32312
Update fails if compute is not reachable even after updating the containers on the controller node.
CSCva34476
The Nova API is unavailable for a few minutes after a controller goes down.
CSCva32193
The ARP entry on the ToR does not refresh, resulting in external ping failures to the VM VIP.
CSCva57121
The Ceph cluster does not move to Error state when all storage nodes are down.
CSCva66093
Rollback not supported for repo update failure.
CSCvf81055
VMs go to 'SHUTOFF' state intermittently on compute node reboot.
CSCve13042
The recovery playbook needs to handle Ceph recovery after a power outage.
CSCve76157
Performance issue on the Internet Explorer browser.
CSCvf74264
Insight UI: The pod users cannot update the restapi password once it is changed.
CSCvf86622
When using MECHANISM_DRIVER: aci, quotas set through the neutron quota-update command-line interface are not enforced.
CSCvf86623
When using MECHANISM_DRIVER: aci, VMs originally in an ACTIVE state on a rebooted compute node are unable to acquire an IP address from DHCP.
CSCvi35426
Security groups with ML2/VPP are not supported in CVIM 2.4.
CSCvi64002
[NCS/CVIM] BGP NCS peer configuration is not applied when BGP sessions already exist.
CSCvi98399
The representation of the service type cloud-formation in the OpenStack endpoint should be changed.

Using the Cisco Bug Search Tool

You can use the Bug Search Tool to search for a specific bug or to search for all bugs in a release.

Procedure


Step 1

Go to the Cisco Bug Search Tool.

Step 2

In the Log In screen, enter your registered Cisco.com username and password, and then click Log In. The Bug Search page opens.

Note 

If you do not have a Cisco.com username and password, you can register for them at http://tools.cisco.com/RPF/register/register.do.

Step 3

To search for a specific bug, enter the bug ID in the Search For field and press Enter.

Step 4

To search for bugs in the current release:

  1. In the Search For field, enter Cisco Network Function Virtualization Infrastructure 2.4(1) and press Enter. (Leave the other fields empty.)

  2. When the search results are displayed, use the filter tools to find the types of bugs you are looking for. You can search for bugs by status, severity, modified date, and so forth.

    Tip 
    To export the results to a spreadsheet, click the Export Results to Excel link.     

Related Documentation

The Cisco NFVI 2.4.1 documentation set consists of:

  • Cisco NFV Infrastructure Installation Guide

  • Cisco NFV Infrastructure Administrator Guide

  • Cisco NFV Infrastructure Release Notes

These documents are available on cisco.com when Cisco NFV Infrastructure is released.

Obtaining Documentation and Submitting a Service Request

For information on obtaining documentation, submitting a service request, and gathering additional information, see the monthly What’s New in Cisco Product Documentation, which also lists all new and revised Cisco technical documentation, at: http://www.cisco.com/c/en/us/td/docs/general/whatsnew/whatsnew.html

Subscribe to the What’s New in Cisco Product Documentation as an RSS feed and set content to be delivered directly to your desktop using a reader application. The RSS feeds are a free service. Cisco currently supports RSS Version 2.0.

External References

NFVI documentation is now available on Cisco.com.

Here are the documentation links:

  • Release Note:

  • Installation Guide:

  • Administration Guide: