Introduction

Cisco Network Function Virtualization Infrastructure (Cisco NFVI) provides the virtual layer and hardware environment in which virtual network functions (VNFs) operate. VNFs provide well-defined network functions such as routing, intrusion detection, Domain Name Service (DNS), caching, Network Address Translation (NAT), and other network functions. While network functions previously required tight integration between network software and hardware, VNFs decouple the software from the underlying hardware.

Cisco NFVI 3.2.2 is based on the Queens release of OpenStack, an open source cloud operating system that controls large pools of compute, storage, and networking resources. The Cisco version of OpenStack is Cisco Virtualized Infrastructure Manager (Cisco VIM). Cisco VIM manages the OpenStack compute, network, and storage services, and all Cisco NFVI build and control functions.
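
Because Cisco VIM deploys standard OpenStack services, day-to-day interaction with the resulting cloud uses ordinary OpenStack tooling. The following minimal sketch (not part of Cisco VIM itself) shows how a client could reach the Queens APIs with the openstacksdk Python library; the Keystone endpoint, credentials, and project values are placeholders.

    # Minimal sketch: connect to the OpenStack APIs that Cisco VIM deploys.
    # All endpoint and credential values are hypothetical placeholders.
    import openstack

    conn = openstack.connect(
        auth_url="https://cloud.example.com:5000/v3",  # Keystone v3 endpoint (placeholder)
        project_name="demo",
        username="demo",
        password="secret",
        user_domain_name="Default",
        project_domain_name="Default",
    )

    # List hypervisors (admin-only) and images as a quick sanity check.
    for hypervisor in conn.compute.hypervisors():
        print("hypervisor:", hypervisor.name)
    for image in conn.image.images():
        print("image:", image.name)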

Key roles of Cisco NFVI pods are:

  • Control (including Networking)

  • Computes

  • Storage

  • Management, logging, and monitoring

Hardware that is used to create the Cisco NFVI pods includes:

  • Cisco UCS® C240 M4, C240 M5, or C220 M5—Performs management and storage functions and services. Includes a dedicated Ceph (UCS C240 M4 or UCS C240 M5) distributed object store and file system. (Only Red Hat Ceph is supported.)

  • Cisco UCS C220/240 M4 or M5—Performs control and compute services.

  • HP DL360 Gen9—Supported as a third-party compute, while the control plane still runs on Cisco UCS servers.

  • Cisco UCS B200 M4 blades—Can be used instead of the UCS C220 for compute and control services. The B200 blades and C240 Ceph server are joined with redundant Cisco Fabric Interconnects that are managed by UCS Manager.

  • Combinations of M5-series servers are supported in the micropod and the VIC/NIC (40G) based hyper-converged and micropod offerings.

  • Quanta servers as an alternative to Cisco UCS servers—Specific Quanta servers can be used for the installation of the cloud both at the core and at the edge. Automated installation of the central Ceph cluster on the edge pod is supported for Glance image services.

The UCS C240 and C220 servers are M4/M5 Small Form Factor (SFF) models, where the operating system boots from HDD/SSD for control and compute nodes, and from internal SSD for Ceph nodes. Cisco supports pure Intel NIC configurations as well as Cisco 40G VIC with Intel NIC configurations.

Software applications that manage Cisco NFVI hosts and services include:

  • Red Hat Enterprise Linux 7.6 with OpenStack Platform 13.0—Provides the core operating system with OpenStack capability. RHEL 7.6 and OSP 13.0 are installed on all Cisco NFVI UCS servers.

  • Cisco VIM—An OpenStack orchestration system that helps deploy and manage an OpenStack cloud offering, from bare-metal installation to OpenStack services, taking into account hardware and software redundancy, security, and monitoring. Cisco VIM includes the OpenStack Queens release with additional features and usability enhancements that are tested for functionality, scale, and performance.

  • Cisco Unified Management—Deploys, provisions, and manages Cisco VIM on Cisco UCS servers. It also provides a UI to manage multiple pods when installed on a dedicated Unified Management node.

  • Cisco VIM Monitor—Provides integrated monitoring and alerting of the NFV infrastructure layer.

  • Cisco UCS Manager—Used to perform certain management functions when UCS B200 blades are installed.

  • Cisco Integrated Management Controller (IMC)—When installing Cisco VIM, Cisco IMC 2.0(13i) or later is supported, but certain IMC versions are recommended, as listed below.

    For the Cisco IMC 2.0 lineup, the recommended version information is as follows:

    UCS-M4 servers

    Recommended: Cisco IMC 2.0(13n) or later.

    For the Cisco IMC 3.x lineup, the recommended version is as follows:

    UCS-M4 servers

    Supported versions are Cisco IMC 3.0(3a) or later, except for 3.0(4a). Recommended: Cisco IMC 3.0(4d). Extended support of 4.0(1a), 4.0(1b), and 4.0(1c).

    UCS-M5 servers

    Recommended to stay with Cisco IMC 3.1(2b). Ensure that you do not use 3.1(3c) through 3.1(3h).

    Extended support of CIMC 4.0(1a) and 4.0(1c).

  • Cisco Virtual Topology System (VTS)—VTS is a standards-based, open overlay management and provisioning system for data center networks. It automates data center overlay fabric provisioning for physical and virtual workloads.

  • Cisco Virtual Topology Forwarder (VTF)—Included with VTS, VTF leverages Vector Packet Processing (VPP) to provide high-performance Layer 2 and Layer 3 VXLAN packet forwarding.

Layer 2 networking protocols include:

  • VXLAN supported using Linux Bridge

  • VTS VXLAN supported using ML2/VPP

  • VLAN supported using Open vSwitch (OVS)

  • VLAN supported using ML2/VPP. It is supported only on Intel NIC.

  • VLAN supported using ML2/ACI

For pods based on UCS B-series servers, and for pods based on C-series servers with Intel NICs, Single Root I/O Virtualization (SRIOV) is supported. SRIOV allows a single physical PCI Express device to be shared across different virtual environments, offering different virtual functions to different virtual components, for example, network adapters, on a physical server.

For B-series based pods, the installation is limited to OVS.
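
As an illustration of how a tenant consumes SRIOV where it is available (a hedged sketch, not Cisco VIM tooling): a virtual function is typically requested by creating a Neutron port with vnic_type "direct" and booting a server against it. The cloud entry, network, image, and flavor names below are assumptions.

    import openstack

    conn = openstack.connect(cloud="cvim-pod")  # assumes a clouds.yaml entry named "cvim-pod"

    # Create a port whose binding asks Neutron for an SR-IOV virtual function.
    network = conn.network.find_network("provider-vlan-100")  # hypothetical provider network
    port = conn.network.create_port(
        network_id=network.id,
        binding_vnic_type="direct",
    )

    # Boot a server attached to the SR-IOV port.
    server = conn.compute.create_server(
        name="sriov-vm",
        image_id=conn.image.find_image("rhel-guest").id,    # placeholder image
        flavor_id=conn.compute.find_flavor("m1.large").id,  # placeholder flavor
        networks=[{"port": port.id}],
    )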

Features of Cisco VIM 3.2.2

Cisco VIM is the only standalone, fully automated cloud lifecycle manager offered by Cisco for the private cloud. The current version of Cisco VIM integrates with Cisco C-series or B-series UCS servers and Cisco or Intel NICs. This document and its accompanying administrator and installation guides help cloud administrators set up and manage the private cloud.

The following are the features of Cisco VIM:


OpenStack Version

RHEL 7.6 with OSP 13 (Queens)

Hardware Support Matrix

  • UCS C220/B200 M4 controller or compute with Intel V3 (Haswell)

  • UCS C240/220 M4 controller or compute + Intel V4 (Broadwell)

  • UCS C240/220 M5 controller or compute + Intel V5 (Skylake)

  • HP DL360 Gen 9

  • UCS C220/240 M5 in a micropod environment, with an option to add up to 16 220/240-M5 computes.

  • UCS C240/220 M5 controller or compute + Intel X710 NIC and SR-IOV

  • Quanta servers as an alternate to Cisco UCS servers for Full on and edge deployment of the cloud.

  • Quanta servers for Central Ceph cluster for edge pod to offer glance image services.

NIC support

  • Cisco VIC: VIC 1227, 1240, 1340, 1380, 1387

  • Intel NIC: X710, X520, XL710

POD Type

  • Dedicated control, compute and storage (C-Series) node running on Cisco VIC or Intel X710 (full on) with Cisco Nexus 9000 or Cisco NCS 5500 series switch (only for Intel NIC based pod and VPP as mechanism driver) as ToR.

  • Dedicated control, compute, and storage (C-series) node running on Cisco VIC and Intel NIC (full on) with Cisco Nexus 9000 as ToR. SRIOV is supported on Intel NIC only.

    Support of Intel X520 (with 2 NIC cards/compute) on M4 pods or XL710 (2 or 4 NIC cards/compute) on M4/M5 pods for SRIOV cards. A few computes can run with or without SRIOV in a given pod.

    For M4 pods, VIC/NIC computes running XL710 and X520 can reside in the same pod.

  • Dedicated control, compute, and storage (B-Series) node running on Cisco NIC

  • Micropod: Integrated (AIO) control, compute, and storage (C-series) node running on Cisco VIC, Intel X710, or a VIC/NIC combo. The micropod can optionally be expanded to accommodate more computes running with the same NIC type. This can be done as a day-0 or day-1 activity. Support for HDD or SSD-based M5 micropod; the Intel NIC-based micropod supports SRIOV, with the M5-based micropod supporting XL710 as an option for SRIOV.

  • Hyper-Converged on M4 (UMHC): Dedicated control and compute nodes, with all storage acting as compute (C-series) nodes, running on a combination of 1-Cisco VIC (1227) and 2x10GE 520 or 2x40GE 710XL Intel NIC with an option to migrate from one to another.

  • Hyper-Converged (NGENAHC): Dedicated control and compute nodes, with all storage acting as compute (C-series) nodes, running on a combination of 1 Cisco VIC (1227) for the control plane and 1x10GE X710 (2-port) Intel NIC for the data plane (over VPP).

  • Hyper-converged on M5: Dedicated control and compute nodes, with all storage acting as compute (C-series) nodes, running on a combination of 1-Cisco VIC (40G) and 2x40GE 710XL Intel NIC.

  • Hyper-converged on M5 with Intel NIC: Dedicated control and compute nodes, with all storage acting as compute (C-series) nodes, running on 2x10GE X710 Intel NIC with VPP.

  • Quanta server based pods for Full on and Edge clouds. The edge cloud communicates with Quanta server based Central Ceph cluster for glance service.

  • Support of M4 (10G VIC + 10/40G NIC) and M5 (40G VIC + 40G NIC) computes, control, and Ceph nodes in the same pod.

Note 

In a full-on (VIC-based) or UMHC M4 pod, computes can either have a combination of 1 Cisco VIC (1227) and (2x10GE 520/2x40GE 710XL Intel NIC) or 1 Cisco VIC (1227). A compute running a pure Cisco VIC does not run SR-IOV. From Cisco VIM 2.4 onwards, HP DL360 Gen9 is supported as a third-party compute.

A mix of computes from different vendors is not supported.

ToR and FI support

  • For VTS-based installation, use Nexus versions 7.0(3)I7(2) and 9.2(1).

  • For mechanism drivers other than VTS, use Nexus software version 7.0(3)I4(6) or 7.0(3)I6(1).

  • Support of Cisco NCS 5500 (with recommended Cisco IOS XR version 6.1.33.02I or 6.5.1) with splitter cable support. Also, the day-0 configuration is extended to support user-defined route-target and Ethernet segment ID (ESI).

  • Cisco Nexus 9000 switches running ACI 3.2(4d) with plug-in version 4.0.1 (for the ACI mechanism driver).

Install or update mode

  • Connected (to the Internet) or air-gapped.

  • Support of Cisco VIM Software Hub over v4 and v6 to mitigate the logistics problems associated with USB distribution for air-gapped installations.

  • Support of USB 3.0 drives for M5 and Quanta-based management nodes.

IPV6 support for management network

  • Static IPv6 management assignment for servers.

  • Support of IPv6 for NTP, DNS, LDAP, external syslog server, and AD.

  • Support of IPv6 for the cloud API end point.

  • Support of CIMC over IPv6.

  • REST API over IPv6.

  • Support for IPv6 filters for administration source networks.

  • Support of UM over IPv6.

Mechanism Drivers

OVS/VLAN, Linuxbridge/VXLAN, ACI/VLAN, and VPP/VLAN (Fast Networking, Fast Data FD.io VPP/VLAN, based on the FD.io VPP fast virtual switch). A usage sketch for the VLAN-based drivers follows the note below.

Note 
  • VPP with LACP is the default configuration for data plane.

  • VPP is not supported on VIC.
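
For the VLAN-based mechanism drivers above (OVS/VLAN, VPP/VLAN, ACI/VLAN), an administrator creates VLAN provider networks through the standard Neutron API regardless of driver. A minimal sketch follows; the physnet name, segment ID, and CIDR are assumptions that depend on the pod's network layout.

    import openstack

    conn = openstack.connect(cloud="cvim-admin")  # hypothetical admin entry in clouds.yaml

    # Create a VLAN provider network; the configured mechanism driver
    # (OVS, VPP, or ACI) realizes it on the data plane.
    net = conn.network.create_network(
        name="provider-vlan-2003",
        provider_network_type="vlan",
        provider_physical_network="physnet1",  # placeholder physnet name
        provider_segmentation_id=2003,         # placeholder VLAN segment ID
    )
    conn.network.create_subnet(
        network_id=net.id,
        ip_version=4,
        cidr="192.0.2.0/24",  # documentation range used as a placeholder
        name="provider-vlan-2003-subnet",
    )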

SDN controller integration

VTS 2.6.2.1 with optional feature of Managed VTS; ACI (ships in the night or with Unified ACI Plugin) 4.0.1 with Cisco VIC or Intel NIC on the UCS C-series M4/M5 platform.

Automation of ToR configuration via ACI API in the “ships in the night” model.

Scale

  • Total of 120 nodes (compute and OSD) with Ceph OSD max at 20.

    Note 
    It is recommended to deploy 30 nodes at a time. Also, after day-0, you can add only one Ceph node at a time.
  • Micropod: Supports a maximum of 16 standalone compute nodes.

    Note 
    Ceph OSDs can be HDD or SSD based, but must be uniform across the pod. Computes can boot off 2x1.2 TB HDD or 2x960 GB SSD. In the same pod, some computes can have SSD, while others can have HDD.

Automated pod life cycle management

  • Addition/removal of compute and Ceph nodes and replacement of the controller nodes.

  • Static IP management for storage network.*

  • Reduction of tenant/provider VLAN via reconfiguration to a minimum of two.*

  • Reconfiguration of passwords and selected optional services.

  • Automated software update.

  • Reconfiguration of NTP and DNS.

Platform security

  • Secure OS, RBAC, network isolation, TLS, source IP filtering (v4 and v6), Keystone v3, Bandit, CSDL-compliant, hardened OS, and SELinux.

  • Enabling change of CIMC password post installation, for maintenance and security.

  • Non-root login for administrators.

  • Enabling custom policy for VNF Manager.

  • Option to disable the management node reachability to the cloud API network.

  • Read-only option for Horizon.

  • Hosting of Horizon behind NAT or with a DNS alias.

  • Cinder volume encryption via LUKS (see the sketch after this list).

  • Support of configurable login banner for SSH sessions.

  • Access to management node via LDAP.

  • Support for IPv6 filters for administration source networks.

  • Access of NFVIMON via non-root user.

  • Extend permit_root_login to Unified Management node. *
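
For the Cinder LUKS item above, a hedged sketch of the tenant-side workflow: once the administrator configures an encrypted volume type, tenants consume it by creating volumes of that type. The volume type name below is an assumption.

    import openstack

    conn = openstack.connect(cloud="cvim-pod")  # hypothetical clouds.yaml entry

    # Create a block storage volume of a (hypothetical) LUKS-encrypted type;
    # Cinder applies the encryption transparently.
    volume = conn.block_storage.create_volume(
        name="encrypted-data",
        size=10,             # size in GiB
        volume_type="LUKS",  # placeholder name of the admin-defined encrypted type
    )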

Enhanced Platform Awareness (EPA)

  • Supports NUMA, CPU pinning, huge pages, and SRIOV with Intel NIC.

  • Support trusted_vf, huge page percentage, huge page size*, and tr_rx_buffer_size* for OVS on a per compute basis.

  • Ability to allocate user-defined CPU cores (up to 6) to VPP.

  • Ability to allocate user-defined CPU cores (up to 12) to Ceph for micropod and hyper-converged nodes.

HA and reliability

  • Redundancy at hardware and software level.

  • Automated backup and restore of the management node.

Unified Management (UM) Support

  • Single pane of glass in a standalone mode. Supports multi-tenancy and manages multiple pods from one instance.

  • LDAP support for authentication to UM. Default is local.

Central logging

ELK integrated with external syslog (over v4 or v6) for a log offload, with optional support of NFS with ELK snapshot.

External syslog servers

Support of multiple external syslog servers over IPv4 or IPv6. The minimum and maximum number of external syslog servers supported is 1 and 4, respectively.*

VM migration

  • Cold migration and resizing.

  • Live migration.

Storage

  • Object store with SwiftStack, and block storage with Ceph (using bluestore) or NetApp.

  • Option to use Ceph for Glance and Solidfire for Cinder.

  • Option to have multi-backend (HDD and SSD based) Ceph in the same cluster to support various I/O requirements and latency.


Monitoring

  • Cisco VIM monitor as a Cisco solution over v4 and/or v6. Acts as a single pane of glass to collect metrics from the entire pod. Supports customizing alerts, sending SNMP traps, and exporting to external metric collectors (see the sketch after this list).

  • Support of admin and non-admin CVIM-MON users.*

  • Ceilometer for resource tracking and alarming capabilities across core OpenStack components.

  • Third-party integration with Zenoss (called NFVIMON) in HA.*
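
As an illustration of the metric-export item above, a sketch under the assumption that the pod exposes a Prometheus-compatible query API (the URL below is a placeholder): an external collector could pull pod health like this.

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical Prometheus-compatible endpoint on the management node.
    PROM_URL = "https://mgmt-node.example.com:9090/api/v1/query"

    params = urllib.parse.urlencode({"query": "up == 0"})  # targets reporting down
    with urllib.request.urlopen(f"{PROM_URL}?{params}") as resp:
        payload = json.load(resp)

    for item in payload["data"]["result"]:
        print(item["metric"].get("instance"), "is down")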

Optional OpenStack Features

  • Enable trusted virtual function on a per server basis.

  • DHCP reservation for virtual MAC addresses (see the sketch after this list).
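
For the DHCP reservation item above, a hedged sketch of one common realization: create a Neutron port with a fixed MAC address and a fixed IP so that DHCP always hands that address to the corresponding virtual interface. The network name and addresses are placeholders.

    import openstack

    conn = openstack.connect(cloud="cvim-pod")  # hypothetical clouds.yaml entry

    net = conn.network.find_network("tenant-net")  # placeholder tenant network
    port = conn.network.create_port(
        network_id=net.id,
        mac_address="fa:16:3e:00:00:42",          # virtual MAC to reserve for
        fixed_ips=[{"ip_address": "10.0.0.42"}],  # address DHCP will always serve
    )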

Support of External Auth System

  • LDAP with anonymous bind option.

  • Active Directory (AD).

Software update

Update of cloud software for bug fixes within the same release.

Software upgrade

Software upgrade of non-VTS cloud from Cisco VIM 3.0.0 to Cisco VIM 3.2.2.*

CIMC/BMC upgrade capability

Central management tool to upgrade the CIMC bundle image of one or more servers.

Support of automated update of BMC/BIOS and firmware in Quanta server.

VPP port mirroring

Ability to trace or capture packets for debugging and other administrative purposes.

VXLAN extension into the cloud

  • Extending native external VXLAN networks into VNFs in the cloud.

  • Support of Layer 3 adjacency for BGP.

  • Support of single VXLAN network or multi-VXLAN network (with head-end-replication option) terminating on the same compute node.

    Note 
    Only two VXLAN networks are supported.

Technical support for CIMC

Collection of technical support for CIMC.

Enable TTY logging as an option

Enables TTY logging and forwards the logs to the external syslog server and to the ELK stack running on the management node. Optionally, it forwards the logs to a remote syslog server if that option is available.

Automated enablement of Intel X710/XL710 NIC's PXE configuration on Cisco UCS-C series

Utility to update Intel X710/XL710 NIC's PXE configuration on Cisco UCS-C series.

Power management of computes

Option to selectively turn the power of computes ON or OFF to conserve energy.

Disk maintenance for pod nodes

Ability to replace faulty disks on pod nodes without the need for an add, remove, or replace node operation.

Support of workload types

Extending Cisco VIM to support bare-metal (Ironic-based) and container (Cisco Container Platform (CCP)) based workloads.*

Cloud adaptation for low latency workload

  • Real-time kernel support on edge pods.

  • Automated BIOS configuration.

  • Introduction of custom flavor.

  • Support of Intel N3000 card on selected servers to handle vRAN workloads.

  • Support of Cache Allocation Technology (CAT) to handle vRAN workloads.

  • Support of INTEL_SRIOV_VFS and INTEL_FPGA_VFS at a per server level.

Integrated test tools

  • Open Source Data-plane Performance Benchmarking: VMTP (an open source data plane VM to VM performance benchmarking tool), NFVbench (NFVI data plane and a service chain performance benchmarking tool).

  • Extending VMTP to support v6 over provider network.

  • NFVbench support for VXLAN.

  • Services Health Checks Integration: Cloudpulse and Cloudsanity.


Note

The features specific to Cisco VIM 3.2.2 are indicated by *.

Known Caveats

The following list describes the known caveats:

CSCve39684
Translation of vic_slot 7 to MLOM fails in CIMC version 2.0(13i).
CSCva37451
Traffic loss of 8-10 seconds is seen while a controller with the active L3 agent reboots.
CSCva36943
Volume attach failure errors should be reported to the user.
CSCva36914
After a MariaDB HA event, you should run the recovery playbook.
CSCva36907
The Nova compute service reports as down for up to two minutes after a controller reboot.
CSCva36782
Nova HA: VM is stuck in scheduling state after Nova conductor HA.
CSCva32195
Auto-created L3 network is not cleaned up with the router/tenant deletion.
CSCva32312
Update fails if a compute is not reachable, even after updating the containers on the controller node.
CSCva34476
The Nova API is unavailable for a few minutes after the controller goes down.
CSCva32193
The ARP entry on the ToR does not refresh, resulting in an external ping failure to the VM VIP.
CSCva57121
The Ceph cluster does not move to Error state when all storage nodes are down.
CSCva66093
Rollback not supported for repo update failure.
CSCvf81055
VMs go to 'SHUTOFF' state intermittently on compute node reboot.
CSCve13042
The recovery playbook needs to handle Ceph recovery after a power outage.
CSCve76157
Performance issues on the IE browser.
CSCvf74264
Insight UI: The pod users cannot update the restapi password once it is changed.
CSCvf86622
When using MECHANISM_DRIVER: aci, quotas set through the neutron quota-update command-line interface are not enforced.
CSCvf86623
When using MECHANISM_DRIVER: aci, VMs originally in an ACTIVE state on a rebooted compute node are unable to acquire an IP address from DHCP.

Using the Cisco Bug Search Tool

You can use the Bug Search Tool to search for a specific bug or to search for all bugs in a release.

Procedure


Step 1

Go to the Cisco Bug Search Tool.

Step 2

In the Log In screen, enter your registered Cisco.com username and password, and then click Log In. The Bug Search page opens.

Note 

If you do not have a Cisco.com username and password, you can register for them at http://tools.cisco.com/RPF/register/register.do.

Step 3

To search for a specific bug, enter the bug ID in the Search For field and press Enter.

Step 4

To search for bugs in the current release:

  1. In the Search For field, enter Cisco Network Function Virtualization Infrastructure 3.2.2 and press Enter. (Leave the other fields empty.)

  2. When the search results are displayed, use the filter tools to find the types of bugs you are looking for. You can search for bugs by status, severity, modified date, and so forth.

    Tip 
    To export the results to a spreadsheet, click the Export Results to Excel link.     

Obtaining Documentation and Submitting a Service Request

For information on obtaining documentation, submitting a service request, and gathering additional information, see the monthly What’s New in Cisco Product Documentation, which also lists all new and revised Cisco technical documentation, at: http://www.cisco.com/c/en/us/td/docs/general/whatsnew/whatsnew.html

Subscribe to the What’s New in Cisco Product Documentation as an RSS feed and set content to be delivered directly to your desktop using a reader application. The RSS feeds are a free service. Cisco currently supports RSS Version 2.0.

External References

Cisco VIM documentation is available at: https://www.cisco.com/c/en/us/support/cloud-systems-management/virtualized-infrastructure-manager/tsd-products-support-series-home.html