Introduction

Cisco Network Function Virtualization Infrastructure (Cisco NFVI) provides the virtual layer and hardware environment in which virtual network functions (VNFs) operate. VNFs provide well-defined network functions such as routing, intrusion detection, Domain Name Service (DNS), caching, Network Address Translation (NAT), and other network functions. While network functions previously required tight integration between network software and hardware, VNFs decouple the software from the underlying hardware.

Cisco NFVI 3.4.2 is based on the Queens release of OpenStack, an open source cloud operating system that controls large pools of compute, storage, and networking resources. The Cisco version of OpenStack is Cisco Virtualized Infrastructure Manager (Cisco VIM). Cisco VIM manages the OpenStack compute, network, and storage services, and all Cisco NFVI build and control functions.

Key roles of Cisco NFVI pods are:

  • Control (including Networking)

  • Computes

  • Storage

  • Management, logging, and monitoring

Hardware that is used to create the Cisco NFVI pods includes:

  • Cisco UCS® C240 M4, C240 M5, or C220 M5—Performs management and storage functions and services. Includes a dedicated Ceph (UCS C240 M4 or UCS C240 M5) distributed object store and file system. Only Red Hat Ceph is supported.

  • Cisco UCS C220/240 M4 or M5—Performs control and compute services.

  • HP DL360 Gen9: Supported as a third-party compute node, while the control plane still runs on Cisco UCS servers.

  • Cisco UCS B200 M4 blades—Can be used instead of the UCS C220 for compute and control services. The B200 blades and C240 Ceph server are joined with redundant Cisco Fabric Interconnects that are managed by UCS Manager.

  • Combinations of M5-series servers are supported in the Micropod and the VIC/NIC (40G) based hyper-converged and Micropod offerings.

  • Quanta servers as an alternative to Cisco UCS servers: Specific Quanta servers can be used to install the cloud both at the core and at the edge. Automated installation of the central Ceph cluster is supported to provide Glance image services for the edge pod.

The UCS C240 and C220 servers are M4/M5 Small Form Factor (SFF) models, where the operating system boots from HDD/SSD for control and compute nodes, and from internal SSD for Ceph nodes. Cisco supports a pure Intel NIC configuration and a Cisco 40G VIC with Intel NIC configuration.

Software applications that manage Cisco NFVI hosts and services include:

  • Red Hat Enterprise Linux 7.6 with OpenStack Platform 13.0—Provides the core operating system with OpenStack capability. RHEL 7.6 and OSP 13.0 are installed on all Cisco NFVI UCS servers.

  • Cisco VIM—An OpenStack orchestration system that deploys and manages an OpenStack cloud, from bare-metal installation through OpenStack services, while taking care of hardware and software redundancy, security, and monitoring. Cisco VIM includes the OpenStack Queens release with additional features and usability enhancements that are tested for functionality, scale, and performance.

  • Cisco Unified Management—Deploys, provisions, and manages Cisco VIM on Cisco UCS servers. Also provides a UI to manage multiple pods when installed on a dedicated Unified Management node.

  • Cisco VIM Monitor—Provides integrated monitoring and alerting of the NFV infrastructure layer.

  • Cisco UCS Manager—Used to perform certain management functions when UCS B200 blades are installed.

  • Cisco Integrated Management Controller (IMC)—When installing Cisco VIM, Cisco IMC 2.0(13i) or later is supported, but certain IMC versions are recommended, as listed below.

    For the Cisco IMC 2.0 lineup, the recommended version is as follows:

      • UCS-M4 servers: Cisco IMC 2.0(13n) or later is recommended.

    For the Cisco IMC 3.x and 4.y lineup, the recommended versions are as follows:

      • UCS-M4 servers: Cisco IMC 3.0(3a) or later is supported, except for 3.0(4a). We recommend that you use Cisco IMC 3.0(4d). CIMC 4.0(1a), 4.0(1b), and 4.0(1c) are also supported. You can move to 4.0(2f) only if the servers are based on Cisco VIC.

      • UCS-M5 servers: CIMC 3.1(2b) and 4.0(4e) or later are supported. We recommend that you use Cisco IMC 4.0(4e). Do not use 3.1(3c) to 3.1(3h), 3.0(4a), 4.0(2c), or 4.0(2d). A minimum bundle version of CIMC 4.0(4d) is required for Cascade Lake support. For GPU support, ensure that the server runs CIMC 4.0(2f).

  • Cisco Virtual Topology System (VTS)—A standards-based, open overlay management and provisioning system for data center networks. It automates data center overlay fabric provisioning for physical and virtual workloads.

  • Cisco Virtual Topology Forwarder (VTF)—Included with VTS, VTF leverages Vector Packet Processing (VPP) to provide high-performance Layer 2 and Layer 3 VXLAN packet forwarding.

Layer 2 networking protocols include:

  • VXLAN supported using Linux Bridge

  • VTS VXLAN supported using ML2/VPP

  • VLAN supported using Open vSwitch (OVS)

  • VLAN supported using ML2/VPP. It is supported only on Intel NIC.

  • VLAN supported using ML2

For pods based on UCS B-series servers, and pods based on C-series servers with Intel NICs, Single Root I/O Virtualization (SR-IOV) is supported. SR-IOV allows a single physical PCI Express device to be shared across multiple virtual environments, offering separate virtual functions to different virtual components, for example, network adapters, on a physical server.

For B-series based pods, the installation is limited to OVS.
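
The following is a minimal, hypothetical setup_data.yaml fragment that illustrates how per-server SR-IOV virtual function (VF) provisioning is typically expressed. It uses the INTEL_SRIOV_VFS key named later in this document; the SERVERS section name, the server names, and the VF counts are assumptions for illustration only, not a validated configuration.

    # Illustrative setup_data.yaml fragment; not a complete or validated configuration.
    INTEL_SRIOV_VFS: 32          # assumed global default number of VFs per SR-IOV port
    SERVERS:
      compute-server-1:          # hypothetical server name
        INTEL_SRIOV_VFS: 16      # per-server override of the VF count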

Features of Cisco VIM 3.4.2

Cisco VIM is the only standalone, fully automated cloud lifecycle manager offered by Cisco for the private cloud. The current version of Cisco VIM integrates with Cisco UCS C-series or B-series servers and Cisco or Intel NICs. This document and the accompanying administrator and installation guides help cloud administrators set up and manage the private cloud.

The following are the features of Cisco VIM:


OpenStack Version

RHEL 7.6 with OSP 13 (Queens)

Hardware Support Matrix

  • UCS C220/B200 M4 controller or compute with Intel V3 (Haswell)

  • UCS C240/220 M4 controller or compute with Intel V4 (Broadwell)

  • UCS C240/220 M5 controller or compute with Intel Skylake and Cascade Lake support.

  • HP DL360 Gen 9

  • UCS C220/240 M5 in a micropod environment, with an option to add up to 16 220/240-M5 computes.

  • UCS C240/220 M5 controller or compute with Intel X710 NIC and SR-IOV

  • UCS C240/220 M5 servers with Cisco 1457 (for control plane) and Intel XXV710 NIC (for data plane with VPP) and SR-IOV

    Support of physical GPU for M5.

  • Quanta servers as an alternative to Cisco UCS servers for full on and micro (D52BQ-2U 3UPI), and edge (D52BE-2U) deployments of the cloud.

  • Quanta servers for the central Ceph cluster (D52BQ-2U 3UPI), which offers Glance image services to the edge pod.

  • Support of SATA M.2 (960G) as an option for a boot drive

NIC support

  • Cisco VIC: VIC 1227, 1240, 1340, 1380, 1387, 1457

  • Intel NIC: X710, 520, XL710, xxv710 (25G)

Pod type

  • Dedicated control, compute and storage (C-Series) node running on Cisco VIC or Intel X710 (full on) with Cisco Nexus 9000 or Cisco NCS 5500 series switch (only for Intel NIC based pod and VPP as mechanism driver) as ToR.

    Support of UCS-M4 (10G VIC with 2-XL710) compute with UCS-M5 (Cisco VIC 1457 with 2-XL710)

  • Dedicated control, compute, and storage (C-series) node running on Cisco VIC and Intel NIC (full on) with Cisco Nexus 9000 as ToR. SRIOV is supported on Intel NIC only.

    Support of Intel X520 (with 2 NIC cards/compute) on M4 pods, or XL710 (2 or 4 NIC cards/compute) on M4/M5 pods, for SRIOV cards. A few computes can run with or without SRIOV in a given pod.

    For M4 pods, VIC/NIC computes running XL710 and X520 can reside in the same pod.

  • Dedicated control, compute, and storage (B-Series) node running on Cisco NIC

  • Micropod: Integrated (AIO) control, compute, and storage (C-series) node running on Cisco VIC, Intel 710X, or a VIC/NIC combo. The Micropod can optionally be expanded to accommodate more computes running with the same NIC type. This can be done as a day-0 or day-1 activity. Support for HDD or SSD based M5 Micropod.

    Intel NIC-based Micropod supports SRIOV, with the M5-based Micropod supporting XL710 as an option for SRIOV.

    Extends the micropod option to Quanta (D52BE-2U) servers with Intel XXV710 NIC (25G) with Cisco Nexus 9000 (-FX) as ToR.

  • Hyper-converged on M4 (UMHC): Dedicated control and compute nodes, with all storage acting as compute (C-series) nodes, running on a combination of 1-Cisco VIC (1227) and 2x10GE 520 or 2x40GE 710XL Intel NIC with an option to migrate from one to another.

  • Hyper-converged (NGENAHC): Dedicated control and compute nodes, with all storage acting as compute (C-series) nodes, running on a combination of 1-Cisco VIC (1227) for the control plane, and 1x10GE 710X (2 port) Intel NIC for the Data plane (over VPP).

    Support of M5 as controller and hyper-converged nodes (with 1457 for control plane, and 1x10GE X710 (2 port) Intel NIC for Data plane) in an existing M4-based pod.

  • Hyper-converged on M5: Dedicated control and compute nodes, with all storage acting as compute (C-series) nodes, running on a combination of 1-Cisco VIC (40G) and 2x40GE 710XL Intel NIC.

  • Hyper-converged on M5 with Intel NIC: Dedicated control and compute nodes, with all storage acting as compute (C-series) nodes, running on 2x10GE 710X Intel NIC with VPP.

  • Quanta server based pods for full on and edge clouds. The edge cloud communicates with a Quanta server based central Ceph cluster for the Glance service.

  • Support of M4 (10GVIC + 10/40G NIC) and M5 (40G VIC + 40G NIC) computes, control (M4 10G VIC with M5 40G VIC) and ceph (M4 10G VIC and M5 40G VIC) nodes in the same pod.

Note 

In a full-on (VIC-based) or UMHC M4 pod, computes can have either a combination of 1 Cisco VIC (1227) and 2x10GE 520 or 2x40GE 710XL Intel NICs, or only 1 Cisco VIC (1227). A compute running a pure Cisco VIC does not run SR-IOV. In Cisco VIM 2.4, HP DL360 Gen9 is supported as a third-party compute.

A mix of computes from different vendors is not supported.

ToR and FI support

  • For VTS-based installations, use the following Nexus versions: 7.0(3)I7(2) and 9.2(1).

  • For mechanism drivers other than VTS, use the following Nexus software versions: 7.0(3)I4(6) and 7.0(3)I6(1).

  • Support of Cisco NCS 5500 (with recommended Cisco IOS XR version 6.1.33.02I or 6.5.1) with splitter cable support. Also, extends day-0 configuration to support user-defined route targets and Ethernet segment IDs (ESI).

Install or update mode

  • Connected to the Internet or air-gapped.

  • Support of Cisco VIM Software Hub over v4 and v6 to mitigate the logistics problems of USB-based distribution for air-gapped installations.

  • Support of USB 3.0 drives for M5 and Quanta based management nodes.

IPV6 support for management network

  • Static IPv6 management assignment for servers.

  • Support of IPv6 for NTP, DNS, LDAP, external syslog server, and AD.

  • Support of IPv6 for the cloud API end point.

  • Support of CIMC over IPv6.

  • RestAPI over IPv6.

  • Support for IPv6 filters for administration source networks.

  • Support of UM over IPv6.

Mechanism drivers

OVS/VLAN, Linuxbridge/VXLAN, and VPP/VLAN (Fast Networking, Fast Data FD.io VPP/VLAN, based on the FD.io VPP 19.04 fast virtual switch).

Note 
  • VPP with LACP is the default configuration for the data plane.

  • VPP is not supported on VIC.
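
As a hedged illustration of how the mechanism driver selection described above is expressed, the following minimal setup_data.yaml fragment shows the style of configuration involved. The key names (MECHANISM_DRIVERS, TENANT_NETWORK_TYPES, TENANT_VLAN_RANGES) and the values are assumptions for illustration, not an authoritative configuration reference.

    # Illustrative setup_data.yaml fragment; key names and values are assumed examples.
    MECHANISM_DRIVERS: openvswitch     # e.g., openvswitch, linuxbridge, vpp, or aci
    TENANT_NETWORK_TYPES: "VLAN"       # VLAN for OVS/VPP, VXLAN for Linuxbridge/VTS
    TENANT_VLAN_RANGES: "3001:3100"    # hypothetical tenant VLAN range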

SDN controller integration

VTS 2.6.2.1 with the optional managed VTS feature; ACI 4.0.1 (ships-in-the-night or Auto-ToR) with Cisco VIC or Intel NIC on UCS C-series M4/M5 platforms.

Automation of ToR configuration via ACI API.

Scale

  • Total of 128 nodes (compute and OSD) with Ceph OSD max at 25.

    Note 
    It is recommended to deploy 40 nodes at a time. Also, after day-0, you can add only one Ceph node at a time.
  • Micropod: Supports a maximum of 16 standalone compute nodes.

    Note 
    Ceph OSDs can be HDD based or SSD based, but they have to be uniform across the pod. Computes can boot off 2x1.2 TB HDD or 2x960 GB SSD. In the same pod, some computes can have SSDs, while others can have HDDs.

Automated pod life cycle management

  • Addition or removal of compute and Ceph nodes and replacement of the controller nodes.

  • Static IP management for storage network.

  • Reduction of tenant or provider VLAN via reconfiguration to a minimum of two.

  • Reconfiguration of passwords and selected optional services.

  • Reconfiguration of NTP and DNS.
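
As a hedged sketch of what an NTP and DNS reconfiguration typically involves, the following setup_data.yaml fragment shows the style of change that is applied through the Cisco VIM reconfigure workflow. The NETWORKING, ntp_servers, and domain_name_servers key names are assumptions for illustration, and the addresses are placeholders.

    # Illustrative NETWORKING fragment of setup_data.yaml (assumed key names, placeholder values)
    NETWORKING:
      domain_name_servers:
        - 192.0.2.53          # placeholder DNS server (documentation address range)
      ntp_servers:
        - ntp.example.com     # placeholder NTP server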

Platform security

  • Secure OS, RBAC, network isolation, TLS, source IP filtering (v4 and v6), Keystone v3, Bandit, CSDL-compliant, hardened OS, and SELinux.

  • Enabling change of CIMC password post installation, for maintenance and security.

  • Non-root login for administrators.

  • Enabling custom policy for VNF Manager.

  • Option to disable the management node reachability to the cloud API network.

  • Read-only option for Horizon.

  • Hosting of Horizon behind NAT or with a DNS alias.

  • Cinder volume encryption via LUKS.

  • Support of configurable login banner for SSH sessions.

  • Access to the management node via LDAP.

  • Support for IPv6 filters for administration source networks.

  • Access of NFVIMON via non-root user.

  • Support of Vault to encrypt data on the management node.

  • Enablement of Vault as an option on Day 2.

  • Extend permit_root_login to Unified Management node.

  • CIMC authentication via LDAP.

  • Support of the Red Hat Identity, Policy, and Audit (IPA) system.

  • Support of LDAP on Unified Management node.*

Enhanced Platform Awareness (EPA)

  • Supports NUMA, CPU pinning, huge pages, and SRIOV with Intel NIC.

  • Support of trusted_vf, huge page percentage, huge page size, and tr_rx_buffer_size for OVS on a per-compute basis.

  • Ability to bring in trusted_vf as a reconfigure option on a per-server basis.*

  • Ability to allocate user-defined CPU cores (up to 6) to VPP.

  • Ability to allocate user-defined CPU cores (up to 12) to Ceph for Micropod and hyper-converged nodes.*

  • Ability to allocate user-defined CPU cores (up to 12) to the controller for AIO nodes in a Micropod and control/compute nodes in an edge pod.

HA and reliability

  • Redundancy at hardware and software level.

  • Automated backup and restore of the management node.

Unified Management (UM) support

  • Single pane of glass in a standalone mode. Supports multi-tenancy and manages multiple pods from one instance.

  • LDAP support for authentication to UM. Default is local.

  • LDAP support for authorization to UM.

  • Centralized image distribution system.

Central logging

EFK integrated with external syslog (over v4 or v6) for log offload, with optional support of NFS for EFK snapshots.

External syslog servers

Support of multiple external syslog servers over IPv4 or IPv6. The minimum and maximum number of external syslog servers supported is 1 and 4, respectively.
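
The following is a minimal, hypothetical setup_data.yaml fragment showing how external syslog export is typically declared. The SYSLOG_EXPORT_SETTINGS key and its fields are assumptions for illustration, and the addresses and values are placeholders; up to four such servers can be listed, per the limit above.

    # Illustrative syslog export fragment of setup_data.yaml (assumed key names, placeholder values)
    SYSLOG_EXPORT_SETTINGS:
      - remote_host: 192.0.2.10       # placeholder IPv4 syslog server
        protocol: udp
        port: 514
        facility: local5
        severity: debug
      - remote_host: "2001:db8::10"   # placeholder IPv6 syslog server
        protocol: udp
        port: 514
        facility: local5
        severity: debug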

VM migration

  • Cold migration and resizing.

  • Live migration (only for OVS non-SRIOV VMs).

Storage

  • Object store with SwiftStack and block storage with Ceph using bluestore or NetApp.

  • Option to use Ceph for Glance and Solidfire for Cinder.

  • Option to have multi-backend (HDD and SSD based) Ceph in the same cluster to support various I/O requirements and latency.


Monitoring

  • Monitor CVIM pods individually using the local CVIM Monitor (CVIM-MON) over v4 and v6. Collects metrics from the entire pod. Supports customizing alerts, sending SNMP traps, and exporting to external metric collectors.

  • Monitor CVIM pods centrally using the Highly Available CVIM Monitor (HA CVIM-MON) over v4 and v6. Acts as a single pane of glass to collect metrics from multiple pods. Supports customizing alerts, sending SNMP traps, and exporting to external metric collectors.

  • Support of admin and non-admin CVIM-MON users with/without LDAP.

  • Ceilometer for resource tracking and alarming capabilities across core OpenStack components.

  • Third-party integration with Zenoss (called NFVIMON) in HA.

  • Traffic Monitoring with OVS for debugging.

Optional OpenStack Features

  • Enable trusted virtual function on a per server basis.

  • DHCP reservation for virtual MAC addresses.

  • Enable CPU and memory over-subscription on a per server basis.

  • Enable VM_HUGE_PAGE_SIZE and VM_HUGE_PAGE_PERCENTAGE on a per-server basis.
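
As a hedged sketch of how these per-server options are typically expressed, the following setup_data.yaml fragment shows illustrative overrides under a server entry. VM_HUGE_PAGE_SIZE, VM_HUGE_PAGE_PERCENTAGE, and trusted_vf are named in this document; the NOVA_CPU_ALLOCATION_RATIO and NOVA_RAM_ALLOCATION_RATIO keys, the server name, and all values are assumptions for illustration only.

    # Illustrative per-server overrides in the SERVERS section of setup_data.yaml
    SERVERS:
      compute-server-3:                  # hypothetical server name
        VM_HUGE_PAGE_SIZE: 1G            # per-server huge page size
        VM_HUGE_PAGE_PERCENTAGE: 80      # percentage of memory reserved as huge pages
        trusted_vf: true                 # mark SR-IOV VFs on this compute as trusted
        NOVA_CPU_ALLOCATION_RATIO: 4.0   # assumed key: CPU over-subscription
        NOVA_RAM_ALLOCATION_RATIO: 1.5   # assumed key: memory over-subscription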

Support of External Auth System

  • LDAP with anonymous bind option.

  • Active Directory (AD)

Software update

Update of Cloud software for bug fixes on the same release.

Software upgrade

Software upgrade of non-VTS cloud from Cisco VIM 3.2.1 or 3.2.2 to Cisco VIM 3.4.1.

Software upgrade of non-VTS cloud from Cisco VIM 2.4.y to Cisco VIM 3.4.2, where y = 15, 16, or 17.

CIMC/BMC upgrade capability

Central management tool to upgrade the CIMC bundle image of one or more servers.

Support of automated update of BMC/BIOS and firmware in Quanta server.

VPP port mirroring

Ability to trace or capture packets for debugging and other administrative purposes.

Remote Installation of Management Node (RIMN)

Automated installation of the management node over a v4 or v6 Layer 3 network.

VXLAN extension into the cloud

  • Extending native external VXLAN network into VNFs in the cloud.

  • Support of Layer 3 adjacency for BGP.

  • Support of single VXLAN network or multi-VXLAN network (with head-end-replication option) terminating on the same compute node.

    Note 
    Only two VXLAN networks are supported.

Technical support for CIMC

Collection of technical support for CIMC.

Enable TTY logging as an option

Enables TTY logging and forwards the logs to the external syslog server and the EFK stack running on the management node. Optionally, the logs are forwarded to remote syslog, if that option is available.

Automated enablement of Intel X710/XL710 NIC's PXE configuration on Cisco UCS-C series

Utility to update the PXE configuration of Intel X710/XL710 NICs on Cisco UCS C-series servers.

Power management of computes

Option to selectively turn OFF or ON the power of computes to conserve energy.

Disk maintenance for pod nodes

Ability to replace faulty disk(s) on the pod node(s) without requiring a node add, remove, or replace operation.

Support of workload types

Extending Cisco VIM to support bare metal (Ironic-based) and container (Cisco Container Platform (CCP)) based workloads.

Support of bonding on the Ironic network.

Cloud adaptation for low latency workload

  • Real-time kernel support on the edge pod.

  • Automated BIOS configuration.

  • Introduction of custom flavor.

  • Support of Intel N3000 card on selected servers to handle vRAN workloads.

  • Support of Cache Allocation Technology (CAT) to handle vRAN workloads.

  • Support of INTEL_SRIOV_VFS and INTEL_FPGA_VFS at a per server level.

Integrated test tools

  • Open source data plane performance benchmarking: VMTP (an open source data plane VM-to-VM performance benchmarking tool) and NFVbench (an NFVI data plane and service chain performance benchmarking tool).

  • Extending VMTP to support v6 over provider network.

  • NFVbench support for VXLAN.

  • Services Health Checks Integration: Cloudpulse and Cloudsanity.


Note

* Indicates the features introduced in Cisco VIM 3.4.2.

Known Caveats

The following list describes the known caveats in Cisco VIM 3.4.2:

  • CSCve39684: Translation of vic_slot from 7 to MLOM fails in CIMC 2.0(13i) version.

  • CSCva37451: Traffic loss of 8 to 10 seconds occurs when you reboot active Layer 3 agents.

  • CSCva36943: Volume-attach failure errors are not reported to users.

  • CSCva36914: When a MariaDB HA event is logged, you should run the recovery playbook.

  • CSCva36907: Nova-compute service is down for up to two minutes after a controller reboot.

  • CSCva36782: Nova HA: VM is stuck in the scheduling state while conducting HA on the Nova conductor.

  • CSCva32195: Auto-created Layer 3 network is not cleaned up with the router or tenant deletion.

  • CSCva32312: Update fails if a compute is not reachable, even after updating the containers on the controller node.

  • CSCva34476: Nova API is unavailable for a few minutes when the controller is down.

  • CSCva32193: The ARP entry on the ToR does not get refreshed, which results in the failure of the Layer 3 ping to the VM FIP.

  • CSCva57121: The Ceph cluster is not set to error state when all the storage nodes are down.

  • CSCva66093: Rollback is not supported for repo update failure.

  • CSCvf81055: VMs intermittently go to SHUTOFF state after a compute node reboot.

  • CSCvq81285: persist_dashboard does not save new folders and dashboards created under a new folder.

  • CSCvq93234: An unsaved-changes popup appears when you navigate between dashboards.

Resolved Caveats

The following list describes the issues that are resolved in Cisco VIM 3.4.2:

  • CSCvs04022: LV swap partition not set to 32.0G.

  • CSCvq96653: Enhance cluster recovery to recover MariaDB state files.

  • VIMCORE-3774: Adjustments to CVIM-MON for DIMM, HDD, CPU, and NIC monitoring.

  • CSCvr70935: In RT servers, tech-support hangs.

  • CSCvr32649: ldap_default_authtok is not recognized in the vim_ldap_admins section of setup_data.yaml.

  • CSCvr60751: Disable auto link selection during ISO.

  • CSCvr36238: Increase strip size and enable caching on M5.

  • CSCvs39459: Fluentd stops sending logs to Elasticsearch.

  • CSCvs05233: RestAPI endpoint RootCA larger than 4000 bytes.

  • CSCvs22471: Per-server huge page sizes are not applied correctly.

  • CSCvs45509: VPP has a memory leak in 19.04.

Enhancements

The following list describes the enhancements in 3.4.2:

  • Upgrade of the RHEL 7.6 EUS kernel and OSP 13.

    • RHEL 7.6 Real Time Version: 3.10.0-957.38.1.rt56.952.el7.x86_64.

    • RHEL 7.6 Version: 3.10.0-957.38.1.el7.x86_64.

    • Inter NUMA noisy neighbor fix via the kernel.

  • Support of trusted_vf as a reconfigure option.

  • CVIM-MON OpenStack Telegraf plugin: OpenStack metrics for non-block storage.

Using the Cisco Bug Search Tool

You can use the Bug Search Tool to search for a specific bug or to search for all bugs in a release.

Procedure


Step 1

Go to the Cisco Bug Search Tool.

Step 2

In the Log In screen, enter your registered Cisco.com username and password, and then click Log In. The Bug Search page opens.

Note 

If you do not have a Cisco.com username and password, you can register for them at http://tools.cisco.com/RPF/register/register.do.

Step 3

To search for a specific bug, enter the bug ID in the Search For field and press Enter.

Step 4

To search for bugs in the current release:

  1. In the Search For field, enter Cisco Network Function Virtualization Infrastructure 2.0(1) and press Enter. (Leave the other fields empty.)

  2. When the search results are displayed, use the filter tools to find the types of bugs you are looking for. You can search for bugs by status, severity, modified date, and so forth.

    Tip 
    To export the results to a spreadsheet, click the Export Results to Excel link.     

Obtaining Documentation and Submitting a Service Request

For information on obtaining documentation, submitting a service request, and gathering additional information, see the monthly What’s New in Cisco Product Documentation, which also lists all new and revised Cisco technical documentation, at: http://www.cisco.com/c/en/us/td/docs/general/whatsnew/whatsnew.html

Subscribe to the What’s New in Cisco Product Documentation as an RSS feed and set content to be delivered directly to your desktop using a reader application. The RSS feeds are a free service. Cisco currently supports RSS Version 2.0.

External References

Cisco VIM documentation is available at: https://www.cisco.com/c/en/us/support/cloud-systems-management/virtualized-infrastructure-manager/tsd-products-support-series-home.html