Release Notes for Cisco UCS Virtual Interface Card Drivers, Release 4.0

Introduction

This document contains information on new features, resolved caveats, open caveats, and workarounds for Cisco UCS Virtual Interface Card (VIC) Drivers, Release 4.0 and later releases. This document also includes the following:

  • Updated information after the documentation was originally published.

  • Related firmware and BIOS on blade, rack, and modular servers and other Cisco Unified Computing System (UCS) components associated with the release.

The following table shows the online change history for this document.

Revision Date Description

October 17, 2019

Added information about CSCvq02558 in the Open Caveats section.

April 26, 2019

Updated release notes for Cisco UCS Software Release 4.0(4).

February 12, 2019

Added information about CSCvk34443 in the VIC Driver Updates for Release 4.0(2a) section.

January 02, 2019

Updated release notes for Cisco UCS Software Release 4.0(2).

August 14, 2018

Initial release of VIC drivers for Cisco UCS Software Release 4.0(1).

New Software Features in Release 4.0(4a)

Release 4.0(4a) adds support for the following:

  • Support for signed drivers on all supported Linux platforms. All Cisco Linux drivers are now cryptographically signed and can be used with UEFI Secure Boot on all supported Linux platforms. UEFI Secure Boot ensures that only trusted firmware and drivers are allowed to run at system boot, decreasing vulnerability to malware at boot time.


    Note

    Some older Linux distributions, such as Red Hat Enterprise Linux (RHEL) 6.x and CentOS 6.x, do not support UEFI Secure Boot but still receive Cisco-signed drivers. If desired, the driver signatures can be verified manually outside of UEFI Secure Boot.


  • Unified driver support for Fibre Channel and NVMe over Fibre Channel on SLES 12 SP4, SLES 15, and Red Hat Enterprise Linux (RHEL) 7.6. NVMe over Fibre Channel allows host software to communicate with nonvolatile memory subsystems by transporting the NVMe protocol over a Fibre Channel fabric. The unified driver for Fibre Channel Multi-Queue supports up to 64 I/O queues on RHEL 7.6. This support is available on UCS 6300 Series Fabric Interconnects and UCS 6454 Fabric Interconnects.

  • Consistent Device Naming (CDN) support is extended to SLES 12 SP3, SLES 12 SP4, and SLES 15.

New Software Features in Release 4.0(1a)

Release 4.0(1a) adds support for the following:

  • Cisco UCS Manager Release 4.0(1a) provides Virtual Machine Multi-Queue (VMMQ) support with VIC 14XX adapters only.

  • VIC 14XX adapters now support UDP RSS for ESXi and Linux.

  • VXLAN is now supported on Windows 2016 with VIC 14XX adapters.

VIC Driver Updates for Release 4.0(4a)

ESX ENIC Driver Updates

Native ENIC driver versions 1.0.X.0 are for ESXi 6.5 and later releases.

ESXi NENIC Version 1.0.29.0

Native ENIC driver version 1.0.29.0 is supported on ESXi 6.7U1.

ESX FNIC Driver Updates

Native FNIC Version 4.0.0.35

Native FNIC driver version 4.0.0.35 is supported on ESXi 6.7U1.


Note

This release also fixes intermittent connectivity failures on Cisco UCS B200 M4 Blade Servers.


Linux Driver Updates


Note

Drivers for all operating systems listed in the HCL are signed. However, some older Linux distribution versions, such as Red Hat Enterprise Linux (RHEL) 6.x and CentOS 6.x, do not support UEFI Secure Boot. Cisco still signs the drivers for these distribution versions so that they can be verified manually if desired. This applies to both ENIC and FNIC drivers.


ENIC Version 738.12

This driver update adds support for the following Linux Operating System versions:

  • Red Hat Enterprise Linux 6.9, 6.10, 7.5 and 7.6

  • XenServer 7.2, 7.3, 7.4, 7.5 and 7.6

  • SLES 12 SP3, SLES 12 SP4, and SLES 15

  • Ubuntu Server 16.04.2, 16.04.3, 16.04.4, 16.04.5, 18.04, and 18.04.1

  • CentOS 6.9, 6.10, 7.5 and 7.6

Linux FNIC Driver Updates

FNIC Driver Update 1.6.0.50

This driver update adds Secure Boot and signed driver support for VIC 14xx drivers on the following Linux Operating System versions:

  • Red Hat Enterprise Linux 7.5

  • XenServer 7.2, 7.3, 7.4, 7.5 and 7.6

  • SLES 12 SP3

  • CentOS 7.5, 7.6


Note

The new FNIC driver update adds NVMe over Fibre Channel support, in addition to driver signing, on the following operating systems:
  • Red Hat Enterprise Linux 7.6

  • SLES 12 SP4

  • SLES 15 with errata kernel 4.12.14-25.28.1.


Note

SLES 15 FC-NVMe is supported with DM multipathing; native multipathing is not supported.



Note

CSCvk34443—After installing the Cisco fNIC driver on a system with SLES 15, the following error message appears:

cat: write error: broken pipe

The driver is installed correctly, and is operational. The message that appears is an informational message from SUSE, and is not caused by the Cisco fNIC driver.


Windows 2019 and 2016 NENIC Driver Updates

Windows Server 2019 and 2016 NENIC Version 5.3.25.4

  • This driver update provides a Spectre-compliant driver for VIC 1400 Series adapters.

Windows Server 2019 and 2016 ENIC Version 4.2.0.5

  • This driver update provides a Spectre-compliant driver for VIC 1300 Series adapters.

Windows 2019 and 2016 FNIC Driver Updates

Windows Server 2019 and 2016 FNIC Version 3.2.0.14

  • This driver update provides a Spectre-compliant fNIC driver for VIC 14XX and VIC 13XX adapters.

VIC Driver Updates for Release 4.0(2a)

ESX ENIC Driver Updates

Native ENIC driver versions 1.0.X.0 are for ESXi 6.5 and later releases.

Native ENIC Version 1.0.26.0

ESX FNIC Driver Updates

Native FNIC Version 4.0.0.20

Native FNIC driver version 4.0.0.20 is supported on ESXi 6.7U1.

FNIC Version 1.6.0.47

FNIC driver version 1.6.0.47 is supported on ESXi 5.5-6.7.

Linux ENIC Driver Updates

ENIC Version 3.1.137.5-700.16

This driver update adds support for the following Operating System versions:

  • Red Hat Enterprise Linux 6.9-6.10 and 7.4-7.6

  • CentOS 6.9-6.10 and 7.4-7.5

  • Ubuntu 16.04.4, 16.04.5, 18.04 and 18.04.1

ENIC Version 3.1.142.369-700.16

This driver update adds support for the following Operating System versions:

  • SLES 12 SP3 and SLES 15


    Note

    CSCvk34443 — After installing the Cisco eNIC driver on a system with SLES 15, the following error message appears:

    cat: write error: broken pipe

    The driver is installed correctly, and is operational. The message that appears is an informational message from SUSE, and is not caused by the Cisco eNIC driver.


Linux FNIC Driver Updates

FNIC Version 1.6.0.47

This driver update adds support for the following Operating System versions:

  • Red Hat Enterprise Linux 5.11, 6.5-6.10 and 7.0-7.6

  • SLES 12 SP3 and SLES 15

  • CentOS 6.7-6.10 and 7.0-7.5


    Note

    CSCvk34443 — After installing the Cisco fNIC driver on a system with SLES 15, the following error message appears:

    cat: write error: broken pipe

    The driver is installed correctly, and is operational. The message that appears is an informational message from SUSE, and is not caused by the Cisco fNIC driver.

FNIC Version 2.0.0.26

This driver update adds NVMe over Fibre Channel support on SLES 12 SP3 with kernel 4.4.126-94.22.1.

Windows 2019 and 2016 ENIC Driver Updates

Windows Server 2019 and 2016 NENIC Version 5.2.3.3

  • This driver update adds support for VIC 14XX adapters.

Windows Server 2019 and 2016 ENIC Version 4.1.19.2

  • This driver update is for VIC 13XX and earlier adapters.

Windows 2019 and 2016 FNIC Driver Updates

Windows Server 2019 and 2016 FNIC Version 3.1.0.11

  • This driver update adds support for VIC 14XX and VIC 13XX adapters.

VIC Driver Updates for Release 4.0(1a)

ESX ENIC Driver Updates

Native ENIC driver versions 1.0.X.0 are for ESXi 6.5 and later releases.

Native ENIC Version 1.0.25.0

ESX FNIC Driver Updates

FNIC Version 1.6.0.44

  • Changed the fNIC default queue depth to 256 and added printing of the port speed.

Linux ENIC Driver Updates

ENIC Version 3.0.107.37-492.52

Linux FNIC Driver Updates

FNIC Version 1.6.0.44

XenServer FNIC Driver Updates

FNIC Version 1.6.0.44

  • This driver update adds support for XenServer 7.3.

Windows 2016 ENIC Driver Updates

Windows Server 2016 ENIC Version 5.0.152.8

  • This driver update adds support for VIC 14XX adapters.

Windows 2016 FNIC Driver Updates

Windows Server 2016 FNIC Version 3.0.17.6

  • This driver update adds support for VIC 14XX adapters.

Resolved Caveats

The following table lists the resolved caveats in Release 4.0.

Defect ID

Description

First Version Affected

Resolved In

CSCvn52229

VXLAN stateless offloads with Guest OS TCP traffic over IPv6 do not take effect when using UCS VIC 14xx and ESXi version 6.5 or 6.7 with neNIC driver versions 1.0.25.0 and 1.0.26.0.

1.0(0.25)

4.0(2a)A

CSCvo02207

Quiesce in the native fNIC drivers did not wait for I/O completion, causing failures due to timing issues and resulting in multiple aborts in the logs.

1.0(0.9)

4.0(2a)BC

CSCvo09082

Cisco B-Series and C-Series servers with VIC adapters running fNIC on ESXi 6.7U1 could not tune driver parameters in the same manner allowed by the legacy fNIC driver.

A new module parameter, lun_queue_depth_per_path, with a default value of 32, has been added in the 4.0(4a) VIC drivers.

1.0(0.9)

4.0(4a)A

CSCvo83140

ESX fNIC drivers with Fibre Channel storage enabled lost their connection to storage and became unresponsive.

1.0(0.9)

4.0(2a)A

CSCvo57214

Corrects inner checksum validation on native neNIC drivers.

4.1(1.57)VC

4.0(4a)A

CSCvo61233

Fixes recovery from read/write queue error in VIC 14xx Series drivers and adds logging of descriptors.

4.1(1.57)VC

4.0(4a)A

CSCvo68641

ESX native fNIC drivers on Cisco M4 Blade Servers running Xeon(R) E5-2660 v3 experienced intermittent failures on VIC 14xx Series drivers.

4.1(1.57)VC

4.0(4a)A

CSCvo74998

SAN PLOGI connections failed on VMware ESXi 6.7 U1 with the 4.0.0.24 native fNIC driver.

2.0(0.4)

4.0(4a)A

CSCvo75208

The native fNIC driver running on ESXi 6.7 could not ping the Fibre Channel adapter, even though there was no problem with data traffic.

4.0(0.9)

4.0(4a)A

CSCvo72782

Cisco UCS Manager now prints a warning when the LUN inventory size is higher than the maximum LUNs configured in UCS Manager.

4.0(0.9)

4.0(4a)A

Open Caveats

The following table lists the open caveats in Release 4.0.

Defect ID

Description

Workaround

First Release Affected

Resolved In

CSCvp48149

On Windows operating systems, yellow bang warning icons may appear for a VIC management device exposed for legacy reasons. There is no functional impact when this interface is exposed.

None

3.1(1a)

CSCvn28299

On a SET switch created using two VMMQ-capable NICs, network traffic may stop when these NICs are removed and re-added.

Disable or enable the physical interface that has been added to the existing SET switch.

4.0(1.107)C

CSCvo02207

Quiesce in the native fNIC drivers does not wait for I/O completion, causing failures due to timing issues and resulting in multiple aborts in the logs.

None

1.0(0.9)

Resolved in 4.0(2a)

CSCvn52229

VXLAN stateless offloads with Guest OS TCP traffic over IPv6 do not take effect when using UCS VIC 14xx and ESXi version 6.5 or 6.7 with neNIC driver versions 1.0.25.0 and 1.0.26.0.

Upgrade the neNIC driver to version 1.0.27.0.

4.0(1a)C

Resolved in 4.0(4a)C

CSCvo09082

Cisco B-Series and C-Series servers with VIC adapters running fNIC on ESXi 6.7U1 cannot tune driver parameters in the same manner allowed by the legacy fNIC driver.

N/A

1.0(0.9)

Resolved in 4.0(4a)BC

CSCvo57214

Incorrect inner checksum validation on native neNIC drivers.

None

4.1(1.57)VC

Resolved in 4.0(4a)C

CSCvo61233

VIC 14xx Series driver does not recover from read/write queue error.

None

4.1(1.57)VC

Resolved in 4.0(4a)C

CSCvo68641

ESX native fNIC drivers on Cisco M4 Blade Servers running Xeon(R) E5-2660 v3 experience intermittent failures on VIC 14xx Series drivers.

None.

4.1(1.57)VC

Resolved in 4.0(4a)B

CSCvo74998

SAN PLOGI connections fail on VMware ESXi 6.7 U1 with the 4.0.0.24 native fNIC driver.

Upgrade to the 4.0.0.33 native fNIC driver.

2.0(0.4)

Resolved in 4.0(4a)A

CSCvo83140

ESX fNIC drivers with Fibre Channel storage enabled lose their connection to storage and become unresponsive.

Reboot the ESXi host.

1.0(0.9)

Resolved in 4.0(4a)A

CSCvo75208

The native fNIC driver running on ESXi 6.7 cannot ping the Fibre Channel adapter.

None

4.0(0.9)

Resolved in 4.0(4a)A

CSCvo72782

Cisco UCS Manager needs to print a warning when the LUN inventory size is higher than the maximum number of LUNs configured in UCS Manager.

N/A

4.0(0.9)

Resolved in 4.0(4a)A

CSCvo18110

NVMe over Fibre Channel loses namespace paths during port flap.

None

4.0(0.9)

CSCvo00914

I/O operations fail on NVMe over Fibre Channel namespaces during link flaps on the path between the UCS VIC server and FC-NVMe target.

None

4.0(0.9)

CSCvp21853

An I/O failure occurs on the Fibre Channel to NVMe namespace.

Restart the application to recover from the failure.

2.0(0.37)

CSCvp35462

RHEL 7.6 with eNIC 3.2.210.18-738.12 hangs during boot and fails to boot into the OS when the server receives invalid multicast packets.

None

4.0(3.94)A

CSCvq02558

The VIC 1400 Series Windows drivers on blade and rack servers do not support more than 2 RDMA engines per adapter, and Windows currently supports RDMA on only 4 vPorts per RDMA engine. You can enable RDMA through PowerShell on more than 4 vPorts per RDMA engine, but the driver does not allocate RDMA resources to more than 4 vPorts per engine. Running the Get-NetAdapterRdma command on the host can show additional vPorts with the RDMA Capable flag set to True. The Get-SmbClientNetworkInterface command shows the actual number of RDMA vPort resources available for use.

Use the Get-SmbClientNetworkInterface command instead of the Get-NetAdapterRdma command to confirm the number of effective RDMA vPorts.

4.0(3.51)B and C
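As a quick check, the difference between the two commands can be compared from an elevated PowerShell session on the host. This is a minimal sketch; the cmdlets and properties shown are standard Windows cmdlets, but the actual output depends on the adapter configuration:

```powershell
# vPorts that advertise the RDMA Capable flag; on VIC 1400 adapters this
# can overcount, because the driver allocates RDMA resources to at most
# 4 vPorts per RDMA engine.
Get-NetAdapterRdma | Format-Table Name, Enabled

# Interfaces the SMB client can actually use for RDMA; this reflects the
# real number of RDMA vPort resources available for use.
Get-SmbClientNetworkInterface |
    Where-Object RdmaCapable |
    Format-Table FriendlyName, RdmaCapable, IpAddresses
```

If the first command lists more RDMA-enabled vPorts than the second, only the interfaces reported by Get-SmbClientNetworkInterface have RDMA resources allocated by the driver.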

Behavior Changes and Known Limitations

vNIC MTU Configuration

For VIC 14xx adapters, you can change the MTU size of the vNIC from the host interface settings. Make sure that the new value is equal to or less than the MTU specified in the associated QoS system class. If this MTU value exceeds the MTU value in the QoS system class, packets could be dropped during data transmission.

When the Overlay network is configured, make sure that the overall MTU size does not exceed the MTU value in the QoS system class.
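On a Windows host, for example, the vNIC MTU can be inspected and changed with the standard NetAdapter cmdlets. This is a sketch only: the adapter name "Ethernet 2", the "Jumbo Packet" property display name, and the value 9014 are illustrative and vary by driver version and configuration:

```powershell
# Show the current jumbo packet setting for a vNIC (adapter name is illustrative).
Get-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Jumbo Packet"

# Raise the vNIC MTU. Keep this value equal to or less than the MTU in the
# associated UCS QoS system class; otherwise packets may be dropped.
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Jumbo Packet" -DisplayValue 9014
```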

Microsoft Stand-alone NIC Teaming and Virtual Machine Queue (VMQ) support for VIC14xx adapters

Microsoft stand-alone NIC teaming works only with VMQ. For VIC 14xx adapters, VMQ is VMMQ with a single queue. To support this, you must create a new VMMQ adapter policy with a combination of 1 transmit queue (TQ), 1 receive queue (RQ), and 2 completion queues (CQ), and assign it to the VMQ connection policy.

Configuration Fails When 16 vHBAs are Configured with Maximum I/O Queues

Cisco UCS Manager supports a maximum of 64 I/O Queues for each vHBA. However, when you configure 16 vHBAs, the maximum number of I/O Queues supported for each vHBA becomes 59. In Cisco UCS Manager Release 4.0(2), if you try to configure 16 vHBAs with more than 59 I/O queues per vHBA, the configuration fails.

Related Cisco UCS Documentation

Documentation Roadmaps

For a complete list of all B-Series documentation, see the Cisco UCS B-Series Servers Documentation Roadmap available at the following URL: https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/overview/guide/UCS_roadmap.html

For a complete list of all C-Series documentation, see the Cisco UCS C-Series Servers Documentation Roadmap available at the following URL: https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/overview/guide/ucs_rack_roadmap.html.

For information on supported firmware versions and supported UCS Manager versions for the rack servers that are integrated with the UCS Manager for management, refer to Release Bundle Contents for Cisco UCS Software.

Other Documentation Resources

Follow Cisco UCS Docs on Twitter to receive document update notifications.

Obtaining Documentation and Submitting a Service Request

For information on obtaining documentation, submitting a service request, and gathering additional information, see the monthly What's New in Cisco Product Documentation, which also lists all new and revised Cisco technical documentation.

Subscribe to the What's New in Cisco Product Documentation as a Really Simple Syndication (RSS) feed and set content to be delivered directly to your desktop using a reader application. The RSS feeds are a free service and Cisco currently supports RSS version 2.0.
