Guidelines, Limitations, and Requirements

RoCEv2 for Windows

Guidelines for Using SMB Direct Support with RoCEv2

General Guidelines and Limitations

  • Cisco IMC 4.1.x and later releases support Microsoft SMB Direct with RoCEv2 on Microsoft Windows Server 2019 and later. Cisco recommends that you have all KB updates from Microsoft for your Windows Server release. See Windows Requirements.


    Note


    RoCEv2 is not supported on Microsoft Windows Server 2016.


  • Cisco recommends you check UCS Hardware and Software Compatibility specific to your Cisco IMC release to determine support for Microsoft SMB Direct with RoCEv2 on Microsoft Windows.

  • Microsoft SMB Direct with RoCEv2 is supported only with Cisco UCS VIC 1400 Series, 14000 Series, and 15000 Series adapters. It is not supported with UCS VIC 1200 Series and 1300 Series adapters. SMB Direct with RoCEv2 is supported on all UCS Fabric Interconnects.


    Note


    RoCEv1 is not supported with Cisco UCS VIC 1400 Series, Cisco UCS VIC 14000 Series, and Cisco UCS VIC 15000 Series.


  • RoCEv2 configuration is supported only between Cisco adapters. Interoperability between Cisco adapters and third party adapters is not supported.

  • RoCEv2 supports two RoCEv2-enabled vNICs per adapter and four virtual ports per adapter interface, independent of SET switch configuration.

  • RoCEv2 cannot be used on the same vNIC interface as NVGRE, NetFlow, and VMQ features.


    Note


    RoCEv2 cannot be configured if the Geneve Offload feature is enabled on any of the interfaces of a specific adapter.


  • The RoCEv2 protocol is supported with Windows Server 2019 NDKPI Mode 1 and Mode 2, with both IPv4 and IPv6.

  • RoCEv2-enabled vNIC interfaces must have the no-drop QoS system class enabled in Cisco IMC.

  • The RoCE Properties queue pairs setting must be a minimum of 4 queue pairs (the sketch after this list illustrates these limits).

  • The maximum number of queue pairs per adapter is 2048.

  • The maximum number of memory regions per rNIC interface is 131072.

  • Cisco IMC does not support fabric failover for vNICs with RoCEv2 enabled.

  • SMB Direct with RoCEv2 is supported on both IPv4 and IPv6.

  • RoCEv2 cannot be used with GENEVE offload.

  • The QoS No Drop class configuration must be properly configured on upstream switches such as Cisco Nexus 9000 series switches. QoS configurations may vary between different upstream switches.
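
These per-vNIC and per-adapter limits can be expressed as a simple pre-deployment sanity check. The following Python sketch is illustrative only; the function and parameter names are hypothetical and are not part of Cisco IMC or any Cisco tooling. It merely encodes the limits listed above:

    # Documented RoCEv2 limits from the guidelines above
    MIN_QUEUE_PAIRS_PER_VNIC = 4          # minimum RoCE Properties queue pairs setting
    MAX_QUEUE_PAIRS_PER_ADAPTER = 2048    # maximum queue pairs per adapter
    MAX_MEMORY_REGIONS_PER_INTERFACE = 131072
    MAX_ROCE_VNICS_PER_ADAPTER = 2

    def check_roce_plan(vnic_queue_pairs, vnic_memory_regions,
                        roce_vnics_on_adapter, no_drop_qos_enabled):
        """Return a list of violations of the documented RoCEv2 limits."""
        problems = []
        if vnic_queue_pairs < MIN_QUEUE_PAIRS_PER_VNIC:
            problems.append("queue pairs setting must be at least 4")
        if vnic_queue_pairs * roce_vnics_on_adapter > MAX_QUEUE_PAIRS_PER_ADAPTER:
            problems.append("total queue pairs exceed the 2048 per-adapter maximum")
        if vnic_memory_regions > MAX_MEMORY_REGIONS_PER_INTERFACE:
            problems.append("memory regions exceed the 131072 per-interface maximum")
        if roce_vnics_on_adapter > MAX_ROCE_VNICS_PER_ADAPTER:
            problems.append("more than two RoCEv2-enabled vNICs on one adapter")
        if not no_drop_qos_enabled:
            problems.append("no-drop QoS system class must be enabled in Cisco IMC")
        return problems

    # Example: two RoCEv2 vNICs, each with 256 queue pairs and 65536 memory regions
    print(check_roce_plan(256, 65536, 2, True))   # prints [] (no violations)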

MTU Properties

  • In older versions of the VIC driver, the MTU was derived from Cisco IMC in standalone mode. This behavior changed for VIC 14xx series adapters, where MTU is controlled from the Windows OS Jumbo Packet advanced property. A value configured from Cisco IMC has no effect.

  • MTU in Windows is derived from the Jumbo Packet advanced property, rather than from the Cisco IMC configuration.

  • The RoCEv2 MTU value is always a power of two, with a maximum of 4096.

  • RoCEv2 MTU is derived from the Ethernet MTU.

  • RoCEv2 MTU is the highest power of two that does not exceed the Ethernet MTU, subject to the 4096 maximum (see the calculation sketch after these examples). For example:

    • if the Ethernet value is 1500, then the RoCEv2 MTU value is 1024

    • if the Ethernet value is 4096, then the RoCEv2 MTU value is 4096

    • if the Ethernet value is 9000, then the RoCEv2 MTU value is 4096
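
The derivation above is a simple calculation: take the largest power of two that does not exceed the Ethernet MTU, then cap the result at 4096. The following Python sketch illustrates it; the function name is hypothetical and not part of any Cisco or Windows tooling:

    def rocev2_mtu(ethernet_mtu):
        """Largest power of two not exceeding the Ethernet MTU, capped at 4096."""
        largest_pow2 = 1 << (ethernet_mtu.bit_length() - 1)
        return min(largest_pow2, 4096)

    # The examples listed above:
    assert rocev2_mtu(1500) == 1024
    assert rocev2_mtu(4096) == 4096
    assert rocev2_mtu(9000) == 4096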

RoCEv2 Modes of Operation

Cisco IMC provides two modes of RoCEv2 configuration depending on the release:

  • From Cisco IMC Release 4.1(1c) onwards, RoCEv2 can be configured with Mode 1 and Mode 2.

    Mode 1 uses the existing RoCEv2 properties with Virtual Machine Queue (VMQ).

    Mode 2 introduces an additional feature for configuring Multi-Queue RoCEv2 properties.

    RoCEv2-enabled vNICs for Mode 2 operation require that Trust Host CoS be enabled.

    RoCEv2 Mode 2 depends on Mode 1: RoCEv2 Mode 1 must be enabled to operate RoCEv2 Mode 2 (see the dependency check sketched after this list).

  • In Cisco IMC releases prior to 4.1(1c), only Mode 1 is supported, and it is configured from the VMQ RoCE properties.
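
The dependencies between the two modes reduce to a simple rule set. The following Python sketch is illustrative only; the function and flag names are hypothetical and do not correspond to Cisco IMC properties. It encodes the conditions described above, namely that Mode 2 requires Mode 1 and that Mode 2 vNICs require Trust Host CoS:

    def validate_rocev2_modes(mode1_enabled, mode2_enabled, trust_host_cos_enabled):
        """Check the documented Mode 1 / Mode 2 dependencies."""
        problems = []
        if mode2_enabled and not mode1_enabled:
            problems.append("RoCEv2 Mode 1 must be enabled to operate Mode 2")
        if mode2_enabled and not trust_host_cos_enabled:
            problems.append("Mode 2 vNICs require Trust Host CoS to be enabled")
        return problems

    print(validate_rocev2_modes(mode1_enabled=True, mode2_enabled=True,
                                trust_host_cos_enabled=True))   # prints []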

Downgrade Limitations

Cisco recommends you remove the RoCEv2 configuration before downgrading to any non-supported RoCEv2 release. If the configuration is not removed or disabled, downgrade will fail.

Windows Requirements

Configuration and use of RDMA over Converged Ethernet for RoCEv2 in Windows Server requires the following:

  • Windows Server 2019 and later versions with the latest Microsoft updates

  • VIC Driver version 5.4.0.x or later

  • Cisco UCS C-Series servers: only Cisco UCS VIC 1400 Series or VIC 15000 Series adapters are supported.


Note


All PowerShell commands and advanced property configurations are common across all Windows versions unless explicitly mentioned.


RoCEv2 for Linux

Guidelines for using NVMe over Fabrics (NVMeoF) with RoCEv2

General Guidelines and Limitations

  • Cisco recommends you check UCS Hardware and Software Compatibility specific to your Cisco IMC release to determine support for NVMeoF. NVMeoF is supported on Cisco UCS M5 and later C-Series servers.

  • NVMe over RDMA with RoCEv2 is supported with the fourth-generation Cisco UCS VIC 1400 Series and Cisco UCS VIC 14000 Series adapters, and with Cisco UCS VIC 15000 Series adapters. NVMe over RDMA is not supported on Cisco UCS 6324 Fabric Interconnects or on Cisco UCS VIC 1200 Series and Cisco UCS VIC 1300 Series adapters.

  • When configuring RoCEv2 interfaces, use both the enic and enic_rdma binary drivers downloaded from Cisco.com and install the matched set of enic and enic_rdma drivers; see the version-check sketch after this list. Attempting to use the binary enic_rdma driver downloaded from Cisco.com with an inbox enic driver will not work.

  • RoCEv2 supports a maximum of two RoCEv2-enabled interfaces per adapter.

  • Booting from an NVMeoF namespace is not supported.

  • Layer 3 routing is not supported.

  • RoCEv2 does not support bonding.

  • Saving a crashdump to an NVMeoF namespace during a system crash is not supported.

  • NVMeoF cannot be used with usNIC, VMFEX, VxLAN, VMQ, VMMQ, NVGRE, GENEVE Offload, and DPDK features.

  • Netflow monitoring is not supported on RoCEv2 interfaces.

  • In the Linux-NVMe-RoCE policy, do not change the Queue Pairs, Memory Regions, Resource Groups, or Priority settings from the Cisco-provided default values. NVMeoF functionality is not guaranteed with other values for these settings.

  • The QoS no drop class configuration must be properly configured on upstream switches such as Cisco Nexus 9000 series switches. QoS configurations will vary between different upstream switches.

  • Set the MTU size correctly on the VLANs and in the QoS policy on upstream switches.

  • Spanning Tree Protocol (STP) may cause temporary loss of network connectivity when a failover or failback event occurs. To prevent this issue from occurring, disable STP on uplink switches.
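
Because the enic and enic_rdma drivers must be installed as a matched set, it can be useful to verify the installed versions before configuring RoCEv2 interfaces. The following Python sketch is illustrative only: it assumes the standard modinfo utility (from kmod) is available and that both Cisco drivers expose a version string through it, and the matching heuristic is based solely on the version pairs listed later in this section (for example, enic 4.0.0.6-802-21 pairing with enic_rdma 1.0.0.6-802-21):

    import subprocess

    def module_version(name):
        """Read an installed kernel module's version string via modinfo."""
        out = subprocess.run(["modinfo", "-F", "version", name],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    enic_ver = module_version("enic")
    rdma_ver = module_version("enic_rdma")

    # In the matched sets documented here, the versions differ only in the
    # leading major number (4.x for enic, 1.x for enic_rdma).
    if enic_ver.split(".", 1)[-1] == rdma_ver.split(".", 1)[-1]:
        print("Matched driver set: enic %s, enic_rdma %s" % (enic_ver, rdma_ver))
    else:
        print("Possible mismatch: enic %s, enic_rdma %s" % (enic_ver, rdma_ver))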

Interrupts

  • The Linux RoCEv2 interface supports only MSIx interrupt mode. Cisco recommends that you avoid changing the interrupt mode when the interface is configured with RoCEv2 properties.

  • The minimum interrupt count for using RoCEv2 with Linux is 8.

Downgrade Limitations

Cisco recommends you remove the RoCEv2 configuration before downgrading to any non-supported RoCEv2 release.

Linux Requirements

Configuration and use of RoCEv2 in Linux requires the following:

  • Red Hat Enterprise Linux:

    • Red Hat Enterprise Linux 7.6 with Z-Kernel 3.10.0-957.27.2

    • Red Hat Enterprise Linux 7.7 with Z-Kernel 3.10.0-1062.9.1 and above

    • Red Hat Enterprise Linux 7.8, 7.9, and 8.2


      Note


      Cisco IMC Release 4.2(2x) or later supports Red Hat Enterprise Linux 7.8, 7.9, 8.x, and 9.x.

      Starting with eNIC driver version 4.8.0.0-1128.4 and enic_rdma driver version 1.8.0.0-1128.4, RoCEv2 is supported on Ubuntu 24.04.1 with kernel 6.8.0-51-generic.


  • InfiniBand kernel API module ib_core

  • Cisco IMC Release 4.1(1x) or later

  • VIC firmware - Minimum requirement is 5.1(1x) for IPv4 support and 5.1(2x) for IPv6 support

  • Cisco UCS C-Series servers with Cisco UCS VIC 1400 Series and Cisco UCS VIC 15000 Series adapters

  • eNIC driver version 4.0.0.6-802-21 or later provided with the 4.1(1x) release package

  • enic_rdma driver version 1.0.0.6-802-21 or later provided with the 4.1(1x) release package


    Note


    Use eNIC driver version 4.0.0.10-802.34 or later and enic_rdma driver version 1.0.0.10-802.34 or later for IPv6 support.


  • A storage array that supports NVMeoF connection

RoCEv2 for ESXi

Guidelines for using RoCEv2 Protocol in the Native ENIC driver on ESXi

General Guidelines and Limitations

  • Cisco IMC release 4.2(3b) supports RoCEv2 on ESXi 7.0 U3, ESXi 8.0, ESXi 8.0 U1, ESXi 8.0 U2, and ESXi 8.0 U3.

  • Cisco recommends you check UCS Hardware and Software Compatibility specific to your Cisco IMC release to determine support for ESXi. RoCEv2 on ESXi is supported on Cisco UCS C-Series servers with Cisco UCS VIC 15000 Series and later adapters.

  • RoCEv2 on ESXi is not supported on UCS VIC 1200, 1300 and 1400 Series adapters.

  • RDMA on ESXi nENIC currently supports only ESXi NVME that is part of the ESXi kernel. The current implementation does not support the ESXi user space RDMA application.

  • Multiple MAC addresses and multiple VLANs are supported only on VIC 15000 Series adapters.

  • RoCEv2 supports a maximum of two RoCEv2-enabled interfaces per adapter.

  • PVRDMA, vSAN over RDMA, and iSER are not supported.

Downgrade Limitations

Cisco recommends you remove the RoCEv2 configuration before downgrading to any non-supported RoCEv2 release.

ESXi nENIC RDMA Requirements

Configuration and use of RoCEv2 in ESXi requires the following:

  • VMware ESXi 7.0 U2, ESXi 8.0, ESXi 8.0 U1, ESXi 8.0 U2, and ESXi 8.0 U3

  • Cisco IMC release 4.2.3 or later

  • Cisco VMware nENIC driver version 2.0.10.0 for ESXi 7.0 U3, and 2.0.11.0 for ESXi 8.0 and later; the driver provides both standard eNIC and RDMA support

  • A storage array that supports NVMeoF connection. Currently, tested and supported on Pure Storage with Cisco Nexus 9300 Series switches.

SR-IOV for ESXi

Guidelines and Limitations

  • Cisco recommends that you check UCS Hardware and Software Compatibility specific to your Cisco IMC release to determine support for SR-IOV.

  • SR-IOV is supported with Cisco UCS VIC 1400 series, 15000 series, and later series adapters. SR-IOV is not supported on Cisco UCS VIC 1200 and 1300 series adapters.

  • SR-IOV is supported with Cisco UCS AMD®/Intel® based C-Series, B-Series, and X-Series servers.

  • SR-IOV is not supported in Physical NIC mode.

  • SR-IOV does not support VLAN Access mode.

  • SR-IOV cannot be configured on the same vNIC with VXLAN, Geneve Offload, QinQ, VMQ/VMMQ, RoCE, or usNIC.

  • aRFS is not supported on SR-IOV VF.

  • iSCSI boot is not supported on SR-IOV VF.

  • DPDK on SRIOV VF is not supported when the host has Linux OS.

  • SR-IOV interface supports MSIx interrupt mode.

  • Precision Time Protocol (PTP) is not supported on SR-IOV VF.

  • Cisco recommends that you do not downgrade the adapter firmware to a version lower than 5.3(2.32), and that you remove SR-IOV related configurations before downgrading Cisco IMC to a release that does not support SR-IOV.

  • For Cisco UCS VIC 1400/14000, Receive Side Scaling (RSS) must be enabled on PF to support VF RSS.


    Note


    Turning off RSS on the PF disables RSS on all VFs.

  • For Cisco UCS VIC 15000 series adapters, RSS continues to work on the VFs even when RSS is turned off on the PF.


    Note


    The PF and VF RSS are independent of each other. The VF driver enables and configures RSS on the VF when there are multiple RQs.

ESXi Requirements

Configuration and use of SR-IOV in ESXi requires the following:

  • Cisco IMC release 4.3(2b)

  • Cisco VIC firmware version 5.3(2.32) or later

  • VMware ESXi 7.0 U3 and 8.0 or later

  • VMs with RHEL 8.7 or later and RHEL 9.0 or later

  • Cisco VMware nENIC driver version 2.0.10.0 for ESXi 7.0 U3 and 2.0.11.0 for ESXi 8.0 and later

  • Cisco RHEL ENIC driver version 4.4.0.1-930.10 or later

SR-IOV for Linux

Guidelines and Limitations

  • Cisco recommends that you check UCS Hardware and Software Compatibility specific to your Cisco IMC release to determine the support for SR-IOV.

  • SR-IOV is supported with Cisco UCS VIC 1400, 14000, 15000 series adapters. SR-IOV is not supported on Cisco UCS VIC 1200 and 1300 series adapters.

  • SR-IOV is supported with AMD®/Intel® based Cisco UCS C-Series, B-Series, and X-Series servers.

  • SR-IOV is not supported in Physical NIC mode.

  • SR-IOV does not support VLAN Access mode.

  • SR-IOV cannot be configured on the same vNIC with VXLAN, Geneve Offload, QinQ, VMQ/VMMQ, RoCE, or usNIC.

  • aRFS is not supported on SR-IOV VF.

  • iSCSI boot is not supported on SR-IOV VF.

  • DPDK on SRIOV VF is not supported when the host has Linux OS.

  • SR-IOV interface supports MSIx interrupt mode.

  • Precision Time Protocol (PTP) is not supported on SR-IOV VF.

Linux Requirements

Configuration and use of SR-IOV in Linux requires the following:

  • Host OS: Red Hat Enterprise Linux 8.10 or later, 9.4 or later, or Ubuntu 22.04.2 LTS

  • Guest OS: Red Hat Enterprise Linux 8.10, 9.4, or Ubuntu 22.04.2 LTS

  • Virtualization Packages installed on the host

  • eNIC driver version 4.7.0.5-1076.6 or later

  • Cisco IMC Release 4.3(5x) or later

  • Cisco VIC firmware 5.3(4.75) or later