Guidelines, Limitations, and Requirements

RoCEv2 for Windows

Guidelines for Using SMB Direct with RoCEv2

General Guidelines and Limitations

  • Cisco UCS Manager release 4.1.x and later releases support Microsoft SMB Direct with RoCEv2 on Microsoft Windows Server 2019 and later. Cisco recommends that you have all KB updates from Microsoft for your Windows Server release. See Windows Requirements.


    Note


    RoCEv2 is not supported on Microsoft Windows Server 2016.


  • Cisco recommends you check UCS Hardware and Software Compatibility specific to your Cisco UCS Manager release to determine support for Microsoft SMB Direct with RoCEv2 on Microsoft Windows.

  • Microsoft SMB Direct with RoCEv2 is supported only with Cisco UCS VIC 1400 Series, 14000 Series, and 15000 Series adapters. It is not supported with UCS VIC 1200 Series and 1300 Series adapters. SMB Direct with RoCEv2 is supported on all UCS Fabric Interconnects.


    Note


    RoCEv1 is not supported with Cisco UCS VIC 1400 Series, Cisco UCS VIC 14000 Series, and Cisco UCS VIC 15000 Series.


  • RoCEv2 configuration is supported only between Cisco adapters. Interoperability between Cisco adapters and third party adapters is not supported.

  • RoCEv2 supports two RoCEv2-enabled vNICs per adapter and four virtual ports per adapter interface, independent of SET switch configuration.

  • RoCEv2 cannot be used on the same vNIC interface as NVGRE, NetFlow, and VMQ features.

  • The RoCEv2 protocol is supported on Windows Server 2019 in NDKPI Mode 1 and Mode 2, with both IPv4 and IPv6.

  • RoCEv2-enabled vNIC interfaces must have the no-drop QoS system class enabled in Cisco UCS Manager.

  • The RoCE Properties queue pairs setting must be a minimum of four queue pairs.

  • The maximum number of queue pairs per adapter is 2048.

  • The maximum number of memory regions per rNIC interface is 131072.

  • Cisco UCS Manager does not support fabric failover for vNICs with RoCEv2 enabled.

  • SMB Direct with RoCEv2 is supported on both IPv4 and IPv6.

  • RoCEv2 cannot be used with GENEVE offload.

  • The QoS No Drop class configuration must be properly configured on upstream switches such as Cisco Nexus 9000 series switches. QoS configurations may vary between different upstream switches.

  • RoCEv2 cannot be used with usNIC.

MTU Properties

  • In older versions of the VIC driver, the MTU was derived from either a Cisco UCS Manager service profile or from the Cisco IMC vNIC MTU setting in a non-cluster setup. This behavior changed with Cisco UCS VIC 1400 Series and later adapters, where the MTU is controlled from the Windows OS Jumbo Packet advanced property. A value configured in Cisco UCS Manager or Cisco IMC has no effect.

  • The RoCEv2 MTU value is always a power of two, with a maximum limit of 4096.

  • RoCEv2 MTU is derived from the Ethernet MTU.

  • RoCEv2 MTU is the highest power of two that is less than or equal to the Ethernet MTU (see the PowerShell sketch after this list). For example:

    • if the Ethernet value is 1500, then the RoCEv2 MTU value is 1024

    • if the Ethernet value is 4096, then the RoCEv2 MTU value is 4096

    • if the Ethernet value is 9000, then the RoCEv2 MTU value is 4096
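
The Jumbo Packet advanced property can be inspected and changed with standard Windows PowerShell cmdlets. The following is a minimal sketch: the adapter name "Ethernet0" is a placeholder, and the display values accepted for Jumbo Packet vary by driver, so list the valid values before setting one.

    # Show the current Jumbo Packet (Ethernet MTU) setting on the RoCEv2 vNIC.
    Get-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Jumbo Packet"

    # List the display values this driver accepts, then set the Ethernet MTU.
    (Get-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Jumbo Packet").ValidDisplayValues
    Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Jumbo Packet" -DisplayValue "9014"

    # With a 9000/9014-byte Ethernet MTU, the derived RoCEv2 MTU is 4096
    # (the highest power of two not exceeding the Ethernet MTU, capped at 4096).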

Windows NDKPI Modes of Operation

  • The Network Direct Kernel Provider Interface (NDKPI) implementation supports two modes of operation: Mode 1 is native RDMA, and Mode 2 adds configuration of virtual ports with RDMA. Cisco does not support NDKPI Mode 3 operation. A basic Mode 1 verification is sketched after this list.

  • The recommended default adapter policy for RoCEv2 Mode 1 is Win-HPN-SMBd.

  • The recommended default adapter policy for RoCEv2 Mode 2 is MQ-SMBd.

  • RoCEv2-enabled vNICs used for Mode 2 operation require the QoS host control policy to be set to full.

  • Mode 2 is inclusive of Mode 1: Mode 1 must be enabled to operate Mode 2.

  • On Windows, the RoCEv2 interface supports MSI and MSIx interrupt modes. By default, it is in MSIx interrupt mode. Cisco recommends that you do not change the interrupt mode when the interface is configured with RoCEv2 properties.
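
As a minimal Mode 1 verification sketch, the following standard PowerShell cmdlets confirm that the RoCEv2 vNICs expose RDMA and that SMB Direct is using it; interface names and output columns depend on the system.

    # Confirm that RDMA is enabled on the RoCEv2 vNICs.
    Get-NetAdapterRdma

    # Verify that the SMB client sees the interfaces as RDMA capable.
    Get-SmbClientNetworkInterface

    # After accessing an SMB share, confirm that multichannel connections use RDMA.
    Get-SmbMultichannelConnection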

Downgrade Limitations

Cisco recommends that you remove the RoCEv2 configuration before downgrading to any release that does not support RoCEv2. If the configuration is not removed or disabled, the downgrade will fail.

Windows Requirements

Configuration and use of RDMA over Converged Ethernet for RoCEv2 in Windows Server requires the following:

  • Windows Server 2019 and later versions with the latest Microsoft updates

  • UCS Manager release 4.1.1 or later

  • VIC Driver version 5.4.0.x or later

  • Cisco UCS M5 B-Series or C-Series servers: only Cisco UCS VIC 1400 Series and VIC 15000 Series adapters are supported.


Note


All PowerShell commands and advanced property configurations are common across all Windows versions unless explicitly noted.


RoCEv2 for Linux

Guidelines for using NVMe over Fabrics (NVMeoF) with RoCEv2

General Guidelines and Limitations

  • Cisco recommends you check UCS Hardware and Software Compatibility specific to your Cisco UCS Manager release to determine support for NVMeoF. NVMeoF is supported on Cisco UCS M5 and later B-Series and C-Series servers.

  • NVMe over RDMA with RoCEv2 is supported with the fourth generation Cisco UCS VIC 1400 Series, Cisco UCS VIC 14000 Series, and Cisco UCS VIC 15000 Series adapters. NVMe over RDMA is not supported on Cisco UCS 6324 Fabric Interconnects or on Cisco UCS VIC 1200 Series and Cisco UCS VIC 1300 Series adapters.

  • When creating RoCEv2 interfaces, use the Cisco UCS Manager provided Linux-NVMe-RoCE adapter policy.


    Note


    Do not use the default Linux Adapter policy with RoCEv2; RoCEv2 interfaces will not be created in the OS.


  • When configuring RoCEv2 interfaces, install the matched set of enic and enic_rdma binary drivers downloaded from Cisco.com. Using the binary enic_rdma driver from Cisco.com with an inbox enic driver will not work. A basic driver and connection check is sketched after this list.

  • RoCEv2 supports a maximum of two RoCEv2-enabled interfaces per adapter.

  • Booting from an NVMeoF namespace is not supported.

  • RoCEv2 cannot be used with GENEVE offload.

  • Layer 3 routing is not supported.

  • RoCEv2 does not support bonding.

  • Saving a crashdump to an NVMeoF namespace during a system crash is not supported.

  • NVMeoF cannot be used with usNIC, VMFEX, VxLAN, VMQ, VMMQ, NVGRE, GENEVE Offload, and DPDK features.

  • Netflow monitoring is not supported on RoCEv2 interfaces.

  • In the Linux-NVMe-RoCE policy, do not change the Queue Pairs, Memory Regions, Resource Groups, or Priority settings from the Cisco provided default values. NVMeoF functionality may not be guaranteed with other settings for these parameters.

  • The QoS no drop class configuration must be properly configured on upstream switches such as Cisco Nexus 9000 series switches. QoS configurations will vary between different upstream switches.

  • Set MTU size correctly on the VLANs and QoS policy on upstream switches.

  • Spanning Tree Protocol (STP) may cause temporary loss of network connectivity when a failover or failback event occurs. To prevent this issue from occurring, disable STP on uplink switches.

  • Cisco UCS Manager does not support fabric failover for vNICs with RoCEv2 enabled.
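
The following shell sketch shows a basic driver check and NVMeoF connection using nvme-cli. It assumes the Cisco enic and enic_rdma drivers are already installed; the target address 192.0.2.10 and the subsystem NQN are placeholders.

    # Confirm that matched enic and enic_rdma drivers from the same Cisco.com package are installed.
    modinfo enic | grep -i ^version
    modinfo enic_rdma | grep -i ^version

    # Load the NVMe over RDMA transport, then discover and connect to the target.
    modprobe nvme_rdma
    nvme discover -t rdma -a 192.0.2.10 -s 4420
    nvme connect -t rdma -n nqn.2010-06.com.example:array1 -a 192.0.2.10 -s 4420

    # List the NVMe namespaces now visible over the fabric.
    nvme list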

Interrupts

  • The Linux RoCEv2 interface supports only MSIx interrupt mode. Cisco recommends that you do not change the interrupt mode when the interface is configured with RoCEv2 properties.

  • The minimum interrupt count for using RoCEv2 with Linux is 8.

Downgrade Limitations

Cisco recommends that you remove the RoCEv2 configuration before downgrading to any release that does not support RoCEv2.

Linux Requirements

Configuration and use of RoCEv2 in Linux requires the following:

  • InfiniBand kernel API module ib_core

  • Red Hat Enterprise Linux 8.x and 9.x versions

  • Cisco UCS Manager release 4.1.1 or later

  • Minimum VIC firmware 5.1(1x) for IPv4 support and 5.1(2x) for IPv6 support

  • Cisco UCS M5 and later B or C-series servers with Cisco UCS VIC 1400 or Cisco UCS VIC 15000 Series adapters

  • eNIC driver version 4.0.0.6-802-21 or later provided with the 4.1.1 release package

  • enic_rdma driver version 1.0.0.6-802-21 or later provided with the 4.1.1 release package


    Note


    Use eNIC driver version 4.0.0.10-802.34 or later and enic_rdma driver version 1.0.0.10-802.34 or later for IPv6 support.


  • A storage array that supports NVMeoF connection

RoCEv2 for ESXi

Guidelines for using RoCEv2 Protocol in the Native ENIC driver on ESXi

General Guidelines and Limitations

  • Cisco UCS Manager release 4.2(3b) supports RoCEv2 on ESXi 7.0 U3, ESXi 8.0, ESXi 8.0 U1, ESXi 8.0 U2, and ESXi 8.0 U3.

  • Cisco recommends you check UCS Hardware and Software Compatibility specific to your Cisco UCS Manager release to determine support for ESXi. RoCEv2 on ESXi is supported on Cisco UCS B-Series and C-Series servers with Cisco UCS VIC 15000 Series and later adapters.

  • RoCEv2 on ESXi is not supported on UCS VIC 1200, 1300 and 1400 Series adapters.

  • RDMA on ESXi nENIC currently supports only the ESXi NVMe stack that is part of the ESXi kernel. The current implementation does not support ESXi user space RDMA applications.

  • Multiple MAC addresses and multiple VLANs are supported only on VIC 15000 Series adapters.

  • RoCEv2 cannot be used with GENEVE offload.

  • RoCEv2 cannot be used on the same vNIC interface with VXLAN, Geneve Offload, QinQ, and VMQ.

  • RoCEv2 supports a maximum of two RoCEv2-enabled interfaces per adapter.

  • PVRDMA, vSAN over RDMA, and iSER are not supported.

  • The CoS setting is not supported on Cisco UCS Manager.

Downgrade Limitations

Cisco recommends that you remove the RoCEv2 configuration before downgrading to any release that does not support RoCEv2.

ESXi nENIC RDMA Requirements

Configuration and use of RoCEv2 in ESXi requires the following:

  • VMware ESXi 7.0 U3, ESXi 8.0, ESXi 8.0 U1, ESXi 8.0 U2, and ESXi 8.0 U3

  • Cisco UCS Manager release 4.2.3 or later

  • Cisco VMware nENIC driver version 2.0.10.0 for ESXi 7.0 U3, and version 2.0.11.0 for ESXi 8.0 and later. The driver provides both standard eNIC and RDMA support.

  • A storage array that supports NVMeoF connections. Currently, this is tested and supported on Pure Storage arrays with Cisco Nexus 9300 Series switches. Example verification commands are sketched after this list.
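
As a minimal sketch, the following esxcli commands verify the RDMA devices exposed by the nENIC driver and connect to an NVMeoF subsystem. The adapter name vmhba64, the IP address, and the NQN are placeholders, and option spellings can vary slightly between ESXi releases.

    # List the RDMA devices presented by the nENIC driver.
    esxcli rdma device list

    # List NVMe adapters, then discover and connect to the NVMeoF subsystem.
    esxcli nvme adapter list
    esxcli nvme fabrics discover -a vmhba64 -i 192.0.2.20 -p 4420
    esxcli nvme fabrics connect -a vmhba64 -i 192.0.2.20 -p 4420 -s nqn.2010-06.com.example:array1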

SRIOV for ESXi

Guidelines and Limitations

  • Cisco recommends that you check UCS Hardware and Software Compatibility specific to your Cisco UCS Manager release to determine support for SR-IOV.

  • SR-IOV is supported with Cisco UCS VIC 1400 series, 15000 series, and later series adapters. SR-IOV is not supported on Cisco UCS VIC 1200 and 1300 series adapters.

  • SR-IOV is supported with Cisco UCS AMD®/Intel® based C-Series, B-Series, and X-Series servers.

  • SR-IOV cannot be configured on the same vNIC with VXLAN, Geneve Offload, QinQ, VMQ/VMMQ, RoCE, or usNIC.

  • aRFS is not supported on SR-IOV VF.

  • iSCSI boot is not supported on SR-IOV VF.

  • DPDK is not supported on SR-IOV VFs when the host runs a Linux OS.

  • SR-IOV interface supports MSIx interrupt mode.

  • Precision Time Protocol (PTP) is not supported on SR-IOV VF.

  • Cisco recommends that you do not downgrade the adapter firmware to a version lower than 5.3(2.32) and that you remove SR-IOV related configurations before downgrading Cisco UCS Manager to a release that does not support SR-IOV.

  • For Cisco UCS VIC 1400/14000, Receive Side Scaling (RSS) must be enabled on PF to support VF RSS.


    Note


    Turning off RSS on the PF disables RSS on all VFs.

  • For Cisco UCS VIC 15000 Series adapters, turning off RSS on the PF does not affect RSS on the VFs.


    Note


    The PF and VF RSS are independent of each other. The VF driver enables and configures RSS on the VF when there are multiple RQs.

  • On ESXi hosts configured with SR-IOV vNICs and Virtual Machines (VMs) utilizing enumerated Virtual Functions (VFs), the system might experience a Purple Screen of Death (PSOD) during cold or warm reboot operations. Host-side VF enumeration can be checked with the commands sketched after this list.
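
As a minimal sketch, these esxcli commands show which uplinks have SR-IOV enabled and which VFs are enumerated on the host; vmnic4 is a placeholder for the VIC uplink that backs the SR-IOV vNIC.

    # List the physical NICs that currently have SR-IOV enabled.
    esxcli network sriovnic list

    # Show the virtual functions enumerated on a given SR-IOV capable uplink.
    esxcli network sriovnic vf list -n vmnic4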

ESXi Requirements

Configuration and use of SR-IOV in ESXi requires the following:

  • Cisco UCS Manager release 4.3(2b)

  • Cisco VIC firmware version 5.3(2.32) or later

  • VMware ESXi 7.0 U3 and 8.0 or later

  • VMs with RHEL 8.7 or later and RHEL 9.0 or later

  • Cisco VMware nENIC driver version 2.0.10.0 for ESXi 7.0 U3 and 2.0.11.0 for ESXi 8.0 and later

  • Cisco RHEL ENIC driver version 4.4.0.1-930.10 or later

SRIOV for Linux

Guidelines and Limitations

  • Cisco recommends that you check UCS Hardware and Software Compatibility specific to your Cisco UCS Manager release to determine the support for SR-IOV.

  • SR-IOV is supported with Cisco UCS VIC 1400, 14000, and 15000 Series adapters. SR-IOV is not supported on Cisco UCS VIC 1200 and 1300 Series adapters.

  • SR-IOV is supported with AMD®/Intel® based Cisco UCS C-Series, B-Series, and X-Series servers.

  • SR-IOV is not supported in Physical NIC mode.

  • SR-IOV does not support VLAN Access mode.

  • SR-IOV cannot be configured on the same vNIC with VXLAN, Geneve Offload, QinQ, VMQ/VMMQ, RoCE, or usNIC.

  • aRFS is not supported on SR-IOV VF.

  • iSCSI boot is not supported on SR-IOV VF.

  • DPDK is not supported on SR-IOV VFs when the host runs a Linux OS.

  • SR-IOV interface supports MSIx interrupt mode.

  • Precision Time Protocol (PTP) is not supported on SR-IOV VF.

  • Cisco recommends that you do not downgrade the adapter firmware to a version lower than 5.3(2.32) and that you remove SR-IOV related configurations before downgrading Cisco UCS Manager to a release that does not support SR-IOV.

  • On Linux hosts with SR-IOV configured on vNICs, enumerated Virtual Functions (VFs) are not persistent across reboots and must be recreated after each system reboot, for example with the sysfs commands sketched after this list.
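
As a minimal sketch of recreating VFs after a reboot, the standard Linux sysfs interface is shown below. The interface name eth0 is a placeholder for the enic interface backing the SR-IOV vNIC, and the VF count must not exceed what the vNIC is configured for.

    # Check how many VFs the interface supports and how many are currently enabled.
    cat /sys/class/net/eth0/device/sriov_totalvfs
    cat /sys/class/net/eth0/device/sriov_numvfs

    # Recreate the VFs (two in this example) after a reboot, then confirm they are visible.
    echo 2 > /sys/class/net/eth0/device/sriov_numvfs
    ip link show eth0
    lspci | grep -i cisco    # the new VF PCI functions should now be listed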

Linux Requirements

Configuration and use of SR-IOV in Linux requires the following:

  • Host OS: Red Hat Enterprise Linux 8.10 or later, 9.4 or later, or Ubuntu 22.04.2 LTS

  • Guest OS: Red Hat Enterprise Linux 8.10, 9.4, or Ubuntu 22.04.2 LTS

  • Virtualization Packages installed on the host

  • eNIC driver version 4.7.0.5-1076.6 or later

  • Cisco UCS Manager Release 4.3(5a) or later

  • Cisco VIC firmware 5.3(4.75) or later

NDIS Poll Mode support for Windows

Overview and Supported Configurations

Overview

NDIS Poll Mode allows the operating system to control how network traffic is processed, providing improvements over older methods that relied on Deferred Procedure Calls (DPCs). By using the NDIS layer, the operating system manages network processing directly, which enhances performance, stability, and efficiency, especially during periods of heavy network activity.

Supported Configurations for NDIS Poll Mode:

  • Supported Operating Systems: Windows Server 2025 and later

  • Supported Adapters: Cisco UCS VIC 15000 Series


Note


For additional information on NDIS Poll Mode and its underlying mechanisms in Windows drivers, refer to Microsoft's Windows Driver and Network documentation.