Guidelines for Using SMB Direct with RoCEv2
General Guidelines and Limitations
-
Cisco IMC 4.1.x and later releases support Microsoft SMB Direct with RoCEv2 on Windows. Cisco recommends that you install all current KB updates from Microsoft. See Windows Requirements.
Note
RoCEv2 is not supported on Windows Server 2016.
-
Cisco recommends that you check the UCS Hardware and Software Compatibility matrix specific to your Cisco IMC release to determine support for Microsoft SMB Direct with RoCEv2 on Microsoft Windows Server 2019.
-
Microsoft SMB Direct with RoCEv2 is supported only with Cisco UCS VIC 14xx series adapters. RoCEv2 is not supported on UCS VIC 12xx Series and 13xx Series adapters.
Note
RoCE v1 is not supported with Cisco UCS VIC 14xx adapters.
-
RoCEv2 configuration is supported only between Cisco adapters. Interoperability between Cisco adapters and third-party adapters is not supported.
-
RoCEv2 supports two RoCEv2-enabled vNICs per adapter and four virtual ports per adapter interface, independent of SET switch configuration.
-
RoCEv2 cannot be used on the same vNIC interface as the Geneve offload, NVGRE, NetFlow, and VMQ features.
Note
RoCEv2 cannot be configured if the Geneve offload feature is enabled on any of the interfaces of a specific adapter.
-
The RoCEv2 protocol is supported on Windows Server 2019 with NDKPI Mode 1 and Mode 2, with both IPv4 and IPv6.
-
RoCEv2 enabled vNIC interfaces must have the no-drop QoS system class enabled in Cisco IMC.
-
The queue pairs setting in the RoCEv2 properties must be a minimum of 4 queue pairs.
-
Maximum number of queue pairs per adapter is 2048.
-
The maximum number of memory regions per RNIC interface is 131072.
-
Cisco IMC does not support fabric failover for vNICs with RoCEv2 enabled.
-
The no-drop QoS class must be configured correctly on upstream switches, for example Cisco Nexus 9000 series switches.
QoS configurations vary between different upstream switches.
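As an illustration only, a no-drop class on a Cisco Nexus 9000 might be sketched as follows. The class and policy names, the CoS value (5), and the qos-group number are placeholders, and the exact syntax varies by NX-OS release and platform; consult your switch documentation before applying anything similar.

```
! Illustrative sketch: classify RoCE traffic by CoS and make it no-drop (PFC)
class-map type qos match-all class-roce
  match cos 5
policy-map type qos qos-roce
  class class-roce
    set qos-group 3
class-map type network-qos class-roce-nq
  match qos-group 3
policy-map type network-qos nq-roce
  class type network-qos class-roce-nq
    pause pfc-cos 5
    mtu 9216
! The network-qos policy is then applied system-wide under "system qos",
! and the qos classification policy on the relevant interfaces.
```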
-
Configuration of RoCEv2 on the Windows platform requires configuring RoCEv2 Mode 1 first, then RoCEv2 Mode 2. Modes 1 and 2 relate to the implementation of the Network Direct Kernel Provider Interface (NDKPI): Mode 1 is native RDMA, and Mode 2 adds configuration for virtual ports with RDMA.
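For orientation, Windows PowerShell cmdlets such as the following are commonly used to check RDMA state after each mode is configured. This is a sketch, not a Cisco-prescribed procedure; output is omitted.

```
# Mode 1 (native RDMA): confirm RDMA is enabled on the adapter
Get-NetAdapterRdma
# Confirm SMB sees RDMA-capable interfaces
Get-SmbClientNetworkInterface
# Mode 2 (virtual ports): confirm RDMA on the Hyper-V host virtual adapter
Get-VMNetworkAdapterRdma -ManagementOS
```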
MTU Properties
-
MTU in Windows is derived from the Jumbo Packet advanced property, rather than from the Cisco IMC configuration.
-
In older versions of the VIC driver, the MTU was derived from Cisco IMC in standalone mode. This behavior changed for VIC 14xx series adapters, where MTU is controlled from the Windows OS Jumbo Packet advanced property. A value configured from Cisco IMC has no effect.
-
The RoCEv2 MTU value is always a power of two, with a maximum limit of 4096.
-
RoCEv2 MTU is derived from the Ethernet MTU.
-
RoCEv2 MTU is the highest power of two that is less than or equal to the Ethernet MTU, up to the 4096 maximum. For example:
-
If the Ethernet value is 1500, then the RoCEv2 MTU value is 1024.
-
If the Ethernet value is 4096, then the RoCEv2 MTU value is 4096.
-
If the Ethernet value is 9000, then the RoCEv2 MTU value is 4096.
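The derivation above can be sketched as a small helper (hypothetical, not part of any Cisco or Windows tooling), assuming the 4096-byte RoCEv2 maximum stated earlier:

```python
def rocev2_mtu(ethernet_mtu: int) -> int:
    """Return the RoCEv2 MTU: the highest power of two less than or
    equal to the Ethernet MTU, capped at the 4096-byte maximum.

    Hypothetical helper illustrating the rule in the text.
    """
    mtu = 1
    while mtu * 2 <= ethernet_mtu:
        mtu *= 2
    return min(mtu, 4096)

print(rocev2_mtu(1500))  # 1024
print(rocev2_mtu(4096))  # 4096
print(rocev2_mtu(9000))  # 4096
```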
-
RoCEv2 Modes of Operation
Cisco IMC provides two modes of RoCEv2 configuration depending on the release:
-
From Cisco IMC Release 4.1(1c) onwards, RoCEv2 can be configured with Mode 1 and Mode 2.
Mode 1 uses the existing RoCEv2 properties with Virtual Machine Queue (VMQ).
Mode 2 introduces an additional set of Multi-Queue RoCEv2 properties.
RoCEv2-enabled vNICs for Mode 2 operation require that Trust Host CoS is enabled.
RoCEv2 Mode 2 depends on Mode 1: RoCEv2 Mode 1 must be enabled before RoCEv2 Mode 2 can operate.
-
In Cisco IMC releases earlier than 4.1(1c), only Mode 1 is supported, and it is configured from the VMQ RoCEv2 properties.
Downgrade Limitations
Cisco recommends that you remove the RoCEv2 configuration before downgrading to a release that does not support RoCEv2. If the configuration is not removed or disabled, the downgrade may fail.