Configuring RDMA Over Converged Ethernet (RoCE) version 2

Configuring RoCEv2 in Windows

Configuring vNIC Properties in Mode 1

Follow this procedure to configure vNIC Properties using the VMQ RoCEv2 properties.

Before you begin

Ensure that you are familiar with the Cisco IMC GUI interface.

SUMMARY STEPS

  1. In the Navigation pane, click the Networking menu.
  2. In the Adapter Card pane, click the vNICs tab.
  3. In the vNICs pane, select the vNIC (either the default eth0 or eth1, or any other newly created vNIC).
  4. Configure the vNIC properties as desired. See the configuration guide for detailed procedures. To configure RoCEv2 in Mode 1, also perform the remaining steps.
  5. In the vNIC Properties pane, under the Ethernet Interrupt area, update the following fields:
  6. In the vNIC Properties, under the RoCE Properties area, update the following fields:

DETAILED STEPS


Step 1

In the Navigation pane, click the Networking menu.

Step 2

In the Adapter Card pane, click the vNICs tab.

Step 3

In the vNICs pane, select the vNIC (either the default eth0 or eth1, or any other newly created vNIC).

Step 4

Configure the vNIC properties as desired. See the configuration guide for detailed procedures. To configure RoCEv2 in Mode 1, also perform the remaining steps.

Step 5

In the vNIC Properties pane, under the Ethernet Interrupt area, update the following fields:

Field

Description

Interrupt Count field

Set the interrupt count to (number of logical processors x 2) + 4.
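As a cross-check, the value can be computed on the Windows host with PowerShell; this is a minimal sketch, assuming the host reports its logical processor count through the Win32_ComputerSystem CIM class:

PS C:\Users\Administrator> $lp = (Get-CimInstance Win32_ComputerSystem).NumberOfLogicalProcessors
PS C:\Users\Administrator> ($lp * 2) + 4    # value to enter in the Interrupt Count field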

Step 6

In the vNIC Properties, under the RoCE Properties area, update the following fields:

Field

Description

RoCE check box

Check the RoCE check box to enable the RoCE Properties.

Queue Pairs field

The number of Queue pairs per adapter. Enter an integer between 1 and 2048. We recommend that the value be an integer power of 2. The recommended value is 256.

Memory Regions field

The number of memory regions per adapter. Enter an integer between 1 and 524288. We recommend that the value be an integer power of 2. The recommended value is 131072.

Resource Groups field

The number of resource groups per adapter. Enter an integer between 1 and 128. We recommend that the value be an integer power of 2. The recommended value is 2.

Class of Service drop-down list

Specify the no-drop QoS CoS value. The same value must be configured on the uplink switch. The default no-drop QoS CoS is 5.


What to do next

Perform the host verification to ensure that Mode 1 is configured correctly. See Verifying the Configurations on the Host.

Configuring SMB Direct Mode 1 on the Host System

Perform this procedure to configure a connection between smb-client and smb-server on two host interfaces. For each of these servers, smb-client and smb-server, configure the RoCEv2-enabled vNIC.

Before you begin

Configure RoCEv2 for Mode 1 from Cisco IMC.

Procedure


Step 1

In the Windows host, go to the Device Manager and select the appropriate Cisco VIC Ethernet Interface.

Step 2

Select the Advanced tab and verify that the Network Direct Functionality property is Enabled. If not, enable it and click OK.

Perform this step for both the smb-server and smb-client vNICs.

Step 3

Go to Tools > Computer Management > Device Manager > Network Adapter > click VIC Network Adapter > Properties > Advanced > Network Direct Functionality. Perform this operation for both the smb-server and smb-client vNICs.

Step 4

Verify that RoCE is enabled on the host operating system using PowerShell.

Execute the Get-NetOffloadGlobalSetting command to verify that NetworkDirect is enabled:

PS C:\Users\Administrator> Get-NetOffloadGlobalSetting
 
ReceiveSideScaling           : Enabled
ReceiveSegmentCoalescing     : Enabled
Chimney                      : Disabled
TaskOffload                  : Enabled
NetworkDirect                : Enabled
NetworkDirectAcrossIPSubnets : Blocked
PacketCoalescingFilter       : Disabled

Note

 

If the NetworkDirect setting is showing as disabled, enable it using the following command:

Set-NetOffloadGlobalSetting -NetworkDirect enabled

Step 5

Open PowerShell and execute the Get-SmbClientNetworkInterface command.

PS C:\Users\Administrator>
PS C:\Users\Administrator> Get-SmbClientNetworkInterface
Interface Index   RSS Capable   RDMA Capable   Speed     IpAddresses       Friendly Name
---------------   -----------   ------------   -------   -----------       ---------------
14                True          False          40 Gbps   {10.37.60.162}    vEthernet (vswitch)
26                True          True           40 Gbps   {10.37.60.158}    vEthernet (vp1)
9                 True          True           40 Gbps   {50.37.61.23}     Ethernet 2
5                 False         False          40 Gbps   {169.254.10.5}    Ethernet (Kernel Debugger)
8                 True          False          40 Gbps   {169.254.4.26}    Ethernet 3
PS C:\Users\Administrator>

Step 6

Enter Enable-NetAdapterRdma -Name "Ethernetname" to enable RDMA on the interface.
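For example, using the RDMA-capable interface name reported by Get-SmbClientNetworkInterface in the previous step (Ethernet 2 here is only an illustration), you can enable RDMA and then confirm it:

PS C:\Users\Administrator> Enable-NetAdapterRdma -Name "Ethernet 2"
PS C:\Users\Administrator> Get-NetAdapterRdma -Name "Ethernet 2"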

Step 7

Verify the overall RoCEv2 Mode 1 configuration at the host:

  1. Use the PowerShell command netstat -xan to verify the listeners in both the smb-client and smb-server Windows hosts; the listeners are shown in the command output.

    PS C:\Users\Administrator>
    PS C:\Users\Administrator> netstat -xan
    Active NetworkDirect Connections, Listeners, SharedEndpoints
    Mode    IfIndex    Type    Local Address    Foreign Address    PID
    Kernel    9        Listener  50.37.61.23:445    NA             0
    Kernel    26       Listener  10.37.60.158:445   NA             0
    PS C:\Users\Administrator>
  2. Go to the smb-client server fileshare and start an I/O operation.

  3. Go to the performance monitor and check that it displays the RDMA activity.

Step 8

In the PowerShell window, run netstat -xan again and check that the connection entries are displayed. You can also run netstat -xan from the command prompt. If the connection entries show up in the netstat -xan output, the RoCEv2 Mode 1 connections are correctly established between client and server.


PS C:\Users\Administrator> netstat -xan
Active NetworkDirect Connections, Listeners, SharedEndpoints
Mode    IfIndex    Type    Local Address        Foreign Address    PID
Kernel   4    Connection    50.37.61.22:445    50.37.61.71:2240    0
Kernel   4    Connection    50.37.61.22:445    50.37.61.71:2496    0
Kernel   11   Connection    50.37.61.122:445   50.37.61.71:2752    0
Kernel   11   Connection    50.37.61.122:445   50.37.61.71:3008    0
Kernel   32   Connection    10.37.60.155:445   50.37.60.61:49092   0
Kernel   32   Connection    10.37.60.155:445   50.37.60.61:49348   0
Kernel   26   Connection    50.37.60.32:445    50.37.60.61:48580   0
Kernel   26   Connection    50.37.60.32:445    50.37.60.61:48836   0
Kernel   4    Listener      50.37.61.22:445    NA                  0
Kernel   11   Listener      50.37.61.122:445   NA                  0
Kernel   32   Listener      10.37.60.155:445   NA                  0
Kernel   26   Listener      50.37.60.32:445    NA                  0

Step 9

By default, Microsoft's SMB Direct establishes two RDMA connections per RDMA interface. You can change the number of RDMA connections per RDMA interface to one or any number of connections.

For example, to increase the number of RDMA connections to 4, execute the following command in PowerShell:

PS C:\Users\Administrator> Set-ItemProperty -Path `
"HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" `
-Name ConnectionCountPerRdmaNetworkInterface -Type DWORD -Value 4 -Force
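To confirm that the new value took effect, you can read the same registry key back; a minimal verification sketch:

PS C:\Users\Administrator> Get-ItemProperty -Path `
"HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" `
-Name ConnectionCountPerRdmaNetworkInterface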

Configuring vNIC Properties in Mode 2

Follow this procedure to configure vNIC Properties in Mode 2. You can perform this procedure using Cisco IMC release 4.1(1c) or higher.

Before you begin

  • Ensure that you are familiar with the Cisco IMC GUI interface.

  • Ensure that you are using Cisco IMC release 4.1(1c) or higher.

SUMMARY STEPS

  1. In the Navigation pane, click the Networking menu.
  2. In the Adapter Card pane, click the vNICs tab.
  3. In the vNICs pane, select the vNIC (either the default eth0 or eth1, or any other newly created vNIC).
  4. Configure the vNIC properties as desired. See the configuration guide for detailed procedures. To configure RoCEv2 in Mode 2, also perform the remaining steps.
  5. In the vNIC Properties pane, under the General area, update the following fields:
  6. In the vNIC Properties pane, under the Ethernet Interrupt area, update the following fields:
  7. In the vNIC Properties pane, under the Multi Queue area, update the following fields:
  8. In the vNIC Properties pane, under the RoCE Properties area, update the following fields:

DETAILED STEPS


Step 1

In the Navigation pane, click the Networking menu.

Step 2

In the Adapter Card pane, click the vNICs tab.

Step 3

In the vNICs pane, select the vNIC (either the default eth0 or eth1, or any other newly created vNIC).

Step 4

Configure the vNIC properties as desired. See the configuration guide for detailed procedures. To configure RoCEv2 in Mode 2, also perform the remaining steps.

Step 5

In the vNIC Properties pane, under the General area, update the following fields:

Field

Description

Trust Host CoS check box

Check the Trust Host CoS check box.

Enable VMQ check box

Check the Enable VMQ check box.

Note

 

Uncheck the RoCE check box to disable the RoCE properties before enabling VMQ.

Enable Multi Queue check box

Check the Enable Multi Queue check box.

No. of Sub vNICs field

Enter the number of sub vNICs. The default value is 64.

Step 6

In the vNIC Properties pane, under the Ethernet Interrupt area, update the following fields:

Field

Description

Interrupt Count field

Set the interrupt count to (number of logical processors x 2) + 4.

Step 7

In the vNIC Properties pane, under the Multi Queue area, update the following fields:

Field

Description

RoCE check box

Check the RoCE check box to enable the RoCE Properties.

Queue Pairs field

The number of Queue pairs per adapter. Enter an integer between 1 and 2048. We recommend that the value be an integer power of 2. The recommended value is 256.

Memory Regions field

The number of memory regions per adapter. Enter an integer between 1 and 524288. We recommend that the value be an integer power of 2. The recommended value is 65536.

Resource Groups field

The number of resource groups per adapter. Enter an integer between 1 and 128. We recommend that the value be an integer power of 2. The recommended value is 2.

Class of Service drop-down list

Specify the no-drop QoS CoS value. The same value must be configured on the uplink switch. The default no-drop QoS CoS is 5.

Receive Queue Count field

The number of receive queues per adapter. Enter an integer between 1 and 1000.

Transmit Queue Count field

The number of transmit queues per adapter. Enter an integer between 1 and 1000.

Completion Queue Count field

The number of completion queues per adapter. Enter an integer between 1 and 2000.

Step 8

In the vNIC Properties pane, under the RoCE Properties area, update the following fields:

Field

Description

RoCE check box

Check the RoCE check box to enable the RoCE Properties.

Queue Pairs field

The number of Queue pairs per adapter. Enter an integer between 1 and 2048. We recommend that the value be an integer power of 2. The recommended value is 256.

Memory Regions field

The number of memory regions per adapter. Enter an integer between 1 and 524288. We recommend that the value be an integer power of 2. The recommended value is 131072.

Resource Groups field

The number of resource groups per adapter. Enter an integer between 1 and 128. We recommend that the value be an integer power of 2. The recommended value is 2.

Class of Service drop-down list

Specify the no-drop QoS CoS value. The same value must be configured on the uplink switch. The default no-drop QoS CoS is 5.


What to do next

Perform the host verification to ensure that Mode 2 is configured correctly. See Verifying the Configurations on the Host.

Configuring SMB Direct Mode 2 on the Host System

This task uses Hyper-V virtualization software that is compatible with Windows Server 2019 and later.

Before you begin

  • Configure and confirm the connection for RoCEv2 Mode 2 for both Cisco IMC and the host.

  • Configure RoCEv2 Mode 2 connection for Cisco IMC.

  • Enable Hyper-V at the Windows host server.

Procedure


Step 1

Go to the Hyper-V switch manager.

Step 2

Create a new Virtual Network Switch (vswitch) for the RoCEv2-enabled Ethernet interface.

  1. Choose External Network and select VIC Ethernet Interface 2 and Allow management operating system to share this network adapter.

  2. Click OK to create the virtual switch.

Bring up the PowerShell interface.

Step 3

Configure the non-default vport and enable RDMA with the following PowerShell commands:

add-vmNetworkAdapter -switchname vswitch -name vp1 -managementOS
enable-netAdapterRdma -name "vEthernet (vp1)"
PS C:\Users\Administrator>
PS C:\Users\Administrator> add-vmNetworkAdapter -switchName vswitch -name vp1 -managementOS
PS C:\Users\Administrator> enable-netAdapterRdma -name "vEthernet (vp1)"
PS C:\Users\Administrator>
  1. Configure the set-switch using the following PowerShell command.

    new-vmswitch -name setswitch -netAdapterName "Ethernet x" -enableEmbeddedTeaming $true

    This creates the switch. Use the following command to display the interfaces:

    get-netadapterrdma
  2. Add a vport:

    add-vmNetworkAdapter -switchname setswitch -name svp1
    You see the new vport when you again enter:
    get-netadapterrdma
  3. Enable RDMA on the vport:

    enable-netAdapterRdma -name "vEthernet (svp1)"

Step 4

Configure the IPv4 addresses on the RDMA-enabled vport in both servers.
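A minimal PowerShell sketch, assuming the RDMA-enabled vport created above is named vEthernet (svp1) and the address and prefix length are placeholders for your subnet:

PS C:\Users\Administrator> New-NetIPAddress -InterfaceAlias "vEthernet (svp1)" -IPAddress 10.37.60.158 -PrefixLength 24
PS C:\Users\Administrator> Get-NetIPAddress -InterfaceAlias "vEthernet (svp1)" -AddressFamily IPv4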

Step 5

Create a share in smb-server and map the share in the smb-client.

  1. For smb-client and smb-server in the host system, configure the RoCEv2-enabled vNIC as described above.

  2. Configure the IPv4 addresses of the primary fabric and sub-vNICs in both servers, using the same IP subnet and same unique VLAN for both.

  3. Create a share in smb-server and map the share in the smb-client, as sketched below.
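A minimal PowerShell sketch of the share and mapping steps; the share name, path, drive letter, and server address are placeholders, and the access list should be restricted as appropriate in a real deployment.

On the smb-server host:

PS C:\Users\Administrator> New-SmbShare -Name "rdma-share" -Path "C:\rdma-share" -FullAccess "Everyone"

On the smb-client host:

PS C:\Users\Administrator> New-SmbMapping -LocalPath "Z:" -RemotePath "\\10.37.60.158\rdma-share"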

Step 6

Finally, verify the Mode 2 configuration.

  1. Use the PowerShell command netstat -xan to display listeners and their associated IP addresses.

    PS C:\Users\Administrator>
    PS C:\Users\Administrator> netstat -xan
    Active    NetworkDirect Connections, Listeners, SharedEndpoints
    Mode    IfIndex    Type    Local Address    Foreign Address    PID
    Kernel    9 Listener    50.37.61.23:445    NA    0
    Kernel    26 Listener    10.37.60.158:445    NA    0
    PS C:\Users\Administrator>
  2. Start any RDMA I/O in the file share in smb-client.

  3. Issue the netstat -xan command again and check for the connection entries to verify they are displayed.

    PS C:\Users\Administrator>
    PS C:\Users\Administrator> netstat -xan
    Active NetworkDirect Connections, Listeners, SharedEndpoints
    Mode    IfIndex    Type    Local Address    Foreign Address    PID
    Kernel    9    Connection    50.37.61.23:192     50.37.61.184:445    0
    Kernel    9    Connection    50.37.61.23:448     50.37.61.184:445    0
    Kernel    9    Connection    50.37.61.23:704     50.37.61.214:445    0
    Kernel    9    Connection    50.37.61.23:960     50.37.61.214:445    0
    Kernel    9    Connection    50.37.61.23:1216    50.37.61.224:445    0
    Kernel    9    Connection    50.37.61.23:1472    50.37.61.224:445    0
    Kernel    9    Connection    50.37.61.23:1728    50.37.61.234:445    0
    Kernel    9    Connection    50.37.61.23:1984    50.37.61.234:445    0
    Kernel    9    Listener      50.37.61.23:445     NA                  0
    Kernel    26   Listener      10.37.60.158:445    NA                  0
    PS C:\Users\Administrator>

Verifying the Configurations on the Host

Once the configurations are done, you should perform the following:

  • Host verification of Mode 1 and Mode 2 configurations

  • Host verification for RDMA capable ports

  • Verification of RDMA capable ports using Advanced Property

  • V port assignment on each PF

SUMMARY STEPS

  1. The NIC driver creates kernel socket listeners on each RDMA-capable port in Mode 1 and on each vPort in Mode 2 to accept incoming remote RDMA requests.
  2. Verify the RDMA-capable ports at the host.
  3. The netstat -xan output shows established connections in addition to listeners. If the output shows only listeners while traffic is flowing, the traffic is passing only on the TCP path. If connections are created on the PF or vPorts, the traffic is passing on the RDMA path.
  4. Verify the RDMA-capable port using the Advanced property page. The driver requires Network Direct functionality to be enabled on the RDMA-capable vNIC.
  5. Verify the vPort assignment on each PF.

DETAILED STEPS


Step 1

The NIC driver creates kernel socket listeners on each RDMA-capable port in Mode 1 and on each vPort in Mode 2 to accept incoming remote RDMA requests.

Example:

PS C:\Users\Administrator.ADMINISTRATOR> NETSTAT.EXE -xan
Active NetworkDirect Connections, Listeners, SharedEndpoints
Mode    IfIndex Type      Local Address  Foreign Address  PID
Kernel  75      Listener  50.6.5.33:445  NA               0
Kernel  19      Listener  50.6.5.34:445  NA               0
Kernel  38      Listener  50.6.5.35:445  NA               0
Kernel  89      Listener  50.6.5.36:445  NA               0
Kernel  37      Listener  50.6.5.37:445  NA               0
Kernel  23      Listener  50.6.5.38:445  NA               0
Kernel  42      Listener  50.6.5.39:445  NA               0
Kernel  40      Listener  50.6.5.40:445  NA               0
Kernel  61      Listener  50.6.5.41:445  NA               0
Kernel  79      Listener  50.6.5.42:445  NA               0
Kernel   2      Listener  50.6.5.43:445  NA               0
Kernel  88      Listener  50.6.5.44:445  NA               0
Kernel  11      Listener  50.6.5.45:445  NA               0
Kernel   9      Listener  50.6.5.46:445  NA               0
Kernel  82      Listener  50.6.5.47:445  NA               0
Kernel  83      Listener  50.6.5.48:445  NA               0
Kernel  73      Listener  50.6.5.49:445  NA               0
Kernel  71      Listener  50.6.5.50:445  NA               0
Kernel  50      Listener  50.6.5.51:445  NA               0
Kernel   8      Listener  50.6.5.52:445  NA               0
Kernel   5      Listener  50.6.5.53:445  NA               0
Kernel  68      Listener  50.6.5.54:445  NA               0
Kernel  76      Listener  50.6.5.55:445  NA               0
Kernel  34      Listener  50.6.5.56:445  NA               0

Step 2

Verify the RDMA-capable ports at the host.

Example:

PS C:\Users\administrator> Get-NetAdapterRdma

Name InterfaceDescription             Enabled   PFC    ETS
---- -------------------------------  --------  ------ -------                     
eth2 Cisco VIC Ethernet Interface #3   True     False  False
eth1 Cisco VIC Ethernet Interface #2   True     False  False
eth0 Cisco VIC Ethernet Interface      False    False  False

Step 3

The netstat -xan output shows established connections in addition to listeners. If the output shows only listeners while traffic is flowing, the traffic is passing only on the TCP path. If connections are created on the PF or vPorts, the traffic is passing on the RDMA path.

Example:

PS C:\Users\administrator> netstat -xan

Active NetworkDirect Connections, Listeners, SharedEndpoints

Mode   IfIndex Type   Local Address    Foreign Address     PID
-----  -------------  ---------------  ---------------    -----
Kernel 3 Connection   50.28.1.19:445   50.28.1.14:9408      0
Kernel 3 Connection   50.28.1.19:445   50.28.1.14:9664      0
Kernel 3 Connection   50.28.1.19:445   50.28.1.84:12480     0
Kernel 3 Connection   50.28.1.19:445   50.28.1.84:13504     0
Kernel 3 Connection   50.28.1.19:445   50.28.1.105:15808    0
Kernel 3 Connection   50.28.1.19:445   50.28.1.97:20672     0
Kernel 3 Connection   50.28.1.19:445   50.28.1.111:10432    0
Kernel 3 Connection   50.28.1.19:445   50.28.1.111:11968    0
Kernel 3 Connection   50.28.1.19:445   50.28.1.111:12736    0
Kernel 3 Connection   50.28.1.19:1472  50.28.1.14:445       0

Step 4

Verify the RDMA-capable port using the Advanced property page. The driver requires Network Direct functionality to be enabled on the RDMA-capable vNIC.
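The same check can be scripted from PowerShell; a minimal sketch, assuming the adapter exposes the setting under the display name Network Direct Functionality and using an example adapter name:

Example:

PS C:\Users\Administrator> Get-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Network Direct Functionality"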

Step 5

Verify the vPort assignment on each PF.

Example:

PS C:\Users\Administrator> Get-NetAdapterVPort

Name           ID MacAddress         VID   ProcMask   FID State       ITR       QPairs
----           -- ----------         ---   --------   --- -----       ---       ------
Eth3-605-RDMA  0                           0:0        PF  Activated   Unknown   1
Eth3-605-RDMA  1  00-15-5D-ED-EE-36        0:2        PF  Activated   Adaptive  1
Eth3-605-RDMA  2  00-15-5D-ED-EE-2A        0:0        PF  Activated   Adaptive  1
Eth3-605-RDMA  3  00-15-5D-ED-EE-35        0:0        PF  Activated   Adaptive  1
Eth3-605-RDMA  4  00-15-5D-ED-EE-2D        0:0        PF  Activated   Adaptive  1
Eth3-605-RDMA  5  00-15-5D-ED-EE-31        0:0        PF  Activated   Adaptive  1
Eth5-605-RDMA  0                           0:0        PF  Activated   Unknown   1
Eth5-605-RDMA  1  00-15-5D-ED-EE-33        0:8        PF  Activated   Adaptive  1
Eth5-605-RDMA  2  00-15-5D-ED-EE-2B        0:0        PF  Activated   Adaptive  1
Eth5-605-RDMA  3  00-15-5D-ED-EE-29        0:0        PF  Activated   Adaptive  1
Eth5-605-RDMA  4  00-15-5D-ED-EE-30        0:0        PF  Activated   Adaptive  1
Eth5-605-RDMA  5  00-15-5D-ED-EE-2C        0:0        PF  Activated   Adaptive  1

Removing RoCEv2 on vNIC Interface Using Cisco IMC GUI

You must perform this task to remove RoCEv2 on the vNIC interface.

Procedure


Step 1

In the Navigation pane, click Networking.

Step 2

Expand Networking and select the adapter from which you want to remove RoCEv2 configuration.

Step 3

Select vNICs tab.

Step 4

Select the vNIC from which you want to remove RoCEv2 configuration.

Step 5

Expand RoCE Properties tab and uncheck the RoCE check box.

Step 6

Click Save Changes.

Step 7

Reboot the server for the above changes to take effect.


Configuring RoCEv2 in Linux

Configuring RoCEv2 for NVMeoF using Cisco IMC GUI

Procedure


Step 1

In the Navigation pane, click Networking.

Step 2

Expand Networking and click on the adapter to configure RoCEv2 vNIC.

Step 3

Select the vNICs tab.

Step 4

Perform one of the following:

  • Click Add vNIC to create a new vNIC and modify the properties as mentioned in next step.

    OR

  • From the left pane, select an existing vNIC and modify the properties as mentioned in next step.

Step 5

Expand RoCE Properties.

Step 6

Select RoCE checkbox.

Step 7

Modify the following vNIC properties:

Property

Field

Value

Ethernet Interrupt

Interrupt count field

256

Ethernet Receive Queue

Count field

1

Ring Size field

512

Ethernet Transmit Queue

Count field

1

Ring Size field

256

Completion Queue

Count field

2

RoCE Properties

Queue Pairs field

1024

Memory Regions field

131072

Resource Groups field

8

Class of Service drop-down list

5

Step 8

Click Save Changes.

Step 9

Select Reboot when prompted.


Enabling an SRIOV BIOS Policy

Use these steps to configure the server with RoCEv2 vNIC to enable the SRIOV BIOS policy before enabling the IOMMU driver in the Linux kernel.

Procedure


Step 1

In the Navigation pane, click Compute.

Step 2

Expand BIOS > Configure BIOS > I/O.

Step 3

Set Intel VT for directed IO to Enabled.

Note

 

The Intel VT for directed IO option is enabled by default on Cisco UCS C220 and C240 M8 servers.

Step 4

Click Save.

Step 5

Reboot the host for the changes to take effect.


Configuring NVMeoF Using RoCEv2 on the Host

Before you begin

Configure the server with RoCEv2 vNIC and the SRIOV-enabled BIOS policy.

Procedure


Step 1

Open the /etc/default/grub file for editing.

Step 2

Add intel_iommu=on to the end of the line for GRUB_CMDLINE_LINUX, as shown in the sample file below.

sample /etc/default/grub configuration file after adding intel_iommu=on:
# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap biosdevname=1 rhgb quiet intel_iommu=on
GRUB_DISABLE_RECOVERY="true"

Step 3

Save the file.

Step 4

After saving the file, run the following command to generate a new grub.cfg file:

  • For Legacy boot:

    # grub2-mkconfig -o /boot/grub2/grub.cfg
  • For UEFI boot:

    # grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

Step 5

Reboot the server. You must reboot the server for the changes to take effect after enabling IOMMU.

Step 6

Verify that the server booted with the intel_iommu=on option by checking the kernel command line.

cat /proc/cmdline | grep iommu

Note its inclusion at the end of the output.

[root@localhost basic-setup]# cat /proc/cmdline | grep iommu
BOOT_IMAGE=/vmlinuz-3.10.0-957.27.2.el7.x86_64 root=/dev/mapper/rhel-
root ro crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb 
quiet intel_iommu=on LANG=en_US.UTF-8

What to do next

Download the enic and enic_rdma drivers.

Installing Cisco enic and enic_rdma Drivers

The enic_rdma driver requires the enic driver. When installing the enic and enic_rdma drivers, download and use the matched set of enic and enic_rdma drivers from Cisco.com. Attempting to use the binary enic_rdma driver downloaded from Cisco.com with an inbox enic driver will not work.

Procedure


Step 1

Install the enic and enic_rdma rpm packages:

# rpm -ivh kmod-enic-<version>.x86_64.rpm kmod-enic_rdma-<version>.x86_64.rpm

Note

 

During enic_rdma installation, the enic_rdma libnvdimm module may fail to install on RHEL 7.7 because the nvdimm-security.conf dracut module needs spaces in the add_drivers value. For a workaround, follow the instructions in the following links:

https://access.redhat.com/solutions/4386041

https://bugzilla.redhat.com/show_bug.cgi?id=1740383

Step 2

The enic_rdma driver is now installed but not loaded in the running kernel. Reboot the server to load the enic_rdma driver into the running kernel.

Step 3

Verify the installation of enic_rdma driver and RoCE v2 interface:

# dmesg | grep enic_rdma
[    4.025979] enic_rdma: Cisco VIC Ethernet NIC RDMA Driver, ver 1.0.0.6-802.21 init
[    4.052792] enic 0000:62:00.1 eth1: enic_rdma: IPv4 RoCEv2 enabled
[    4.081032] enic 0000:62:00.2 eth2: enic_rdma: IPv4 RoCEv2 enabled

Step 4

Load the nvme-rdma kernel module:

# modprobe nvme-rdma
After a server reboot, the nvme-rdma kernel module is unloaded. To load the nvme-rdma kernel module on every server reboot, create the nvme_rdma.conf file using:
# echo nvme_rdma > /etc/modules-load.d/nvme_rdma.conf

Note

 

For more information about enic_rdma after installation, use the rpm -q -l kmod-enic_rdma command to extract the README file.


What to do next

Discover targets and connect to NVMe namespaces. If your system needs multipath access to the storage, please go to the section for Setting Up Device Mapper Multipath.

Discovering the NVMe Target

Use this procedure to discover the NVMe target and connect NVMe namespaces.

Before you begin

Install nvme-cli version 1.6 or later if it is not installed already.


Note


Skip to Step 2 below if nvme-cli version 1.7 or later is installed.


Configure the IP address on the RoCE v2 interface and make sure the interface can ping the target IP.
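A minimal shell sketch of these prerequisite checks, run as root; the interface name (eth1) and the IP addresses are placeholders for your environment. Check the nvme-cli version, assign an IPv4 address to the RoCE v2 interface, and ping the target:

# nvme version
# ip addr add 50.2.85.100/24 dev eth1
# ip link set eth1 up
# ping -c 3 50.2.85.200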

Procedure


Step 1

Create an nvme folder in /etc, then manually generate the host NQN.

# mkdir /etc/nvme
# nvme gen-hostnqn > /etc/nvme/hostnqn

Step 2

Create a settos.sh file and run the script to set priority flow control (PFC) in IB frames.

Note

 

To avoid failure of sending NVMeoF traffic, you must create and run this script after every server reboot.

# cat settos.sh
#!/bin/bash
for f in `ls /sys/class/infiniband`;
do
        echo "setting TOS for IB interface:" $f
        mkdir -p /sys/kernel/config/rdma_cm/$f/ports/1
        echo 186 > /sys/kernel/config/rdma_cm/$f/ports/1/default_roce_tos
done

Step 3

Discover the NVMe target by entering the following command.

nvme discover --transport=rdma --traddr=<IP address of transport target port>
For example, to discover the target at 50.2.85.200:
# nvme discover --transport=rdma --traddr=50.2.85.200

Discovery Log Number of Records 1, Generation counter 2
=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
treq:    not required
portid:  3
trsvcid: 4420
subnqn:  nqn.2010-06.com.purestorage:flasharray.9a703295ee2954e
traddr:  50.2.85.200
rdma_prtype: roce-v2
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0x0000

Note

 

To discover the NVMe target using IPv6, put the IPv6 target address next to the traddr option.

Step 4

Connect to the discovered NVMe target by entering the following command.

nvme connect --transport=rdma --traddr=<IP address of transport target port> -n <subnqn value from nvme discover>
For example, to connect to the target at 50.2.85.200 with the subnqn value found above:
# nvme connect --transport=rdma --traddr=50.2.85.200 -n nqn.2010-06.com.purestorage:flasharray.
9a703295ee2954e

Note

 

To connect to the discovered NVMe target using IPv6, put the IPv6 target address next to the traddr option.

Step 5

Use the nvme list command to check mapped namespaces:

# nvme list
Node         SN               Model                   Namespace Usage       Format       FW Rev
------------ ---------------- ----------------------- --------------------- -----------  -------
/dev/nvme0n1 09A703295EE2954E Pure Storage FlashArray 72656 4.29 GB/4.29 GB 512 B + 0 B  99.9.9
/dev/nvme0n2 09A703295EE2954E Pure Storage FlashArray 72657 5.37 GB/5.37 GB 512 B + 0 B  99.9.9

Setting Up Device Mapper Multipath

If your system is configured with Device Mapper multipathing (DM Multipath), use the following steps to set up Device Mapper multipath.

Procedure


Step 1

Install the device-mapper-multipath package if it is not installed already.
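For example, on RHEL-based distributions the package can be installed with yum:

# yum install device-mapper-multipath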

Step 2

Enable and start multipathd:

# mpathconf --enable --with_multipathd y

Step 3

Edit the /etc/multipath.conf file to use the following values:

defaults {
polling_interval    10
path_selector    "queue-length 0"
path_grouping_policy    multibus
fast_io_fail_tmo    10
no_path_retry    0
features    0
dev_loss_tmo    60
user_friendly_names    yes
}

Step 4

Flush the existing multipath device maps so that the updated configuration is applied.

# multipath -F

Step 5

Restart multipath service:

# systemctl restart multipathd.service

Step 6

Rescan multipath devices:

# multipath -v2

Step 7

Check the multipath status:

# multipath -ll

Deleting RoCEv2 Interface Using Cisco IMC CLI

SUMMARY STEPS

  1. server # scope chassis
  2. server/chassis # scope adapter index_number
  3. server/chassis/adapter # scope host-eth-if vNIC_name
  4. server/chassis/adapter/host-eth-if # set rocev2 disabled
  5. server/chassis/adapter/host-eth-if *# commit

DETAILED STEPS

  Command or Action Purpose

Step 1

server # scope chassis

Enters the chassis command mode.

Step 2

server/chassis # scope adapter index_number

Enters the command mode for the adapter card at the PCI slot number specified by index_number.

Note

 

Ensure that the server is powered on before you attempt to view or change adapter settings. To view the index of the adapters configured on your server, use the show adapter command.

Step 3

server/chassis/adapter # scope host-eth-if vNIC_name

Enters the command mode for the vNIC specified by vNIC_name.

Step 4

server/chassis/adapter/host-eth-if # set rocev2 disabled

Disables RoCE properties on the vNIC.

Step 5

server/chassis/adapter/host-eth-if *# commit

Commits the transaction to the system configuration.

Note

 

The changes take effect when the server is rebooted.

Example

server# scope chassis
server/chassis # scope adapter 1
server/chassis/adapter # scope host-eth-if vNIC_Test
server/chassis/adapter/host-eth-if # set rocev2 disabled
server/chassis/adapter/host-eth-if *# commit

Configuring RoCEv2 in ESXi

Installing NENIC Driver

The eNIC drivers, which contain the RDMA driver, are available as a combined package. Download and use the eNIC driver from Cisco.com.

These steps assume this is a new installation.


Note


While this example uses the /tmp location, you can place the file anywhere that is accessible to the ESX console shell.


Procedure


Step 1

Copy the eNIC VIB or offline bundle to the ESX server. The example below uses the Linux scp utility to copy the file from a local system to an ESX server located at 10.10.10.10, and uses the location /tmp.

scp nenic-2.0.4.0-1OEM.700.1.0.15843807.x86_64.vib root@10.10.10.10:/tmp

Step 2

Specifying the full path, issue the command shown below.

esxcli software vib install -v {VIBFILE}

or

esxcli software vib install -d {OFFLINE_BUNDLE}

Example:

esxcli software vib install -v /tmp/nenic-2.0.4.0-1OEM.700.1.0.15843807.x86_64.vib

Note

 

Depending on the certificate used to sign the VIB, you may need to change the host acceptance level. To do this, use the command: esxcli software acceptance set --level=<level>

Depending on the type of VIB being installed, you may need to put ESX into maintenance mode. This can be done through the VI Client, or by adding the --maintenance-mode option to the above esxcli command.

Upgrading NENIC Driver

  1. To upgrade the NENIC driver, enter the command below (a verification sketch follows this list):

    esxcli software vib update -v {VIBFILE}

    or

    esxcli software vib update -d {OFFLINE_BUNDLE}
  2. Copy the enic VIB or offline bundle to the ESX server as described in Step 1 of the installation procedure above.
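After the install or update completes, you can confirm the installed nenic VIB version from the ESXi shell; a minimal verification sketch:

esxcli software vib list | grep nenic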


Creating and Configuring the ESXi Adapter Policy in Cisco IMC

This procedure applies to configuring the ESXi adapter policy for RoCEv2.

Before you begin

Download and install the enic-nvme driver which supports RoCEv2.

Procedure


Step 1

In the Navigation pane, click the Networking menu.

Step 2

Expand Networking and click on the adapter to configure RoCEv2 vNIC.

Step 3

Select the vNICs tab.

Step 4

Perform one of the following:

  • Click Add vNIC to create a new vNIC and modify the properties as mentioned in next step.

  • From the left pane, select an existing vNIC and modify the properties as mentioned in next step.

Step 5

Expand General pane.

  1. On the MAC address dropdown, select the Auto checkbox or enter the desired address.

  2. Select which VLAN you want to use from the drop-down list.

  3. Click OK.

Step 6

Expand RoCE Properties.

Step 7

Select RoCE checkbox.

Step 8

Modify the following vNIC properties:

Property

Field

Value

Ethernet Interrupt

Interrupt count field

256

Coalescing Time field

125

Interrupt Mode field

MSIx

Coalescing Type field

MIN

Ethernet Receive Queue

Count field

1

Ring Size field

512

Ethernet Transmit Queue

Count field

1

Ring Size field

256

Completion Queue

Count field

2

RoCE Properties

Queue Pairs field

1024

Memory Regions field

131072

Resource Groups field

8

Class of Service drop-down list

5

Step 9

Click Save Changes.

Step 10

Select Reboot.


NENIC RDMA Functionality

Differences between the use of RDMA on Linux and ESXi:

  • In ESXi, the physical interface (vmnic) MAC is not used for RoCEv2 traffic. Instead, the VMkernel port (vmk) MAC is used.

    Outgoing RoCE packets use the vmk MAC in the Ethernet source MAC field, and incoming RoCE packets use the vmk MAC in the Ethernet destination MAC field. The vmk MAC address is a VMware MAC address assigned to the vmk interface when it is created.

  • In Linux, the physical interface MAC is used in source MAC address field in the RoCE packets. This Linux MAC is usually a Cisco MAC address configured to the VNIC using Cisco IMC.

If you ssh into the host and use the esxcli network ip interface list command, you can see the MAC address.

vmk0
    Name: vmk0
    MAC Address: 2c:f8:9b:a1:4c:e7 
    Enabled: true
    Portset: vSwitch0
    Portgroup: Management Network 
    Netstack Instance: defaultTcpipStack
    VDS Name: N/A
    VDS UUID: N/A
    VDS Port: N/A
    VDS Connection: -1
    Opaque Network ID: N/A 
    Opaque Network Type: N/A
    External ID: N/A
    MTU: 1500
    TSO MSS: 65535
    RXDispQueue Size: 2
    Port ID: 67108881

You must create a vSphere Standard Switch to provide network connectivity for hosts, virtual machines, and VMkernel traffic. Depending on the connection type that you want to create, you can create a new vSphere Standard Switch with a VMkernel adapter, only connect physical network adapters to the new switch, or create the switch with a virtual machine port group.

Create Network Connectivity Switches

Use these steps to create a vSphere Standard Switch to provide network connectivity for hosts, virtual machines, and VMkernel traffic.

Before you begin

Ensure that you have downloaded and installed the NENIC driver.

Procedure


Step 1

In the vSphere Client, navigate to the host.

Step 2

On the Configure tab, expand Networking and select Virtual Switches.

Step 3

Click on Add Networking.

The available network adapter connection types are:

  • VMkernel Network Adapter

    Creates a new VMkernel adapter to handle host management traffic.

  • Physical Network Adapter

    Adds physical network adapters to a new or existing standard switch.

  • Virtual Machine Port Group for a Standard Switch

    Creates a new port group for virtual machine networking.

Step 4

Select connection type VMkernel Network Adapter.

Step 5

Select New Standard Switch and click Next.

Step 6

Add physical adapters to the new standard switch.

  1. Under Assigned Adapters, select New Adapters.

  2. Select one or more adapters from the list and click OK. To promote higher throughput and create redundancy, add two or more physical network adapters to the Active list.

  3. (Optional) Use the up and down arrow keys to change the position of the adapter in the Assigned Adapters list.

  4. Click Next.

Step 7

For the new standard switch you just created for the VM adapter or a port group, enter the connection settings for the adapter or port group.

  1. Enter a label that represents the traffic type for the VMkernel adapter.

  2. Set a VLAN ID to identify the VLAN the VMkernel uses for routing network traffic.

  3. Select IPV4 or IPV6 or both.

  4. Select an MTU size from the drop-down menu. Select Custom if you wish to enter a specific MTU size. The maximum MTU size is 9000 bytes.

    Note

     

    You can enable Jumbo Frames by setting an MTU greater than 1500.

  5. Select a TCP/IP stack for the VMkernel adapter.

    To use the default TCP/IP stack, select it from the available services.

    Note

     

    Be aware that the TCP/IP stack for the VMkernel adapter cannot be changed later.

  6. Configure IPV4 and/or IPV6 settings. An equivalent esxcli sketch is shown after this list.
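If you prefer the ESXi command line, the VMkernel adapter address can also be set with esxcli; a minimal sketch, assuming the new adapter is vmk1 and the address and netmask are placeholders for your subnet:

esxcli network ip interface ipv4 set -i vmk1 -I 50.2.84.10 -N 255.255.255.0 -t static
esxcli network ip interface list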

Step 8

On the Ready to Complete page, click Finish.

Step 9

Check the VMkernel ports for the VM adapters or port groups with NVMe RDMA in the vSphere client.


What to do next

Create vmhba ports on top of vmrdma ports.

Create VMHBA Ports in ESXi

Use the following steps for creating vmhba ports on top of the vmrdma adapter ports.

Before you begin

Create the adapter ports for storage connectivity.

Procedure


Step 1

Go to vCenter where your ESXi host is connected.

Step 2

Click on Host>Configure>Storage adapters.

Step 3

Click +Add Software Adapter.

Add Software Adapter dialog box is displayed.

Step 4

Select Add software NVMe over RDMA adapter and the vmrdma port you want to use.

Step 5

Click OK.

The vmhba ports for the VMware NVMe over RDMA storage adapter will be shown.


What to do next

Configure NVMe.

Displaying vmnic and vmrdma Interfaces

ESXi creates a vmnic interface for each enic VNIC configured to the host.

Before you begin

Create Network Adapters and VHBA ports.

Procedure


Step 1

Use ssh to access the host system.

Step 2

Enter esxcfg-nics -l to list the vmnics on ESXi.


Name   PCI          Driver  Link  Speed     Duplex  MAC Address       MTU  Description
vmnic0 0000:3b:00.0 ixgben  Down  0Mbps     Half    2c:f8:9b:a1:4c:e6 1500 Intel(R) Ethernet Controller X550
vmnic1 0000:36:00.1 ixgben  Up    1000Mbps  Full    2c:f8:9b:a1:4c:e7 1500 Intel(R) Ethernet Controller X550
vmnic2 0000:1d:00.0 nenic   Up    50000Mbps Full    2c:f8:9b:79:8d:bc 1500 Cisco Systems Inc Cisco VIC Ethernet NIC
vmnic3 0000:1d:00.1 nenic   Up    50000Mbps Full    2c:f8:9b:79:8d:bd 1500 Cisco Systems Inc Cisco VIC Ethernet NIC
vmnic4 0000:63:00.0 nenic   Down  0Mbps     Half    2c:f8:9b:51:b3:3a 1500 Cisco Systems Inc Cisco VIC Ethernet NIC
vmnic5 0000:63:00.1 nenic   Down  0Mbps     Half    2c:f8:9b:51:b3:3b 1500 Cisco Systems Inc Cisco VIC Ethernet NIC

esxcli network nic list


Name   PCI          Driver  Admin Status Link Status Speed Duplex MAC Address       MTU  Description
vmnic0 0000:3b:00.0 ixgben  Up           Down        0     Half   2c:f8:9b:a1:4c:e6 1500 Intel(R) Ethernet Controller X550
vmnic1 0000:36:00.1 ixgben  Up           Up          1000  Full   2c:f8:9b:a1:4c:e7 1500 Intel(R) Ethernet Controller X550
vmnic2 0000:1d:00.0 nenic   Up           Up          50000 Full   2c:f8:9b:79:8d:bc 1500 Cisco Systems Inc Cisco VIC Ethernet NIC
vmnic3 0000:1d:00.1 nenic   Up           Up          50000 Full   2c:f8:9b:79:8d:bd 1500 Cisco Systems Inc Cisco VIC Ethernet NIC
vmnic4 0000:63:00.0 nenic   Up           Down        0     Half   2c:f8:9b:51:b3:3a 1500 Cisco Systems Inc Cisco VIC Ethernet NIC
vmnic5 0000:63:00.1 nenic   Up           Down        0     Half   2c:f8:9b:51:b3:3b 1500 Cisco Systems Inc Cisco VIC Ethernet NIC

When the enic driver registers the RDMA device for an RDMA-capable VNIC with ESXi, ESXi creates a vmrdma device and links it to the corresponding vmnic.

Step 3

Use esxcli rdma device list to list the vmrdma devices.


[root@RackServer:~] esxcli rdma device list 
Name    Driver State  MTU  Speed   Paired Uplink Description
-----   ------ -----  ---  -----   ------------- -----
vmrdma0 nenic  Active 4096 50 Gbps vmnic1        Cisco UCS VIC 15XXX (A0)
vmrdma1 nenic  Active 4096 50 Gbps vmnic2        Cisco UCS VIC 15XXX (A0)
[root@StockholmRackServer:~] esxcli rdma device vmknic list
Device  Vmknic NetStack
------- ------ --------
vmrdma0 vmk1   defaultTcpipStack
vmrdma1 vmk2   defaultTcpipStack


Step 4

Use esxcli rdma device protocol list to check the protocols supported by the vmrdma interface.

For enic, RoCE v2 will be the only protocol supported from this list. The output of this command should match the RoCEv2 configuration on the VNIC.


[root@RackServer:~] esxcli rdma device protocol list 
Device  RoCE v1 RoCE v2 iWARP
-----   ------- ------- -----
vmrdma0 false   true    false
vmrdma1 false   true    false

Step 5

Use esxcli nvme adapter list to list the NVMe adapters and the vmrdma and vmnic interfaces they are configured on.


[root@RackServer:~] esxcli nvme adapter list 
Adapter Adapter Qualified Name          Transport Type Driver   Associated Devices 
------- ----------------------          -------------- ------   ------------------
vmhba64 aqn:nvmerdma:2c-f8-9b-79-8d-bc  RDMA           nvmerdma vmrdma0, vmnic2 
vmhba65 aqn:nvmerdma:2c-f8-9b-79-8d-bd  RDMA           nvmerdma vmrdma1, vmnic3

Step 6

All vmhbas in the system can be listed using esxcli storage core adapter list.


[root@RackServer:~] esxcli storage core adapter list
HBA Name Driver   Link State UID                                  Capabilities        Description
-------- ------   ---------- ------------------------------------ ------------------- -------------------------------------
vmhba0   nfnic    link-down  fc.10002cf89b798dbe:20002cf89b798dbe Second Level Lun ID (0000:1d:00.2) Cisco Corporation Cisco 
                                                                                        UCS VIC Fnic Controller
vmhba1   vmw_ahci link-n/a   sata.vmhba1                                              (0000:00:11.5) Intel Corporation Lewisburg 
                                                                                        SATA AHCI Controller
vmhba2   nfnic    link-down  fc.10002cf89b798dbf:20002cf89b798dbf Second Level Lun ID (0000:1d:00.3) Cisco Corporation Cisco 
                                                                                        UCS VIC Fnic Controller
vmhba3   nfnic    link-down  fc.10002cf89b51b33c:20002cf89b51b33c Second Level Lun ID (0000:63:00.2) Cisco Corporation Cisco 
                                                                                        UCS VIC Fnic Controller 
vmhba4   nfnic    link-down  fc.10002cf89b51b33d:20002cf89b51b33d Second Level Lun ID (0000:63:00.3) Cisco Corporation Cisco 
                                                                                        UCS VIC Fnic Controller 
vmhba5   lsi_mr3  link-n/a   sas.5cc167e9732f9b00                                     (0000:3c:00.0) Broadcom Cisco 126 Modular 
                                                                                        Raid Controller with 2GB cache
vmhba64  nvmerdma link-n/a   rdma.vmnic2:2c:f8:9b:79:8d:bc                          VMware NVMe over RDMA Storage Adapter on vmrdma0
vmhba65  nvmerdma link-n/a   rdma.vmnic3:2c:f8:9b:79:8d:bd                          VMware NVMe over RDMA Storage Adapter on vmrdma1

What to do next

Configure NVMe.

NVMe Fabrics and Namespace Discovery

This procedure is performed through the ESXi command line interface.

Before you begin

Create and configure NVMe on the adapter's VMHBAs. The maximum number of adapters is two, and it is a best practice to configure both for fault tolerance.

Procedure


Step 1

Check and enable NVMe on the vmrdma device.

esxcli nvme fabrics enable -p RDMA -d vmrdma0

The system should return a message showing if NVMe is enabled.

Step 2

Discover the NVMe fabric on the array by entering the following command:

esxcli nvme fabrics discover -a vmhba64 -l transport_address

For example: esxcli nvme fabrics discover -a vmhba64 -l 50.2.84.100

The output lists the following information: Transport Type, Address Family, Subsystem Type, Controller ID, Admin Queue Max Size, Transport Address, Transport Service ID, and Subsystem NQN.

You will see output on the NVMe controller.

Step 3

Connect to the NVMe fabric.

esxcli nvme fabrics discover -a vmhba64 -l transport_address -p Transport Service ID -s Subsystem NQN

Step 4

Repeat Steps 1 through 3 to configure the second adapter.

Step 5

Display the controller list to verify the NVMe controller is present and operating.

esxcli nvme controller list RDMA -d vmrdma0


[root@RackServer:~] esxcli nvme controller list
Name                                      Controller Number Adapter  Transport Type Is Online
----------------------------------------  ----------------- -------- -------------- ---------
nqn.2010-06.com.purestorage:flasharray.   258               vmhba64  RDMA           true
5ab274df5b161455#vmhba64#50.2.84.100:4420 
nqn.2010-06.com.purestorage:flasharray.   259               vmhba65  RDMA           true
5ab274df5b161455#vmhba65#50.2.83.100:4420 
[root@RackServer:~] esxcli nvme namespace list
Name                                 Controller Number Namespace ID Block Size Capacity in MB
------------------------------------ ----------------  ------------ ---------- --------------
eui.00e6d65b65a8f34824a9374e00011745 258               71493        512        102400
eui.00e6d65b65a8f34024a9374e00011745 259               71493        512        102400

Example

The following example shows esxcli discovery commands executed on the server.

[root@RackServer:~] esxcli nvme fabrics enable -p RDMA -d vmrdma0
NVMe already enabled on vmrdma0
[root@RackServer:~] esxcli nvme fabrics discover -a vmhba64 -l 50.2.84.100
Transport Type  Address Family  Subsystem Type  Controller ID  Admin Queue Max Size  Transport Address  Transport Service ID  Subsystem NQN
--------------  --------------  --------------  -------------  --------------------  -----------------  --------------------  -------------
RDMA            IPV4            NVM             65535          31                    50.2.84.100        4420                  nqn.2010-06.com.purestorage:flasharray:2dp1239anjkl484
[root@RackServer:~] esxcli nvme fabrics discover -a vmhba64 -l 50.2.84.100 -p 4420 -s nqn.2010-06.com.purestorage:flasharray:2dp1239anjkl484
Controller already connected

Deleting the ESXi RoCEv2 Interface Using Cisco IMC

Use these steps to delete the ESXi RoCEv2 configuration for a specific port.

Procedure


Step 1

In the Navigation pane, click Networking.

Step 2

Expand Networking and select the adapter from which you want to remove RoCEv2 configuration.

Step 3

Select vNICs tab.

Step 4

Select the vNIC from which you want to delete the ESXi RoCEv2 configuration.

Step 5

Expand RoCE Properties tab and uncheck the RoCE check box.

Step 6

Click Save Changes.

Step 7

Reboot the server for the above changes to take effect.