This document describes the process of configuring the maximum Queue Depth and the Outstanding Input/Output (IO) on the native Fibre Channel network interface card (nfnic) driver. In the VMware ESXi 6.7 hypervisor, the Fibre Channel network interface card (fnic) driver was replaced with the nfnic driver for all Cisco adapters.
The default queue depth of the nfnic driver is set to 32, and on the initial releases of the nfnic driver there is no way to adjust it. This limits all Maximum Device Queue Depths and Disk Scheduled Number Requests Outstanding (DSNRO) values to 32. It also causes issues when vSphere Virtual Volumes are used, since the recommended queue depth there is 128. The effects of this limit can also be seen on any VMs that experience a higher workload and require a larger queue depth in general.
Contributed by Michael Baba, Josh Good, and Alejandro Marino, Cisco TAC Engineers.
Enhancement created to add the ability to configure the queue depth parameter: https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvo09082
Starting with version 4.0.0.35 of the nfnic driver, you can adjust the "lun_queue_depth_per_path" parameter via the ESXi Command Line Interface (CLI). This driver version can be installed manually on the ESXi host if it is not already present.
The nfnic driver 4.0.0.35 can be found in the UCS Blade Firmware bundle 4.0.4 and can also be downloaded separately from VMware. You should refer to the UCS Hardware and Software Compatibility page to get the latest recommended driver for your specific hardware and software combination.
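When checking the compatibility page, it helps to know the exact ESXi version and build running on the host. You can confirm this from the ESXi CLI with:

vmware -vl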
To check the currently installed version of the nfnic driver, run:
esxcli software vib list | grep nfnic
You should see something like:
[root@localhost:~] esxcli software vib list | grep nfnic
nfnic                          4.0.0.14-1OEM.670.1.28.10302608       Cisco   VMwareCertified   2019-08-24
[root@localhost:~]
If you do not see any output, then you currently do not have the nfnic driver installed. Please refer to the UCS Hardware and Software Compatibility page to check if your configuration should be using the nfnic or fnic driver.
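A quick way to check for either driver at once is a case-insensitive match on "fnic", which catches both fnic and nfnic:

esxcli software vib list | grep -i fnic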
Detailed instructions to install the latest drivers are beyond the scope of this guide. Please refer to UCS Driver Installation for Common Operating Systems or VMware's documentation for step-by-step instructions to upgrade the driver. Once the driver is upgraded, you can use the same command above to verify the version.
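As a rough illustration only, a driver offline bundle that has been uploaded to a datastore is typically installed with a command along these lines (the datastore and bundle file name here are placeholders and your file name will differ), followed by a reboot of the host:

esxcli software vib install -d /vmfs/volumes/datastore1/Cisco-nfnic-driver-offline_bundle.zip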
Once the correct driver is installed, we can check that the module parameter is available to configure with:
esxcli system module parameters list -m nfnic
We can see in the output below that the default value is set to 32; however, we can configure any value from 1 to 1024. If you use vSphere Virtual Volumes, it is recommended to set this value to 128. We recommend reaching out to VMware and your storage vendor for any other specific recommendations.
Sample Output:
[root@localhost:~] esxcli system module parameters list -m nfnic
Name                      Type   Value  Description
------------------------  -----  -----  --------------------------------------------------------------
lun_queue_depth_per_path  ulong         nfnic lun queue depth per path: Default = 32. Range [1 - 1024]
[root@localhost:~]
To change the Queue Depth parameter, use the command below. In this example we change it to 128, but your value can be different depending on your environment.
esxcli system module parameters set -m nfnic -p lun_queue_depth_per_path=128
Using the same command as above, we can confirm the change has been made:
[root@localhost:~] esxcli system module parameters list -m nfnic
Name                      Type   Value  Description
------------------------  -----  -----  --------------------------------------------------------------
lun_queue_depth_per_path  ulong  128    nfnic lun queue depth per path: Default = 32. Range [1 - 1024]
[root@localhost:~]
We can now configure the Outstanding IOs on the Protocol Endpoint to match the Queue Depth set above (128 in our example) and then check that both values have changed to 128.
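If you are not sure which device is the Protocol Endpoint, the protocol endpoints known to the host can be listed with:

esxcli storage vvol protocolendpoint list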
NOTE: You may need to reboot the host before this configuration change can be made.
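If a reboot is needed, the host can be placed into maintenance mode and restarted from the CLI (this assumes any running VMs have already been migrated or powered off); the reason text below is just an example:

esxcli system maintenanceMode set --enable true
esxcli system shutdown reboot --reason "nfnic queue depth change"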
To change the Queue Depth for a specific device:
esxcli storage core device set -O 128 -d naa.xxxxxxxxx
To find the device ID, you can use the command below:
esxcli storage core device list
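Because the full device list can be long, one way to print only the device identifiers is to filter on the lines that start each device entry (this example assumes NAA-style identifiers; devices can also use other formats such as t10. or eui.):

esxcli storage core device list | grep '^naa.'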
To confirm the changes for a specific device:
esxcli storage core device list -d naa.xxxxxxxxxx
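If the same Outstanding IO value has to be applied to several devices, a simple shell loop over the device identifiers can be used. This is only a sketch and assumes that every NAA device listed should actually receive the new value; exclude any local or unrelated disks in your environment:

for dev in $(esxcli storage core device list | grep '^naa.'); do esxcli storage core device set -O 128 -d $dev; done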
Here is an example with its output. We can see that "Device Max Queue Depth:" and "No of outstanding IOs with competing worlds:" are both still 32.
[root@localhost:~] esxcli storage core device list -d naa.600a09803830462d803f4c6e68664e2d
naa.600a09803830462d803f4c6e68664e2d
   Display Name: VMWare_SAS_STG_01
   Has Settable Display Name: true
   Size: 2097152
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.600a09803830462d803f4c6e68664e2d
   Vendor: NETAPP
   ...snip for length...
   Is Boot Device: false
   Device Max Queue Depth: 32
   No of outstanding IOs with competing worlds: 32
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false
Now we change it to 128 for this device:
esxcli storage core device set -O 128 -d naa.600a09803830462d803f4c6e68664e2d
When checking the same output again, we can see that "Device Max Queue Depth:" and "No of outstanding IOs with competing worlds:" are both now 128. If the changes are not reflected immediately, a reboot of the ESXi host may be needed.
[root@localhost:~] esxcli storage core device list -d naa.600a09803830462d803f4c6e68664e2d
naa.600a09803830462d803f4c6e68664e2d
   Display Name: VMWare_SAS_STG_01
   Has Settable Display Name: true
   Size: 2097152
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.600a09803830462d803f4c6e68664e2d
   Vendor: NETAPP
   ...snip for length...
   Is Boot Device: false
   Device Max Queue Depth: 128
   No of outstanding IOs with competing worlds: 128
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false
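To quickly check just these two values without scrolling through the full output, the listing can be filtered, for example:

esxcli storage core device list -d naa.600a09803830462d803f4c6e68664e2d | grep -iE 'Queue Depth|Outstanding'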