
Cisco VM-FEX Best Practices for VMware ESX Environment Deployment Guide


Contents

1 Executive Summary

1.1 Target Audience

1.2 Introduction

2 Cisco UCS VM-FEX Best Practices

2.1 Scale Considerations for Static vNICs/vHBAs and Dynamic vNICs

2.2 Defining Dynamic vNIC Policies

2.3 Service Profile Creation for Full/Half-Width Blades

2.4 VM-FEX with VMDirectPath (UPT)

2.5 VMDirectPath Sizing

3 References


1 Executive Summary

1.1 Target Audience

The target audience for this guide includes sales engineers, field consultants, professional services personnel, IT managers, partner engineers, and customers who want to deploy Cisco VM-FEX in a VMware ESX environment.

1.2 Introduction

Cisco Virtual Machine Fabric Extender (VM-FEX) is a Cisco technology that addresses management and performance concerns in a data center by unifying physical and virtual switch management. The Cisco VM-FEX collapses virtual and physical networking into a single infrastructure. This unified infrastructure enables data center administrators to provision, configure, manage, monitor, and diagnose virtual machine network traffic and bare metal network traffic.

The Cisco VM-FEX significantly reduces the number of network management points, enabling both physical and virtual network traffic to be treated in a consistent, policy-driven manner.

The VM-FEX software extends Cisco fabric extender technology to the virtual machine with the following capabilities:

Each virtual machine includes a dedicated interface on the parent switch

All virtual machine traffic is sent directly to the dedicated interface on the switch

The software-based switch in the hypervisor is eliminated

Figure 1. Extension of Fabric Extender Technology with Fabric Interconnects Using VM-FEX

Figure 1 shows the extension of Fabric Extender technology with Fabric Interconnects using VM-FEX. The Cisco Virtual Machine Fabric Extender (VM-FEX) technology extends Cisco Fabric Extender technology all the way to the virtual machine. Each virtual machine gets a dedicated interface on the parent switch (virtual Ethernet port). All virtual machine traffic is sent directly to the dedicated interface on the switch. VM-FEX eliminates the software-based switch within the hypervisor by providing individual virtual machine virtual ports on the physical network switch. Virtual machine traffic is sent directly to the upstream physical network switch, which takes full responsibility for virtual machine switching and policy enforcement. This approach leads to consistent treatment for all network traffic, virtual or physical. VM-FEX consolidates virtual and physical switching layers into a single layer and reduces the number of network management points by an order of magnitude. The following are the benefits of VM-FEX:

Simplicity

One infrastructure for virtual and physical resource provisioning, management, monitoring and troubleshooting

Consistent features, performance and management for virtual and physical infrastructure

Robustness

Programmability, including the ability to renumber VLANs without disruptive changes

Troubleshooting and traffic engineering of VM traffic from the physical network

Performance

VMDirectPath with vMotion provides near bare-metal I/O performance

Line rate traffic to the virtual machine

2 Cisco UCS VM-FEX Best Practices

This guide covers the following topics, along with the recommended settings for best performance and a trouble-free environment:

Scale considerations for static vNIC/vHBA and Dynamic vNICs

Defining Dynamic vNIC policies

Service Profile creation for Full/Half-Width blades

VM-FEX VMDirectPath (UPT) mode

VMDirectPath Sizing

2.1 Scale Considerations for Static vNICs/vHBAs and Dynamic vNICs

2.1.1 Discovery Policy

The chassis discovery policy defines the number of Fabric Extender (IOM) port links available to all the blades in a chassis. These port links are used to access the Fabric Interconnect for all operations.

The maximum number of Virtual Interfaces (VIFs) that can be defined on Cisco VIC adapters depends on the maximum number of port links available on the Fabric Extender. In a single-chassis environment, the number of port links is determined by the chassis discovery policy. However, in environments where more than two chassis are managed by a single Fabric Interconnect, the number of available port links can vary with the availability of ports on the Fabric Interconnect.

Figure 2 shows how to define the chassis discovery policy.

Figure 2. Chassis Discovery Policy Setting Window

Table 1 provides the scaling information on the number of VIFs that can be created with the following considerations:

With or without Jumbo frames

A combination of Fabric Interconnects (6100/6200) and Fabric Extenders (2104/2208)

Number of static and dynamic vNICs and vHBAs on Cisco VIC Adapters

ESX version 4.1 and above

Table 1. Scaling Information of Virtual Interfaces

Fabric Interconnect | Fabric Extender | Chassis Uplinks | VIC Cards | Static vNICs | vHBAs | Max Dynamic vNICs | Total VIFs (Static/Dynamic vNICs + FC vHBAs) | ESX 4.0 U3 / 4.1 / 4.0i U3 / 4.1i U1 Max VIFs | ESX 5.0 Max VIFs
6100 | 2104 | 1 | 1 | 2 | 2 | 9 | 13 | 58 (MTU 9000: 4 / MTU 1500: 54) | 58 (MTU 9000: 58 / MTU 1500: 58)
6100 | 2104 | 1 | 2 | 4 | 2 | 7 | 13 | 58 (MTU 9000: 4 / MTU 1500: 54) | 58 (MTU 9000: 58 / MTU 1500: 58)
6100 | 2104 | 2 | 1 | 2 | 2 | 24 | 28 | 58 (MTU 9000: 4 / MTU 1500: 54) | 58 (MTU 9000: 58 / MTU 1500: 58)
6100 | 2104 | 2 | 2 | 4 | 2 | 22 | 28 | 58 (MTU 9000: 4 / MTU 1500: 54) | 116 (MTU 9000: 58 / MTU 1500: 58)
6100 | 2104 | 4 | 1 | 2 | 2 | 54 | 58 | 58 (MTU 9000: 4 / MTU 1500: 54) | 58 (MTU 9000: 58 / MTU 1500: 58)
6100 | 2104 | 4 | 2 | 4 | 2 | 52 | 58 | 58 (MTU 9000: 4 / MTU 1500: 54) | 116 (MTU 9000: 58 / MTU 1500: 58)
6100 | 2208 | 8 | 1 | 2 | 2 | 112 | 116 | 58 (MTU 9000: 4 / MTU 1500: 54) | 116 (MTU 9000: 58 / MTU 1500: 58)
6100 | 2208 | 8 | 2 | 4 | 2 | 110 | 116 | 58 (MTU 9000: 4 / MTU 1500: 54) | 116 (MTU 9000: 58 / MTU 1500: 58)
6200 | 2104 | 1 | 1 | 2 | 2 | 57 | 61 | 58 (MTU 9000: 4 / MTU 1500: 54) | 116 (MTU 9000: 58 / MTU 1500: 58)
6200 | 2104 | 1 | 2 | 4 | 2 | 54 | 61 | 58 (MTU 9000: 4 / MTU 1500: 54) | 116 (MTU 9000: 58 / MTU 1500: 58)
6200 | 2104 | 2 | 1 | 2 | 2 | 112 | 116 | 58 (MTU 9000: 4 / MTU 1500: 54) | 116 (MTU 9000: 58 / MTU 1500: 58)
6200 | 2104 | 2 | 2 | 4 | 2 | 110 | 116 | 58 (MTU 9000: 4 / MTU 1500: 54) | 116 (MTU 9000: 58 / MTU 1500: 58)
6200 | 2104 | 4 | 1 | 2 | 2 | 112 | 116 | 58 (MTU 9000: 4 / MTU 1500: 54) | 116 (MTU 9000: 58 / MTU 1500: 58)
6200 | 2104 | 4 | 2 | 4 | 2 | 110 | 116 | 58 (MTU 9000: 4 / MTU 1500: 54) | 116 (MTU 9000: 58 / MTU 1500: 58)
6200 | 2208 | 1 | 1 | 2 | 2 | 57 | 61 | 58 (MTU 9000: 4 / MTU 1500: 54) | 116 (MTU 9000: 58 / MTU 1500: 58)
6200 | 2208 | 1 | 2 | 4 | 2 | 55 | 61 | 58 (MTU 9000: 4 / MTU 1500: 54) | 116 (MTU 9000: 58 / MTU 1500: 58)

Note: These numbers apply to Cisco UCS 2.0 and ESXi 5.0 and are consistent with the Cisco UCSM 2.0 configuration limits. For more information on the configuration limits, see: http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/configuration_limits/2.0/b_UCS_Configuration_Limits_2_0.html. These numbers have been tested with Cisco UCSM 2.0 and ESXi 5.0; however, they may change in future versions.
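As a quick sanity check, the following minimal Python sketch (an illustration added here, not part of the original guide) encodes two Table 1 rows as a lookup and verifies that the static vNICs, vHBAs, and dynamic vNICs planned for a service profile fit within the total VIF limit for that hardware combination. The dictionary keys and example counts are illustrative assumptions only.

# Minimal sketch: check a planned service profile against the Table 1 VIF limits.
# Only two illustrative rows of Table 1 are encoded; extend the dictionary with the
# rows that match your hardware. Keys: (fabric interconnect, fabric extender, uplinks, VIC cards).

TOTAL_VIFS = {
    ("6100", "2104", 1, 1): 13,
    ("6200", "2208", 1, 2): 61,
}

def fits_vif_limit(platform, static_vnics, vhbas, dynamic_vnics):
    """Total VIFs = static vNICs + vHBAs + dynamic vNICs, per Table 1."""
    return static_vnics + vhbas + dynamic_vnics <= TOTAL_VIFS[platform]

print(fits_vif_limit(("6100", "2104", 1, 1), static_vnics=2, vhbas=2, dynamic_vnics=9))   # True (13 VIFs)
print(fits_vif_limit(("6100", "2104", 1, 1), static_vnics=2, vhbas=2, dynamic_vnics=12))  # False (16 > 13)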

2.2 Defining Dynamic vNIC Policies

It is recommended to choose the VMWarePassThru adapter policy when defining the dynamic Ethernet vNIC policy for the Cisco VIC adapter in the Service Profile.

The VMWarePassThru adapter policy parameters are derived from testing performed under various workloads and are tuned for the Cisco UCS system to provide better performance on ESX in a VM-FEX environment.

Cisco UCS applies the VMWarePassThru adapter policy parameters to Cisco VIC adapters during Service Profile association. Later, when the ESX OS is loaded on the blade associated with the Service Profile, the Cisco VIC Ethernet driver automatically sets these values in the OS. Manual configuration and tuning is not required on the host.

You can choose to manually tune these parameters depending on your requirements. However, improper tuning can impact performance, cause OS instability, and affect the maximum number of VIC interfaces (static/dynamic vNICs).

Configure the VMXNET3 driver resources in the guest OS in terms of the number of interrupts and the number of queues (WQs, RQs, CQs).

The values set for WQs, RQs, CQs, and interrupts in the dynamic vNIC adapter policy must match the values configured for these queues and interrupts in the VMXNET3 driver.

Consider the resource configuration of all the VMXNET3 instances of the VMs destined to run on a server.

Configure the adapter policy within the dynamic vNIC policy on the ESX host appropriately; for a VMXNET3 instance to go into high-performance mode, the adapter policy must have enough queue and interrupt resources to satisfy the VMXNET3 instance configuration, as illustrated in the sketch below.
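The following minimal Python sketch (added for illustration, not part of the original guide) shows a simplified interpretation of this resource-matching rule: it compares a hypothetical adapter policy's queue and interrupt counts against the VMXNET3 configuration of each VM planned for a host. All names and numbers are illustrative assumptions, not recommended values.

from dataclasses import dataclass

@dataclass
class AdapterPolicy:
    wqs: int          # transmit (work) queues offered by the dynamic vNIC adapter policy
    rqs: int          # receive queues
    cqs: int          # completion queues
    interrupts: int

@dataclass
class Vmxnet3Config:
    name: str
    tx_queues: int
    rx_queues: int
    interrupts: int

def can_enter_high_performance(policy: AdapterPolicy, vm: Vmxnet3Config) -> bool:
    # Simplified check: a VMXNET3 instance can go into high-performance (VMDirectPath) mode
    # only if the adapter policy offers at least as many queues and interrupts as the guest
    # driver is configured to use (CQs assumed to cover TX + RX queues).
    return (policy.wqs >= vm.tx_queues
            and policy.rqs >= vm.rx_queues
            and policy.cqs >= vm.tx_queues + vm.rx_queues
            and policy.interrupts >= vm.interrupts)

policy = AdapterPolicy(wqs=4, rqs=4, cqs=8, interrupts=12)          # hypothetical policy values
vms = [Vmxnet3Config("vm-small", 2, 2, 5), Vmxnet3Config("vm-large", 8, 8, 17)]

for vm in vms:
    mode = "high performance" if can_enter_high_performance(policy, vm) else "emulated"
    print(f"{vm.name}: {mode}")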

Figure 3 shows how to define the dynamic adapter policy.

Figure 3. Dynamic Adapter Policy Setting Window

2.3 Service Profile Creation for Full/Half-Width Blades

2.3.1 Full-Width Blade with Dual Cisco VIC Adapters

It is a best practice to create a vCon placement policy for full-width blades that are installed with dual Cisco VIC adapters and have static and dynamic vNICs defined in the Service Profile.

To create the vCon placement policy, create a vNIC/vHBA placement policy and place two static vNICs on the Fabric Interconnects on vCon 1 and vCon 2, with the fabric failover option enabled (available from Cisco UCS 2.0 onwards). Then create the dynamic vNIC policy with the required number of vNICs and define the adapter policy (refer to section 2.2). By default, the protected option A-B is selected, which is also the recommended option.

Note: Do not enable fabric failover on static vNICs before applying the vCon placement policy with Cisco UCS versions 1.4 and earlier.

The following are the benefits of adopting the configuration best practices:

Cisco UCS automatically places the dynamic vNICs evenly across the physical Cisco VIC adapters. This provides better network load balancing by allowing dynamic vNICs to pin to specific Fabric Interconnect LAN uplink ports.

By placing dynamic vNICs evenly on dual Cisco VIC adapters, you can effectively utilize the 10-Gbps ports on both adapters for network operations.

When there is a failure along a Fabric Interconnect path, including the LAN Ethernet uplink interfaces, protected mode enables all dynamic vNICs to fail over to the other Fabric Interconnect.

2.3.2 Half-Width Blade with Single Cisco VIC Adapter

A half-width blade with a single Cisco VIC adapter does not need a vCon placement policy defined in the Service Profile, because the two static vNICs/vHBAs and the dynamic vNICs are placed on the single adapter by default.

In the VM-FEX port profile, you configure the dynamic Ethernet interface's actual VLAN, QoS, pinning, and so on. Unlike static vNICs, dynamic vNICs are not configured up front.

2.4 VM-FEX with VMDirectPath (UPT)

VM-FEX VMDirectPath is a feature in which network I/O bypasses the hypervisor's network kernel and communicates directly with the Cisco VIC adapter. This frees the host CPU and memory cycles that would otherwise be spent handling VM network traffic.

The conditions that must be met for virtual machines on VM-FEX to use VMDirectPath are discussed in the following sections.

2.4.1 High Performance Option

VMDirectPath with vMotion (supported with ESX 5.0) on VM-FEX is the high-performance mode of VM-FEX. You can enable this mode by selecting the High Performance radio button in the port profile, as shown in Figure 4.

Figure 4. High Performance Mode Setting in the Port Profile Management Window

Figure 5 shows how to apply a port profile to a virtual machine in vCenter. After the port profile is applied, the DirectPath I/O Gen. 2 status automatically changes to Active. Figure 6 shows the DirectPath I/O Gen. 2 status as Active after the profile is applied.

Figure 5. Adding Port Profile for Virtual Machines

Figure 6. DirectPath I/O Gen.2 Status

2.4.2 Virtual Machine Configuration with VMDirectPath

Virtual machines using VMDirectPath dynamic vNICs must have their memory reserved. The reserved memory must not exceed the physical memory available on the ESX host. The adapter type for VM vNICs using VMDirectPath must be VMXNET3.

Figure 7 shows how to set the memory reservation.

Figure 7. Setting the Memory Reserve

You need to make sure that the virtual machine memory is reserved and, in this example, is configured with 1 GB. This is a mandatory configuration, as shown in Figure 8.

Figure 8. Resource Allocation Window to Reserve All Guest Memory

2.5 VMDirectPath Sizing

2.5.1 Windows Guest Virtual Machines with RSS Enabled

The following formula gives the maximum number of dynamic vNICs in VMDirectPath active mode when all the virtual machines run a Windows guest with the VMXNET3 emulated driver and have RSS enabled.

4 * (Static vNICs) + 4 * (vHBAs) + (Num of Dynamic vNICs) + (Num of Dynamic vNICs with VMDirectPath) * (Max(Num TQs, Num RQs) + 2) <= 128

Table 2. Scaling of Dynamic vNIC with VMDirectPath, Virtual Machines Running on Windows Guest with VMXNET3 Emulated driver and RSS Enabled

SL No | Guest OS | RSS Enabled | Static vNICs | Static vHBAs | Max(Num TQs, Num RQs) | Dynamic vNICs | Max VMDirectPath Dynamic vNICs
1 | Windows | Yes | 2 | 2 | 2 | 22 | 22
2 | Windows | Yes | 2 | 2 | 4 | 16 | 16
3 | Windows | Yes | 2 | 2 | 8 | 10 | 10
4 | Windows | No | 2 | 0 | 2 | 24 | 24
5 | Windows | No | 2 | 0 | 4 | 17 | 17
6 | Windows | No | 2 | 0 | 8 | 10 | 10
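As a worked example, the following minimal Python sketch (added for illustration, not part of the original guide) evaluates the formula above for row 2 of Table 2 and shows that the configuration lands exactly on the 128-resource budget.

def fits_budget_rss_enabled(static_vnics, vhbas, dynamic_vnics, directpath_vnics, max_tq_rq):
    # Resource budget from section 2.5.1: 4 per static vNIC, 4 per vHBA, 1 per dynamic vNIC,
    # plus (max(TQs, RQs) + 2) per VMDirectPath dynamic vNIC; the total must stay within 128.
    used = 4 * static_vnics + 4 * vhbas + dynamic_vnics + directpath_vnics * (max_tq_rq + 2)
    return used <= 128, used

# Row 2 of Table 2: 2 static vNICs, 2 vHBAs, max(TQ, RQ) = 4, 16 dynamic vNICs, all in VMDirectPath.
ok, used = fits_budget_rss_enabled(2, 2, 16, 16, 4)
print(ok, used)   # True 128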

2.5.2 Windows Guest Virtual Machines with RSS Disabled

The following formula gives the maximum number of dynamic vNICs in VMDirectPath active mode when all the virtual machines run a Windows guest with the VMXNET3 emulated driver and RSS is disabled.

4 * (Static vNICs) + 4 * (vHBAs) + (Num of Dynamic vNICs) + (Num of Dynamic vNICs with VMDirectPath) * 2 <= 128

Table 3. Scaling of Dynamic vNIC with VMDirectPath, Virtual Machines Running on Windows Guest with VMXNET3 Emulated Driver and RSS Disabled

SL No | Guest OS | RSS Enabled | Static vNICs | Static vHBAs | Dynamic vNICs | Max VMDirectPath Dynamic vNICs
1 | Windows | No | 2 | 2 | 37 | 37
2 | Windows | No | 2 | 0 | 40 | 40

2.5.3 Linux Guest Virtual Machines with Multi-Queue Enabled

The following formula gives the maximum number of dynamic vNICs in VMDirectPath active mode when all virtual machines run a Linux guest with the VMXNET3 emulated driver and multi-queue is enabled.

4 * (Static vNICs) + 4 * (vHBAs) + (Num of Dynamic vNICs) + (Num of Dynamic vNICs with VMDirectPath) * (Num of queue pairs + 1) <= 128

Table 4. Scaling of Dynamic vNIC with VMDirectPath, Virtual Machines Running on Linux Guest with VMXNET3 Emulated Driver and Multi-Queue Enabled

SL No | Guest OS | Multi-Queue Enabled | Static vNICs | Static vHBAs | Number of Queue Pairs | Dynamic vNICs | Max VMDirectPath Dynamic vNICs
1 | Linux | Yes | 2 | 2 | 1 | 37 | 37
2 | Linux | Yes | 2 | 2 | 2 | 28 | 28
3 | Linux | Yes | 2 | 2 | 4 | 18 | 18
4 | Linux | Yes | 2 | 2 | 8 | 11 | 11
5 | Linux | Yes | 2 | 0 | 1 | 40 | 40
6 | Linux | Yes | 2 | 0 | 2 | 30 | 30
7 | Linux | Yes | 2 | 0 | 4 | 20 | 20
8 | Linux | Yes | 2 | 0 | 8 | 12 | 12
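To make the arithmetic concrete, the minimal Python sketch below (added for illustration, not part of the original guide) solves the formula above for the largest number of VMDirectPath dynamic vNICs, assuming every dynamic vNIC runs in VMDirectPath mode as in Table 4. It reproduces the rows with 2 static vNICs and 2 vHBAs.

def max_directpath_vnics_linux_mq(static_vnics, vhbas, queue_pairs):
    # From section 2.5.3: 4*S + 4*H + N + N*(Q + 1) <= 128  =>  N <= (128 - 4*S - 4*H) / (Q + 2)
    return (128 - 4 * static_vnics - 4 * vhbas) // (queue_pairs + 2)

for q in (1, 2, 4, 8):
    print(q, max_directpath_vnics_linux_mq(2, 2, q))   # 37, 28, 18, 11 -- matches Table 4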

2.5.4 Linux Guest Virtual Machines with Multi-Queue Disabled

The following formula determines the maximum number of dynamic vNICs in VMDirectPath active mode when all virtual machines run a Linux guest with the VMXNET3 emulated driver and multi-queue is disabled.

4 * (Static vNICs) + 4 * (vHBAs) + (Num of Dynamic vNICs) + (Num of Dynamic vNICs with VMDirectPath + 1) <= 128

Table 5. Scaling of Dynamic vNIC with VMDirectPath, Virtual Machines Running on Linux Guest with VMXNET3 Emulated Driver and Multi-Queue Disabled

SL No | Guest OS | Multi-Queue Disabled | Static vNICs | Static vHBAs | Number of Queue Pairs | Dynamic vNICs | Max VMDirectPath Dynamic vNICs
1 | Linux | Yes | 2 | 2 | 1 | 56 | 56
2 | Linux | Yes | 2 | 0 | 1 | 60 | 60

2.5.5 Linux Guest Virtual Machines with Advanced Configuration

In a Linux guest virtual machine, you can change the interrupt mode of the vmxnet3 driver from MSI-X to MSI. The MSI-X interrupt mode, which is the default interrupt mode when VMDirectPath is enabled, consumes four interrupts. In comparison, the MSI interrupt mode consumes only one interrupt but causes the multi-queue feature to be turned off.

The following formula gives the maximum number of Dynamic vNIC with VMDirectPath mode active when all Virtual Machines run Linux Guest with VMXNET3 emulated driver and MSI Interrupt is turned on.

4 * (Static vNICs) + 4 * (vHBAs) + (Num of Dynamic vNICs) <= 128

You need to ensure that the guest VMs are shut down before enabling VMDirectPath and changing the interrupt mode from MSI-X to MSI. To change the interrupt mode, edit the virtual machine's configuration file (.vmx). All supported UPT Linux guest vNICs (RHEL 6, SLES 11, SLES 11 SP1) can be forced to use the MSI interrupt mode by editing the *.vmx file and inserting the following line.

ethernetX.intrMode = "2"

vmfs/volumes/4c9ffcdb-1d1c1488-a327-0025b500008d/Perf-1 # cat Perf-1.vmx | grep ethernet

ethernet0.present = "true"

ethernet0.virtualDev = "vmxnet3"

ethernet0.dvs.switchId = "04 f6 05 50 6f 1f e4 cf-d3 1c 0d e8 f7 24 50 b8"

ethernet0.dvs.portId = "1710"

ethernetX.intrMode = "2"

ethernet0.dvs.portgroupId = "dvportgroup-223"

ethernet0.dvs.connectionId = "753846454"

ethernet0.addressType = "static"

ethernet0.address = "00:50:56:3F:0A:01"

ethernet0.pciSlotNumber = "192"

ethernet0.dvs.portId = "1710"

Table 6. Scaling of Dynamic vNIC with VMDirectPath Mode Active, Virtual Machines Running on Linux Guest with VMXNET3 Emulated Driver and MSI Interrupt Turned On

SL No | Guest OS | Interrupt Mode MSI | Static vNICs | Static vHBAs | Dynamic vNICs | Max VMDirectPath Dynamic vNICs
1 | Linux | Yes | 2 | 2 | 112 | 112
2 | Linux | Yes | 2 | 0 | 112 | 112
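For reference, this minimal Python sketch (added for illustration, not part of the original guide) applies the MSI-mode formula above. Note that for the 2-static-vNIC / 0-vHBA case the formula alone would allow 120 dynamic vNICs, while Table 6 lists 112 as the tested maximum.

def max_dynamic_vnics_msi(static_vnics, vhbas):
    # From section 2.5.5: 4*S + 4*H + N <= 128  =>  N <= 128 - 4*S - 4*H
    return 128 - 4 * static_vnics - 4 * vhbas

print(max_dynamic_vnics_msi(2, 2))   # 112 -- matches row 1 of Table 6
print(max_dynamic_vnics_msi(2, 0))   # 120 -- Table 6 lists 112 as the tested value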

2.5.6 Configuration

This section explains how to enable or disable Receive Side Scaling (RSS). Figure 9 shows how to enable or disable RSS on a Windows guest virtual machine.

Figure 9. Enabling RSS in VMXNet3 Driver

Linux

NICs support multiple receive and transmit descriptor queues, referred to as multi-queue. On the receive side, different packets are sent to different queues to distribute processing among CPUs. The NIC distributes packets by applying filters to each packet, and the packets are placed in separate queues that are processed by separate CPUs. This mechanism is also known as Receive Side Scaling (RSS). The multi-queue feature is enabled by default in Linux with VMXNET3 driver version 1.0.16.0-k or higher on the guest virtual machine.

2.5.7 VMDirectPath Guest Virtual Machine OS and VMXNET3 Supported Versions

Table 7 lists the VMXNET3 driver versions that support VMDirectPath mode for different guest virtual machine operating systems.

Table 7. VMXNET3 Driver Versions

SL No | Guest OS | VMXNET3 Driver Version
1 | Windows 2008 SP2 | 1.2.22.0
2 | Windows 2008 R2 | 1.2.22.0
3 | RHEL 6.0 | 1.0.14.0-k
4 | SLES11 SP1 | 1.0.14.0-k
5 | SLES11 | 1.0.36.0

3 References

Hardware and Software Interoperability Matrix:

http://www.cisco.com/en/US/docs/unified_computing/ucs/interoperability/matrix/r_hcl_B_rel2_0.pdf

Cisco UCS Manager VM-FEX for VMware GUI Configuration Guide:

http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/vm_fex/vmware/gui/config_guide/b_GUI_VMware_VM-FEX_UCSM_Configuration_Guide.pdf