Cisco Nexus 1000V Switch for VMware vSphere

Nexus 1000v and UCS QoS Configuration/Integration Example

Document ID: 117541

Updated: Feb 27, 2014

Contributed by Vishal Mehta and Manuel Velasco, Cisco TAC Engineers.



This document describes how to configure and troubleshoot Quality of Service (QoS) for Virtual Machines (VMs) in a setup that combines the Cisco Unified Computing System (UCS) and the Cisco Nexus 1000v switch. QoS can be controlled at the Nexus 1000v level, at the UCS level, or both. This document demonstrates both variations and their resulting effects.



Requirements

Cisco recommends that you have knowledge of these topics:

  • QoS
  • Cisco UCS
  • VMware ESXi
  • Cisco Nexus 1000V Switch

Components Used

The command outputs in this document are based on these software and hardware versions:

  • Cisco Nexus 5020 Switch Version 5.0(3)N2(2a)
  • Cisco UCS Version 2.1(1d)
  • Cisco UCS B200 M3 Blade Server with Cisco Virtual Interface Card (VIC) 1240
  • vSphere Version 5.1 (ESXi and vCenter)
  • Cisco Nexus 1000V Switch Version 4.2(1)SV2(1.1a)

The software version is not a limitation for the demonstrated QoS feature. However, the examples in this document are valid only for Cisco adapter cards.

The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command or packet capture setup.


Network Setup

In this setup, UCS is connected to the Nexus 5000 Series switch via Virtual Port Channel (vPC). Each blade of the UCS has two Virtual Network Interface Controllers (vNICs): one for the vSwitch and the other for the Nexus 1000v. The operating system (OS) installed on both hosts is VMware ESXi Version 5.1. Each host has one VM with a Guest-OS of Windows 2012.

Here are some details about the network setup:

  • The SJTAC VM runs on the host on UCS Blade 1/6.

  • The TEST VM runs on the host on UCS Blade 1/5.

  • Blade 1/5 has a service profile named ESXi-Local, and Blade 1/6 has a service profile named ESXi-2.

The vNIC for the Nexus 1000v on Server 1/5 has a Fabric-A primary path, and the vNIC for the Nexus 1000v on Server 1/6 has a Fabric-B primary path. Therefore, traffic between these hosts traverses the upstream Nexus 5000 switches.


Here is the UCS global QoS configuration:

As illustrated in the image, any QoS policy with a Priority of Silver has a Class of Service (CoS) value of 2, and any with a Priority of Gold has a CoS value of 6.

QoS policies named Milano and Florida are created, one for each host.

Whether UCS controls the CoS for a vNIC depends strictly on the Host Control field of the QoS policy that is assigned to that particular vNIC:

  • If None is selected, UCS assigns the CoS value associated with the Priority class given in the QoS policy and disregards any marking applied at the host level by the Nexus 1000v.

  • If Full is selected and the packet arrives with a valid CoS assigned by the Nexus 1000v, UCS trusts the CoS set at the host level. Otherwise, UCS uses the CoS value associated with the priority selected in the Priority drop-down list.
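As a hedged illustration, the two policies described in this document might be created from the UCS Manager CLI as follows. The org path and the policy names Milano and Florida come from this example; the exact prompt strings and scope layout can vary by UCS release, so treat this as a sketch rather than a verbatim transcript:

```
UCS-A# scope org /
UCS-A /org # create qos-policy Milano
UCS-A /org/qos-policy # set prio gold
UCS-A /org/qos-policy # set host-control full
UCS-A /org/qos-policy # commit-buffer
UCS-A /org/qos-policy # exit
UCS-A /org # create qos-policy Florida
UCS-A /org/qos-policy # set prio silver
UCS-A /org/qos-policy # set host-control none
UCS-A /org/qos-policy # commit-buffer
```

The policy is then referenced from the vNIC (or the vNIC template) of the relevant service profile.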

Because the Milano QoS policy has a Host Control of Full, the Gold Priority (CoS 6) acts only as a fallback and the CoS marking set by the Nexus 1000v is trusted.

The Florida QoS policy has a Host Control of None, which means that all of the packets on that vNIC are remarked with Silver Priority (CoS 2) irrespective of the settings from the Nexus 1000v.

The QoS Policy Milano is assigned to the vNIC of Blade 1/6, which hosts VM - SJTAC. Therefore, any traffic sent by SJTAC, marked at Nexus 1000v, is trusted and unmodified.

The QoS Policy Florida is assigned to the vNIC of Blade 1/5, which hosts VM - TEST. Therefore, any traffic sent by TEST is remarked at UCS to a CoS value of 2.

On the Nexus 1000v, a policy map is created for each of the two VMs. Policy gold_in_mark sets the CoS to 4 and policy silver_in_mark sets the CoS to 5, as shown here:
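The two policy maps can be sketched in Nexus 1000v configuration as follows. This is a minimal reconstruction of what the configuration output shows; the use of class-default is an assumption, as the original policies could instead match specific traffic classes:

```
n1000v# configure terminal
n1000v(config)# policy-map type qos gold_in_mark
n1000v(config-pmap-qos)# class class-default
n1000v(config-pmap-c-qos)# set cos 4
n1000v(config-pmap-c-qos)# exit
n1000v(config-pmap-qos)# exit
n1000v(config)# policy-map type qos silver_in_mark
n1000v(config-pmap-qos)# class class-default
n1000v(config-pmap-c-qos)# set cos 5
```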

This Nexus 1000v configuration is the most common configuration seen for basic QoS settings.

The SJTAC VM (veth 3) is given the QoS policy gold_in_mark, and the TEST VM (veth 6) is given the QoS policy silver_in_mark.
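On the Nexus 1000v, these policies are typically attached through the port profiles that back the vEthernet interfaces, so that every VM attached to a profile inherits the policy. A minimal sketch, with hypothetical port-profile names (the example does not name them):

```
n1000v(config)# port-profile type vethernet VM-GOLD
n1000v(config-port-prof)# service-policy type qos input gold_in_mark
n1000v(config-port-prof)# exit
n1000v(config)# port-profile type vethernet VM-SILVER
n1000v(config-port-prof)# service-policy type qos input silver_in_mark
```

Applying the policy in the input direction marks traffic as it enters the switch from the VM, which is what allows the UCS Host Control setting to see a valid CoS on the packet.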

Therefore, SJTAC VM traffic is marked with a CoS 4 at the Nexus 1000v. Since the corresponding host (Blade 1/6) has a Milano QoS Policy, that CoS is unmodified across the UCS and all packets that originate from SJTAC have a QoS setting of CoS 4.

TEST VM traffic is initially marked with CoS 5 on the Nexus 1000v, but it is remarked on the UCS vNIC to a QoS setting of CoS 2 because the corresponding host (Blade 1/5) has a Florida QoS Policy with a Priority of Silver.

The sections that follow verify the settings on the UCS and show, with packet captures, that the QoS marking and remarking described previously actually occur.

For a more detailed QoS Configuration on the UCS, refer to Configuring Quality of Service.

For a more detailed QoS Configuration on the Nexus 1000v, refer to Cisco Nexus 1000V Quality of Service Configuration Guide, Release 4.2(1)SV2(2.1).


Verify

Use the UCS CLI to verify that the settings implemented with the UCS Manager GUI took effect.

This output shows the corresponding QoS policies and their Priority settings:

This output shows the mapping of Priority with the CoS value:

This output shows the confirmation of the QoS policy that is applied to Blade 1/6 at a specific vNIC:

This output shows the confirmation of the QoS policy that is applied to Blade 1/5 at a specific vNIC:
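As a hedged sketch, the kinds of UCS Manager CLI commands used for these checks might look like this. The service-profile name is taken from this example, and the exact command set and output format vary by UCS release:

```
UCS-A# scope org /
UCS-A /org # show qos-policy detail
UCS-A /org # scope service-profile ESXi-2
UCS-A /org/service-profile # show vnic detail
```

The QoS policy detail confirms the Priority and Host Control values, and the vNIC detail confirms which QoS policy is bound to each vNIC.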

This shows a continuous ping initiated between the two VMs:


Packet captures are taken at the Fabric Interconnect in order to verify the QoS settings across the hosts:
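If the captures are exported as .pcap files, the CoS can be read from the 802.1Q priority bits of each frame. A hedged sketch with tcpdump (the file name is an assumption):

```
# Print link-level headers for tagged frames; in tcpdump's output
# the "p" field inside the 802.1Q VLAN tag is the CoS value.
tcpdump -e -nn -r capture1.pcap vlan
```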

Capture 1:

Capture 2:

As seen in the previous captures, packets that come from the TEST VM are set with a QoS value of CoS 2, and packets that come from the SJTAC VM are set with a QoS value of CoS 4.

This explains how the Host Control Field in UCS and the QoS Settings on the Nexus 1000v coexist and modify the CoS parameters for the traffic that originates at the VM level.


Troubleshoot

There is currently no specific troubleshooting information available for this configuration.
