The Cisco Nexus 1000V provides a distributed, Layer 2 virtual switch that extends across many virtualized hosts. The Cisco Nexus 1000V manages a data center defined by the vCenter Server. Each server in the data center is represented as a line card in the Cisco Nexus 1000V and can be managed as if it were a line card in a physical Cisco switch.
The Cisco Nexus 1000V consists of the following two components:
Virtual Supervisor Module (VSM), which contains the Cisco CLI, configuration, and high-level features.
Virtual Ethernet Module (VEM), which acts as a line card and runs in each virtualized server to handle packet forwarding and other localized functions.
Software Compatibility with VMware
The servers that run the Cisco Nexus 1000V VSM and VEM must be in the VMware Hardware Compatibility list. This release of the Cisco Nexus 1000V supports vSphere 5.5, 5.1, and 5.0 release trains. For additional compatibility information, see the Cisco Nexus 1000V Compatibility Information.
Note All virtual machine network adapter types that VMware vSphere supports are supported with the Cisco Nexus 1000V. Refer to the VMware documentation when choosing a network adapter. For more information, see the VMware Knowledge Base article #1001805.
Software Compatibility with Cisco Nexus 1000V
This release supports hitless upgrades from Release 4.2(1)SV1(4) and later releases. For additional information, see the Cisco Nexus 1000V Software Upgrade Guide.
New and Changed Information
This section describes the new software features in Cisco Nexus 1000V Release 4.2(1)SV2(2.2).
New Software Features
The following software features were added in Cisco Nexus 1000V Release 4.2(1)SV2(2.2):
The Virtual Station Interface (VSI) Discovery and Configuration Protocol (VDP) on the Cisco Nexus 1000V is part of the IEEE 802.1Qbg (Edge Virtual Bridging [EVB]) standard and can detect and signal the presence of end hosts and exchange capabilities with an adjacent VDP-capable bridge. VDP is a reliable first-hop protocol that communicates the presence of end-host virtual machines (VMs) to adjacent leaf nodes in the Cisco Dynamic Fabric Automation (DFA) architecture. In addition to detecting the MAC and IP addresses of end-host VMs when a host comes up or during VM mobility events, VDP also triggers autoconfiguration of the leaf nodes in the DFA architecture to prepare them for further VM traffic.
For detailed information about VDP, see the Cisco Nexus 1000V VDP Configuration Guide.
Dynamic Fabric Automation (DFA)
The Cisco Nexus 1000V supports Cisco Dynamic Fabric Automation (DFA). This feature extends anycast gateway MAC and forwarding mode functionality to end hosts running the Cisco Nexus 1000V.
For detailed information about DFA, see the Cisco Nexus 1000V DFA Configuration Guide and the Cisco DFA Solutions Guide.
Cisco Nexus 1000V Multi-Hypervisor Licensing
The Cisco Nexus 1000V uses a multi-hypervisor licensing approach, which allows you to migrate a license from one Cisco Nexus 1000V switch platform type to another. For example, you can migrate the license from a Cisco Nexus 1000V for VMware switch to a Cisco Nexus 1000V for Microsoft Hyper-V. For more information about the multi-hypervisor licensing, see the Cisco Nexus 1000V Platform Multi-Hypervisor Licensing Guide.
Limitations and Restrictions
This section describes the limitations and restrictions of the Cisco Nexus 1000V.
Table 1 shows the Cisco Nexus 1000V configuration limits:
Table 1 Configuration Limits for Cisco Nexus 1000V
Supported Limits for a Single Cisco Nexus 1000V Deployment Spanning up to 2 Physical Data Centers
Virtual Ethernet Module (VEM)
Virtual Supervisor Module (VSM)
The VSMs can be placed in different physical data centers.
The previous restriction that required the active and standby VSMs to reside in a single physical data center no longer applies.
Active VLANs and VXLANs across all VEMs
2048 VLANs and 2048 VXLANs (with a combined maximum of 4096)
MAC addresses per VEM
MAC addresses per VLAN per VEM
vEthernet interfaces per port profile
1024 (without static auto expand port binding)
Same as DVS maximum (with static auto expand port binding)
Distributed Virtual Switches (DVS) per vCenter with VMware vCloud Director (vCD)
Distributed Virtual Switches (DVS) per vCenter without VMware vCloud Director (vCD)
Switched Port Analyzer (SPAN)/Encapsulated Remote Switched Port Analyzer (ERSPAN) sessions
QoS policy maps
QoS class maps
1. Only one connection to the vCenter Server is permitted at a time.
2. When you upgrade from an earlier version of the Cisco Nexus 1000V software to the current version, the maximum number of vEth ports is displayed as 216. To get the currently supported vEth limit, remove the host from the DVS and add the host again.
3. This number can be exceeded if the VEM has available memory.
Single VMware Data Center Support
The Cisco Nexus 1000V can be connected to a single VMware vCenter Server data center object. The virtual data center can span multiple physical data centers.
Each VMware vCenter can support multiple Cisco Nexus 1000V VSMs per vCenter data center.
Implementing VDP on the Cisco Nexus 1000V has the following limitations and restrictions:
The Cisco Nexus 1000V supports the Cisco DFA-capable VDP based on IEEE standard 802.1Qbg, Draft 2.2, and does not support the Link Layer Discovery Protocol (LLDP). Therefore, the EVB type-length-value (TLV) is not originated or processed by the Cisco Nexus 1000V.
The VDP implementation in the current release supports a matching LLDP-less implementation on the bridge side, which is delivered as part of the Cisco DFA solution. For more information on the Cisco DFA, see the Cisco DFA Solutions Guide.
Timer-related parameters are individually configurable in the station and in the leaf.
Connectivity to multiple unclustered bridges is not supported in this release.
IPv6 addresses in filter format are not supported in this release.
VDP is supported only for segmentation-based port profiles. VDP for VLAN-based port profiles is not available in this release.
The dynamic VLANs allocated by VDP are local to the VEM, and they should not be configured on the Cisco Nexus 1000V VSM.
VDP is supported on VMware ESX releases 5.0, 5.1, and 5.5 in the current release.
The DFA feature has the following limitations and restrictions:
Fabric forwarding mode is not supported under the VLAN configuration.
VMotion of VSM
VMotion of the VSM has the following limitations and restrictions:
VMotion of a VSM is supported for both the active and standby VSM VMs. For high availability, we recommend that the active VSM and standby VSM reside on separate hosts.
If you enable Distributed Resource Scheduler (DRS), you must use the VMware anti-affinity rules to ensure that the two virtual machines are never on the same host, and that a host failure cannot result in the loss of both the active and standby VSM.
VMware VMotion does not complete when using an open virtual appliance (OVA) VSM deployment if the CD image is still mounted. To complete the VMotion, either click Edit Settings on the VM to disconnect the mounted CD image, or power off the VM. No functional impact results from this limitation.
If you add one host in a DRS cluster that is using a vSwitch to a VSM, you must also move the remaining hosts in the DRS cluster to the VSM. Otherwise, the DRS logic does not work: VMs that are deployed on the VEM could be moved to a host in the cluster that does not have a VEM, and those VMs would lose network connectivity.
For more information about VMotion of VSM, see the Cisco Nexus 1000V Software Installation Guide.
ACLs have the following limitations and restrictions:
IPv6 ACL rules are not supported.
VLAN-based ACLs (VACLs) are not supported.
ACLs are not supported on port channels.
IP ACL rules do not support the following:
– fragments option
– addressgroup option
– portgroup option
– interface ranges
Control VLAN traffic between the VSM and VEM does not go through ACL processing.
The NetFlow configuration has the following support, limitations, and restrictions:
Layer 2 match fields are not supported.
NetFlow Sampler is not supported.
NetFlow Exporter format V9 is supported.
NetFlow Exporter format V5 is not supported.
The multicast traffic type is not supported. Cache entries are created for multicast packets, but the packet/byte count does not reflect replicated packets.
NetFlow is not supported on port channels.
The NetFlow cache table has the following limitation:
Immediate and permanent cache types are not supported.
Note The cache size that is configured using the CLI defines the number of entries, not the size in bytes. The configured entries are allocated for each processor in the ESX host and the total memory allocated depends on the number of processors.
Port security has the following support, limitations, and restrictions:
Port security is enabled globally by default. The feature/no feature port-security command is not supported.
In response to a security violation, you can shut down the port.
The port security violation actions that are supported on a secure port are Shutdown and Protect. The Restrict violation action is not supported.
Port security is not supported on the PVLAN promiscuous ports.
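As a hedged sketch of the supported violation actions (the interface number is hypothetical), port security with the Shutdown action can be configured as follows:

```
! Hypothetical sketch: enabling port security on a vEthernet interface
! with one of the two supported violation actions (Shutdown or Protect).
interface vethernet 3
  switchport port-security
  switchport port-security violation shutdown
! The Restrict violation action is not supported on the Cisco Nexus 1000V.
```

Substituting protect for shutdown selects the other supported action.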
Port profiles have the following restrictions or limitations:
There is a limit of 255 characters in a port-profile command attribute.
We recommend that you save the configuration across reboots, which shortens the VSM bringup time.
If you are altering or removing a port channel, we recommend that you migrate the interfaces that inherit the port channel port profile to a port profile with the desired configuration, rather than editing the original port channel port profile directly.
If you attempt to remove a port profile that is in use, that is, one that has already been auto-assigned to an interface, the Cisco Nexus 1000V generates an error message and does not allow the removal.
When you remove a port profile that is mapped to a VMware port group, the associated port group and settings within the vCenter Server are also removed.
Policy names are not checked against the policy database when ACL/NetFlow policies are applied through the port profile. It is possible to apply a nonexistent policy.
Only SSH version 2 (SSHv2) is supported.
For more information, see the Cisco Nexus 1000V Security Configuration Guide.
Cisco NX-OS Commands Might Differ from Cisco IOS
Be aware that the Cisco NX-OS CLI commands and modes might differ from those commands and modes used in the Cisco IOS software.
Layer 2 Switching: No Spanning Tree Protocol
The Cisco Nexus 1000V forwarding logic is designed to prevent network loops so it does not need to use the Spanning Tree Protocol. Packets that are received from the network on any link connecting the host to the network are not forwarded back to the network by the Cisco Nexus 1000V.
Cisco Discovery Protocol
The Cisco Discovery Protocol (CDP) is enabled globally by default.
CDP runs on all Cisco-manufactured equipment over the data link layer and does the following:
Advertises information to all attached Cisco devices.
Discovers and views information about those Cisco devices.
– CDP can discover up to 256 neighbors per port if the port is connected to a hub with 256 connections.
If you disable CDP globally, CDP is also disabled for all interfaces.
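As a minimal sketch, CDP can be disabled globally (which also disables it on all interfaces) and re-enabled from global configuration mode:

```
! Sketch: toggling CDP globally; disabling it here disables it
! on every interface as well.
configure terminal
  no cdp enable
  cdp enable
```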
For more information about CDP, see the Cisco Nexus 1000V System Management Configuration Guide.
DHCP Not Supported for the Management IP
DHCP is not supported for the management IP. The management IP must be configured statically.
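A minimal sketch of a static management IP configuration follows; the addresses are illustrative only (RFC 5737 documentation ranges):

```
! Sketch: statically configuring the management interface and a
! default route in the management VRF (addresses are illustrative).
configure terminal
  interface mgmt0
    ip address 192.0.2.10/24
  vrf context management
    ip route 0.0.0.0/0 192.0.2.1
```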
Link Aggregation Control Protocol (LACP)
The Link Aggregation Control Protocol (LACP) is an IEEE standard protocol that aggregates Ethernet links into an EtherChannel.
The Cisco Nexus 1000V has the following restrictions for enabling LACP on ports carrying the control and packet VLANs:
Note These restrictions do not apply to other data ports using LACP.
If LACP offload is disabled, at least two ports must be configured as part of the LACP channel.
Note This restriction is not applicable if LACP offload is enabled. You can check the LACP offload status by using the show lacp offload status command.
The upstream switch ports must be configured in spanning-tree port type edge trunk mode.
Upstream Switch Ports
All upstream switch ports must be configured in spanning-tree port type edge trunk mode.
Without spanning-tree PortFast on upstream switch ports, it takes approximately 30 seconds to recover these ports on the upstream switch. Because these ports are carrying control and packet VLANs, the VSM loses connectivity to the VEM.
The following commands are available to use on Cisco upstream switch ports in interface configuration mode:
spanning-tree portfast trunk
spanning-tree portfast edge trunk
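As a hedged sketch of how these commands are applied on a Cisco upstream switch (the interface name is illustrative):

```
! Hypothetical sketch on a Cisco IOS upstream switch: trunk the port
! toward the host and enable PortFast on the trunk so the VSM does not
! lose connectivity to the VEM during spanning-tree convergence.
interface GigabitEthernet1/0/1
  switchport mode trunk
  spanning-tree portfast trunk
```

On platforms that use the newer syntax, spanning-tree portfast edge trunk replaces the last command.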
The Cisco Nexus 1010 (1000V) cannot resolve a domain name or hostname to an IP address.
When the maximum transmission unit (MTU) is configured on an operationally up interface, the interface goes down and comes back up.
Layer 3 VSG
When a VEM communicates with the Cisco Virtual Security Gateway (VSG) in Layer 3 mode, an additional header with 94 bytes is added to the original packet. You must set the MTU to a minimum of 1594 bytes to accommodate this extra header for any network interface through which the traffic passes between the Cisco Nexus 1000V and the Cisco VSG. These interfaces can include the uplink port profile, the proxy ARP router, or a virtual switch.
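As a hedged sketch (the port-profile name is hypothetical, and this assumes the MTU is configurable under the port profile as it is under the interface), the uplink MTU could be raised to accommodate the 94-byte header:

```
! Hypothetical sketch: raising the MTU on an uplink port profile so the
! 94-byte Layer 3 VSG encapsulation header fits within the frame.
port-profile type ethernet uplink-l3-vsg
  mtu 1594
```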
Copy Running-Config Startup-Config Command
When you are using the copy running-config startup-config command, do not press the PrtScn key. If you do, the command aborts.
Dynamic Entries Are Not Deleted for Linux VM
On a Linux VM that has multiple adapters, the DHCP release packet is sent from an incorrect interface (because of OS behavior) and is dropped. As a result, the binding entry is not deleted. This is a Linux issue in which the packets from all interfaces go out through one interface (the default interface). To avoid this issue, put the interfaces in different subnets and make sure that the default gateway for each interface is set.
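A minimal sketch of this workaround on the Linux guest, using iproute2 (all addresses are illustrative, from RFC 5737 documentation ranges):

```
# Sketch: place eth0 and eth1 in different subnets and give each its
# own default gateway, with distinct metrics so both routes can coexist.
ip addr add 192.0.2.10/24 dev eth0
ip route add default via 192.0.2.1 dev eth0 metric 100
ip addr add 198.51.100.10/24 dev eth1
ip route add default via 198.51.100.1 dev eth1 metric 200
```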
Source Filter TX VLANs Are Missing After the VSM Restarts
When a SPAN (ERSPAN-source) session is created with a port channel as the source interface and PVLAN promiscuous access is programmed, the filter RX is not configured, and the programmed filter TX does not persist across a VSM reload.
To work around this issue, configure all the primary and secondary VLANs as filter VLANs while using the port channel with PVLAN Promiscuous access as the source interface.
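As a hedged sketch of this workaround (the session number and VLAN IDs are hypothetical), both the primary and secondary PVLANs are listed as filter VLANs:

```
! Hypothetical sketch: when the source is a port channel with PVLAN
! promiscuous access, list the primary and secondary VLANs as filters.
monitor session 1
  source interface port-channel1 both
  filter vlan 100, 101-102
  no shut
```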
Default SSH Inactive Session Timeout
The default SSH inactive session timeout is 30 minutes, but the timeout setting is disabled by default, so the connection remains active. The exec-timeout command can be used to explicitly configure the inactive session timeout limit.
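As a minimal sketch, an explicit 30-minute inactive session timeout for SSH (vty) sessions can be set as follows:

```
! Sketch: configuring an explicit inactive session timeout (in minutes)
! for vty (SSH) sessions.
configure terminal
  line vty
    exec-timeout 30
```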
Queueing Policy Cannot Be Changed in a Flexible Upgrade Setup
Queueing is supported starting from Cisco NX-OS Release 4.2(1)SV1(5.1). Any queueing configuration that exists on the VSM from an earlier release stops working, and port profiles that have such a queueing configuration cannot be used. If a port is down, move it to a profile without QoS queueing.
Clear QoS Statistics Fails on the VSM
When a policy-map, of type queuing, that has a class map of type “match-any” without any match criteria, is applied on an interface, a resource pool is not created for that specific class ID. As a result, the collection of statistics fails and no data is sent back to the VSM. To work around this issue, add a match criteria on the empty class map.
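As a hedged sketch of this workaround (the class-map and policy-map names are hypothetical), a match criterion is added to the otherwise-empty match-any class map:

```
! Hypothetical sketch: give the empty match-any queuing class map a
! match criterion so statistics collection works on the VSM.
class-map type queuing match-any cm-example
  match cos 5
policy-map type queuing pm-example
  class type queuing cm-example
    bandwidth percent 10
```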
Span Source/Destination Removed from the Session Configuration After an Atomic Port-Profile Change
If a virtual Ethernet port is a SPAN/ERSPAN source or destination and its port profile changes atomically, the virtual Ethernet port is removed from the SPAN/ERSPAN configuration. If it was the only operational source/destination, the session might go down.
In high-traffic scenarios, IGMP Query packets might be queued behind data packets. This issue can prevent IGMP Joins from being sent for the corresponding VXLAN segments and cause unknown-unicast, multicast, and broadcast traffic to fail.
The virtual Ethernet auto delete option does not work. If the switch has non-participating virtual Ethernet (vEth) interfaces, those interfaces are not automatically deleted even if you configure the vEth auto delete option.
The vethPerHostUsed field displays the same value as the vethUsed field in the XML response for the http://vsm_ip/api/vc/limits API. It should instead display the number of vEths on the host with the maximum number of used vEths.
In a corner case, the persistent storage service (PSS) can become corrupted on the gateways.
The Cisco Management Information Base (MIB) list includes Cisco proprietary MIBs and many other Internet Engineering Task Force (IETF) standard MIBs. These standard MIBs are defined in Requests for Comments (RFCs). To find specific MIB information, you must examine the Cisco proprietary MIB structure and related IETF-standard MIBs supported by the Cisco Nexus 1000V Series switch.
The MIB Support List is available at the following FTP site:
Subscribe to What’s New in Cisco Product Documentation, which lists all new and revised Cisco technical documentation, as an RSS feed and have content delivered directly to your desktop using a reader application. The RSS feeds are a free service.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Internet Protocol (IP) addresses used in this document are for illustration only. Examples, command display output, and figures are for illustration only. If an actual IP address appears in this document, it is coincidental.