The Cisco Nexus 1000V provides a distributed Layer 2 virtual switch that extends across many virtualized hosts. The Cisco Nexus 1000V manages a data center defined by the vCenter Server. Each server in the data center is represented as a line card in the Cisco Nexus 1000V and can be managed as if it were a line card in a physical Cisco switch.
The Cisco Nexus 1000V consists of the following two components:
Virtual Supervisor Module (VSM), which contains the Cisco CLI, configuration, and high-level features.
Virtual Ethernet Module (VEM), which acts as a line card and runs in each virtualized server to handle packet forwarding and other localized functions.
The servers that run the Cisco Nexus 1000V VSM and VEM must be in the VMware Hardware Compatibility List. This release of the Cisco Nexus 1000V supports the vSphere 5.5, 5.1, and 5.0 release trains. For additional compatibility information, see the Cisco Nexus 1000V Compatibility Information.
Note All virtual machine network adapter types that VMware vSphere supports are supported with the Cisco Nexus 1000V. Refer to the VMware documentation when choosing a network adapter. For more information, see VMware Knowledge Base article #1001805.
Software Compatibility with Cisco Nexus 1000V
This release supports hitless upgrades from Release 4.2(1)SV1(4) and later releases. For additional information, see the Cisco Nexus 1000V Software Upgrade Guide.
New and Changed Information
This section provides the following information about Cisco Nexus 1000V Release 4.2(1)SV2(2.1):
A VXLAN supports two different modes for flood traffic:
Multicast mode—A VXLAN uses an IP multicast network to send broadcast, multicast, and unknown unicast flood frames. Each multicast mode VXLAN has an assigned multicast group address. When a new VM joins a host in a multicast mode VXLAN, the VEM joins the assigned multicast group by sending IGMP join messages. Flood traffic (broadcast, multicast, and unknown unicast) from the VM is encapsulated and sent with the assigned multicast group IP address as the destination IP address. Packets sent to a known unicast MAC address are encapsulated and sent directly to the destination server's VTEP IP address.
Unicast-only mode—A VXLAN sends broadcast, multicast, and unknown unicast flood frames to a designated VTEP on each VEM that has at least one VM in the corresponding VXLAN, using that VTEP's unicast IP address as the destination IP address. When a new VM joins a host in a unicast-only mode VXLAN, a designated VTEP is selected to receive flood traffic on that host, and this designated VTEP is communicated to all other hosts through the VSM. Flood traffic (broadcast, multicast, and unknown unicast) is replicated to the designated VTEP of each VEM in that VXLAN by encapsulating it with a VXLAN header; packets are sent only to VEMs that have a VM in that VXLAN. Packets with a known unicast MAC address are encapsulated and sent directly to the destination server's VTEP IP address.
– MAC distribution mode (supported only in unicast mode)—In this mode, unknown unicast flooding in the network is eliminated. The Virtual Supervisor Module (VSM) learns all the MAC addresses from the VEMs in all VXLANs and distributes those MAC address-to-VTEP IP mappings to the other VEMs. As a result, there are no unknown unicast MAC addresses in the network when the communicating VMs reside on VEMs controlled by the same VSM. A minimal configuration sketch of these modes follows this list.
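The following is a minimal configuration sketch of the two flood modes on the VSM. The bridge-domain names, segment IDs, and multicast group address are placeholders, and exact command availability should be verified against the Cisco Nexus 1000V VXLAN Configuration Guide for this release.

  feature segmentation
  !
  ! Multicast mode: flood traffic is sent to the assigned multicast group.
  bridge-domain tenant-red
    segment id 5000
    group 239.1.1.1
  !
  ! Unicast-only mode: flood traffic is replicated to the designated VTEPs.
  bridge-domain tenant-blue
    segment id 5001
    segment mode unicast-only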
VXLAN termination (encapsulation and decapsulation) is supported only on virtual switches. As a result, the only endpoints that can connect into VXLANs are VMs that are connected to a virtual switch. Physical servers cannot be in VXLANs and routers or services that have traditional VLAN interfaces cannot be used by VXLAN networks. The only way that VXLANs can currently interconnect with traditional VLANs is through VM-based software routers.
The VXLAN gateways supported are as follows:
VMware vShield Edge
Cisco VXLAN gateway
The VXLAN-VLAN translation mappings for the VXLAN gateway must be configured through the VSM and must always be a 1:1 mapping for each Layer 2 domain. Each VXLAN gateway can support multiple VXLAN-VLAN mappings.
A VXLAN trunk allows you to trunk multiple VXLANs on a single virtual Ethernet interface. To achieve this configuration, you must configure a VXLAN-VLAN mapping on the virtual Ethernet interface.
VXLAN-VLAN mappings are configured through the VSM and must always be a 1:1 mapping for each Layer 2 domain. VXLAN-VLAN mappings are applied on a virtual Ethernet interface using a port profile. A single port profile can support multiple VXLAN-VLAN mappings.
The Cisco Nexus 1000V supports offloading VXLAN checksum and TSO computations of inner packets for VXLAN encapsulated packets. The VXLAN offload feature is supported only if the adapter supports it and VMware supports the offload feature on that adapter. For more information, see the Cisco Nexus 1000V VXLAN Configuration Guide.
You can use multi-MAC addresses to mark a virtual Ethernet interface as capable of sourcing packets from multiple MAC addresses. For example, you can use this feature on a virtual Ethernet port that has VXLAN trunking enabled and whose connected VM bridges packets sourced from multiple MAC addresses.
By using this feature, you can easily identify such multi-MAC capable ports and handle live migration scenarios correctly for those ports.
Extending VEMs for Centralized Management of Data Centers and Branch Offices
To facilitate a centralized management environment, the VSM can be located centrally in the main data center while the VEMs are spread across different branch locations. In such deployments, the recommended maximum latency between the VSM and the VEMs is 100 ms.
Limitations and Restrictions
This section describes the Cisco Nexus 1000V limitations and restrictions.
Table 1 shows the Cisco Nexus 1000V configuration limits:
Table 1 Configuration Limits for Cisco Nexus 1000V
Supported limits for a single Cisco Nexus 1000V deployment spanning up to 2 physical data centers:
Virtual Ethernet Module (VEM)
Virtual Supervisor Module (VSM): the VSMs can be placed in different physical data centers. The previous restriction that required the active and standby VSMs to be in a single physical data center no longer applies.
Active VLANs and VXLANs across all VEMs: 2048 VLANs and 2048 VXLANs (with a combined maximum of 4096)
MACs per VEM
MACs per VLAN per VEM
vEthernet interfaces per port profile: 1024 (without static auto expand port binding); same as the DVS maximum (with static auto expand port binding)
Distributed Virtual Switches (DVS) per vCenter with VMware vCloud Director (vCD)
Distributed Virtual Switches (DVS) per vCenter without VMware vCloud Director (vCD)
1. Only one connection to the vCenter Server is permitted at a time.
2. An upgrade from an earlier version of the Cisco Nexus 1000V software to the current version displays the maximum number of vEth ports as 216. To get the currently supported vEth limit, remove the host from the DVS and add it again.
3. This number can be exceeded if the VEM has available memory.
Single VMware Data Center Support
The Cisco Nexus 1000V can be connected to a single VMware vCenter Server datacenter object. Note that this virtual datacenter can span multiple physical data centers.
VMotion of VSM
VMotion of the VSM has the following limitations and restrictions:
VMotion of a VSM is supported for both the active and standby VSM VMs. For high availability, we recommend that the active VSM and standby VSM reside on separate hosts.
If you enable Distributed Resource Scheduler (DRS), you must use the VMware anti-affinity rules to ensure that the two virtual machines are never on the same host, and that a host failure cannot result in the loss of both the active and standby VSM.
VMware VMotion does not complete when using an open virtual appliance (OVA) VSM deployment if the CD image is still mounted. To complete the VMotion, either click Edit Settings on the VM to disconnect the mounted CD image, or power off the VM. No functional impact results from this limitation.
If you add a host that uses a vSwitch to a VSM and that host is part of a DRS cluster, you must also move the remaining hosts in the DRS cluster to the VSM. Otherwise, the DRS logic does not work: the VMs that are deployed on the VEM could be moved to a host in the cluster that does not have a VEM, and those VMs would lose network connectivity.
For more information about VMotion of the VSM, see the Cisco Nexus 1000V Software Installation Guide.
ACLs have the following limitations and restrictions:
IPV6 ACL rules are not supported.
VLAN-based ACLs (VACLs) are not supported.
ACLs are not supported on port channels.
IP ACL rules do not support the following:
– fragments option
– addressgroup option
– portgroup option
– interface ranges
Control VLAN traffic between the VSM and VEM does not go through ACL processing.
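The following is a minimal sketch of a supported port ACL configuration applied to a vEthernet interface. The ACL name, rules, and interface number are placeholders; the unsupported options listed above are simply omitted.

  ip access-list acl-web
    permit tcp any any eq 80
    deny ip any any
  !
  interface vethernet 3
    ip port access-group acl-web in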
The NetFlow configuration has the following support, limitations, and restrictions:
Layer 2 match fields are not supported.
NetFlow Sampler is not supported.
NetFlow Exporter format V9 is supported.
NetFlow Exporter format V5 is not supported.
The multicast traffic type is not supported. Cache entries are created for multicast packets, but the packet/byte count does not reflect replicated packets.
NetFlow is not supported on port channels.
The NetFlow cache table has the following limitation:
Immediate and permanent cache types are not supported.
Note The cache size that is configured using the CLI defines the number of entries, not the size in bytes. The configured entries are allocated for each processor in the ESX host and the total memory allocated depends on the number of processors.
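The following is a minimal sketch of a NetFlow configuration that uses the supported Version 9 export format and avoids the unsupported options listed above. The exporter, record, monitor, and port profile names, as well as the collector address and UDP port, are placeholders.

  feature netflow
  !
  flow exporter nf-exporter
    destination 192.0.2.50
    transport udp 9995
    version 9
  !
  flow record nf-record
    match ipv4 source address
    match ipv4 destination address
    collect counter bytes
    collect counter packets
  !
  flow monitor nf-monitor
    record nf-record
    exporter nf-exporter
  !
  port-profile type vethernet vm-data
    ip flow monitor nf-monitor input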
Port security has the following support, limitations, and restrictions:
Port security is enabled globally by default. The feature/no feature port-security command is not supported.
In response to a security violation, you can shut down the port.
The port security violation actions that are supported on a secure port are Shutdown and Protect. The Restrict violation action is not supported.
Port security is not supported on the PVLAN promiscuous ports.
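The following is a minimal sketch of port security in a vEthernet port profile, using one of the two supported violation actions. The profile name, VLAN, and address limit are placeholders.

  port-profile type vethernet secure-vm
    vmware port-group
    switchport mode access
    switchport access vlan 100
    switchport port-security
    switchport port-security maximum 2
    switchport port-security violation shutdown
    no shutdown
    state enabled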
Port profiles have the following restrictions or limitations:
There is a limit of 255 characters in a port-profile command attribute.
We recommend that you save the configuration across reboots, which shortens the VSM bringup time.
If you are altering or removing a port channel, we recommend that you migrate the interfaces that inherit the port channel port profile to a port profile with the desired configuration, rather than editing the original port channel port profile directly.
If you attempt to remove a port profile that is in use, that is, one that has already been auto-assigned to an interface, the Cisco Nexus 1000V generates an error message and does not allow the removal.
When you remove a port profile that is mapped to a VMware port group, the associated port group and settings within the vCenter Server are also removed.
Policy names are not checked against the policy database when ACL/NetFlow policies are applied through the port profile. It is possible to apply a nonexistent policy.
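For reference, the following is a minimal sketch of a vEthernet port profile and its vCenter port group mapping; the profile name, port group name, and VLAN are placeholders. As noted above, removing this port profile would also remove the associated port group from the vCenter Server.

  port-profile type vethernet vm-data-10
    vmware port-group VM-Data-10
    switchport mode access
    switchport access vlan 10
    no shutdown
    state enabled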
Only SSH version 2 (SSHv2) is supported.
For more information, see the Cisco Nexus 1000V Security Configuration Guide.
Cisco NX-OS Commands Might Differ from Cisco IOS
Be aware that the Cisco NX-OS CLI commands and modes might differ from those commands and modes used in the Cisco IOS software.
For information about CLI commands, see the Cisco Nexus 1000V Command Reference.
Layer 2 Switching
This section lists the Layer 2 switching limitations and restrictions.
For more information about Layer 2 switching, see the Cisco Nexus 1000V Layer 2 Switching Configuration Guide.
No Spanning Tree Protocol
The Cisco Nexus 1000V forwarding logic is designed to prevent network loops so it does not need to use the Spanning Tree Protocol. Packets that are received from the network on any link connecting the host to the network are not forwarded back to the network by the Cisco Nexus 1000V.
Cisco Discovery Protocol
The Cisco Discovery Protocol (CDP) is enabled globally by default.
CDP runs on all Cisco-manufactured equipment over the data link layer and does the following:
Advertises information to all attached Cisco devices.
Discovers and views information about those Cisco devices.
– CDP can discover up to 256 neighbors per port if the port is connected to a hub with 256 connections.
If you disable CDP globally, CDP is also disabled for all interfaces.
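For example, CDP can be verified and toggled globally as follows; disabling it globally also disables it on all interfaces.

  ! Display discovered Cisco neighbors.
  show cdp neighbors
  !
  ! Disable CDP globally (and therefore on all interfaces).
  configure terminal
    no cdp enable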
For more information about CDP, see the Cisco Nexus 1000V System Management Configuration Guide.
DHCP Not Supported for the Management IP
DHCP is not supported for the management IP. The management IP must be configured statically.
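Because DHCP is not supported, assign the management IP address statically, for example as follows. The addresses shown are placeholders.

  configure terminal
    interface mgmt 0
      ip address 192.0.2.10/24
    vrf context management
      ip route 0.0.0.0/0 192.0.2.1
  copy running-config startup-config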
The Link Aggregation Control Protocol (LACP) is an IEEE standard protocol that aggregates Ethernet links into an EtherChannel.
The Cisco Nexus 1000V has the following restrictions for enabling LACP on ports carrying the control and packet VLANs:
Note These restrictions do not apply to other data ports using LACP.
If LACP offload is disabled, at least two ports must be configured as part of the LACP channel.
Note This restriction is not applicable if LACP offload is enabled. You can check the LACP offload status by using the show lacp offload status command.
The upstream switch ports must be configured in spanning-tree port type edge trunk mode. For more information about this restriction, see Upstream Switch Ports.
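The following is a minimal sketch of an LACP uplink port profile that carries the control and packet VLANs. The profile name and VLAN IDs are placeholders; note the restriction above on the number of member ports when LACP offload is disabled, and use the show lacp offload status command to check the offload state.

  feature lacp
  !
  port-profile type ethernet system-uplink
    switchport mode trunk
    switchport trunk allowed vlan 100-101,200-210
    channel-group auto mode active
    system vlan 100,101
    no shutdown
    state enabled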
Upstream Switch Ports
All upstream switch ports must be configured in spanning-tree port type edge trunk mode.
Without spanning-tree PortFast on the upstream switch ports, it takes approximately 30 seconds for these ports to recover on the upstream switch. Because these ports carry the control and packet VLANs, the VSM loses connectivity to the VEM during this time.
The following commands are available to use on Cisco upstream switch ports in interface configuration mode:
spanning-tree portfast trunk
spanning-tree portfast edge trunk
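For example, with placeholder interface names:

  ! Cisco NX-OS upstream switch
  interface ethernet 1/10
    switchport mode trunk
    spanning-tree port type edge trunk
  !
  ! Cisco IOS upstream switch
  interface GigabitEthernet1/0/10
    switchport mode trunk
    spanning-tree portfast trunk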
The Cisco Nexus 1010 (1000V) cannot resolve a domain name or hostname to an IP address.
When the maximum transmission unit (MTU) is configured on an operationally up interface, the interface goes down and comes back up.
Layer 3 VSG
When a VEM communicates with Cisco VSG in Layer 3 mode, an additional header with 94 bytes is added to the original packet. You must set the MTU to a minimum of 1594 bytes to accommodate this extra header for any network interface through which the traffic passes between the Cisco Nexus 1000V and the Cisco VSG. These interfaces can include the uplink port profile, the proxy ARP router, or a virtual switch.
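For example, the MTU can be raised on the uplink port profile that carries the VEM-to-VSG traffic. The profile name is a placeholder, and this is a sketch rather than a complete uplink configuration.

  port-profile type ethernet vsg-uplink
    mtu 1594
    no shutdown
    state enabled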
Copy Running-Config Startup-Config Command
When running the copy running-config startup-config command, do not press the PrtScn key. If you do, the command will abort.
Dynamic Entries Are Not Deleted For A Linux VM
On a Linux VM that has multiple adapters, the DHCP release packet is sent from an incorrect interface (because of OS functionality) and is dropped. As a result, the binding entry is not deleted. This is a Linux issue in which the packets from all interfaces go out of one interface (the default interface). To avoid this issue, put the interfaces in different subnets and make sure that the default gateway for each interface is set.
Source Filter TX VLANs Are Missing After the VSM Restarts
When a SPAN (erspan-source) session is created with a port channel as the source interface and PVLAN promiscuous access is programmed, the filter RX is not configured and the programmed filter TX does not persist across a VSM reload.
To work around this issue, configure all the primary and secondary VLANs as filter VLANs when using a port channel with PVLAN promiscuous access as the source interface.
Default SSH Inactive Session Timeout
The default SSH inactive session timeout is 30 minutes, but the timeout setting is disabled by default, so inactive connections remain open. Use the exec-timeout command to explicitly configure an inactive session timeout, as shown below.
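For example, to explicitly enforce a 30-minute inactive session timeout:

  configure terminal
    line vty
      exec-timeout 30
  copy running-config startup-config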
Queueing Policy Cannot Be Changed In Flexible Upgrade Setup
Queuing is supported starting with Cisco NX-OS Release 4.2(1)SV1(5.1). Any queuing configuration that exists on the VSM from an earlier release stops working, and port profiles that have a queuing configuration cannot be used. If a port is down, move it to a port profile without QoS queuing.
Clear QoS Statistics Fails on the VSM
When a queuing policy map that contains a match-any class map with no match criteria is applied on an interface, a resource pool is not created for that class ID. As a result, statistics collection fails and no data is sent back to the VSM. To work around this issue, add a match criterion to the empty class map, as in the sketch below.
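The following sketch of the workaround assumes that a CoS-based match is appropriate for the class; the class and policy names, CoS value, and bandwidth are placeholders.

  class-map type queuing match-any cq-data
    match cos 2
  !
  policy-map type queuing pq-uplink
    class type queuing cq-data
      bandwidth percent 50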
In high-traffic scenarios, IGMP query packets might be queued behind data packets. As a result, IGMP joins might not be sent for the corresponding VXLAN segments, causing unknown unicast, multicast, and broadcast traffic to fail.
The vethPerHostUsed field displays the same value as the vethUsed field in the XML response of the http://vsm_ip/api/vc/limits API. It should instead display the number of vEths on the host with the maximum number of vEths in use.
A Link Aggregation Control Protocol (LACP) port channel member port goes to the suspended state when the port is newly added to the LACP port channel, or the port is removed and re-added to the LACP port channel.
SPAN sources are deleted from the VEM output while source interfaces are being added.
The Cisco Management Information Base (MIB) list includes Cisco proprietary MIBs and many other Internet Engineering Task Force (IETF) standard MIBs. These standard MIBs are defined in Requests for Comments (RFCs). To find specific MIB information, you must examine the Cisco proprietary MIB structure and related IETF-standard MIBs supported by the Cisco Nexus 1000V Series switch.
The MIB Support List is available at the following FTP site:
Subscribe to What’s New in Cisco Product Documentation, which lists all new and revised Cisco technical documentation, as an RSS feed, and have content delivered directly to your desktop using a reader application. The RSS feeds are a free service.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Internet Protocol (IP) addresses, examples, command display output, and figures used in this document are for illustration only. If an actual IP address appears in this document, it is coincidental.