The Cisco Nexus 1000V provides a distributed, Layer 2 virtual switch that extends across many virtualized hosts. The Cisco Nexus 1000V manages a data center defined by the vCenter Server. Each server in the data center is represented as a line card in the Cisco Nexus 1000V and can be managed as if it were a line card in a physical Cisco switch.
The Cisco Nexus 1000V consists of the following two components:
Virtual Supervisor Module (VSM), which contains the Cisco CLI, configuration, and high-level features.
Virtual Ethernet Module (VEM), which acts as a line card and runs in each virtualized server to handle packet forwarding and other localized functions.
The servers that run the Cisco Nexus 1000V VSM and VEM must be in the VMware Hardware Compatibility List. This release of the Cisco Nexus 1000V supports the vSphere 5.5, 5.1, and 5.0 release trains. For additional compatibility information, see the Cisco Nexus 1000V Compatibility Information.
Note All virtual machine network adapter types that VMware vSphere supports are supported with the Cisco Nexus 1000V. Refer to the VMware documentation when choosing a network adapter. For more information, see VMware Knowledge Base article #1001805.
Software Compatibility with Cisco Nexus 1000V
This release supports hitless upgrades from Release 4.2(1)SV1(4) and later releases. For additional information, see the Cisco Nexus 1000V Software Upgrade Guide.
New and Changed Information
This section provides the following information about Cisco Nexus 1000V Release 4.2(1)SV2(2.1a):
The Cisco Nexus 1000V VSM can validate the certificate presented by vCenter Server to authenticate it. The certificate may be self-signed or signed by a Certificate Authority (CA). The validation is done each time the VSM connects to the vCenter Server. If the certificate authentication fails, a warning is generated but the connection is not impaired. This is an optional feature.
VXLAN Gateway Upgrade
The Cisco Nexus 1000V supports upgrading the VXLAN gateway through the VSM. The following commands are added for the VXLAN upgrade feature.
vsm# install service-module kickstart bootflash:kickstart-image system bootflash:system-image module-num <3-130>
Upgrades a VXLAN gateway service module (standalone) by using the kickstart and system image. The module number range is from 3 to 130.
vsm# install service-module iso bootflash:iso-image module-num <3-130>
Upgrades a VXLAN gateway service module (standalone) by using the iso image. The module number range is from 3 to 130.
vsm# install service-module kickstart bootflash:kickstart-image system bootflash:system-image cluster-id <1-8>
Upgrades a VXLAN gateway high availability (HA) cluster by using the kickstart and system image. The cluster ID range is from 1 to 8.
vsm# install service-module iso bootflash:iso-image cluster-id <1-8>
Upgrades a VXLAN gateway HA cluster by using the iso image. The cluster ID range is from 1 to 8.
For detailed information about upgrading the VXLAN gateway, see the Cisco Nexus 1000V VXLAN Configuration Guide, Release 4.2(1)SV2(2.1a).
Limitations and Restrictions
This section describes the Cisco Nexus 1000V limitations and restrictions.
Table 1 shows the Cisco Nexus 1000V configuration limits:
Table 1 Configuration Limits for Cisco Nexus 1000V
Supported Limits for a Single Cisco Nexus 1000V Deployment Spanning up to 2 Physical Data Centers
Virtual Ethernet Module (VEM)
Virtual Supervisor Module (VSM)
The VSMs can be placed in different physical data centers.
The previous restriction that required the active and standby VSMs to be in a single physical data center no longer applies.
Active VLANs and VXLANs across all VEMs
2048 VLANs and 2048 VXLANs (with a combined maximum of 4096)
MACs per VEM
MACs per VLAN per VEM
vEthernet interfaces per port profile
1024 (without static auto expand port binding)
Same as the DVS maximum (with static auto expand port binding)
Distributed Virtual Switches (DVS) per vCenter with VMware vCloud Director (vCD)
Distributed Virtual Switches (DVS) per vCenter without VMware vCloud Director (vCD)
1. Only one connection to the vCenter Server is permitted at a time.
2. After an upgrade from an earlier release of the Cisco Nexus 1000V software to the current release, the maximum number of vEth ports is displayed as 216. To get the currently supported vEth limit, remove the host from the DVS and add it again.
3. This number can be exceeded if the VEM has available memory.
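The active VLAN and VXLAN limits from Table 1 can be sanity-checked before a deployment is planned. The following sketch uses the limit values stated in the table (2048 VLANs, 2048 VXLANs, combined maximum of 4096); the helper function itself is illustrative, not part of any Cisco tool:

```python
# Sketch: validate planned VLAN/VXLAN counts against the documented
# Cisco Nexus 1000V limits (2048 VLANs, 2048 VXLANs, 4096 combined).
# The limit values are from Table 1; the helper is illustrative only.

MAX_VLANS = 2048
MAX_VXLANS = 2048
MAX_COMBINED = 4096


def within_segment_limits(vlans: int, vxlans: int) -> bool:
    """Return True if the planned VLAN/VXLAN counts fit the limits."""
    return (vlans <= MAX_VLANS
            and vxlans <= MAX_VXLANS
            and vlans + vxlans <= MAX_COMBINED)


print(within_segment_limits(1500, 1500))   # True: within all limits
print(within_segment_limits(2048, 2049))   # False: exceeds the VXLAN limit
```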
Single VMware Data Center Support
The Cisco Nexus 1000V can be connected to a single VMware vCenter Server datacenter object. Note that this virtual datacenter can span across multiple physical data centers.
VMotion of VSM
VMotion of the VSM has the following limitations and restrictions:
VMotion of a VSM is supported for both the active and standby VSM VMs. For high availability, we recommend that the active VSM and standby VSM reside on separate hosts.
If you enable Distributed Resource Scheduler (DRS), you must use the VMware anti-affinity rules to ensure that the two virtual machines are never on the same host, and that a host failure cannot result in the loss of both the active and standby VSM.
VMware VMotion does not complete when using an open virtual appliance (OVA) VSM deployment if the CD image is still mounted. To complete the VMotion, either click Edit Settings on the VM to disconnect the mounted CD image, or power off the VM. No functional impact results from this limitation.
If you add a host that is in a DRS cluster and is using a vSwitch to a VSM, you must also move the remaining hosts in the DRS cluster to the VSM. Otherwise, the DRS logic does not work, the VMs that are deployed on the VEM could be moved to a host in the cluster that does not have a VEM, and those VMs would lose network connectivity.
For more information about VMotion of VSM, see the Cisco Nexus 1000V Software Installation Guide.
ACLs have the following limitations and restrictions:
IPV6 ACL rules are not supported.
VLAN-based ACLs (VACLs) are not supported.
ACLs are not supported on port channels.
IP ACL rules do not support the following:
– fragments option
– addressgroup option
– portgroup option
– interface ranges
Control VLAN traffic between the VSM and VEM does not go through ACL processing.
The NetFlow configuration has the following support, limitations, and restrictions:
Layer 2 match fields are not supported.
NetFlow Sampler is not supported.
NetFlow Exporter format V9 is supported.
NetFlow Exporter format V5 is not supported.
The multicast traffic type is not supported. Cache entries are created for multicast packets, but the packet/byte count does not reflect replicated packets.
NetFlow is not supported on port channels.
The NetFlow cache table has the following limitation:
Immediate and permanent cache types are not supported.
Note The cache size that is configured using the CLI defines the number of entries, not the size in bytes. The configured entries are allocated for each processor in the ESX host and the total memory allocated depends on the number of processors.
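Because the configured cache size is allocated per processor, the total memory consumed grows with the number of processors in the ESX host. A rough estimate can be sketched as follows; note that the 64-byte per-entry size used here is an illustrative assumption, not a documented value:

```python
# Sketch: estimate total NetFlow cache memory on an ESX host.
# The configured cache size is the number of entries *per processor*.
# The 64-byte entry size below is an illustrative assumption only,
# not a value documented for the Cisco Nexus 1000V.

ASSUMED_ENTRY_BYTES = 64  # hypothetical per-entry size


def total_cache_bytes(configured_entries: int, num_processors: int,
                      entry_bytes: int = ASSUMED_ENTRY_BYTES) -> int:
    """Total bytes = entries per processor x processors x entry size."""
    return configured_entries * num_processors * entry_bytes


# Example: a 4096-entry cache on a 16-processor host
print(total_cache_bytes(4096, 16))  # 4194304 bytes (4 MiB) under the assumption
```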
Port security has the following support, limitations, and restrictions:
Port security is enabled globally by default. The feature/no feature port-security command is not supported.
In response to a security violation, you can shut down the port.
The port security violation actions that are supported on a secure port are Shutdown and Protect . The Restrict violation action is not supported.
Port security is not supported on the PVLAN promiscuous ports.
Port profiles have the following restrictions or limitations:
There is a limit of 255 characters in a port-profile command attribute.
We recommend that you save the configuration across reboots, which will shorten the VSM bringup time.
We recommend that if you are altering or removing a port channel, you should migrate the interfaces that inherit the port channel port profile to a port profile with the desired configuration, rather than editing the original port channel port profile directly.
If you attempt to remove a port profile that is in use, that is, one that has already been auto-assigned to an interface, the Cisco Nexus 1000V generates an error message and does not allow the removal.
When you remove a port profile that is mapped to a VMware port group, the associated port group and settings within the vCenter Server are also removed.
Policy names are not checked against the policy database when ACL/NetFlow policies are applied through the port profile. It is possible to apply a nonexistent policy.
Only SSH version 2 (SSHv2) is supported.
For more information, see the Cisco Nexus 1000V Security Configuration Guide.
Cisco NX-OS Commands Might Differ from Cisco IOS
Be aware that the Cisco NX-OS CLI commands and modes might differ from those commands and modes used in the Cisco IOS software.
For information about CLI commands, see the Cisco Nexus 1000V Command Reference.
Layer 2 Switching
This section lists the Layer 2 switching limitations and restrictions and includes the following topics:
For more information about Layer 2 switching, see the Cisco Nexus 1000V Layer 2 Switching Configuration Guide.
No Spanning Tree Protocol
The Cisco Nexus 1000V forwarding logic is designed to prevent network loops so it does not need to use the Spanning Tree Protocol. Packets that are received from the network on any link connecting the host to the network are not forwarded back to the network by the Cisco Nexus 1000V.
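The loop-prevention rule described above can be sketched as a simple forwarding predicate: a frame that arrives on a network-facing (uplink) port is never forwarded back out another uplink. The port model below is an illustration of the principle, not the actual VEM forwarding implementation:

```python
# Sketch of the loop-prevention rule: frames received from the
# network (an uplink port) are forwarded only to local vEthernet
# ports, never back out to the network, so no Spanning Tree
# Protocol is needed. Illustrative model, not real VEM code.

def may_forward(ingress_is_uplink: bool, egress_is_uplink: bool) -> bool:
    """Return True if a frame may be forwarded to the egress port."""
    if ingress_is_uplink and egress_is_uplink:
        return False  # never reflect network traffic back to the network
    return True


print(may_forward(True, True))    # False: uplink -> uplink is blocked
print(may_forward(True, False))   # True: uplink -> local VM port
print(may_forward(False, True))   # True: local VM -> network
```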
Cisco Discovery Protocol
The Cisco Discovery Protocol (CDP) is enabled globally by default.
CDP runs on all Cisco-manufactured equipment over the data link layer and does the following:
Advertises information to all attached Cisco devices.
Discovers and views information about those Cisco devices.
– CDP can discover up to 256 neighbors per port if the port is connected to a hub with 256 connections.
If you disable CDP globally, CDP is also disabled for all interfaces.
For more information about CDP, see the Cisco Nexus 1000V System Management Configuration Guide.
DHCP Not Supported for the Management IP
DHCP is not supported for the management IP. The management IP must be configured statically.
LACP
The Link Aggregation Control Protocol (LACP) is an IEEE standard protocol that aggregates Ethernet links into an EtherChannel.
The Cisco Nexus 1000V has the following restrictions for enabling LACP on ports carrying the control and packet VLANs:
Note These restrictions do not apply to other data ports using LACP.
If LACP offload is disabled, at least two ports must be configured as part of LACP channel.
Note This restriction is not applicable if LACP offload is enabled. You can check the LACP offload status by using the show lacp offload status command.
The upstream switch ports must be configured in spanning-tree port type edge trunk mode. For more information about this restriction, see Upstream Switch Ports.
Upstream Switch Ports
All upstream switch ports must be configured in spanning-tree port type edge trunk mode.
Without spanning-tree PortFast on upstream switch ports, it takes approximately 30 seconds to recover these ports on the upstream switch. Because these ports are carrying control and packet VLANs, the VSM loses connectivity to the VEM.
The following commands are available to use on Cisco upstream switch ports in interface configuration mode:
spanning-tree portfast trunk
spanning-tree portfast edge trunk
The Cisco Nexus 1010 (1000V) cannot resolve a domain name or hostname to an IP address.
When the maximum transmission unit (MTU) is configured on an operationally up interface, the interface goes down and comes back up.
Layer 3 VSG
When a VEM communicates with Cisco VSG in Layer 3 mode, an additional header with 94 bytes is added to the original packet. You must set the MTU to a minimum of 1594 bytes to accommodate this extra header for any network interface through which the traffic passes between the Cisco Nexus 1000V and the Cisco VSG. These interfaces can include the uplink port profile, the proxy ARP router, or a virtual switch.
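The 1594-byte figure follows directly from the 94-byte encapsulation header added on top of a standard 1500-byte payload MTU. A minimal check (the function name is illustrative):

```python
# The Layer 3 path between the VEM and the Cisco VSG adds a 94-byte
# header, so every interface carrying this traffic needs an MTU of
# at least the payload MTU plus 94 bytes. With the standard
# 1500-byte payload MTU, that is 1594 bytes, as the text states.

VSG_L3_HEADER_BYTES = 94


def required_mtu(payload_mtu: int = 1500) -> int:
    """Minimum interface MTU for VEM-to-VSG Layer 3 traffic."""
    return payload_mtu + VSG_L3_HEADER_BYTES


print(required_mtu())      # 1594
print(required_mtu(9000))  # 9094 for a jumbo-frame payload MTU
```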
Copy Running-Config Startup-Config Command
When running the copy running-config startup-config command, do not press the PrtScn key. If you do, the command will abort.
Dynamic Entries Are Not Deleted For A Linux VM
On a Linux VM that has multiple adapters, the DHCP release packet is sent from an incorrect interface (because of OS behavior) and is dropped. As a result, the binding entry is not deleted. This is a Linux issue in which the packets from all interfaces go out of one interface (the default interface). To avoid this issue, put the interfaces in different subnets and make sure that the default gateway for each interface is set.
Source Filter TX VLANs Are Missing After the VSM Restarts
When a SPAN (erspan-source) session is created with the source interface configured as a port channel that is programmed for PVLAN promiscuous access, the RX filter is not configured and the programmed TX filter is not persistent across a VSM reload.
To work around this issue, configure all the primary and secondary VLANs as filter VLANs while using the port channel with PVLAN Promiscuous access as the source interface.
Default SSH Inactive Session Timeout
The default SSH inactive session timeout is 30 minutes, but the timeout setting is disabled by default, so the connection remains active. The exec-timeout command can be used to explicitly configure the inactive session timeout limit.
Queueing Policy Cannot Be Changed In Flexible Upgrade Setup
Queuing is supported starting from Cisco NX-OS Release 4.2(1)SV1(5.1). Any queuing configuration that exists on the VSM from an earlier release stops working, and port profiles that have a queuing configuration cannot be used. If a port is down, move it to a port profile without QoS queuing.
Clear QoS Statistics Fails on the VSM
When a policy map of type queuing that contains a class map of type match-any without any match criteria is applied on an interface, a resource pool is not created for that class ID. As a result, statistics collection fails and no data is sent back to the VSM. To work around this issue, add a match criterion to the empty class map.
In high-traffic scenarios, IGMP query packets might be queued behind data packets. As a result, IGMP joins might not be sent for the corresponding VXLAN segments, causing unknown unicast, multicast, and broadcast traffic to fail.
The vethPerHostUsed field displays the same value as the vethUsed field in the XML response of the http://vsm_ip/api/vc/limits API. It should instead display the number of vEths on the host with the maximum number of vEths in use.
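The duplicated value is visible when the XML response is parsed per field. The snippet below parses a hypothetical response body; the XML element layout is a guessed illustration, and only the field names vethUsed and vethPerHostUsed come from this release note:

```python
# Sketch: extract vethUsed and vethPerHostUsed from an XML response
# such as the one returned by http://<vsm_ip>/api/vc/limits.
# The XML layout below is a guessed illustration; only the field
# names vethUsed and vethPerHostUsed come from the release note.
import xml.etree.ElementTree as ET

SAMPLE = """
<limits>
  <vethUsed>300</vethUsed>
  <vethPerHostUsed>300</vethPerHostUsed>
</limits>
"""

root = ET.fromstring(SAMPLE)
veth_used = int(root.findtext("vethUsed"))
veth_per_host_used = int(root.findtext("vethPerHostUsed"))

# The defect: both fields report the same value, although
# vethPerHostUsed should be the maximum vEths used on any one host.
print(veth_used, veth_per_host_used)
```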
The gateway module fluctuates when ICMP traffic with about 1000 varied source MAC addresses (SMACs) flows through it.
The Cisco Management Information Base (MIB) list includes Cisco proprietary MIBs and many other Internet Engineering Task Force (IETF) standard MIBs. These standard MIBs are defined in Requests for Comments (RFCs). To find specific MIB information, you must examine the Cisco proprietary MIB structure and related IETF-standard MIBs supported by the Cisco Nexus 1000V Series switch.
The MIB Support List is available at the following FTP site:
Subscribe to What’s New in Cisco Product Documentation, which lists all new and revised Cisco technical documentation, as an RSS feed to have content delivered directly to your desktop using a reader application. The RSS feeds are a free service.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks . Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Internet Protocol (IP) addresses used in this document are for illustration only. Examples, command display output, and figures are for illustration only. If an actual IP address appears in this document, it is coincidental.