Cisco Nexus 1000V Switch for VMware vSphere

Cisco Nexus 1000V Release Notes, Release 4.2(1)SV2(2.2)

Table of Contents

Cisco Nexus 1000V Release Notes, Release 4.2(1)SV2(2.2)



Software Compatibility with VMware

Software Compatibility with Cisco Nexus 1000V

New and Changed Information

New Software Features

VSI Discovery and Configuration Protocol (VDP)

Dynamic Fabric Automation (DFA)

Cisco Nexus 1000V Multi-Hypervisor Licensing

Limitations and Restrictions

Configuration Limits

Single VMware Data Center Support



VMotion of VSM

Access Lists


Port Security

Port Profiles

SSH Support

Cisco NX-OS Commands Might Differ from Cisco IOS

Layer 2 Switching: No Spanning Tree Protocol

Cisco Discovery Protocol

DHCP Not Supported for the Management IP


Upstream Switch Ports

DNS Resolution


Layer 3 VSG

Copy Running-Config Startup-Config Command

Dynamic Entries Are Not Deleted for Linux VM

Source Filter TX VLANs Are Missing After the VSM Restarts

Default SSH Inactive Session Timeout

Queueing Policy Cannot Be Changed in a Flexible Upgrade Setup

Clear QoS Statistics Fails on the VSM

Span Source/Destination Removed from the Session Configuration After an Atomic Port-Profile Change


Open Caveats


VXLAN Gateway

Platform, Infrastructure, Ports, Port Channel, and Port Profiles

Quality of Service


Resolved Caveats

MIB Support

Obtaining Documentation and Submitting a Service Request

Cisco Nexus 1000V Release Notes, Release 4.2(1)SV2(2.2)

First Published: January 31, 2014

Last Updated: May 12, 2015

This document describes the features, limitations, and caveats for the Cisco Nexus 1000V Release 4.2(1)SV2(2.2) software. The following is the change history for this document.



May 12, 2015

Added the caveat CSCuu25712.

September 23, 2014

Moved open caveats from VMware to Platform, Infrastructure, Ports, Port Channel, and Port Profiles, Quality of Service, and Features.

June 2, 2014

Added the section Span Source/Destination Removed from the Session Configuration After an Atomic Port-Profile Change.

February 27, 2014

Added the caveat CSCum99528.

February 24, 2014

Added the Cisco Nexus 1000V Multi-Hypervisor Licensing section.

January 31, 2014

Created release notes for Release 4.2(1)SV2(2.2).


The Cisco Nexus 1000V provides a distributed, Layer 2 virtual switch that extends across many virtualized hosts. The Cisco Nexus 1000V manages a data center defined by the vCenter Server. Each server in the data center is represented as a line card in the Cisco Nexus 1000V and can be managed as if it were a line card in a physical Cisco switch.

The Cisco Nexus 1000V consists of the following two components:

  • Virtual Supervisor Module (VSM), which contains the Cisco CLI, configuration, and high-level features.
  • Virtual Ethernet Module (VEM), which acts as a line card and runs in each virtualized server to handle packet forwarding and other localized functions.

Software Compatibility with VMware

The servers that run the Cisco Nexus 1000V VSM and VEM must be in the VMware Hardware Compatibility list. This release of the Cisco Nexus 1000V supports vSphere 5.5, 5.1, and 5.0 release trains. For additional compatibility information, see the Cisco Nexus 1000V Compatibility Information.

Note All virtual machine network adapter types that VMware vSphere supports are supported with the Cisco Nexus 1000V. Refer to the VMware documentation when choosing a network adapter. For more information, see the VMware Knowledge Base article #1001805.

Software Compatibility with Cisco Nexus 1000V

This release supports hitless upgrades from Release 4.2(1)SV1(4) and later releases. For additional information, see the Cisco Nexus 1000V Software Upgrade Guide.

New and Changed Information

This section describes the new software features in Cisco Nexus 1000V Release 4.2(1)SV2(2.2).

New Software Features

The following software features were added in Cisco Nexus 1000V Release 4.2(1)SV2(2.2):

VSI Discovery and Configuration Protocol (VDP)

The Virtual Station Interface (VSI) Discovery and Configuration Protocol (VDP) on the Cisco Nexus 1000V is part of the IEEE 802.1Qbg (Edge Virtual Bridging [EVB]) standard; it detects and signals the presence of end hosts and exchanges capability information with an adjacent VDP-capable bridge. VDP is a reliable first-hop protocol that communicates the presence of end-host virtual machines (VMs) to adjacent leaf nodes in the Cisco Dynamic Fabric Automation (DFA) architecture. In addition to detecting the MAC and IP addresses of the end-host VMs when a host comes up or during VM mobility events, VDP also triggers autoconfiguration of the leaf nodes in the DFA architecture to make them ready for further VM traffic.

For detailed information about VDP, see the Cisco Nexus 1000V VDP Configuration Guide.

Dynamic Fabric Automation (DFA)

The Cisco Nexus 1000V supports Cisco Dynamic Fabric Automation. This feature extends the anycast gateway MAC and forwarding mode functionality to end hosts running the Cisco Nexus 1000V.

For detailed information about DFA, see the Cisco Nexus 1000V DFA Configuration Guide and the Cisco DFA Solutions Guide.

Cisco Nexus 1000V Multi-Hypervisor Licensing

The Cisco Nexus 1000V uses a multi-hypervisor licensing approach, which allows you to migrate a license from one Cisco Nexus 1000V switch platform type to another. For example, you can migrate the license from a Cisco Nexus 1000V for VMware switch to a Cisco Nexus 1000V for Microsoft Hyper-V. For more information about the multi-hypervisor licensing, see the Cisco Nexus 1000V Platform Multi-Hypervisor Licensing Guide.

Limitations and Restrictions

This section describes the limitations and restrictions of the Cisco Nexus 1000V.

Configuration Limits

Table 1 shows the Cisco Nexus 1000V configuration limits:


Table 1 Configuration Limits for Cisco Nexus 1000V

Supported Limits for a Single Cisco Nexus 1000V Deployment Spanning up to 2 Physical Data Centers

Maximum Modules


Virtual Ethernet Module (VEM)


Virtual Supervisor Module (VSM)

The VSMs can be placed in different physical data centers.

The previous restriction that required the active and standby VSMs to be in a single physical data center no longer applies.



Active VLANs and VXLANs across all VEMs

2048 VLANs and 2048 VXLANs (with a combined maximum of 4096)

MAC addresses per VEM


MAC addresses per VLAN per VEM


vEthernet interfaces per port profile

1024 (without static auto expand port binding)

Same as DVS maximum (with static auto expand port binding)



Distributed Virtual Switches (DVS) per vCenter with VMware vCloud Director (vCD)


Distributed Virtual Switches (DVS) per vCenter without VMware vCloud Director (vCD)


vCenter Server connections

1 per VSM HA pair¹

Maximum latency between VSMs and VEMs


Per Host

vEthernet interfaces



Port profiles


System port profiles



Port channel



Physical trunks


Physical NICs


vEthernet trunks






ACEs per ACL



ACL instances



NetFlow policies



NetFlow instances



Switched Port Analyzer (SPAN)/Encapsulated Remote Switched Port Analyzer (ERSPAN) sessions



QoS policy maps



QoS class maps



QoS instances



Port security



Multicast groups



1. Only one connection to the vCenter Server is permitted at a time.

2. When you upgrade from an earlier version of the Cisco Nexus 1000V software to the current version of the Cisco Nexus 1000V software, the maximum vEth ports are displayed as 216. To get the current supported vEth limit, remove the host from the DVS and add the host again.

3. This number can be exceeded if the VEM has available memory.

Single VMware Data Center Support

The Cisco Nexus 1000V can be connected to a single VMware vCenter Server data center object. The virtual data center can span multiple physical data centers.

Each VMware vCenter can support multiple Cisco Nexus 1000V VSMs per vCenter data center.


VDP

Implementing VDP on the Cisco Nexus 1000V has the following limitations and restrictions:

  • The Cisco Nexus 1000V supports the Cisco DFA-capable VDP based on IEEE Standard 802.1Qbg, Draft 2.2, and does not support the Link Layer Discovery Protocol (LLDP). Therefore, the EVB type-length-value (TLV) fields are not originated or processed by the Cisco Nexus 1000V.
  • The VDP implementation in the current release supports a matching LLDP-less implementation on the bridge side, which is delivered as part of the Cisco DFA solution. For more information about Cisco DFA, see the Cisco DFA Solutions Guide.
  • Timer-related parameters are individually configurable on the station and on the leaf.
  • Connectivity to multiple unclustered bridges is not supported in this release.
  • IPv6 addresses in filter format are not supported in this release.
  • VDP is supported only for segmentation-based port profiles. VDP for VLAN-based port profiles is unavailable in this release.
  • The dynamic VLANs allocated by VDP are local to the VEM and should not be configured on the Cisco Nexus 1000V VSM.
  • VDP is supported on VMware ESX releases 5.0, 5.1, and 5.5 in the current release.


DFA

The DFA feature has the following limitations and restrictions:

  • Fabric forwarding mode is not supported under the VLAN configuration.

VMotion of VSM

VMotion of the VSM has the following limitations and restrictions:

  • VMotion of a VSM is supported for both the active and standby VSM VMs. For high availability, we recommend that the active VSM and standby VSM reside on separate hosts.
  • If you enable Distributed Resource Scheduler (DRS), you must use the VMware anti-affinity rules to ensure that the two virtual machines are never on the same host, and that a host failure cannot result in the loss of both the active and standby VSM.
  • VMware VMotion does not complete when using an open virtual appliance (OVA) VSM deployment if the CD image is still mounted. To complete the VMotion, either click Edit Settings on the VM to disconnect the mounted CD image, or power off the VM. No functional impact results from this limitation.
  • If you add a host that is using a vSwitch in a DRS cluster to a VSM, you must also move the remaining hosts in the DRS cluster to the VSM. Otherwise, the DRS logic does not work, the VMs that are deployed on the VEM could be moved to a host in the cluster that does not have a VEM, and the VMs lose network connectivity.

For more information about VMotion of VSM, see the Cisco Nexus 1000V Software Installation Guide.

Access Lists

ACLs have the following limitations and restrictions:


  • IPv6 ACL rules are not supported.
  • VLAN-based ACLs (VACLs) are not supported.
  • ACLs are not supported on port channels.
  • IP ACL rules do not support the following:
    – fragments option
    – addressgroup option
    – portgroup option
    – interface ranges
  • Control VLAN traffic between the VSM and VEM does not go through ACL processing.


NetFlow

The NetFlow configuration has the following support, limitations, and restrictions:

  • Layer 2 match fields are not supported.
  • The NetFlow Sampler is not supported.
  • NetFlow Exporter format V9 is supported.
  • NetFlow Exporter format V5 is not supported.
  • The multicast traffic type is not supported. Cache entries are created for multicast packets, but the packet/byte count does not reflect replicated packets.
  • NetFlow is not supported on port channels.

The NetFlow cache table has the following limitation:

  • Immediate and permanent cache types are not supported.

Note The cache size that is configured using the CLI defines the number of entries, not the size in bytes. The configured entries are allocated for each processor in the ESX host and the total memory allocated depends on the number of processors.

Port Security

Port security has the following support, limitations, and restrictions:

  • Port security is enabled globally by default.
    The feature/no feature port-security command is not supported.
  • In response to a security violation, you can shut down the port.
  • The port security violation actions that are supported on a secure port are Shutdown and Protect. The Restrict violation action is not supported.
  • Port security is not supported on the PVLAN promiscuous ports.
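Because port security is enabled globally by default, it is activated per interface with the standard Cisco NX-OS port-security commands. The following is a minimal sketch, assuming a vEthernet interface and the Protect violation action (the interface number is illustrative):

```
n1000v# configure terminal
n1000v(config)# interface vethernet 3
n1000v(config-if)# switchport port-security
n1000v(config-if)# switchport port-security violation protect
```

For the complete command set and defaults, see the Cisco Nexus 1000V Security Configuration Guide.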

Port Profiles

Port profiles have the following restrictions or limitations:

  • There is a limit of 255 characters in a port-profile command attribute.
  • We recommend that you save the configuration across reboots, which shortens the VSM bringup time.
  • We recommend that if you are altering or removing a port channel, you should migrate the interfaces that inherit the port channel port profile to a port profile with the desired configuration, rather than editing the original port channel port profile directly.
  • If you attempt to remove a port profile that is in use, that is, one that has already been auto-assigned to an interface, the Cisco Nexus 1000V generates an error message and does not allow the removal.
  • When you remove a port profile that is mapped to a VMware port group, the associated port group and settings within the vCenter Server are also removed.
  • Policy names are not checked against the policy database when ACL/NetFlow policies are applied through the port profile. It is possible to apply a nonexistent policy.

SSH Support

Only SSH version 2 (SSHv2) is supported.

For more information, see the Cisco Nexus 1000V Security Configuration Guide.

Cisco NX-OS Commands Might Differ from Cisco IOS

Be aware that the Cisco NX-OS CLI commands and modes might differ from those commands and modes used in the Cisco IOS software.

Layer 2 Switching: No Spanning Tree Protocol

The Cisco Nexus 1000V forwarding logic is designed to prevent network loops so it does not need to use the Spanning Tree Protocol. Packets that are received from the network on any link connecting the host to the network are not forwarded back to the network by the Cisco Nexus 1000V.

Cisco Discovery Protocol

The Cisco Discovery Protocol (CDP) is enabled globally by default.

CDP runs on all Cisco-manufactured equipment over the data link layer and does the following:

  • Advertises information to all attached Cisco devices.
  • Discovers and views information about those Cisco devices.

CDP can discover up to 256 neighbors per port if the port is connected to a hub with 256 connections.

If you disable CDP globally, CDP is also disabled for all interfaces.

For more information about CDP, see the Cisco Nexus 1000V System Management Configuration Guide.

DHCP Not Supported for the Management IP

DHCP is not supported for the management IP. The management IP must be configured statically.
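Because DHCP is not supported, the management address is assigned statically on the mgmt0 interface. A minimal sketch (the address shown is illustrative):

```
n1000v# configure terminal
n1000v(config)# interface mgmt0
n1000v(config-if)# ip address 192.0.2.10/24
```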


LACP

The Link Aggregation Control Protocol (LACP) is an IEEE standard protocol that aggregates Ethernet links into an EtherChannel.

The Cisco Nexus 1000V has the following restrictions for enabling LACP on ports carrying the control and packet VLANs:

Note These restrictions do not apply to other data ports using LACP.

  • If LACP offload is disabled, at least two ports must be configured as part of the LACP channel.

Note This restriction is not applicable if LACP offload is enabled. You can check the LACP offload status by using the show lacp offload status command.

  • The upstream switch ports must be configured in spanning-tree port type edge trunk mode.

Upstream Switch Ports

All upstream switch ports must be configured in spanning-tree port type edge trunk mode.

Without spanning-tree PortFast on upstream switch ports, it takes approximately 30 seconds to recover these ports on the upstream switch. Because these ports are carrying control and packet VLANs, the VSM loses connectivity to the VEM.

The following commands are available to use on Cisco upstream switch ports in interface configuration mode:

  • spanning-tree portfast
  • spanning-tree portfast trunk
  • spanning-tree portfast edge trunk
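The commands above run on the upstream Cisco switch, not on the Cisco Nexus 1000V. A minimal sketch for a trunk port carrying the control and packet VLANs (the interface name is illustrative, and the exact PortFast keyword depends on the upstream switch software release):

```
switch# configure terminal
switch(config)# interface GigabitEthernet1/0/1
switch(config-if)# switchport mode trunk
switch(config-if)# spanning-tree portfast trunk
```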

DNS Resolution

The Cisco Nexus 1010 (1000V) cannot resolve a domain name or hostname to an IP address.


MTU Configuration

When the maximum transmission unit (MTU) is configured on an operationally up interface, the interface goes down and comes back up.

Layer 3 VSG

When a VEM communicates with the Cisco Virtual Security Gateway (VSG) in Layer 3 mode, an additional header with 94 bytes is added to the original packet. You must set the MTU to a minimum of 1594 bytes to accommodate this extra header for any network interface through which the traffic passes between the Cisco Nexus 1000V and the Cisco VSG. These interfaces can include the uplink port profile, the proxy ARP router, or a virtual switch.
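For example, the MTU can be raised on the uplink interface that carries the VEM-to-VSG traffic. A minimal sketch (the interface is illustrative):

```
n1000v# configure terminal
n1000v(config)# interface ethernet 3/2
n1000v(config-if)# mtu 1594
```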

Copy Running-Config Startup-Config Command

When you are using the copy running-config startup-config command, do not press the PrtScn key. If you do, the command aborts.

Dynamic Entries Are Not Deleted for Linux VM

On a Linux VM that has multiple adapters, a DHCP release packet is sent from an incorrect interface (because of OS functionality) and is dropped. As a result, the binding entry is not deleted. This is a Linux issue in which the packets from all interfaces go out of one interface (the default interface). To avoid this issue, put the interfaces in different subnets and make sure that the default gateway for each interface is set.

Source Filter TX VLANs Are Missing After the VSM Restarts

When a SPAN (ERSPAN-source) session is created, the source interface is configured as a port channel, and PVLAN promiscuous access is programmed, the filter RX is not configured and the programmed filter TX does not persist across a VSM reload.

To work around this issue, configure all the primary and secondary VLANs as filter VLANs while using the port channel with PVLAN Promiscuous access as the source interface.

Default SSH Inactive Session Timeout

The default SSH inactive session timeout is 30 minutes, but the timeout setting is disabled by default, so the connection remains active. The exec-timeout command can be used to explicitly configure the inactive session timeout limit.
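For example, an inactivity timeout for SSH sessions can be set on the vty lines. A minimal sketch setting a 30-minute timeout:

```
n1000v# configure terminal
n1000v(config)# line vty
n1000v(config-line)# exec-timeout 30
```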

Queueing Policy Cannot Be Changed in a Flexible Upgrade Setup

Queuing is supported starting from Cisco NX-OS Release 4.2(1)SV1(5.1). Any queuing configuration that exists on the VSM in an earlier release stops working, and port profiles that have a queuing configuration cannot be used. If a port is down, it should be moved to a profile without QoS queuing.

Clear QoS Statistics Fails on the VSM

When a policy map of type queuing that has a class map of type match-any without any match criteria is applied on an interface, a resource pool is not created for that class ID. As a result, statistics collection fails and no data is sent back to the VSM. To work around this issue, add a match criterion to the empty class map.

Span Source/Destination Removed from the Session Configuration After an Atomic Port-Profile Change

If a virtual Ethernet port is a SPAN/ERSPAN source or destination and its port profile changes atomically, the virtual Ethernet port is removed from the SPAN/ERSPAN configuration. If it was the only operational source/destination, the session might go down.


Caveats

This section includes the following topics:

Open Caveats

The following are descriptions of the caveats in Cisco Nexus 1000V Release 4.2(1)SV2(2.2). The IDs are linked to the Cisco Bug Search tool.



VDP

Table 2 VDP

Open Caveat Headline


The VDP sends inconsistent IP address mappings for a VM’s NIC in some conditions.


Cisco Nexus 1000V VEM accepts system VLAN as a VDP allocated Dynamic VLAN.

VXLAN Gateway


Table 3 VXLAN Gateway

Open Caveat Headline


The VXLAN gateway restarts or freezes when you are bridging traffic with unique source MAC addresses.


Related flows are retained after removing the VLAN:VXLAN mapping.


The VXLAN gateway module flaps when the VTEP IP address is changed in the VSM.


LACP packets were not received on the VXLAN gateway VSB for traffic higher than 260 Kbps.


The throughput decreases with a unique source MAC address for each incoming flow.


Syslog messages from the VXLAN gateway do not go to the external syslog server.


Retain relevant debug components in the VXLAN gateway.


The VXLAN gateway does not inherit a modified port profile on reattach.


Incorrect values are in the InOctets/OutOctets columns of the show interface counters module command.


The InOctets counter for the VXLAN gateway vEth (vxlannic0) interface is not working.


Traffic does not fail over when the VSM shuts down the port.


The throughput decreases and large packets get dropped with UDP traffic.


The port channel is down after you enter the shut or no shut command on the uplink port profile of a VXLAN gateway.


The show process CPU command does not show the same result for vssnet.


The configuration fails on a port profile inherited by VTEPs on the VEM and VXLAN gateway.


There are no OVA files for the gateway to do the deployment as a VSB on a VSA.


The attach module gateway command hangs if the VSM is on Layer 3 through a control interface.


The show cdp neighbors command on the VSM or VXLAN gateway does not show details of the upstream VXGW module.


The show process command does not display the reason for the crash.


In high-traffic scenarios, there is a possibility that IGMP-Query packets may be queued behind data packets. This issue can cause IGMP-Join(s) not to be sent for the corresponding VXLAN segments and cause traffic to fail for unknown-unicast/multicast/broadcast.


The VSM and gateway are out-of-sync after you reload the VSM after changing the port profile.


System log messages do not go from the gateway to the external system log server.


Unable to deploy VXLAN gateway VSB using the enable properties command on the Cisco N1010.


The output for certain fields is missing from the vemcmd show card command in the VXLAN gateway.


When you deploy a VXLAN gateway, the MAC address entered does not get validated for proper syntax.


The VLAN pool-based network fails if the VSM reloads without a copy running-config startup-config.


VXGW-VTEP transport VLAN in VXLAN-VLAN mapping disrupts traffic.


NetFlow crashes while you are disabling the feature.


VXLAN: TCO/TSO support for inner IPv6 traffic.

Platform, Infrastructure, Ports, Port Channel, and Port Profiles


Table 4 Platform, Infrastructure, Ports, Port Channel, and Port Profiles

Open Caveat Headline


Installing an earlier REST-API plug-in version results in HTML errors.


Port channels do not come up in a non-LACP offload setup.


Not able to migrate VC/VSM and normal VM when adding a host to DVS.


SNMP V3 traps are not getting generated.


The LACP offload configuration is not persisting in stateless mode.


CDP does not work for certain NICs unless VLAN 1 is allowed.


Continuous SNMP polling causes high CPU usage.


The load-interval counter command configuration is not working.


A port profile through VCD fails when it is configured immediately after a switchover.


Editing a port profile fails with the error message “ERROR: unknown error.”


Reloading the VSM takes 12 minutes for modules to come online and vEthernet interfaces to come up.


The show tech-support dvs command does not have output related to DHCP snooping.


A native VLAN configured on the interface port channel is not programmed on the VEM.


After upgrading the VEM to Cisco NX-OS Release 4.2(1)SV1(5.1), two Cisco VIBs are installed.


PPM does not perform configuration checks when you configure a PVLAN in an offline port-profile mode.


The “SYSMGR_EXITCODE_FAILURE_NOCALLHOME” error message is received while upgrading with ISO images from Release 4.2(1)SV1(4) or 4.2(1)SV1(4a) to Release 4.2(1)SV1(5.2).


If you add a PVLAN promiscuous trunk port channel or Ethernet interface as the SPAN/ERSPAN source, some of the VLANs allowed on the port might not be spanned.


An error occurs while trying to override the PVLAN mapping in the child port profile.


Modules are not reattached after a VMKnic MAC address change in Layer 3 mode.


VCD does not display relevant error descriptions for error codes.


NSM should fix the Cisco Nexus 1000V feature limitation issue.


Powering up a single VM configures all vApp networks.


vEths mapped to the port profiles are not counted in the show resource-availability monitor command.


The server IP address becomes for a MN stateless host.


Traffic loss occurs after the VSM reloads if PSEC is restricted and the DSM bit is set.


You cannot process a large number of IGMP queries from the upstream switch on the Cisco Nexus 1000V.


The VEM does not increase the number of maximum ports after an upgrade to the current version of the Cisco Nexus 1000V.


snmpwalk does not return values of SyslogServer objects.


Internal VLANs (3968 to 4047) can be trunked and configured on ports.


Downloading the files from the VSM configuration with IPv6 throws an error.


When the VSM is set up with an IPv6 address and accessed, the server drops or resets the connection randomly and causes multiple issues.


When trying to install a license file, installation fails with an error message “file already exists” and the license file is not installed.


When PVLAN mappings are configured directly on the port-channel interface, the mappings are incorrect on the VEM.


A VEM upgrade from previous releases of the Cisco Nexus 1000V software to the current release of Cisco Nexus 1000V software fails.


ifHCInOctets and ifInOctets wrap while taking a snapshot of virtual machines.


SNMP times out when browsing the entire tree or the CISCO-PROCESS-MIB.


When you select the Install VIB option, the installer checks for the SVS connection and lists the hosts on the host selection page to proceed to the VEM installation.


When you deploy a gateway as a VSB, you are asked to enter the VSM's domain ID. For the gateway, you do not need the domain ID.


Unable to create DVportgroup during bulk VMotion.


The ISSU upgrade compatibility table is not modified to accommodate the VXLAN gateway.


The show install service-module command does not display the service module ISSU status.


Configuration mismatch between VSM and vPath.


IP address configuration on interface control0 is not persistent upon VSM reload.


When ESX is upgraded to vSphere 5.5, Mellanox NICs get a policy mismatch.


When ESX is upgraded to vSphere 5.5, the host management connectivity is lost.


CDN NIC gets a previous port policy after a reboot.


Removing a host with Intel Oplin from DVS causes all ports to reset.


A fully qualified domain name/user with port-profile visibility fails.


The port-profile visibility feature is not able to update permissions.


Improper sync occurs with vCenter when port-profile names have special characters.


A VEM upgrade fails when the scratch space is a network file system.


After unregistering the Cisco Nexus 1000V on vShield, the alert timer runs.


The wrong message is displayed for VC user ID and password.


The virtual Ethernet auto delete option does not work. If the switch has non-participating virtual Ethernet (vEth) interfaces, those interfaces are not automatically deleted even if you configure the vEth auto delete option.

Quality of Service


Table 5 Quality of Service

Open Caveat Headline


QoS marking limitation occurs in the VCD environment.


Ports go to the error-disabled state during ACL or QoS commit errors.



Features

Table 6 Features

Open Caveat Headline


PSEC with multiple MAC addresses and PVLAN are not supported.


Port migration with a switchover causes ports to go to “No port-profile.”


A split-brain condition causes ports with pending ACL/QoS transactions to go into an err-disabled state.


The interface configuration fails when vEths are nonparticipating due to an unreachable module.


A RADIUS AAA error occurs when the feature CTS is enabled and there is a switchover.


The copy running-config startup-config command takes 8 to 10 minutes to complete the copy.


The show bridge-domain vteps command shows the IP address even after removing the vEth.


When an ACL deny is applied on mgmt0 to block HTTP and HTTPS, the ACL counters are not incremented, but HTTP and HTTPS are blocked as expected.


When the ACL applied to the mgmt interface is changed, iptables adds entries to the existing tables without clearing and rebuilding them for the new ACL, which causes incorrect filtering.


VTEP IP address is stuck in the VTEP list after a headless VEM reconnects to the VSM.


ACLs do not work on IGMP versions 2 and 3.


The no acllog match-log-level level command does not reset the ACL logging level to the default value.


The vethPerHostUsed field displays the same value as the vethUsed field in the XML response for the http://vsm_ip/api/vc/limits API. It should display the number of vEths on the host with the maximum used vEths.


An incorrect default value is shown for max-deny flows and max-permit flows.


A vApp fails to power on with an insufficient resource error.


The IGMP process fails to read the PVLAN association.

Resolved Caveats

The following are descriptions of caveats that were resolved in Cisco Nexus 1000V Release 4.2(1)SV2(2.2). The IDs are linked to the Cisco Bug Search tool.


Table 7 Resolved Caveats

Resolved Caveat Headline


Bandwidth allocated for a queue is not as expected when there is congestion for ESXi 5.5 hosts.


An existing vAPP cannot be powered down, and a new vAPP cannot be deployed.


PSOD with unknown unicast/broadcast VXLAN traffic in ESXi hosts.


The Cisco Nexus 1000V is unable to determine the link speed on the M73KR-E, NC553i, and NC554FLB adapters on ESXi 5.5.


The segment ID is not updated in the VSM on the port bringup after a switchover.


Multicast packets are dropped by the ESXi firewall after upgrading to ESXi 5.5.


VSG IP bindings are lost when the module with active VMs flaps.


Multicast frames are looped by VXLAN VTEP when vPath is enabled.


On the VSM boot partition, memory usage is 90%.


The VSM PA coupler log file is not getting rolled over.


ISIS multicast packets must be forwarded by the VEM.


VEMDPA crashes when moving a VM from one host to another.


Stale port-profile/BD entries are in the startup configuration.


PSS gets corrupted on the gateways in a corner case.

MIB Support

The Cisco Management Information Base (MIB) list includes Cisco proprietary MIBs and many other Internet Engineering Task Force (IETF) standard MIBs. These standard MIBs are defined in Requests for Comments (RFCs). To find specific MIB information, you must examine the Cisco proprietary MIB structure and related IETF-standard MIBs supported by the Cisco Nexus 1000V Series switch.

The MIB Support List is available at the following FTP site:

Obtaining Documentation and Submitting a Service Request

For information on obtaining documentation, using the Cisco Bug Search Tool (BST), submitting a service request, and gathering additional information, see What’s New in Cisco Product Documentation at:

Subscribe to What’s New in Cisco Product Documentation, which lists all new and revised Cisco technical documentation, as an RSS feed and have content delivered directly to your desktop using a reader application. The RSS feeds are a free service.