Cisco Data Center Network Manager (DCNM) Release 10.2(1) includes the new features, enhancements, and hardware support that are described in the following sections.
The following are the enhancements that have been made to Media Controller in this release:
Bandwidth Tracking on Host-Facing Link—Senders and receivers connect to the leaf switches of the Programmable Media Network (PMN) fabric. A sender initiates a multicast flow and a receiver subscribes to a multicast flow; because multicast is used, multiple receivers can subscribe to the same flow. Senders are devices such as cameras, microphones, and playback devices. Receivers are devices such as video monitors, speakers, and multiviewers. You can track the bandwidth on the host-facing link. Using this functionality, Cisco DCNM does not allow receivers to request more flows, or senders to send more flows, than the available bandwidth on the host-facing link permits.
Topology Visualization—Cisco DCNM 10.2(1) includes a new scalable topology visualization GUI that shows details about the PMN fabric, endpoints attached to the fabric, search-based querying based on Flow ID, and related health statistics.
PMN Endpoint Sender and Receiver Management—Senders and receivers can connect to the leaf switches of the PMN fabric. Senders initiate a multicast flow and receivers subscribe to a multicast flow. Cisco DCNM exposes an API for the registration of senders and receivers, and allows the senders and receivers to be validated or authenticated by the API users. A table lists all the currently registered senders and receivers, with information about flow instances. The Cisco DCNM GUI and REST APIs allow users to add metadata to the receiver and sender information, such as Camera-BXB or Camera-SJ, to aid mapping.
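As a hedged sketch of how an API user might register a sender or receiver, the snippet below builds a registration body and shows where it would be posted. The endpoint URL and every field name here are illustrative assumptions, not the documented Cisco DCNM REST API schema:

```python
import json

# Hypothetical values: this URL is an assumption, not a documented endpoint.
DCNM_URL = "https://dcnm.example.com/rest/pmn/hosts"

def build_host_registration(role, name, multicast_ip, description):
    """Build a registration body for a PMN sender or receiver.

    All field names below are illustrative assumptions.
    """
    assert role in ("sender", "receiver")
    return {
        "role": role,                   # "sender" or "receiver"
        "hostName": name,               # e.g. a camera or monitor name
        "multicastGroup": multicast_ip, # flow the host sends or subscribes to
        "description": description,     # user metadata, e.g. "Camera-BXB"
    }

body = build_host_registration("sender", "cam-01", "239.1.1.10", "Camera-BXB")
print(json.dumps(body, indent=2))
# An API user would then POST this body to DCNM_URL with credentials, e.g.:
# requests.post(DCNM_URL, json=body, auth=(user, password), verify=ca_bundle)
```

The `description` field illustrates the kind of user-supplied metadata (such as Camera-BXB) mentioned above.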
Flow Alias—Using this functionality, you can specify names for multicast groups and flows. The multicast IP addresses are difficult to remember. Thus, by assigning a name to the multicast IP address, you can search and add policies based on the name.
You can configure a flow alias by choosing Cisco Web Client > Media Controller > Flow Alias.
To enable media controller on the Cisco DCNM Web Client, you must install Cisco DCNM in media-controller mode. See Cisco DCNM Installation Guide, Release 10.2(1) for more information.
You can monitor the devices by choosing Cisco DCNM Web Client > Media Controller.
In this release, Cisco DCNM provides a new wizard that enables you to deploy a network with ease. This feature provides simple network overlay provisioning for Cisco Nexus 9000 VXLAN EVPN LAN fabrics, and deploys the networks populated by the Cisco DCNM profile templates. To start, select an existing fabric, or add a new fabric, and then define the Fabric Settings.
After you select or create a fabric, continue to the next step of selecting a network. You can also add a new network, or edit or delete an existing network. To access the LAN Fabric Provisioning feature, choose Configure > LAN Fabric Provisioning > Network Deployment.
In this release, Cisco DCNM enables you to manage switches with either IPv4 or IPv6 management interfaces. Cisco DCNM Web access (the Cisco DCNM management interface) supports only IPv4. The extended fabric interface (eth1) supports both IPv4 and IPv6. Cisco DCNM supports the IPv6 management interface for LAN switches only, not for SAN switches. Also, only Cisco DCNM OVA and ISO installations support LAN switches with the IPv6 management interface; Windows and Linux installations do not support the IPv6 management interface.
Cisco DCNM supports POAP for LAN switches over IPv4 only, but the switch definition allows the management interface to be configured with an IPv6 address. After POAP completes, if the switch management interface is configured with IPv6, Cisco DCNM communicates with the switch through SNMP, SSH, NX-API, and syslog over IPv6. For Programmable Fabric, the switch can also communicate with the LDAP server on the Cisco DCNM server over IPv6.
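For illustration, a switch management interface defined with IPv6 might be configured as follows. This is a hedged sketch: the addresses and the default route shown are example values, not taken from this document:

```
interface mgmt0
  vrf member management
  ipv6 address 2001:db8:10::5/64

vrf context management
  ipv6 route ::/0 2001:db8:10::1
```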
The Fabric Extender (FEX) feature allows you to manage a Cisco Nexus 2000 Series Fabric Extender and its association with the Cisco NX-OS switch that it is attached to. From Release 10.2(1), you can create or modify FEX for the LAN devices by choosing Cisco DCNM Web Client > Inventory > Switches. The FEX feature is available only on LAN devices. If a Cisco Nexus switch is discovered as part of the SAN fabric, the FEX feature will not be available. The FEX feature is also not supported on Cisco Nexus 1000V devices.
From Cisco DCNM Release 10.2(1), you can create and manage VDCs by choosing Cisco DCNM Web Client > Inventory > Switches > VDCs. Because VDCs are supported only on the Cisco Nexus 7000 Series, click an active Cisco Nexus 7000 switch. After you create a VDC, you can change the interface allocation, VDC resource limits, and the high availability (HA) policies.
Cisco DCNM 10.2(1) provides auditing for configuration change across network switches. You can get a report for all the configuration changes that take place on the devices in a data center.
You can generate an audit report for a specified period. To generate reports using the Network Audit Config feature, a backup job must be scheduled for the device. The reports can also be exported to an HTML document. If a real-time job is scheduled for the device, the audit also shows the mode through which the configuration changes were made. The audit report uses color codes: green for new configurations, red for deleted configurations, and blue for changed configurations.
Cisco DCNM 10.2(1) allows you to edit VLANs.
To expand storage coverage, Cisco DCNM includes Pure Storage and HDS storage in DCNM Connect. For storage array discovery, Cisco DCNM supports storage virtualization profiles for the IBM SAN Volume Controller (SVC).
Currently, the performance polling interval is fixed at 5 minutes for all entities except Inter-Switch Links (ISLs). Cisco DCNM 10.2(1) provides the ability to adjust this interval to 10 minutes or 15 minutes. To configure the interval, choose Administration > Performance Setup > Database in the Cisco DCNM GUI.
In this release, Cisco DCNM provides top-level switch information per fabric instead of a top-level Default SAN. To access this feature, choose Administration > DCNM Server > Multi Site Manager.
Custom port groups are used in Cisco DCNM reports and event forwarding. Cisco DCNM enables you to view custom port groups and their performance statistics. To access this feature, choose Monitor > Custom Port Groups.
In Cisco DCNM 10.2(1), you can export slow drain analysis data to a CSV or an Excel file. The default file format for export is CSV. To export this data, choose Monitor > SAN > Slow Drain Analysis.
In addition, Cisco DCNM enables you to schedule a slow drain analysis job and send the result to an email.
In the Slow Drain Analysis window, an Email To (optional) field is available, which allows you to enter an email address. The report will be emailed to this configured email address after each job is complete.
For more information about this feature, see the Web Client Online Help.
In this release, the Cisco DCNM templates support IPv6.
The Endpoint Locator feature allows real-time tracking of endpoints within a data center. This includes tracing the life history of an endpoint as well as providing insights into the trends associated with endpoint additions, removals, moves, and so on.
With the Cisco DCNM OVA or ISO installation, the Cisco DCNM VM is deployed with two interfaces: eth0 for general access to Cisco DCNM, and eth1, which is used primarily for fabric management. In most deployments, the eth1 interface is part of the same network on which the mgmt0 interfaces of the Nexus switches reside. This allows Cisco DCNM to perform out-of-band management of these devices, including out-of-band POAP. Because the Border Gateway Protocol (BGP) process on Nexus devices runs only on a nonmanagement VRF (specifically, the default VRF), IP connectivity is required from Cisco DCNM to the fabric through one of the devices' front-panel interfaces. For this purpose, a third interface, ethx, is required on the Cisco DCNM VM; this interface provides inband connectivity to the network fabric and is a prerequisite for enabling the Endpoint Locator feature. Adding the new interface does not require a restart of the Cisco DCNM VM. After the virtual network interface card (vNIC) is added to the Cisco DCNM VM, the corresponding veth interface is created and is displayed as an ethx interface in the CentOS VM on which Cisco DCNM runs.
High Availability with Endpoint Locator—The Endpoint Locator feature, along with its key components, runs only on the active Cisco DCNM. However, the search process also runs on the standby Cisco DCNM so that all the endpoint data and associated events are always synchronized between the active and standby Cisco DCNMs. This way, during a switchover, the data is already available on the newly active Cisco DCNM. In addition, the Endpoint Locator allows a standby Cisco DCNM to be added in locations where there is only a single Cisco DCNM instance running with EPL enabled, so that a standby can be added at a later stage.
A third interface is required when inband management is used for a fabric via the eth1 interface. This ensures that the management interface used by Cisco DCNM for managing the devices, and potentially for POAP, does not have any dependency on the interface through which EPL BGP peering occurs.
After physical connectivity is established between Cisco DCNM and the fabric through a switch’s front-panel interface, the configurations should be performed on the respective switches and Cisco DCNM.
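As a hedged sketch of what the switch-side configuration might look like (the AS number, address, and address family shown are illustrative assumptions, not values from this document), the BGP peering toward the Cisco DCNM inband (ethx) interface could resemble:

```
router bgp 65000
  neighbor 192.0.2.10
    remote-as 65000
    description EPL peering to Cisco DCNM inband interface
    address-family l2vpn evpn
      send-community both
      route-reflector-client
```

The exact neighbor statements depend on the fabric type (BGP EVPN versus L3VPN/DFA) and on whether Cisco DCNM peers directly with a leaf or with the route reflectors.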
The Endpoint Locator supports the following features:
Support for a BGP EVPN fabric (Cisco Nexus 9000, Cisco Nexus 5600 Series as leafs)
Support for a L3VPN or DFA fabric (Cisco Nexus 5600 Series, Cisco Nexus 6000 as leafs)
Support for dual-homed endpoints
Support for dual-stack endpoints
Support for up to two BGP route reflectors (Cisco Nexus 9000, 7000, 5600, and 6000)
Support for the Endpoint Locator feature with and without NX-API (to gather additional information such as port, VLAN and so on)
Support for auto configuration of the fabric to enable the Endpoint Locator feature when Cisco DCNM is directly attached to a leaf or top-of-rack (ToR) in a fabric
Support of the Endpoint Locator feature when Cisco DCNM is not directly attached to a ToR or leaf in a fabric
Support for optional flush of the endpoint data to start afresh
Support for real-time and historical dashboards
Support for views with operational and exploratory insights such as endpoint lifetime, network, endpoint, VRF daily views, and operational heat map
Support for full high availability
Support for endpoint data stored for up to 180 days, amounting to a maximum of 5 GB of storage space
Supported scale: 10,000 endpoints
In the General and LAN Fabrics screens (Web Client > LAN Fabric Settings), you can select which replication mode (ingress replication or multicast replication) is used to handle BUM traffic for the VXLAN EVPN fabric. Ingress replication and multicast replication are mutually exclusive. If ingress replication is selected, Cisco DCNM allows only IR-based profiles, adds the appropriate IR configurations in the leaf templates, and does not show multicast configurations.
The following is a list of hardware supported in Cisco DCNM Release 10.2(1).
| Hardware Description | Part Number |
|---|---|
| Cisco MDS 9700 48-Port 32-Gbps Fibre Channel Switching Module | DS-X9648-1536K9 |
| Nexus 9300 with 24p 40/50G QSFP+ and 6p 40G/100G QSFP28 | N9K-C93180LC-EX |
| New fabric module for the Cisco Nexus 9516 Switch chassis | N9K-C9516-FM-E |
| 40/100G Ethernet module for the Nexus 9500 Series chassis | N9K-X9736C-EX |
| Cisco Nexus N9K-C92300YC fixed module | N9K-C92300YC |
| 48-port 1/10/25 Gigabit Ethernet SFP+ and 4-port 40/100 Gigabit Ethernet QSFP line card | N9K-X97160YC-EX |
| Nexus N9K-C9232C Series fixed module with 32x40G/100G | N9K-C9232C |
| Cisco Nexus 2348TQ-E 10GE Fabric Extender | |
Cisco DCNM POAP Template Package Release 10.2(1)ST(1) is a template package (.zip) file release.
You can download the Cisco-defined templates from https://software.cisco.com/download/release.html.
Note: Cisco DCNM POAP Template Package Release 10.2(1)ST(1) works only with Cisco DCNM Release 10.2(1) and later releases. It is not backward compatible with previous Cisco DCNM releases.
Cisco DCNM POAP Template Package Release 10.2(1)ST(1) includes the new features and enhancements that are described in the following section:
The Cisco DCNM POAP Template Package Release 10.2(1)ST(1) POAP templates allow IPv6 to be configured for the management interface. IPv6 can also be configured for AAA, DNS, LDAP, NTP, SNMP, and syslog servers.
In the General and LAN Fabrics screens (Web Client > LAN Fabric Settings), ingress replication can be selected for handling BUM traffic for a VXLAN EVPN fabric, and corresponding IR-based profiles are added. When ingress replication is selected for the fabric, all multicast fields are grayed out in the POAP configuration and no multicast configuration is added to the device.
Cisco DCNM POAP Template Package Release 10.2(1)ST(1) supports jumbo frames (up to 9216 bytes) for Nexus 7000 and Nexus 9000 templates.