Cisco Data Center Network Manager (DCNM) is a management system for Cisco NX-OS-based Programmable Fabrics and Cisco NX-OS-based Storage Fabrics. In addition to provisioning, monitoring, and troubleshooting the data center network infrastructure, Cisco DCNM provides a comprehensive feature set that meets the routing, switching, and storage administration needs of data centers. It streamlines the provisioning for the Programmable Fabric and monitors the SAN components.
Cisco DCNM provides a high level of visibility and control through a single web-based management console for Cisco Nexus Series Switches, Cisco MDS, and Cisco Unified Computing System (UCS) products. During the DCNM installation, you can choose to install applications related to Programmable Fabric only for Programmable Fabric-mode installations. Cisco DCNM also includes Cisco DCNM-SAN client functionality.
From Cisco DCNM Release 10.0.x, the DCNM-LAN thick client is not supported. However, most of its salient features have been migrated, and the operations can be performed from the Cisco DCNM Web Client.
| Feature | Description |
|---|---|
| Inline upgrade | This feature was introduced. The Standalone Inline Upgrade and DCNM Native HA Inline Upgrade methods have been introduced as alternatives to the existing upgrade methods for OVA and ISO. Users can now upgrade with fewer steps using these methods. For more information about the inline upgrade methods, see Standalone Inline Upgrade for OVA/ISO. |
| Installation options | The following installation options are provided for this release. For more information about the installation options, see Deploying the Open Virtual Appliance as an OVF Template. |
Cisco DCNM provides an alternative to the command-line interface (CLI) for switch configuration commands.
In addition to complete configuration and status monitoring capabilities for Cisco MDS 9000 switches, Cisco DCNM-SAN provides powerful Fibre Channel troubleshooting tools. These in-depth health and configuration analysis capabilities leverage unique MDS 9000 switch capabilities: Fibre Channel Ping and Traceroute.
Cisco DCNM-SAN includes these management applications:
The Cisco DCNM-SAN Server component must be started before running Cisco DCNM-SAN. On a Windows PC, Cisco DCNM-SAN Server is installed as a service. This service can then be administered using Windows Services in the Control Panel. Cisco DCNM-SAN Server is responsible for discovery of the physical and logical fabric, and for listening for SNMP traps, syslog messages, and Performance Manager threshold events.
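The listener role can be pictured with a minimal sketch. The following Python snippet is an illustrative stand-alone UDP syslog receiver, not DCNM code; it only shows the basic listening pattern that a fabric monitoring service builds on.

```python
# Illustrative stand-alone syslog receiver (not DCNM code). DCNM-SAN Server
# listens for syslog messages and SNMP traps as part of fabric discovery
# and monitoring; this sketch shows only the underlying UDP listen pattern.
import socketserver

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For UDP servers, self.request is a (data, socket) pair.
        data = self.request[0].strip()
        print(f"{self.client_address[0]}: {data.decode(errors='replace')}")

# Syslog uses UDP port 514 by default; binding to ports below 1024
# usually requires elevated privileges.
with socketserver.UDPServer(("0.0.0.0", 514), SyslogHandler) as server:
    server.serve_forever()
```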
The Cisco DCNM Web Client allows operators to monitor and obtain reports for Cisco MDS and Nexus events, performance, and inventory from a remote location using a web browser. Licensing and discovery are part of the Cisco DCNM web client.
Across Cisco DCNM Releases 10.0(1), 10.1(1), and 10.1(2), the salient features of the DCNM LAN Client have been migrated to the Web Client for access and monitoring. The Web Client now provides provisioning and monitoring of Ethernet interfaces for Ethernet switches. It allows you to configure complex features such as vPC, VDC, and FabricPath, and provides topology representations of vPC, port channel, VLAN mappings, and FabricPath.
The Cisco DCNM-SAN Client displays a map of your network fabrics, including Cisco MDS 9000 Family switches, third-party switches, hosts, and storage devices. The Cisco DCNM-SAN Client provides multiple menus for accessing the features of the Cisco DCNM SAN functionality.
Cisco DCNM-SAN automatically installs the Device Manager. Device Manager provides two views of a single switch: the Device View, which displays a graphic representation of the switch configuration and provides access to statistics and configuration information, and the Summary View, which displays a summary of the switch ports and attached devices.
Performance Manager presents detailed traffic analysis by capturing data with SNMP. This data is compiled into various graphs and charts that can be viewed with any web browser.
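As an illustration of SNMP-based traffic collection of the kind Performance Manager performs (a generic sketch, not DCNM's implementation), the following Python snippet polls the standard ifInOctets counter twice and derives an average inbound rate. The switch address, community string, and interface index are assumptions.

```python
# Generic SNMP polling sketch (not DCNM's implementation). Assumes the
# pysnmp package, a switch at SWITCH_IP answering SNMPv2c with community
# "public", and interface index 1. Requires: pip install pysnmp
import time
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

SWITCH_IP = "192.0.2.1"                   # placeholder switch address
IF_IN_OCTETS = "1.3.6.1.2.1.2.2.1.10.1"   # ifInOctets for ifIndex 1

def poll_counter(oid):
    """Fetch a single integer-valued OID via SNMP GET."""
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public"),
        UdpTransportTarget((SWITCH_IP, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(oid))))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))
    return int(var_binds[0][1])

# Sample the counter twice, ten seconds apart, and derive an average
# inbound rate in bits per second (counter wrap ignored for brevity).
first = poll_counter(IF_IN_OCTETS)
time.sleep(10)
second = poll_counter(IF_IN_OCTETS)
print(f"average inbound rate: {(second - first) * 8 / 10:.1f} bit/s")
```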
This section provides a basic overview of Programmable Fabric and non-Programmable Fabric deployments.
Cisco Programmable Fabric boosts network flexibility and efficiency. Programmable Fabric innovations simplify fabric management, optimize fabric infrastructure, and automate provisioning across physical and virtual environments.
The optimized spine-leaf topology provides enhanced forwarding, a distributed control plane, and integrated physical and virtual environments. The topologies enable any network anywhere, supporting transparent mobility for physical servers and virtual machines, including network extensibility. This increases resiliency through smaller failure domains and enables multitenant scale.
Cisco Programmable Fabric allows centralized fabric management across both physical servers and virtual machines. It provides automated network provisioning, common point of fabric access, and host, network, and tenant visibility. Open APIs allow better integration with orchestration and automation tools, in addition to cloud platforms.
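As a sketch of what API-driven integration can look like, the Python snippet below logs on to a DCNM-style REST endpoint and queries inventory with the returned token. The host name, credentials, and resource paths are illustrative assumptions; consult the REST API reference for your release for the actual endpoints.

```python
# Illustrative REST integration sketch. DCNM exposes a REST API, but the
# exact paths and payloads vary by release; the endpoint names, host, and
# credentials below are assumptions to be checked against the API
# reference for your version. Requires: pip install requests
import requests

DCNM = "https://dcnm.example.com"   # placeholder DCNM host

# Log on with HTTP Basic auth; a session token is returned for reuse.
resp = requests.post(f"{DCNM}/rest/logon",             # assumed path
                     json={"expirationTime": 60000},   # token TTL in ms
                     auth=("admin", "password"),
                     verify=False)                     # lab only: skips TLS checks
resp.raise_for_status()
token = resp.json()["Dcnm-Token"]

# Use the token on subsequent requests, for example an inventory query.
inventory = requests.get(f"{DCNM}/rest/inventory",     # assumed path
                         headers={"Dcnm-Token": token},
                         verify=False)
print(inventory.json())
```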
With complete mobility across the fabric, the Programmable Fabric uses network automation and provisioning to simplify physical server and virtual machine deployments. The network administrator can define profile templates for both physical servers and virtual machines. When a server administrator provisions virtual machines and physical servers, instances of network policies are automatically created and applied to the network leaf switch. As virtual machines move across the fabric, the network policy is automatically applied to the leaf switch.
Cisco DCNM in non-Programmable Fabric mode provisions and optimizes the overall uptime and reliability of the data center fabric. The following points highlight the significance of the non-Programmable Fabric mode:
Cisco DCNM Release 10.4(2) offers four types of installers. The images are packaged with the Cisco DCNM installer, signature certificate, and signature verification script.
You must unzip the desired Cisco DCNM installer image zip file to a directory. The image signature can be verified by following the steps in the README file. The installer from this package installs the Cisco DCNM software.
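The authoritative verification steps are in the README. As a generic illustration of integrity checking (not the DCNM-specific signature procedure), the following Python sketch computes a SHA-512 checksum of a downloaded image for comparison against a published value; the file name and expected checksum are placeholders.

```python
# Generic integrity-check sketch; the DCNM-specific signature verification
# steps are documented in the package README. The file name and expected
# checksum below are placeholders.
import hashlib

def sha512sum(path, chunk_size=1 << 20):
    """Stream the file so large installer images are not loaded into memory."""
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

expected = "<checksum published with the image>"   # placeholder
actual = sha512sum("dcnm-installer.iso")           # placeholder file name
print("OK" if actual == expected else "MISMATCH")
```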
This section describes the following four options:
This installer is available as an Open Virtual Appliance file (.ova). It contains a pre-installed OS, DCNM, and the other applications needed for Programmable Fabric, and requires a vCenter Server and ESXi environment. It can be deployed in either Programmable Fabric or non-Programmable Fabric mode; by default, it is deployed in Programmable Fabric mode.
This installer is available as an ISO image (.iso). It is a bundle of the OS, DCNM, and the other applications needed for Dynamic Fabric Automation. It can be deployed either in a VMware ESXi 5.x or 6.5 environment or as a Kernel-based Virtual Machine on RHEL 6.x. It can be deployed in either Programmable Fabric or non-Programmable Fabric mode; by default, it is deployed in Programmable Fabric mode.
This installer is available as an executable (.exe) and does not support Programmable Fabric features.
This installer is available as a binary (.bin) and does not support Programmable Fabric features.
The installer available for Cisco DCNM Release 10.4(2) can be deployed in one of the following modes.
All types of installers (.ova, .iso, .bin, .exe) are packaged with the PostgreSQL database. The default installation steps for the respective installers result in this mode of deployment.
If you have more than 50 switches in your setup, or if you expect your setup to grow over time, an external Oracle server is recommended. This mode of deployment requires the default installation setup, followed by steps to configure DCNM to use the external Oracle database, as described in the section Oracle Database for DCNM Servers.
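Before repointing DCNM at an external database, it can be useful to confirm that the Oracle listener is reachable from the DCNM host. The following Python sketch performs a plain TCP check against the default listener port 1521; the host name is a placeholder, and this check does not replace the configuration steps in Oracle Database for DCNM Servers.

```python
# Simple TCP reachability check for an external Oracle listener
# (illustrative; it does not configure DCNM). The host is a placeholder;
# 1521 is the default Oracle listener port.
import socket

ORACLE_HOST = "oracle-db.example.com"   # placeholder Oracle server
ORACLE_PORT = 1521

with socket.create_connection((ORACLE_HOST, ORACLE_PORT), timeout=5):
    print(f"Oracle listener reachable at {ORACLE_HOST}:{ORACLE_PORT}")
```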
The DCNM virtual appliances, both OVA and ISO, can be deployed in High Availability mode to provide resilience against application or OS failures. For more information, see Managing Applications in a High-Availability Environment.