Cisco DCNM Installation and Licensing Guide, Release 6.x
Preparing to Install Cisco DCNM

Table of Contents

Preparing to Install Cisco DCNM

Information About Cisco DCNM

Cisco MDS 9000 Switch Management and Cisco DCNM-SAN

Cisco MDS 9000 Switch Management

Storage Management Solutions Architecture

In-Band Management and Out-of-Band Management

Prerequisites

Initial Setup Routine

Preparing to Configure the Switch

Default Login

Setup Options

Assigning Setup Information

Configuring Out-of-Band Management

Configuring In-Band Management

Using the setup Command

Starting a Switch in the Cisco MDS 9000 Family

Accessing the Switch

Supported Software

Cisco DCNM

Information About Deploying Cisco DCNM

Database Support

Operating Systems

VMware Support

Cisco DCNM SAN Cluster and Federation Requirements

Server Ports

Clustered-Server Cisco DCNM-LAN Requirements

Deploying a Single-Server Cisco DCNM Environment

Deploying a Federation or Clustered-Server Cisco DCNM Environment

Deploying a Single-Server Cisco DCNM Environment

Where Do You Go Next?

Preparing to Install Cisco DCNM

This chapter describes the prerequisites for installing Cisco DCNM components and includes the following sections:

Information About Cisco DCNM

This section includes the following topics:

Cisco MDS 9000 Switch Management and Cisco DCNM-SAN

The Cisco DCNM-SAN is a set of network management tools that supports Secure Simple Network Management Protocol version 3 (SNMPv3). It provides a graphical user interface (GUI) that displays real-time views of your network fabrics and allows you to manage the configuration of Cisco MDS 9000 Family devices and third-party switches. Cisco DCNM-SAN provides an alternative to the command-line interface (CLI) for most switch configuration commands.

In addition to complete configuration and status monitoring capabilities for Cisco MDS 9000 Family of switches, Cisco DCNM-SAN provides powerful Fibre Channel troubleshooting tools. These in-depth health and configuration analysis capabilities leverage unique Cisco MDS 9000 switch capabilities: Fibre Channel Ping and Traceroute.

This section includes the following topics:

Cisco MDS 9000 Switch Management

The Cisco MDS 9000 Family of switches can be accessed and configured in many different ways and supports standard management protocols. Table 1-1 lists the management protocols that Cisco DCNM-SAN supports to access, monitor, and configure the Cisco MDS 9000 Family of switches.

 

Table 1-1 Supported Management Protocols

Management Protocol
Purpose

Telnet/SSH

Provides remote access to the CLI for a Cisco MDS 9000 switch.

FTP/SFTP/TFTP, SCP

Copies configuration and software images between devices.

SNMPv1, v2c, and v3

Includes over 80 distinct Management Information Bases (MIBs). Cisco MDS 9000 Family switches support SNMP versions 1, 2c, and 3 and RMON V1 and V2. RMON provides advanced alarm and event management, including setting thresholds and sending notifications based on changes in device or network behavior.

By default, the Cisco DCNM-SAN communicates with Cisco MDS 9000 Family switches using SNMPv3, which provides secure authentication using encrypted usernames and passwords. SNMPv3 also provides the option to encrypt all management traffic.

HTTP/HTTPS

Includes HTTP and HTTPS for web browsers to communicate with Cisco DCNM-SAN Web Services and for the distribution and installation of the Cisco DCNM-SAN software. It is not used for communication between the Cisco DCNM-SAN Server and Cisco MDS 9000 Family switches.

XML/CIM over HTTP/HTTPS

Includes CIM server support for designing storage area network management applications to run on Cisco SAN-OS and NX-OS.

ANSI T11 FC-GS-3

Provides Fibre Channel-Generic Services (FC-GS-3), which define management servers such as the Fabric Configuration Server (FCS). Cisco DCNM-SAN uses the information provided by the FCS, together with the information contained in the Name Server database and in the Fibre Channel Shortest Path First (FSPF) topology database, to build a detailed topology view and to collect information about all the devices in the fabric.

Storage Management Solutions Architecture

Management services required for the storage environment can be divided into five layers, with the bottom layer being closest to the physical storage network equipment, and the top layer managing the interface between applications and storage resources.

Of these five layers of storage network management, Cisco DCNM-SAN provides tools for device (element) management and fabric management. In general, Device Manager is most useful for device management (a single switch), while Cisco DCNM-SAN is more efficient for performing fabric management operations involving multiple switches.

Tools for upper-layer management tasks can be provided by Cisco or by third-party storage and network management applications. The following summarizes the goals and function of each layer of storage network management:

  • Device management provides tools to configure and manage a device within a system or a fabric. You use device management tools to perform tasks on one device at a time, such as initial device configuration, setting and monitoring thresholds, and managing device system images or firmware.
  • Fabric management provides a view of an entire fabric and its devices. Fabric management applications provide fabric discovery, fabric monitoring, reporting, and fabric configuration.
  • Resource management provides tools for managing resources such as fabric bandwidth, connected paths, disks, I/O operations per second (IOPS), CPU, and memory. You can use Cisco DCNM-SAN to perform some of these tasks.
  • Data management provides tools for ensuring the integrity, availability, and performance of data. Data management services include redundant array of independent disks (RAID) schemes, data replication practices, backup or recovery requirements, and data migration. Data management capabilities are provided by third-party tools.
  • Application management provides tools for managing the overall system consisting of devices, fabric, resources, and data from the application. Application management integrates all these components with the applications that use the storage network. Application management capabilities are provided by third-party tools.

In-Band Management and Out-of-Band Management

Cisco DCNM-SAN requires an out-of-band (Ethernet) connection to at least one Cisco MDS 9000 Family switch. You need either mgmt0 or IP over Fibre Channel (IPFC) to manage the fabric.

mgmt0

The out-of-band management connection is a 10/100 Mbps Ethernet interface on the supervisor module, labeled mgmt0. The mgmt0 interface can be connected to a management network to access the switch through IP over Ethernet. You must connect to at least one Cisco MDS 9000 Family switch in the fabric through its Ethernet management port. You can then use this connection to manage the other switches using in-band (Fibre Channel) connectivity. Otherwise, you need to connect the mgmt0 port on each switch to your Ethernet network.

Each supervisor module has its own Ethernet connection; however, the two Ethernet connections in a redundant supervisor system operate in active or standby mode. The active supervisor module also hosts the active mgmt0 connection. When a failover event occurs to the standby supervisor module, the IP address and media access control (MAC) address of the active Ethernet connection are moved to the standby Ethernet connection.
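
As an illustration, the following CLI commands sketch how the mgmt0 interface might be assigned an IP address from the switch console (the address, mask, and gateway values are placeholders; the setup routine described later in this chapter performs the equivalent configuration):

switch# configure terminal
switch(config)# interface mgmt0
switch(config-if)# ip address 10.1.1.2 255.255.255.0
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# ip default-gateway 10.1.1.1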

IPFC

You can also manage switches on a Fibre Channel network using an in-band IP connection. The Cisco MDS 9000 Family supports RFC 2625 IP over Fibre Channel, which defines an encapsulation method to transport IP over a Fibre Channel network.

IPFC encapsulates IP packets into Fibre Channel frames so that management information can cross the Fibre Channel network without requiring a dedicated Ethernet connection to each switch. This feature allows you to build a completely in-band management solution.
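
As an illustration, the following commands sketch how the VSAN 1 (IPFC) interface might be assigned an IP address (the values are placeholders; the in-band setup procedure later in this chapter accomplishes the same result):

switch# configure terminal
switch(config)# interface vsan 1
switch(config-if)# ip address 10.2.2.10 255.255.255.0
switch(config-if)# no shutdown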

Prerequisites

This section includes the following topics:

Before you can install Cisco DCNM, ensure that the Cisco DCNM system meets the following prerequisites:

  • Before installing Cisco DCNM, ensure that the hostname is mapped with the IP address in the hosts file under the following location:

Microsoft Windows–C:\WINDOWS\system32\drivers\etc\hosts

Linux–/etc/hosts


Note If Oracle RAC is chosen as the database for Cisco DCNM, ensure that the database host IP addresses and virtual IP addresses are added to the hosts file with their hostnames.
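
For example, a hosts file entry for the Cisco DCNM server might look like the following (the IP address and hostname are placeholders):

10.77.1.10   dcnm-server.example.com   dcnm-server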


  • For RHEL, the maximum shared memory size must be 256 MB or more. To configure the maximum shared memory to 256 MB, use the following command:

sysctl -w kernel.shmmax=268435456

This setting, kernel.shmmax=268435456, should be saved in the /etc/sysctl.conf file. If this setting is not present or if it is less than 268435456, the Cisco DCNM server will fail after the server system is rebooted. For more information, visit the following URL:

http://www.postgresql.org/docs/8.3/interactive/kernel-resources.html
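
To make the setting persistent across reboots, add the following line to the /etc/sysctl.conf file and then reload the kernel parameters (a minimal sketch based on the value described above):

kernel.shmmax = 268435456

Run sysctl -p to apply the change without rebooting.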

The server system must be registered with the DNS servers. No other programs should be running on the server.

  • While using a remote PostgreSQL database server, ensure that the Cisco DCNM host IP addresses are added to the pg_hba.conf file in the PostgreSQL installation directory. After the entries are added, restart the database. A sample entry is shown below.

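A sample pg_hba.conf entry that allows a Cisco DCNM host to connect might look like the following (the address and authentication method are placeholders; consult the PostgreSQL documentation for the options appropriate to your environment):

# TYPE  DATABASE  USER  CIDR-ADDRESS    METHOD
host    all       all   10.77.1.10/32   md5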

  • Users installing Cisco DCNM must have full administrator privileges to create user accounts and start services. They must also have access to all of the ports used by the Cisco DCNM server and the PostgreSQL database: 1098, 1099, 4444, 4445, 8009, 8083, 8090, 8092, 8093, 514, and 5432. (A quick port-availability check is sketched after this list.)
  • When you connect to the server for the first time, Cisco DCNM checks to see if you have the correct Sun Java Virtual Machine version installed on your workstation. Cisco DCNM looks for version 1.6(x) during installation. If required, install the Sun Java Virtual Machine software.
  • On Windows, remote Cisco DCNM installations or upgrades should be done through the console, using VNC or the Remote Desktop Client (RDC) in console mode (make sure that RDC is used with the /Console option). This is especially important if the default PostgreSQL database is used with Cisco DCNM, because this database requires the local console for all installations and upgrades.
  • Before installing Cisco DCNM on a Windows Vista system, turn off User Account Control (UAC). To turn off UAC, choose Start > Control Panel > User Accounts > Turn User Account Control on or off, clear the Use User Account Control (UAC) to help protect your computer check box, and then click OK. Click Restart Now to apply the change.
  • The Telnet Client application is not installed by default on Microsoft Windows Vista. To install Telnet Client, choose Start > Control Panel > Programs and click Turn Windows features on or off (if UAC is turned on, you must give it permission to continue). Check the Telnet Client check box and then click OK.
  • You can run CiscoWorks on the same PC as Cisco DCNM even though the Java requirements are different. When installing the later Java version for Cisco DCNM, make sure that it does not overwrite the earlier Java version required for CiscoWorks. Both versions of Java can coexist on your PC.
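
The following command sketches one way to confirm, on Linux, that none of the ports listed above are already in use before installation (on Windows, netstat -an piped to findstr can be used in a similar way); no output indicates that the ports are free:

netstat -an | grep -E '(1098|1099|4444|4445|8009|8083|8090|8092|8093|514|5432)'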

Note When launching the Cisco DCNM installer, the console command option is not supported.



Note Using the Cisco DCNM installer in GUI mode requires that you log in to the remote server using VNC or X Windows. Using Telnet or SSH to install Cisco DCNM in GUI mode is not possible.


Before you can access Cisco DCNM, you must complete the following tasks:

  • Install a supervisor module on each switch that you want to manage.
  • Configure the supervisor module with the following values using the setup routine or the CLI:

IP address assigned to the mgmt0 interface

SNMP credentials (v3 username and password or v1/v2 communities), maintaining the same username and password for all the switches in the fabric.

Initial Setup Routine

The first time that you access a switch in the Cisco MDS 9000 Family, it runs a setup program that prompts you for the IP address and other configuration information necessary for the switch to communicate over the supervisor module Ethernet interface. This information is required to configure and manage the switch. The IP address can only be configured from the CLI. All Cisco MDS 9000 Family switches have the network administrator as a default user (admin). You cannot change the default user at any time. You must explicitly configure a strong password for any switch in the Cisco MDS 9000 Family. The setup scenario differs based on the subnet to which you are adding the new switch:

  • Out-of-band management—This feature provides a connection to the network through a supervisor module front panel Ethernet port.
  • In-band management—This feature provides IP over Fibre Channel (IPFC) to manage the switches. The in-band management feature is transparent to the network management system (NMS).

Note The IP address can only be configured from the CLI. When you power up the switch for the first time, assign the IP address. After you perform this step, the Cisco DCNM-SAN can reach the switch through the management port.


Preparing to Configure the Switch

Before you configure a switch in the Cisco MDS 9000 Family for the first time, you need the following information:

  • Administrator password, including:

Creating a password for the administrator (required).

Creating an additional login account and password (optional).

  • IP address for the switch management interface—The management interface can be an out-of-band Ethernet interface or an in-band Fibre Channel interface (recommended).
  • Subnet mask for the switch's management interface (optional).
  • IP addresses, including:

Destination prefix, destination prefix subnet mask, and next-hop IP address if you want to enable IP routing. Also, provide the IP address of the default network (optional).

Otherwise, provide an IP address of the default gateway (optional).

  • SSH service on the switch—To enable this optional service, select the type of SSH key (dsa/rsa/rsa1) and number of key bits (768 to 2048).
  • DNS IP address (optional).
  • Default domain name (optional).
  • NTP server IP address (optional).
  • SNMP community string (optional).
  • Switch name—This is your switch prompt (optional).

Note Be sure to configure the IP route, the IP default network address, and the IP default gateway address to enable SNMP access. If IP routing is enabled, the switch uses the IP route and the default network IP address. If IP routing is disabled, the switch uses the default gateway IP address.



Note You should verify that the Cisco DCNM-SAN Server hostname entry exists on the DNS server, unless the Cisco DCNM-SAN Server is configured to bind to a specific interface during installation.


Default Login

All Cisco MDS 9000 Family switches have the network administrator as a default user (admin). You cannot change the default user at any time (see the Security Configuration Guide, Cisco DCNM for SAN).

You have an option to enforce a secure password for any switch in the Cisco MDS 9000 Family. If a password is trivial (short, easy-to-decipher), your password configuration is rejected. Be sure to configure a secure password (see the Security Configuration Guide, Cisco DCNM for SAN). If you configure and subsequently forget this new password, you have the option to recover this password (see the Security Configuration Guide, Cisco DCNM for SAN).

Setup Options

The setup scenario differs based on the subnet to which you are adding the new switch. You must configure a Cisco MDS 9000 Family switch with an IP address to enable management connections from outside of the switch (see Figure 1-1).


Note Some concepts such as out-of-band management and in-band management are briefly explained here. These concepts are explained in more detail in subsequent chapters.


Figure 1-1 Management Access to Switches

Assigning Setup Information

This section describes how to initially configure the switch for both out-of-band and in-band management.


Note Press Ctrl-C at any prompt to skip the remaining configuration options and proceed with what is configured until that point. Entering a new password for the administrator is a requirement and cannot be skipped.



Tip If you do not wish to answer a previously configured question, or if you wish to skip answers to any questions, press Enter. If a default answer is not available (for example, switch name), the switch uses what was previously configured and skips to the next question.


Configuring Out-of-Band Management

You can configure both in-band and out-of-band configuration together by entering Yes in both Step 11c and Step 11d in the following procedure.

DETAILED STEPS


Step 1 Power on the switch. Switches in the Cisco MDS 9000 Family boot automatically.

Do you want to enforce secure password standard (Yes/No)?
 

Step 2 Enter Yes to enforce a secure password.

a. Enter the administrator password.

Enter the password for admin: 2008asdf*lkjh17
 

b. Confirm the administrator password.

Confirm the password for admin: 2008asdf*lkjh17

Tip If a password is trivial (short, easy to decipher), your password configuration is rejected. Be sure to configure a secure password as shown in the sample configuration. Passwords are case sensitive. You must explicitly configure a password that meets the requirements listed in the Security Configuration Guide, Cisco DCNM for SAN.


Step 3 Enter yes to enter the setup mode.


Note This setup utility guides you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.


 
Please register Cisco MDS 9000 Family devices promptly with your supplier. Failure to register may affect response times for initial service calls. MDS devices must be registered to receive entitled support services.
 
Press Enter anytime you want to skip any dialog. Use ctrl-c at anytime to skip away remaining dialogs.
 
Would you like to enter the basic configuration dialog (yes/no): yes
 

The setup utility guides you through the basic configuration process. Press Ctrl-C at any prompt to end the configuration process.

Step 4 Enter the new password for the administrator (admin is the default).

Enter the password for admin: admin
 

Step 5 Enter yes (no is the default) to create additional accounts.

Create another login account (yes/no) [n]: yes
 

While configuring your initial setup, you can create an additional user account (in the network-admin role) in addition to the administrator’s account. See the Security Configuration Guide, Cisco DCNM for SAN for information on default roles and permissions.


Note User login IDs must contain non-numeric characters.


a. Enter the user login ID [administrator].

Enter the user login ID: user_name
 

b. Enter the user password.

Enter the password for user_name: user-password
 

c. Confirm the user password.

Confirm the password for user_name: user-password
 

Step 6 Enter yes (no is the default) to create an SNMPv3 account.

Configure read-only SNMP community string (yes/no) [n]: yes
 

a. Enter the username (admin is the default).

SNMPv3 user name [admin]: admin
 

b. Enter the SNMPv3 password (minimum of eight characters). The default is admin123.

SNMPv3 user authentication password: admin_pass
 

Step 7 Enter yes (no is the default) to configure the read-only or read-write SNMP community string.

Configure read-write SNMP community string (yes/no) [n]: yes
 

a. Enter the SNMP community string.

SNMP community string: snmp_community
 

Step 8 Enter a name for the switch.

Enter the switch name: switch_name
 

Step 9 Enter yes (yes is the default) to configure out-of-band management.

Continue with Out-of-band (mgmt0) management configuration? [yes/no]: yes
 

a. Enter the mgmt0 IP address.

Mgmt0 IPv4 address: ip_address
 

b. Enter the mgmt0 subnet mask.

Mgmt0 IPv4 netmask: subnet_mask
 

Step 10 Enter yes (yes is the default) to configure the default gateway (recommended).

Configure the default-gateway: (yes/no) [y]: yes
 

a. Enter the default gateway IP address.

IPv4 address of the default gateway: default_gateway
 

Step 11 Enter yes (no is the default) to configure advanced IP options such as in-band management, static routes, default network, DNS, and domain name.

Configure Advanced IP options (yes/no)? [n]: yes
 

a. Enter no (no is the default) at the in-band management configuration prompt.

Continue with in-band (VSAN1) management configuration? (yes/no) [no]: no
 

b. Enter yes (no is the default) to enable IP routing capabilities.

Enable the ip routing? (yes/no) [n]: yes
 

c. Enter yes (no is the default) to configure a static route (recommended).

Configure static route: (yes/no) [n]: yes
 

Enter the destination prefix.

Destination prefix: dest_prefix
 

Enter the destination prefix mask.

Destination prefix mask: dest_mask
 

Enter the next-hop IP address.

Next hop ip address: next_hop_address
 

Note Be sure to configure the IP route, the default network IP address, and the default gateway IP address to enable SNMP access. If IP routing is enabled, the switch uses the IP route and the default network IP address. If IP routing is disabled, the switch uses the default gateway IP address.


d. Enter yes (no is the default) to configure the default network (recommended).

Configure the default network: (yes/no) [n]: yes
 

Enter the default network IP address.


Note The default network IP address is the destination prefix provided in Step 11c.


Default network IP address [dest_prefix]: dest_prefix
 

e. Enter yes (no is the default) to configure the DNS IP address.

Configure the DNS IPv4 address? (yes/no) [n]: yes
 

Enter the DNS IP address.

DNS IPv4 address: name_server
 

f. Enter yes (default is no) to configure the default domain name.

Configure the default domain name? (yes/no) [n]: yes
 

Enter the default domain name.

Default domain name: domain_name
 

Step 12 Enter yes (no is the default) to enable Telnet service.

Enable the telnet server? (yes/no) [n]: yes
 

Step 13 Enter yes (no is the default) to enable the SSH service.

Enabled SSH server? (yes/no) [n]: yes
 

Step 14 Enter the SSH key type.

Type the SSH key you would like to generate (dsa/rsa)? dsa
 

Step 15 Enter the number of key bits within the specified range.

Enter the number of key bits? (768 to 2048): 768
 

Step 16 Enter yes (no is the default) to configure the NTP server.

Configure NTP server? (yes/no) [n]: yes
Configure clock? (yes/no) [n] :yes
Configure timezone? (yes/no) [n] :yes
Configure summertime? (yes/no) [n] :yes
Configure the ntp server? (yes/no) [n] : yes
 

a. Enter the NTP server IP address.

NTP server IP address: ntp_server_IP_address
 

Step 17 Enter noshut (shut is the default) to configure the default switch port interface to the noshut state.

Configure default switchport interface state (shut/noshut) [shut]: noshut
 

Step 18 Enter on (on is the default) to configure the switch port trunk mode.

Configure default switchport trunk mode (on/off/auto) [on]: on
 

Step 19 Enter no (no is the default) to configure switchport port mode F.

Configure default switchport port mode F (yes/no) [n] : no
 

Step 20 Enter permit (deny is the default) to set the default zone policy to permit.

Configure default zone policy (permit/deny) [deny]: permit
 

This step permits traffic flow to all members of the default zone.

Step 21 Enter yes (no is the default) to enable full zone set distribution (see the Fabric Configuration Guide, Cisco DCNM for SAN). This enables the switch-wide default for the full zone set distribution feature.

Enable full zoneset distribution (yes/no) [n]: yes
 

You see the new configuration. Review and edit the configuration that you have just entered.

Step 22 Enter no (no is the default) if you are satisfied with the configuration.

The following configuration will be applied:
username admin password admin_pass role network-admin
username user_name password user_pass role network-admin
snmp-server community snmp_community ro
switchname switch
interface mgmt0
ip address ip_address subnet_mask
no shutdown
ip routing
ip route dest_prefix dest_mask dest_address
ip default-network dest_prefix
ip default-gateway default_gateway
ip name-server name_server
ip domain-name domain_name
telnet server enable
ssh key dsa 768 force
ssh server enable
ntp server ipaddr ntp_server
system default switchport shutdown
system default switchport trunk mode on
system default port-channel auto-create
zone default-zone permit vsan 1-4093
zoneset distribute full vsan 1-4093
 
Would you like to edit the configuration? (yes/no) [n]: no
 

Step 23 Enter yes (yes is default) to use and save this configuration:

Use this configuration and save it? (yes/no) [y]: yes
 

Caution If you do not save the configuration at this point, your changes will not be preserved the next time the switch is rebooted. Enter yes to save the new configuration and to ensure that the kickstart and system images are also automatically configured.


 

Configuring In-Band Management

The in-band management logical interface is VSAN 1. This management interface uses the Fibre Channel infrastructure to transport IP traffic. An interface for VSAN 1 is created on every switch in the fabric. Each switch should have its VSAN 1 interface configured with an IP address in the same subnetwork. A default route that points to the switch that provides access to the IP network should be configured on every switch in the Fibre Channel fabric (see Fabric Configuration Guide, Cisco DCNM for SAN).


Note You can configure both in-band and out-of-band configuration together by entering Yes in both Step 9c and Step 9d in the following procedure.


DETAILED STEPS


Step 1 Power on the switch. Switches in the Cisco MDS 9000 Family boot automatically.

Step 2 Enter the new password for the administrator.

Enter the password for admin: 2004asdf*lkjh18
 

Tip If a password is trivial (short, easy-to-decipher), your password configuration is rejected. Be sure to configure a strong password as shown in the sample configuration. Passwords are case sensitive. You must explicitly configure a password that meets the requirements listed in the User Accounts section in Security Configuration Guide, Cisco DCNM for SAN.


Step 3 Enter yes to enter the setup mode.

This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.
 
Please register Cisco MDS 9000 Family devices promptly with your supplier. Failure to register may affect response times for initial service calls. MDS devices must be registered to receive entitled support services.
 
Press Enter incase you want to skip any dialog. Use ctrl-c at anytime to skip away remaining dialogs.
 
Would you like to enter the basic configuration dialog (yes/no): yes
 

The setup utility guides you through the basic configuration process. Press Ctrl-C at any prompt to end the configuration process.

Step 4 Enter no (no is the default) if you do not wish to create additional accounts.

Create another login account (yes/no) [no]: no
 

Step 5 Configure the read-only or read-write SNMP community string.

a. Enter no (no is the default) to avoid configuring the read-only SNMP community string.

Configure read-only SNMP community string (yes/no) [n]: no
 

Step 6 Enter a name for the switch.


Note The switch name is limited to 32 alphanumeric characters. The default is switch.


Enter the switch name: switch_name
 

Step 7 Enter no (yes is the default) at the configuration prompt to configure out-of-band management.

Continue with Out-of-band (mgmt0) management configuration? [yes/no]: no
 

Step 8 Enter yes (yes is the default) to configure the default gateway.

Configure the default-gateway: (yes/no) [y]: yes
 

a. Enter the default gateway IP address.

IP address of the default gateway: default_gateway
 

Step 9 Enter yes (no is the default) to configure advanced IP options such as in-band management, static routes, default network, DNS, and domain name.

Configure Advanced IP options (yes/no)? [n]: yes
 

a. Enter yes (no is the default) at the in-band management configuration prompt.

Continue with in-band (VSAN1) management configuration? (yes/no) [no]: yes
 

Enter the VSAN 1 IP address.

VSAN1 IP address: ip_address
 

Enter the subnet mask.

VSAN1 IP net mask: subnet_mask
 

b. Enter no (yes is the default) to enable IP routing capabilities.

Enable ip routing capabilities? (yes/no) [y]: no
 

c. Enter no (yes is the default) to configure a static route.

Configure static route: (yes/no) [y]: no
 

d. Enter no (yes is the default) to configure the default network.

Configure the default-network: (yes/no) [y]: no
 

e. Enter no (yes is the default) to configure the DNS IP address.

Configure the DNS IP address? (yes/no) [y]: no
 

f. Enter no (no is the default) to skip the default domain name configuration.

Configure the default domain name? (yes/no) [n]: no
 

Step 10 Enter no (yes is the default) to disable Telnet service.

Enable the telnet service? (yes/no) [y]: no
 

Step 11 Enter yes (no is the default) to enable the SSH service.

Enabled SSH service? (yes/no) [n]: yes
 

Step 12 Enter the SSH key type (see the Security Configuration Guide, Cisco DCNM for SAN) that you would like to generate.

Type the SSH key you would like to generate (dsa/rsa/rsa1)? rsa
 

Step 13 Enter the number of key bits within the specified range.

Enter the number of key bits? (768 to 1024): 1024
 

Step 14 Enter no (no is the default) if you do not want to configure the NTP server.

Configure NTP server? (yes/no) [n]: no
 

Step 15 Enter shut (shut is the default) to configure the default switch port interface to the shut state.

Configure default switchport interface state (shut/noshut) [shut]: shut
 

Note The management Ethernet interface is not shut down at this point—only the Fibre Channel, iSCSI, FCIP, and Gigabit Ethernet interfaces are shut down.


Step 16 Enter auto (off is the default) to configure the switch port trunk mode.

Configure default switchport trunk mode (on/off/auto) [off]: auto
 

Step 17 Enter deny (deny is the default) to deny a default zone policy configuration.

Configure default zone policy (permit/deny) [deny]: deny
 

This step denies traffic flow to all members of the default zone.

Step 18 Enter no (no is the default) to disable a full zone set distribution.

Enable full zoneset distribution (yes/no) [n]: no
 

This step disables the switch-wide default for the full zone set distribution feature.

You see the new configuration. Review and edit the configuration that you have just entered.

Step 19 Enter no (no is the default) if you are satisfied with the configuration.

The following configuration will be applied:
username admin password admin_pass role network-admin
snmp-server community snmp_community rw
switchname switch
interface vsan1
ip address ip_address subnet_mask
no shutdown
ip default-gateway default_gateway
no telnet server enable
ssh key rsa 1024 force
ssh server enable
no system default switchport shutdown
system default switchport trunk mode auto
no zone default-zone permit vsan 1-4093
no zoneset distribute full vsan 1-4093
 
Would you like to edit the configuration? (yes/no) [n]: no
 

Step 20 Enter yes (yes is default) to use and save this configuration.

Use this configuration and save it? (yes/no) [y]: yes
 

Caution If you do not save the configuration at this point, your changes will not be preserved the next time the switch is rebooted. Enter yes to save the new configuration and to ensure that the kickstart and system images are also automatically configured.


 

Using the setup Command

To make changes to the initial configuration at a later time, you can enter the setup command in EXEC mode.

switch# setup
---- Basic System Configuration Dialog ----
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
*Note: setup always assumes a predefined defaults irrespective
of the current system configuration when invoked from CLI.
 
Press Enter incase you want to skip any dialog. Use ctrl-c at anytime
to skip away remaining dialogs.
 
Would you like to enter the basic configuration dialog (yes/no): yes
 

The setup utility guides you through the basic configuration process.

Starting a Switch in the Cisco MDS 9000 Family

The following procedure is a review of the tasks you should have completed during hardware installation, including starting up the switch. These tasks must be completed before you can configure the switch.


Note You must use the CLI for initial switch start up.


DETAILED STEPS


Step 1 Verify the following physical connections for the new Cisco MDS 9000 Family switch:

  • The console port is physically connected to a computer terminal (or terminal server).
  • The management 10/100 Ethernet port (mgmt0) is connected to an external hub, switch, or router.

See the Cisco MDS 9000 Family Hardware Installation Guide (for the required product) for more information.


Tip Save the host ID information for future use (for example, to enable licensed features). The host ID information is provided in the Proof of Purchase document that accompanies the switch.


Step 2 Verify that the default console port parameters are identical to those of the computer terminal (or terminal server) attached to the switch console port (an example terminal connection command follows the list):

  • 9600 baud
  • 8 data bits
  • 1 stop bit
  • No parity
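
For example, on a Linux workstation the screen utility can open a console session with these parameters (the serial device name is a placeholder; on Windows, a terminal emulator such as PuTTY can be configured with the same settings):

screen /dev/ttyS0 9600,cs8,-parenb,-cstopb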

Step 3 Power on the switch. The switch boots automatically and the switch# prompt appears in your terminal window.


 

Accessing the Switch

After initial configuration, you can access the switch in one of three ways (see Figure 1-2):

  • Serial console access—You can use a serial port connection to access the CLI.
  • In-band IP (IPFC) access—You can use Telnet or SSH to access a switch in the Cisco MDS 9000 Family or use Cisco DCNM-SAN to access the switch.
  • Out-of-band (10/100BASE-T Ethernet) access—You can use Telnet or SSH to access a switch in the Cisco MDS 9000 Family or use Cisco DCNM-SAN to access the switch.

Figure 1-2 Switch Access Options

Supported Software


Note For the latest information on supported software, see the Cisco DCNM Release Notes, Release 6.x.


Cisco DCNM and Cisco Device Manager have been tested with the following software:

  • Operating Systems

Server—Windows 2003, Windows 2008 Standard SP2 edition (32-bit or 64-bit), Windows 2008 R2 SP1 (64-bit), RHEL 5.4/5.5/5.6/5.7 (32-bit and 64-bit)

Client—Windows XP and Windows 7 (32-bit and 64-bit)

Red Hat Enterprise Linux AS Release 5.4


Note You cannot install Cisco DCNM-SAN and Cisco DCNM-LAN Server on Windows 7 (32-bit and 64-bit) platform.


 

  • Java

Sun JRE and JDK 1.6(x) are supported

Java Web Start 1.6 u31

  • Browsers

The following common web browsers that support Adobe Flash 10 are qualified for use with Cisco DCNM-LAN and Cisco DCNM-SAN:

Internet Explorer

Firefox

  • Databases

Oracle Database 10g Express, Oracle Enterprise Edition 10g, Oracle Enterprise Edition 11g, and 11gR2 Enterprise Edition (we recommend Oracle 11gR2 Enterprise Edition for customers with large fabrics).

Oracle RAC

PostgreSQL 8.2, 8.3 (Windows and Red Hat Enterprise Linux AS Release 5)

PostgreSQL 8.1 (Solaris 9 and 10)

  • Security

Cisco ACS 3.1 and 4.0

PIX Firewall

IP Tables

SSH v2

Global Enforce SNMP Privacy Encryption

HTTPS

  • VMware Supported Version

ESX 4.5

ESXi 4.5 and 5.0

Cisco DCNM

Cisco DCNM provides an alternative to the command-line interface (CLI) for most switch configuration commands.

In addition to complete configuration and status monitoring capabilities for Cisco MDS 9000 switches, Cisco DCNM-SAN provides powerful Fibre Channel troubleshooting tools. These in-depth health and configuration analysis capabilities leverage unique MDS 9000 switch capabilities: Fibre Channel Ping and Traceroute.

Cisco DCNM-SAN includes these management applications:

Cisco DCNM Server

The Cisco DCNM-SAN Server component must be started before running Cisco DCNM-SAN. On a Windows PC, Cisco DCNM-SAN Server is installed as a service. This service can then be administered using the Windows Services in the control panel. Cisco DCNM-SAN Server is responsible for discovery of the physical and logical fabric and for listening for SNMP traps, syslog messages, and Performance Manager threshold events.


Note Cisco DCNM-SAN standalone is not supported in Cisco DCNM 6.1.x release.


Cisco DCNM-SAN Client

The Cisco DCNM-SAN Client displays a map of your network fabrics, including Cisco MDS 9000 Family switches, third-party switches, hosts, and storage devices. The Cisco DCNM-SAN Client provides multiple menus for accessing the features of the Cisco DCNM-SAN Server.

Device Manager

Starting from Cisco MDS NX-OS Release 5.2(1), Cisco DCNM-SAN automatically installs Device Manager. Device Manager provides two views of a single switch:

  • Device View displays a graphic representation of the switch configuration and provides access to statistics and configuration information.
  • Summary View displays a summary of xE ports (Inter-Switch Links), Fx ports (fabric ports), and Nx ports (attached hosts and storage) on the switch, as well as Fibre Channel and IP neighbor devices. Summary or detailed statistics can be charted, printed, or saved to a file in tab-delimited format.

Performance Manager

Performance Manager presents detailed traffic analysis by capturing data with SNMP. This data is compiled into various graphs and charts that can be viewed with any web browser.

Cisco DCNM Web Client

The Cisco DCNM Web Client allows operators to monitor and obtain reports for Cisco MDS and Nexus events, performance, and inventory from a remote location using a web browser. Licensing and discovery are part of the Cisco DCNM web client.

Cisco DCNM-LAN Client

The Cisco DCNM-LAN Client displays a map of discovered Ethernet networks, including the Cisco Nexus 7000 Series, Cisco Nexus 5000 Series, Cisco Nexus 4000 Series, Cisco Nexus 3000 Series, and Cisco Nexus 1000V Series switches and the Catalyst 6500 Series switches. The Cisco DCNM-LAN client provides provisioning and monitoring of Ethernet interfaces for these switches. It allows you to configure complex features such as vPC, VDC, and FabricPath, and provides a topology representation of vPC, port channel, VLAN mappings, and FabricPath.

Information About Deploying Cisco DCNM

This section includes the following topics:


Note Cisco DCNM can be deployed in a Cisco DCNM-SAN federation or a Cisco DCNM-LAN cluster model. For a Cisco DCNM-SAN federation, the database URL (properties) should remain the same for all the Cisco DCNM-SAN nodes in the federation. For Cisco DCNM-LAN clusters, the database URL (properties), partition name, and multicast addresses should remain the same for all the Cisco DCNM-LAN nodes in the cluster. If Cisco DCNM-SAN is deployed in a federation, you should also deploy Cisco DCNM-LAN in cluster mode, because both Cisco DCNM-SAN and Cisco DCNM-LAN use the same database schema as a single product. Similarly, if Cisco DCNM-LAN is deployed as a cluster, Cisco DCNM-SAN has to be deployed as a federation.


Database Support

Cisco DCNM supports the following databases:

  • PostgreSQL 8.2
  • PostgreSQL 8.3
  • Oracle Database 10g
  • Oracle Database 11g
  • Oracle RAC 10g and 11g

If the Cisco DCNM installer does not find a previous installation of a supported database, it can install PostgreSQL 8.3 for you.

Operating Systems

For information about the specific editions of supported server operating systems, see the Cisco DCNM Release Notes, Release 5.x , at the following location:

http://www.cisco.com/en/US/products/ps9369/tsd_products_support_series_home.html

You can install Cisco DCNM on a supported version of one of the following operating systems:

  • Microsoft Windows Server

If the server system runs the Microsoft Windows operating system, the Cisco DCNM server software runs as a service. By default, the Cisco DCNM server starts automatically when you boot up the server system.

  • Red Hat Enterprise Linux

VMware Support

Cisco DCNM supports the installation of Cisco DCNM servers in VMware virtual machines that have a compatible Windows operating system or Linux operating system supported by Cisco DCNM-LAN. The following requirements apply:

  • The VMware server software must be a supported version.
  • The virtual machine in which you install a Cisco DCNM server must meet all server requirements.

For the latest information about supported VMware server software and other server requirements, see the Cisco DCNM Release Notes, Release 5.x , at the following location:

http://www.cisco.com/en/US/products/ps9369/tsd_products_support_series_home.html

Cisco DCNM SAN Cluster and Federation Requirements

When you are installing Cisco DCNM for the first time, do not start the Cisco DCNM-LAN services if the hosts on which Cisco DCNM will be deployed in federation or cluster mode are in different subnets. At the end of the installation procedure, ensure that you uncheck the Start LAN and SAN services check box and manually start the Cisco DCNM-SAN services.

When you are doing a silent installation, ensure that you change the property value to FALSE for Start_DCNM in the installer.properties file.
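
For example, the relevant line in the installer.properties file would be similar to the following (a sketch based on the property name above):

Start_DCNM=FALSE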

When you are upgrading and the existing Cisco DCNM-SAN 5.x federation nodes are in different subnets, do not start the Cisco DCNM-LAN services. Cisco DCNM-LAN (cluster) does not support deployments where the Cisco DCNM-LAN nodes are in different subnets, because the database might get corrupted. Ensure that you do not use the Cisco DCNM shortcuts to start and stop the Cisco DCNM services; instead, manually start the Cisco DCNM-SAN services using one of the following methods:

  • On Microsoft Windows—Choose Control Panel > Services and start the DCNM-SAN service, or navigate to Cisco DCNM Install folder\fm\bin and type Fmserver to start the Cisco DCNM services.
  • On Linux—Navigate to Cisco DCNM Install folder/fm/bin and type ./Fmserver start.

In a Cisco DCNM-LAN server cluster, one server performs the master server role and the remaining servers are member servers. The server with the oldest start time is the master server; therefore, you can control which server is the master server by starting that server first. For information about how Cisco DCNM-LAN operates in a clustered-server environment, see the Cluster Administration feature in the Cisco DCNM Fundamentals Guide, Release 5.x .

To help simplify the management of your server cluster, we recommend that you use the primary Cisco DCNM server as the master server. To do so, start the primary server before you start any other server in the cluster.

Beginning with Cisco DCNM Release 6.x, the recommended deployment is Cisco DCNM-LAN clustering with Cisco DCNM-SAN federation across nodes. The following scenarios are not recommended when you install Cisco DCNM for the first time:

  • DCNM-SAN federation mode without DCNM-LAN Clustering setup
  • DCNM-LAN clustering mode without DCNM-SAN federation setup

For more information, see the “Deploying a Federation or Clustered-Server Cisco DCNM Environment” section.

Server Ports

A Cisco DCNM-LAN server must be able to receive the network traffic from Cisco DCNM-LAN clients on a number of ports. Any network gateway device that controls the traffic sent from a Cisco DCNM-LAN client to a Cisco DCNM-LAN server must permit the traffic sent to the ports that the Cisco DCNM-LAN server is configured to use.

Table 1-2 lists the default ports that services on a Cisco DCNM-LAN server listen to for client communications. One port is not configurable. You can configure the other ports. The server installer can resolve port conflicts automatically.

 

Table 1-2 Default TCP Ports for Client Communications

Service Name                        Default Port for SAN    Default Port for LAN    Configurable?
RMI                                 1198                    1098                    During installation
Naming Service                      9099                    1099                    During installation
SSL                                 3943                    3843                    During installation
EJB                                 3973                    3873                    During installation
Server Bind 1                       5644                    4445                    During installation
Server Bind 2                       5446                    4446                    During installation
JMS                                 5457                    4457                    During installation
Syslog (system message) Receiver    5545                    5445                    During installation
AJP Connector                       9009                    8009                    During installation
Web Server                          80                      8080                    During installation
Web Services                        9093                    8083                    During installation
RMI Object                          24444                   14444                   During installation
UIL2                                -                       8093                    During installation
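
As an illustration, on a Linux gateway device that uses iptables, rules similar to the following could permit client connections to two of the default LAN ports listed in Table 1-2 (the interface name and port selection are examples only; open whichever ports your installation actually uses):

iptables -A FORWARD -i eth0 -p tcp --dport 8080 -j ACCEPT
iptables -A FORWARD -i eth0 -p tcp --dport 8083 -j ACCEPT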

In a clustered-server deployment, the Cisco DCNM-LAN servers in the cluster listen for UDP messages that are multicast to the cluster partition name. The supported topologies for clustered-server deployments do not allow gateway devices between servers in the cluster; however, for reference purposes, Table 1-3 lists the default ports that a Cisco DCNM-LAN server listens to for server cluster communications. Some ports are not configurable. You can configure the other ports during the server installation. The installer software creates a default value for the three ports.

 

Table 1-3 Default Ports for Clustered-Server Communications

Service Name                            Protocol    Default Port                  Configurable?
High Availability Naming Service        TCP         1100                          No
High Availability RMI Naming Service    TCP         1101                          No
High Availability Naming Service        UDP         1102                          No
Multicast port                          UDP         Determined at installation    During installation
Multicast port                          UDP         Determined at installation    During installation
Multicast port                          UDP         Determined at installation    During installation

Clustered-Server Cisco DCNM-LAN Requirements

This section includes the following topics:

Prerequisites for Deploying a Clustered-Server Cisco DCNM-LAN Environment

Before you begin to deploy a clustered-server Cisco DCNM-LAN environment, you must ensure that the server systems in the cluster meet the following requirements:

  • The following items must be identical for all server systems in the cluster:

Operating system

Number of CPUs

CPU speed

Memory

  • If you plan to install Cisco DCNM-LAN servers in VMware virtual machines, the following additional requirements must be met:

All servers in the cluster must be installed in a virtual machine. You cannot deploy a server cluster with a mix of virtual and physical server systems.

  • There can be no routing device between servers in a Cisco DCNM-LAN deployment.
  • If you plan to use RADIUS or TACACS+ authentication of Cisco DCNM-LAN users, you must ensure that the authentication servers are configured to accept authentication requests from all the Cisco DCNM-LAN servers in the cluster.
  • You must enable the Network Time Protocol (NTP) on all servers in the cluster.

Clustered-Server Configuration Requirements

During the deployment of a clustered-server Cisco DCNM-LAN environment, you must ensure that the following requirements are met:

  • All servers in the cluster must run an identical release of Cisco DCNM-LAN, such as Cisco DCNM Release 5.0(2).
  • You must specify the following information identically on all servers:

Cluster partition name

Multicast addresses and ports

Cisco DCNM-LAN database path and credentials

Authentication settings

This requirement is met by the secondary server installation process.

  • The archive directory specified during the installation of each server must refer to the same directory. The path to the directory can be different for each server. This shared directory must be an external shared directory and accessible by all Cisco DCNM-LAN servers with read/write privilege. For example, two Cisco DCNM-LAN servers installed on Microsoft Windows could use different paths, such as X:\DCNM\data and F:\data, but the two paths must refer to the same directory.
  • You must enable or disable secured client communications on all servers in the cluster.

Deploying a Single-Server Cisco DCNM Environment

You can deploy Cisco DCNM in a single-server environment. In a single-server environment, the primary Cisco DCNM server is the one server system that runs the Cisco DCNM server software. This procedure provides the general steps that you must take to deploy a single-server Cisco DCNM environment.

BEFORE YOU BEGIN

The server system that runs the Cisco DCNM server must meet the system requirements for the Cisco DCNM server. For details about system requirements, see the Cisco DCNM Release Notes, Release 5.x .

DETAILED STEPS


Step 1 Ensure that the server system that you want to install the Cisco DCNM server on meets all the server system requirements.

For more information, see the “Clustered-Server Cisco DCNM-LAN Requirements” section.

Step 2 Download the Cisco DCNM server software. Cisco DCNM-SAN, Cisco DCNM-LAN, and SMI-S are installed as part of Cisco DCNM installation.

For more information, see the “Where Do You Go Next?” section.

Step 3 If your deployment will use a previously installed database, make sure that you have prepared the database:

    • PostgreSQL—If the PostgreSQL server system will be remote to the single Cisco DCNM server, you must configure the PostgreSQL server to allow connections from the Cisco DCNM server. For more information, see the “Preparing a PostgreSQL Database” section.

If you intend to install the Cisco DCNM server on the same server system as the PostgreSQL software, no further database preparation is required.

Step 4 Install the Cisco DCNM-LAN server software on the server system.

Step 5 (Optional) If you want to encrypt client-server communication, enable the Cisco DCNM-LAN server to use TLS with client-server communications.

For more information, see the “Enabling Encrypted Client-Server Communications” section.

Step 6 (Optional) If you want to allow the use of the Cisco DCNM-LAN client outside a firewall or other gateway device that the Cisco DCNM-LAN server is behind, do the following:

a. Configure the Cisco DCNM-LAN server with a specific secondary server bind port.

For more information, see the “Specifying a Secondary Server Bind Port” section.

b. Configure the firewall or gateway device to permit connections from the Cisco DCNM-LAN client to the ports used by the Cisco DCNM-LAN server, including the secondary server bind port that you specified.

For more information about the ports used by the Cisco DCNM-LAN server, see the “Server Ports” section.

c. For information on configuring the firewall or gateway device to permit connections from the Cisco DCNM-SAN client to the ports used by the Cisco DCNM-SAN server, see “Running Cisco DCNM Behind a Firewall” section.

Step 7 (Optional) If you did not start the Cisco DCNM server when you installed it, start the Cisco DCNM-LAN server now. For more information, see the Cisco DCNM Fundamentals Guide, Release 5.x.

Step 8 You can install Cisco DCNM licenses using the Cisco DCNM web client. For more information on licensing, see Chapter 6, “Installing and Managing Licenses for Cisco Data Center Network Manager.”

Step 9 Install the Cisco DCNM-LAN and Cisco DCNM-SAN client. For more information, see the Cisco DCNM Fundamentals Guide, Release 5.x .

Step 10 Perform device discovery for one or more devices using the Cisco DCNM web client. From the Cisco DCNM web client, click Add Data Source to start discovering devices. For more information, see the Cisco DCNM Fundamentals Guide, Release 5.x .

Step 11 Begin using Cisco DCNM to configure and monitor the managed devices. For more information about using Cisco DCNM, see the Cisco DCNM-LAN and Cisco DCNM-SAN configuration guides, available at the following location:

http://www.cisco.com/en/US/products/ps9369/tsd_products_support_series_home.html


 

Deploying a Federation or Clustered-Server Cisco DCNM Environment

A Cisco DCNM server cluster includes one primary server and between one and four secondary servers. This procedure provides the general steps that you must take to deploy a clustered-server Cisco DCNM environment.


Note Ensure the virtual machines are on the same host when you install Cisco DCNM cluster in a virtual environment.


BEFORE YOU BEGIN

Every server system that will run the Cisco DCNM server software must meet the system requirements for the Cisco DCNM server. For details about system requirements, see the Cisco DCNM Release Notes, Release 5.x .

DETAILED STEPS


Step 1 Ensure that each server system that will be part of the Cisco DCNM-LAN server cluster and Cisco DCNM-SAN federation meets all the server system requirements.

For more information, see the “Clustered-Server Cisco DCNM-LAN Requirements” section.

Step 2 Ensure that each server system meets the additional server requirements for a federation or clustered-server deployment.

For more information, see the “Prerequisites for Deploying a Clustered-Server Cisco DCNM-LAN Environment” section.

Step 3 Download the Cisco DCNM server software. Cisco DCNM-SAN, Cisco DCNM-LAN, and the SMI-S agent are installed as part of Cisco DCNM installation.

For more information, see the “Where Do You Go Next?” section.

Step 4 If your deployment will use a previously installed database, make sure that you have prepared the database as follows:

If you intend to install one of the Cisco DCNM servers on the same server system as the PostgreSQL software, you do not need to configure the PostgreSQL server to accept connections from the locally installed Cisco DCNM server.


Note Cisco DCNM server installations using a remote PostgreSQL server will fail if the PostgreSQL server is not configured to accept remote connections from the Cisco DCNM server system.


Step 5 Set up a shared directory that all Cisco DCNM servers in the cluster can use to archive common data and files. The path to the directory can be different for each server. The Cisco DCNM shared directory must be an external shared directory and accessible by all Cisco DCNM servers with read/write privilege. For example, two Cisco DCNM servers installed on Microsoft Windows could use different paths, such as X:\DCNM\data and F:\data, but the two paths must refer to the same directory.

Step 6 On the primary server system, install the Cisco DCNM server software.

Step 7 If you installed the PostgreSQL server during the primary Cisco DCNM server installation, you must configure the PostgreSQL server to allow connections from each secondary Cisco DCNM server in the cluster, because these connections are remote to the PostgreSQL server system. For more information, see the “Preparing a PostgreSQL Database” section.


Note Cisco DCNM server installations using a remote PostgreSQL server will fail if the PostgreSQL server is not configured to accept remote connections from the Cisco DCNM-LAN server system.


Step 8 On each secondary server system, install the Cisco DCNM server software.


Note All the nodes in the cluster should have the same Cisco DCNM partition name and multicast IP addresses.


Step 9 (Optional) If you want to use secure client communication, enable every Cisco DCNM server in the cluster to use TLS to encrypt client-server communications.

For more information, see the “Enabling Encrypted Client-Server Communications” section.

Step 10 (Optional) If you want to allow the use of the Cisco DCNM-LAN client outside a firewall or other gateway device that the Cisco DCNM-LAN server cluster is behind, do the following:

a. Configure each Cisco DCNM-LAN server in the cluster with the same, specific secondary server bind port.

For more information, see the “Specifying a Secondary Server Bind Port” section.

b. Configure the firewall or gateway device to permit connections from the Cisco DCNM-LAN client to the ports used by each Cisco DCNM-LAN server in the cluster, including the secondary server bind port that you specified.

For more information about the ports used by the Cisco DCNM server, see the “Server Ports” section.

c. For information on configuring the firewall or gateway device to permit connections from the Cisco DCNM-SAN client to the ports used by the DCNM-SAN server, see the “Running Cisco DCNM Behind a Firewall” section.

Step 11 (Optional) If you have not started all the Cisco DCNM servers in the federation or cluster, start each server system in the server cluster now. For more information about starting a Cisco DCNM-LAN server cluster, see the Cisco DCNM Fundamentals Guide, Release 5.x .

Step 12 Install the Cisco DCNM (SAN and LAN) clients. For more information, see the Cisco DCNM Fundamentals Guide, Release 5.x .

Step 13 Perform device discovery for one or more devices using the Cisco DCNM web client. From the Cisco DCNM web client, click Add Data Source to start discovering devices. For more information, see the Cisco DCNM Fundamentals Guide, Release 5.x .

Step 14 Begin using Cisco DCNM to configure and monitor the managed devices. For more information about using Cisco DCNM, see the Cisco DCNM-LAN configuration guides that are available at the following location:

http://www.cisco.com/en/US/products/ps9369/tsd_products_support_series_home.html


 

Deploying a Single-Server Cisco DCNM Environment

Beginning with Cisco DCNM Release 6.x, you can deploy Cisco DCNM in a clustered-server environment.

For installing Cisco DCNM Server on a Microsoft Windows platform, see the “Installing Cisco DCNM on Windows and Linux using the GUI” section.

For installing Cisco DCNM server using the script, see the “Installing Cisco DCNM Using the Silent Installer” section.

For installing Cisco DCNM using the VSB, see the “Installing and Administering Cisco DCNM VSB on a Cisco Nexus 1010 Switch” section.

Where Do You Go Next?

After reviewing the default configuration, you can change it or perform other configuration or management tasks. The initial setup can only be performed at the CLI. However, you can continue to configure other software features, or access the switch after initial configuration by using either the CLI or the Device Manager and Cisco DCNM applications.