Cisco DCNM Installation and Licensing Guide, Release 6.x
Installing and Administering Cisco DCNM VSB on a Cisco Nexus 1010 Switch

Table of Contents

Installing and Administering Cisco DCNM VSB on a Cisco Nexus 1010 Switch

Information About Cisco DCNM VSB

Installing Cisco DCNM VSB

System Requirements

Installing Cisco DCNM VSB

Using a Remote Database Server for Standalone and Cluster Installations

Using the Remote Database for a Standalone Installation

Using the Remote Database for an HA-Enabled Cluster Mode Installation

Administering the Cisco DCNM VSB

Verifying the Status of a Cisco DCNM VSB

Accessing Cisco DCNM VSB Using the CLI

Deleting a Cisco DCNM VSB

Managing Cisco DCNM VSBs Using the Attachmate Reflection Tool

Using the Attachmate Reflection Tool to Reset User Credentials

Installing and Administering Cisco DCNM VSB on a Cisco Nexus 1010 Switch

This chapter describes how to install and administer the Cisco Data Center Network Manager Virtual Service Blade (Cisco DCNM VSB) on a Cisco Nexus 1010 Virtual Services switch.

This chapter includes the following sections:

Information About Cisco DCNM VSB

The Cisco Nexus 1010 switch is a shell that hosts multiple Virtual Supervisor Modules (VSMs) and other service modules, such as the Cisco DCNM and the Network Analysis Module (NAM), while hiding the details of the multiple virtual machines running on a hypervisor. From a network management perspective, the hosted VSMs appear as a cluster. Each VSM and its associated Virtual Ethernet Modules (VEMs) comprise one virtual switch.

In addition to VSMs, the Cisco Nexus 1010 switch can host other service modules. Each of these hosted components is known as a Virtual Service Blade (VSB). The Cisco DCNM VSB enables network administrators to manage the data center LAN infrastructure. The Cisco DCNM VSB is integrated with the Cisco Nexus 1010 switch and extends visibility to the virtual machines in Cisco Nexus 1000V switch deployments.


Note Cisco DCNM-SAN Essentials is not supported on the Cisco Nexus 1010 switch. You cannot open a fabric in the Cisco DCNM-SAN web client from a remote desktop.


Installing Cisco DCNM VSB

This section describes how to install Cisco DCNM VSB.

This section includes the following topics:

System Requirements

Table 6-1 lists the system requirements for the Cisco DCNM VSB.

Table 6-1 Cisco DCNM VSB System Requirements

Component
Recommended Requirements

RAM (free)

8 GB

CPU speed

Dual-processor or dual-core CPU

Disk space (free)

80 GB for standalone installation

40 GB for cluster installation

Operating system

Wind River Linux 3.0

Installing Cisco DCNM VSB

Figure 6-1 shows a standalone installation that uses the local database.

Figure 6-1 DCNM VSB with a Local Database

 

BEFORE YOU BEGIN

You must log in to the Cisco Nexus 1010 switch using the CLI or a web browser.


Note You cannot create an Oracle RAC database in the Cisco DCNM VSB on a Cisco Nexus 1010 device.


To install the Cisco DCNM VSB, see the "Upgrading Cisco DCNM Servers" section.

 

Using a Remote Database Server for Standalone and Cluster Installations

You can use a remote database for both standalone and cluster mode installations. In a standalone installation, you can configure the installation setup to use a remote Oracle database server. In a cluster mode installation, the remote database (PostgreSQL or Oracle) is shared by all of the nodes in the cluster.

Cisco DCNM installs a PostgreSQL database on the Cisco Nexus 1010 switch by default. If you want to use an external database server, specify its URL instead of choosing the local database. The IP addresses of the slave nodes must be listed in the pg_hba.conf file of that database, which resides in the data folder of the PostgreSQL installation.
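For example, the pg_hba.conf entries for two slave nodes might look like the following sketch. The database name, username, and node addresses shown here are hypothetical placeholders; use the values from your own deployment.

```
# Allow two DCNM cluster slave nodes (placeholder addresses) to reach
# the DCNM database over TCP with md5 password authentication.
host    dcnmdb    dcnmuser    10.77.212.91/32    md5
host    dcnmdb    dcnmuser    10.77.212.92/32    md5
```

After editing pg_hba.conf, the PostgreSQL server must reload its configuration for the new entries to take effect.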

This section includes the following topics:

Using the Remote Database for a Standalone Installation

You can perform a standalone Cisco DCNM VSB installation using the remote database. Figure 6-2 shows a standalone Cisco DCNM VSB installation using the remote database.

Figure 6-2 Cisco DCNM VSB with a Remote Database

 

BEFORE YOU BEGIN

You must log in to the Cisco Nexus 1010 switch using the CLI or a web browser.

DETAILED STEPS


Step 1 Copy the Cisco DCNM ISO file to the bootflash:repository location of the Cisco Nexus 1010 switch.

Step 2 Enter the configuration mode and create a VSB.

virtual-service-blade VSB-NAME

Step 3 Associate the ISO file with the VSB.

virtual-service-blade-type new FILE-NAME.iso
 

Step 4 Initiate the Cisco VSB installation as follows:

virtual-service-blade VSB-NAME

 

a. Set up a cluster with a redundant Cisco Nexus 1010 pair of switches.

n1010(config-vsb-config)# enable
 

b. Set up a standalone Cisco DCNM VSB on the primary Cisco Nexus 1010 switch.

n1010(config-vsb-config)# enable primary

 

c. Set up a standalone Cisco DCNM VSB on the secondary Cisco Nexus 1010 switch.

n1010(config-vsb-config)# enable secondary
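
The commands in Steps 2 through 4 form a single CLI session. The following is a sketch, using the placeholder names dcnm-vsb1 and dcnm.iso; substitute your own VSB name and ISO file name, and choose the enable variant (enable, enable primary, or enable secondary) that matches your deployment:

```
n1010# configure terminal
n1010(config)# virtual-service-blade dcnm-vsb1
n1010(config-vsb-config)# virtual-service-blade-type new dcnm.iso
n1010(config-vsb-config)# enable primary
```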

 

Step 5 Enter the name of the VSB image.

Enter vsb image:

Note The value is populated by the Cisco Nexus 1010 with the Cisco DCNM ISO file name that you specified in Step 3.


 

Step 6 Enter the type of installation. The default is set to fresh installation.

Enter the mode of Installation [fresh/upgrade]:
 

Step 7 Enter Y to set up a Cisco DCNM cluster and federation.

Setup a DCNM Cluster and Federation [Y/N] : [N]
 

Step 8 Enter the hostname.

Enter the hostname: [dcnm-vsb]
 

Step 9 Enter the management IP address.

Enter Mgmt IP address:

Note The management address is used as the IP address for the primary Cisco DCNM VSB.


Step 10 Enter the management subnet mask IP address.

Enter Mgmt subnet mask Ip address: [dcnm-vsb]
 

Step 11 Enter the IP address of the default gateway.

Enter IP address of the default gateway:
 

Step 12 Enter Y to enable HTTPS for Cisco DCNM.

Enable HTTPS for DCNM[Y/N]: [N]
 

Step 13 Enter the Cisco DCNM partition name.

Enter DCNM partition name:
 

Step 14 Enter Y to use the default multicast addresses for cluster.

Use default multicast addresses for cluster (239.255.253.1-239.255.253.4)?[Y|N]: [Y]

Note If you want to use the default multicast addresses, enter Y. Otherwise, you can override the current set of multicast addresses.


 

Note Steps 15 to 18 are only displayed if you enter N in Step 14.


Step 15 Enter the multicast IP address for cluster 1.

Enter multicast IP address for cluster (1 of 4):
 

Step 16 Enter the multicast IP address for cluster 2.

Enter multicast IP address for cluster (2 of 4):
 

Step 17 Enter the multicast IP address for cluster 3.

Enter multicast IP address for cluster (3 of 4):
 

Step 18 Enter the multicast IP address for cluster 4.

Enter multicast IP address for cluster (4 of 4):
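
The default cluster addresses 239.255.253.1 through 239.255.253.4 fall in the administratively scoped IPv4 multicast block (239.0.0.0/8). If you override them in Steps 15 to 18, a quick sanity check that your replacements stay in that block might look like the following sketch; the address list shown is the documented defaults, so replace it with your own candidates:

```shell
# Check that candidate cluster multicast addresses fall in 239.0.0.0/8.
for addr in 239.255.253.1 239.255.253.2 239.255.253.3 239.255.253.4; do
  case "$addr" in
    239.*) echo "$addr: ok" ;;
    *)     echo "$addr: outside 239.0.0.0/8" ;;
  esac
done
```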
 

Step 19 Enter the location of the database.

Specify the location of the database [local/remote]: [local]
 

Step 20 Enter the URL for the remote database.

Enter URL for remote database:

Note Step 20 is displayed only when you choose remote in Step 19.


Step 21 Enter the database location.

Enter the DCNM database location:
 

Step 22 Enter the database name.

Enter the DCNM database name:
 

Step 23 Enter the Cisco DCNM database username.

Enter database username for DCNM[dcnmuser]: dcnmuser

Note The default Cisco DCNM database username is dcnmuser. This prompt is displayed for both local and remote databases.


Step 24 Enter the Cisco DCNM database password.

Enter database password for DCNM:
 

Step 25 Specify whether or not you want to mount the network file system as a data archive.

Mount a network file system as data archive[Y/N]: [N]
 

Step 26 Enter the network file system path to mount.

Enter NFS share path to mount[Ip-Address:path]:

Note When you use a Network File System (NFS) server as the repository for archiving configuration files and templates, you must specify the shared location. For example, you can specify 10.77.212.81:/opt/share/dcnm-repository where 10.77.212.81 is the NFS server and /opt/share/dcnm-repository is the shared directory.
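
The NFS share value combines the server address and the exported directory, separated by a colon. The following sketch checks a candidate value against the Ip-Address:path form expected by the prompt; the address and path are the example values from the note above:

```shell
# Check that an NFS share value matches the Ip-Address:path form
# expected by the installer prompt (example values from this guide).
nfs_share="10.77.212.81:/opt/share/dcnm-repository"
if echo "$nfs_share" | grep -Eq '^[0-9]{1,3}(\.[0-9]{1,3}){3}:/.+'; then
  echo "format ok: $nfs_share"
else
  echo "unexpected format: $nfs_share"
fi
```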



Note Cisco DCNM-LAN clustering does not support a Cisco Nexus 1010 HA pair of switches.



 

Using the Remote Database for an HA-Enabled Cluster Mode Installation

You can perform an HA-enabled cluster mode Cisco DCNM VSB installation by using the remote database. Figure 6-3 shows the two-node Cisco DCNM cluster.

Figure 6-3 Two Node Cisco DCNM Cluster

 

BEFORE YOU BEGIN

You must log in to the Cisco Nexus 1010 switch using the CLI or a web browser.

DETAILED STEPS


Step 1 Copy the Cisco DCNM ISO file to the bootflash:repository location of the Cisco Nexus 1010 switch.

Step 2 Enter the configuration mode and create a VSB.

virtual-service-blade VSB-NAME

Step 3 Associate the ISO file with the VSB.

virtual-service-blade-type new FILE-NAME.iso
 

Step 4 Initiate the Cisco VSB installation as follows:

virtual-service-blade VSB-NAME

 

a. Set up a cluster with a redundant Cisco Nexus 1010 pair of switches.

n1010(config-vsb-config)# enable
 

b. Set up a standalone Cisco DCNM VSB on the primary Cisco Nexus 1010 switch.

n1010(config-vsb-config)# enable primary

 

c. Set up a standalone Cisco DCNM VSB on the secondary Cisco Nexus 1010 switch.

n1010(config-vsb-config)# enable secondary

 

Step 5 Enter the name of the VSB image.

Enter vsb image:

Note The value is populated by the Cisco Nexus 1010 with the Cisco DCNM ISO file name that you specified in Step 3.


 

Step 6 Enter the type of installation. The default is set to fresh installation.

Enter the mode of Installation [fresh/upgrade]:
 

Step 7 Enter Y to set up a Cisco DCNM cluster and federation.

Setup a DCNM Cluster and Federation [Y/N] : [N]
 

Step 8 Enter the hostname.

Enter the hostname: [dcnm-vsb]
 

Step 9 Enter the management IP address.

Enter Mgmt IP address:

Note The management address is used as the IP address for the primary Cisco DCNM VSB.


Step 10 Enter the management subnet mask IP address.

Enter Mgmt subnet mask Ip address: [dcnm-vsb]
 

Step 11 Enter the IP address of the default gateway.

Enter IP address of the default gateway:
 

Step 12 Enter Y to enable HTTPS for Cisco DCNM.

Enable HTTPS for DCNM[Y/N]: [N]
 

Step 13 Enter the Cisco DCNM partition name.

Enter DCNM partition name:
 

Step 14 Enter Y to use the default multicast addresses for a cluster.

Use default multicast addresses for cluster (239.255.253.1-239.255.253.4)?[Y|N]: [Y]

Note If you want to use the default multicast addresses, enter Y. Otherwise, you can override the current set of multicast addresses.



Note Steps 15 to 18 are only displayed if you enter N in Step 14.


Step 15 Enter the multicast IP address for cluster 1.

Enter multicast IP address for cluster (1 of 4):
 

Step 16 Enter the multicast IP address for cluster 2.

Enter multicast IP address for cluster (2 of 4):
 

Step 17 Enter the multicast IP address for cluster 3.

Enter multicast IP address for cluster (3 of 4):
 

Step 18 Enter the multicast IP address for cluster 4.

Enter multicast IP address for cluster (4 of 4):
 

Step 19 Enter the location of the database.

Specify the location of the database [local/remote]: [local]
 

Step 20 Enter the URL for the remote database.

Enter URL for remote database:

Note Step 20 is displayed only when you choose remote in Step 19.


Step 21 Enter the database location.

Enter the DCNM database location:
 

Step 22 Enter the database name.

Enter the DCNM database name:
 

Step 23 Enter the Cisco DCNM database username.

Enter database username for DCNM[dcnmuser]: dcnmuser

Note The default Cisco DCNM database username is dcnmuser. This prompt is displayed for both local and remote databases.


Step 24 Enter the Cisco DCNM database password.

Enter database password for DCNM:
 

Step 25 Specify whether or not you want to mount the network file system as a data archive.

Mount a network file system as data archive[Y/N]: [N]
 

Step 26 Enter the network file system path to mount.

Enter NFS share path to mount[Ip-Address:path]:

Note When you use a Network File System (NFS) server as the repository for archiving configuration files and templates, you must specify the shared location. For example, you can specify 10.77.212.81:/opt/share/dcnm-repository where 10.77.212.81 is the NFS server and /opt/share/dcnm-repository is the shared directory.



Note Cisco DCNM-LAN clustering does not support a Cisco Nexus 1010 high-availability pair of switches.



 

Administering the Cisco DCNM VSB

The Cisco DCNM installer binary file in the installer package is available at the following location: /root/CSCOdcnm/install. The default data archive location configured during installation is /root/CSCOdcnm/data_archive. You can override this value by specifying a different location during the Cisco DCNM VSB deployment.

Table 6-2 shows the soft links that are available in the /root directory of the Cisco DCNM VSB.

Table 6-2 Cisco DCNM Shortcuts

File Name
Purpose

Start_DCNM_Servers

Starts the Cisco DCNM Servers

Stop_DCNM_Servers

Stops the Cisco DCNM Servers

Uninstall_DCNM

Uninstalls the Cisco DCNM Server

DCNM_Location

Points to the Cisco DCNM installation directory
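
For example, to restart the Cisco DCNM servers using the soft links above, you might run the following from the /root directory of the VSB (a sketch; the exact console messages vary by release):

```
[root@dcnm-vsb ~]# ./Stop_DCNM_Servers
[root@dcnm-vsb ~]# ./Start_DCNM_Servers
```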

Verifying the Status of a Cisco DCNM VSB

You can verify the configuration and status of a deployed Cisco DCNM VSB by using one of the following commands:

Command
Purpose

show virtual-service-blade

Displays all the deployed Cisco DCNM VSBs and the configurations applied to each VSB.

show virtual-service-blade summary

Displays all the deployed Cisco DCNM VSBs and the summary of each VSB.

show virtual-service-blade-type summary

Displays all the Cisco DCNM VSBs that are aligned to a VSB type.

Accessing Cisco DCNM VSB Using the CLI

You can access a deployed Cisco DCNM VSB using the CLI by using the following commands:

Command
Purpose

login virtual-service-blade VSB_NAME [primary/secondary]

Logs in to the respective Cisco DCNM VSB.

Deleting a Cisco DCNM VSB

You can delete a Cisco DCNM VSB.

DETAILED STEPS

Command
Purpose

Step 1

shutdown [primary/secondary]

Powers down the Cisco DCNM VSBs.

Step 2

no enable [primary/secondary]

Disables the deployed Cisco DCNM VSBs.

Step 3

no enable force

Force-disables the deployed Cisco DCNM VSBs.

Step 4

no virtual-service-blade VSB_NAME

Deletes both the primary and secondary Cisco DCNM VSBs.
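
Combined, the deletion sequence might look like the following CLI sketch. The VSB name dcnm-vsb1 is a placeholder, and the exact prompts and required keywords (primary, secondary, force) depend on your deployment:

```
n1010# configure terminal
n1010(config)# virtual-service-blade dcnm-vsb1
n1010(config-vsb-config)# shutdown
n1010(config-vsb-config)# no enable
n1010(config-vsb-config)# exit
n1010(config)# no virtual-service-blade dcnm-vsb1
```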


Note After the Cisco DCNM VSB is deployed, if you cannot launch the Cisco DCNM-LAN and DCNM-SAN web clients, follow the procedure below:

  • Ensure that the DCNM servers are started. To start the DCNM servers, run ./Start_DCNM_Servers from the /root directory.
  • If the Cisco DCNM installation fails, check the error.properties file in the /root directory. Update the installer.properties file in /root/CSCOdcnm/install, and then run sh dcnm.bin -i silent -f installer.properties from the /root/CSCOdcnm/install directory. Restart the server after the installation completes.


 

Managing Cisco DCNM VSBs Using the Attachmate Reflection Tool

Cisco DCNM supports the Attachmate Reflection tool on computers that run Windows and connect to VSBs installed on Linux hosts. You can use the Attachmate Reflection tool to upgrade Cisco DCNM VSBs, install licenses, and manage user credentials. You must install the Attachmate Reflection tool on the computer from which you connect to the VSB node.

To access the Cisco DCNM VSB user interface on a computer that runs Windows, enter the following command on the VSB node:

export DISPLAY=<ip address>:0.0
 

where the IP address is the IP address of the computer on which the Attachmate Reflection tool is installed.

Using the Attachmate Reflection Tool to Reset User Credentials

You can use the Attachmate Reflection tool to reset Cisco DCNM user credentials by running the password reset script stored in the /usr/local/cisco/dcm/dcnm/bin directory.

To modify user credentials, run pwreset.sh.
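
For example, from a shell session on the VSB node (a sketch; the prompts shown by the script may differ by release):

```
[root@dcnm-vsb ~]# cd /usr/local/cisco/dcm/dcnm/bin
[root@dcnm-vsb bin]# ./pwreset.sh
```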