Cisco Nexus 1000V Installation and Upgrade Guide, Release 4.2(1)SV1(5.1)
Installation of VSMs and VEMs

Installing the Cisco Nexus 1000V

Table Of Contents

Installing the Cisco Nexus 1000V

Information About Installing the Cisco Nexus 1000V

Obtaining the VSM Software

Obtaining the VEM Software

Information About the Nexus 1000V Installation Management Center

Prerequisites for Installing the Cisco Nexus 1000V

Nexus 1000V Installation Management Center Prerequisites

Host Prerequisites

Upstream Switch Prerequisites

VSM Prerequisites

VEM Prerequisites

Guidelines and Limitations

Guidelines and Limitations of the Nexus 1000V Installation Management Center

Guidelines and Limitations When Installing the Cisco Nexus 1000V

Installing a VSM HA Pair With L3 Mode Behind a VEM Using the Nexus 1000V Installation Management Center

Installing a VSM HA Pair Using the Nexus 1000V Installation Management Center

Installing VEM Software Using the Nexus 1000V Installation Management Center

Adding VEM Hosts to the Distributed Virtual Switch

Moving the Secondary VSM to a Different Host

Setting Virtual Machine Startup and Shutdown Parameters

Installing the VEM Software Using VUM

Installing the VEM Software Using the CLI

Installing VEM Software Locally on a VMware 4.1 Host by Using the CLI

Installing VEM Software Remotely on a VMware 4.1 Host by Using the CLI

Installing VEM Software Locally on a VMware 5.0 Host by Using the CLI

Installing VEM Software Remotely on a VMware 5.0 Host by Using the CLI

Installing the VEM Software on a Stateless ESXi Host

Stateless ESXi Host

Adding the Cisco Nexus 1000V to an ESXi Image Profile

Installing the VEM Software on a Stateless ESXi Host Using esxcli

Installing the VEM Software on a Stateless ESXi Host Using VUM

Configuring Layer 2 Connectivity

Installing a VSM on the Cisco Nexus 1010

Feature History for Installing the Cisco Nexus 1000V


Installing the Cisco Nexus 1000V


This chapter describes how to install the Cisco Nexus 1000V Virtual Supervisor Modules (VSMs) and Virtual Ethernet Modules (VEMs) on a VMware ESX or ESXi server.

This chapter includes the following topics:

Information About Installing the Cisco Nexus 1000V

Prerequisites for Installing the Cisco Nexus 1000V

Guidelines and Limitations

Installing a VSM HA Pair With L3 Mode Behind a VEM Using the Nexus 1000V Installation Management Center

Installing the VEM Software Using VUM

Installing the VEM Software Using the CLI

Installing the VEM Software on a Stateless ESXi Host

Installing a VSM on the Cisco Nexus 1010

Feature History for Installing the Cisco Nexus 1000V

Information About Installing the Cisco Nexus 1000V

This section includes the following topics:

Obtaining the VSM Software

Obtaining the VEM Software

Obtaining the VSM Software

You can obtain the Cisco Nexus 1000V software from the Cisco Nexus 1000V Series Switches web page:

Cisco Nexus 1000V Download Software page

The file name is Nexus1000v.4.2.1.SV1.5.1.zip.

Obtaining the VEM Software

You can obtain the VEM software from the sources listed in Table 2-1.

Table 2-1 Obtaining VEM Software

Source    Description

VSM       After the VSM has been installed as a VM, copy the file that contains the VEM software from the VSM home page, located at http://VSM_IP_Address/.

Cisco     Download the VEM software from the Cisco Nexus 1000V Download Software page.


Information About the Nexus 1000V Installation Management Center

The Nexus 1000V Installation Management Center is the graphical user interface (GUI) for installing the VSMs in HA mode and the VEMs on ESX/ESXi hosts.

The installer does the following for you:

Creates port profiles for the control, management, and packet port groups.

Creates uplink port profiles.

Creates port profiles for VMware kernel NICs.

Specifies a VLAN to be used for system login and configuration, and for control and packet traffic.


Note You can use the same VLAN for the control, packet, and management port groups, but you can also use separate VLANs for flexibility. If you use the same VLAN, make sure that the network segment where it resides has adequate bandwidth and latency.


Enables Telnet and Secure Shell (SSH) and configures an SSH connection.

Creates a Cisco Nexus 1000V plug-in and registers it on the vCenter server.

Migrates each VMware port group or kernel NIC to the correct port profile.

Migrates each physical network interface card (PNIC) from the VMware vSwitch to the correct uplink on the Distributed Virtual Switch (DVS).

Adds the host to the DVS.

To prevent a disruption in connectivity, all port profiles are created with a system VLAN. You can change this after migration if needed.

The host and adapter migration process moves all PNICs used by the VSM from the VMware vSwitches to the Cisco Nexus 1000V DVS.

The migration process supports Layer 2 and Layer 3.

Prerequisites for Installing the Cisco Nexus 1000V

This section lists the prerequisites for installing the Cisco Nexus 1000V and includes the following topics:

Nexus 1000V Installation Management Center Prerequisites

Host Prerequisites

Upstream Switch Prerequisites

VSM Prerequisites

VEM Prerequisites

Nexus 1000V Installation Management Center Prerequisites


Note The Installation Management Center requires you to satisfy all the prerequisites.


If you are migrating the host and adapters from the VMware vSwitch to the Cisco Nexus 1000V DVS:

The host must have one or more physical NICs on each VMware vSwitch in use.

The VMware vSwitch must not have any active VMs.

To prevent a disruption in connectivity during migration, any VMs that share a VMware vSwitch with port groups used by the VSM must be powered off.

You must also configure the VSM connection to the vCenter server datacenter where the host resides.

No VEMs were previously installed on the host where the VSM resides.

You require administrative credentials for vCenter Server.

Host Prerequisites

The ESX or ESXi hosts to be used for the Cisco Nexus 1000V have the following prerequisites:

You have already installed and prepared the vCenter Server for host management using the instructions from VMware.

SSH has been enabled.

You should have the VMware vSphere Client installed.

You have already installed the VMware Enterprise Plus license on the hosts.

All VEM hosts must be running ESX/ESXi 4.1 or later releases.

Each host must have two physical NICs for redundancy.

All hosts must have Layer 2 connectivity to each other.

If you are using a set of switches, make sure that the inter-switch trunk links carry all relevant VLANs, including control and packet VLANs. The uplink should be a trunk port carrying all VLANs configured on the host.

The control and management VLANs must already be configured on the host to be used for the VSM VM.

Make sure that the VM to be used for the VSM meets the minimum requirements listed in Table 2-2.


Caution The VSM VM might fail to boot if RAM and CPU are not properly allocated.
This document includes procedures for allocating RAM and setting the CPU speed.

Table 2-2 Minimum Requirements for a VM Hosting a VSM

VSM VM Component               Minimum Requirement

Platform                       64-bit
Type                           Other 64-bit Linux (recommended)
Processor                      1
RAM (configured and reserved)  2 GB (see footnote 1)
NIC                            3
SCSI hard disk                 3 GB with LSI Logic Parallel adapter
CPU speed                      1500 MHz (see footnote 2)

1 If you are installing the VSM using an OVA file, the correct RAM setting is made automatically during the installation of this file. If you are using the CD ISO image, see the "Installing the Software from the ISO Image" section to reserve RAM and set the memory size.

2 If you are installing the VSM using an OVA file, the correct CPU speed setting is made automatically during the installation of this file. If you are using the CD ISO image, see the "Installing the Software from the ISO Image" section to set the CPU speed.


Upstream Switch Prerequisites

The switch upstream from the Cisco Nexus 1000V has the following prerequisites:

If you are using a set of switches, make sure that the inter-switch trunk links carry all relevant VLANs, including control and packet VLANs. The uplink should be a trunk port that carries all VLANs configured on the host.

The following spanning tree prerequisites apply to the switch upstream from the Cisco Nexus 1000V on the ports connected to the VEM.

On upstream switches, the following configuration is mandatory:

On your Catalyst series switches with Cisco IOS software:
(config-if) spanning-tree portfast trunk
or
(config-if) spanning-tree portfast edge trunk

On your Cisco Nexus 5000 series switches with Cisco NX-OS software:
(config-if) spanning-tree port type edge trunk

On upstream switches we highly recommend that you enable the following globally:

Global BPDU Filtering

Global BPDU Guard

On upstream switches where you cannot globally enable BPDU Filtering and BPDU Guard, we highly recommend that you configure the following:

(config-if) spanning-tree bpdu filter

(config-if) spanning-tree bpdu guard

For more information about spanning tree and its supporting commands, see the documentation for your upstream switch.
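A consolidated sketch of the upstream-switch interface configuration described above might look like the following. This is illustrative only: the interface name, VLAN ranges, and Catalyst platform are assumptions, and on Cisco IOS the per-interface BPDU commands are typically entered as spanning-tree bpdufilter enable and spanning-tree bpduguard enable.

```
! Hypothetical Catalyst (Cisco IOS) uplink interface facing a VEM host.
! Interface name and VLAN ranges are examples only.
interface GigabitEthernet1/0/10
 description Trunk to ESX host running Cisco Nexus 1000V VEM
 switchport mode trunk
 ! Carry all VLANs configured on the host, including control and packet VLANs.
 switchport trunk allowed vlan 100-110,260-262
 spanning-tree portfast trunk
 ! Per-interface equivalents, if BPDU Filtering and BPDU Guard
 ! cannot be enabled globally.
 spanning-tree bpdufilter enable
 spanning-tree bpduguard enable
```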

VSM Prerequisites

The Cisco Nexus 1000V VSM software has the following prerequisites:

You have the VSM IP address.

You have installed the appropriate vCenter Server and VMware Update Manager versions.

VSMs have the following prerequisites:

If you are installing redundant VSMs, make sure that you first install and set up the software on the primary VSM before installing and setting up the software on the secondary VSM.

You have already identified the HA role for this VSM from the list in Table 2-3.

Table 2-3 HA Roles

HA Role                              Single Supervisor System    Dual Supervisor System

Standalone (test environment only)   X                           -
HA                                   -                           X


Note A standalone VSM is not supported in a production environment.


You are familiar with the Cisco Nexus 1000V topology diagram that is shown in Figure 1-4.

VEM Prerequisites

The Cisco Nexus 1000V VEM software has the following prerequisites:


Caution If the VMware vCenter Server is hosted on the same ESX/ESXi host as a Cisco Nexus 1000V VEM, a VUM-assisted upgrade on the host will fail. You should manually vMotion the vCenter Server VM to another host before you perform an upgrade.

When you perform any VUM operation on hosts that are part of a cluster, ensure that the VMware high availability (HA), VMware fault tolerance (FT), and VMware distributed power management (DPM) features are disabled for the entire cluster. Otherwise, VUM cannot install the VEM software on the hosts in the cluster.

You have a copy of your VMware documentation available for installing software on a host.

You have already obtained a copy of the VEM software file from one of the sources listed in Table 2-1.

You have already downloaded the correct VEM software based on the current ESX/ESXi host patch level. For more information, see the Cisco Nexus 1000V Compatibility Information, Release 4.2(1)SV1(5.1).

For a VUM-based installation, you must deploy VUM and make sure that the VSM is connected to the vCenter Server.

Guidelines and Limitations

Installing the Cisco Nexus 1000V has the guidelines and limitations included in the following topics:

Guidelines and Limitations of the Nexus 1000V Installation Management Center

Guidelines and Limitations When Installing the Cisco Nexus 1000V

Guidelines and Limitations of the Nexus 1000V Installation Management Center

Configuring the software using the Nexus 1000V Installation Management Center has the following guidelines and limitations:

For a complete list of port profile guidelines and limitations, see the Cisco Nexus 1000V Port Profile Configuration Guide, Release 4.2(1)SV1(5.1).


Caution Host management connectivity might be interrupted if the management vmknic or vswif is migrated and the uplink's native VLAN is not correctly specified in the setup process.

If you are installing a Cisco Nexus 1000V in an environment where the upstream switch does not support static port channels, such as UCS, you must use the channel group auto mode on mac-pinning command instead of the channel group auto mode command.

This change is required before you add VMNICs in the DVS using this profile.

We recommend that you install redundant VSMs on the Cisco Nexus 1000V. For information about high availability and redundancy, see the Cisco Nexus 1000V High Availability and Redundancy Configuration Guide, Release 4.2(1)SV1(5.1).
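For example, in an environment such as UCS where the upstream switch does not support static port channels, the uplink port profile might use MAC pinning as in the following sketch. The profile name and VLAN IDs are assumptions; see the Cisco Nexus 1000V Port Profile Configuration Guide, Release 4.2(1)SV1(5.1), for the authoritative syntax.

```
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100-110,260-262
  ! mac-pinning instead of a static port channel (required for UCS and
  ! other upstream switches without static port-channel support)
  channel-group auto mode on mac-pinning
  no shutdown
  ! system VLANs keep control and packet traffic flowing during migration
  system vlan 260-262
  state enabled
```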

Guidelines and Limitations When Installing the Cisco Nexus 1000V

Use the following guidelines and limitations when installing the Cisco Nexus 1000V software:

Do not enable VMware fault tolerance (FT) for the VSM VM because it is not supported. Instead, NX-OS HA provides high availability for the VSM.

The VSM VM supports VMware high availability (HA). However, we strongly recommend that you deploy redundant VSMs and configure Cisco NX-OS HA between them. Use the VMware recommendations for the VMware HA.

Do not enable VM Monitoring for the VSM VM because it is not supported, even if you enable the VMware HA on the underlying host. Cisco NX-OS redundancy is the preferred method.

When you move a VSM from the VMware vSwitch to the Cisco Nexus 1000V DVS, connectivity between the active and standby VSMs might be temporarily lost. In that situation, both the active and standby VSMs assume the active role. When connectivity between the VSMs is restored, the VSM that is configured with the primary role reloads itself and comes back up as the standby.

If the VSM is moved from the VMware vSwitch to the Cisco Nexus 1000V DVS, we recommend that you configure port security on the VSM vEthernet interfaces to secure the control and packet MAC addresses.

To improve redundancy, install primary and secondary VSM virtual machines in separate hosts that are connected to different upstream switches.

The Cisco Nexus 1000V VSM always uses the following three network interfaces in the same order as specified below:

1. Control Interface

2. Management Interface

3. Packet Interface
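The port-security recommendation above might be applied roughly as in the following sketch, which enables port security on the vEthernet interfaces that carry the VSM's control and packet traffic. The interface numbers are assumptions; confirm the exact port-security options in the Cisco Nexus 1000V security documentation for this release.

```
! Hypothetical vEthernet interfaces backing the VSM's control and packet NICs.
! Interface numbers are examples only.
interface Vethernet1
  switchport port-security
interface Vethernet2
  switchport port-security
```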

Installing a VSM HA Pair With L3 Mode Behind a VEM Using the Nexus 1000V Installation Management Center

You can install a VSM HA pair behind a VEM by using the Nexus 1000V Installation Management Center.

The steps to install a VSM HA pair behind a VEM are as follows:

1. Installing a VSM HA Pair Using the Nexus 1000V Installation Management Center

2. Installing VEM Software Using the Nexus 1000V Installation Management Center

3. Adding VEM Hosts to the Distributed Virtual Switch

4. Moving the Secondary VSM to a Different Host

5. Setting Virtual Machine Startup and Shutdown Parameters

Installing a VSM HA Pair Using the Nexus 1000V Installation Management Center

You can install a VSM HA pair using the Nexus 1000V Installation Management Center.

BEFORE YOU BEGIN

Before beginning this procedure, you must know or do the following:

You have the following information:

Control VLAN ID

Management VLAN ID

Domain ID

Management IP address

Subnet mask

Gateway IP address

SVS datacenter name

VLAN ID of the untagged traffic

You have JDK version 1.6 or later installed.
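Because the installer is a Java application, you can quickly confirm that a suitable JDK is on your PATH before launching it. The following check is a sketch and not part of the official procedure:

```shell
# Minimal sketch: confirm a JDK is available before launching the
# installer JAR (the installer requires JDK version 1.6 or later).
if command -v java >/dev/null 2>&1; then
  JAVA_STATUS="present: $(java -version 2>&1 | head -n 1)"
else
  JAVA_STATUS="missing: install JDK 1.6 or later"
fi
echo "$JAVA_STATUS"
```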

PROCEDURE


Step 1 Download the Nexus1000v.4.2.1.SV1.5.1.zip file.

Step 2 Enter the following command from a Windows, Linux, or Mac command prompt.

java -jar zip_file_location/Nexus1000v.4.2.1.SV1.5.1/VSM/Installer_App/Nexus1000V-install.jar VSM

The Enter vCenter Credentials screen opens. See Figure 2-1.

Figure 2-1 Enter vCenter Credentials Screen

Step 3 Enter the following vCenter credentials:

vCenter IP address

Secure HTTP port

Port 443 is configured by default, but you can change the port if needed.

vCenter User ID (for a vCenter user with administrator-level privileges)

vCenter Password (for a vCenter user with administrator-level privileges)

Step 4 Click Next.

The Select the VSM's host screen opens. See Figure 2-2.

Figure 2-2 Select the VSM's host Screen

Step 5 Choose a host where the VSM will be deployed.

Step 6 Click Next.

The Select OVA File to create VSM screen opens. See Figure 2-3.

Figure 2-3 Select OVA File to create VSM Screen

Step 7 Click Browse OVA and browse to the location of the OVA file.

Step 8 From the System Redundancy drop-down list, choose a System Redundancy value.


Note If you choose a System Redundancy value of HA, a primary and a secondary VSM are created.


Step 9 In the Virtual Machine Name field, enter a name for the virtual machine.


Note The application appends -1 to the primary VSM and -2 to the secondary VSM.


Step 10 From the VSM Datastore drop-down list, choose a datastore.

Step 11 Click Next.

The Configure Networking screen opens. See Figure 2-4.

Figure 2-4 Configuring Networking Screen

Step 12 To configure Layer 3 connectivity, click L3: Configure port groups for L3.


Note To configure Layer 2 connectivity, see the "Configuring Layer 2 Connectivity" section.


Step 13 For the control port group, do the following:

a. Click the Create New radio button.

b. In the Port Group Name field, enter a port group name.

c. In the VLAN ID field, enter a VLAN.

d. From the vSwitch drop-down list, choose a VMware vSwitch value.

Step 14 For the management port group, do the following:

a. Click the Create New radio button.

b. In the Port Group Name field, enter a port group name.

c. In the VLAN ID field, enter a VLAN.

d. From the vSwitch drop-down list, choose a VMware vSwitch value.


Note You can also use existing port groups for the control and management port groups.


Step 15 For Layer 3 connectivity, choose the mgmt0 radio button and do the following:

a. In the Enter L3 mgmt0 Interface Port Profile VLAN ID field, enter the VLAN of your management network.

Step 16 Click Next.

The Configure VSM screen opens. See Figure 2-5.


Note For Layer 3 control0 mode, control and management IP addresses must be in different subnets. Step 17 will fail if the control and management IP addresses are not in different subnets.


Figure 2-5 Configure VSM Screen

Step 17 In the Configure VSM screen, enter the following information:

In the Switch Name field, enter the switch name.

In the Enter Admin Password field, enter the admin password.


Note All alphanumeric characters and symbols on a standard US keyboard are allowed except for these three: $ \ ?


In the Confirm Admin Password field, enter the admin password.

In the Mgmt IP Address field, enter the mgmt0 IP address of the VSM VM.

In the Subnet Mask field, enter the subnet mask.

In the Gateway IP address field, enter the gateway IP address.

In the Domain ID field, enter the domain ID.

From the SVS Datacenter Name drop-down list, choose a data center name.

In the vSwitch0 Native VLAN ID field, enter the native VLAN ID.


Caution Host management connectivity might be interrupted if VMware kernel 0 and the VMware vSwitch interface 0 are migrated and the native VLAN is not correctly specified.

Step 18 (Optional) Click Enable Telnet if you want to enable Telnet or click Enable SSH if you want to enable SSH.

Step 19 Click Next.

The Review Configuration screen opens. See Figure 2-6.

Figure 2-6 Review Configuration Screen


Step 20 Do one of the following:

To make corrections, click Prev, go back to the previous screens, and make corrections.

If the configuration is correct, continue with Step 21.

Step 21 Click Next.

As the configuration is applied to the VSM, the Review Configuration Installation Progress screen opens. See Figure 2-7.

Figure 2-7 Review Configuration Installation Progress Screen


Note If you chose a redundancy value of HA, the primary and secondary VSMs are being created.


You are then prompted to migrate the host and networks to the new DVS. The Configure Migration screen opens. See Figure 2-8.

Figure 2-8 Configure Migration Screen

Step 22 In the Configure Migration screen, do the following:

a. Click the Yes radio button.

b. In the VMkernel L3 IP Address field, enter an IP address.

c. In the VMkernel L3 subnet mask field, enter the subnet mask.

When you click Yes, one of the channel-group commands in Table 2-4 is applied to the uplink port profile during migration.

Table 2-4 Port Channel Creation During Migration Options

Port channel created during migration            VMware vSwitch teaming policy in use

A static port channel                            Route based on IP hash, or route based on
(channel-group auto mode on)                     the originating virtual port ID

A vPC host mode port channel with MAC pinning    MAC hash
(channel-group auto mode on mac-pinning)



Note If VUM is not installed, the VSM installer installs the vSphere Installation Bundles (VIBs) as a part of the VSM installation.


Step 23 Click Next.

The DVS Migration Components screen opens to display the details of the proposed migration. See Figure 2-9.

Figure 2-9 DVS Migration Components Screen

Step 24 Click Finish.

The migration starts and the DVS Migration Installation Progress screen opens. See Figure 2-10.

Figure 2-10 DVS Migration Installation Progress Screen

The Migration Summary screen opens.

The Initial Installation Completed Successfully screen opens. See Figure 2-11.

Figure 2-11 Initial VSM Installation Completed Successfully Screen

Step 25 Click Close.

Installing VEM Software Using the Nexus 1000V Installation Management Center

You can install VEM software using the Nexus 1000V Installation Management Center.

BEFORE YOU BEGIN

Before beginning this procedure, you must know or do the following:

You have the following information:

vCenter IP address

vCenter User ID

vCenter Password

VSM IP Address

VSM Password


Note The installer application expects that the VIBs are not already installed on the hosts.



Step 1 Enter the following command from a Windows, Linux, or Mac command prompt.

java -jar zip_file_location/Nexus1000v.4.2.1.SV1.5.1/VSM/Installer_App/Nexus1000V-install.jar VEM

The VEM Enter vCenter Credentials screen opens. See Figure 2-12.

Figure 2-12 VEM Enter vCenter Credentials Screen

Step 2 Enter the following vCenter credentials:

vCenter IP address

Secure HTTP port

Port 443 is configured by default, but you can change the port if needed.

vCenter User ID (for a vCenter user with administrator-level privileges)

vCenter Password (for a vCenter user with administrator-level privileges)

Step 3 Click Next.

The VEM Enter VSM IP & Credentials Screen opens. See Figure 2-13.

Figure 2-13 VEM Enter VSM IP & Credentials Screen

Step 4 Enter the following VSM IP address and credentials:

VSM IP address

VSM Password

Step 5 Click Next.

The Select VEM Host(s) screen opens. See Figure 2-14.

Figure 2-14 Select VEM Host(s) Screen

Step 6 Choose the hosts on which to install the VEM software.


Note The hosts that are displayed in the Select VEM Host(s) pane are all the hosts in the datacenter.

You do not need to install the VEM software on the host that contains the VSM. The VEM software was installed on the VSM host during the VSM installation process.

VIBs must not be present on the selected hosts. If VIBs are present, the application fails to install the VEM on that host.

Step 7 Click Next.

The VEM Summary: Please Review Configuration Screen opens. See Figure 2-15.


Note In an initial installation, the VIBs are not yet present on the hosts.


Figure 2-15 VEM Summary: Please Review Configuration Screen

Step 8 Validate the entries and click Finish.

The VEM Summary screen opens. See Figure 2-16.

Figure 2-16 VEM Summary Screen

Step 9 Click Close.


Note If the VEM software fails to install on a host, the following message is displayed: Install status: Failure.


The installation of the VEM software is complete.

For more information about troubleshooting VSMs and VEMs, see the Cisco Nexus 1000V Troubleshooting Guide, Release 4.2(1)SV1(5.1).


Adding VEM Hosts to the Distributed Virtual Switch

You can add VEM hosts to the Cisco Nexus 1000V Distributed Virtual Switch (DVS).

BEFORE YOU BEGIN

Before beginning this procedure, you must know or do the following:

You have the following information:

Physical adapters

Uplink port groups


Step 1 Begin in the vSphere Client window. See Figure 2-17.

Figure 2-17 vSphere Client Window

Step 2 In the vSphere Client window, choose Hosts and Clusters > Networking.

The vSphere Client Hosts window opens. See Figure 2-18.

Figure 2-18 vSphere Client Hosts Window

Step 3 Select the DVS and click the Hosts tab.

The Adding Host to DVS window opens. See Figure 2-19.

Figure 2-19 Adding Host to DVS

Step 4 Right-click the DVS and choose Add Host.

The Select Hosts and Physical Adapters screen opens. See Figure 2-20.

Figure 2-20 Select Hosts and Physical Adapters Screen

Step 5 Choose the hosts and the uplink port groups and click Next.

The Network Connectivity screen opens. See Figure 2-21.

Figure 2-21 Network Connectivity Screen

Step 6 Click Next.

The Virtual Machine Networking screen opens. See Figure 2-22.

Figure 2-22 Virtual Machine Networking Screen

Step 7 Click Next.

The Ready to Complete screen opens. See Figure 2-23.

Figure 2-23 Ready to Complete Screen

Step 8 Click Finish.

The vSphere Client Hosts window opens. See Figure 2-24.

Figure 2-24 vSphere Client Hosts Window

Step 9 Confirm that the hosts are in the Connected state.

The host connection process is complete.


Moving the Secondary VSM to a Different Host

You must move the secondary VSM to a different host to establish a high availability environment.

BEFORE YOU BEGIN

Before beginning this procedure, you must know or do the following:

You have the following information:

Host to which you are moving the secondary VSM

The host to which you are moving the secondary VSM must be a part of the DVS


Step 1 Begin in the vSphere Client window. See Figure 2-25.

Figure 2-25 vSphere Client Window

Step 2 Choose Networking > Hosts and Clusters.

The Powering Off Secondary VSM window opens. See Figure 2-26.

Figure 2-26 Powering Off Secondary VSM

Step 3 Right-click the secondary VSM and choose Power > Power Off.

The Confirm Power Off dialog box opens. See Figure 2-27.

Figure 2-27 Confirm Power Off Dialog Box

Step 4 Click Yes.

The Migrate Secondary VSM window opens. See Figure 2-28.

Figure 2-28 Migrate Secondary VSM Window

Step 5 Right-click the secondary VSM and choose Migrate.

The Select Migration Type window opens. See Figure 2-29.

Figure 2-29 Select Migration Type Window

Step 6 Click the Change both host and datastore radio button and click Next.

The Select Destination screen opens. See Figure 2-30.

Figure 2-30 Select Destination Screen

Step 7 Choose the host for migration and click Next.

The Storage screen opens. See Figure 2-31.

Figure 2-31 Storage Screen

Step 8 Click Next.

The Ready to Complete screen opens. See Figure 2-32.

Figure 2-32 Ready to Complete Screen

Step 9 Click Finish.

The Power On window opens. See Figure 2-33.

Figure 2-33 Power On Window

Step 10 Right-click the secondary VSM and choose Power > Power On.

The movement of the secondary VSM to a different host than the primary VSM is complete.


Setting Virtual Machine Startup and Shutdown Parameters

You can set the VM startup and shutdown parameters.

BEFORE YOU BEGIN

Before beginning this procedure, you must know or do the following:

You have the following information:

Number of seconds for the default startup delay

Number of seconds for the default shutdown delay


Step 1 Begin in the vSphere Client window. See Figure 2-34.

Figure 2-34 vSphere Client Window

Step 2 Choose a host and click the Configuration tab.

The Configuration pane opens. See Figure 2-35.

Figure 2-35 Configuration Pane

Step 3 Choose Virtual Machine Startup/Shutdown.

The Virtual Machine Startup and Shutdown pane opens. See Figure 2-36.

Figure 2-36 Virtual Machine Startup and Shutdown Pane

Step 4 Click Properties.

The System Settings dialog box opens. See Figure 2-37.

Figure 2-37 System Settings Dialog Box

Step 5 Check the Allow virtual machines to start and stop automatically with the system check box.

Step 6 In the System Settings pane, do the following:

Enter a number of seconds in the Default Startup Delay seconds field.

Enter a number of seconds in the Default Shutdown Delay seconds field.

Step 7 In the Startup Order pane, do the following:

Choose the virtual machine.

Click the Move Up button until the virtual machine is under Automatic Startup.

Step 8 Click OK.

Step 9 Repeat Step 2 through Step 8 for the other virtual machine. See Figure 2-38.

Figure 2-38 Secondary Virtual Machine Startup and Shutdown

Figure 2-38 shows the secondary VM setting.

Startup and Shutdown settings are complete.

You have completed the setup of the Cisco Nexus 1000V VSM HA pair.


Installing the VEM Software Using VUM

The VMware Update Manager (VUM) automatically selects the correct VEM software to be installed on the host when the host is added to the DVS.


Note Make sure you read the "VEM Prerequisites" section to ensure that the VUM operation proceeds without failure.


Installing the VEM Software Using the CLI

There are four different installation paths based on the version of VMware ESX/ESXi software that is running on the server.

Installing VEM Software Locally on a VMware 4.1 Host by Using the CLI

Installing VEM Software Remotely on a VMware 4.1 Host by Using the CLI

Installing VEM Software Locally on a VMware 5.0 Host by Using the CLI

Installing VEM Software Remotely on a VMware 5.0 Host by Using the CLI

Installing VEM Software Locally on a VMware 4.1 Host by Using the CLI

BEFORE YOU BEGIN

If you are using the esxupdate command:

You are logged in to the ESX host.

Check the Cisco Nexus 1000V Compatibility Information, Release 4.2(1)SV1(5.1) for compatible versions.

You have already copied the VEM software installation file to the /tmp directory.

You know the name of the VEM software file to be installed.

PROCEDURE


Step 1 From the ESX host /tmp directory, enter the esxupdate command.

If the offline bundle is used, enter the following command:

esxupdate --bundle VMware_offline_update_bundle update

For example:

/tmp # esxupdate --bundle VEM410-201201401.zip update
Unpacking cross_cisco-vem-v14..  ######################################## [100%]
Installing packages :cross_ci..  ######################################## [100%]
Running [/usr/sbin/vmkmod-install.sh]...
ok.

If the VIB file is used, enter the following command:

esxupdate -b VIB_file update

For example:

esxupdate -b cross_cisco-vem-v140-4.2.1.1.5.1.0-2.0.1.vib update
Unpacking cross_cisco-vem-v140-esx..  ################################################# [100%]
Installing packages :cross_cisco-v..  ################################################# [100%]
Running [/usr/sbin/vmkmod-install.sh]...
ok.

This command loads the software manually onto the host, loads the kernel modules, and starts the VEM Agent on the running system.

Step 2 Verify that the VEM software is installed on the host.

esxupdate --vib-view query | grep cisco
cross_cisco-vem-v140-4.2.1.1.5.1.0-2.0.1.vib    installed    2012-02-02T12:29:18.728890+00:00

Step 3 Verify that the installation was successful by checking for the "VEM Agent (vemdpa) is running" statement in the output of the vem status command.

vem status -v 
Package vssnet-esx5.5.0-00000-release
Version 4.2.1.1.5.1.0-2.0.2
Build 2
Date Tue Jan 31 05:01:37 PST 2012
 
Number of PassThru NICs are 0
VEM modules are loaded
 
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks   
vSwitch0         128         3           128               1500    vmnic0    
 
Number of PassThru NICs are 0
VEM Agent (vemdpa) is running

Step 4 Do one of the following:

If the installation was successful, the installation procedure is complete.

If the installation was not successful, see the "Recreating the Cisco Nexus 1000V Installation" section in the Cisco Nexus 1000V Troubleshooting Guide, Release 4.2(1)SV1(5.1).


Installing VEM Software Remotely on a VMware 4.1 Host by Using the CLI

BEFORE YOU BEGIN

Before beginning this procedure, you must know or do the following:

If you are using vCLI:

You have downloaded and installed the VMware vCLI. For information about installing the vCLI, see the VMware vCLI documentation.

You are logged in to the remote machine where the vCLI is installed.


Note The vSphere Command-Line Interface (vCLI) command set allows you to enter common system administration commands against ESX/ESXi systems from any machine with network access to those systems. You can also enter most CLI commands against a vCenter Server system and target any ESX/ESXi system that the vCenter Server system manages. vCLI commands are especially useful for ESXi hosts because ESXi does not include a service console.


PROCEDURE


Step 1 Go to the directory where the new VEM software was copied.

[root@serialport -]# cd tmp 
[root@serialport tmp]# 
 
   

Step 2 Using the vihostupdate command and the name of the new VEM software file, install the VEM software by entering the following command.

[root@serialport tmp]# vihostupdate -i -b ./Cisco_updated_VEM_offline_bundle --server 
vsphere_host 
 
   

For example:

[root@serialport tmp]# vihostupdate -i -b ./VEM410-201201401.zip --server 192.0.2.0 
Enter username: root
Enter password:
Please wait patch installation is in progress ...
Host updated successfully.
 
   

Step 3 Verify that the VEM software is installed on the host.

vihostupdate.pl -q --server host_ip_address 
 
   

For example:

vihostupdate.pl -q --server 192.0.2.1 
Enter username: root 
Enter password: 
 
   

Look for the following:

---------Bulletin ID--------- -----Installed----- ----------------Summary-------
----------
VEM410-201201401-BG           2012-02-02T00:54:14 Cisco Nexus 1000V 4.2(1)SV1(5.1)
 
   

Step 4 Do one of the following:

If the installation was successful, the installation procedure is complete.

If the installation was not successful, see the "Recreating the Cisco Nexus 1000V Installation" section in the Cisco Nexus 1000V Troubleshooting Guide, Release 4.2(1)SV1(5.1).


Installing VEM Software Locally on a VMware 5.0 Host by Using the CLI

PROCEDURE


Step 1 Copy the VEM software to the /tmp directory.

Step 2 Enter the following command.

esxcli software vib install -v /tmp/VIB_FILE 
 
   

For example:

esxcli software vib install -v /tmp/cross_cisco-vem-v140-4.2.1.1.5.1.0-3.0.1.vib 
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: Cisco_bootbank_cisco-vem-v140-esx_4.2.1.1.5.1.0-3.0.1
   VIBs Removed: 
   VIBs Skipped:
 
   

Step 3 Verify that the VEM software is installed on the host.

esxcli software vib list | grep cisco 
cisco-vem-v140-esx 4.2.1.1.5.1.0-3.0.1 Cisco PartnerSupported
2012-02-02
 
   

Step 4 Verify that the installation was successful by checking for the "VEM Agent (vemdpa) is running" statement in the output of the vem status command.

# vem status -v 
Package vssnet-esxmn-ga-release 
Version 4.2.1.1.5.1.0-3.0.1 
Build 1 
Date Mon Jan 30 18:38:49 PST 2012 
 
   
Number of PassThru NICs are 0 
VEM modules are loaded 
 
   
Switch Name    Num Ports   Used Ports  Configured Ports MTU   Uplinks 
vSwitch0       128         3           128              1500  vmnic0 
Number of PassThru NICs are 0 
VEM Agent (vemdpa) is running 
 
   

Step 5 Do one of the following:

If the installation was successful, the installation procedure is complete.

If the installation was not successful, see the "Recreating the Cisco Nexus 1000V Installation" section in the Cisco Nexus 1000V Troubleshooting Guide, Release 4.2(1)SV1(5.1).


Installing VEM Software Remotely on a VMware 5.0 Host by Using the CLI

PROCEDURE


Step 1 Copy the VEM software to the NFS storage that is mounted on the ESXi 5.0 host.

Step 2 Enter the following command from the remote machine where the vCLI is installed.

esxcli --server=server_ip software vib install 
--depot=Path_to_the_NFS_storage_mounted_on_the_ESXi_5.0_host 
 
   

For example:

vi-admin@localhost:~> esxcli --server=192.0.2.2 software vib install 
--depot=/vmfs/volumes/newnfs/MN-patch01/CN-FCS/VEM500-201201140102-BG-release.zip
Enter username: root
Enter password:
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: Cisco_bootbank_cisco-vem-v140-esx_4.2.1.1.5.1.0-3.0.1
   VIBs Removed:
   VIBs Skipped:
 
   

where 192.0.2.2 is the target ESXi 5.0 host IP address and newnfs is the NFS storage mounted on the ESXi 5.0 host.


Note Refer to the official VMware documentation for more information about the esxcli command.


Step 3 Verify that the VEM software is installed on the host.

esxcli --server=host_ip_address software vib list 
 
   

For example:

esxcli --server=192.0.2.1 software vib list 
Enter username: root
Enter password:
 
   

Look for the following:

Name                  Version                             Vendor  Acceptance Lev
el  Install Date
--------------------  ----------------------------------  ------  --------------
--  ------------
cisco-vem-v140-esx    4.2.1.1.5.1.0-3.0.1                 Cisco   PartnerSupport
ed  2012-04-06
 
   
 
   

Step 4 Do one of the following:

If the installation was successful, the installation procedure is complete.

If the installation was not successful, see the "Recreating the Cisco Nexus 1000V Installation" section in the Cisco Nexus 1000V Troubleshooting Guide, Release 4.2(1)SV1(5.1).


Installing the VEM Software on a Stateless ESXi Host

This section includes the following topics:

Stateless ESXi Host

Adding the Cisco Nexus 1000V to an ESXi Image Profile

Installing the VEM Software on a Stateless ESXi Host Using esxcli

Installing the VEM Software on a Stateless ESXi Host Using VUM

Configuring Layer 2 Connectivity

Stateless ESXi Host


Note For Stateless ESXi, the VLAN used for gPXE and Management must be a native VLAN in the Cisco Nexus 1000V management uplink. It must also be a system VLAN on the management VMkernel NIC and on the uplink.


VMware vSphere 5.0.0 introduces the VMware Auto Deploy feature that provides the infrastructure for loading the ESXi image directly into the host's memory. The software image of a stateless ESXi is loaded from the Auto Deploy Server after every boot. In this context, the image with which the host boots is identified as the image profile.

An image profile is a collection of vSphere Installation Bundles (VIBs) that the host requires to operate. The image profile includes base VIBs from VMware and can include additional VIBs from partners.

On a stateless host, you can install or upgrade the VEM software by using either VUM or the CLI.

In addition, the new or modified VEM module must also be bundled into the image profile from which the stateless host boots. If it is not bundled into the image profile, the VEM module does not persist across reboots of the stateless host.

The following procedure describes how to bundle the VEM into the image profile and how to upgrade existing VEMs in the image profile.

For more information about the VMware Auto Deploy Infrastructure and Stateless boot process, see the "Installing ESXi using VMware Auto Deploy" chapter of the vSphere Installation and Setup, vSphere 5.0.0 document.
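The bundling workflow in the next section (clone a read-only stock profile, then add the VEM VIB to the clone) can be modeled with a small sketch. This is an illustrative model only; the class and method names are not VMware or Cisco APIs:

```python
# Sketch of the image-profile workflow: a profile is a named set of
# VIBs; stock profiles are read-only, so a writable clone is created
# before the partner VIB (the VEM) is added.
from dataclasses import dataclass, field

@dataclass
class ImageProfile:
    name: str
    vibs: set = field(default_factory=set)
    read_only: bool = True

    def clone(self, new_name: str) -> "ImageProfile":
        # Cloning yields a writable profile with the same VIB set.
        return ImageProfile(new_name, set(self.vibs), read_only=False)

    def add_vib(self, vib: str) -> None:
        if self.read_only:
            raise PermissionError(f"{self.name} is read-only; clone it first")
        self.vibs.add(vib)

stock = ImageProfile("ESXi-5.0.0-standard", {"esx-base", "net-e1000"})
n1kv = stock.clone("n1kv-Image")
n1kv.add_vib("cisco-vem-v131-esx")
```

Note that the clone copies the VIB set, so adding the VEM VIB to n1kv-Image leaves the stock profile unchanged, mirroring the PowerCLI behavior.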

Adding the Cisco Nexus 1000V to an ESXi Image Profile

You can add Cisco Nexus 1000V VEM software to an ESXi Image Profile by using the VMware PowerCLI.

BEFORE YOU BEGIN

Before beginning this procedure, you must know or do the following:

Install and set up the VMware Auto Deploy server. See the vSphere Installation and Setup, vSphere 5.0.0 document.

Install the VMware PowerCLI on a Windows platform. This step is required for bundling the VEM module into the image profile. For more information, see the vSphere PowerCLI Installation Guide.

On the same Windows platform, where VMware PowerCLI is installed, do the following:

Download the image profile offline bundle, which is a ZIP file, to a local file path.

Download the VEM offline bundle, which is a ZIP file, to a local file path.


Note In the following procedure, the image profile bundle is available as C:\ESXi-5.0.0-depot.zip and the VEM bundle is available as C:\VEM500-20110822140-BG.zip.


PROCEDURE


Step 1 Start the vSphere PowerCLI application.

Step 2 Connect to vCenter Server.

[vSphere PowerCLI] > Connect-VIServer 192.0.2.1 -User Administrator -Password XXXXX 
 
   

Step 3 Load the image profile offline bundle.


Note Each image profile bundle can include multiple image profiles.


[vSphere PowerCLI] > Add-ESXSoftwareDepot c:\vmware-ESXi-5.0.0-depot.zip 
 
   

Step 4 List the image profiles.

[vSphere PowerCLI] > Get-EsxImageProfile 
 
   
Name                             Vendor          Last Modified
----                             ------          -------------
ESXi-5.0.0-standard              VMware, Inc.    2/25/2011 9:42:21 PM
ESXi-5.0.0-no-tools              VMware, Inc.    2/25/2011 9:42:21 PM
 
   

Step 5 Choose the image profile into which the VEM is to be bundled from the output of the Get-EsxImageProfile command.


Note The image profiles are in read-only format. You must clone an image profile before adding the VEM module to it.


[vSphere PowerCLI] > New-EsxImageProfile -CloneProfile ESXi-5.0.0-standard -Name 
n1kv-Image 
 
   

Note The n1kv-Image is the cloned image profile of ESXi-5.0.0-standard.


Step 6 Load the Cisco Nexus 1000V VEM offline bundle.


Note The offline bundle is a zip file that includes the n1kv-vib file.


[vSphere PowerCLI] > Add-EsxSoftwareDepot C:\VEM500-20110822140-BG.zip 
 
   

Step 7 Confirm that the n1kv-vib package is loaded.

[vSphere PowerCLI] > Get-EsxSoftwarePackage -Name cisco* 
 
   
Name                           Version              Vendor          Release
----                           -------              ------          -----------
cisco-vem-v131-esx             4.2.1.1.3.24.0-3.0.8 Cisco           8/22/2011.
 
   

Step 8 Bundle the n1kv package into the cloned image profile.

[vSphere PowerCLI] > Add-EsxSoftwarePackage -ImageProfile n1kv-Image -SoftwarePackage 
cisco-vem-v131-esx 
 
   

Step 9 List all the VIBs in the cloned image profile.

[vSphere PowerCLI]> $img = Get-EsxImageProfile n1kv-Image 
[vSphere PowerCLI]> $img.vibList 
 
   
Name                     Version                        Vendor     Release Date
----                     -------                        ------     ------------
scsi-bnx2i               1.9.1d.v50.1-3vmw.500.0.0.4... VMware     6/22/2011...
net-s2io                 2.1.4.13427-3vmw.500.0.0.43... VMware     6/22/2011...
net-nx-nic               4.0.557-3vmw.500.0.0.434219    VMware     6/22/2011...
scsi-aic79xx             3.1-5vmw.500.0.0.434219        VMware     6/22/2011...
sata-ata-piix            2.12-4vmw.500.0.0.434219       VMware     6/22/2011...
net-e1000e               1.1.2-3vmw.500.0.0.434219      VMware     6/22/2011...
net-forcedeth            0.61-2vmw.500.0.0.434219       VMware     6/22/2011...
tools-light              5.0.0-0.0.434219               VMware     6/22/2011...
ipmi-ipmi-msghandler     39.1-4vmw.500.0.0.434219       VMware     6/22/2011...
scsi-aacraid             1.1.5.1-9vmw.500.0.0.434219    VMware     6/22/2011...
net-be2net               4.0.88.0-1vmw.500.0.0.434219   VMware     6/22/2011...
sata-ahci                3.0-6vmw.500.0.0.434219        VMware     6/22/2011...
ima-qla4xxx              2.01.07-1vmw.500.0.0.434219    VMware     6/22/2011...
ata-pata-sil680          0.4.8-3vmw.500.0.0.434219      VMware     6/22/2011...
scsi-ips                 7.12.05-4vmw.500.0.0.434219    VMware     6/22/2011...
scsi-megaraid-sas        4.32-1vmw.500.0.0.434219       VMware     6/22/2011...
scsi-mpt2sas             06.00.00.00-5vmw.500.0.0.43... VMware     6/22/2011...
net-cnic                 1.10.2j.v50.7-2vmw.500.0.0.... VMware     6/22/2011...
ipmi-ipmi-si-drv         39.1-4vmw.500.0.0.434219       VMware     6/22/2011...
esx-base                 5.0.0-0.0.434219               VMware     6/22/2011...
ata-pata-serverworks     0.4.3-3vmw.500.0.0.434219      VMware     6/22/2011...
scsi-mptspi              4.23.01.00-5vmw.500.0.0.434219 VMware     6/22/2011...
net-bnx2x                1.61.15.v50.1-1vmw.500.0.0.... VMware     6/22/2011...
ata-pata-hpt3x2n         0.3.4-3vmw.500.0.0.434219      VMware     6/22/2011...
sata-sata-sil            2.3-3vmw.500.0.0.434219        VMware     6/22/2011...
scsi-hpsa                5.0.0-17vmw.500.0.0.434219     VMware     6/22/2011...
block-cciss              3.6.14-10vmw.500.0.0.434219    VMware     6/22/2011...
net-tg3                  3.110h.v50.4-4vmw.500.0.0.4... VMware     6/22/2011...
net-igb                  2.1.11.1-3vmw.500.0.0.434219   VMware     6/22/2011...
ata-pata-amd             0.3.10-3vmw.500.0.0.434219     VMware     6/22/2011...
ata-pata-via             0.3.3-2vmw.500.0.0.434219      VMware     6/22/2011...
net-e1000                8.0.3.1-2vmw.500.0.0.434219    VMware     6/22/2011...
scsi-adp94xx             1.0.8.12-6vmw.500.0.0.434219   VMware     6/22/2011...
scsi-lpfc820             8.2.2.1-18vmw.500.0.0.434219   VMware     6/22/2011...
scsi-mptsas              4.23.01.00-5vmw.500.0.0.434219 VMware     6/22/2011...
ata-pata-cmd64x          0.2.5-3vmw.500.0.0.434219      VMware     6/22/2011...
sata-sata-svw            2.3-3vmw.500.0.0.434219        VMware     6/22/2011...
misc-cnic-register       1.1-1vmw.500.0.0.434219        VMware     6/22/2011...
ipmi-ipmi-devintf        39.1-4vmw.500.0.0.434219       VMware     6/22/2011...
sata-sata-promise        2.12-3vmw.500.0.0.434219       VMware     6/22/2011...
sata-sata-nv             3.5-3vmw.500.0.0.434219        VMware     6/22/2011...
cisco-vem-v131-esx       4.2.1.1.3.24.0-3.0.8           Cisco      6/30/2011...
 
   

Step 10 Export the image profile to a depot file for future use.

[vSphere PowerCLI] > Export-EsxImageProfile -ImageProfile n1kv-Image -FilePath 
C:\n1kv-Image.zip -ExportToBundle 
 
   

Step 11 Set up the rule for the host to boot with this image profile.


Note Any of the host parameters, such as the MAC address, IPv4 address, or domain name, can be used to associate an image profile with the host.


[vSphere PowerCLI] > New-deployrule -item $img -name rule-test -Pattern 
"mac=00:50:56:b6:03:c1" 
[vSphere PowerCLI] > Add-DeployRule -DeployRule rule-test 
 
   

Step 12 Display the configured rule to make sure the correct image profile is associated with the host.

[vSphere PowerCLI] > Get-DeployRuleSet 
 
   
Name        : rule-test
PatternList : {mac=00:50:56:b6:03:c1}
ItemList    : {n1kv-Image}
 
   

Step 13 Reboot the host.

The host contacts the Auto Deploy server and presents its boot parameters. The Auto Deploy server checks its rules to find the image profile associated with this host, loads that image into the host's memory, and the host boots from it.
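The rule lookup described above can be sketched as a simple pattern match: the server compares the booting host's attributes against each rule's pattern and serves the associated image profile. This is a simplified model, not the Auto Deploy implementation:

```python
# Sketch of Auto Deploy rule matching: each rule pairs a pattern such
# as "mac=00:50:56:b6:03:c1" with an image profile; the first rule
# whose pattern matches the host's attributes wins.
def match_image_profile(rules, host_attrs):
    """Return the image profile of the first matching rule, else None."""
    for pattern, image in rules:
        key, _, value = pattern.partition("=")
        if host_attrs.get(key) == value:
            return image
    return None

rules = [("mac=00:50:56:b6:03:c1", "n1kv-Image")]
host = {"mac": "00:50:56:b6:03:c1", "ipv4": "192.0.2.10"}
print(match_image_profile(rules, host))  # prints n1kv-Image
```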


Installing the VEM Software on a Stateless ESXi Host Using esxcli

You can install the VEM software by using the esxcli command.

BEFORE YOU BEGIN

Before beginning this procedure, you must know or do the following:

When entering the esxcli software vib install command on an ESXi 5.0.0 host, the following message displays:

Message: WARNING: Only live system was updated, the change is not persistent.

PROCEDURE


Step 1 Display the VMware version and build number.

~ # vmware -v 
VMware ESXi 5.0.0 build-441354
~ #
~ # vmware -l 
VMware ESXi 5.0.0 GA
 
   

Step 2 Log in to the ESXi stateless host.

Step 3 Install the VEM offline bundle on the host.

~ # esxcli software vib install -d 
/vmfs/volumes/newnfs/MN-VEM/VEM500-20110728153-BG-release.zip 
Installation Result
   Message: WARNING: Only live system was updated, the change is not persistent.
   Reboot Required: false
   VIBs Installed: Cisco_bootbank_cisco-vem-v131-esx_4.2.1.1.4.1.0-3.0.5
   VIBs Removed:
   VIBs Skipped:
 
   

Note If the host is an ESXi 5.0.0 stateful host, the "Message: Operation finished successfully" line appears.


Step 4 Verify that the VIB has installed.

~ # esxcli software vib list | grep cisco 
cisco-vem-v131-esx    4.2.1.1.4.1.0-3.0.5                 Cisco   PartnerSupported  
2011-08-18
 
   

Step 5 Check that the VEM agent is running.

~ # vem status -v 
Package vssnet-esxmn-ga-release
Version 4.2.1.1.4.1.0-3.0.5
Build 5
Date Thu Jul 28 01:37:10 PDT 2011
Number of PassThru NICs are 0
VEM modules are loaded
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         128         4           128               1500    vmnic4
Number of PassThru NICs are 0
VEM Agent (vemdpa) is running
 

Step 6 Display the VEM version, VSM version, and ESXi version.

~ # vemcmd show version 
VEM Version: 4.2.1.1.4.1.0-3.0.5
VSM Version:
System Version: VMware ESXi 5.0.0 Releasebuild-441354
 

Step 7 Display the ESXi version and details about pass-through NICs.

~ # vem version -v 
Number of PassThru NICs are 0
Running esx version -441354 x86_64
VEM Version: 4.2.1.1.4.1.0-3.0.5
VSM Version:
System Version: VMware ESXi 5.0.0 Releasebuild-441354
 
   

Step 8 Add the host to the DVS by using the vCenter Server.

Step 9 Enter the show module command on the VSM.

switch# show module 
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
3    248    Virtual Ethernet Module           NA                  ok
 
   
Mod  Sw                Hw
---  ----------------  ------------------------------------------------
1    4.2(1)SV1(4a)     0.0
2    4.2(1)SV1(4a)     0.0
3    4.2(1)SV1(4a)     VMware ESXi 5.0.0 Releasebuild-441354 (3.0)
 
   
Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
 
   
Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    10.104.62.227    NA                                    NA
2    10.104.62.227    NA                                    NA
3    10.104.62.216    3fa746d4-de2f-11de-bd5d-c47d4f7ca460  sans2-216.cisco.com
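The show module check in Step 9 can also be scripted against captured output. A minimal sketch (illustrative helper, not a Cisco tool) looks for a Virtual Ethernet Module row whose status is ok:

```python
# Sketch: confirm the VEM registered with the VSM by scanning captured
# "show module" output for a Virtual Ethernet Module with status "ok".
def vem_module_ok(show_module_output: str) -> bool:
    for line in show_module_output.splitlines():
        if "Virtual Ethernet Module" in line and line.rstrip().endswith("ok"):
            return True
    return False

sample = """Mod  Ports  Module-Type                Model       Status
1    0      Virtual Supervisor Module  Nexus1000V  active *
3    248    Virtual Ethernet Module    NA          ok"""
print(vem_module_ok(sample))  # prints True
```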
 
   

Installing the VEM Software on a Stateless ESXi Host Using VUM

The following procedure shows you how to install the VEM software using VUM.

BEFORE YOU BEGIN

Make sure the VUM patch repository has VEM software downloaded.


Step 1 In the vCenter Server, choose Home > Update Manager > Configuration > ESX host/Cluster settings.

The ESX Host/Cluster Settings window opens. See Figure 39.

Figure 39 ESX Host/Cluster Settings Window

Step 2 Check the PXE Booted ESXi Host Settings check box.

Step 3 Add the host to the DVS by using the vCenter Server.


Configuring Layer 2 Connectivity


Note Layer 3 connectivity is the preferred method.


Figure 2-40 shows an example of redundant VSM VMs, where the software for the primary VSM is installed on ESXi 1, and the software for the secondary VSM is installed on ESXi 2 for Layer 2 connectivity.

Figure 2-40 Cisco Nexus 1000V Installation Diagram for Layer 2

Figure 2-41 shows a VSM and VEM running on the same host in Layer 2 mode.

Figure 2-41 VSM and VEM on the Same Host In Layer 2 Mode

The following section describes how to configure L2 mode.

PROCEDURE


Step 1 To configure a different VMware vSwitch port group for each VSM network adapter, click L2: Configure port groups for L2 in the Configure Networking screen. See Figure 2-42.

Figure 2-42 Configure Networking Screen

Step 2 In the L2: Configure port groups for L2 screen, do the following:

Choose your port groups from the Port Group drop-down lists.

(Optional) In the VLAN ID field, enter the VLAN ID. This is needed only if you choose to create a new port group.

Click Next.

The Configure VSM screen opens. See Figure 2-5.

Step 3 Return to Step 17 in the Layer 2 configuration procedure.


Installing a VSM on the Cisco Nexus 1010

The following procedure shows you how to install the VSM on the Cisco Nexus 1010 and move from Layer 2 to Layer 3 connectivity.


Step 1 Create a virtual service blade by entering the following commands.

N1010 (config)# show virtual-service-blade summary 
 
   
---------------------------------------------------------------------------------
Name        HA-Role       HA-Status     Status               Location
---------------------------------------------------------------------------------
N1010(config)# virtual-service-blade vsm-1 
N1010(config-vsb-config)# virtual-service-blade-type new nexus-1000v.4.2.1.SV1.5.1.iso 
N1010(config-vsb-config)# show virtual-service-blade summary 
 
   
--------------------------------------------------------------------------------------
Name        HA-Role       HA-Status     Status                   Location
--------------------------------------------------------------------------------------
vsm-1       PRIMARY       NONE          VSB NOT PRESENT          PRIMARY
vsm-1       SECONDARY     NONE          VSB NOT PRESENT          SECONDARY
 
   
N1010(config-vsb-config)#
 
   

Step 2 Configure the control, packet, and management interface VLANs for static and flexible topologies.

N1010(config-vsb-config)# interface management vlan 100 
N1010(config-vsb-config)# interface control vlan 101 
N1010(config-vsb-config)# interface packet vlan 101 
 
   

Step 3 Configure Cisco Nexus 1000V on the Cisco Nexus 1010.

N1010(config-vsb-config)# enable 
Enter vsb image: [nexus-1000v.4.2.1.SV1.5.1.iso]
Enter domain id[1-4095]: 101 
SVS control mode (L2 / L3) : [L3] 
Management IP version [V4/V6]: [V4] 
Enter Management IP address: 192.0.2.79 
Enter Management subnet mask: 255.255.255.0 
IPv4 address of the default gateway: 192.0.2.1 
Enter HostName: n1000v 
Enter the password for `admin': ********
Note: VSB installation is in progress, please use show virtual-service-blade commands to 
check the installation status.
N1010(config-vsb-config)# 
 
   

Step 4 Display primary and secondary VSM status.

N1010(config-vsb-config)# show virtual-service-blade summary 
 
   
--------------------------------------------------------------------------------------
Name        HA-Role       HA-Status     Status                   Location
--------------------------------------------------------------------------------------
vsm-1       PRIMARY       NONE          VSB POWER ON IN PROGRESS PRIMARY
vsm-1       SECONDARY     ACTIVE        VSB POWERED ON           SECONDARY
 
   

Step 5 Log in to the VSM.

N1010(config)# virtual-service-blade vsm-1 
N1010(config-vsb-config)# login virtual-service-blade vsm-1
Telnet escape character is `^\'.
Trying 192.0.2.18...
Connected to 192.0.2.18.
Escape character is `^\'.
 
   
Nexus 1000v Switch
n1000v login: admin
Password:
Cisco Nexus Operating System (NX-OS) Software 
TAC support: http://www.cisco.com/tac 
Copyright (c) 2002-2012, Cisco Systems, Inc. All rights reserved. 
The copyrights to certain works contained in this software are 
owned by other third parties and used and distributed under 
license. Certain components of this software are licensed under 
the GNU General Public License (GPL) version 2.0 or the GNU 
Lesser General Public License (LGPL) Version 2.1. A copy of each 
such license is available at 
http://www.opensource.org/licenses/gpl-2.0.php and 
http://www.opensource.org/licenses/lgpl-2.1.php 
n1000v#
 
   

Step 6 Create a Layer 3 port profile.

n1000v# configure terminal
n1000v(config)# port-profile type vethernet l3_control
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 3160
n1000v(config-port-prof)# capability l3control
n1000v(config-port-prof)# vmware port-group
n1000v(config-port-prof)# system vlan 3160
n1000v(config-port-prof)# state enabled
n1000v(config-port-prof)# no shutdown
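A port profile that carries capability l3control, as configured above, must have its access VLAN also configured as a system VLAN (here, VLAN 3160 appears in both commands). A small consistency-check sketch of that constraint, using an illustrative dict model rather than any Cisco API:

```python
# Sketch of the Layer 3 control constraint: if a port profile has
# "capability l3control", its access VLAN must also be a system VLAN.
def l3_profile_consistent(profile: dict) -> bool:
    if not profile.get("l3control"):
        return True  # constraint applies only to l3control profiles
    return profile.get("access_vlan") in profile.get("system_vlans", ())

good = {"l3control": True, "access_vlan": 3160, "system_vlans": {3160}}
bad = {"l3control": True, "access_vlan": 3160, "system_vlans": set()}
print(l3_profile_consistent(good), l3_profile_consistent(bad))  # prints True False
```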
 
   

Step 7 Change svs mode from L2 to L3 in Cisco Nexus 1000V.

n1000v(config)# svs-domain 
n1000v(config-svs-domain)# no control vlan 
Warning: Config saved but not pushed to vCenter Server due to inactive connection! 
n1000v(config-svs-domain)# no packet vlan 
Warning: Config saved but not pushed to vCenter Server due to inactive connection! 
n1000v(config-svs-domain)# svs mode L3 interface mgmt0 
Warning: Config saved but not pushed to vCenter Server due to inactive connection! 
n1000v(config-svs-domain)# show svs domain 
SVS domain config
  Domain id:    101
  Control vlan: 1 
  Packet vlan:  1
  L2/L3 Control mode: L3
  L3 control interface: mgmt0
  Status: Config push to VC failed: (communication failure to VC).
n1000v(config-svs-domain)#
 
   

Feature History for Installing the Cisco Nexus 1000V

Table 2-5 lists the release history for this feature.

Table 2-5 Feature History for Installing the Cisco Nexus 1000V 

Feature Name
Releases
Feature Information

VSM and VEM Installation

4.2(1)SV1(5.1)

Java applications introduced for VSM and VEM installation.

Installing the Cisco Nexus 1000V

4.0(1)SV1(1)

Introduced in this release.