VMware Built on FlexPod Deployment Guide

Table of Contents

About the Authors

About Cisco Validated Design (CVD) Program

VMware Built on FlexPod Deployment Guide

VMware Built on FlexPod Overview

Audience

VMware Built on FlexPod Configuration Deployment

Cabling Information

VMware ESXi Deployment Procedure

VMware vCenter Server Deployment Procedure

Cisco Nexus 1010 and 1000V Deployment Procedure

NetApp Virtual Storage Console Deployment Procedure

NetApp Operations Manager Deployment Procedure

Appendix—VMware Built on FlexPod Configuration Information

Nexus 1010 and 1000V Configuration Information

VMware Configuration Information

Cisco Nexus 5548 Sample Running Configuration

Cisco Nexus 1010 Sample Running Configuration

Cisco Nexus 1000V Sample Running Configuration

References


VMware Built on FlexPod Deployment Guide
Last Updated: August 31, 2011

Building Architectures to Solve Business Problems

About the Authors

David Antkowiak, Solutions Architect, Systems Development Unit, Cisco Systems

David Antkowiak is a Solutions Architect with the Systems Development Unit (SDU). With over 10 years of experience in various private and government organizations, his areas of focus have included virtual desktop infrastructure, server virtualization, cloud migration, and storage design. Prior to joining Cisco, David was a Solutions Architect at JetBlue. David holds a master's degree from Florida State University and two VMware certifications.

Chris Reno, Reference Architect, Infrastructure and Cloud Engineering, NetApp

Chris Reno is a Reference Architect in the NetApp Infrastructure and Cloud Enablement group and is focused on creating, validating, supporting, and evangelizing solutions based on NetApp products. Before his current role, he worked with NetApp product engineers designing and developing innovative ways to perform QA for NetApp products, including enablement of a large grid infrastructure using physical and virtualized compute resources. In these roles, Chris gained expertise in stateless computing, netboot architectures, and virtualization.

Mike Zimmerman, Reference Architect, Infrastructure and Cloud Enablement, NetApp

Mike Zimmerman is a Reference Architect in NetApp's Infrastructure and Cloud Engineering team. He focuses on the implementation, compatibility, and testing of various vendor technologies to develop innovative end-to-end cloud solutions for customers. Zimmerman started his career at NetApp as an architect and administrator of Kilo Client, NetApp's internal cloud infrastructure, where he gained extensive knowledge and experience building end-to-end shared architectures based upon server, network, and storage virtualization.

Wen Yu, Senior Infrastructure Technologist, VMware

Wen Yu is a Senior Infrastructure Technologist at VMware, with a focus on partner enablement and evangelism of virtualization solutions. Wen has been with VMware for six years, four of which were spent providing engineering-level escalation support for customers. Wen specializes in virtualization products for continuous availability, backup recovery, disaster recovery, desktop, and vCloud, and is VMware, Red Hat, and ITIL certified.

About Cisco Validated Design (CVD) Program


The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit http://www.cisco.com/go/designzone.

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

Cisco and the Cisco Logo are trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at http://www.cisco.com/go/trademarks. Third party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1005R)

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

© 2011 Cisco Systems, Inc. All rights reserved.


VMware Built on FlexPod Overview

Industry trends indicate a vast data center transformation toward shared infrastructure. Enterprise customers are moving away from silos of information toward shared infrastructures, then to virtualized environments, and eventually to the cloud to increase agility and reduce costs.

FlexPod™ is a predesigned, base configuration that is built on the Cisco® Unified Computing System™ (UCS), Cisco Nexus® data center switches, and NetApp® FAS storage components and includes a range of software partners. FlexPod can scale up for greater performance and capacity or it can scale out for environments that need consistent, multiple deployments. FlexPod is a baseline configuration, but also has the flexibility to be sized and optimized to accommodate many different use cases.

VMware Built on FlexPod is a platform that can address current virtualization needs and simplify the evolution to an IT-as-a-Service (ITaaS) infrastructure. It is built on the FlexPod infrastructure stack with added VMware® components, including VMware vSphere™ and vCenter™, for virtualized application workloads. The FlexPod Deployment Guide is available at: http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/flexpod_deploy.html.

NetApp partners may access additional information at: https://fieldportal.netapp.com/.

Audience

This document describes the general procedures for deploying VMware on a base FlexPod. The intended audience of this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers who want to deploy the VMware Built on FlexPod architecture.


Note For more deployment information, Cisco, NetApp, and VMware partners should contact their local account teams or visit: http://www.netapp.com/us/technology/flexpod/.

VMware Built on FlexPod Configuration Deployment

The first step is to set up and configure the base FlexPod (see the FlexPod Deployment Guide available at: http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/flexpod_deploy.html).

The following section provides information on configuring VMware vSphere and vCenter on a base FlexPod. The FlexPod for VMware architecture is flexible, so the exact configuration detailed below may vary depending on specific customer implementation requirements. The practices, features, and configurations below may be used to build a customized VMware Built on FlexPod deployment.

Cabling Information

Follow the cabling instructions in the FlexPod Deployment Guide available at: http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/flexpod_deploy.html.


Note The Nexus 1010 is an additional FlexPod component that is only used in VMware environments.

Figure 1 VMware Built on FlexPod Cabling

Table 1 VMware Built on FlexPod Ethernet Cabling Information

Local Device             Local Port   Connection   Remote Device         Remote Port
Cisco Nexus 5548 A (1)   Eth1/7       1GbE         Cisco Nexus 1010 A    Eth1
Cisco Nexus 5548 A (1)   Eth1/8       1GbE         Cisco Nexus 1010 B    Eth1
Cisco Nexus 5548 B (1)   Eth1/7       1GbE         Cisco Nexus 1010 A    Eth2
Cisco Nexus 5548 B (1)   Eth1/8       1GbE         Cisco Nexus 1010 B    Eth2
Cisco Nexus 1010 A (2)   Eth1         1GbE         Cisco Nexus 5548 A    Eth1/7
Cisco Nexus 1010 A (2)   Eth2         1GbE         Cisco Nexus 5548 B    Eth1/7
Cisco Nexus 1010 B (2)   Eth1         1GbE         Cisco Nexus 5548 A    Eth1/8
Cisco Nexus 1010 B (2)   Eth2         1GbE         Cisco Nexus 5548 B    Eth1/8

(1) The Cisco Nexus 1010 virtual appliances require the use of two 1GbE copper SFPs (GLC-T=) in these ports.
(2) The Cisco Nexus 1010 is only used with VMware.


VMware ESXi Deployment Procedure

This section describes the installation of ESXi 4.1 on the Cisco UCS and should result in the following:

A functional ESXi host

NFS and vMotion network connectivity

Availability of NFS datastores to the ESXi host

The following outlines the process for installing VMware ESXi within a FlexPod environment:

1. Deploy VMware ESXi via the UCSM KVM console. There are multiple methods for installing ESXi within such an environment; in this case, an ISO image is mounted via the KVM console to make the ESXi installer accessible to the blade.

2. Set up the ESXi host's administration password.

3. Set up the ESXi host's management networking.

4. Set up the management VLAN.

5. Set up DNS.

6. Set up the NFS and vMotion VMkernel ports with jumbo frame MTU (see the sample commands after this procedure).

7. Access the ESXi host via a Web browser and download the VMware vSphere Client.

8. Log into the VMware ESXi host using the VMware vSphere Client.

9. Set up the vMotion VMkernel port on the virtual switch for individual hosts.

10. Change the VLAN ID for the default VM-Traffic port group, "VM-Network".

11. Mount the required datastores for individual hosts.

12. Set the NTP time configuration for individual hosts.

13. Move the swapfile from local storage to an NFS export location.


Note For detailed ESXi 4.1 installation instructions, see: http://www.vmware.com/support/pubs/vs_pubs.html.
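
As an illustrative sketch only (not part of the original procedure), steps 6 and 11 can be performed from ESXi Tech Support Mode or the vSphere CLI roughly as follows. The vSwitch name, IP addresses, and export path are placeholder values that must match your environment; the VLAN IDs shown mirror the NFS and vMotion VLANs in the appendix sample configurations:

# Set jumbo MTU on the vSwitch, then create NFS and vMotion VMkernel ports (example values)
esxcfg-vswitch -m 9000 vSwitch0
esxcfg-vswitch -A NFS vSwitch0
esxcfg-vswitch -v 900 -p NFS vSwitch0
esxcfg-vmknic -a -i 192.168.90.101 -n 255.255.255.0 -m 9000 NFS
esxcfg-vswitch -A vMotion vSwitch0
esxcfg-vswitch -v 901 -p vMotion vSwitch0
esxcfg-vmknic -a -i 192.168.91.101 -n 255.255.255.0 -m 9000 vMotion
# Mount an NFS export from the storage controller as a datastore (step 11)
esxcfg-nas -a -o 192.168.90.100 -s /vol/infra_datastore_1 infra_datastore_1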

VMware vCenter Server Deployment Procedure

The following section describes the installation of VMware vCenter 4.1 within a FlexPod environment and results in the following:

A running VMware vCenter virtual machine

A running SQL virtual machine acting as the vCenter database server

A vCenter DataCenter with associated ESXi hosts

VMware DRS and HA functionality enabled

The deployment procedures necessary to achieve these objectives include:

1. Log into the VMware ESXi host using the VMware vSphere Client.

2. Build a SQL Server VM using a Windows Server 2008 R2 x64 image.

3. Create the required databases and database users using the script provided in the vCenter installation directory (a sketch of equivalent T-SQL appears after this procedure).

Note VMware vCenter can use one of a number of vendor databases; this deployment guide assumes Microsoft SQL Server 2008. If a compatible database server already exists, you can create the required database instance for vCenter on it and skip this step.

4. Build a vCenter virtual machine on another Windows Server 2008 R2 virtual machine instance. The default disk partitioning used in Windows Server 2008 aligns the disk blocks in the virtual machine disk file with the NetApp storage system disk blocks. If an earlier version of Windows Server is being used, refer to NetApp TR-3747: Best Practices for File System Alignment in Virtual Environments.

5. Install the SQL Server 2008 R2 Native Client on the vCenter virtual machine.

6. Create a Data Source Name (DSN) referencing the SQL instance on the vCenter machine.

7. Install VMware vCenter Server, referencing the SQL Server data source previously established.

8. Create a vCenter Datacenter.

9. Create a new management cluster with DRS and HA enabled.

10. Add hosts to the management cluster.


Note For detailed vCenter 4.1 installation instructions, see: http://www.vmware.com/support/pubs/vs_pubs.html.
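
The following is a minimal sketch of the database setup in step 3, assuming Microsoft SQL Server; the database name VCDB, the login vpxuser, and the password are conventional placeholder examples, not values mandated by this guide. Use the script shipped with vCenter for the authoritative version:

use [master];
-- Example vCenter database and login (names and password are placeholders)
CREATE DATABASE [VCDB];
CREATE LOGIN [vpxuser] WITH PASSWORD = N'<strong_password>',
  DEFAULT_DATABASE = [VCDB], CHECK_POLICY = OFF;
use [VCDB];
CREATE USER [vpxuser] FOR LOGIN [vpxuser];
-- vCenter requires db_owner rights on its database
EXEC sp_addrolemember @rolename = N'db_owner', @membername = N'vpxuser';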

Cisco Nexus 1010 and 1000V Deployment Procedure

The following section outlines the procedures to deploy the Cisco Nexus 1010 and 1000V platforms within a FlexPod environment. At the completion of this section, the following should be in place:

A clustered pair of Cisco Nexus 1010s, configured with uplink type 1 connectivity using two of the six 1Gb ports on the 1010. For options supporting additional connections and port-channeling of uplinks, see: http://www.cisco.com/en/US/partner/docs/switches/datacenter/nexus1000/sw/4_2_1_s_p_1_2/software/configuration/guide/n1010_vsvcs_cfg_4uplink.html.

An active/standby pair of Cisco Nexus 1000V virtual supervisor modules (VSMs)

The Cisco Nexus 1000V acting as the virtual distributed switching platform for vSphere, supporting the VM, NFS, and vMotion traffic types

The following procedures are required to meet these objectives:

1. Log into the Cisco Nexus 1010 virtual appliance console.

2. Configure the CIMC ("out-of-band") management interface.

3. Execute the Cisco Nexus 1010 virtual appliance setup.

4. Create and install the Cisco Nexus 1000V VSM on a Nexus 1010 virtual service blade (see the sketch after this procedure).

5. Register the Cisco Nexus 1000V as a vCenter plug-in.

6. Configure networking on the Cisco Nexus 1000V, including:

   - Management, NFS, vMotion, and virtual machine data traffic VLANs

   - vCenter connectivity

   - Ethernet port profile(s) for host uplinks (multiple Ethernet uplink port profiles can be configured for a host when using the Cisco M81KR VIC adapter, to provide prioritization and rate limiting for the virtual uplinks defined on the VIC adapter)

   - vEthernet port profiles for VM interfaces and host VMkernel ports

7. Install the Nexus 1000V VEMs on each ESXi host using VMware Update Manager or the RCLI/CLI options.

8. Replace the default virtual switch with the Cisco Nexus 1000V and add uplink ports to the Cisco Nexus 1000V.

9. Enable jumbo frames on the Nexus 1000V.
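
For illustration, the VSM creation in step 4 looks roughly like the following on the Nexus 1010. This is a minimal sketch in which the blade name, control/packet VLAN, and resource sizes are example values that mirror the sample configuration in the appendix:

virtual-service-blade vsm-1
  virtual-service-blade-type name VSM-1.0
  ! control and packet VLANs carried to the VSM (950 in the appendix sample)
  interface control vlan 950
  interface packet vlan 950
  ramsize 2048
  disksize 3
  no shutdown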

NetApp Virtual Storage Console Deployment Procedure

The following presents the general procedures for installing the NetApp Virtual Storage Console (VSC) for use in a VMware Built on FlexPod environment.

1. Install the NetApp Virtual Storage Console on a dedicated virtual machine running Microsoft Windows Server 2008 R2 x64 with 4 GB of RAM, 30 GB of storage, and two network interfaces (one each for management and NFS traffic).


Note The VSC download is available at: http://now.netapp.com.

Note This machine may also host the NetApp DataFabric Manager.

2. Configure the VSC plug-in to register with vCenter (see the note after this procedure).

3. Configure the VSC via the vCenter NetApp tab to work with the FlexPod vFilers.

4. Set the recommended values for ESXi hosts per NetApp best practices for HBA/CNA, MPIO, and NFS.
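
As a pointer (an assumption based on VSC releases of this era, not a statement from the original guide), the vCenter registration in step 2 is typically completed from a browser against the VSC registration page on the VSC virtual machine, for example:

https://<vsc_vm_hostname>:8143/Register.html

Here <vsc_vm_hostname> is a placeholder for the VSC virtual machine, and 8143 is the default VSC port in those releases; verify both against the release notes for your VSC version.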

NetApp Operations Manager Deployment Procedure

The following section provides the general procedures for configuring NetApp Operations Manager, which is part of the DataFabric Manager (DFM) 4.0 suite, for use in a VMware Built on FlexPod environment. After completing this section, the following should be available:

A Microsoft Windows 2008 virtual machine running the NetApp DataFabric Manager suite, including:

Operations Manager

Provisioning Manager

Protection Manager

NetApp Operations Manager monitoring both FlexPod storage controllers

The following procedures are required to meet these objectives:

1. Install DFM (downloaded via Web browser) on the same Windows virtual machine that hosts the NetApp Virtual Storage Console.


Note DFM is available at: http://now.netapp.com/NOW/download/software/dfm_win/Windows/.

2. Generate a secure SSL key for the DFM HTTPS server.

3. Enable HTTPS.

4. Add a license to the DFM server.

5. Enable the SNMPv3 configuration.

6. Configure AutoSupport information.

7. Run diagnostics to verify DFM communication with the FlexPod controllers.

8. Configure an SNMP trap host.

9. Configure Operations Manager to generate and send an e-mail for every event of Critical or higher severity (a CLI sketch for several of these steps follows).
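
A minimal sketch of steps 2, 3, and 7 using the DFM command line follows; this assumes the DFM 4.x CLI, and <controller> is a placeholder for a FlexPod storage controller hostname. Verify the command and option names against your installed DFM version:

# Generate the SSL key and certificate for the DFM HTTPS server (step 2)
dfm ssl server setup
# Enable HTTPS (step 3)
dfm option set httpsEnabled=Yes
# Verify DFM communication with a FlexPod controller (step 7)
dfm host diag <controller>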

Appendix—VMware Built on FlexPod Configuration Information

The following tables outline the information that needs to be available to complete the setup and deployment of VMware Built on FlexPod.

Nexus 1010 and 1000V Configuration Information

This information is used with the Nexus 1010 and 1000V deployment in the environment.

Table 2 Cisco Nexus 1010 and 1000V Configuration Information (the original table provides a blank "Customized Value" column for recording each site-specific value)

Cisco Nexus 1010 A Hostname: Provide a hostname for the Cisco Nexus 1010 A virtual appliance.

Cisco Nexus 1010 B Hostname: Provide a hostname for the Cisco Nexus 1010 B virtual appliance.

Cisco Nexus 1010 A CIMC IP Address: Provide the IP address for the out-of-band management interface (CIMC) on the Cisco Nexus 1010 A appliance.

Cisco Nexus 1010 A CIMC Netmask: Provide the netmask for the out-of-band management interface (CIMC) on the Cisco Nexus 1010 A appliance.

Cisco Nexus 1010 A CIMC Gateway: Provide the gateway for the out-of-band management interface (CIMC) on the Cisco Nexus 1010 A appliance.

Cisco Nexus 1010 A Management Interface IP Address: Provide the IP address for the management interface on the Cisco Nexus 1010 A appliance.

Cisco Nexus 1010 A Management Interface Netmask: Provide the netmask for the management interface on the Cisco Nexus 1010 A appliance.

Cisco Nexus 1010 A Management Interface Gateway: Provide the gateway for the management interface on the Cisco Nexus 1010 A appliance.

Cisco Nexus 1010 B CIMC IP Address: Provide the IP address for the out-of-band management interface (CIMC) on the Cisco Nexus 1010 B appliance.

Cisco Nexus 1010 B CIMC Netmask: Provide the netmask for the out-of-band management interface (CIMC) on the Cisco Nexus 1010 B appliance.

Cisco Nexus 1010 B CIMC Gateway: Provide the gateway for the out-of-band management interface (CIMC) on the Cisco Nexus 1010 B appliance.

Cisco Nexus 1010 Domain ID: Provide a unique domain ID for the Cisco Nexus 1010 virtual appliances in the environment.

Primary Cisco Nexus 1000V Virtual Supervisor Module Hostname: Provide the hostname for the primary VSM.

Primary Cisco Nexus 1000V Virtual Supervisor Module Management Interface IP Address: Provide the IP address for the management interface of the primary VSM.

Primary Cisco Nexus 1000V Virtual Supervisor Module Management Interface Netmask: Provide the netmask for the management interface of the primary VSM.

Primary Cisco Nexus 1000V Virtual Supervisor Module Management Interface Gateway: Provide the gateway for the management interface of the primary VSM.

Cisco Nexus 1000V Virtual Supervisor Module Domain ID: Provide a unique domain ID for the Cisco Nexus 1000V VSMs. This domain ID must be different from the domain ID used for the Cisco Nexus 1010 virtual appliances.


VMware Configuration Information

The information in Table 3 is specific to the VMware portion of the deployment.

Table 3 VMware Configuration Information (the original table provides a blank "Customized Value" column for recording each site-specific value)

ESXi Server 1 Hostname: The hostname for the first ESXi host in the infrastructure cluster.

ESXi Server 1 Management Interface IP Address: The IP address for the management VMkernel port on the first host in the infrastructure cluster.

ESXi Server 1 Management Interface Netmask: The netmask for the management VMkernel port on the first host in the infrastructure cluster.

ESXi Server 1 Management Interface Gateway: The gateway for the management VMkernel port on the first host in the infrastructure cluster.

ESXi Server 1 NFS VMkernel Interface IP Address: The IP address for the NFS VMkernel port on the first host in the infrastructure cluster.

ESXi Server 1 NFS VMkernel Interface Netmask: The netmask for the NFS VMkernel port on the first host in the infrastructure cluster.

ESXi Server 1 vMotion VMkernel Interface IP Address: The IP address for the vMotion VMkernel port on the first host in the infrastructure cluster.

ESXi Server 1 vMotion VMkernel Interface Netmask: The netmask for the vMotion VMkernel port on the first host in the infrastructure cluster.

ESXi Server 2 Hostname: The hostname for the second ESXi host in the infrastructure cluster.

ESXi Server 2 Management Interface IP Address: The IP address for the management VMkernel port on the second host in the infrastructure cluster.

ESXi Server 2 Management Interface Netmask: The netmask for the management VMkernel port on the second host in the infrastructure cluster.

ESXi Server 2 Management Interface Gateway: The gateway for the management VMkernel port on the second host in the infrastructure cluster.

ESXi Server 2 NFS VMkernel Interface IP Address: The IP address for the NFS VMkernel port on the second host in the infrastructure cluster.

ESXi Server 2 NFS VMkernel Interface Netmask: The netmask for the NFS VMkernel port on the second host in the infrastructure cluster.

ESXi Server 2 vMotion VMkernel Interface IP Address: The IP address for the vMotion VMkernel port on the second host in the infrastructure cluster.

ESXi Server 2 vMotion VMkernel Interface Netmask: The netmask for the vMotion VMkernel port on the second host in the infrastructure cluster.

SQL Server VM Hostname: The hostname of the SQL Server virtual machine that will run the vCenter Server database.

SQL Server VM IP Address: The IP address of the SQL Server virtual machine that will run the vCenter Server database.

vCenter Server VM Hostname: The hostname of the vCenter Server virtual machine.

vCenter Server VM IP Address: The IP address of the vCenter Server virtual machine.

vCenter Server License Key: The vCenter license key.


Cisco Nexus 5548 Sample Running Configuration

version 5.0(3)N2(1)
feature fcoe
feature npiv
feature fport-channel-trunk
feature telnet
feature tacacs+	
cfs ipv4 distribute
cfs eth distribute
feature lacp
feature vpc
feature lldp
feature fex
logging level aaa 5
logging level cdp 6
logging level vpc 6
logging level lldp 5
logging level flogi 5
logging level radius 5
logging level tacacs 5
logging level monitor 6
logging level session-mgr 6
logging level port-channel 6
logging level spanning-tree 6
role distribute
role commit
username admin password 5 $1$EaGDiyA3$MFDqLd80A1y/b7sk57EWO/  role network-admin
ip domain-lookup
tacacs-server key 7 "K1kmN0gy"
ip tacacs source-interface mgmt0
tacacs-server host 172.26.162.216 timeout 3 
tacacs-server host 172.26.162.214 
aaa group server tacacs+ TacacsServer 
    server 172.26.162.216 
    deadtime 1
    use-vrf management
    source-interface mgmt0
hostname DC24-N5K-1
logging event link-status default
logging event trunk-status default
service unsupported-transceiver
class-map type qos class-fcoe
class-map type queuing class-all-flood
  match qos-group 2
class-map type queuing class-ip-multicast
  match qos-group 2
class-map type network-qos class-all-flood
  match qos-group 2
class-map type network-qos class-ip-multicast
  match qos-group 2
policy-map type network-qos jumbo
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
snmp-server user admin network-admin auth md5 0xa4b33c3268a7f0d1d2fdceea57d08390
 priv 0xa4b33c3268a7f0d1d2fdceea57d08390 localizedkey
snmp-server host 172.26.165.6 traps version 2c public  udp-port 2162
snmp-server host 172.26.162.250 traps version 2c public  udp-port 1163
snmp-server host 172.26.162.250 traps version 2c public  udp-port 1164
snmp-server host 64.102.87.252 traps version 2c public  udp-port 1163
snmp-server host 172.26.165.6 traps version 2c public  udp-port 1163
snmp-server enable traps entity fru
snmp-server community RO group network-operator
ntp server 172.26.162.9 use-vrf management
vrf context management
  ip route 0.0.0.0/0 172.26.162.1
vlan 1
vlan 2
  name New_Native
vlan 10
  name NAS_GLOBAL_VLAN
vlan 11
  name VLAN11
vlan 12
  name VLAN12
vlan 18
  fcoe vsan 18 
  name GLOBAL_VSAN18_FCOE
vlan 20
  fcoe vsan 20 
  name GLOBAL_VSAN20_FCOE
vlan 101-163
vlan 164
  name Mgmt
vlan 165-200,301-500
vlan 501
  name Tenant_41_Ext_Application
vlan 502
  name Tenant_41_Public
vlan 503-508
vlan 509
  name Tenant_41_RAC_Private
vlan 510-600
vlan 900
  name Nexus1010_Traffic
vpc domain 100
  role priority 100
  peer-keepalive destination 172.26.164.28
 
   
 
   
 
   
interface Ethernet1/7
  description *** drs24-n1010-1 Mgmt Int port 1 ***
  switchport mode trunk
  switchport trunk allowed vlan 185
  spanning-tree port type edge trunk
  spanning-tree bpdufilter enable
interface Ethernet1/8
  description *** drs25-n1010-1 Mgmt Int port 1 ***
  switchport mode trunk
  switchport trunk allowed vlan 185
  spanning-tree port type edge trunk
  spanning-tree bpdufilter enable

Cisco Nexus 1010 Sample Running Configuration

version 4.0(4)SP1(1)
username admin password 5 $1$EVg2LPBC$EX8pjL9GBayKAaUmwjLjD.  role network-admin
ntp server 10.61.185.9
ip domain-lookup
ip host n1010-1 10.61.185.165
kernel core target 0.0.0.0
kernel core limit 1
system default switchport
snmp-server user admin network-admin auth md5 0x7ccf323f71b74c6cf1cba6d255e9ded9 priv 
0x7ccf323f71b74c6cf1cba6d255e9ded9 localizedkey
snmp-server enable traps license
vrf context management
  ip route 0.0.0.0/0 10.61.185.1
switchname n1010-1
vlan 1,162,950
vlan 902
  name data
vdc n1010-1 id 1
  limit-resource vlan minimum 16 maximum 513
  limit-resource monitor-session minimum 0 maximum 64
  limit-resource vrf minimum 16 maximum 8192
  limit-resource port-channel minimum 0 maximum 256
  limit-resource u4route-mem minimum 32 maximum 80
  limit-resource u6route-mem minimum 16 maximum 48
network-uplink type 3
virtual-service-blade drs1-vsm1
  virtual-service-blade-type name VSM-1.0
  interface control vlan 950
  interface packet vlan 950
  ramsize 2048
  disksize 3
  no shutdown
virtual-service-blade drs2-vsm1
  virtual-service-blade-type name VSM-1.0
  interface control vlan 950
  interface packet vlan 950
  ramsize 2048
  disksize 3
  no shutdown
virtual-service-blade drs3-vsm1
  virtual-service-blade-type name VSM-1.0
  interface control vlan 950
  interface packet vlan 950
  ramsize 2048
  disksize 3
  no shutdown
virtual-service-blade NAM
  virtual-service-blade-type name NAM-1.0
  interface data vlan 902
  ramsize 2048
  disksize 53
  no shutdown primary
 
   
interface mgmt0
  ip address 10.61.185.165/16
 
   
interface control0
logging logfile messages 6
boot kickstart bootflash:/nexus-1010-kickstart-mz.4.0.4.SP1.1.bin
boot system bootflash:/nexus-1010-mz.4.0.4.SP1.1.bin 
boot kickstart bootflash:/nexus-1010-kickstart-mz.4.0.4.SP1.1.bin
boot system bootflash:/nexus-1010-mz.4.0.4.SP1.1.bin 
svs-domain
  domain id 51
  control vlan 950

Cisco Nexus 1000V Sample Running Configuration

version 4.0(4)SV1(3b)
username admin password 5 $1$hgzMSZ3F$NCCbwTw4Z8QU5yjIo7Me11  role network-admin
ssh key rsa 2048 
ntp server 10.61.185.3
ip domain-lookup
ip host n1010-1-vsm 10.61.185.137
kernel core target 0.0.0.0
kernel core limit 1
system default switchport
vem 3
  host vmware id 737ff954-0de3-11e0-0000-000000000001
vem 4
  host vmware id 737ff954-0de3-11e0-0000-000000000002
snmp-server user admin network-admin auth md5 0xfe02f063cf936282f39c604c06e628df priv 
0xfe02f063cf936282f39c604c06e628df localizedkey
snmp-server enable traps license
vrf context management
  ip route 0.0.0.0/0 10.61.185.1
hostname n1010-1-vsm
vlan 1
vlan 185
  name MGMT-VLAN
vlan 900
  name NFS-VLAN
vlan 901
  name vMotion-VLAN
vlan 950
  name VM-Traffic-VLAN
vdc n1010-1-vsm id 1
  limit-resource vlan minimum 16 maximum 513
  limit-resource monitor-session minimum 0 maximum 64
  limit-resource vrf minimum 16 maximum 8192
  limit-resource port-channel minimum 0 maximum 256
  limit-resource u4route-mem minimum 32 maximum 80
  limit-resource u6route-mem minimum 16 maximum 48
port-profile type vethernet MGMT-VLAN
  vmware port-group
  switchport mode access
  switchport access vlan 185
  no shutdown
  system vlan 185
  state enabled
port-profile type vethernet NFS-VLAN
  vmware port-group
  switchport mode access
  switchport access vlan 900
  no shutdown
  system vlan 900
  state enabled
port-profile type ethernet Unused_Or_Quarantine_Uplink
  description Port-group created for Nexus1000V internal usage. Do not use.
  vmware port-group
  shutdown
  state enabled
port-profile type vethernet Unused_Or_Quarantine_Veth
  description Port-group created for Nexus1000V internal usage. Do not use.
  vmware port-group
  shutdown
  state enabled
port-profile type vethernet VM-Traffic-VLAN
  vmware port-group
  switchport mode access
  switchport access vlan 950
  no shutdown
  system vlan 950
  state enabled
port-profile type ethernet system-uplink
  description system profile for blade uplink ports
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 185,900-901,950
  system mtu 9000
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 185,900-901,950
  state enabled
port-profile type vethernet vMotion-VLAN
  vmware port-group
  switchport mode access
  switchport access vlan 901
  no shutdown
  system vlan 901
  state enabled
 
   
interface port-channel1
  inherit port-profile system-uplink
  mtu 9000
 
   
interface port-channel2
  inherit port-profile system-uplink
  mtu 9000
 
   
interface Ethernet3/1
  inherit port-profile system-uplink
  mtu 9000
 
   
interface Ethernet3/2
  inherit port-profile system-uplink
  mtu 9000
 
   
interface Ethernet4/1
  inherit port-profile system-uplink
  mtu 9000
 
   
interface Ethernet4/2
  inherit port-profile system-uplink
  mtu 9000
 
   
interface mgmt0
  ip address 10.61.185.137/24
 
   
interface Vethernet1
  inherit port-profile MGMT-VLAN
  description VMware VMkernel, vmk0
  vmware dvport 35
 
   
interface Vethernet2
  inherit port-profile NFS-VLAN
  description VMware VMkernel, vmk1
  vmware dvport 67
 
   
interface Vethernet3
  inherit port-profile vMotion-VLAN
  description VMware VMkernel, vmk2
  vmware dvport 130
 
   
interface control0
boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.0.4.SV1.3b.bin sup-1
boot system bootflash:/nexus-1000v-mz.4.0.4.SV1.3b.bin sup-1
boot kickstart bootflash:/nexus-1000v-kickstart-mz.4.0.4.SV1.3b.bin sup-2
boot system bootflash:/nexus-1000v-mz.4.0.4.SV1.3b.bin sup-2
svs-domain
  domain id 10
  control vlan 950
  packet vlan 950
  svs mode L2  
svs connection vCenter
  protocol vmware-vim
  remote ip address 10.61.185.114 port 80
  vmware dvs uuid "2d 5b 20 50 21 69 05 64-2c 68 d0 b3 63 bf b2 9f" datacenter-name 
FlexPod_DC_1
  connect

References

Cisco Nexus 1010 Virtual Services Appliance: http://www.cisco.com/en/US/products/ps10785/index.html

NetApp On The Web (NOW) Site: http://now.netapp.com

VMware vSphere: http://www.vmware.com/products/vsphere/