

VMware vSphere Built On FlexPod With IP-Based Storage

Table Of Contents

About the Authors

About Cisco Validated Design (CVD) Program

VMware vSphere Built On FlexPod With IP-Based Storage

Overview

Audience

Architecture

FlexPod Benefits

Benefits of Cisco Unified Computing System

Benefits of Cisco Nexus 5548UP

Benefits of the NetApp FAS Family of Storage Controllers

Benefits of OnCommand Unified Manager Software

OnCommand Host Package

Storage Service Catalog

FlexPod Management Solutions

Benefits of VMware vSphere with the NetApp Virtual Storage Console

Software Revisions

Configuration Guidelines

Deployment

Cabling Information

NetApp FAS2240-2 Deployment Procedure: Part 1

Assign Controller Disk Ownership

Set Up Data ONTAP 8.1

Install Data ONTAP to Onboard Flash Storage

Harden Storage System Logins and Security

Install the Required Licenses

Enable Licensed Features

Enable Active-Active Controller Configuration Between Two Storage Systems

Start iSCSI

Set Up Storage System NTP Time Synchronization and CDP Enablement

Create Data Aggregate aggr1

Create an SNMP Requests Role and Assign SNMP Login Privileges

Create an SNMP Management Group and Assign an SNMP Request Role

Create an SNMP User and Assign It to an SNMP Management Group

Set Up SNMP v1 Communities on Storage Controllers

Set Up SNMP Contact Information for Each Storage Controller

Set SNMP Location Information for Each Storage Controller

Reinitialize SNMP on Storage Controllers

Initialize NDMP on the Storage Controllers

Set 10GbE Flow Control and Add VLAN Interfaces

Add Infrastructure Volumes

Export NFS Infrastructure Volumes to ESXi Servers

Cisco Nexus 5548 Deployment Procedure

Set up Initial Cisco Nexus 5548 Switch

Enable Appropriate Cisco Nexus Features

Set Global Configurations

Create Necessary VLANs

Add Individual Port Descriptions for Troubleshooting

Create Necessary PortChannels

Add PortChannel Configurations

Configure Virtual PortChannels

Uplink Into Existing Network Infrastructure

Cisco Unified Computing System Deployment Procedure

Perform Initial Setup of Cisco UCS C-Series Servers

Perform Initial Setup of the Cisco UCS 6248 Fabric Interconnects

Log Into Cisco UCS Manager

Upgrade Cisco UCS Manager Software to Version 2.0(2m)

Add a Block of IP Addresses for KVM Access

Synchronize Cisco UCS to NTP

Edit the Chassis Discovery Policy

Enable Server and Uplink Ports

Create Uplink PortChannels to the Cisco Nexus 5548 Switches

Create an Organization

Create MAC Address Pools

Create IQN Pools for iSCSI Boot

Create UUID Suffix Pool

Create Server Pool

Create VLANs

Create a Firmware Management Package

Create Host Firmware Package Policy

Set Jumbo Frames in Cisco UCS Fabric

Create a Local Disk Configuration Policy

Create a Network Control Policy for Cisco Discovery Protocol (CDP)

Create a Server Pool Qualification Policy

Create a Server BIOS Policy

Create vNIC Placement Policy for Virtual Machine Infrastructure Hosts

Create vNIC Templates

Create Boot Policies

Create Service Profiles

Add More Servers to the FlexPod Unit

Gather Necessary Information

NetApp FAS2240-2 Deployment Procedure: Part 2

Add Infrastructure Host Boot LUNs

Create iSCSI igroups

Map LUNs to igroups

VMware ESXi 5.0 Deployment Procedure

Log Into the Cisco UCS 6200 Fabric Interconnects

Set Up the ESXi Installation

Install ESXi

Set Up the ESXi Hosts' Management Networking

Set Up Management Networking for Each ESXi Host

Download VMware vSphere Client and vSphere Remote Command Line

Log in to VMware ESXi Host Using the VMware vSphere Client

Change the iSCSI Boot Port MTU to Jumbo

Load Updated Cisco VIC enic Driver Version 2.1.2.22

Set Up iSCSI Boot Ports on Virtual Switches

Set Up VMkernel Ports and Virtual Switch

Mount the Required Datastores

NTP Time Configuration

Move the VM Swap File Location

VMware vCenter 5.0 Deployment Procedure

Build a Microsoft SQL Server Virtual Machine

Install Microsoft SQL Server 2008 R2

Build a VMware vCenter Virtual Machine

Install VMware vCenter Server

vCenter Setup

NetApp Virtual Storage Console Deployment Procedure

Installing NetApp Virtual Storage Console 4.0

Optimal Storage Settings for ESXi Hosts

Provisioning and Cloning Setup

NetApp OnCommand Deployment Procedure

Manually Add Data Fabric Manager Storage Controllers

Run Diagnostics for Verifying Data Fabric Manager Communication

Configure Additional Operations Manager Alerts

Deploy the NetApp OnCommand Host Package

Set a Shared Lock Directory to Coordinate Mutually Exclusive Activities on Shared Resources

Install NetApp OnCommand Windows PowerShell Cmdlets

Configure Host Services

Appendix

B-Series Deployment Procedure

Cisco Nexus 1000v Deployment Procedure

Log into Both Cisco Nexus 5548 Switches

Add Packet-Control VLAN to Switch Trunk Ports

Log Into Cisco UCS Manager

Add Packet-Control VLAN to Host Server vNICs

Log in to the VMware vCenter

Install the Virtual Ethernet Module (VEM) on Each ESXi Host

Adjust ESXi Host Networking

Deploy the Primary VSM

Base Configuration of the Primary VSM

Register the Nexus 1000v as a vCenter Plugin

Base Configuration of the Primary VSM

Migrate the ESXi Hosts' Networking to the Nexus 1000v

Deploy the Secondary VSM

Base Configuration of the Secondary VSM

Nexus 5548 Reference Configurations

Nexus A

Nexus B


VMware vSphere Built On FlexPod With IP-Based Storage
Last Updated: November 29, 2012

Building Architectures to Solve Business Problems

About the Authors

John George, Reference Architect, Infrastructure and Cloud Engineering, NetApp

John George is a Reference Architect in the NetApp Infrastructure and Cloud Engineering team and is focused on developing, validating, and supporting cloud infrastructure solutions that include NetApp products. Before his current role, he supported and administered Nortel's worldwide training network and VPN infrastructure. John holds a Master's degree in computer engineering from Clemson University.

Ganesh Kamath, Technical Marketing Engineer, NetApp

Ganesh Kamath is a Technical Architect in the NetApp TSP solutions engineering team, focused on architecting and validating solutions for TSPs based on NetApp products. Ganesh's diverse experience at NetApp includes working as a Technical Marketing Engineer as well as a member of the NetApp Rapid Response Engineering team, qualifying specialized solutions for our most demanding customers.

John Kennedy, Technical Leader, Cisco

John Kennedy is a Technical Marketing Engineer in the Server Access and Virtualization Technology Group. Currently, John is focused on the validation of FlexPod architecture while contributing to future SAVTG products. John spent two years in the Systems Development Unit at Cisco, researching methods of implementing long-distance vMotion for use in the Data Center Interconnect Cisco Validated Designs. Previously, John worked at VMware Inc. for eight and a half years as a Senior Systems Engineer supporting channel partners outside the US and serving on the HP Alliance team. He is a VMware Certified Professional on every version of VMware's ESX/ESXi, vCenter, and Virtual Infrastructure, including vSphere 5. He has presented at various industry conferences in over 20 countries.

Chris Reno, Reference Architect, Infrastructure and Cloud Engineering, NetApp

Chris Reno is a Reference Architect in the NetApp Infrastructure and Cloud Engineering team and is focused on creating, validating, supporting, and evangelizing solutions based on NetApp products. Chris holds a Bachelor of Science degree in International Business and Finance and a Bachelor of Arts degree in Spanish from the University of North Carolina Wilmington, as well as numerous industry certifications.

Lindsey Street, Systems Architect, Infrastructure and Cloud Engineering, NetApp

Lindsey Street is a systems architect in the NetApp Infrastructure and Cloud Engineering team. She focuses on the architecture, implementation, compatibility, and security of innovative vendor technologies to develop competitive and high-performance end-to-end cloud solutions for customers. Lindsey started her career in 2006 at Nortel as an interoperability test engineer, testing customer equipment interoperability for certification. Lindsey holds a Bachelor of Science degree in Computer Networking and a Master of Science in Information Security from East Carolina University.

NetApp, the NetApp logo, Go further, faster, AutoSupport, DataFabric, Data ONTAP, FlexClone, FlexPod, and OnCommand are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries.

About Cisco Validated Design (CVD) Program


The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit http://www.cisco.com/go/designzone.

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

Cisco and the Cisco Logo are trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at http://www.cisco.com/go/trademarks. Third party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1005R)

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

© 2012 Cisco Systems, Inc. All rights reserved.

VMware vSphere Built On FlexPod With IP-Based Storage


Overview

Industry trends indicate a vast data center transformation toward shared infrastructures. By leveraging virtualization, enterprise customers have embarked on the journey to the cloud by moving away from application silos and toward shared infrastructure, thereby increasing agility and reducing costs. NetApp and Cisco have partnered to deliver FlexPod, which serves as the foundation for a variety of workloads and enables efficient architectural designs that are based on customer requirements.

Audience

This Cisco Validated Design describes the architecture and deployment procedures of an infrastructure composed of Cisco, NetApp, and VMware virtualization that leverages IP-based storage protocols. The intended audience of this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineers, and customers who want to deploy the core FlexPod architecture.

Architecture

The FlexPod architecture is highly modular or "podlike." Although each customer's FlexPod unit varies in its exact configuration, once a FlexPod unit is built, it can easily be scaled as requirements and demand change. The unit can be scaled both up (adding resources to a FlexPod unit) and out (adding more FlexPod units).

Specifically, FlexPod is a defined set of hardware and software that serves as an integrated foundation for all virtualization solutions. VMware vSphere Built On FlexPod With IP-Based Storage includes NetApp storage, Cisco networking, the Cisco® Unified Computing System™ (Cisco UCS™), and VMware vSphere™ software in a single package. The computing and storage can fit in one data center rack, with the networking residing in a separate rack or deployed according to a customer's data center design. Port density enables the networking components to accommodate multiple configurations of this kind.

One benefit of the FlexPod architecture is the ability to customize, or "flex," the environment to suit a customer's requirements. For this reason, the reference architecture detailed in this document highlights the resiliency, cost benefit, and ease of deployment of an IP-based storage solution. Ethernet storage systems are a steadily increasing source of network traffic and require design considerations that maximize performance between servers and storage systems. This design problem is quite different from that of a network serving thousands of clients and servers connected across LAN and WAN networks. A correctly designed Ethernet storage network can achieve performance comparable to that of a Fibre Channel network, provided that technologies such as jumbo frames, virtual interfaces (VIFs), virtual LANs (VLANs), IP multipathing (IPMP), Spanning Tree Protocol (STP), port channeling, and multilayer topologies are employed in the architecture of the system.

Figure 1 shows the VMware vSphere Built On FlexPod With IP-Based Storage components and the network connections for a configuration with IP-based storage. This design leverages the Cisco Nexus® 5548UP, Cisco Nexus 2232 FEX, Cisco UCS C-Series with the Cisco UCS virtual interface card (VIC), and the NetApp FAS family of storage controllers, which are all deployed to enable iSCSI-booted hosts with file- and block-level access to IP-based datastores. The reference architecture reinforces the "wire-once" strategy, because as additional storage (FC, FCoE, or 10GbE) is added to the architecture, no re-cabling is required from the hosts to the UCS fabric interconnect. An alternate IP-based storage configuration with Cisco UCS B-Series is described in B-Series Deployment Procedure in the Appendix.

Figure 1 VMware vSphere Built On FlexPod With IP-Based Storage Components

The reference configuration includes:

Two Cisco Nexus 5548UP switches

Two Cisco Nexus 2232 fabric extenders

Two Cisco UCS 6248UP fabric interconnects

Support for 16 UCS C-Series servers without any additional networking components

Support for hundreds of UCS C-Series servers by way of additional fabric extenders

One NetApp FAS2240-2A (HA pair)

Storage is provided by a NetApp FAS2240-2A (HA configuration in a single chassis). All system and network links feature redundancy, providing end-to-end high availability (HA). For server virtualization, the deployment includes VMware vSphere. Although this is the base design, each of the components can be scaled flexibly to support specific business requirements. For example, more (or different) servers or even blade chassis can be deployed to increase compute capacity, additional disk shelves can be deployed to improve I/O capacity and throughput, and special hardware or software features can be added to introduce new features.

The remainder of this document guides you through the low-level steps for deploying the base architecture, as shown in Figure 1. This includes everything from physical cabling, to compute and storage configuration, to configuring virtualization with VMware vSphere.

FlexPod Benefits

One of the founding design principles of the FlexPod architecture is flexibility. Previous FlexPod architectures have highlighted FCoE- or FC-based storage solutions in addition to showcasing a variety of application workloads. This particular FlexPod architecture is a predesigned configuration that is built on the Cisco Unified Computing System, the Cisco Nexus family of data center switches, NetApp FAS storage components, and VMware virtualization software. FlexPod is a base configuration, but it can scale up for greater performance and capacity, and it can scale out for environments that require consistent, multiple deployments. FlexPod has the flexibility to be sized and optimized to accommodate many different use cases. These use cases can be layered on an infrastructure that is architected based on performance, availability, and cost requirements.

FlexPod is a platform that can address current virtualization needs and simplify the evolution to an IT-as-a-service (ITaaS) infrastructure. The VMware vSphere Built On FlexPod With IP-Based Storage solution can help improve agility and responsiveness, reduce total cost of ownership (TCO), and increase business alignment and focus.

This document focuses on deploying an infrastructure that is capable of supporting VMware vSphere, VMware vCenter™ with NetApp plug-ins, and NetApp OnCommand™ as the foundation for virtualized infrastructure. Additionally, this document details a use case for those who want to design a potentially lower cost solution by leveraging IP storage protocols such as iSCSI, CIFS, and NFS, thereby avoiding the costs and complexities typically incurred with traditional FC SAN architectures. For a detailed study of several practical solutions deployed on FlexPod, refer to the NetApp Technical Report 3884, FlexPod Solutions Guide.

Benefits of Cisco Unified Computing System

Cisco Unified Computing System™ is the first converged data center platform that combines industry-standard, x86-architecture servers with networking and storage access into a single converged system. The system is entirely programmable using unified, model-based management to simplify and speed deployment of enterprise-class applications and services running in bare-metal, virtualized, and cloud computing environments.

The system's x86-architecture rack-mount and blade servers are powered by Intel® Xeon® processors. These industry-standard servers deliver world-record performance to power mission-critical workloads. Cisco servers, combined with a simplified, converged architecture, drive better IT productivity and superior price/performance for lower total cost of ownership (TCO). Building on Cisco's strength in enterprise networking, Cisco's Unified Computing System is integrated with a standards-based, high-bandwidth, low-latency, virtualization-aware unified fabric. The system is wired once to support the desired bandwidth and carries all Internet protocol, storage, inter-process communication, and virtual machine traffic with security isolation, visibility, and control equivalent to physical networks. The system meets the bandwidth demands of today's multicore processors, eliminates costly redundancy, and increases workload agility, reliability, and performance.

Cisco Unified Computing System is designed from the ground up to be programmable and self- integrating. A server's entire hardware stack, ranging from server firmware and settings to network profiles, is configured through model-based management. With Cisco virtual interface cards, even the number and type of I/O interfaces is programmed dynamically, making every server ready to power any workload at any time. With model-based management, administrators manipulate a model of a desired system configuration, associate a model's service profile with hardware resources, and the system configures itself to match the model. This automation speeds provisioning and workload migration with accurate and rapid scalability. The result is increased IT staff productivity, improved compliance, and reduced risk of failures due to inconsistent configurations.

Cisco Fabric Extender technology reduces the number of system components to purchase, configure, manage, and maintain by condensing three network layers into one. This represents a radical simplification over traditional systems, reducing capital and operating costs while increasing business agility, simplifying and speeding deployment, and improving performance.

Cisco Unified Computing System helps organizations go beyond efficiency: it helps them become more effective through technologies that breed simplicity rather than complexity. The result is flexible, agile, high-performance, self-integrating information technology, reduced staff costs with increased uptime through automation, and more rapid return on investment.

This reference architecture highlights the use of the Cisco UCS C200-M2 server, the Cisco UCS 6248UP, and the Nexus 2232 FEX to provide a resilient server platform that balances simplicity, performance, and density for production-level virtualization. Also highlighted in this architecture is the use of Cisco UCS service profiles that enable iSCSI boot of the native operating system. Coupling service profiles with unified storage delivers on-demand, stateless computing resources in a highly scalable architecture.

Recommended support documents include:

Cisco Unified Computing System: http://www.cisco.com/en/US/products/ps10265/index.html

Cisco Unified Computing System C-Series Servers: http://www.cisco.com/en/US/products/ps10493/index.html

Cisco Unified Computing System B-Series Servers: http://www.cisco.com/en/US/products/ps10280/index.html

Benefits of Cisco Nexus 5548UP

The Cisco Nexus 5548UP Switch delivers innovative architectural flexibility, infrastructure simplicity, and business agility, with support for networking standards. For traditional, virtualized, unified, and high-performance computing (HPC) environments, it offers a long list of IT and business advantages, including:

Architectural Flexibility

Unified ports that support traditional Ethernet, Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE)

Synchronizes system clocks with accuracy of less than one microsecond, based on IEEE 1588

Offers converged Fabric extensibility, based on emerging standard IEEE 802.1BR, with Fabric Extender (FEX) Technology portfolio, including:

Cisco Nexus 2000 FEX

Adapter FEX

VM-FEX

Infrastructure Simplicity

Common high-density, high-performance, data-center-class, fixed-form-factor platform

Consolidates LAN and storage

Supports any transport over an Ethernet-based fabric, including Layer 2 and Layer 3 traffic

Supports storage traffic, including iSCSI, NAS, FC, RoE, and IBoE

Reduces management points with FEX Technology

Business Agility

Meets diverse data center deployments on one platform

Provides rapid migration and transition for traditional and evolving technologies

Offers performance and scalability to meet growing business needs

Specifications at a Glance

A 1-rack-unit, 1/10 Gigabit Ethernet switch

32 fixed Unified Ports on the base chassis and one expansion slot, for a total of 48 ports

The slot can support any of the three modules: Unified Ports, 1/2/4/8 native Fibre Channel, and Ethernet or FCoE

Throughput of up to 960 Gbps

This reference architecture highlights the use of the Cisco Nexus 5548UP. As mentioned, this platform is capable of serving as the foundation for wire-once, unified fabric architectures. This document provides guidance for an architecture capable of delivering IP protocols including iSCSI, CIFS, and NFS. Ethernet architectures yield lower TCO through their simplicity: they do not require FC SAN-trained professionals, and they require fewer licenses to deliver the functionality associated with enterprise-class solutions.

Recommended support documents include:

Cisco Nexus 5000 Family of switches: http://www.cisco.com/en/US/products/ps9670/index.html

Benefits of the NetApp FAS Family of Storage Controllers

The NetApp Unified Storage Architecture offers customers an agile and scalable storage platform. All NetApp storage systems use the Data ONTAP® operating system to provide SAN (FCoE, FC, iSCSI), NAS (CIFS, NFS), and primary and secondary storage in a single unified platform so that all virtual desktop data components can be hosted on the same storage array.

A single process for activities such as installation, provisioning, mirroring, backup, and upgrading is used throughout the entire product line, from the entry level to enterprise-class controllers. Having a single set of software and processes simplifies even the most complex enterprise data management challenges. Unifying storage and data management software and processes streamlines data ownership, enables companies to adapt to their changing business needs without interruption, and reduces total cost of ownership.

This reference architecture focuses on the use case of leveraging IP-based storage to solve customers' challenges and to meet their needs in the data center. Specifically, this entails iSCSI boot of UCS hosts, provisioning of VM data stores by using NFS, and application access through iSCSI, CIFS, or NFS, all while leveraging NetApp unified storage.

In a shared infrastructure, the availability and performance of the storage infrastructure are critical because storage outages and performance issues can affect thousands of users. The storage architecture must provide a high level of availability and performance. To provide detailed guidance on these topics, NetApp and its technology partners have developed a variety of best practice documents.

This reference architecture highlights the use of the NetApp FAS2000 product line, specifically the FAS2240-2A with the 10GbE mezzanine card and SAS storage. Available to support multiple protocols while allowing the customer to start smart, at a lower price point, the FAS2240-2 is an affordable and powerful choice for delivering shared infrastructure.

Recommended support documents include:

NetApp storage systems: www.netapp.com/us/products/storage-systems/

NetApp FAS2000 storage systems: http://www.netapp.com/us/products/storage-systems/fas2000/fas2000.html

NetApp TR-3437: Storage Best Practices and Resiliency Guide

NetApp TR-3450: Active-Active Controller Overview and Best Practices Guidelines

NetApp TR-3749: NetApp and VMware vSphere Storage Best Practices

NetApp TR-3884: FlexPod Solutions Guide

NetApp TR-3824: MS Exchange 2010 Best Practices Guide

Benefits of OnCommand Unified Manager Software

NetApp OnCommand management software delivers efficiency savings by unifying storage operations, provisioning, and protection for both physical and virtual resources. The key product benefits that create this value include:

Simplicity. A single unified approach and a single set of tools to manage both the physical world and the virtual world as you move to a services model to manage your service delivery. This makes NetApp the most effective storage for the virtualized data center. It has a single configuration repository for reporting, event logs, and audit logs.

Efficiency. Automation and analytics capabilities deliver storage and service efficiency, reducing IT capex and opex spend by up to 50%.

Flexibility. With tools that let you gain visibility and insight into your complex multiprotocol, multivendor environments and open APIs that let you integrate with third-party orchestration frameworks and hypervisors, OnCommand offers a flexible solution that helps you rapidly respond to changing demands.

OnCommand gives you visibility across your storage environment by continuously monitoring and analyzing its health. You get a view of what is deployed and how it is being used, enabling you to improve your storage capacity utilization and increase the productivity and efficiency of your IT administrators. And this unified dashboard gives at-a-glance status and metrics, making it far more efficient than having to use multiple resource management tools.

Figure 2 OnCommand Architecture

OnCommand Host Package

You can discover, manage, and protect virtual objects after installing the NetApp OnCommand Host Package software. The components that make up the OnCommand Host Package are:

OnCommand host service VMware plug-in. A plug-in that receives and processes events in a VMware environment, including discovering, restoring, and backing up virtual objects such as virtual machines and datastores. This plug-in executes the events received from the host service.

Host service. The host service software includes plug-ins that enable the NetApp DataFabric® Manager server to discover, back up, and restore virtual objects, such as virtual machines and datastores. The host service also enables you to view virtual objects in the OnCommand console. It enables the DataFabric Manager server to forward requests, such as the request for a restore operation, to the appropriate plug-in, and to send the final results of the specified job to that plug-in. When you make changes to the virtual infrastructure, automatic notification is sent from the host service to the DataFabric Manager server. You must register at least one host service with the DataFabric Manager server before you can back up or restore data.

Host service Windows PowerShell cmdlets. Cmdlets that perform virtual object discovery, local restore operations, and host configuration when the DataFabric Manager server is unavailable.

Management tasks performed in the virtual environment by using the OnCommand console include:

Create a dataset and then add virtual machines or datastores to the dataset for data protection.

Assign local protection and, optionally, remote protection policies to the dataset.

View storage details and space details for a virtual object.

Perform an on-demand backup of a dataset.

Mount existing backups onto an ESX™ server to support tasks such as backup verification, single file restore, and restoration of a virtual machine to an alternate location.

Restore data from local and remote backups as well as restore data from backups made before the introduction of OnCommand management software.


Storage Service Catalog

The Storage Service Catalog, a component of OnCommand, is a key NetApp differentiator for service automation. It lets you integrate storage provisioning policies, data protection policies, and storage resource pools into a single service offering that administrators can choose when provisioning storage. This automates much of the provisioning process, and it also automates a variety of storage management tasks associated with the policies.

The Storage Service Catalog provides a layer of abstraction between the storage consumer and the details of the storage configuration, creating "storage as a service." The service levels defined with the Storage Service Catalog automatically specify and map policies to the attributes of your pooled storage infrastructure. This higher level of abstraction between service levels and physical storage lets you eliminate complex, manual work, encapsulating storage and operational processes together for optimal, flexible, and dynamic allocation of storage.

The service catalog approach also incorporates the use of open APIs into other management suites, which leads to a strong ecosystem integration.

FlexPod Management Solutions

The FlexPod platform open APIs offer easy integration with a broad range of management tools. NetApp and Cisco work with trusted partners to provide a variety of management solutions.

Products designated as Validated FlexPod Management Solutions must pass extensive testing in Cisco and NetApp labs against a broad set of functional and design requirements. Validated solutions for automation and orchestration provide unified, turnkey functionality. Now you can deploy IT services in minutes instead of weeks by reducing complex processes that normally require multiple administrators to repeatable workflows that are easily adaptable. The following list names the current vendors for these solutions:


Note Some of the following links are available only to partners and customers.


CA

http://solutionconnection.netapp.com/CA-Infrastructure-Provisioning-for-FlexPod.aspx

http://www.youtube.com/watch?v=mmkNUvVZY94

Cloupia

http://solutionconnection.netapp.com/cloupia-unified-infrastructure-controller.aspx

http://www.cloupia.com/en/flexpodtoclouds/videos/Cloupia-FlexPod-Solution-Overview.html

Gale Technologies

http://solutionconnection.netapp.com/galeforce-turnkey-cloud-solution.aspx

http://www.youtube.com/watch?v=ylf81zjfFF0

Products designated as FlexPod Management Solutions have demonstrated the basic ability to interact with all components of the FlexPod platform. Vendors for these solutions currently include BMC Software Business Service Management, Cisco Intelligent Automation for Cloud, DynamicOps, FireScope, Nimsoft, and Zenoss. Recommended documents include:

https://solutionconnection.netapp.com/flexpod.aspx

http://www.netapp.com/us/communities/tech-ontap/tot-building-a-cloud-on-flexpod-1203.html

Benefits of VMware vSphere with the NetApp Virtual Storage Console

VMware vSphere, coupled with the NetApp Virtual Storage Console (VSC), serves as the foundation for VMware virtualized infrastructures. vSphere 5.0 offers significant enhancements that can be employed to solve real customer problems. Virtualization reduces costs and maximizes IT efficiency, increases application availability and control, and empowers IT organizations with choice. VMware vSphere delivers these benefits as the trusted platform for virtualization, as demonstrated by its more than 300,000 customers worldwide.

VMware vCenter Server is the best way to manage and leverage the power of virtualization. A vCenter domain manages and provisions resources for all the ESX hosts in the given data center. The ability to license various features in vCenter at differing price points allows customers to choose the package that best serves their infrastructure needs.

The VSC is a vCenter plug-in that provides end-to-end virtual machine (VM) management and awareness for VMware vSphere environments running on top of NetApp storage. The following core capabilities make up the plug-in:

Storage and ESXi host configuration and monitoring by using Monitoring and Host Configuration

Datastore provisioning and VM cloning by using Provisioning and Cloning

Backup and recovery of VMs and datastores by using Backup and Recovery

Online alignment and single and group migrations of VMs into new or existing datastores by using Optimization and Migration

Because the VSC is a vCenter plug-in, all vSphere clients that connect to vCenter can access VSC. This availability is different from a client-side plug-in that must be installed on every vSphere client.

Software Revisions

It is important to note the software versions used in this validation. Table 1 details the software revisions used throughout this document.

Table 1 Software Revisions

Layer | Component | Version or Release | Details
Compute | Cisco UCS Fabric Interconnect | 2.0(2m) | Embedded management
Compute | Cisco UCS C-200-M2 | 2.0(2m) | Hardware BIOS version
Network | Nexus Fabric Switch | 5.1(3)N2(1) | Operating system version
Storage | NetApp FAS2240-2A | Data ONTAP 8.1 | Operating system version
Software | Cisco UCS Hosts | VMware vSphere ESXi 5.0 | Operating system version
Software | Microsoft .NET Framework | 3.5.1 | Feature enabled within the Windows operating system
Software | Microsoft SQL Server | MS SQL Server 2008 R2 SP1 | VM (1): SQL Server DB
Software | VMware vCenter | 5.0 | VM (1): VMware vCenter
Software | NetApp OnCommand | 5.0 | VM (1): OnCommand
Software | NetApp Virtual Storage Console (VSC) | 4.0 | Plug-in for VMware vCenter
Software | Nexus 1000v | 4.2.1.SV1.5.1 | VM (2): Alternative option for VM networking


Configuration Guidelines

This document provides details for configuring a fully redundant, highly available configuration for a FlexPod unit with IP-based storage. Therefore, reference is made to which component is being configured with each step, either A or B. For example, controller A and controller B are used to identify the two NetApp storage controllers that are provisioned with this document, and Nexus A and Nexus B identify the pair of Cisco Nexus switches that are configured. The Cisco UCS fabric interconnects are similarly configured. Additionally, this document details steps for provisioning multiple Cisco UCS hosts, and these are identified sequentially: VM-Host-Infra-01, VM-Host-Infra-02, and so on. Finally, to indicate that you should include information pertinent to your environment in a given step, <text> appears as part of the command structure. See the following example for the vlan create command:

controller A> vlan create

Usage:

vlan create [-g {on|off}] <ifname> <vlanid_list>

vlan add <ifname> <vlanid_list>

vlan delete -q <ifname> [<vlanid_list>]

vlan modify -g {on|off} <ifname>

vlan stat <ifname> [<vlanid_list>]

Example:

controller A> vlan create vif0 <management VLAN ID>

This document is intended to enable you to fully configure the customer environment. In this process, various steps require you to insert customer-specific naming conventions, IP addresses, and VLAN schemes, as well as to record appropriate MAC addresses. Table 2 details the list of VLANs necessary for deployment as outlined in this guide. The VM-Mgmt VLAN is used for management interfaces of the VMware vSphere hosts. Table 3 lists the configuration variables that are used throughout this document. This table can be completed with the specific site variables and used as a reference while working through the configuration steps in this document.


Note If you are using separate in-band and out-of-band management VLANs, you must create a layer 3 route between these VLANs. For this validation, a common management VLAN was used.


Table 2 Necessary VLANs

VLAN Name | VLAN Purpose | ID Used in Validating This Document
Mgmt in band | VLAN for in-band management interfaces | 3175
Mgmt out of band | VLAN for out-of-band management interfaces | 3175
Native | VLAN to which untagged frames are assigned | 2
NFS | VLAN for NFS traffic | 3170
iSCSI-A | VLAN for iSCSI traffic for fabric A | 3171
iSCSI-B | VLAN for iSCSI traffic for fabric B | 3172
vMotion | VLAN designated for the movement of virtual machines from one physical host to another | 3173
VM Traffic | VLAN for VM application traffic | 3174


Table 3 Configuration Variables

Variable | Description | Variable Value in the Reference Implementation | Manually Configure in DNS
<#> | Number of disks to assign to a controller | 12 |
<necessary licenses> | Licenses needed for a controller | cf, iSCSI, NFS, FlexClone |
<NTP server IP> | NTP server IP | 192.168.175.4 |
<# of disks for aggr1> | Number of disks to assign to aggr1 | 7 |
<ntap SNMP request role> | Creates the SNMP request role | snmpv3role |
<ntap SNMP managers> | SNMP management group | snmpv3group |
<ntap SNMP users> | Creates an SNMP user | snmpv3user |
<ntap SNMP community> | SNMP community | icefxp7-cmty |
<ntap admin email address> | SNMP admin e-mail address | JaneDoe@Netapp.com |
<ntap SNMP site name> | SNMP site name | rtp, building 1, lab 3 |
<NFS VLAN ID> | VLAN for NFS traffic | 3170 |
<Controller A NFS IP> | Controller A NFS IP | 192.168.170.96 |
<NFS Netmask> | NFS netmask | 255.255.255.0 |
<iSCSI-A VLAN ID> | VLAN for iSCSI traffic for fabric A | 3171 |
<Controller A iSCSI-A IP> | Controller A iSCSI A IP | 192.168.171.96 |
<iSCSI-A Netmask> | iSCSI A netmask | 255.255.255.0 |
<iSCSI-B VLAN ID> | VLAN for iSCSI traffic for fabric B | 3172 |
<Controller A iSCSI-B IP> | Controller A iSCSI B IP | 192.168.172.96 |
<iSCSI-B Netmask> | iSCSI B netmask | 255.255.255.0 |
<Controller B NFS IP> | Controller B NFS IP | 192.168.170.97 |
<Controller B iSCSI-A IP> | Controller B iSCSI A IP | 192.168.171.97 |
<Controller B iSCSI-B IP> | Controller B iSCSI B IP | 192.168.172.97 |
<ESXi Host 1 NFS IP> | ESXi host 1 NFS IP | 192.168.170.98 |
<ESXi Host 2 NFS IP> | ESXi host 2 NFS IP | 192.168.170.99 |
<Nexus A Switch name> | Nexus switch A host name | ice5548-1 | Y
<Nexus A mgmt0 IP> | Nexus switch A mgmt0 IP | 192.168.175.69 |
<Nexus A mgmt0 netmask> | Nexus switch A mgmt0 netmask | 255.255.255.0 |
<Nexus A mgmt0 gateway> | Nexus switch A mgmt0 gateway | 192.168.175.1 |
<Nexus B Switch name> | Nexus switch B host name | ice5548-2 | Y
<Nexus B mgmt0 IP> | Nexus switch B mgmt0 IP | 192.168.175.70 |
<Nexus B mgmt0 netmask> | Nexus switch B mgmt0 netmask | 255.255.255.0 |
<Nexus B mgmt0 gateway> | Nexus switch B mgmt0 gateway | 192.168.175.1 |
<MGMT VLAN ID> | VLAN ID for in-band and out-of-band management interfaces | 3175 |
<Native VLAN ID> | VLAN ID for native VLAN | 2 |
<vMotion VLAN ID> | VLAN for vMotion® traffic | 3173 |
<VM-Traffic VLAN ID> | VLAN for VM application traffic | 3174 |
<Nexus vPC domain ID> | Nexus vPC domain ID | 7 |
<VM-Host-Infra-01 IP address> | Infra 01 VM IP | 192.168.175.98 |
<VM-Host-Infra-02 IP address> | Infra 02 VM IP | 192.168.175.99 |
<ESXi Host IP> | ESXi host A and B IPs | Varies by host |
<root password> | Root password for your ESXi environment | ******** |
<Password> | SQL password | ******** |
<Storage Controller A> | Name of storage controller A | ice2240-1a-m | Y
<Storage Controller B> | Name of storage controller B | ice2240-1b-m | Y
<global ssl country> | SSL country for DFM installation | US |
<global ssl state> | SSL state for DFM installation | "North Carolina" |
<global ssl locality> | SSL locality for DFM installation | RTP |
<global ssl org> | SSL organization for DFM installation | NetApp |
<global ssl org unit> | SSL organization unit for DFM installation | ICE |
<global ntap dfm hostname> | Global NetApp DFM host name for DFM installation | icefxp7-vsc-oc |
<ntap admin email address> | NetApp admin e-mail address for DFM installation | JaneDoe@netapp.com |
<ntap snmp password> | DFM SNMP password | ******** |
<ntap autosupport mailhost> | Local mail server | mailhost |
<ntap A hostname> | DFM host name of controller A | ice2240-1a | Y
<ntap B hostname> | DFM host name of controller B | ice2240-1b | Y
<global default password> | Global password | ********* |
<ntap snmp traphosts> | SNMP traphost name | icefxp7-vsc-oc |

Appendix Variable | Description | Variable Value in the Reference Implementation | Provision in DNS
<vCenter Server IP> | vCenter server IP | 192.168.175.213 |
<Primary VSM IP Address> | VSM primary IP | 192.168.175.216 |
<Pkt-Ctrl VLAN ID> | Packet control VLAN ID | 3176 |
<Host Server IP> | ESXi host server IP | Varies by host |
<Root Password> | Root password | ******** |

 


Note In this document, management IPs and host names must be assigned for the following components:


NetApp storage controllers A and B

Cisco UCS Fabric Interconnects A and B and the UCS Cluster

Cisco Nexus 5548s A and B

VMware ESXi Hosts 1 and 2

VMware vCenter SQL Server Virtual Machine

VMware vCenter Virtual Machine

NetApp Virtual Storage Console or OnCommand virtual machine

For all host names except the virtual machine host names, the IP addresses must be preconfigured in the local DNS server. Additionally, the NFS IP addresses of the NetApp storage systems are used to monitor the storage systems from OnCommand DataFabric Manager. In this validation, a management host name was assigned to each storage controller (that is, ice2240-1a-m) and provisioned in DNS. A host name was also assigned for each controller in the NFS VLAN (that is, ice2240-1a) and provisioned in DNS. This NFS VLAN host name was then used when the storage system was added to OnCommand DataFabric Manager.

Deployment

This document describes the steps to deploy base infrastructure components as well as to provision VMware vSphere as the foundation for virtualized workloads. When you finish these deployment steps, you will be prepared to provision applications on top of a VMware virtualized infrastructure. The outlined procedure contains the following steps:

Initial NetApp controller configuration

Initial Cisco UCS configuration

Initial Cisco Nexus configuration

Creation of necessary VLANs for management, basic functionality, and virtualized infrastructure specific to VMware

Creation of necessary vPCs to provide high availability among devices

Creation of necessary service profile pools: MAC, UUID, server, and so forth

Creation of necessary service profile policies: adapter, boot, and so forth

Creation of two service profile templates from the created pools and policies: one each for fabric A and B

Provisioning of two servers from the created service profiles in preparation for OS installation

Initial configuration of the infrastructure components residing on the NetApp controller

Installation of VMware vSphere 5.0

Installation and configuration of VMware vCenter

Enabling of NetApp Virtual Storage Console (VSC)

Configuration of NetApp OnCommand

The VMware vSphere Built On FlexPod With IP-Based Storage architecture is flexible; therefore, the configuration detailed in this section can vary for customer implementations, depending on specific requirements. Although customer implementations can deviate from the following information, the best practices, features, and configurations described in this section should be used as a reference for building a customized VMware vSphere Built On FlexPod With IP-Based Storage solution.

Cabling Information

The information in this section is provided as a reference for cabling the physical equipment in a FlexPod environment. To simplify cabling requirements, the tables include both local and remote device and port locations.

The tables in this section contain details for the prescribed and supported configuration of the NetApp FAS2240-2A running Data ONTAP 8.1. This configuration leverages a dual-port 10GbE adapter and the onboard SAS disk shelves with no additional external storage. For any modifications of this prescribed architecture, consult the NetApp Interoperability Matrix Tool (IMT).

This document assumes that out-of-band management ports are plugged into an existing management infrastructure at the deployment site.

Be sure to follow the cabling directions in this section. Failure to do so will require changes to the deployment procedures that follow, because specific port locations are referenced.

It is possible to order a FAS2240-2A system in a different configuration from what is prescribed in the tables in this section. Before starting, be sure that the configuration matches the descriptions in the tables and diagrams in this section.

Figure 3 shows a FlexPod cabling diagram. The labels indicate connections to end points rather than port numbers on the physical device. For example, connection 1 is an FCoE target port connected from NetApp controller A to Nexus 5548 A. SAS connections 23, 24, 25, and 26 as well as ACP connections 27 and 28 should be connected to the NetApp storage controller and disk shelves according to best practices for the specific storage controller and disk shelf quantity.


Note For disk shelf cabling, refer to the Universal SAS and ACP Cabling Guide at https://library.netapp.com/ecm/ecm_get_file/ECMM1280392.


Figure 3 FlexPod Cabling Diagram

Table 4 Cisco Nexus 5548 A Ethernet Cabling Information


Note For devices added at a later date requiring 1GbE connectivity, use the GbE Copper SFP+s (GLC-T=).


Table 5 Cisco Nexus 5548 B Ethernet Cabling Information


Note For devices added at a later date requiring 1GbE connectivity, use the GbE Copper SFP+s (GLC-T=).


Table 6 NetApp Controller A Ethernet Cabling Information

Table 7 NetApp Controller B Ethernet Cabling Information

Table 8 Cisco UCS Fabric Interconnect A Ethernet Cabling Information

Table 9 Cisco UCS Fabric Interconnect B Ethernet Cabling Information

Table 10 Cisco Nexus 2232PP Fabric Extender A (FEX A)

Table 11 Cisco Nexus 2232PP Fabric Extender B (FEX B)

Table 12 Cisco UCS C Series 1

Table 13 Cisco UCS C Series 2

Table 14 Cisco UCS C Series 3

Table 15 Cisco UCS C Series 4

NetApp FAS2240-2 Deployment Procedure: Part 1

This section provides a detailed procedure for configuring the NetApp FAS2240-2 for use in a VMware vSphere Built On FlexPod With IP-Based Storage solution. These steps should be followed precisely. Failure to do so could result in an improper configuration.


Note The configuration steps described in this section provide guidance for configuring the FAS2240-2 running Data ONTAP 8.1.


Assign Controller Disk Ownership

These steps provide details for assigning disk ownership and disk initialization and verification.


Note Typical best practices should be followed when determining the number of disks to assign to each controller head. You may choose to assign a disproportionate number of disks to a given storage controller in an HA pair, depending on the intended workload.



Note In this reference architecture, half the total number of disks in the environment is assigned to one controller and the remainder to its partner. Divide the number of disks in half and use the result in the following command for <# of disks>.


Controller A

1. If the controller is at a LOADER-A> prompt, enter autoboot to start Data ONTAP. During controller boot, when prompted for Boot Menu, press CTRL-C.

2. At the menu prompt, select option 5 for Maintenance mode boot.

3. If prompted with Continue to boot? enter Yes.

4. Enter ha-config show to verify that the controller and chassis configuration is ha.


Note If either component is not in HA mode, use ha-config modify to put the components in HA mode.


5. Enter disk show. No disks should be assigned to the controller.

6. To determine the total number of disks connected to the storage system, enter disk show -a.

7. Enter disk assign -n <#>. NetApp recommends assigning half the disks to each controller, although workload design could dictate different percentages.

8. Enter halt to reboot the controller.

9. If the controller stops at a LOADER-A> prompt, enter autoboot to start Data ONTAP.

10. During controller boot, when prompted, press CTRL-C.

11. At the menu prompt, select option 4 for "Clean configuration and initialize all disks."

12. The installer asks if you want to zero the disks and install a new file system. Enter y.

13. A warning is displayed that this will erase all of the data on the disks. Enter y to confirm that this is what you want to do.


Note The initialization and creation of the root volume can take 75 minutes or more to complete, depending on the number of disks attached. When initialization is complete, the storage system reboots.
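
The disk-assignment portion of this procedure (steps 6 through 8) reduces to the command sequence shown below. This is a minimal sketch that assumes a 24-disk FAS2240-2A split evenly between controllers, matching the Table 3 reference value of 12 disks per controller; substitute the disk count appropriate to your workload.

disk show -a
disk assign -n 12
halt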


Controller B

1. If the controller is at a LOADER-B> prompt, enter autoboot to start Data ONTAP. During controller boot, when prompted, press CTRL-C for the special boot menu.

2. At the menu prompt, select option 5 for Maintenance mode boot.

3. If prompted with Continue to boot? enter Yes.

4. Enter ha-config show to verify that the controller and chassis configuration is ha.


Note If either component is not in HA mode, use ha-config modify to put the components in HA mode.


5. Enter disk show. No disks should be assigned to this controller.

6. To determine the total number of disks connected to the storage system, enter disk show -a. This will now show the number of remaining unassigned disks connected to the controller.

7. Enter disk assign -n <#> to assign the remaining disks to the controller.

8. Enter halt to reboot the controller.

9. If the controller stops at a LOADER-B> prompt, enter autoboot to start Data ONTAP.

10. During controller boot, when prompted, press CTRL-C for the boot menu.

11. At the menu prompt, select option 4 for "Clean configuration and initialize all disks."

12. The installer asks if you want to zero the disks and install a new file system. Enter y.

13. A warning is displayed that this will erase all of the data on the disks. Enter y to confirm that this is what you want to do.


Note The initialization and creation of the root volume can take 75 minutes or more to complete, depending on the number of disks attached. When initialization is complete, the storage system reboots.


Set Up Data ONTAP 8.1

These steps provide details for setting up Data ONTAP 8.1.

Controller A and Controller B

1. After the disk initialization and the creation of the root volume, Data ONTAP setup begins.

2. Enter the host name of the storage system.

3. Enter n for enabling IPv6.

4. Enter y for configuring interface groups.

5. Enter 1 for the number of interface groups to configure.

6. Name the interface vif0.

7. Enter l to specify the interface as LACP.

8. Enter i to specify IP load balancing.

9. Enter 2 for the number of links for vif0.

10. Enter e1a for the name of the first link.

11. Enter e1b for the name of the second link.

12. Press Enter to accept the blank IP address for vif0.

13. Enter n for interface group vif0 taking over a partner interface.

14. Press Enter to accept the blank IP address for e0a.

15. Enter n for interface e0a taking over a partner interface.

16. Press Enter to accept the blank IP address for e0b.

17. Enter n for interface e0b taking over a partner interface.

18. Press Enter to accept the blank IP address for e0c.

19. Enter n for interface e0c taking over a partner interface.

20. Press Enter to accept the blank IP address for e0d.

21. Enter n for interface e0d taking over a partner interface.

22. Enter the IP address of the out-of-band management interface, e0M.

23. Enter the net mask for e0M.

24. Enter y for interface e0M taking over a partner IP address during failover.

25. Enter e0M for the name of the interface to be taken over.

26. Enter n when prompted to continue setup through the Web interface.

27. Enter the IP address for the default gateway for the storage system.

28. Enter the IP address for the administration host.

29. Enter the local time zone (such as PST, MST, CST, or EST, or Linux time zone format; for example, America/New_York).

30. Enter the location for the storage system.

31. Press Enter to accept the default root directory for HTTP files [/home/http].

32. Enter y to enable DNS resolution.

33. Enter the DNS domain name.

34. Enter the IP address for the first nameserver.

35. Enter n to finish entering DNS servers, or select y to add up to two more DNS servers.

36. Enter n for running the NIS client.

37. Press Enter to acknowledge the AutoSupport™ message.

38. Enter y to configure the SP LAN interface.

39. Enter n for setting up DHCP on the SP LAN interface.

40. Enter the IP address for the SP LAN interface.

41. Enter the net mask for the SP LAN interface.

42. Enter the IP address for the default gateway for the SP LAN interface.

43. Enter the fully qualified domain name for the mail host to receive SP messages and AutoSupport.

44. Enter the IP address for the mail host to receive SP messages and AutoSupport.


Note If you make a mistake during setup, press CTRL-C to get a command prompt, enter setup, and run the setup script again. Alternatively, you can complete the setup script and then enter setup at the end to rerun it.



Note At the end of the setup script, the storage system must be rebooted for changes to take effect.


45. Enter passwd to set the administrative (root) password.

46. Enter the new administrative (root) password.

47. Enter the new administrative (root) password again to confirm.

Install Data ONTAP to Onboard Flash Storage

The following steps describe installing Data ONTAP to the onboard flash storage.

Controller A and Controller B

1. To install the Data ONTAP image to the onboard flash device, enter software install and indicate the http or https Web address of the NetApp Data ONTAP 8.1 flash image; for example, http://192.168.175.5/81_q_image.tgz

2. Enter download and press Enter to download the software to the flash device.
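
For example, using the Web server address shown above (an internal Web server assumed to be hosting the Data ONTAP 8.1 image), the sequence on each controller would look similar to the following sketch:

controller A> software install http://192.168.175.5/81_q_image.tgz
controller A> download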

Harden Storage System Logins and Security

The following steps describe hardening the storage system logins and security.

Controller A and Controller B

1. Enter secureadmin disable ssh.

2. Enter secureadmin setup -f ssh to enable ssh on the storage controller.

3. If prompted, enter yes to rerun ssh setup.

4. Accept the default values for ssh1.x protocol.

5. Enter 1024 for ssh2 protocol.

6. If the information specified is correct, enter yes to create the ssh keys.

7. Enter options telnet.enable off to disable telnet on the storage controller.

8. Enter secureadmin setup ssl to enable ssl on the storage controller.

9. If prompted, enter yes to rerun ssl setup.

10. Enter the country name code, state or province name, locality name, organization name, and organization unit name.

11. Enter the fully qualified domain name of the storage system.

12. Enter the administrator's e-mail address.

13. Accept the default for days until the certificate expires.

14. Enter 1024 for the ssl key length.

15. Enter options httpd.admin.enable off to disable http access to the storage system.

16. Enter options httpd.admin.ssl.enable on to enable secure access to the storage system.
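
For reference, the hardening commands from the preceding steps can be issued in sequence on each controller; the two secureadmin setup commands are interactive and prompt for the values described above:

   secureadmin disable ssh
   secureadmin setup -f ssh
   options telnet.enable off
   secureadmin setup ssl
   options httpd.admin.enable off
   options httpd.admin.ssl.enable on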

Install the Required Licenses

The following steps provide details about storage licenses that are used to enable features in this reference architecture. A variety of licenses come installed with the Data ONTAP 8.1 software.


Note The following licenses are required to deploy this reference architecture:


cluster (cf): To configure storage controllers into an HA pair

iSCSI: To enable the iSCSI protocol

nfs: To enable the NFS protocol

flex_clone: To enable the provisioning of NetApp FlexClone® volumes and files

Controller A and Controller B

1. Enter license add <necessary licenses> to add licenses to the storage system.

2. Enter license to double-check the installed licenses.

3. Enter reboot to reboot the storage controller.

4. Log back in to the storage controller with the root password.

Enable Licensed Features

The following steps provide details for enabling licensed features.

Controller A and Controller B

1. Enter options licensed_feature.multistore.enable on.

2. Enter options licensed_feature.nearstore_option.enable on.

Enable Active-Active Controller Configuration Between Two Storage Systems

This step provides details for enabling active-active controller configuration between the two storage systems.

Controller A only

1. Enter cf enable and press Enter to enable active-active controller configuration.

Start iSCSI

This step provides details for enabling the iSCSI protocol.

Controller A and Controller B

1. Enter iscsi start.

Set Up Storage System NTP Time Synchronization and CDP Enablement

The following steps provide details for setting up storage system NTP time synchronization and enabling Cisco Discovery Protocol (CDP).

Controller A and Controller B

1. Enter date CCyymmddhhmm to set the storage system to the current time, where CCyy is the four-digit year, mm is the two-digit month, dd is the two-digit day of the month, hh is the two-digit hour, and the final mm is the two-digit minute.

2. Enter options timed.proto ntp to synchronize with an NTP server.

3. Enter options timed.servers <NTP server IP> to add the NTP server to the storage system list.

4. Enter options timed.enable on to enable NTP synchronization on the storage system.

5. Enter options cdpd.enable on.
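
As an example, the following sequence sets the clock and enables NTP and CDP; the date string (July 1, 2012, 14:30) and the NTP server address are placeholder values only:

   date 201207011430
   options timed.proto ntp
   options timed.servers 192.168.175.5
   options timed.enable on
   options cdpd.enable on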

Create Data Aggregate aggr1

The following step provides details for creating the data aggregate aggr1.


Note In most cases, the following command finishes quickly, but depending on the state of each disk, it might be necessary to zero some or all of the disks in order to add them to the aggregate. This could take up to 60 minutes to complete.


Controller A

1. Enter aggr create aggr1 -B 64 <# of disks for aggr1> to create aggr1 on the storage controller.

Controller B

1. Enter aggr create aggr1 -B 64 <# of disks for aggr1> to create aggr1 on the storage controller.
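
For example, assuming ten data disks are available to each controller (a hypothetical count; size the aggregate for your configuration), the command on each controller would be:

   aggr create aggr1 -B 64 10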

Create an SNMP Requests Role and Assign SNMP Login Privileges

This step provides details for creating the SNMP request role and assigning SNMP login privileges to it.

Controller A and Controller B

1. Run the following command: useradmin role add <ntap SNMP request role> -a login-snmp.

Create an SNMP Management Group and Assign an SNMP Request Role

This step provides details for creating an SNMP management group and assigning an SNMP request role to it.

Controller A and Controller B

1. Run the following command: useradmin group add <ntap SNMP managers> -r <ntap SNMP request role>.

Create an SNMP User and Assign It to an SNMP Management Group

This step provides details for creating an SNMP user and assigning it to an SNMP management group.

Controller A and Controller B

1. Run the following command: useradmin user add <ntap SNMP users> -g <ntap SNMP managers>.


Note After the user is created, the system prompts for a password. Enter the SNMP password.


Set Up SNMP v1 Communities on Storage Controllers

These steps provide details for setting up SNMP v1 communities on the storage controllers so that OnCommand System Manager can be used.

Controller A and Controller B

1. Run the following command: snmp community delete all.

2. Run the following command: snmp community add ro <ntap SNMP community>.

Set Up SNMP Contact Information for Each Storage Controller

This step provides details for setting SNMP contact information for each of the storage controllers.

Controller A and Controller B

1. Run the following command: snmp contact <ntap admin email address>.

Set SNMP Location Information for Each Storage Controller

This step provides details for setting SNMP location information for each of the storage controllers.

Controller A and Controller B

1. Run the following command: snmp location <ntap SNMP site name>.

Reinitialize SNMP on Storage Controllers

This step provides details for reinitializing SNMP on the storage controllers.

Controller A and Controller B

1. Run the following command: snmp init 1.
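
Taken together, the SNMP configuration in the preceding sections resembles the following sequence; the role, group, user, community, contact, and location values shown are placeholders to be replaced with site-specific values:

   useradmin role add snmp_requests -a login-snmp
   useradmin group add snmp_managers -r snmp_requests
   useradmin user add snmp_monitor -g snmp_managers
   snmp community delete all
   snmp community add ro FlexPod
   snmp contact storageadmin@example.com
   snmp location DataCenter1
   snmp init 1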

Initialize NDMP on the Storage Controllers

This step provides details for initializing NDMP.

Controller A and Controller B

1. Run the following command: ndmpd on.

Set 10GbE Flow Control and Add VLAN Interfaces

These steps provide details for adding VLAN interfaces on the storage controllers.

Controller A

1. Run the following command: ifconfig e1a flowcontrol none.

2. Run the following command: wrfile -a /etc/rc ifconfig e1a flowcontrol none.

3. Run the following command: ifconfig e1b flowcontrol none.

4. Run the following command: wrfile -a /etc/rc ifconfig e1b flowcontrol none.

5. Run the following command: vlan create vif0 <NFS VLAN ID>.

6. Run the following command: wrfile -a /etc/rc vlan create vif0 <NFS VLAN ID>.

7. Run the following command: ifconfig vif0-<NFS VLAN ID> <Controller A NFS IP> netmask <NFS Netmask> mtusize 9000 partner vif0-<NFS VLAN ID>.

8. Run the following command: wrfile -a /etc/rc ifconfig vif0-<NFS VLAN ID> <Controller A NFS IP> netmask <NFS Netmask> mtusize 9000 partner vif0-<NFS VLAN ID>.

9. Run the following command: vlan add vif0 <iSCSI-A VLAN ID>.

10. Run the following command: wrfile -a /etc/rc vlan add vif0 <iSCSI-A VLAN ID>.

11. Run the following command: ifconfig vif0-<iSCSI-A VLAN ID> <Controller A iSCSI-A IP> netmask <iSCSI-A Netmask> mtusize 9000 partner vif0-<iSCSI-A VLAN ID>.

12. Run the following command: wrfile -a /etc/rc ifconfig vif0-<iSCSI-A VLAN ID> <Controller A iSCSI-A IP> netmask <iSCSI-A Netmask> mtusize 9000 partner vif0-<iSCSI-A VLAN ID>.

13. Run the following command: vlan add vif0 <iSCSI-B VLAN ID>.

14. Run the following command: wrfile -a /etc/rc vlan add vif0 <iSCSI-B VLAN ID>.

15. Run the following command: ifconfig vif0-<iSCSI-B VLAN ID> <Controller A iSCSI-B IP> netmask <iSCSI-B Netmask> mtusize 9000 partner vif0-<iSCSI-B VLAN ID>.

16. Run the following command: wrfile -a /etc/rc ifconfig vif0-<iSCSI-B VLAN ID> <Controller A iSCSI-B IP> netmask <iSCSI-B Netmask> mtusize 9000 partner vif0-<iSCSI-B VLAN ID>.

17. Run the following command to verify additions to the /etc/rc file: rdfile /etc/rc.
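
The following example shows the resulting interface configuration for Controller A, using hypothetical VLAN IDs (3170 for NFS, 3172 for iSCSI-A, 3173 for iSCSI-B) and sample IP addresses; as described in the steps above, each command is also appended to /etc/rc with wrfile -a so that it persists across reboots:

   ifconfig e1a flowcontrol none
   ifconfig e1b flowcontrol none
   vlan create vif0 3170
   ifconfig vif0-3170 192.168.170.101 netmask 255.255.255.0 mtusize 9000 partner vif0-3170
   vlan add vif0 3172
   ifconfig vif0-3172 192.168.172.101 netmask 255.255.255.0 mtusize 9000 partner vif0-3172
   vlan add vif0 3173
   ifconfig vif0-3173 192.168.173.101 netmask 255.255.255.0 mtusize 9000 partner vif0-3173
   rdfile /etc/rc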

Controller B

1. Run the following command: ifconfig e1a flowcontrol none.

2. Run the following command: wrfile -a /etc/rc ifconfig e1a flowcontrol none.

3. Run the following command: ifconfig e1b flowcontrol none.

4. Run the following command: wrfile -a /etc/rc ifconfig e1b flowcontrol none.

5. Run the following command: vlan create vif0 <NFS VLAN ID>.

6. Run the following command: wrfile -a /etc/rc vlan create vif0 <NFS VLAN ID>.

7. Run the following command: ifconfig vif0-<NFS VLAN ID> <Controller B NFS IP> netmask <NFS Netmask> mtusize 9000 partner vif0-<NFS VLAN ID>.

8. Run the following command: wrfile -a /etc/rc ifconfig vif0-<NFS VLAN ID> <Controller B NFS IP> netmask <NFS Netmask> mtusize 9000 partner vif0-<NFS VLAN ID>.

9. Run the following command: vlan add vif0 <iSCSI-A VLAN ID>.

10. Run the following command: wrfile -a /etc/rc vlan add vif0 <iSCSI-A VLAN ID>.

11. Run the following command: ifconfig vif0-<iSCSI-A VLAN ID> <Controller B iSCSI-A IP> netmask <iSCSI-A Netmask> mtusize 9000 partner vif0-<iSCSI-A VLAN ID>.

12. Run the following command: wrfile -a /etc/rc ifconfig vif0-<iSCSI-A VLAN ID> <Controller B iSCSI-A IP> netmask <iSCSI-A Netmask> mtusize 9000 partner vif0-<iSCSI-A VLAN ID>.

13. Run the following command: vlan add vif0 <iSCSI-B VLAN ID>.

14. Run the following command: wrfile -a /etc/rc vlan add vif0 <iSCSI-B VLAN ID>.

15. Run the following command: ifconfig vif0-<iSCSI-B VLAN ID> <Controller B iSCSI-B IP> netmask <iSCSI-B Netmask> mtusize 9000 partner vif0-<iSCSI-B VLAN ID>.

16. Run the following command: wrfile -a /etc/rc ifconfig vif0-<iSCSI-B VLAN ID> <Controller B iSCSI-B IP> netmask <iSCSI-B Netmask> mtusize 9000 partner vif0-<iSCSI-B VLAN ID>.

17. Run the following command to verify additions to the /etc/rc file: rdfile /etc/rc.

Add Infrastructure Volumes

The following steps describe adding volumes on the storage controller for SAN boot of the Cisco UCS hosts as well as virtual machine provisioning.


Note In this reference architecture, controller A houses the boot LUNs for the VMware hypervisor as well as the swap files, while controller B houses the first datastore for virtual machines.


Controller A

1. Run the following command: vol create esxi_boot -s none aggr1 100g.

2. Run the following command: sis on /vol/esxi_boot.

3. Run the following command: vol create infra_swap -s none aggr1 100g.

4. Run the following command: snap sched infra_swap 0 0 0.

5. Run the following command: snap reserve infra_swap 0.

Controller B

1. Run the following command: vol create infra_datastore_1 -s none aggr1 500g.

2. Run the following command: sis on /vol/infra_datastore_1.

Export NFS Infrastructure Volumes to ESXi Servers

These steps provide details for setting up NFS exports of the infrastructure volumes to the VMware ESXi servers.

Controller A

1. Run the following command: exportfs -p rw=<ESXi Host 1 NFS IP>:<ESXi Host 2 NFS IP>,root=<ESXi Host 1 NFS IP>:<ESXi Host 2 NFS IP> /vol/infra_swap.

2. Run the following command: exportfs. Verify that the NFS exports are set up correctly.

Controller B

1. Run the following command: exportfs -p rw=<ESXi Host 1 NFS IP>:<ESXi Host 2 NFS IP>,root=<ESXi Host 1 NFS IP>:<ESXi Host 2 NFS IP> /vol/infra_datastore_1.

2. Run the following command: exportfs. Verify that the NFS exports are set up correctly.
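
For example, with hypothetical ESXi host NFS addresses of 192.168.170.61 and 192.168.170.62, the export on Controller B would be:

   exportfs -p rw=192.168.170.61:192.168.170.62,root=192.168.170.61:192.168.170.62 /vol/infra_datastore_1
   exportfs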

Cisco Nexus 5548 Deployment Procedure

The following section provides a detailed procedure for configuring the Cisco Nexus 5548 switches for use in a FlexPod environment. Follow these steps precisely because failure to do so could result in an improper configuration.


Note The configuration steps detailed in this section provide guidance for configuring the Cisco Nexus 5548UP running release 5.1(3)N2(1). This configuration also uses the native VLAN on the trunk ports to discard untagged packets: the native VLAN is set on each PortChannel but is not included in that PortChannel's allowed VLAN list.


Set up Initial Cisco Nexus 5548 Switch

These steps provide details for the initial Cisco Nexus 5548 Switch setup.

Cisco Nexus 5548 A

On initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start.

1. Enter yes to enforce secure password standards.

2. Enter the password for the admin user.

3. Enter the password a second time to commit the password.

4. Enter yes to enter the basic configuration dialog.

5. Create another login account (yes/no) [n]: Enter.

6. Configure read-only SNMP community string (yes/no) [n]: Enter.

7. Configure read-write SNMP community string (yes/no) [n]: Enter.

8. Enter the switch name: <Nexus A Switch name> Enter.

9. Continue with out-of-band (mgmt0) management configuration? (yes/no) [y]: Enter.

10. Mgmt0 IPv4 address: <Nexus A mgmt0 IP> Enter.

11. Mgmt0 IPv4 netmask: <Nexus A mgmt0 netmask> Enter.

12. Configure the default gateway? (yes/no) [y]: Enter.

13. IPv4 address of the default gateway: <Nexus A mgmt0 gateway> Enter.

14. Enable the telnet service? (yes/no) [n]: Enter.

15. Enable the ssh service? (yes/no) [y]: Enter.

16. Type of ssh key you would like to generate (dsa/rsa):rsa.

17. Number of key bits <768-2048> :1024 Enter.

18. Configure the ntp server? (yes/no) [y]: Enter.

19. NTP server IPv4 address: <NTP Server IP> Enter.

20. Enter basic FC configurations (yes/no) [n]: Enter.

21. Would you like to edit the configuration? (yes/no) [n]: Enter.

22. Be sure to review the configuration summary before enabling it.

23. Use this configuration and save it? (yes/no) [y]: Enter.

24. Configuration may be continued from the console or by using SSH. To use SSH, connect to the mgmt0 address of Nexus A. It is recommended to continue setup via the console or serial port.

25. Log in as user admin with the password previously entered.

Cisco Nexus 5548 B

On initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start.

1. Enter yes to enforce secure password standards.

2. Enter the password for the admin user.

3. Enter the password a second time to commit the password.

4. Enter yes to enter the basic configuration dialog.

5. Create another login account (yes/no) [n]: Enter.

6. Configure read-only SNMP community string (yes/no) [n]: Enter.

7. Configure read-write SNMP community string (yes/no) [n]: Enter.

8. Enter the switch name: <Nexus B Switch name> Enter.

9. Continue with out-of-band (mgmt0) management configuration? (yes/no) [y]: Enter.

10. Mgmt0 IPv4 address: <Nexus B mgmt0 IP> Enter.

11. Mgmt0 IPv4 netmask: <Nexus B mgmt0 netmask> Enter.

12. Configure the default gateway? (yes/no) [y]: Enter.

13. IPv4 address of the default gateway: <Nexus B mgmt0 gateway> Enter.

14. Enable the telnet service? (yes/no) [n]: Enter.

15. Enable the ssh service? (yes/no) [y]: Enter.

16. Type of ssh key you would like to generate (dsa/rsa):rsa.

17. Number of key bits <768-2048>:1024 Enter.

18. Configure the ntp server? (yes/no) [y]: Enter.

19. NTP server IPv4 address: <NTP Server IP> Enter.

20. Enter basic FC configurations (yes/no) [n]: Enter.

21. Would you like to edit the configuration? (yes/no) [n]: Enter.

22. Be sure to review the configuration summary before enabling it.

23. Use this configuration and save it? (yes/no) [y]: Enter.

24. Configuration may be continued from the console or by using SSH. To use SSH, connect to the mgmt0 address of Nexus B. It is recommended to continue setup via the console or serial port.

25. Log in as user admin with the password previously entered.

Enable Appropriate Cisco Nexus Features

These steps provide details for enabling the appropriate Cisco Nexus features.

Nexus A and Nexus B

1. Type config t to enter the global configuration mode.

2. Type feature lacp.

3. Type feature vpc.

Set Global Configurations

These steps provide details for setting global configurations.

Nexus A and Nexus B

1. From the global configuration mode, type spanning-tree port type network default to ensure that, by default, ports are treated as network ports with regard to spanning tree.

2. Type spanning-tree port type edge bpduguard default to enable bpduguard on all edge ports by default.

3. Type spanning-tree port type edge bpdufilter default to enable bpdufilter on all edge ports by default.

4. Type policy-map type network-qos jumbo.

5. Type class type network-qos class-default.

6. Type mtu 9000.

7. Type exit.

8. Type exit.

9. Type system qos.

10. Type service-policy type network-qos jumbo.

11. Type exit.

12. Type copy run start.
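
The preceding steps are equivalent to the following NX-OS configuration excerpt, shown here for reference:

   config t
   policy-map type network-qos jumbo
     class type network-qos class-default
       mtu 9000
       exit
     exit
   system qos
     service-policy type network-qos jumbo
     exit
   spanning-tree port type network default
   spanning-tree port type edge bpduguard default
   spanning-tree port type edge bpdufilter default
   copy run start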

Create Necessary VLANs

These steps provide details for creating the necessary VLANs.

Nexus A and Nexus B

1. Type vlan <MGMT VLAN ID>.

2. Type name MGMT-VLAN.

3. Type exit.

4. Type vlan <Native VLAN ID>.

5. Type name Native-VLAN.

6. Type exit.

7. Type vlan <NFS VLAN ID>.

8. Type name NFS-VLAN.

9. Type exit.

10. Type vlan <iSCSI-A VLAN ID>.

11. Type name iSCSI-A-VLAN.

12. Type exit.

13. Type vlan <iSCSI-B VLAN ID>.

14. Type name iSCSI-B-VLAN.

15. Type exit.

16. Type vlan <vMotion VLAN ID>.

17. Type name vMotion-VLAN.

18. Type exit.

19. Type vlan <VM-Traffic VLAN ID>.

20. Type name VM-Traffic-VLAN.

21. Type exit.
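
Using hypothetical VLAN IDs (3175 management, 2 native, 3170 NFS, 3172 iSCSI-A, 3173 iSCSI-B, 3171 vMotion, 3174 VM traffic), the resulting configuration on each switch would resemble the following:

   vlan 3175
     name MGMT-VLAN
   vlan 2
     name Native-VLAN
   vlan 3170
     name NFS-VLAN
   vlan 3172
     name iSCSI-A-VLAN
   vlan 3173
     name iSCSI-B-VLAN
   vlan 3171
     name vMotion-VLAN
   vlan 3174
     name VM-Traffic-VLAN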

Add Individual Port Descriptions for Troubleshooting

These steps provide details for adding individual port descriptions for troubleshooting activity and verification.

Cisco Nexus 5548 A

1. From the global configuration mode, type interface Eth1/1.

2. Type description <Controller A:e1a>.

3. Type exit.

4. Type interface Eth1/2.

5. Type description <Controller B:e1a>.

6. Type exit.

7. Type interface Eth1/5.

8. Type description <Nexus B:Eth1/5>.

9. Type exit.

10. Type interface Eth1/6.

11. Type description <Nexus B:Eth1/6>.

12. Type exit.

13. Type interface Eth1/3.

14. Type description <UCSM A:Eth1/19>.

15. Type exit.

16. Type interface Eth1/4.

17. Type description <UCSM B:Eth1/19>.

18. Type exit.

Cisco Nexus 5548 B

1. From the global configuration mode, type interface Eth1/1.

2. Type description <Controller A:e1b>.

3. Type exit.

4. Type interface Eth1/2.

5. Type description <Controller B:e1b>.

6. Type exit.

7. Type interface Eth1/5.

8. Type description <Nexus A:Eth1/5>.

9. Type exit.

10. Type interface Eth1/6.

11. Type description <Nexus A:Eth1/6>.

12. Type exit.

13. Type interface Eth1/3.

14. Type description <UCSM A:Eth1/20>.

15. Type exit.

16. Type interface Eth1/4.

17. Type description <UCSM B:Eth1/20>.

18. Type exit.

Create Necessary PortChannels

These steps provide details for creating the necessary PortChannels between devices.

Cisco Nexus 5548 A

1. From the global configuration mode, type interface Po10.

2. Type description vPC peer-link.

3. Type exit.

4. Type interface Eth1/5-6.

5. Type channel-group 10 mode active.

6. Type no shutdown.

7. Type exit.

8. Type interface Po11.

9. Type description <Controller A>.

10. Type exit.

11. Type interface Eth1/1.

12. Type channel-group 11 mode active.

13. Type no shutdown.

14. Type exit.

15. Type interface Po12.

16. Type description <Controller B>.

17. Type exit.

18. Type interface Eth1/2.

19. Type channel-group 12 mode active.

20. Type no shutdown.

21. Type exit.

22. Type interface Po13.

23. Type description <UCSM A>.

24. Type exit.

25. Type interface Eth1/3.

26. Type channel-group 13 mode active.

27. Type no shutdown.

28. Type exit.

29. Type interface Po14.

30. Type description <UCSM B>.

31. Type exit.

32. Type interface Eth1/4.

33. Type channel-group 14 mode active.

34. Type no shutdown.

35. Type exit.

36. Type copy run start.
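
On Cisco Nexus 5548 A, the preceding steps produce a configuration similar to the following excerpt; Po12 (Controller B, Eth1/2), Po13 (UCSM A, Eth1/3), and Po14 (UCSM B, Eth1/4) follow the same pattern as Po11:

   interface Po10
     description vPC peer-link
   interface Eth1/5-6
     channel-group 10 mode active
     no shutdown
   interface Po11
     description <Controller A>
   interface Eth1/1
     channel-group 11 mode active
     no shutdown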

Cisco Nexus 5548 B

1. From the global configuration mode, type interface Po10.

2. Type description vPC peer-link.

3. Type exit.

4. Type interface Eth1/5-6.

5. Type channel-group 10 mode active.

6. Type no shutdown.

7. Type exit.

8. Type interface Po11.

9. Type description <Controller A>.

10. Type exit.

11. Type interface Eth1/1.

12. Type channel-group 11 mode active.

13. Type no shutdown.

14. Type exit.

15. Type interface Po12.

16. Type description <Controller B>.

17. Type exit.

18. Type interface Eth1/2.

19. Type channel-group 12 mode active.

20. Type no shutdown.

21. Type exit.

22. Type interface Po13.

23. Type description <UCSM A>.

24. Type exit.

25. Type interface Eth1/3.

26. Type channel-group 13 mode active.

27. Type no shutdown.

28. Type exit.

29. Type interface Po14.

30. Type description <UCSM B>.

31. Type exit.

32. Type interface Eth1/4.

33. Type channel-group 14 mode active.

34. Type no shutdown.

35. Type exit.

36. Type copy run start.

Add PortChannel Configurations

These steps provide details for adding PortChannel configurations.

Cisco Nexus 5548 A

1. From the global configuration mode, type interface Po10.

2. Type switchport mode trunk.

3. Type switchport trunk native vlan <Native VLAN ID>.

4. Type switchport trunk allowed vlan <MGMT VLAN ID, NFS VLAN ID, iSCSI-A VLAN ID, iSCSI-B VLAN ID, vMotion VLAN ID, VM-Traffic VLAN ID>.

5. Type spanning-tree port type network.

6. Type no shutdown.

7. Type exit.

8. Type interface Po11.

9. Type switchport mode trunk.

10. Type switchport trunk native vlan <Native VLAN ID>.

11. Type switchport trunk allowed vlan <NFS VLAN ID, iSCSI-A VLAN ID, iSCSI-B VLAN ID>.

12. Type spanning-tree port type edge trunk.

13. Type no shutdown.

14. Type exit.

15. Type interface Po12.

16. Type switchport mode trunk.

17. Type switchport trunk native vlan <Native VLAN ID>.

18. Type switchport trunk allowed vlan <NFS VLAN ID, iSCSI-A VLAN ID, iSCSI-B VLAN ID>.

19. Type spanning-tree port type edge trunk.

20. Type no shutdown.

21. Type exit.

22. Type interface Po13.

23. Type switchport mode trunk.

24. Type switchport trunk native vlan <Native VLAN ID>.

25. Type switchport trunk allowed vlan <MGMT VLAN ID, NFS VLAN ID, iSCSI-A VLAN ID, vMotion VLAN ID, VM-Traffic VLAN ID>.

26. Type spanning-tree port type edge trunk.

27. Type no shutdown.

28. Type exit.

29. Type interface Po14.

30. Type switchport mode trunk.

31. Type switchport trunk native vlan <Native VLAN ID>.

32. Type switchport trunk allowed vlan <MGMT VLAN ID, NFS VLAN ID, iSCSI-B VLAN ID, vMotion VLAN ID, VM-Traffic VLAN ID>.

33. Type spanning-tree port type edge trunk.

34. Type no shutdown.

35. Type exit.

36. Type copy run start.
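
As an illustration, with the hypothetical VLAN IDs used earlier (2 native, 3170 NFS, 3171 vMotion, 3172 iSCSI-A, 3173 iSCSI-B, 3174 VM traffic, 3175 management), the vPC peer-link and the PortChannel to Controller A would be configured as follows; the remaining PortChannels differ only in their allowed VLAN lists, as given in the steps above:

   interface Po10
     switchport mode trunk
     switchport trunk native vlan 2
     switchport trunk allowed vlan 3170-3175
     spanning-tree port type network
     no shutdown
   interface Po11
     switchport mode trunk
     switchport trunk native vlan 2
     switchport trunk allowed vlan 3170,3172,3173
     spanning-tree port type edge trunk
     no shutdown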

Cisco Nexus 5548 B

1. From the global configuration mode, type interface Po10.

2. Type switchport mode trunk.

3. Type switchport trunk native vlan <Native VLAN ID>.

4. Type switchport trunk allowed vlan <MGMT VLAN ID, NFS VLAN ID, iSCSI-A VLAN ID, iSCSI-B VLAN ID, vMotion VLAN ID, VM-Traffic VLAN ID>.

5. Type spanning-tree port type network.

6. Type no shutdown.

7. Type exit.

8. Type interface Po11.

9. Type switchport mode trunk.

10. Type switchport trunk native vlan <Native VLAN ID>.

11. Type switchport trunk allowed vlan <NFS VLAN ID, iSCSI-A VLAN ID, iSCSI-B VLAN ID>.

12. Type spanning-tree port type edge trunk.

13. Type no shutdown.

14. Type exit.

15. Type interface Po12.

16. Type switchport mode trunk.

17. Type switchport trunk native vlan <Native VLAN ID>.

18. Type switchport trunk allowed vlan <NFS VLAN ID, iSCSI-A VLAN ID, iSCSI-B VLAN ID>.

19. Type spanning-tree port type edge trunk.

20. Type no shutdown.

21. Type exit.

22. Type interface Po13.

23. Type switchport mode trunk.

24. Type switchport trunk native vlan <Native VLAN ID>.

25. Type switchport trunk allowed vlan <MGMT VLAN ID, NFS VLAN ID, iSCSI-A VLAN ID, vMotion VLAN ID, VM-Traffic VLAN ID>.

26. Type spanning-tree port type edge trunk.

27. Type no shutdown.

28. Type exit.

29. Type interface Po14.

30. Type switchport mode trunk.

31. Type switchport trunk native vlan <Native VLAN ID>.

32. Type switchport trunk allowed vlan <MGMT VLAN ID, NFS VLAN ID, iSCSI-B VLAN ID, vMotion VLAN ID, VM-Traffic VLAN ID>.

33. Type spanning-tree port type edge trunk.

34. Type no shutdown.

35. Type exit.

36. Type copy run start.

Configure Virtual PortChannels

These steps provide details for configuring virtual PortChannels (vPCs).

Cisco Nexus 5548 A

1. From the global configuration mode, type vpc domain <Nexus vPC domain ID>.

2. Type role priority 10.

3. Type peer-keepalive destination <Nexus B mgmt0 IP> source <Nexus A mgmt0 IP>.

4. Type exit.

5. Type interface Po10.

6. Type vpc peer-link.

7. Type exit.

8. Type interface Po11.

9. Type vpc 11.

10. Type exit.

11. Type interface Po12.

12. Type vpc 12.

13. Type exit.

14. Type interface Po13.

15. Type vpc 13.

16. Type exit.

17. Type interface Po14.

18. Type vpc 14.

19. Type exit.

20. Type copy run start.
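
For reference, the vPC configuration on Cisco Nexus 5548 A resembles the following excerpt; the domain ID of 23 is a placeholder, and the mgmt0 addresses are the values assigned during initial switch setup:

   vpc domain 23
     role priority 10
     peer-keepalive destination <Nexus B mgmt0 IP> source <Nexus A mgmt0 IP>
   interface Po10
     vpc peer-link
   interface Po11
     vpc 11
   interface Po12
     vpc 12
   interface Po13
     vpc 13
   interface Po14
     vpc 14
   copy run start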

Cisco Nexus 5548 B

1. From the global configuration mode, type vpc domain <Nexus vPC domain ID>.

2. Type role priority 20.

3. Type peer-keepalive destination <Nexus A mgmt0 IP> source <Nexus B mgmt0 IP>.

4. Type exit.

5. Type interface Po10.

6. Type vpc peer-link.

7. Type exit.

8. Type interface Po11.

9. Type vpc 11.

10. Type exit.

11. Type interface Po12.

12. Type vpc 12.

13. Type exit.

14. Type interface Po13.

15. Type vpc 13.

16. Type exit.

17. Type interface Po14.

18. Type vpc 14.

19. Type exit.

20. Type copy run start.

Uplink Into Existing Network Infrastructure

Depending on the available network infrastructure, several methods and features can be used to uplink the FlexPod environment. If an existing Cisco Nexus environment is present, it is recommended to use virtual port channels to uplink the Cisco Nexus 5548 switches included in the FlexPod environment into the infrastructure. The previously described procedures can be used to create an uplink vPC to the existing environment. Make sure to type copy run start to save the configuration on each switch once configuration is completed.

Cisco Unified Computing System Deployment Procedure

The following section provides a detailed procedure for configuring the Cisco Unified Computing System for use in a FlexPod environment. These steps should be followed precisely because a failure to do so could result in an improper configuration.

Perform Initial Setup of Cisco UCS C-Series Servers

These steps provide details for initial setup of the Cisco UCS C-Series Servers. It is important to get the systems to a known state with the appropriate firmware package.

All C-Series Servers

1. From a system connected to the Internet, download the appropriate Cisco UCS Host Upgrade Utility Release 1.4(3c) for your C-Series servers from www.cisco.com. Navigate to Downloads Home > Products > Unified Computing and Servers > Cisco UCS C-Series Rack-Mount Standalone Server Software.

2. After downloading the Host Upgrade Utility package, burn the contents to a recordable CD/DVD.

3. Power on the C-Series server and insert the media into the C-Series server.

4. Monitor the system as it proceeds through Power On Self Test (POST).

5. Enter F8 to enter the CIMC Config.

6. Upon entering the CIMC Configuration Utility, select the box to return the CIMC to its factory defaults.

7. Enter F10 to save.

8. Enter F10 to confirm the configuration and the system will reset the CIMC to its factory defaults and automatically reboot.

9. Monitor the system as it proceeds through POST.

10. Enter F6 to enter the Boot Selection Menu.

11. Select the SATA DVD Drive when prompted and the server will boot into the upgrade utility.

12. Press Y in the C-Series Host Based Upgrade utility screen to acknowledge the Cisco EULA.

13. Select option 8 to upgrade all of the upgradeable items installed.

14. The system will then begin updating the various components, a process that can take 10 to 15 minutes.

15. If the system displays a note stating that the current version of the LOM is equal to the upgrade version and asks whether to continue, enter Y.

16. Press any key to acknowledge completion of the updates.

17. Select option 10 to reboot the server.

18. Eject the upgrade media from the server.

Perform Initial Setup of the Cisco UCS 6248 Fabric Interconnects

These steps provide details for the initial setup of the Cisco UCS 6248 fabric interconnects.

Cisco UCS 6248 A

1. Connect to the console port on the first Cisco UCS 6248 fabric interconnect.

2. At the prompt to enter the configuration method, enter console to continue.

3. If asked to either do a new setup or restore from backup, enter setup to continue.

4. Enter y to continue to set up a new fabric interconnect.

5. Enter y to enforce strong passwords.

6. Enter the password for the admin user.

7. Enter the same password again to confirm the password for the admin user.

8. When asked if this fabric interconnect is part of a cluster, answer y to continue.

9. Enter A for the switch fabric.

10. Enter the cluster name for the system name.

11. Enter the Mgmt0 IPv4 address.

12. Enter the Mgmt0 IPv4 netmask.

13. Enter the IPv4 address of the default gateway.

14. Enter the cluster IPv4 address.

15. To configure DNS, answer y.

16. Enter the DNS IPv4 address.

17. Answer y to set up the default domain name.

18. Enter the default domain name.

19. Review the settings that were printed to the console, and if they are correct, answer yes to save the configuration.

20. Wait for the login prompt to make sure the configuration has been saved.

Cisco UCS 6248 B

1. Connect to the console port on the second Cisco UCS 6248 fabric interconnect.

2. When prompted to enter the configuration method, enter console to continue.

3. The installer detects the presence of the partner fabric interconnect and adds this fabric interconnect to the cluster. Enter y to continue the installation.

4. Enter the admin password for the first fabric interconnect.

5. Enter the Mgmt0 IPv4 address.

6. Answer yes to save the configuration.

7. Wait for the login prompt to confirm that the configuration has been saved.

Log Into Cisco UCS Manager

Cisco UCS Manager

These steps provide details for logging into the Cisco UCS environment.

1. Open a Web browser and navigate to the Cisco UCS 6248 fabric interconnect cluster address.

2. Select the Launch link to download the Cisco UCS Manager software.

3. If prompted to accept security certificates, accept as necessary.

4. When prompted, enter admin for the username and enter the administrative password and click Login to log in to the Cisco UCS Manager software.

Upgrade Cisco UCS Manager Software to Version 2.0(2m)

This document assumes the use of Cisco UCS Manager 2.0(2m). Refer to Upgrading Between Cisco UCS 2.0 Releases to upgrade the Cisco UCS Manager software and Cisco UCS 6248 Fabric Interconnect software to version 2.0(2m). Also, make sure that the Cisco UCS C-Series version 2.0(2m) software bundle is loaded on the fabric interconnects.

Add a Block of IP Addresses for KVM Access

These steps provide details for creating a block of KVM IP addresses for server access in the Cisco UCS environment. This block of IP addresses should be in the same subnet as the management IP addresses for the Cisco UCS Manager.

Cisco UCS Manager

1. Select the Admin tab at the top of the left window.

2. Select All > Communication Management.

3. Right-click Management IP Pool.

4. Select Create Block of IP Addresses.

5. Enter the starting IP address of the block and number of IPs needed as well as the subnet and gateway information.

6. Click OK to create the IP block.

7. Click OK in the message box.

Synchronize Cisco UCS to NTP

These steps provide details for synchronizing the Cisco UCS environment to the NTP server.

Cisco UCS Manager

1. Select the Admin tab at the top of the left window.

2. Select All > Timezone Management.

3. In the right pane, select the appropriate timezone in the Timezone drop-down menu.

4. Click Save Changes and then OK.

5. Click Add NTP Server.

6. Input the NTP server IP and click OK.

7. Click OK.

Edit the Chassis Discovery Policy

These steps provide details for modifying the chassis discovery policy. Setting the discovery policy now will simplify the addition of future Cisco B-Series UCS Chassis and additional fabric extenders for further Cisco UCS C-Series connectivity.

Cisco UCS Manager

1. Navigate to the Equipment tab in the left pane.

2. In the right pane, click the Policies tab.

3. Under Global Policies, change the Chassis Discovery Policy to 2-link.

4. Change the Link Grouping Preference to Port Channel.

5. Click Save Changes in the bottom right corner.

6. Click OK.

Enable Server and Uplink Ports

These steps provide details for enabling server and uplink ports.

Cisco UCS Manager

1. Select the Equipment tab on the top left of the window.

2. Select Equipment > Fabric Interconnects > Fabric Interconnect A (primary) > Fixed Module.

3. Expand the Unconfigured Ethernet Ports section.

4. Select the ports that are connected to the Cisco 2232 FEXes (two per FEX), right-click them, and select Configure as Server Port.

5. A prompt displays asking if this is what you want to do. Click Yes, then OK to continue.

6. Select ports 19 and 20 that are connected to the Cisco Nexus 5548 switches, right-click them, and select Configure as Uplink Port.

7. A prompt displays asking if this is what you want to do. Click Yes, then OK to continue.

8. Select Equipment > Fabric Interconnects > Fabric Interconnect B (subordinate) > Fixed Module.

9. Expand the Unconfigured Ethernet Ports section.

10. Select the ports that are connected to the Cisco 2232 FEXes (two per FEX), right-click them, and select Configure as Server Port.

11. A prompt displays asking if this is what you want to do. Click Yes, then OK to continue.

12. Select ports 19 and 20 that are connected to the Cisco Nexus 5548 switches, right-click them, and select Configure as Uplink Port.

13. A prompt displays asking if this is what you want to do. Click Yes, then OK to continue.

Create Uplink PortChannels to the Cisco Nexus 5548 Switches

These steps provide details for configuring the necessary PortChannels out of the Cisco UCS environment.

Cisco UCS Manager

1. Select the LAN tab on the left of the window.


Note Two PortChannels are created, one from fabric A to both Cisco Nexus 5548 switches and one from fabric B to both Cisco Nexus 5548 switches.


2. Under LAN Cloud, expand the Fabric A tree.

3. Right-click Port Channels.

4. Select Create Port Channel.

5. Enter 13 as the unique ID of the PortChannel.

6. Enter vPC-13-N5548 as the name of the PortChannel.

7. Click Next.

8. Select the port with slot ID: 1 and port: 19 and also the port with slot ID: 1 and port 20 to be added to the PortChannel.

9. Click >> to add the ports to the PortChannel.

10. Click Finish to create the PortChannel.

11. Select the check box for Show navigator for Port-Channel 13 (Fabric A).

12. Click OK to continue.

13. Under Actions, select Enable Port Channel.

14. In the pop-up box, click Yes, then OK to enable.

15. Wait until the overall status of the Port Channel is up.

16. Click OK to close the Navigator.

17. Under LAN Cloud, expand the Fabric B tree.

18. Right-click Port Channels.

19. Select Create Port Channel.

20. Enter 14 as the unique ID of the PortChannel.

21. Enter vPC-14-N5548 as the name of the PortChannel.

22. Click Next.

23. Select the port with slot ID: 1 and port: 19 and also the port with slot ID: 1 and port 20 to be added to the PortChannel.

24. Click >> to add the ports to the PortChannel.

25. Click Finish to create the PortChannel.

26. Select the check box for Show navigator for Port-Channel 14 (Fabric B).

27. Click OK to continue.

28. Under Actions, select Enable Port Channel.

29. In the pop-up box, click Yes, then OK to enable.

30. Wait until the overall status of the Port Channel is up.

31. Click OK to close the Navigator.

Create an Organization

These steps provide details for configuring an organization in the Cisco UCS environment. Organizations are used as a means to organize and restrict access to various groups within the IT organization, thereby enabling multi-tenancy of the compute resources. This document does not assume the use of organizations; however, the necessary steps are included below.

Cisco UCS Manager

1. From the New... menu at the top of the window, select Create Organization.

2. Enter a name for the organization.

3. Enter a description for the organization (optional).

4. Click OK.

5. In the message box that displays, click OK.

Create MAC Address Pools

These steps provide details for configuring the necessary MAC address pools for the Cisco UCS environment.

Cisco UCS Manager

1. Select the LAN tab on the left of the window.

2. Select Pools > root.


Note Two MAC address pools are created, one for each switching fabric.


3. Right-click MAC Pools under the root organization.

4. Select Create MAC Pool to create the MAC address pool.

5. Enter MAC_Pool_A for the name of the MAC pool.

6. (Optional) Enter a description of the MAC pool.

7. Click Next.

8. Click Add.

9. Specify a starting MAC address. It is recommended to place 0A in the next-to-last octet of the starting MAC address to differentiate the MAC addresses as Fabric A addresses.

10. Specify a size of the MAC address pool sufficient to support the available server resources.

11. Click OK.

12. Click Finish.

13. In the message box that displays, click OK.

14. Right-click MAC Pools under the root organization.

15. Select Create MAC Pool to create the MAC address pool.

16. Enter MAC_Pool_B for the name of the MAC pool.

17. (Optional) Enter a description of the MAC pool.

18. Click Next.

19. Click Add.

20. Specify a starting MAC address. It is recommended to place 0B in the next-to-last octet of the starting MAC address to differentiate the MAC addresses as Fabric B addresses.

21. Specify a size of the MAC address pool sufficient to support the available server resources.

22. Click OK.

23. Click Finish.

24. In the message box that displays, click OK.

Create IQN Pools for iSCSI Boot

These steps provide details for configuring the necessary IQN pools for the Cisco UCS environment.

Cisco UCS Manager

1. Select the SAN tab on the left of the window.

2. Select Pools > root.


Note Two IQN pools are created, one for each switching fabric.


3. Right-click IQN Pools under the root organization.

4. Select Create IQN Suffix Pool to create the IQN pool.

5. Enter IQN_Pool_A for the name of the IQN pool.

6. (Optional) Enter a description of the IQN pool.

7. Enter iqn.1992-08.com.cisco for the Prefix.

8. Click Next.

9. Click Add.

10. Specify Fabric-A-ucs-host as the Suffix:.

11. Specify 1 as the From:.

12. Specify a size of the IQN block sufficient to support the available server resources.

13. Click OK.

14. Click Finish.

15. In the message box that displays, click OK.

16. Right-click IQN Pools under the root organization.

17. Select Create IQN Suffix Pool to create the IQN pool.

18. Enter IQN_Pool_B for the name of the IQN pool.

19. (Optional) Enter a description of the IQN pool.

20. Enter iqn.1992-08.com.cisco for the Prefix:.

21. Click Next.

22. Click Add.

23. Specify Fabric-B-ucs-host as the Suffix:. Specify 1 as the From:.

24. Specify a size of the IQN block sufficient to support the available server resources.

25. Click OK.

26. Click Finish.

27. In the message box that displays, click OK.

Create UUID Suffix Pool

These steps provide details for configuring the necessary UUID suffix pool for the Cisco UCS environment.

Cisco UCS Manager

1. Select the Servers tab on the top left of the window.

2. Select Pools > root.

3. Right-click UUID Suffix Pools.

4. Select Create UUID Suffix Pool.

5. Name the UUID suffix pool UUID_Pool.

6. (Optional) Give the UUID suffix pool a description.

7. Leave the prefix at the derived option.

8. Click Next to continue.

9. Click Add to add a block of UUIDs.

10. Retain the From field at the default setting.

11. Specify a size of the UUID block sufficient to support the available server resources.

12. Click OK.

13. Click Finish to proceed.

14. Click OK to finish.

Create Server Pool

These steps provide details for configuring the necessary server pool for the Cisco UCS environment.

Cisco UCS Manager

1. Select the Servers tab at the top left of the window.

2. Select Pools > root.

3. Right-click Server Pools.

4. Select Create Server Pool.

5. Name the server pool Infra_Pool.

6. (Optional) Give the server pool a description.

7. Click Next to continue to add servers.

8. Select two C200 servers to be added to the Infra_Pool server pool. Click >> to add them to the pool.

9. Click Finish.

10. Select OK to finish.

Create VLANs

These steps provide details for configuring the necessary VLANs for the Cisco UCS environment.

Cisco UCS Manager

1. Select the LAN tab on the left of the window.


Note Seven VLANs are created.


2. Select LAN Cloud.

3. Right-click VLANs.

4. Select Create VLANs.

5. Enter MGMT-VLAN as the name of the VLAN to be used for management traffic.

6. Keep the Common/Global option selected for the scope of the VLAN.

7. Enter the VLAN ID for the management VLAN. Keep the sharing type as none.

8. Click OK, then OK.

9. Right-click VLANs.

10. Select Create VLANs.

11. Enter NFS-VLAN as the name of the VLAN to be used for the NFS VLAN.

12. Keep the Common/Global option selected for the scope of the VLAN.

13. Enter the VLAN ID for the NFS VLAN.

14. Click OK, then OK.

15. Right-click VLANs.

16. Select Create VLANs.

17. Enter iSCSI-A-VLAN as the name of the VLAN to be used for the first iSCSI VLAN.

18. Select the Fabric A option selected for the scope of the VLAN.

19. Enter the VLAN ID for the first iSCSI VLAN.

20. Click OK, then OK.

21. Right-click VLANs.

22. Select Create VLANs.

23. Enter iSCSI-B-VLAN as the name of the VLAN to be used for the second iSCSI VLAN.

24. Select the Fabric B option selected for the scope of the VLAN.

25. Enter the VLAN ID for the second iSCSI VLAN.

26. Click OK, then OK.

27. Right-click VLANs.

28. Select Create VLANs.

29. Enter vMotion-VLAN as the name of the VLAN to be used for the vMotion VLAN.

30. Keep the Common/Global option selected for the scope of the VLAN.

31. Enter the VLAN ID for the vMotion VLAN.

32. Click OK, then OK.

33. Right-click VLANs.

34. Select Create VLANs.

35. Enter VM-Traffic-VLAN as the name of the VLAN to be used for the VM Traffic VLAN.

36. Keep the Common/Global option selected for the scope of the VLAN.

37. Enter the VLAN ID for the VM Traffic VLAN.

38. Click OK, then OK.

39. Right-click VLANs.

40. Select Create VLANs.

41. Enter Native-VLAN as the name of the VLAN to be used for the Native VLAN.

42. Keep the Common/Global option selected for the scope of the VLAN.

43. Enter the VLAN ID for the Native VLAN.

44. Click OK, then OK.

45. In the list of VLANs in the left pane, right-click the newly created Native-VLAN and select Set as Native VLAN.

46. Click Yes and OK.

Create a Firmware Management Package

These steps provide details for a firmware management policy for the Cisco UCS environment.

Cisco UCS Manager

1. Select the Servers tab at the top left of the window.

2. Select Policies > root.

3. Right-click Management Firmware Packages.

4. Select Create Management Firmware Package.

5. Enter VM-Host-Infra as the management firmware package name.

6. Select the appropriate packages and latest versions of the Server Management Firmware for the servers that you have.

7. Click OK to complete creating the management firmware package.

8. Click OK.

Create Host Firmware Package Policy

These steps provide details for creating a firmware management policy for a given server configuration in the Cisco UCS environment. Firmware management policies allow the administrator to select the corresponding packages for a given server configuration. These often include adapter, BIOS, board controller, FC adapters, HBA option ROM, and storage controller properties.

Cisco UCS Manager

1. Select the Servers tab at the top left of the window.

2. Select Policies > root.

3. Right-click Host Firmware Packages.

4. Select Create Host Firmware Package.

5. Enter VM-Host-Infra as the name of the host firmware package.

6. Navigate the tabs of the Create Host Firmware Package Navigator and select the appropriate packages and versions for the server configuration.

7. Click OK to complete creating the host firmware package.

8. Click OK.

Set Jumbo Frames in Cisco UCS Fabric

These steps provide details for setting jumbo frames and enabling quality of service in the Cisco UCS fabric.

Cisco UCS Manager

1. Select the LAN tab at the top left of the window.

2. Go to LAN Cloud > QoS System Class.

3. In the right pane, click the General tab.

4. On the Best Effort row, type 9000 in the MTU box.

5. Click Save Changes in the bottom right corner.

6. Click OK to continue.

Create a Local Disk Configuration Policy

These steps provide details for creating a local disk configuration for the Cisco UCS environment, which is necessary if the servers in question do not have a local disk.


Note This policy should not be used on servers that contain local disks.


Cisco UCS Manager

1. Select the Servers tab on the left of the window.

2. Go to Policies > root.

3. Right-click Local Disk Config Policies.

4. Select Create Local Disk Configuration Policy.

5. Enter SAN-Boot as the local disk configuration policy name.

6. Change the Mode to No Local Storage.

7. Uncheck the Protect Configuration box.

8. Click OK to complete creating the local disk configuration policy.

9. Click OK.

Create a Network Control Policy for Cisco Discovery Protocol (CDP)

These steps provide details for creating a network control policy that enables CDP on virtual network ports.

Cisco UCS Manager

1. Select the LAN tab on the left of the window.

2. Go to Policies > root.

3. Right-click Network Control Policies.

4. Select Create Network Control Policy.

5. Enter Enable_CDP as the policy name.

6. Click the Enabled radio button for CDP.

7. Click OK to complete creating the Network Control Policy.

8. Click OK.

Create a Server Pool Qualification Policy

These steps provide details for creating a server pool qualification policy for the Cisco UCS environment.

Cisco UCS Manager

1. Select the Servers tab on the left of the window.

2. Go to Policies > root.

3. Right-click Server Pool Policy Qualifications.

4. Select Create Server Pool Policy Qualification.

5. Type C200-M2 as the name for the Policy.

6. Select Create Server PID Qualifications.

7. Enter R200-1120402W as the PID.

8. Click OK to complete creating the server qualification.

9. Click OK.

10. Click OK.

Create a Server BIOS Policy

These steps provide details for creating a server BIOS policy for the Cisco UCS environment.

Cisco UCS Manager

1. Select the Servers tab on the left of the window.

2. Go to Policies > root.

3. Right-click BIOS Policies.

4. Select Create BIOS Policy.

5. Enter VM-Host-Infra as the BIOS policy name.

6. Change the Quiet Boot property to Disabled.

7. Click Finish to complete creating the BIOS policy.

8. Click OK.

Create vNIC Placement Policy for Virtual Machine Infrastructure Hosts

These steps provide details for creating a vNIC placement policy for infrastructure hosts.

Cisco UCS Manager

1. Select the Servers tab on the left of the window.

2. Go to Policies > root.

3. Right-click vNIC/HBA Placement policy and select Create Placement Policy.

4. Enter the name VM-Host-Infra.

5. Click 1 and select Assigned Only.

6. Click OK.

7. Click OK.

Create vNIC Templates

These steps provide details for creating multiple vNIC templates for the Cisco UCS environment.

Cisco UCS Manager

1. Select the LAN tab on the left of the window.

2. Go to Policies > root.

3. Right-click vNIC Templates.

4. Select Create vNIC Template.

5. Enter vNIC_Template_A as the vNIC template name.

6. Leave Fabric A selected. Do not check the Enable Failover box. Under target, make sure the VM box is not selected. Select Updating Template as the Template Type. Under VLANs, select MGMT-VLAN, NFS-VLAN, Native-VLAN, VM-Traffic-VLAN, and vMotion-VLAN. Set Native-VLAN as the Native VLAN. Under MTU, enter 9000. Under MAC Pool, select MAC_Pool_A. Under Network Control Policy, select Enable_CDP.

7. Click OK to complete creating the vNIC template.

8. Click OK.

9. Select the LAN tab on the left of the window.

10. Go to Policies > root.

11. Right-click vNIC Templates.

12. Select Create vNIC Template.

13. Enter vNIC_Template_B as the vNIC template name.

14. Select Fabric B. Do not check the Enable Failover box. Under target, make sure the VM box is not selected. Select Updating Template as the Template Type. Under VLANs, select MGMT-VLAN, NFS-VLAN, Native-VLAN, VM-Traffic-VLAN, and vMotion-VLAN. Set Native-VLAN as the Native VLAN. Under MTU, enter 9000. Under MAC Pool, select MAC_Pool_B. Under Network Control Policy, select Enable_CDP.

15. Click OK to complete creating the vNIC template.

16. Click OK.

17. Select the LAN tab on the left of the window.

18. Go to Policies > root.

19. Right-click vNIC Templates.

20. Select Create vNIC Template.

21. Enter iSCSI_Template_A as the vNIC template name.

22. Leave Fabric A selected. Do not select the Enable Failover checkbox. Under Target, make sure that the VM checkbox is not selected. Select Updating Template as the Template Type. For VLANs, select iSCSI-A-VLAN. Set iSCSI-A-VLAN as the Native VLAN. For MTU, enter 9000. For MAC Pool, select MAC_Pool_A. For Network Control Policy, select Enable_CDP.

23. Click OK to complete creating the vNIC template.

24. Click OK.

25. Select the LAN tab on the left of the window.

26. Go to Policies > root.

27. Right-click vNIC Templates.

28. Select Create vNIC Template.

29. Enter iSCSI_Template_B as the vNIC template name.

30. Select Fabric B. Do not select the Enable Failover checkbox. Under Target, make sure that the VM checkbox is not selected. Select Updating Template as the Template Type. For VLANs, select iSCSI-B-VLAN. Set iSCSI-B-VLAN as the Native VLAN. For MTU, enter 9000. For Mac Pool, select MAC_Pool_B. For Network Control Policy, select Enable_CDP.

31. Click OK to complete creating the vNIC template.

32. Click OK.

Create Boot Policies

These steps provide details for creating two iSCSI boot policies for the Cisco UCS environment. The first policy will configure the primary boot target to be controller A through Fabric A and the second boot policy primary target will be controller A through Fabric B.

Cisco UCS Manager

1. Select the Servers tab at the top left of the window.

2. Go to Policies > root.

3. Right-click Boot Policies.

4. Select Create Boot Policy.

5. Name the boot policy Boot-Fabric-A.

6. (Optional) Give the boot policy a description.

7. Leave Reboot on Boot Order Change unchecked.

8. Expand Local Devices and select Add CD-ROM.

9. Expand iSCSI vNICs and select Add iSCSI Boot.

10. Enter iSCSI-vNIC-A in the iSCSI vNIC: field in the Add iSCSI Boot window that displays.

11. Click OK to add the iSCSI boot initiator.

12. Under iSCSI vNICs, select Add iSCSI Boot.

13. Enter iSCSI-vNIC-B in the iSCSI vNIC: field in the Add iSCSI Boot window that displays.

14. Click OK to add the iSCSI boot initiator.

15. Click OK to add the Boot Policy.

16. Click OK.

17. Right-click Boot Policies.

18. Select Create Boot Policy.

19. Name the boot policy Boot-Fabric-B.

20. (Optional) Give the boot policy a description.

21. Leave Reboot on Boot Order Change unchecked.

22. Select Add CD-ROM.

23. Under iSCSI vNICs, select Add iSCSI Boot.

24. Enter iSCSI-vNIC-B in the iSCSI vNIC field in the Add iSCSI Boot window that displays.

25. Click OK to add the iSCSI boot initiator.

26. Under iSCSI vNICs, select Add iSCSI Boot.

27. Enter iSCSI-vNIC-A in the iSCSI vNIC: field in the Add iSCSI Boot window that displays.

28. Click OK to add the iSCSI boot initiator.

29. Click OK to add the Boot Policy.

30. Click OK.

Create Service Profiles

This section details the creation of two service profiles: one for fabric A boot and one for fabric B boot. The first profile is created and then cloned and modified for the second host.

Cisco UCS Manager

1. Select the Servers tab at the top left of the window.

2. Go to Service Profile > root.

3. Right-click root.

4. Select Create Service Profile (expert).

5. The Create Service Profile Template window displays. These steps detail the configuration for the Identify the Service Profile Template section.

a. Name the service profile template VM-Host-Infra-01. This service profile is configured to boot from controller A on Fabric A.

b. In the UUID section, select UUID_Pool as the UUID pool.

c. Click Next to continue to the next section.

6. Storage section

a. If the servers have local disks, select Default for the local disk configuration policy; if they have no local disks, select SAN-Boot.

b. Select the No vHBAs option for the How would you like to configure SAN connectivity? field.

c. Click Next to continue to the next section.

7. Networking Section.

a. Leave the Dynamic vNIC Connection Policy field at the default.

b. Select Expert for the How would you like to configure LAN connectivity? option.

c. Click Add to add a vNIC to the template.

d. The Create vNIC window displays. Name the vNIC vNIC-A.

e. Check the Use LAN Connectivity Template checkbox.

f. Select vNIC_Template_A for the vNIC Template field.

g. Select VMWare in the Adapter Policy field.

h. Click OK to add the vNIC to the template.

i. Click Add to add a vNIC to the template.

j. The Create vNIC window displays. Name the vNIC vNIC-B.

k. Check the Use LAN Connectivity Template checkbox.

l. Select vNIC_Template_B for the vNIC Template field.

m. Select VMWare in the Adapter Policy field.

n. Click OK to add the vNIC to the template.

o. Click Add to add a vNIC to the template.

p. The Create vNIC window displays. Name the vNIC iSCSI-vNIC-A.

q. Check the Use LAN Connectivity Template checkbox.

r. Select iSCSI_Template_A for the vNIC Template field.

s. Select VMWare in the Adapter Policy field.

t. Click OK to add the vNIC to the template.

u. Click Add to add a vNIC to the template.

v. The Create vNIC window displays. Name the vNIC iSCSI-vNIC-B.

w. Check the Use LAN Connectivity Template checkbox.

x. Select iSCSI_Template_B for the vNIC Template field.

y. Select VMWare in the Adapter Policy field.

z. Click OK to add the vNIC to the template.

aa. Expand the iSCSI vNICs Section.

ab. Click Add in the iSCSI vNICs section.

ac. Name the iSCSI vNIC iSCSI-vNIC-A.

ad. Set the Overlay vNIC: to iSCSI-vNIC-A.

ae. Set the iSCSI Adapter Policy: to default.

af. Set the VLAN: to iSCSI-A-VLAN.

ag. Do not set an iSCSI MAC Address.

ah. Click OK to add the iSCSI vNIC to the template.

ai. Click Add in the iSCSI vNICs section.

aj. Name the iSCSI vNIC iSCSI-vNIC-B.

ak. Set the Overlay vNIC: to iSCSI-vNIC-B.

al. Set the iSCSI Adapter Policy: to default.

am. Set the VLAN: to iSCSI-B-VLAN.

an. Do not set an iSCSI MAC Address.

ao. Click OK to add the iSCSI vNIC to the template.

ap. Verify: Review the table to make sure that all of the vNICs were created.

aq. Click Next to continue to the next section.

8. vNIC/vHBA Placement Section.

a. Select the VM-Host-Infra Placement Policy in the Select Placement field.

b. Select vCon1 and assign the vNICs in the following order:

vNIC-A

vNIC-B

iSCSI-vNIC-A

iSCSI-vNIC-B

c. Verify: Review the table to make sure that all of the vNICs were assigned in the appropriate order.

d. Click Next to continue to the next section.

9. Server Boot Order Section

a. Select Boot-Fabric-A in the Boot Policy field.

b. In the Boot Order section, select iSCSI-vNIC-A.

c. Click the Set iSCSI Boot Parameters button.

d. In the Set iSCSI Boot Parameters window, set the Initiator Name Assignment: to IQN_Pool_A.

e. Set the Initiator IP Address Policy: to Static.

f. In the IPv4 Address: field enter the iSCSI address for the server corresponding to the iSCSI-A-VLAN subnet.

g. Leave the radio button next to iSCSI Static Target Interface selected and click the green Plus Sign to the right.

h. Log in to Storage Controller A and type the following command: iscsi nodename.

i. Note or copy the iSCSI target nodename.

j. In the Create iSCSI Static Target window, paste the iSCSI target nodename from the Storage Controller A into the Name: field.

k. Input the IP Address for Storage Controller A's vif0-<iSCSI-A-VLAN ID> into the IPv4 Address: field.

l. Click OK to add the iSCSI Static Target.

m. Click OK.

n. In the Boot Order section, select iSCSI-vNIC-B.

o. Click the Set iSCSI Boot Parameters button.

p. In the Set iSCSI Boot Parameters window, set the Initiator Name Assignment: to IQN_Pool_B.

q. Set the Initiator IP Address Policy: to Static.

r. In the IPv4 Address: field enter the iSCSI address for the server corresponding to the iSCSI-B-VLAN subnet.

s. Leave the radio button next to iSCSI Static Target Interface selected and click the green Plus Sign to the right.

t. In the Create iSCSI Static Target window, paste the iSCSI target nodename from the Storage Controller A into the Name: field (same target name as above).

u. Input the IP Address for Storage Controller A's vif0-<iSCSI-B-VLAN ID> into the IPv4 Address: field.

v. Click OK to add the iSCSI Static Target.

w. Click OK.

x. Verify: Review the table to make sure that all of the boot devices were created and identified. Verify that the boot devices are in the correct boot sequence.

y. Click Next to continue to the next section.

10. Maintenance Policy Section.

a. Keep the default setting (no policy is used by default).

b. Click Next to continue to the next section.

11. Server Assignment Section.

a. Select Infra_Pool in the Pool Assignment field.

b. Select C200-M2 for the Server Pool Qualification field.

c. Select Down for the power state.

d. Select VM-Host-Infra in the Host Firmware field.

e. Select VM-Host-Infra in the Management Firmware field.

f. Click Next to continue to the next section.

12. Operational Policies Section.

a. Select VM-Host-Infra in the BIOS Policy field.

b. Click Finish to create the Service Profile template.

c. Click OK in the pop-up window to proceed.

13. Select the Servers tab at the top left of the window.

14. Go to Service Profiles > root.

15. Select the previously created VM-Host-Infra-Fabric-A-01 profile.

16. Click Create a Clone.

17. Enter VM-Host-Infra-Fabric-02 in the Clone Name field and click OK.

18. Click OK.

19. Select the newly created service profile and select the Boot Order tab.

20. Click Modify Boot Policy.

21. Select Boot-Fabric-B Boot Policy.

22. In the Boot Order box, select iSCSI-vNIC-B.

23. Click Set iSCSI Boot Parameters.

24. In the Set iSCSI Boot Parameters window, set the Initiator Name Assignment: to IQN_Pool_B.

25. Set the Initiator IP Address Policy: to Static.

26. In the IPv4 Address: field enter the iSCSI address for the server corresponding to the iSCSI-B-VLAN subnet.

27. Click OK.

28. In the Boot Order section, select iSCSI-vNIC-A.

29. Click the Set iSCSI Boot Parameters button.

30. In the Set iSCSI Boot Parameters window, set the Initiator Name Assignment: to IQN_Pool_A.

31. Set the Initiator IP Address Policy: to Static.

32. In the IPv4 Address: field enter the iSCSI address for the server corresponding to the iSCSI-A-VLAN subnet.

33. Click OK.

34. Select the Network tab and click Modify vNIC/HBA Placement.

35. Move iSCSI-vNIC-B ahead of iSCSI-vNIC-A in the placement order and click OK.

36. Click OK.

Add More Servers to the FlexPod Unit

Add server pools, service profile templates, and service profiles in the respective organizations to add more servers to the FlexPod unit. All other pools and policies are at the root level and can be shared among the organizations.

Gather Necessary Information

After the Cisco UCS service profiles have been created (as detailed in the previous steps), the infrastructure blades in the environment each have a unique configuration. To proceed with the FlexPod deployment, specific information must be gathered from each Cisco UCS server. Insert the required information in the tables below.

Table 16 Cisco UCS Server Information

Cisco UCS Service Profile Name
iSCSI-vNIC-A Initiator Name/IP Address
iSCSI-vNIC-B Initiator Name/IP Address

VM-Host-Infra-01

   

VM-Host-Infra-02

   

To gather the information in the table above, do the following:

1. Launch the Cisco UCS Manager GUI.

2. Select the Servers tab.

3. Expand Servers > Service Profiles > root.

4. Click each service profile and then click the Boot Order tab.

5. Expand iSCSI and select each iSCSI vNIC.

6. Click Set iSCSI Boot Parameters.

7. Record the Initiator Name and IP Address for each iSCSI vNIC in each service profile in the table above.

NetApp FAS2240-2 Deployment Procedure: Part 2

Add Infrastructure Host Boot LUNs

These steps provide details for adding the necessary boot LUNs on the storage controller for SAN boot of the Cisco UCS hosts.

Controller A

1. Run the following command: lun create -s 10g -t vmware -o noreserve /vol/esxi_boot/VM-Host-Infra-01.

2. Run the following command: lun create -s 10g -t vmware -o noreserve /vol/esxi_boot/VM-Host-Infra-02.
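
Note As an optional check, the following Data ONTAP commands list the newly created LUNs; both boot LUNs should be reported with a size of 10g:

lun show /vol/esxi_boot/VM-Host-Infra-01

lun show /vol/esxi_boot/VM-Host-Infra-02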

Create iSCSI igroups

These steps provide details for creating the iSCSI igroups on the storage controller for SAN boot of the Cisco UCS hosts.

Controller A

1. Run the following command: igroup create -i -t vmware VM-Host-Infra-01 <VM-Host-Infra-01 iSCSI-vNIC-A Initiator ID> <VM-Host-Infra-01 iSCSI-vNIC-B Initiator ID>.

2. Run the following command: igroup create -i -t vmware VM-Host-Infra-02 <VM-Host-Infra-02 iSCSI-vNIC-A Initiator ID> <VM-Host-Infra-02 iSCSI-vNIC-B Initiator ID>.
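
Note Optionally, confirm the igroup membership with the following commands; each igroup should list the two initiator IQNs recorded in Table 16:

igroup show VM-Host-Infra-01

igroup show VM-Host-Infra-02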

Map LUNs to igroups

These steps provide details for mapping the boot LUNs to the iSCSI igroups on the storage controller for SAN boot of the Cisco UCS hosts.

Controller A

1. Run the following command: lun map /vol/esxi_boot/VM-Host-Infra-01 VM-Host-Infra-01 0.

2. Run the following command: lun map /vol/esxi_boot/VM-Host-Infra-02 VM-Host-Infra-02 0.

3. Run the following command: lun show -m.

4. Verify that the created LUNs are mapped correctly.

VMware ESXi 5.0 Deployment Procedure

The following subsections (through "Move the VM Swap File Location") provide detailed procedures for installing VMware ESXi 5.0 in a VMware vSphere Built On FlexPod With IP-Based Storage environment. The deployment procedures that follow are customized to include the environment variables described in previous sections. By the end of this section, two iSCSI-booted ESXi hosts will be provisioned.


Note Multiple methods exist for installing ESXi in such an environment. This procedure highlights using the built-in KVM console and virtual media features in Cisco UCS Manager to map remote installation media to each individual server and connect to their iSCSI boot LUNs.


Log Into the Cisco UCS 6200 Fabric Interconnects

Cisco UCS Manager

1. Log in to the Cisco UCS 6200 fabric interconnects and launch the Cisco UCS Manager application.

2. In the main window, select the Servers tab.

3. Select Servers > Service Profiles > root > VM-Host-Infra-01.

4. Navigate to the Actions section and select the KVM Console link.

5. Select Servers > Service Profiles > root > VM-Host-Infra-02.

6. Navigate to the Actions section and select the KVM Console link.

Set Up the ESXi Installation

ESXi Hosts VM-Host-Infra-01 and VM-Host-Infra-02

1. In the KVM window, select the Virtual Media tab.

2. Click the Add Image button in the window that is displayed.

3. Browse to the ESXi installer ISO image file.

4. Click Open to add the image to the list of virtual media.

5. Select the Mapped checkbox next to the entry corresponding to the image you just added.

6. In the KVM window, select the KVM tab to monitor during boot.

7. In the KVM window, click the Boot Server button in the upper-left corner.

8. Click OK.

9. Click OK.

Install ESXi

ESXi Hosts VM-Host-Infra-01 and VM-Host-Infra-02

1. On reboot, the machine detects the presence of the ESXi installation media.

2. Select ESXi Installer from the menu that is displayed.

3. When the installer has finished loading, press Enter to continue with the installation.

4. Read through the EULA, press F11 to accept it, and continue with the installation.

5. Select the NetApp LUN that you set up previously as the installation disk for ESXi, then press Enter to continue.

6. Select a keyboard layout and press Enter to continue.

7. Enter and confirm the root password and press Enter to continue.

8. The installer warns you that existing partitions will be removed on the volume. If you are sure that this is what you want, press F11 to install ESXi.

9. After the installation is complete, unmap the ESXi installation image by deselecting the Mapped checkbox in the Virtual Media window. (This makes sure that the server reboots into ESXi and not into the installer.)

10. The Virtual Media window might warn you that it is preferable to eject the media from the guest. Because you cannot do this (and the media is read-only), click Yes and unmap it.

11. Back at the KVM tab, press Enter to reboot the server.

Set Up the ESXi Hosts' Management Networking

ESXi Hosts VM-Host-Infra-01 and VM-Host-Infra-02

1. When the server has finished rebooting, press F2 (the Customize System option).

2. Log in with root as the login name and the root password that you entered in step 7 of the previous procedure.

3. Select the Configure Management Network menu option.

4. Select the VLAN (optional) menu option.

5. Enter the <MGMT VLAN ID> and press Enter.

Set Up Management Networking for Each ESXi Host

ESXi Host VM-Host-Infra-01

1. From the Configure Management Network menu, select IP Configuration.

2. Select the Set static IP address and network configuration option by using the space bar to manually set up the management networking.

3. Enter the IP address for managing the first ESXi host.

4. Enter the subnet mask for the first ESXi host.

5. Enter the default gateway for the first ESXi host.

6. Press Enter to accept the changes to the management networking.

7. Select the DNS Configuration menu option.

8. Because you manually specified the IP configuration for the ESXi host, you must also specify the DNS information manually.

9. Enter the IP address of the primary DNS server.

10. (Optional) Enter the IP address of the secondary DNS server.

11. Enter the fully qualified domain name (FQDN) for the first ESXi host.

12. Press Enter to accept the changes to the DNS configuration.

13. Press Esc to exit the Configure Management Network submenu.

14. Enter y to confirm the changes and return to the main menu.

15. Select Test Management Network and verify that the management network is set up correctly.

16. Press Esc to log out of the VMware Console.

ESXi Host VM-Host-Infra-02

1. From the Configure Management Network menu, select IP Configuration.

2. Select the Set static IP address and network configuration option by using the space bar to manually set up the management networking.

3. Enter the IP address for managing the second ESXi host.

4. Enter the subnet mask for the second ESXi host.

5. Enter the default gateway for the second ESXi host.

6. Press Enter to accept the changes to the management networking.

7. Select the DNS Configuration menu option.

8. Because you manually specified the IP configuration for the ESXi host, you must also specify the DNS information manually.

9. Enter the IP address of the primary DNS server.

10. (Optional) Enter the IP address of the secondary DNS server.

11. Enter the fully qualified domain name (FQDN) for the second ESXi host.

12. Press Enter to accept the changes to the DNS configuration.

13. Press Esc to exit the Configure Management Network submenu.

14. Enter y to confirm the changes and return to the main menu.

15. Select Test Management Network and verify that the management network is set up correctly.

16. Press Esc to log out of the VMware Console.

Download VMware vSphere Client and vSphere Remote Command Line

1. Open a Web browser and navigate to http://<VM-Host-Infra-01 IP address>.

2. Download and install both the vSphere client and the Windows version of the vSphere remote command line.


Note These downloads come from the VMware Web site and Internet access is required on the management workstation.


Log in to VMware ESXi Host Using the VMware vSphere Client

ESXi Host VM-Host-Infra-01

1. Open the vSphere client and enter <VM-Host-Infra-01 IP address> as the host to connect to.

2. Enter root for the username.

3. Enter the root password.

4. Click the Login button to connect.

ESXi Host VM-Host-Infra-02

1. Open the vSphere client and enter <VM-Host-Infra-02 IP address> as the host to connect to.

2. Enter root for the username.

3. Enter the root password.

4. Click the Login button to connect.

Change the iSCSI Boot Port MTU to Jumbo

ESXi Host VM-Host-Infra-01 and ESXi Host VM-Host-Infra-02

1. In the vSphere Client, select the host on the left panel.

2. Select the Configuration tab.

3. Select the Networking link in the Hardware box.

4. Select the Properties link in the right field on iScsiBootvSwitch.

5. Select the vSwitch configuration and click the Edit button.

6. Change the MTU field to 9000 and click OK.

7. Select the iScsiBootPG configuration and click the Edit button.

8. Change the MTU field to 9000 and click OK.

9. Click Close to close the dialog box.
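
Note If you prefer to confirm the MTU change from the command line, the vSphere CLI installed earlier can be used. The following example command (substitute the host IP address and root password for your environment) lists each standard vSwitch along with its configured MTU:

esxcli -s <ESXi Host IP> -u root -p <root password> network vswitch standard list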

Load Updated Cisco VIC enic Driver Version 2.1.2.22

Download and expand the zip file for the Cisco enic driver, available for the C-Series servers: https://download3.vmware.com/software/SCATEST/Cisco_ENIC/enic_driver_2.1.2.22-564611.zip.

ESXi Hosts VM-Host-Infra-01 and VM-Host-Infra-02

1. In the vSphere client, select the host on the left panel.

2. Select the Summary tab.

3. On the right, under Resources > Storage, right-click datastore1 and select Browse Datastore.

4. Click the fourth button and select Upload File.

5. Navigate to the folder where the expanded downloaded zip file is located and select the net-enic-2.1.2.22-1OEM.500.0.0.441683.x86_64.vib file.

6. Click Open.

7. Click Yes. The .vib file is uploaded to datastore1.

8. Open the VMware vSphere CLI Command Prompt that was installed earlier.

9. For each ESXi Host in the Command Prompt, enter esxcli -s <ESXi Host IP> -u root -p <root password> software vib install -v /vmfs/volumes/datastore1/net-enic-2.1.2.22-1OEM.500.0.0.441683.x86_64.vib, as shown in the following screenshot.

10. Back at the vSphere client, right-click the host in the left panel and select Reboot.

11. Click Yes to continue.

12. Enter a reason for the reboot and click OK.

13. When the reboot is complete, log back in to both hosts using the vSphere client.
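
Note After the hosts are back up, you can optionally confirm that the updated enic driver is present by listing the installed VIBs from the vSphere CLI Command Prompt. The net-enic entry should show version 2.1.2.22-1OEM.500.0.0.441683:

esxcli -s <ESXi Host IP> -u root -p <root password> software vib list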

Set Up iSCSI Boot Ports on Virtual Switches

ESXi Host VM-Host-Infra-01

1. In the vSphere client, select the host on the left panel.

2. Select the Configuration tab.

3. Select the Networking link in the Hardware box.

4. Select the Properties link in the right field on iScsiBootvSwitch.

5. Select the iScsiBootPG configuration and click the Edit button.

6. Change the Network Label field to VMkernel-iSCSI-A and click OK. Do not set a VLAN ID.

7. Click Close to close the dialog box.

8. In the upper-right corner of the screen, click Add Networking.

9. Select the VMkernel radio button and click Next.

10. Select the checkbox next to vmnic3, deselect the checkbox next to vmnic1, and click Next.

11. Change the Network Label field to VMkernel-iSCSI-B and click Next. Do not set a VLAN ID.

12. Enter the IP address and subnet mask for the iSCSI-B port for host VM-Host-Infra-01, gathered earlier from the Cisco UCS Manager, and click Next.

13. Click Finish to complete vSwitch1. The Networking window in vSphere client should look like the following screenshot.

14. Select the Properties link in the right field on vSwitch1.

15. Select the vSwitch configuration and click Edit.

16. Change the MTU field to 9000 and click OK.

17. Select the VMkernel-iSCSI-B configuration and click Edit.

18. Change the MTU field to 9000 and click OK.

19. Click Close to close the window.

20. Select the Storage Adapters link in the Hardware box.

21. Make sure that iSCSI Software Adapter is selected under Storage Adapters and select Properties in the right middle of the window.

22. Select the Network Configuration tab and click Add.

23. Select VMkernel-iSCSI-A and click OK.

24. Click Add.

25. Select VMkernel-iSCSI-B and click OK.

26. Select the Static Discovery tab.

27. Click Settings.

28. Right-click the iSCSI Target Name field and select Copy.

29. Click Close.

30. Click Add.

31. Right-click the iSCSI Target Name field and select Paste.

32. Enter the <Controller A iSCSI-B IP> in the iSCSI Server field.

33. Click OK.

34. Click Close.

35. Click Yes.

36. The Details of the iSCSI Software Adapter should show three paths to the NETAPP iSCSI Disk. Right-click NETAPP iSCSI Disk and select Manage Paths.

37. Using the pull-down menu, change the Path Selection field to Round Robin (VMware). Click Change.

38. All three paths should now show Active (I/O). Click Close.

39. Right-click the host on the left panel and select Reboot.

40. Click Yes to continue.

41. Enter a reason for the reboot and click OK.

42. The reboot can take as long as 12 minutes. When the reboot is complete, log back in to host VM-Host-Infra-01 using the vSphere client.

ESXi Host VM-Host-Infra-02

1. In the vSphere client, select the host on the left panel.

2. Select the Configuration tab.

3. Select the Networking link in the Hardware box.

4. Select the Properties link in the right field on iScsiBootvSwitch.

5. Select the iScsiBootPG configuration and click the Edit button.

6. Change the Network Label field to VMkernel-iSCSI-B and click OK. Do not set a VLAN ID.

7. Click Close to close the dialog box.

8. In the upper-right corner of the screen, click Add Networking.

9. Select the VMkernel radio button and click Next.

10. Select the checkbox next to vmnic3, deselect the checkbox next to vmnic1, and click Next.

11. Change the Network Label field to VMkernel-iSCSI-A and click Next. Do not set a VLAN ID.

12. Enter the IP address and subnet mask for the iSCSI-A port for host VM-Host-Infra-02, gathered earlier from the Cisco UCS Manager, and click Next.

13. Click Finish to complete vSwitch1. The Networking window in the vSphere client should look like the following screenshot.

14. Click the Properties link in the right field on vSwitch1.

15. Select the vSwitch configuration and click Edit.

16. Change the MTU field to 9000 and click OK.

17. Select the VMkernel-iSCSI-A configuration and click Edit.

18. Change the MTU field to 9000 and click OK.

19. Click Close to close the window.

20. Select the Storage Adapters link in the Hardware box.

21. Make sure that iSCSI Software Adapter is selected under Storage Adapters and select Properties in the right middle of the window.

22. Select the Network Configuration tab and click Add.

23. Select VMkernel-iSCSI-A and click OK.

24. Click Add.

25. Select VMkernel-iSCSI-B and click OK.

26. Select the Static Discovery tab.

27. Click Settings.

28. Right-click the iSCSI Target Name field and select Copy.

29. Click Close.

30. Click Add.

31. Right-click the iSCSI Target Name field and select Paste.

32. Enter the <Controller A iSCSI-A IP> in the iSCSI Server field.

33. Click OK.

34. Click Close.

35. Click Yes.

36. The Details of the iSCSI Software Adapter should show three paths to the NETAPP iSCSI Disk. Right-click the NETAPP iSCSI Disk and select Manage Paths.

37. Using the pull-down menu, change the Path Selection field to Round Robin (VMware). Click Change.

38. All three paths should now show Active (I/O). Click Close.

39. Right-click the host on the left panel and select Reboot.

40. Click Yes to continue.

41. Enter a reason for the reboot and click OK.

42. The reboot can take as long as 12 minutes. When the reboot is complete, log back in to host VM-Host-Infra-02 using the vSphere client.
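
Note As an optional check on either host, the vSphere CLI can be used to confirm the multipathing configuration. The following example command lists each storage device with its current path selection policy, which should show VMW_PSP_RR (Round Robin) for the NETAPP iSCSI boot LUN:

esxcli -s <ESXi Host IP> -u root -p <root password> storage nmp device list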

Set Up VMkernel Ports and Virtual Switch

ESXi Host VM-Host-Infra-01

1. In the vSphere client, select the host on the left panel.

2. Select the Configuration tab.

3. Select the Networking link in the Hardware box.

4. Select the Properties link in the right field on vSwitch0.

5. Select the Network Adapters tab and click Add.

6. Select the checkbox next to vmnic1 and click Next.

7. Click Next.

8. Click Finish.

9. Select the Ports tab, select the vSwitch configuration and click Edit.

10. On the General tab, change the MTU field to 9000.

11. Select the NIC Teaming tab.

12. Using the pull-down menu, change the Load Balancing field to Route based on source MAC hash.

13. Click OK.

14. Select the Management Network configuration and click Edit.

15. Change the Network Label to VMkernel-MGMT and select the Management Traffic checkbox.

16. Select the NIC Teaming tab.

17. Using the pull-down menu, change the Load Balancing field to Route based on source MAC hash.

18. Click OK.

19. Select the VM Network configuration and click Edit.

20. Change the Network Label to MGMT Network and enter <MGMT VLAN ID> for the VLAN ID (Optional) field.

21. Click OK to continue.

22. Click Add.

23. Select the VMkernel radio button and click Next.

24. Change the Network Label to VMkernel-NFS and enter <NFS VLAN ID> for the VLAN ID (Optional) field.

25. Click Next.

26. Enter the IP address and subnet mask for the NFS VLAN interface for VM-Host-Infra-01.

27. Click Next.

28. Click Finish.

29. Select the VMkernel-NFS configuration and click Edit.

30. Change the MTU field to 9000 and click OK.

31. Click Add.

32. Select the VMkernel radio button and click Next.

33. Change the Network Label to VMkernel-vMotion, enter <vMotion VLAN ID> for the VLAN ID (Optional) field, and select the Use this port group for vMotion checkbox.

34. Click Next.

35. Enter the IP address and subnet mask for the vMotion VLAN interface for VM-Host-Infra-01.

36. Click Next.

37. Click Finish.

38. Select the VMkernel-vMotion configuration and click Edit.

39. Change the MTU field to 9000 and click OK.

40. Click Add.

41. Select the Virtual Machine radio button and click Next.

42. Change the Network Label to VM Traffic Network and enter <VM-Traffic VLAN ID> for the VLAN ID (Optional) field.

43. Click Next.

44. Click Finish.

45. Click Add.

46. Select the Virtual Machine radio button and click Next.

47. Change the Network Label to NFS Network and enter <NFS VLAN ID> for the VLAN ID (Optional) field.

48. Click Next.

49. Click Finish.

50. Click Close to close the dialog box.

ESXi Host VM-Host-Infra-02

1. In the vSphere client, select the host on the left panel.

2. Select the Configuration tab.

3. Select the Networking link in the Hardware box.

4. Select the Properties link in the right field on vSwitch0.

5. Select the Network Adapters tab and click Add.

6. Select the checkbox next to vmnic1 and click Next.

7. Click Next.

8. Click Finish.

9. Click the Ports tab, select the vSwitch configuration, and click Edit.

10. On the General tab, change the MTU field to 9000.

11. Select the NIC Teaming tab.

12. Using the pull-down menu, change the Load Balancing field to Route based on source MAC hash.

13. Click OK.

14. Select the Management Network configuration and click Edit.

15. Change the Network Label to VMkernel-MGMT and select the Management Traffic checkbox.

16. Click the NIC Teaming tab.

17. Using the pull-down menu, change the Load Balancing field to Route based on source MAC hash.

18. Click OK.

19. Select the VM Network configuration and click Edit.

20. Change the Network Label to MGMT Network and enter <MGMT VLAN ID> for the VLAN ID (Optional) field.

21. Click OK to continue.

22. Click Add.

23. Select the VMkernel radio button and click Next.

24. Change the Network Label to VMkernel-NFS and enter <NFS VLAN ID> for the VLAN ID (Optional) field.

25. Click Next.

26. Enter the IP address and subnet mask for the NFS VLAN interface for VM-Host-Infra-02.

27. Click Next.

28. Click Finish.

29. Select the VMkernel-NFS configuration and click Edit.

30. Change the MTU field to 9000 and click OK.

31. Click Add.

32. Select the VMkernel radio button and click Next.

33. Change the Network Label to VMkernel-vMotion, enter <vMotion VLAN ID> for the VLAN ID (Optional) field, and select the Use this port group for vMotion checkbox.

34. Click Next.

35. Enter the IP address and subnet mask for the vMotion VLAN interface for VM-Host-Infra-02.

36. Click Next.

37. Click Finish.

38. Select the VMkernel-vMotion configuration and click Edit.

39. Change the MTU field to 9000 and click OK.

40. Click Add.

41. Select the Virtual Machine radio button and click Next.

42. Change the Network Label to VM Traffic Network and enter <VM-Traffic VLAN ID> for the VLAN ID (Optional) field.

43. Click Next.

44. Click Finish.

45. Click Add.

46. Select the Virtual Machine radio button and click Next.

47. Change the Network Label to NFS Network and enter <NFS VLAN ID> for the VLAN ID (Optional) field.

48. Click Next.

49. Click Finish.

50. Click Close to close the dialog box. The ESXi host networking setup should be similar to the following screenshot.
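
Note Optionally, the VMkernel interfaces and their MTU values can be verified from the vSphere CLI. The following example command lists each vmk interface on a host along with its port group and MTU:

esxcli -s <ESXi Host IP> -u root -p <root password> network ip interface list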

Mount the Required Datastores

ESXi Hosts VM-Host-Infra-01 and VM-Host-Infra-02

1. In the vSphere client, select the host on the left panel.

2. Select the Configuration tab.

3. Select Storage in the Hardware box.

4. Select Add Storage in the top ribbon in the Datastore section on the right panel.

5. The Add Storage wizard is displayed. Select the Network File System radio button and click Next to continue.

6. The wizard prompts for the location of the NFS export. Enter the <Controller B NFS IP> for the server.

7. Enter /vol/infra_datastore_1 as the Folder path to the NFS export.

8. Make sure that the Mount NFS read only checkbox is not selected.

9. Enter infra_datastore_1 as the Datastore Name.

10. Click Next to continue.

11. Review the information you entered and click Finish to add the datastore.

12. Select Add Storage in the top ribbon in the Datastore section on the right panel.

13. The Add Storage wizard is displayed. Select the Network File System radio button and click Next to continue.

14. The wizard prompts for the location of the NFS export. Enter the <Controller A NFS IP> for the server.

15. Enter /vol/infra_swap as the Folder path to the NFS export.

16. Make sure that the Mount NFS read only checkbox is not selected.

17. Enter infra_swap as the Datastore Name.

18. Click Next to continue.

19. Review the information you entered and click Finish to add the datastore.
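
Note To optionally confirm the NFS mounts from the command line, the following vSphere CLI command lists the NFS datastores on a host; infra_datastore_1 and infra_swap should both appear as mounted:

esxcli -s <ESXi Host IP> -u root -p <root password> storage nfs list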

NTP Time Configuration

ESXi Hosts VM-Host-Infra-01 and VM-Host-Infra-02

1. In the vSphere client, select the host on the left panel.

2. Select the Configuration tab.

3. Select the Time Configuration link in the Software box.

4. Select the Properties link on the right panel.

5. In the Time Configuration window that is displayed, click Options.

6. In the NTP Daemon Options window that is displayed, select the Start and stop with host radio button. Select NTP Settings in the left box, then click Add.

7. Another pop-up window is displayed. Enter the IP address of the NTP server and click OK to continue.

8. In the original NTP Daemon Options window, select the Restart NTP service to apply changes checkbox. Click OK.

9. Select the NTP Client Enabled checkbox. Click OK at the bottom of the window to continue and close the window.

10. In the Time Configuration window, verify that the clock is now set to the correct time.

Move the VM Swap File Location

ESXi Hosts VM-Host-Infra-01 and VM-Host-Infra-02

1. In the vSphere client, select the host on the left panel.

2. Select the Configuration tab.

3. In the Software box, select Virtual Machine Swapfile Location.

4. On the right panel, click Edit.

5. Select the radio button for Store the swapfile in a swap file datastore selected below if it is not already selected.

6. Select infra_swap as the datastore in which to store the swapfile.

7. Click OK at the bottom of the page to finish.

VMware vCenter 5.0 Deployment Procedure

The following sections provide detailed procedures for installing VMware vCenter 5.0 within a VMware vSphere Built On FlexPod With IP-Based Storage environment. The deployment procedures that follow are customized to include the specific environment variables that have been noted in previous sections. By the end of this section, a VMware vCenter server will be configured along with a Microsoft SQL Server providing the database to support vCenter. Although this procedure walks through the installation and configuration of an external Microsoft SQL Server 2008 R2 database, other types of external databases are supported by vCenter. If you choose to use an alternate database, refer to the VMware vSphere 5.0 documentation for information about how to set up this database and integrate it into vCenter.

Build a Microsoft SQL Server Virtual Machine

1. Log in to host VM-Host-Infra-01 with the VMware vSphere Client.

2. In vSphere Client, select the host on the left panel.

3. Right-click the host and select New Virtual Machine...

4. Select the Custom radio button and click Next.

5. Name the Virtual Machine and click Next.

6. Select infra_datastore_1 and click Next.

7. Select the Virtual Machine Version: 8 radio button and click Next.

8. Make sure that the Windows radio button is selected, select Microsoft Windows Server 2008 R2 (64-bit) in the Version field, and click Next.

9. Select 2 virtual sockets and 1 core per virtual socket and click Next.

10. Make sure 4 GB of memory is selected and click Next.

11. For NIC 1:, select MGMT Network and the VMXNET 3 Adapter. Click Next.

12. Leave the LSI Logic SAS SCSI controller selected and click Next.

13. Leave Create a new virtual disk selected and click Next.

14. Make the disk size at least 40 GB and click Next.

15. Click Next.

16. Select the checkbox next to Edit the virtual machine settings before completion and click Continue.

17. Select Options tab.

18. Select Boot Options.

19. On the right, select the Force BIOS Setup checkbox.

20. Click Finish.

21. On the left panel, expand the host field by clicking the plus sign.

22. Right-click the just created SQL Server Virtual Machine and click Open Console.

23. Click the third button (right-arrow) to Power On the VM.

24. Click the ninth button (CD with a Wrench) to map the Windows Server 2008 R2 SP1 iso and select Connect to ISO image on local disk...

25. Navigate to the Windows Server 2008 R2 SP1 iso, select it, and click Open.

26. Back in the BIOS Setup Utility Window, click in the window and use the Right Arrow key to move to the Boot menu. Use the Down Arrow key to highlight CD-ROM Drive. Use the + key two times to move CD-ROM Drive to the top of the list. Press F10 and Enter to Save and Exit the BIOS Setup Utility.

27. The Windows installer boots. Select the appropriate Language, Time and Currency format, and Keyboard, and click Next. Click Install now. Make sure that Windows Server 2008 R2 Standard (Full Installation) is selected and click Next. Accept the license terms and click Next. Select Custom (advanced). Make sure that Disk 0 Unallocated Space is selected and click Next. The Windows installation then completes.

28. After Windows Installation is complete and the virtual machine has rebooted, click OK to enter the Administrator password. Enter and confirm the Administrator password and click the Blue Arrow to log in. Click OK to confirm the Password Change.

29. When you are logged in to the virtual machine desktop, in the virtual machine console window, select the VM menu. Under Guest, select Install/Upgrade VMware Tools. Click OK.

30. If prompted to eject the windows install media prior to running setup for VMware tools, click OK.

31. In the popup window, select Run setup64.exe.

32. In the VMware Tools installer window, click Next.

33. Make sure Typical is selected and click Next.

34. Click Install.

35. Click Finish.

36. Click Yes to restart the virtual machine.

37. Enter a reason for the reboot and click OK.

38. After the reboot is complete, log back in to the virtual machine.

39. After logging in, set the virtual machine's timezone, IP address and gateway, and hostname in the Windows Active Directory domain. A reboot is required.

40. Log back into the virtual machine and download and install all required Windows Updates. This requires several reboots.

Install Microsoft SQL Server 2008 R2

vCenter SQL Server Virtual Machine

1. Log into the vCenter SQL server virtual machine as the local administrator and open Server Manager.

2. Expand Features and click Add Features.

3. Expand .NET Framework 3.5.1 Features and select only .NET Framework 3.5.1.

4. Click Next.

5. Click Install.

6. Click Close.

7. Open Windows Firewall with Advanced Security by clicking Start > Administrative Tools > Windows Firewall with Advanced Security.

8. Highlight Inbound Rules and click New Rule...

9. Select Port and click Next.

10. Select TCP and enter the specific local port 1433. Click Next.

11. Select Allow the connection and click Next, then Next again.

12. Name the rule SQL Server and click Finish.

13. Close Windows Firewall with Advanced Security.

14. In the vCenter SQL Server VMware console, click the ninth button (CD with a Wrench) to map the Microsoft SQL Server 2008 R2 iso and select Connect to ISO image on local disk...

15. Navigate to the SQL Server 2008 R2 iso, select it, and click Open.

16. In the popup, click Run SETUP.EXE.

17. In the SQL Server Installation Center window, click Installation on the left.

18. Click New installation or add features to an existing installation on the right.

19. Click OK.

20. Select Enter the product key:, enter a product key and click Next.

21. Select the checkbox to accept the license terms and decide whether to select the second checkbox. Click Next.

22. Click Install to install the Setup Support Files.

23. Address any warnings except the Windows Firewall warning. The Windows Firewall issue was addressed earlier. Click Next.

24. Select SQL Server Feature Installation and click Next.

25. Under Features, select only Database Engine Services, Management Tools - Basic, and Management Tools - Complete. Click Next.

26. Click Next.

27. Leave Default instance selected and click Next.

28. Click Next.

29. Click in the first Account Name Field next to SQL Server Agent and click <<Browse...>>.

30. Enter the local machine Administrator name (for example, systemname\administrator), click Check Names, and click OK.

31. Enter the Administrator Password.

32. Change the SQL Server Agent Startup Type to Automatic.

33. Next to the SQL Server Database Engine, select Administrator under Account Name and enter the Administrator Password.

34. Click Next.

35. Select Mixed Mode (SQL Server authentication and Windows authentication). Enter and confirm the password for the sa account.

36. Click Add Current User.

37. Click Next.

38. Decide whether to send Error Reports to Microsoft and click Next.

39. Click Next.

40. Click Install.

41. Click Close to close the SQL Server Installer.

42. Close the SQL Server Installation Center.

43. Install all available Microsoft Windows updates by navigating to Start > All Programs > Windows Update.

44. Open the SQL Server Management Studio by selecting Start > All Programs > Microsoft SQL Server 2008 R2 > SQL Server Management Studio.

45. Under Server name, select the local machine name. Under Authentication, select SQL Server Authentication. Enter sa for the Login and the sa password. Click Connect.

46. On the left, click New Query.

47. Input the following script, substituting a vpxuser password for the <Password> variable:

use [master]
go
CREATE DATABASE [VCDB] ON PRIMARY
(NAME = N'vcdb', FILENAME = N'C:\VCDB.mdf', SIZE = 2000KB, FILEGROWTH = 10%)
LOG ON
(NAME = N'vcdb_log', FILENAME = N'C:\VCDB.ldf', SIZE = 1000KB, FILEGROWTH = 10%)
COLLATE SQL_Latin1_General_CP1_CI_AS
go
use VCDB
go
sp_addlogin @loginame=[vpxuser], @passwd=N'<Password>', @defdb='VCDB', @deflanguage='us_english'
go
ALTER LOGIN [vpxuser] WITH CHECK_POLICY = OFF
go
CREATE USER [vpxuser] for LOGIN [vpxuser]
go
CREATE SCHEMA [VMW]
go
ALTER USER [vpxuser] WITH DEFAULT_SCHEMA =[VMW]
go
sp_addrolemember @rolename = 'db_owner', @membername = 'vpxuser'
go
use MSDB
go
CREATE USER [vpxuser] for LOGIN [vpxuser]
go
sp_addrolemember @rolename = 'db_owner', @membername = 'vpxuser'
go

48. In the upper-middle of the window, click Execute. The Query should execute successfully.

49. Close Microsoft SQL Server Management Studio.

50. Disconnect the Microsoft SQL Server 2008 R2 iso from the SQL Server virtual machine.

Build a VMware vCenter Virtual Machine

Using the instructions above that were used to build the SQL Server VM, build a VMware vCenter Virtual Machine with 4 GB RAM, 2 CPUs, and one virtual network interface in the <MGMT VLAN ID> VLAN. Bring up the virtual machine, install VMware Tools, and assign an IP address and host name in the Active Directory Domain.

1. Log into the vCenter VM as the local Administrator and open Server Manager.

2. Expand Features and click Add Features.

3. Expand .NET Framework 3.5.1 Features and select only .NET Framework 3.5.1.

4. Click Next.

5. Click Install.

6. Click Close to close the Add Features Wizard.

7. Close Server Manager.

8. Download and install the Client Components of the Microsoft SQL Server 2008 R2 Native Client from http://go.microsoft.com/fwlink/?LinkID=188401&clcid=0x409.

9. Create the vCenter Database Data Source Name (DSN). Open Data Sources (ODBC) by selecting Start > Administrative Tools > Data Sources (ODBC).

10. Select the System DSN tab.

11. Click Add...

12. Select SQL Server Native Client 10.0 and click Finish.

13. Name the Data Source VCDB. In the Server field, enter the IP address of the vCenter SQL server. Click Next.

14. Select With SQL Server authentication... and enter vpxuser for the Login ID and the vpxuser password. Click Next.

15. Select the checkbox next to Change the default database to select VCDB from the pull-down, and click Next.

16. Click Finish.

17. Click Test DataSource. The test should complete successfully.

18. Click OK.

19. Click OK.

20. Click OK.

21. Disconnect the install media.

22. Install all available Microsoft Windows updates by navigating to Start > All Programs > Windows Update.


Note A restart may be required.


Install VMware vCenter Server

vCenter Server Virtual Machine

1. In the vCenter Server VMware console, click the ninth button (CD with a Wrench) to map the VMware vCenter iso and select Connect to ISO image on local disk...

2. Navigate to the VMware vCenter 5.0 (VIMSetup) iso, select it, and click Open.

3. In the popup, click Run autorun.exe.

4. In the VMware vCenter Installer window, make sure vCenter Server is selected and click Install.

5. Select the appropriate language and click OK to continue.

6. Click Next.

7. Click Next.

8. Agree to the license terms and click Next.

9. Enter a User Name, Organization, and vCenter License key. Click Next.

10. Select Use an existing supported database, select VCDB using the pull-down, and click Next.

11. Enter the vpxuser Password and click Next.

12. Note the warning and click OK.

13. Click Next.

14. Click Next.

15. Make sure Create a standalone VMware vCenter Server instance is selected and click Next.

16. Click Next.

17. Click Next.

18. Select the appropriate Inventory Size and click Next.

19. Click Install.

20. Click Finish.

21. Click Exit in the VMware vCenter Installer window.

22. Disconnect the VMware vCenter iso from the vCenter virtual machine.

vCenter Setup

1. Using the vSphere Client, log into the vCenter Server just created as Administrator.

2. Near the center of the window, click Create a datacenter.

3. Enter FlexPod_DC_1 as the datacenter name.

4. Right-click the newly created FlexPod_DC_1, and select New Cluster...

5. Name the Cluster FlexPod_Management and select the checkboxes next to Turn On vSphere HA and Turn on vSphere DRS. Click Next.

6. Accept the defaults for vSphere DRS and click Next.

7. Accept the defaults for Power Management and click Next.

8. Accept the defaults for vSphere HA and click Next.

9. Accept the defaults for Virtual Machine Options and click Next.

10. Accept the defaults for VM Monitoring and click Next.

11. Accept the defaults for VMware EVC and click Next.

12. Select Store the swapfile in the datastore specified by the host and click Next.

13. Click Finish.

14. Right-click the newly created FlexPod_Management cluster and select Add Host...

15. In the Host field, enter either the IP address or hostname of the VM-Host-Infra-01 host. Enter root for the Username and the root password for this host for the Password. Click Next.

16. Click Yes.

17. Click Next.

18. Select Assign a new license key to the host. Click Enter Key..., enter a vSphere license key, and click OK. Click Next.

19. Click Next.

20. Click Next.

21. Click Finish. VM-Host-Infra-01 is added to the cluster.

22. Using the instructions above, add VM-Host-Infra-02 to the cluster.

NetApp Virtual Storage Console Deployment Procedure

The following subsections (through "Provisioning and Cloning Setup") provide detailed procedures for installing the NetApp Virtual Storage Console. The deployment procedures that follow are customized to include the environment variables discussed in previous sections. By the end of this section, the VSC will be configured and operational as a plug-in within VMware vCenter.

Installing NetApp Virtual Storage Console 4.0

Using the previous instructions for virtual machine creation, build a VSC and OnCommand virtual machine with 4GB RAM, two CPUs, and two virtual network interfaces, one in the <MGMT VLAN ID> VLAN and the other in the <NFS VLAN ID> VLAN. The second virtual network interface should be a VMXNET 3 adapter. Bring up the virtual machine, install VMware Tools, assign IP addresses, and join the machine to the Active Directory domain. Install the current version of Adobe Flash Player on the VM. Install all Windows updates on the virtual machine, but do not install Internet Explorer 9. Keep Internet Explorer 8 on the virtual machine.

1. Configure Jumbo Frames on the network adapter in the <NFS VLAN ID> VLAN.

2. Open Server Manager and click View Network Connections.

3. Right-click the network connection in the <NFS VLAN ID> VLAN and select Properties.

4. Click Configure.

5. Select the Advanced tab.

6. Select the Jumbo Packet property, and use the pull-down menu to select Jumbo 9000.

7. Click OK.

8. Close the Network Connections window.

9. Download the Virtual Storage Console 4.0 from the NetApp Support site.

10. To install the VSC plug-in, double-click the file that you downloaded in the previous step (for example, VSC-4.0-win64.exe).

11. On the installation wizard landing page, select Next at the bottom of the screen to proceed with the software installation.

12. Select the Backup and Recovery checkbox.

13. Click Next if you have the necessary licenses.

14. Select the location where VSC will be installed and click Next.

15. Make a note of the registration URL. This URL is needed to register the VSC plug-in with vCenter after the installation.

16. Click Install.


Note A browser window with the URL shown in the previous screenshot opens automatically when the installation phase is complete. However, some browser settings may interfere with this function. If the browser window does not open automatically, open one manually and enter the URL.


17. Open a Web browser.

18. Enter the URL provided by the installation wizard, replacing localhost with the hostname or IP address of the VSC server if necessary: https://localhost:8143/Register.html.

19. In the Plugin service information section, select the IP address that the vCenter server uses to access the VSC server.

20. In the vCenter Server information section, enter the host name or IP address, port, user name, and user password, and click Register to complete the registration.

21. Click Finish in the install wizard window.

22. Open a console window in the virtual machine that contains VSC. Log in to the virtual machine as the local administrator and open Windows Explorer.

23. Navigate to C:\Program Files\NetApp\Virtual Storage Console\etc\kamino.

24. Right-click the kaminoprefs file and select Edit. A Windows Notepad window opens.

25. Scroll down to the Restrict NFS Options section. In the Entry Key section, enter the two iSCSI VLAN network addresses, separated by a semicolon.

26. Move to the Restrict iSCSI Options section. In the Entry Key section, enter the NFS VLAN network address as shown in the previous screenshot.

27. Scroll down to the NFS Networks section. In the Entry Key section, enter the NFS VLAN network addresses.

28. Move to the iSCSI Networks section. In the Entry Key section, enter the two iSCSI VLAN network addresses separated by a semicolon, as shown in the preceding example.

29. Save the changes to the file and close Notepad.

30. Reboot the VSC virtual machine. When the reboot is complete, close and log back in to the vSphere client connected to vCenter. It may be necessary to re-enable the Virtual Storage Console plug-in in the Plug-ins window in vCenter.

31. Add credentials for discovered storage controllers:

a. Right-click a discovered storage controller.

b. Select Modify Credentials.

c. Make sure that the storage controller IP address is in the NFS VLAN.

d. Enter the user name and password.

e. Select the Use SSL checkbox.

f. Click OK.


Note This discovery process applies to monitoring and host configuration and to provisioning and cloning. Storage controllers must be manually added for backup and recovery.


Optimal Storage Settings for ESXi Hosts

VSC allows the automated configuration of storage-related settings for all ESXi hosts that are connected to NetApp storage controllers. These steps describe setting the recommended values for NetApp attached ESXi hosts.

1. Select individual or multiple ESXi hosts and right-click them to open the drop-down menu.

2. Select Set Recommended Values for these hosts.


Note This functionality sets values for HBAs and CNAs, sets appropriate paths and path-selection plug-ins, and verifies appropriate settings for software-based I/O (NFS and iSCSI).



Note Depending on what changes have been made, servers might require a restart for network-related parameter changes to take effect. If no reboot is required, the Status value is set to Normal. If a reboot is required, the Status value is set to Pending Reboot; in that case, the ESXi hosts should be evacuated, placed into maintenance mode, and restarted before proceeding.


Provisioning and Cloning Setup

Provisioning and Cloning in VSC 4.0 helps administrators to provision both VMFS and NFS datastores at the data center, datastore cluster, or host level in VMware environments. Furthermore, the target storage can be storage controllers running in either Cluster-Mode or 7-Mode.

1. In a vSphere client connected to vCenter, select Home > Solutions and Applications > NetApp and select the Provisioning and Cloning tab on the left.

2. Select Storage controllers.

3. In the main part of the window, right-click <Storage Controller A> and select Resources.

4. In the <Storage Controller A> resources window, use the arrows to move volumes vol0 and infra_swap and aggregate aggr0 to the left. Also select the Prevent further changes checkbox, as shown in the following screenshot.

5. Click Save.

6. In the main part of the window, right-click <Storage Controller B> and select Resources.

7. In the <Storage Controller B> resources window, use the arrows to move volumes vol0 and infra_datastore_1 and aggregate aggr0 to the left. Select the Prevent Further changes checkbox as shown in the following screenshot.

8. Click Save.

9. Move to the Restrict iSCSI Options section. In the Entry Key section, enter the NFS VLAN network address as shown in Installing NetApp Virtual Storage Console 4.0.

10. Save the changes to the file and close Notepad.

11. Reboot the VSC VM.

12. When the reboot is complete, close and log back in to the vSphere client connected to vCenter. It may be necessary to re-enable the Virtual Storage Console plug-in in the Plug-ins window in vCenter.

NetApp OnCommand Deployment Procedure

The following subsections (through "Configure Host Services") provide detailed procedures for installing the NetApp OnCommand software. The deployment procedures that follow are customized to include the environment variables described in previous sections. By the end of this section, an OnCommand System Manager will be configured and operational.

Install NetApp OnCommand

1. Open the console on the VSC OnCommand (DataFabric Manager) virtual machine in vCenter.

2. Open a Web browser, navigate to the NetApp Support site, and download OnCommand Core 5.0: http://support.netapp.com/NOW/download/software/occore_win/5.0/.

3. Verify the solution components and the system requirements.

4. Click Continue at the bottom of the screen.

5. Click Accept to accept the license agreement.

6. Click the Windows 64-bit (occore-setup-5-0-win-x64.exe) OnCommand Core package file and save it.

7. Double-click the 64-bit installer (occore-setup-5-0-win-x64.exe) to start the installation of the DataFabric Manager software.


Note The 32-bit OnCommand Core package is not supported on 64-bit systems.


8. Click Next to continue installation.

9. Select I accept for the AutoSupport Notice and click Next.

10. Select Standard Edition and click Next.


Note The Express Edition is for a small environment (four storage systems and one virtualized host), and it also has limited monitoring capabilities. For more information, see the OnCommand Installation and Setup Guide.


11. Enter the license key and click Next.

12. Click Next to accept the default installation location, or modify the location as necessary.

13. Click Install to install DataFabric Manager.

14. Verify that the installation is complete, then click Next.

15. Make sure that Launch OnCommand Console is selected, then click Finish to complete the installation.


Note To install DataFabric Manager as an HA service in a Microsoft MSCS environment, follow the instructions in the OnCommand Installation and Setup Guide.


16. Log in to the DataFabric Manager server as the local administrator.

17. Enter the following command from an MS-DOS command prompt to generate the SSL keys on the host: dfm ssl server setup -z 1024 -d 365 -c <global ssl country> -s <global ssl state> -l <global ssl locality> -o <global ssl org> -u <global ssl org unit> -n <global ntap dfm hostname> -e <ntap admin email address>.


Note The command in step 17 is one complete command. Take care when pasting this command into the command line. Also, any two-word parameters, such as "North Carolina" for the state, should be enclosed in double quotes.
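
For example, a completed command might look like the following (the values shown are illustrative only; substitute the details for your environment):

dfm ssl server setup -z 1024 -d 365 -c US -s "North Carolina" -l RTP -o FlexPod -u "IT Infrastructure" -n dfm01.example.com -e admin@example.com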



Note On a clustered system, the SSL keys must be copied to the other cluster nodes. The files are located in the directory <install-dir>/conf. For more information, refer to the product installation and upgrade guide.


18. Log in to the DataFabric Manager server and configure it to communicate by using SSL. Disable HTTP access by setting the following options:

dfm option set httpsEnabled=yes

dfm option set httpEnabled=no

dfm option set httpsPort=8443

dfm option set hostLoginProtocol=ssh

dfm option set hostAdminTransport=https

dfm option set discoverEnabled=no


Note Storage controllers that are being monitored by Operations Manager must have the https and ssh protocols enabled.


19. Restart the HTTP service on the DataFabric Manager server to make sure that the changes take effect:

dfm service stop http

dfm service start http
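
Note Optionally, confirm that the option changes took effect; dfm option list prints the current value of a named option, for example:

dfm option list httpsEnabled

dfm option list hostAdminTransport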

20. Issue the following commands to configure Operations Manager to use SNMPv3 to poll configuration information from the storage devices:

dfm snmp modify -v 3 -c <ntap snmp community> -U <ntap snmp user> -P <ntap snmp password> -A md5 default


Note Verify that the storage controllers have the same SNMP community defined as specified in this command. NetApp recommends deleting the public SNMP domain and creating a custom SNMP community. NetApp also recommends using SNMPv3.


If SNMPv3 is not in use, enter the following command: dfm snmp modify -v 1 -c <ntap snmp community> default.

21. Enter the following commands to configure the DataFabric Manager AutoSupport feature:

dfm option set SMTPServerName=<ntap autosupport mailhost>

dfm option set autosupportAdminContact=<ntap admin email address>

dfm option set autosupportContent=complete

dfm option set autosupportProtocol=https

Manually Add DataFabric Manager Storage Controllers

The following steps provide details for the manual addition of storage controllers into the DataFabric Manager server.

1. Log in to the DataFabric Manager server.

2. Use the following commands to add a storage system manually:

dfm host add <ntap A hostname>

dfm host add <ntap B hostname>

3. Use the following commands to set the array login and password credentials:

dfm host set <ntap A hostname> hostlogin=root

dfm host set <ntap A hostname> hostPassword=<global default password>

dfm host set <ntap B hostname> hostlogin=root

dfm host set <ntap B hostname> hostPassword=<global default password>

4. List the storage systems discovered by DataFabric Manager and list the storage system properties:

dfm host list

dfm host get <ntap A hostname>

dfm host get <ntap B hostname>


Note If the arrays being added or discovered use a custom SNMP community, then the correct SNMP community string must be defined before the arrays can be discovered because the default discovery method uses the "ro public" SNMP community. This does not work if a custom SNMP string is defined.


Run Diagnostics for Verifying DataFabric Manager Communication

The following steps provide details for verifying DataFabric Manager communication by running diagnostics. This helps identify misconfigurations that can prevent the DataFabric Manager server from monitoring or managing a particular appliance, and it should be the first command you use when troubleshooting.

1. Log in to the DataFabric Manager server.

2. Use the following command to run diagnostics:

dfm host diag <ntap A hostname>

dfm host diag <ntap B hostname>

3. You can also refresh host properties after a change or force a discovery by using the following commands:

dfm host discover <ntap A hostname>

dfm host discover <ntap B hostname>

Configure Additional Operations Manager Alerts

The following steps provide details for configuring an SNMP trap host as well as configuring daily e-mails from Operations Manager for critical alerts.

1. Log in to the DFM server.

2. Use the following command to configure an SNMP traphost: dfm alarm create -T <ntap snmp traphosts>.

3. Use the following command to configure daily e-mails from Operations Manager: dfm alarm create -E <ntap admin email address> -v Critical.
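
Note Optionally, review the configured alarms with the following command, which should list both the SNMP trap alarm and the critical e-mail alarm created above:

dfm alarm list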

Deploy the NetApp OnCommand Host Package

The following steps provide details for deploying the OnCommand Host Package.

1. Log in to the DataFabric Manager server where you intend to install the Host Package.

2. Install the required software for the Host Package.

3. Open Server Manager.

4. Click Features.

5. Click Add Feature.

6. Expand the .NET Framework and select .NET Framework 3.5.1. Do not select WCF Activation.

7. Click Next.

8. Click Install.

9. Click Close.

10. Open a Web browser and navigate to http://support.netapp.com/NOW/download/software/ochost/1.1/.

11. Verify the solution components and the system requirements.

12. Click Continue at the bottom of the screen.

13. Click Accept to accept the license agreement.

14. Verify that the prerequisites for the Host Package installation are satisfied.

15. Click the appropriate OnCommand Host Package file: Windows 64-bit (ochost-setup-1-1-x64.exe) to download the Host Package installer.

16. Double-click the 64-bit installer (ochost-setup-1-1-x64.exe) to start the installation of the OnCommand Host Package software.

17. Click Next to continue installation.

18. Click Next to accept the default installation location or modify the location as necessary.

19. Specify the Service Credentials by entering the username and password in the domainname\username format, then click Next.

20. On the Configure Communication Ports page, enter the port numbers to use or accept the default port numbers then click Next.

21. On the Configure DataFabric Manager server page, enter the IP address and the username and password to access the DataFabric Manager server and click Next. You can skip the validation of the DataFabric Manager server if you do not have the server credentials available during the installation.


Note In this configuration, the OnCommand Host Server is the same as the DataFabric Manager server.


22. Enter the vCenter server information.

23. Enter the IP address of the system on which you are installing the OnCommand Host Package (use the management VLAN address).

24. Enter the hostname or IP address of the system on which the vCenter server is installed and the username and password that allow the vSphere client to communicate with the vCenter server, then click Next.

25. Click Install on the summary page, then click Finish.


Note After you finish, the host service must be configured to perform backups. You must associate storage systems with a host service when you finish installing the OnCommand Host Package. For the purposes of this reference architecture, NetApp recommends using VSC.


Set a Shared Lock Directory to Coordinate Mutually Exclusive Activities on Shared Resources


Note If you plan to install the OnCommand Host Package on the same system as Virtual Storage Console 4.0, a best practice is to set up a shared lock directory. A shared lock directory is used for products that share resources through the vSphere client. This makes certain that mutually exclusive functions do not happen in parallel.


1. Stop the OnCommand Host Service VMware plug-in by using Services in Administrative Tools on the system:

a. Select Start > Administrative Tools.

b. Select Services.

c. Locate the OnCommand Host Service VMware Plug-in.

d. Right-click the OnCommand Host Service VMware Plug-in and select Stop.

2. Delete the locks subdirectory in the OnCommand Host Package installation directory: <OC_install_directory>\locks.

3. Locate and open the smvi.override file. This file is installed in the following location by default: <OC_install_directory>\VMware Plugin\etc\smvi.override.

4. Add the following line to the smvi.override file (see the example after this procedure): shared.subplugin.lock.directory=<VSC installation directory>\locks.

5. Save and close the smvi.override file.

6. Restart the OnCommand Host Service VMware Plug-in.
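For example, assuming Virtual Storage Console is installed in C:\Program Files\NetApp\Virtual Storage Console (an assumed path; substitute your actual VSC installation directory), the line added in step 4 would read:

shared.subplugin.lock.directory=C:\Program Files\NetApp\Virtual Storage Console\locks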

Install NetApp OnCommand Windows PowerShell Cmdlets

The following steps provide details for installing the NetApp OnCommand Windows PowerShell cmdlets.

1. Navigate to the installation folder for the OnCommand Core Package, then navigate to the folder that contains the PowerShell installation package: <DFM_Install_dir>\DFM\web\clients.

2. Copy the PowerShell installation executable to the system on which the Host Package software is installed.

3. Execute the installation package and follow the installation wizard prompts to finish the installation.
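Optionally, confirm from a Windows PowerShell prompt that the cmdlets registered correctly. The exact snap-in or module name depends on the OnCommand release, so treat the following as a general check rather than the documented verification procedure:

# List registered snap-ins and available modules, then look for the NetApp OnCommand entry
Get-PSSnapin -Registered
Get-Module -ListAvailable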

Configure Host Services

The following steps provide details for configuring host services.

1. To open Windows Firewall with Advanced Security, click Start > Administrative Tools > Windows Firewall with Advanced Security.

2. Select Inbound Rules.

3. For each OnCommand rule and the SnapManager for Virtual Infrastructure rule, right-click and select Properties. Select the Advanced tab. Select the Public checkbox. Click OK. When all changes have been made, all of these rules should show All under Profile.

4. Close Windows Firewall with Advanced Security.

5. Verify that the host service is registered with the DataFabric Manager server to correctly discover objects:

a. On the system that has the OnCommand core package installed, open a Web browser and enter https://localhost:8443 to open the OnCommand console.

b. Log in to the console.

c. Select Administration > Host Services.

d. In the Host Services list, verify that the name of the host service is listed. The status of the host service should be Pending Authorization.

e. If the host service is not displayed, add and configure the new host service by clicking Add on the Host Services tab and entering the correct properties.

6. Authorize the new host service to access the storage system credentials. You must authorize the host service to access the storage system to create jobs and to see the reporting statistics.

a. Select Administration > Host Services.

b. In the Host Services list, select a host service and click Edit.

c. In the Edit Host Service dialog box, click Authorize, review the certificate, and click Yes. If the Authorize area is not available, then the host service is already authorized.

7. Associate a host service with the vCenter to provide part of the communication needed for discovery, monitoring, and backup and recovery of virtual server objects such as virtual machines and datastores:

a. Select Administration > Host Services.

b. In the Host Services list, select a host service and click Edit.

c. Enter the vCenter server properties by specifying the hostname or FQDN for the host service registration, then click OK. If the details were specified during the installation, the property fields are already populated.

8. Verify communication between the host service and the OnCommand Plug-in.

a. Select View Menu > Server.

b. Scroll through the list of virtual machines to verify that the virtual machines related to the host services are listed.

9. Associate storage systems with a host service. You must associate one or more storage systems that host virtual machines for the host service. This enables communication between the service and the storage and ensures that storage objects such as virtual disks are discovered and that the host service features work properly.

a. Click Administration > Host Services.

b. In the Host Services list, select the host service with which you want to associate storage and click Edit.

c. In the Storage Systems area, click Associate. To associate storage systems shown in the available storage systems list, select the system names and click the right-arrow button, then click OK. To associate a storage system that is not listed, click Add, enter the required information, and click OK.

d. The newly associated storage systems are displayed in the Storage Systems area.

e. In the list of storage systems, verify that the status is Good for the login and NDMP credentials. If the status is other than Good for any storage system, you must edit the storage system properties to provide the correct credentials before you can use that storage system.

f. Click OK.

g. Click Refresh to see the associated storage systems.

Appendix

B-Series Deployment Procedure

To add B-Series chassis and servers to an existing Cisco UCS environment with managed C-Series, the first step is to cable ports from the FEX modules on the back of the chassis to the Cisco UCS 6248 fabric interconnects. On the back of the chassis, the FEX on the left cables to the Fabric A fabric interconnect. The FEX on the right cables to the Fabric B fabric interconnect. Depending on the FEX type, one, two, four, or eight uplink cables can be run from each FEX to the corresponding fabric interconnect.

In this particular implementation of FlexPod, the Global Chassis Discovery Policy in the Equipment tab of the Cisco UCS Manager was set to "2 Link," requiring a minimum of two uplinks to be run from each FEX to the fabric interconnect. It is important to use the same number of uplinks on each fabric. If the chassis you are connecting has Cisco UCS 2104XP FEXs, you must change the Link Grouping Preference of the Global Chassis Discovery Policy to None. You must also change the Connectivity Policy of each rack-mount FEX to an Admin State of Port Channel.
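If you prefer to make this change from the Cisco UCS Manager CLI rather than the GUI, the chassis discovery policy can be adjusted along the following lines (shown with the "2 Link" action and, for 2104XP FEXs, a link grouping preference of None). This is a sketch only; verify the exact scope names and option values against the Cisco UCS Manager CLI configuration guide for your release before using it.

scope org /
scope chassis-disc-policy
set action 2-link
set link-aggregation-pref none
commit-buffer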

When the chassis has been cabled to the fabric interconnects, go into the UCS Manager and designate the uplink ports from the chassis as Server ports. At this point, the chassis and blades should be automatically discovered and listed in the Equipment tab of the Cisco UCS Manager. The blades can then be added to server pools and assigned to service profiles.


Note Server Pool Policy Qualifications, Host Firmware Packages, and Management Firmware Packages must be created or updated for the blades being added to the environment. If you are using Server Pool Policy Qualification, it may be necessary to clone the Service Profile Template and use a different Server Pool Policy Qualification for the Cisco UCS B-Series chassis.


In the validation for this Cisco Validated Design, a Cisco UCS blade chassis was added to the FlexPod environment, service profiles were assigned to blades, the blades were iSCSI booted, and VMware ESXi 5.0 was installed on the blades to validate that Cisco UCS B-Series servers can be mixed with Cisco UCS C-Series servers in this FlexPod environment (Figure 4).

Figure 4 VMware vSphere Built On FlexPod With IP-Based Storage Environment

Cisco Nexus 1000v Deployment Procedure

The following sections provide detailed procedures for installing a high-availability Cisco Nexus 1000v in a FlexPod configuration with IP-based storage. Because this configuration is intended to minimize costs, the primary and standby Cisco Nexus 1000v Virtual Supervisor Modules (VSMs) are installed in virtual machines in the environment. The deployment procedures that follow are customized to include the specific environment variables that have been noted in previous sections. By the end of this section, a Nexus 1000v distributed virtual switch (DVS) will be provisioned. This procedure assumes that the Cisco Nexus 1000v software version 4.2.1.SV1.5.1 has been downloaded from www.cisco.com and expanded. This procedure also assumes that VMware vSphere 5.0 Enterprise Plus licensing is installed.

Log into Both Cisco Nexus 5548 Switches

1. Using an ssh client, log into both Nexus 5548 switches as admin.

Add Packet-Control VLAN to Switch Trunk Ports

Nexus A and Nexus B

1. Type config t.

2. Type vlan <Pkt-Ctrl VLAN ID>.

3. Type name Pkt-Ctrl-VLAN.

4. Type exit.

5. Type interface Po10.

6. Type switchport trunk allowed vlan add <Pkt-Ctrl VLAN ID>.

7. Type exit.

8. Type interface Po13.

9. Type switchport trunk allowed vlan add <Pkt-Ctrl VLAN ID>.

10. Type exit.

11. Type interface Po14.

12. Type switchport trunk allowed vlan add <Pkt-Ctrl VLAN ID>.

13. Type exit.

14. Type copy run start.

15. Type exit twice to close the Nexus switch interface.
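Taken together, and using an example packet-control VLAN ID of 3176 (an illustrative value only; substitute your own <Pkt-Ctrl VLAN ID>), the commands entered on each Nexus 5548 are:

config t
vlan 3176
name Pkt-Ctrl-VLAN
exit
interface Po10
switchport trunk allowed vlan add 3176
exit
interface Po13
switchport trunk allowed vlan add 3176
exit
interface Po14
switchport trunk allowed vlan add 3176
exit
copy run start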

Log Into Cisco UCS Manager

Cisco UCS Manager

These steps provide details for logging into the Cisco UCS environment.

1. Open a Web browser and navigate to the Cisco UCS 6248 fabric interconnect cluster address.

2. Select the Launch link to download the Cisco UCS Manager software.

3. If prompted to accept security certificates, accept as necessary.

4. When prompted, enter admin for the username and enter the administrative password and click Login to log in to the Cisco UCS Manager software.

Add Packet-Control VLAN to Host Server vNICs

Cisco UCS Manager

1. Select the LAN tab at the top of the left window.

2. Under LAN Cloud, right-click VLANs and select Create VLANs.

3. Enter Pkt-Ctrl-VLAN as the VLAN Name/Prefix, make sure Common/Global is selected, and enter <Pkt-Ctrl VLAN ID> as the VLAN ID. Click OK.

4. Click OK.

5. Expand Policies > root > vNIC Templates.

6. Select vNIC Template vNIC_Template_A.

7. Under the General tab, select Modify VLANs.

8. Select the Pkt-Ctrl-VLAN checkbox and click OK.

9. Click OK.

10. Select vNIC Template vNIC_Template_B.

11. Under the General tab, select Modify VLANs.

12. Select the Pkt-Ctrl-VLAN checkbox and click OK.

13. Click OK.

Log in to the VMware vCenter

1. Using the vSphere Client, log into the vCenter Server as Administrator.

Install the Virtual Ethernet Module (VEM) on Each ESXi Host

1. From the main window, select the first server in the list under the FlexPod Management cluster.

2. Select the Summary tab.

3. Under Storage on the right, right-click infra_datastore_1 and select Browse Datastore...

4. Select the root folder (/) and click the 3rd button at the top to add a folder.

5. Name the folder VEM and click OK.

6. On the left, select the VEM folder.

7. Click the 4th button at the top and select Upload File...

8. Navigate to the cross_cisco-vem-v140-4.2.1.1.5.1.0-3.0.1.vib file and click Open.

9. Click Yes. The VEM file should now appear in the VEM folder in the datastore.

10. Open the VMware vSphere CLI Command Prompt.

11. In the VMware vSphere CLI, for each ESXi Host, enter the following: esxcli -s <Host Server IP> -u root -p <Root Password> software vib install -v /vmfs/volumes/infra_datastore_1/VEM/cross_cisco-vem-v140-4.2.1.1.5.1.0-3.0.1.vib.
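After the install command completes on each host, you can verify that the VEM was installed by listing the installed VIBs and checking for the Cisco VEM entry, for example:

esxcli -s <Host Server IP> -u root -p <Root Password> software vib list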

Adjust ESXi Host Networking

1. With the VMware vSphere Client connected to vCenter, select the first host under the FlexPod_Management cluster. Select the Configuration tab. In the Hardware box, select Networking.

2. Click Properties... on vSwitch0.

3. Click the Network Adapters tab and select vmnic1.

4. Click Remove, then click Yes.

5. Click the Ports tab. Click Add...

6. Make sure Virtual Machine is selected and click Next.

7. Enter Pkt-Ctrl Network for the Network Label and enter the VLAN ID for the Nexus 1000v Packet and Control Network (Packet and Control are combined here). Click Next.

8. Click Finish.

9. Click Close.

10. Repeat this procedure on the second ESXi host (or script the equivalent changes with the vSphere CLI, as shown below).
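If you prefer to script these standard vSwitch changes rather than use the vSphere Client, the equivalent operations can be performed with the vSphere CLI. The commands below are a sketch based on the esxcli network vswitch standard namespace available in ESXi 5.0; verify them against your vSphere CLI release before use.

esxcli -s <Host Server IP> -u root -p <Root Password> network vswitch standard uplink remove --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli -s <Host Server IP> -u root -p <Root Password> network vswitch standard portgroup add --portgroup-name="Pkt-Ctrl Network" --vswitch-name=vSwitch0
esxcli -s <Host Server IP> -u root -p <Root Password> network vswitch standard portgroup set --portgroup-name="Pkt-Ctrl Network" --vlan-id=<Pkt-Ctrl VLAN ID>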

Deploy the Primary VSM

1. In the VMware vSphere Client connected to vCenter, select File > Deploy OVF Template.

2. Click Browse.

3. Navigate to the Nexus 1000v VSM nexus-1000v.4.2.1.SV1.5.1.ovf file and click Open.

4. Click Next.

5. Click Next.

6. Click Accept.

7. Click Next.

8. Give the virtual machine a name indicating it is the primary VSM and click Next.

9. Select infra_datastore_1 and click Next.

10. Click Next.

11. For Control and Packet, use the pull-downs to select Pkt-Ctrl Network. For Management, select MGMT Network. Click Next.

12. Select the Power on after deployment checkbox and click Finish.

13. The VSM virtual machine will deploy and be powered up. Click Close.

Base Configuration of the Primary VSM

1. When the VSM virtual machine is in the virtual machine list in vCenter, right-click it and select Open Console.

2. In the VSM virtual machine console window, enter and confirm the admin password.

3. Enter primary for the HA role.

4. Enter the Nexus 1000v unique domain id.

5. Enter yes to enter the basic configuration dialogue.

6. Enter n to Create another login account.

7. Enter n to Configure read-only SNMP community string.

8. Enter n to Configure read-write SNMP community string.

9. Enter the Nexus 1000v switch hostname.

10. Enter y to Continue with Out-of-band management configuration.

11. Enter the Nexus 1000v switch's management IP address.

12. Enter the Nexus 1000v switch's management subnet mask.

13. Enter y to Configure the default gateway.

14. Enter the Nexus 1000v switch's management gateway address.

15. Enter n to Configure advanced IP options.

16. Enter n to Enable the telnet service.

17. Enter y to Enable the ssh service.

18. Enter rsa for the type of ssh key to generate.

19. Enter 1024 for the Number of rsa key bits.

20. Enter y to Enable the http-server.

21. Enter y to Configure the ntp server.

22. Enter the NTP server's IP address.

23. Enter n to reconfigure the VEM feature level.

24. Enter y to Configure svs domain parameters.

25. Enter L2 for the SVS Control mode.

26. Enter <Pkt-Ctrl VLAN ID> for the control vlan.

27. Enter <Pkt-Ctrl VLAN ID> for the packet vlan.

28. Enter n to edit the configuration.

29. Enter y to Use the configuration and save it.

30. Log into the Nexus 1000v VSM as admin.

Register the Nexus 1000v as a vCenter Plugin

1. Using a web browser, navigate to https://<Primary VSM IP Address>.

2. Right-click the cisco_nexus_1000v_extension.xml hyperlink and select Save Target As.

3. Save the xml document to the local Desktop.

4. In the vSphere Client connected to vCenter, select Plug-ins > Manage Plug-ins.

5. Right-click in the white space in the window and select New Plug-in.

6. Browse to the Desktop and select the cisco_nexus_1000v_extension.xml document saved earlier.

7. Click Open.

8. Click Register Plug-in.

9. Click Ignore.

10. Click OK.

11. The Cisco_Nexus_1000v should now appear in the list of available plug-ins.

12. Click Close to close the Plug-in Manager.

Base Configuration of the Primary VSM

1. Using an ssh client, log into the Primary Nexus 1000v VSM as admin.

2. If you have your Nexus 1000v license product authorization key (PAK), install the license as described in the following steps.

3. Type show license host-id. The command will output the license VDH number.

4. From your Cisco Nexus 1000v software license claim certificate, locate the product authorization key.

5. On the Web, go to the Product License Registration site on the Cisco Software Download Web site.

6. From the Product License Registration Web site, follow the instructions for registering your VSM license. The license key file is sent to you in an e-mail. The license key authorizes use on only the host ID device. You must obtain separate license key file(s) for each of your Primary VSMs.

7. Copy your license to a UNIX or Linux machine.

8. Using copy scp://, copy your license file to bootflash: on the VSM.

9. Type install license bootflash:<license filename>.

10. Type show license usage and verify the license is installed.

11. Type copy run start to save the configuration.

12. Enter the global configuration mode by typing config t.

13. Type svs connection vCenter.

14. Type protocol vmware-vim.

15. Type remote ip address <vCenter Server IP> port 80.

16. Type vmware dvs datacenter-name FlexPod_DC_1.

17. Type connect.

18. Type exit.

19. Type ntp server <NTP Server IP> use-vrf management.

20. Type vlan <MGMT VLAN ID>.

21. Type name MGMT-VLAN.

22. Type vlan <NFS VLAN ID>.

23. Type name NFS-VLAN.

24. Type vlan <vMotion VLAN ID>.

25. Type name vMotion-VLAN.

26. Type vlan <Pkt-Ctrl VLAN ID>.

27. Type name Pkt-Ctrl-VLAN.

28. Type vlan <VM-Traffic VLAN ID>.

29. Type name VM-Traffic-VLAN.

30. Type vlan <Native VLAN ID>.

31. Type name Native-VLAN.

32. Type vlan <iSCSI-A VLAN ID>.

33. Type name iSCSI-A-VLAN.

34. Type vlan <iSCSI-B VLAN ID>.

35. Type name iSCSI-B-VLAN.

36. Type exit.

37. Type port-profile type ethernet system-uplink.

38. Type vmware port-group.

39. Type switchport mode trunk.

40. Type switchport trunk native vlan <Native VLAN ID>.

41. Type switchport trunk allowed vlan <MGMT VLAN ID>, <NFS VLAN ID>, <vMotion VLAN ID>, <Pkt-Ctrl VLAN ID>, <VM-Traffic VLAN ID>.

42. Type channel-group auto mode on mac-pinning.

43. Type no shutdown.

44. Type system vlan <MGMT VLAN ID>, <NFS VLAN ID>, <vMotion VLAN ID>, <Pkt-Ctrl VLAN ID>, <VM-Traffic VLAN ID>.

45. Type system mtu 9000.

46. Type state enabled.

47. Type port-profile type ethernet iSCSI-A-uplink.

48. Type vmware port-group.

49. Type switchport mode trunk.

50. Type switchport trunk native vlan <iSCSI-A VLAN ID>.

51. Type switchport trunk allowed vlan <iSCSI-A VLAN ID>.

52. Type no shutdown.

53. Type system vlan <iSCSI-A VLAN ID>.

54. Type system mtu 9000.

55. Type state enabled.

56. Type port-profile type ethernet iSCSI-B-uplink.

57. Type vmware port-group.

58. Type switchport mode trunk.

59. Type switchport trunk native vlan <iSCSI-B VLAN ID>.

60. Type switchport trunk allowed vlan <iSCSI-B VLAN ID>.

61. Type no shutdown.

62. Type system vlan <iSCSI-B VLAN ID>.

63. Type system mtu 9000.

64. Type state enabled.

65. Type port-profile type vethernet MGMT-VLAN.

66. Type vmware port-group.

67. Type switchport mode access.

68. Type switchport access vlan <MGMT VLAN ID>.

69. Type no shutdown.

70. Type system vlan <MGMT VLAN ID>.

71. Type state enabled.

72. Type port-profile type vethernet NFS-VLAN.

73. Type vmware port-group.

74. Type switchport mode access.

75. Type switchport access vlan <NFS VLAN ID>.

76. Type no shutdown.

77. Type system vlan <NFS VLAN ID>.

78. Type state enabled.

79. Type port-profile type vethernet vMotion-VLAN.

80. Type vmware port-group.

81. Type switchport mode access.

82. Type switchport access vlan <vMotion VLAN ID>.

83. Type no shutdown.

84. Type system vlan <vMotion VLAN ID>.

85. Type state enabled.

86. Type port-profile type vethernet VM-Traffic-VLAN.

87. Type vmware port-group.

88. Type switchport mode access.

89. Type switchport access vlan <VM-Traffic VLAN ID>.

90. Type no shutdown.

91. Type system vlan <VM-Traffic VLAN ID>.

92. Type state enabled.

93. Type port-profile type vethernet Pkt-Ctrl-VLAN.

94. Type vmware port-group.

95. Type switchport mode access.

96. Type switchport access vlan <Pkt-Ctrl VLAN ID>.

97. Type no shutdown.

98. Type system vlan <Pkt-Ctrl VLAN ID>.

99. Type state enabled.

100. Type port-profile type vethernet iSCSI-A-VLAN.

101. Type vmware port-group.

102. Type switchport mode access.

103. Type switchport access vlan <iSCSI-A VLAN ID>.

104. Type no shutdown.

105. Type system vlan <iSCSI-A VLAN ID>.

106. Type state enabled.

107. Type port-profile type vethernet iSCSI-B-VLAN.

108. Type vmware port-group.

109. Type switchport mode access.

110. Type switchport access vlan <iSCSI-B VLAN ID>.

111. Type no shutdown.

112. Type system vlan <iSCSI-B VLAN ID>.

113. Type state enabled.

114. Type exit.

115. Type copy run start to save the configuration. (A consolidated excerpt of the resulting configuration is shown below.)
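For reference, the svs connection block and the system-uplink port profile created in the preceding steps correspond to the following configuration excerpt. The remaining Ethernet and vEthernet port profiles defined above follow the same pattern, and the placeholders match those used throughout this document.

! vCenter connection for the Nexus 1000v DVS
svs connection vCenter
  protocol vmware-vim
  remote ip address <vCenter Server IP> port 80
  vmware dvs datacenter-name FlexPod_DC_1
  connect
! System uplink port profile carrying the infrastructure VLANs
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk native vlan <Native VLAN ID>
  switchport trunk allowed vlan <MGMT VLAN ID>,<NFS VLAN ID>,<vMotion VLAN ID>,<Pkt-Ctrl VLAN ID>,<VM-Traffic VLAN ID>
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan <MGMT VLAN ID>,<NFS VLAN ID>,<vMotion VLAN ID>,<Pkt-Ctrl VLAN ID>,<VM-Traffic VLAN ID>
  system mtu 9000
  state enabled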

Migrate the ESXi Hosts' Networking to the Nexus 1000v

1. In the VMware vSphere Client connected to vCenter, select Home > Networking.

2. Expand the vCenter, DataCenter, and Nexus 1000v Folder.

3. Select the Nexus 1000v switch.

4. Under Basic Tasks for the vSphere Distributed Switch, select Add a host.

5. For both hosts, select vmnic1 and use the pull-down to select the system-uplink Uplink port group.

6. Click Next.

7. For all except the iSCSI VMkernel ports, select the appropriate Destination port group from the Nexus 1000v.

8. Click Next.

9. Select the Migrate virtual machine networking checkbox. Expand each virtual machine and select the port groups for migration individually.

10. Click Next.

11. Click Finish. Wait for the migration process to complete.

12. In the vSphere Client vCenter window, select Home > Hosts and Clusters.

13. Select the first ESXi host and select the Configuration tab. In the Hardware box, select Networking.

14. Make sure vSphere Standard Switch is selected at the top next to View. vSwitch0 should have no active VMkernel or virtual machine network ports on it. On the upper-right of vSwitch0, click Remove.

15. Click Yes.

16. After vSwitch0 has disappeared from the screen, click vSphere Distributed Switch at the top next to View.

17. Click Manage Physical Adapters.

18. Scroll down to the system-uplink box and click <Click to Add NIC>.

19. Select vmnic0 and click OK.

20. Click OK to close the Manage Physical Adapters window. Two system-uplinks should now be present.

21. Click Manage Physical Adapters.

22. Scroll down to the iSCSI-A-uplink box and click <Click to Add NIC>.

23. Select vmnic2 and click OK.

24. Click Yes.

25. Click OK.

26. Click Manage Virtual Adapters.

27. Click Add.

28. Select Migrate existing virtual adapters and click Next.

29. Make sure only VMkernel-iSCSI-A is selected and select the iSCSI-A-VLAN port group. Click Next.

30. Click Yes.

31. Click Finish.

32. Click Close.

33. Click Manage Physical Adapters.

34. Scroll down to the iSCSI-B-uplink box and click <Click to Add NIC>.

35. Select vmnic3 and click OK.

36. Click Yes.

37. Click OK.

38. Click Manage Virtual Adapters.

39. Click Add.

40. Select Migrate existing virtual adapters and click Next.

41. Select VMkernel-iSCSI-B and select the iSCSI-B-VLAN port group. Click Next.

42. Click Yes.

43. Click Finish.

44. Click Close.

45. Click vSphere Standard Switch at the top next to View.

46. Using Remove, remove both of the Standard Switches.

47. Select the second ESXi host and select the Configuration tab. In the Hardware box, select Networking.

48. Make sure vSphere Standard Switch is selected at the top next to View. vSwitch0 should have no active VMkernel or virtual machine network ports on it. On the upper-right of vSwitch0, click Remove.

49. Click Yes.

50. After vSwitch0 has disappeared from the screen, click vSphere Distributed Switch at the top next to View.

51. Click Manage Physical Adapters.

52. Scroll down to the system-uplink box and click <Click to Add NIC>.

53. Select vmnic0 and click OK.

54. Click OK to close the Manage Physical Adapters window. Two system-uplinks should now be present.

55. Click Manage Physical Adapters.

56. Scroll down to the iSCSI-B-uplink box and click <Click to Add NIC>.

57. Select vmnic2 and click OK.

58. Click Yes.

59. Click OK.

60. Click Manage Virtual Adapters.

61. Click Add.

62. Select Migrate existing virtual adapters and click Next.

63. Make sure only VMkernel-iSCSI-B is selected and select the iSCSI-B-VLAN port group. Click Next.

64. Click Yes.

65. Click Finish.

66. Click Close.

67. Click Manage Physical Adapters.

68. Scroll down to the iSCSI-A-uplink box and click <Click to Add NIC>.

69. Select vmnic3 and click OK.

70. Click Yes.

71. Click OK.

72. Click Manage Virtual Adapters.

73. Click Add.

74. Select Migrate existing virtual adapters and click Next.

75. Select VMkernel-iSCSI-A and select the iSCSI-A-VLAN port group. Click Next.

76. Click Yes.

77. Click Finish.

78. Click Close.

79. Click vSphere Standard Switch at the top next to View.

80. Using Remove, remove both of the Standard Switches.

81. Back in the ssh client connected to the Nexus 1000v, type show interface status to verify that all interfaces and port channels have been correctly configured.

82. Type show module and verify that the two ESXi hosts are present as modules.

83. On both hosts, go to the Storage Adapters section and verify that there are two connected paths to the NETAPP iSCSI Disk used to boot the host.

Deploy the Secondary VSM

1. In the VMware vSphere Client connected to vCenter, select File > Deploy OVF Template.

2. Click Browse.

3. Navigate to the Nexus 1000v VSM nexus-1000v.4.2.1.SV1.5.1.ovf file and click Open.

4. Click Next.

5. Click Next.

6. Click Accept.

7. Click Next.

8. Give the virtual machine a name indicating it is the secondary VSM and click Next.

9. Select infra_datastore_1 and click Next.

10. Click Next.

11. For Control and Packet, use the pull-downs to select Pkt-Ctrl-VLAN. For Management, select MGMT-VLAN. Click Next.

12. Select the Power on after deployment checkbox and click Finish.

13. The VSM virtual machine will deploy and be powered up. Click Close.

Base Configuration of the Secondary VSM

1. When the Secondary VSM virtual machine is in the virtual machine list in vCenter, right-click it and select Open Console.

2. In the VSM virtual machine console window, enter and confirm the admin password.

3. Enter secondary for the HA role.

4. Enter yes.

5. Enter the Nexus 1000v unique domain id that you entered on the Primary VSM. The virtual machine will reboot. Wait for a login prompt.

6. Back in the ssh client connected to the Primary Nexus 1000v VSM, type show module to verify that the secondary VSM has a status of ha-standby.

Nexus 5548 Reference Configurations

Nexus A

!Command: show running-config

!Time: Wed Apr 11 22:18:20 2012

version 5.1(3)N2(1)

no feature telnet

no telnet server enable

cfs eth distribute

feature lacp

feature vpc

feature lldp

username admin password 5 $1$VbxAbFP3$7nWUsAzHL8Ps9X1lCMqxG/ role network-admin

ip domain-lookup

switchname ice5548-1

class-map type qos class-fcoe

class-map type queuing class-fcoe

match qos-group 1

class-map type queuing class-all-flood

match qos-group 2

class-map type queuing class-ip-multicast

match qos-group 2

class-map type network-qos class-fcoe

match qos-group 1

class-map type network-qos class-all-flood

match qos-group 2

class-map type network-qos class-ip-multicast

match qos-group 2

policy-map type network-qos jumbo

class type network-qos class-default

mtu 9000

multicast-optimize

system qos

service-policy type network-qos jumbo

snmp-server user admin network-admin auth md5 0x2e8af112d36e9af1466f4e4db0ce36a3

priv 0x2e8af112d36e9af1466f4e4db0ce36a3 localizedkey

snmp-server enable traps entity fru

ntp server 192.168.175.4 use-vrf management

vrf context management

ip route 0.0.0.0/0 192.168.175.1

vlan 1

vlan 2

name Native-VLAN

vlan 3170

name NFS-VLAN

vlan 3171

name iSCSI-A-VLAN

vlan 3172

name iSCSI-B-VLAN

vlan 3173

name vMotion-VLAN

vlan 3174

name VM-Traffic-VLAN

vlan 3175

name MGMT-VLAN

spanning-tree port type edge bpduguard default

spanning-tree port type edge bpdufilter default

spanning-tree port type network default

vpc domain 7

role priority 10

peer-keepalive destination 192.168.175.70 source 192.168.175.69

interface port-channel10

description vPC peer-link

switchport mode trunk

vpc peer-link

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3175

spanning-tree port type network

interface port-channel11

description ice2240-1a

switchport mode trunk

vpc 11

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3172

spanning-tree port type edge trunk

interface port-channel12

description ice2240-1b

switchport mode trunk

vpc 12

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3172

spanning-tree port type edge trunk

interface port-channel13

description iceucsm-7a

switchport mode trunk

vpc 13

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3171,3173-3175

spanning-tree port type edge trunk

interface port-channel14

description iceucsm-7b

switchport mode trunk

vpc 14

switchport trunk native vlan 2

switchport trunk allowed vlan 3170,3172-3175

spanning-tree port type edge trunk

interface port-channel20

description icecore uplink

switchport mode trunk

vpc 20

switchport trunk native vlan 2

switchport trunk allowed vlan 3175

spanning-tree port type network

interface Ethernet1/1

description ice2240-1a:e1a

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3172

channel-group 11 mode active

interface Ethernet1/2

description ice2240-1b:e1a

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3172

channel-group 12 mode active

interface Ethernet1/3

description iceucsm-7a:Eth1/19

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3171,3173-3175

channel-group 13 mode active

interface Ethernet1/4

description iceucsm-7b:Eth1/19

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3170,3172-3175

channel-group 14 mode active

interface Ethernet1/5

description ice5548-2:Eth1/5

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3175

channel-group 10 mode active

interface Ethernet1/6

description ice5548-2:Eth1/6

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3175

channel-group 10 mode active

interface Ethernet1/7

interface Ethernet1/8

interface Ethernet1/9

interface Ethernet1/10

interface Ethernet1/11

interface Ethernet1/12

interface Ethernet1/13

interface Ethernet1/14

interface Ethernet1/15

interface Ethernet1/16

interface Ethernet1/17

interface Ethernet1/18

interface Ethernet1/19

interface Ethernet1/20

description icecore:Eth1/21

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3175

channel-group 20 mode active

interface Ethernet1/21

interface Ethernet1/22

interface Ethernet1/23

interface Ethernet1/24

interface Ethernet1/25

interface Ethernet1/26

interface Ethernet1/27

interface Ethernet1/28

interface Ethernet1/29

interface Ethernet1/30

interface Ethernet1/31

interface Ethernet1/32

interface mgmt0

ip address 192.168.175.69/24

line console

line vty

boot kickstart bootflash:/n5000-uk9-kickstart.5.0.3.N2.2b.bin

boot system bootflash:/n5000-uk9.5.0.3.N2.2b.bin

Nexus B

!Command: show running-config

!Time: Wed Apr 11 22:26:15 2012

version 5.1(3)N2(1)

no feature telnet

no telnet server enable

cfs eth distribute

feature lacp

feature vpc

feature lldp

username admin password 5 $1$4ght.2ge$Yj1LG2JEPqWnJVwz544lQ0 role network-admin

ip domain-lookup

switchname ice5548-2

class-map type qos class-fcoe

class-map type queuing class-fcoe

match qos-group 1

class-map type queuing class-all-flood

match qos-group 2

class-map type queuing class-ip-multicast

match qos-group 2

class-map type network-qos class-fcoe

match qos-group 1

class-map type network-qos class-all-flood

match qos-group 2

class-map type network-qos class-ip-multicast

match qos-group 2

policy-map type network-qos jumbo

class type network-qos class-default

mtu 9000

multicast-optimize

system qos

service-policy type network-qos jumbo

snmp-server user admin network-admin auth md5 0xe481d1d2fee4aaa498237df1852270e8

priv 0xe481d1d2fee4aaa498237df1852270e8 localizedkey

snmp-server enable traps entity fru

ntp server 192.168.175.4 use-vrf management

vrf context management

ip route 0.0.0.0/0 192.168.175.1

vlan 1

vlan 2

name Native-VLAN

vlan 3170

name NFS-VLAN

vlan 3171

name iSCSI-A-VLAN

vlan 3172

name iSCSI-B-VLAN

vlan 3173

name vMotion-VLAN

vlan 3174

name VM-Traffic-VLAN

vlan 3175

name MGMT-VLAN

spanning-tree port type edge bpduguard default

spanning-tree port type edge bpdufilter default

spanning-tree port type network default

vpc domain 7

role priority 20

peer-keepalive destination 192.168.175.69 source 192.168.175.70

interface port-channel10

description vPC peer-link

switchport mode trunk

vpc peer-link

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3175

spanning-tree port type network

interface port-channel11

description ice2240-1a

switchport mode trunk

vpc 11

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3172

spanning-tree port type edge trunk

interface port-channel12

description ice2240-1b

switchport mode trunk

vpc 12

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3172

spanning-tree port type edge trunk

interface port-channel13

description iceucsm-7a

switchport mode trunk

vpc 13

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3171,3173-3175

spanning-tree port type edge trunk

interface port-channel14

description iceucsm-7b

switchport mode trunk

vpc 14

switchport trunk native vlan 2

switchport trunk allowed vlan 3170,3172-3175

spanning-tree port type edge trunk

interface port-channel20

description icecore uplink

switchport mode trunk

vpc 20

switchport trunk native vlan 2

switchport trunk allowed vlan 3175

spanning-tree port type network

interface Ethernet1/1

description ice2240-1a:e1b

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3172

channel-group 11 mode active

interface Ethernet1/2

description ice2240-1b:e1b

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3172

channel-group 12 mode active

interface Ethernet1/3

description iceucsm-7a:Eth1/20

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3171,3173-3175

channel-group 13 mode active

interface Ethernet1/4

description iceucsm-7b:Eth1/20

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3170,3172-3175

channel-group 14 mode active

interface Ethernet1/5

description ice5548-1:Eth1/5

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3175

channel-group 10 mode active

interface Ethernet1/6

description ice5548-1:Eth1/6

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3170-3175

channel-group 10 mode active

interface Ethernet1/7

interface Ethernet1/8

interface Ethernet1/9

interface Ethernet1/10

interface Ethernet1/11

interface Ethernet1/12

interface Ethernet1/13

interface Ethernet1/14

interface Ethernet1/15

interface Ethernet1/16

interface Ethernet1/17

interface Ethernet1/18

interface Ethernet1/19

interface Ethernet1/20

description icecore:Eth1/22

switchport mode trunk

switchport trunk native vlan 2

switchport trunk allowed vlan 3175

channel-group 20 mode active

interface Ethernet1/21

interface Ethernet1/22

interface Ethernet1/23

interface Ethernet1/24

interface Ethernet1/25

interface Ethernet1/26

interface Ethernet1/27

interface Ethernet1/28

interface Ethernet1/29

interface Ethernet1/30

interface Ethernet1/31

interface Ethernet1/32

interface mgmt0

ip address 192.168.175.70/24

line console

line vty

boot kickstart bootflash:/n5000-uk9-kickstart.5.0.3.N2.2b.bin

boot system bootflash:/n5000-uk9.5.0.3.N2.2b.bin

