Deployment Guide for FlexPod Datacenter with Cisco ACI Multi-Pod with NetApp MetroCluster IP and VMware vSphere 6.7
Last Updated: October 19, 2018
About the Cisco Validated Design Program
The Cisco Validated Design (CVD) program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information, see:
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unified Computing System (Cisco UCS), Cisco UCS B-Series Blade Servers, Cisco UCS C-Series Rack Servers, Cisco UCS S-Series Storage Servers, Cisco UCS Manager, Cisco UCS Management Software, Cisco Unified Fabric, Cisco Application Centric Infrastructure, Cisco Nexus 9000 Series, Cisco Nexus 7000 Series. Cisco Prime Data Center Network Manager, Cisco NX-OS Software, Cisco MDS Series, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2018 Cisco Systems, Inc. All rights reserved.
Table of Contents
MetroCluster IP Nexus 3132Q-V Switch Cabling
Cisco Application Policy Infrastructure Controller (APIC)
Cisco APIC Deployment Considerations
Initial ACI Fabric Setup Verification
Software Upgrade for Devices in the First Site (if required)
Fabric Wide Enforce Subnet Check for IP and MAC Learning
Cisco ACI Inter-Pod Deployment
Spine Configurations for Pod 1
Second Site - Spine Configuration
Software Upgrade for 40GbE Site devices (if required)
Setting Up Out-of-Band Management IP Addresses for New ACI Switches
Create LLDP Interface Policies
Create BPDU Filter/Guard Policies
Create Virtual Port Channels (VPCs)
VPC – UCS Fabric Interconnects
Configuring Common Tenant for In-Band Management Access
Create Security Filters in Tenant Common
Create Application Profile for Host Connectivity and MetroCluster
Configuring the Nexus 7000s for ACI Connectivity (Sample)
Configuring ACI Shared Layer 3 Out
Cluster Interconnect Switch Configuration
Verify HA Config State of Controllers and Chassis
MetroCluster Base Configuration
Upgrade Cisco UCS Manager Software to Version 3.2(3d)
Add Block of IP Addresses for KVM Access
Edit Policy to Automatically Discover Server Ports
Verify Server and Enable Uplink Ports
Acknowledge Cisco UCS Chassis and FEX
Create Uplink Port Channels to Cisco Nexus 93180 Switches
Create an Organization for this FlexPod
Create an IQN Pool for iSCSI Boot
Create iSCSI Boot IP Address Pools
Modify Default Host Firmware Package
Set Jumbo Frames in Cisco UCS Fabric
Create Local Disk Configuration Policy (Optional)
Create High Traffic VMware Adapter Policy
Update the Default Maintenance Policy
Create LAN Connectivity Policy for iSCSI Boot
Create iSCSI Boot Service Profile Template
Cisco UCS Configuration on 10GbE Site
Storage Configuration – SAN Boot
NetApp ONTAP Boot Storage Setup
Download Cisco Custom Image for ESXi 6.7
Log in to Cisco UCS Fabric Interconnect
Set Up VMware ESXi Installation
Set Up Management Networking for ESXi Hosts
Log in to VMware ESXi Hosts by Using VMware Host Client
Set Up VMkernel Ports and Virtual Switch
Install VMware ESXi Patches and VAAI plugin
Building the VMware vCenter Server Appliance
Setting Up VMware vCenter Server
ESXi Dump Collector setup for iSCSI-Booted Hosts
Add APIC-Integrated vSphere Distributed Switch (vDS)
Create Virtual Machine Manager (VMM) Domain in APIC
Attaching the VMM Domain to the IB-Mgmt EPG
Add ESXi Hosts to APIC-Integrated vDS
Adding 3-Tier-Application Profile
Data Center Failure and Recovery
Validation Environment Before Site Failure
Validation Environment after Site Failure
Validation Environment after Failback
Cisco Validated Designs deliver systems and solutions that are designed, tested, and documented to facilitate and improve customer deployments. These designs incorporate a wide range of technologies and products into a portfolio of solutions that have been developed to address the business needs of the customers and to guide them from design to deployment.
Customers looking to deploy applications using a shared data center infrastructure face a number of challenges. A recurrent infrastructure challenge is to achieve the required levels of IT agility and efficiency that can effectively meet the company’s business objectives. Addressing these challenges requires having an optimal solution with the following key characteristics:
· Availability: Help ensure applications and services availability at all times with no single point of failure
· Flexibility: Ability to support new services without requiring underlying infrastructure modifications
· Efficiency: Facilitate efficient operation of the infrastructure through re-usable policies
· Manageability: Ease of deployment and ongoing management to minimize operating costs
· Scalability: Ability to expand and grow with significant investment protection
· Compatibility: Minimize risk by ensuring compatibility of integrated components
Cisco and NetApp have partnered to deliver a series of FlexPod solutions that enable strategic data center platforms with the above characteristics. The FlexPod solution delivers an integrated architecture that incorporates compute, storage, and network design best practices, minimizing IT risk by validating the integrated architecture to ensure compatibility between the various components. The solution also addresses IT pain points by providing documented design guidance, deployment guidance, and support that can be used in the various stages (planning, design, and implementation) of a deployment.
The FlexPod Datacenter with Cisco ACI Multi-Pod and NetApp MetroCluster IP CVD delivers a FlexPod Datacenter solution for a highly available multi-datacenter environment. The multi-datacenter architecture offers the ability to balance workloads between the two datacenters using non-disruptive workload mobility, enabling migration of services between sites without sustaining an outage.
The FlexPod with ACI Multi-Pod and NetApp MetroCluster IP solution showcases:
· Seamless workload mobility across data centers
· Consistent policies across the sites
· Layer-2 extension across geographically dispersed DCs
· Enhanced downtime avoidance during maintenance
· Disaster avoidance and recovery
FlexPod is a pre-designed, integrated, and validated data center architecture that combines Cisco UCS servers, the Cisco Nexus family of switches, and NetApp storage arrays into a single, flexible architecture. FlexPod is designed for high availability, with no single point of failure, while maintaining cost-effectiveness and flexibility in the design to support a wide variety of workloads.
In the FlexPod Datacenter with Cisco ACI Multi-Pod and NetApp MetroCluster IP solution, the Cisco ACI Multi-Pod solution interconnects and centrally manages two or more ACI fabrics deployed in separate, geographically dispersed datacenters. NetApp MetroCluster IP provides synchronous replication between two NetApp controllers, delivering storage high availability and disaster recovery in a campus or metropolitan area. This validated design enables customers to quickly and reliably deploy a VMware vSphere based private cloud on a distributed integrated infrastructure, delivering a unified solution in which multiple sites behave in much the same way as a single site.
This document covers the deployment and validation of the FlexPod Datacenter with Cisco ACI Multi-Pod and NetApp MetroCluster IP solution. The deployment is a detailed walk-through of the solution buildout for two geographically distributed data centers located 75 km apart. The FlexPod deployment discussed in this document has been validated for resiliency (under fair load) and fault tolerance during system upgrades, component failures, and partial as well as total power loss scenarios.
The audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineers, and customers who want to take advantage of an infrastructure built to deliver IT efficiency and enable IT innovation.
This document provides step-by-step configuration and implementation guidelines for a FlexPod Datacenter extending across multiple data centers. These data centers are implemented with Cisco UCS Fabric Interconnects, NetApp AFF, the Cisco ACI Multi-Pod solution, and NetApp MetroCluster IP. This document covers deployment of VMware vSphere 6.7 on FlexPod Datacenter using the iSCSI and NFS storage protocols across the configured data centers.
The following design elements distinguish this version of FlexPod from previous FlexPod models:
· Integration of Cisco ACI 3.2 Multi-Pod with FlexPod Datacenter (DC) for seamlessly supporting multiple sites.
· Integration of NetApp MetroCluster IP for synchronous data replication across the two DCs
· Support for vSphere 6.7
· Setting up, validating and highlighting operational aspects of this new multi-DC FlexPod design
· Deployment guidance for setting up and connecting two DCs using a Nexus 7000 based Inter-Pod Network.
For detailed information about the FlexPod Datacenter with Cisco ACI Multi-Pod and NetApp MetroCluster IP solution design elements, see: https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flexpod_esxi67_n9k_aci_metrocluster_design.html
The FlexPod Datacenter with Cisco ACI Multi-Pod and NetApp MetroCluster IP solution couples Cisco ACI multi-DC functionality with NetApp multi-site software features. This solution combines array-based clustering with synchronous replication to deliver continuous availability and zero data loss. Cisco ACI Multi-Pod allows layer-2 extensions across both sites and allows layer-3 connectivity out of the ACI fabric utilizing the local gateways at each datacenter for optimal routing configuration. At the virtualization layer, VMware vSphere High Availability (HA) is enabled for all the ESXi hosts and a single vCenter instance manages this HA cluster. Two NetApp AFF A700 systems configured for MetroCluster IP provide a seamless, fully replicated storage solution for workloads deployed on these ESXi hosts in either datacenter. The validated solution achieves the following core design goals:
· Campus-wide and metro-wide protection with the ability to provide WAN-based disaster recovery
· Design supporting active/active deployment use case
· Common management layer across multiple (two) datacenters for deterministic deployment
· Consistent policy and seamless workload migration across the sites
· IP based storage access and synchronous replication
Customers are encouraged to understand and follow the VMware vSphere Metro Storage Cluster (vMSC) guidelines outlined in the following document: https://storagehub.vmware.com/t/vsphere-storage/vmware-vsphere-r-metro-storage-cluster-recommended-practices/. A number of these recommendations were implemented when setting up the validation environment, but they are not explicitly outlined in this document because not all of the settings apply to every customer environment. Customers should therefore understand and select the vMSC settings that best fit their individual environments.
The FlexPod Datacenter with Cisco ACI Multi-Pod, NetApp MetroCluster IP and VMware vSphere 6.7 solution was validated for two very similarly built datacenters separated by a distance of 75 km. The high-level physical topology is shown in Figure 1:
Figure 1 High-level Physical Topology
Each datacenter contains all the components highlighted in a FlexPod datacenter for Cisco ACI: Cisco UCS, Cisco Nexus 9000 spine and leaf switches, a multi-node cluster of Cisco APICs, and NetApp AFF A700 storage controllers. The two datacenters are connected over two 75 km fiber links using a pair of Cisco Nexus 7004 switches at each site, as shown in Figure 1. The Cisco ACI Multi-Pod configuration manages the network at both datacenters as a single entity, so a single APIC cluster is utilized to manage both ACI fabrics. These FlexPod datacenter components are connected and configured according to the best practices of both Cisco and NetApp and provide an ideal platform for running a variety of workloads with confidence.
The reference architecture described in this document leverages the components detailed in the FlexPod Datacenter with VMware 6.5 Update 1 and Cisco ACI 3.1 Design Guide. The FlexPod with ACI design at each datacenter was built using the core design principles and configurations outlined in that design guide. However, software and hardware updates such as vSphere 6.7, ACI 3.2, and NetApp ONTAP 9.4 running on NetApp AFF A700 controllers are new additions to this solution.
Table 1 lists the software versions for hardware and virtual components used in this solution. Each version used has been certified within interoperability matrixes supported by Cisco, NetApp, and VMware. For more information about supported versions, consult the following sources:
· Cisco UCS Hardware and Software Interoperability Tool
· NetApp Interoperability Matrix
· Cisco ACI Recommended Release
· ACI Virtualization Compatibility Matrix
When selecting a version that differs from the validated versions below, it is highly recommended to read the release notes of the selected version to be aware of any changes to features or commands.
Table 1 Software Revisions
Layer | Device | Image | Comments
Compute | Cisco UCS Fabric Interconnects 6200, 6300 Series, UCS B-200 M5 | 3.2(3d) | Includes the Cisco UCS-IOM 2208XP, IOM-2304, Cisco UCS Manager, and Cisco UCS VIC 1340
Network | Cisco Nexus 9000 ACI Mode | 13.2(2l) | iNX-OS
Network | Cisco APIC | 3.2(2l) | ACI Release
Network | Nexus 7004 | 6.2(20) | Can be any release
Storage | NetApp AFF A700 | ONTAP 9.4 |
Storage | Cisco 3132Q-V | 7.0(3)I4(1) | NetApp recommended
Software | Cisco UCS Manager | 3.2(3d) |
Software | VMware vSphere ESXi Cisco Custom ISO | 6.7 |
Software | VMware vSphere nenic driver for ESXi | 1.0.16.0 |
Software | VMware vCenter | 6.7 |
Table 2 outlines the VLANs necessary for solution deployment. In this table, ACI-VMM range indicates dynamically assigned VLANs from the APIC-Controlled VMware Virtual Distributed Switch.
Table 2 Necessary VLANs
VLAN Name | VLAN Purpose | ID Used in Validating This Document
IB-MGMT | VLAN for in-band management interfaces | 213
InterCluster | VLAN for traffic between the AFF A700 controllers | 113
Native-VLAN | VLAN to which untagged frames are assigned | 2
Foundation-NFS-VLAN | VLAN for NFS traffic | 3050
vMotion-VLAN | VLAN designated for the movement of VMs from one physical host to another | 3000
iSCSI-A | VLAN for iSCSI Boot on Fabric A | 3010
iSCSI-B | VLAN for iSCSI Boot on Fabric B | 3020
ACI-VMM-[1101-1150] | Dynamic VLANs for EPGs associated to the VMM vDS | 1101-1150
The vCenter and AD servers used in this validation were installed within the FlexPod environment; however, their installation is not covered in this document. Table 3 lists the required VMs:
Table 3 Infrastructure Virtual Machines
Virtual Machine Description | Host Name
Active Directory (AD) | fpv-ad1, fpv-ad2
VMware vCenter | fpv-vc
The information in this section is provided as a reference for cabling the physical equipment in a FlexPod environment. The tables in this section contain details for the prescribed and supported configuration of the NetApp AFF A700 running NetApp ONTAP® 9.4.
For any modifications of this prescribed architecture, consult the NetApp Interoperability Matrix Tool (IMT) and the Cisco FlexPod documents on cisco.com. A cisco.com login is required to access the FlexPod documents, and a NetApp Support account is required to access the NetApp tool.
This document assumes that out-of-band management ports are plugged into an existing management infrastructure at the deployment sites. These interfaces will be used in various configuration steps. Make sure to use the cabling directions in this section as a guide.
The NetApp storage controller and disk shelves should be connected according to best practices for the specific storage controller and disk shelves. For disk shelf cabling, refer to the Universal SAS and ACP Cabling Guide: https://library.netapp.com/ecm/ecm_get_file/ECMM1280392.
Figure 2 details the cable connections used in the validation environment for one of the sites based on the Cisco UCS 6332-16UP Fabric Interconnect.
In this document, the site using the Cisco UCS 6332-16UP FI and Nexus 93180LC-EX switches is referred to as the 40GbE (or 40G) site because of its 40GbE end-to-end connectivity.
An additional 1Gb management connection to an out-of-band network switch, separate from the FlexPod infrastructure, is required. The Cisco UCS fabric interconnects and Cisco Nexus switches are connected to the out-of-band network switch, and each NetApp AFF controller has two connections to it.
Figure 2 FlexPod Cabling with Cisco UCS 6332-16UP Fabric Interconnect
Figure 3 details the cable connections used in the validation lab for the 10GbE site based on the Cisco UCS 6248UP Fabric Interconnect.
In this document, the site using the Cisco UCS 6248UP FI and Nexus 93180YC switches is referred to as the 10GbE (or 10G) site because the compute-to-network connections consist of 10GbE links. Customers can bundle multiple 10GbE links to obtain higher bandwidth.
Figure 3 FlexPod Cabling with Cisco UCS 6248UP Fabric Interconnect
Figure 4 and Figure 5 show the ports and connections between the leaf and spine switches at two sites. Each leaf switch is connected to both of the spine switches in its respective site using 40G links. Customers can increase the number of links from each leaf switch to the spine switch based on their bandwidth requirements.
Figure 4 Leaf-Spine Connectivity for the 40GbE Site
Figure 5 Leaf-Spine Connectivity for 10GbE Site
The Inter-Pod Network (IPN) consists of four Nexus 7004 switches connected to the ACI spine switches as shown in Figure 6:
Figure 6 Inter Pod Network Physical Design
At each site, both Cisco Nexus 9504 spine switches are connected to both Cisco Nexus 7004 devices using 10GbE connections for high availability. The Cisco Nexus 9504 spine switches with the N9K-X9732C-EX line card only support 40G ports; therefore, a CVR-QSFP-SFP10G adapter is used to connect a 40G port on the Cisco Nexus 9504 to a 10G port on the Cisco Nexus 7004. The two long-distance connections between the Cisco Nexus 7004 devices use SFP-10G-ZR optics connected over 75 km of fiber. As shown in Figure 6, all the network connectivity is fully redundant, and the failure of one (or in some cases more than one) link or device keeps the connectivity between the sites intact.
The Cisco Nexus 3132Q-V intercluster switch is used for the cluster HA interconnect, MetroCluster IP iSCSI traffic, and HA and DR replication across the IP fabric. This switch is dedicated to back-end connectivity only and can be configured quickly with reference configuration files (RCFs), downloadable from the NetApp Support site. The physical connectivity from the MetroCluster IP nodes to the intercluster switch is shown in Figure 7:
Figure 7 MetroCluster IP Physical Connectivity
In the validation setup, all the links shown in Figure 7 including the cross-site links are 40GbE links. If the workload does not require very high throughputs, 10GbE can be utilized for connecting the two sites.
This section provides a detailed procedure for configuring the Cisco ACI fabric for use in a FlexPod Datacenter with Cisco ACI Multi-Pod and NetApp MetroCluster IP solution environment. To deploy an ACI Multi-Pod network, customers may already have a single-site ACI configuration in place, which can easily be extended to incorporate the Inter-Pod Network (IPN) configuration and a second site. This deployment guide describes the greenfield deployment scenario where two new sites are brought up at the same time. When setting up Cisco ACI Multi-Pod for these two new datacenters, the configuration steps are performed in the following order:
1. Set up the base Cisco ACI configuration for the first site, including leaf and spine switch discovery.
2. IPN device configurations and connectivity in both 10GbE site and 40GbE site.
3. Spine switch configuration for IPN connectivity on 10GbE site (configured first).
4. Multi-Pod configuration to connect the two sites together.
5. Spine and Leaf switch discovery on 40GbE site (configured second).
6. Setting up Spine switch configuration on 40GbE site to complete the Multi-Pod setup.
7. Setting up network, compute and storage configuration on both sites.
While not detailed in this deployment guide, if you are incorporating the Multi-Pod configuration into an existing datacenter with a pre-configured ACI fabric, follow these modified steps:
1. IPN device configurations and connectivity in both sites.
2. Spine switch configuration for IPN connectivity on first site (10GbE site in this deployment).
3. Multi-Pod configuration to connect the two sites together.
4. Spine and Leaf switch discovery on second site (40GbE site in this deployment).
5. Setting up Spine switch configuration on second site to complete the Multi-Pod setup.
6. Setting up network, compute and storage configuration on the new site.
This CVD describes the configuration of both datacenters from scratch. The single-site configuration and policies used in this CVD are based on the following FlexPod deployment guide: https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flexpod_esxi65u1_n9k_aci.html.
Cisco APIC design considerations are covered in the FlexPod datacenter with Cisco ACI Multi-Pod and NetApp MetroCluster Design guide:
In the current release of Cisco ACI, 3.2(2l), the Pod IDs assigned in the FlexPod configuration cannot exceed a value of 9 due to the following Cisco defect: CSCvk32591. This defect has been resolved and will be incorporated in an upcoming ACI release.
In the following steps, the configuration information from a single (primary) APIC deployed in the first site is covered. Cisco recommends a cluster of at least three APICs to control a single-site ACI fabric; also follow the recommendations outlined above for the Multi-Pod setup.
It is possible to perform the initial ACI configuration using a single APIC and to expand the APIC cluster at a later time. The following document covers APIC cluster management: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/3-x/getting_started/b_APIC_Getting_Started_Guide_Rel_3_x/b_APIC_Getting_Started_Guide_Rel_3_x_chapter_0101.html
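Whether the fabric is controlled by a single APIC or a full cluster, the controller cluster state can be spot-checked from the CLI before continuing. The command below is a minimal sketch, assuming SSH access to an APIC with admin credentials; it summarizes the appliance vector, including each controller's ID, health, and commissioned state.
acidiag avread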
To verify the device setup using Cisco APIC, complete the following steps:
1. Log into the APIC GUI using a web browser by browsing to the out-of-band IP address configured for the APIC. Log in with the admin user ID and password.
In this validation, Google Chrome was used as the web browser. It might take a few minutes after the initial setup before the APIC GUI becomes available.
2. Take the appropriate action to close any warning or information screens.
This section details the steps for adding the two Nexus 93180LC-EX leaf switches to the fabric. This procedure assumes a greenfield deployment where new leaf switches are being added to the fabric. If the fabric has already been configured for the first site, skip this section and move on to the Multi-Pod configuration sections.
These switches are automatically discovered in the ACI Fabric and available in the APIC for assigning a node name and node ID. To add Nexus 93180LC-EX leaf switches to the Fabric, complete the following steps:
1. At the top in the APIC home page, select the Fabric tab and make sure Inventory under Fabric is selected.
2. In the left pane, select Fabric Membership.
3. The two Nexus 93180 leaf switches will be listed on the Fabric Membership page with Node ID 0.
It is possible that only a single leaf device is shown in the topology at first. As the devices are added to the topology, device discovery continues until all devices are discovered.
4. Connect to the two Nexus 93180 leaf switches using their serial consoles and log in as admin with no password (press Enter). Use show inventory to obtain each leaf's serial number.
5. Match the serial numbers from the leaf listing to determine the A and B switches under Fabric Membership.
6. In the APIC GUI, under Fabric Membership, double-click the A leaf in the list. Enter a Node ID and a Node Name for the Leaf switch and click Update.
Cisco APIC assigns the discovered devices to Pod ID 1, the default Pod ID defined during APIC setup. In this CVD, the first site was assigned the default Pod ID of 1. The Pod ID can align to a site ID, rack ID, or building ID depending on customer requirements, and Pod ID 1 does not necessarily indicate a primary datacenter location. In this CVD, the datacenter with Pod ID 11 is considered the primary site because of its newer UCS 6332 fabric interconnects; however, since both datacenters are active at the same time, the primary and secondary datacenter nomenclature is not completely applicable.
7. Repeat step 6 for the B leaf switch as well as any additional leaf switches that are discovered.
8. Click Topology in the left pane. The discovered ACI fabric topology will appear. It may take a few minutes for the Nexus 93180 leaf switches to appear, and you might need to click the refresh button for the complete topology to appear.
The topology shown in the screenshot above is provided for reference only. It outlines a fabric containing 6 leaf switches, 2 spine switches, and 3 APICs connected within a single Pod. Customer topologies will vary depending on both the number and type of devices. Follow the APIC recommendation guidelines discussed above to determine the number and location of APICs in a production environment.
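The GUI-based discovery can also be cross-checked from the APIC CLI. The command below is a sketch, assuming SSH access to any APIC; it lists every registered fabric node with its serial number, node ID, node name, pod ID, TEP address, and state, which makes it easy to confirm that each serial number was mapped to the intended node ID.
acidiag fnvread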
This section details the steps for the initial setup of the Cisco ACI Fabric, where:
· The software release is updated (if required)
· NTP and DNS servers are set up
· BGP route reflectors are configured
· Fabric-wide subnet enforcement is enabled
· The QoS preservation setting is updated
To verify and upgrade the Cisco ACI software, complete the following steps:
1. In the APIC GUI, at the top select Admin > Firmware.
2. This document was validated with ACI software release 3.2(2l). Select Fabric Node Firmware in the left pane under Firmware Management. All switches should show the same firmware release, which should be at minimum n9000-13.2(2l). The switch software version should also correlate with the APIC version.
3. Click Admin > Firmware > Controller Firmware. If the APICs are not all at the same release level (a minimum of 3.2(2l)) or the switches are not at n9000-13.2(2l), follow the Cisco APIC Management, Installation, Upgrade, and Downgrade Guide to upgrade both the APICs and the switches. The versions can also be cross-checked from the CLI, as shown below.
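For a quick cross-check of the running releases without navigating the GUI, the versions can also be listed from the APIC command line. This is a minimal sketch and assumes SSH access to any APIC with admin credentials; the command reports the software version of the controllers and of each registered switch.
show version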
This procedure allows customers to verify, and if required add, an NTP server for synchronizing the fabric time. To verify the time zone and NTP server setup, complete the following steps:
1. To verify a previously added NTP setup in the fabric, select and expand Fabric > Fabric Policies.
2. From the left pane, select Policies > Pod > Date and Time.
3. Select default. In the Datetime Format - default pane, verify the correct Time Zone is selected and that Offset State is enabled. Adjust as necessary and click Submit and Submit Changes in the resulting pop-up window.
4. On the left, expand Policy default. Verify that at least one NTP Server is listed.
5. If desired, select enabled for Server State to enable the ACI fabric switches as NTP servers. Click Submit.
6. If an NTP server had not been previously defined, on the right pane use the + sign next to NTP servers to add NTP servers accessible on the out of band management subnet. Enter an IP address and select the default (Out-of-Band) Management EPG. Click Submit to add the NTP server. Repeat this process to add all NTP servers.
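Once the NTP servers are defined, time synchronization can be spot-checked from the CLI of any fabric switch. The following commands are a sketch, assuming SSH access to a leaf switch through its out-of-band address; the first lists the configured NTP peers and their reachability, and the second shows the local clock.
show ntp peer-status
show clock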
To verify and add DNS server in the ACI fabric, complete the following steps:
1. Select and expand Fabric > Fabric Policies
2. In the left pane, expand Policies > Global > DNS Profiles and select default.
3. Verify the DNS Providers and DNS Domains.
4. If DNS server has not been previously defined, in the Management EPG drop-down, select the default (Out-of-Band) Management EPG. Use the + sign to the right of DNS Providers followed by the + sign to the right of DNS Domains to add DNS servers and the DNS domain name.
The DNS servers should be reachable from the out of band management subnet.
5. Click Submit to complete the DNS configuration.
In this ACI deployment, Enforce Subnet Check for IP & MAC Learning is enabled. To verify this setting, complete the following steps:
1. Select and expand System > System Settings > Fabric Wide Setting.
2. Make sure that Enforce Subnet Check is selected. If Enforce Subnet Check is not selected, select it and click Submit.
In this FlexPod with ACI deployment, QoS preservation should be turned off. To verify and set this setting, complete the following steps:
1. Select and expand Fabric > Access Policies > Policies > Global > QOS Class.
2. Make sure that Preserve QoS is not selected.
The “Preserve QoS” setting is disabled in FlexPod because NetApp AFF/FAS storage controllers set a CoS value of 4 on all tagged VLAN interfaces. Since FlexPod normally treats all UCS traffic as Best Effort with a CoS value of 0, disabling this setting clears the CoS value of 4 for proper handling of storage traffic within UCS.
In the previous steps, both the leaf and spine switches in the first site (Pod 1) have already been added to the ACI environment. The steps below walk you through setting up:
1. IPN device configuration at both sites.
2. Spine configuration at the first site (10GbE site).
3. Multi-Pod configuration.
4. Spine and leaf switch discovery at the second site (40GbE site).
5. Spine configuration at the second site to complete the Multi-Pod setup.
Refer to Figure 8 for details about various device links and associated IP addresses and subnets used in the configurations below. The second site being added is assigned POD ID 11.
Figure 8 IPN and Spine Connectivity
This section details the relevant configurations of the four IPN devices. The configuration enables multicast, OSPF, jumbo MTU, and DHCP relay on all the IPN devices. The IPN devices are connected using two dedicated 10Gbps links. The configuration below uses a single IPN device as the RP for PIM bidir traffic. In a production network, deploying a phantom RP for high availability is recommended: https://www.cisco.com/c/dam/en/us/products/collateral/ios-nx-os-software/multicast-enterprise/prod_white_paper0900aecd80310db2.pdf.
Pod 1 7004-1
feature ospf
feature pim
feature lacp
feature dhcp
feature lldp
!
! Define the multicast groups and associated RP addresses
!
ip pim rp-address 10.241.255.1 group-list 225.0.0.0/8 bidir
ip pim rp-address 10.241.255.1 group-list 239.0.0.0/8 bidir
ip pim ssm range 232.0.0.0/8
!
service dhcp
ip dhcp relay
!
interface port-channel1
description To Pod 1 7004-2
mtu 9216
ip address 10.242.252.1/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
!
interface Ethernet3/21
mtu 9216
channel-group 1 mode active
no shutdown
interface Ethernet3/22
mtu 9216
channel-group 1 mode active
no shutdown
!
interface Ethernet3/13
description to Pod1 Spine-1 interface E4/29
mtu 9216
no shutdown
!
interface Ethernet3/13.4
mtu 9216
encapsulation dot1q 4
ip address 10.242.241.2/30
ip ospf network point-to-point
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
!
! DHCP relay for forwarding DHCP queries to APIC’s in-band IP address
!
ip dhcp relay address 10.12.0.1
ip dhcp relay address [APIC 2’s IP]
ip dhcp relay address [APIC 3’s IP]
no shutdown
!
interface Ethernet3/14
description to Pod1 Spine-2 interface E4/29
mtu 9216
no shutdown
!
interface Ethernet3/14.4
mtu 9216
encapsulation dot1q 4
ip address 10.242.243.2/30
ip ospf network point-to-point
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
ip dhcp relay address 10.12.0.1
ip dhcp relay address [APIC 2’s IP]
ip dhcp relay address [APIC 3’s IP]
no shutdown
!
interface Ethernet3/24
description Link to Pod 11 7004-1 E3/12
mtu 9216
ip address 10.241.253.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
no shutdown
!
interface loopback0
description Loopback to be used as Router-ID
ip address 10.242.255.31/32
ip router ospf 10 area 0.0.0.0
!
interface loopback1
description PIM RP Address
ip address 10.241.255.1/30
ip ospf network point-to-point
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
!
router ospf 10
router-id 10.242.255.31
log-adjacency-changes
!
Pod 1 7004 – 2
feature ospf
feature pim
feature lacp
feature dhcp
feature lldp
!
! Define the multicast groups and associated RP addresses
!
ip pim rp-address 10.241.255.1 group-list 225.0.0.0/8 bidir
ip pim rp-address 10.241.255.1 group-list 239.0.0.0/8 bidir
ip pim ssm range 232.0.0.0/8
!
service dhcp
ip dhcp relay
!
interface port-channel1
description To Pod 1 7004-1
mtu 9216
ip address 10.242.252.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
!
interface Ethernet3/21
mtu 9216
channel-group 1 mode active
no shutdown
interface Ethernet3/22
mtu 9216
channel-group 1 mode active
no shutdown
!
interface Ethernet3/13
description to Pod1 Spine-1 interface E4/30
mtu 9216
no shutdown
!
interface Ethernet3/13.4
mtu 9216
encapsulation dot1q 4
ip address 10.242.242.2/30
ip ospf network point-to-point
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
!
! DHCP relay to forward DHCP queries to APIC’s in-band IP address
!
ip dhcp relay address 10.12.0.1
ip dhcp relay address [APIC 2’s IP]
ip dhcp relay address [APIC 3’s IP]
no shutdown
!
interface Ethernet3/14
description to Pod1 Spine-2 interface E4/30
mtu 9216
no shutdown
!
interface Ethernet3/14.4
mtu 9216
encapsulation dot1q 4
ip address 10.242.244.2/30
ip ospf network point-to-point
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
ip dhcp relay address 10.12.0.1
ip dhcp relay address [APIC 2’s IP]
ip dhcp relay address [APIC 3’s IP]
no shutdown
!
interface Ethernet3/24
description Link to Pod 11 7004-2 E3/12
mtu 9216
ip address 10.241.254.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
no shutdown
!
interface loopback0
description Loopback to be used as Router-ID
ip address 10.242.255.32/32
ip router ospf 10 area 0.0.0.0
!
router ospf 10
router-id 10.242.255.32
log-adjacency-changes
!
Pod 11 7004-1
feature ospf
feature pim
feature lacp
feature dhcp
feature lldp
!
! Define the multicast groups and associated RP addresses
!
ip pim rp-address 10.241.255.1 group-list 225.0.0.0/8 bidir
ip pim rp-address 10.241.255.1 group-list 239.0.0.0/8 bidir
ip pim ssm range 232.0.0.0/8
!
service dhcp
ip dhcp relay
!
interface port-channel1
description To Pod 11 7004-2
mtu 9216
ip address 10.241.252.1/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
!
interface Ethernet3/9
mtu 9216
channel-group 1 mode active
no shutdown
interface Ethernet3/10
mtu 9216
channel-group 1 mode active
no shutdown
!
interface Ethernet3/1
description to Pod11 Spine-1 interface E4/29
mtu 9216
no shutdown
!
interface Ethernet3/1.4
mtu 9216
encapsulation dot1q 4
ip address 10.241.241.2/30
ip ospf network point-to-point
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
!
! DHCP relay for forwarding DHCP queries to APIC’s in-band IP address
!
ip dhcp relay address 10.12.0.1
ip dhcp relay address [APIC 2’s IP]
ip dhcp relay address [APIC 3’s IP]
no shutdown
!
interface Ethernet3/2
description Pod 11 Spine-2 interface E4/29
mtu 9216
no shutdown
!
interface Ethernet3/2.4
mtu 9216
encapsulation dot1q 4
ip address 10.241.243.2/30
ip ospf network point-to-point
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
ip dhcp relay address 10.12.0.1
ip dhcp relay address [APIC 2’s IP]
ip dhcp relay address [APIC 3’s IP]
no shutdown
!
interface Ethernet3/12
description Link to Pod 1 7004-1
mtu 9216
ip address 10.241.253.1/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
no shutdown
!
interface loopback0
description Loopback to be used as Router-ID
ip address 10.241.255.31/32
ip router ospf 10 area 0.0.0.0
router ospf 10
router-id 10.241.255.31
log-adjacency-changes
!
Pod 11 7004-2
feature ospf
feature pim
feature lacp
feature dhcp
feature lldp
!
! Define the multicast groups and associated RP addresses
!
ip pim rp-address 10.241.255.1 group-list 225.0.0.0/8 bidir
ip pim rp-address 10.241.255.1 group-list 239.0.0.0/8 bidir
ip pim ssm range 232.0.0.0/8
!
service dhcp
ip dhcp relay
!
interface port-channel1
description To POD 11 7004-1
mtu 9216
ip address 10.241.252.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
!
interface Ethernet3/9
mtu 9216
channel-group 1 mode active
no shutdown
!
interface Ethernet3/10
mtu 9216
channel-group 1 mode active
no shutdown
!
interface Ethernet3/1
description Pod 11 Spine-1 E4/30
mtu 9216
no shutdown
interface Ethernet3/1.4
mtu 9216
encapsulation dot1q 4
ip address 10.241.242.2/30
ip ospf network point-to-point
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
ip dhcp relay address 10.12.0.1
ip dhcp relay address [APIC 2’s IP]
ip dhcp relay address [APIC 3’s IP]
no shutdown
!
interface Ethernet3/2
description To Pod 11 Spine-2 E4/30
mtu 9216
no shutdown
!
interface Ethernet3/2.4
mtu 9216
encapsulation dot1q 4
ip address 10.241.244.2/30
ip ospf network point-to-point
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
ip dhcp relay address 10.12.0.1
ip dhcp relay address [APIC 2’s IP]
ip dhcp relay address [APIC 3’s IP]
no shutdown
!
interface Ethernet3/12
description Link to Pod 1 7004-2
mtu 9216
ip address 10.241.254.1/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
ip pim sparse-mode
no shutdown
!
interface loopback0
description Loopback to be used as Router-ID
ip address 10.241.255.32/32
ip router ospf 10 area 0.0.0.0
!
router ospf 10
router-id 10.241.255.32
log-adjacency-changes
!
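With all four IPN devices configured, the underlay state can be verified from any of the Nexus 7004s before moving on to the spine configuration. The following NX-OS show commands are a minimal verification sketch: they list the OSPF adjacencies, the routes learned through OSPF process 10, the PIM neighbors and the bidir RP mapping, the multicast routing entries, and the DHCP relay addresses configured on the spine-facing sub-interfaces.
show ip ospf neighbors
show ip route ospf-10
show ip pim neighbor
show ip pim rp
show ip mroute
show ip dhcp relay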
This section describes the relevant configurations of the Spine switches to enable Multi-Pod configuration. As previously stated, both sites are configured with different Tunnel Endpoint (TEP) pools, 10.11.0.0/16 (Pod11) and 10.12.0.0/16 (Pod1).
1. Log into the APIC GUI and follow Fabric >Access Policies.
2. In the left pane, expand Pools and right-click VLAN and select Create VLAN Pool.
3. Provide a Name for the VLAN Pool (MultiPod-vlans), select static Allocation and click + to add a VLAN range.
4. In the VLAN range, enter a single VLAN 4, select Static Allocation and click OK.
5. Click Submit to finish creating VLAN pool
1. Log into the APIC GUI and follow Fabric > Access Policies.
2. In the left pane, expand Global Policies, right-click Attachable Access Entity Profile and select Create Attachable Access Entity Profile.
3. Provide a Name for the AEP (MultiPod-aep) and click Next.
4. Click Finish to complete the AEP creation without adding any interfaces.
1. Log into the APIC GUI and follow Fabric > Access Policies.
2. In the left pane, expand Physical and External Domains, right-click External Routed Domains and select Create Layer 3 Domain.
3. Provide a Name for the Layer 3 domain (MultiPod-L3).
4. From the Associated Attachable Entity Profile drop-down list, select the recently created AEP (MultiPod-aep).
5. From the VLAN Pool drop-down list, select the recently created VLAN Pool (MultiPod-vlans).
6. Click Submit to finish creating the Layer 3 domain.
1. Log into the APIC GUI and follow Fabric > Access Policies.
2. In the left pane, expand Interface Policies > Policies > Link Level.
3. Right-click Link Level and select Create Link Level Policy.
4. Provide a name for the Link Level Policy (MultiPod-inherit) and make sure Auto Negotiation is set to on and Speed is set to inherit.
5. Click Submit to create the policy.
1. Log into the APIC GUI and follow Fabric > Access Policies.
2. In the left pane, expand Interface Policies > Policy Groups.
3. Right-click Spine Policy Group and select Create Spine Access Port Policy Group.
4. Provide a Name for the Spine Access Port Policy Group (MultiPod-PolGrp).
5. From the Link Level Policy drop-down list, select recently created policy (MultiPod-Inherit).
6. From the CDP Policy drop-down list, select the previously created policy to enable CDP (CDP-Enabled).
7. From the Attached Entity Profile drop-down list, select the recently created AEP (MultiPod-aep).
8. Click Submit.
1. Log into the APIC GUI and follow Fabric > Access Policies.
2. In the left pane, expand Interface Policies > Profiles.
3. Right-click Spine Profiles and select Create Spine Interface Profile.
4. Provide a Name for the Spine Interface Profile (MultiPod-Spine-IntProf).
5. Click + to add Interface Selectors.
6. Provide a Name for the Spine Access Port Selector (Spine-Intf).
7. For Interface IDs, add interfaces that connect to the two IPN devices (4/29-4/30)
8. From the Interface Policy Group drop-down list, select the recently created Policy Group (MultiPod-PolGrp).
9. Click OK to finish creating Access Port Selector.
10. Click Submit to finish creating the Spine Interface Profile.
1. Log into the APIC GUI and follow Fabric > Access Policies.
2. In the left pane, expand Switch Policies > Profiles.
3. Right-click Spine Profiles and select Create Spine Profile.
4. Provide a Name for the Spine Profile (Spine-Prof).
5. Click + to add Spine Selectors.
6. Provide a Name for the Spine Selector (Pod1-Spines) and from the drop-down list under Blocks, select the spine switch IDs (211-212).
7. Click Update and then Next
8. For the Interface Selector Profiles, select recently created interface selector profile (MultiPod-Spine-intProf).
9. Click Finish to complete creating Spine Profile.
When the original FlexPod with ACI setup was completed, a Pod (1) and its TEP pool (10.12.0.0/16) were created as part of the setup. In this configuration step, the Pod ID and TEP addresses are defined for the second site, and the Multi-Pod configuration is completed on the APIC. The Pod ID used for the second site is 11 and the TEP pool used is 10.11.0.0/16.
1. Log into the APIC GUI and follow Fabric > Inventory.
2. In the left pane, right-click Pod Fabric Setup Policy and select Setup Pods.
3. Enter the Pod ID and TEP Pool for 40GbE Site.
4. Click Submit.
1. Log into the APIC GUI and follow Fabric > Inventory.
2. In the left pane, right-click Pod Fabric Setup Policy and select Create Multi-Pod.
3. Provide a Community string (extended:as2-nn4:5:16).
4. Select Enable Atomic Counters for Multi-pod Mode.
5. Select Peering Type as Full Mesh (since there are only two sites).
6. Click + to add a Pod Connection Profile.
7. Add Pod 1 and provide Dataplane TEP or ETEP shared by Spines at first site as outlined in TEP Interfaces and click Update.
8. Add Pod 11 and provide Dataplane TEP or ETEP shared by Spines at second site as outlined in TEP Interfaces and click Update.
9. Click + to add Fabric External Routing Profile.
10. Provide a name (FabExtRoutingProf) and define the subnets used for the point-to-point connections between the Spines and the IPN devices. In this guide, all the point-to-point connections are within the following two subnets: 10.241.0.0/16 and 10.242.0.0/16. Click Update.
11. Click Submit to complete the Multi-Pod configuration.
1. Log into the APIC GUI and follow Fabric > Inventory.
2. In the left pane, right-click Pod Fabric Setup Policy and select Create Routed Outside for Multipod.
3. Provide the OSPF Area ID as configured on the IPN devices (0.0.0.0).
4. Select the OSPF Area Type (Regular area).
5. Click Next.
6. Click + to add first Spine.
7. Select the first Spine (Pod-1/Node-211) and add Router ID (Loopback) as shown in Figure 8 and click Update.
8. Click + to add second Spine.
9. Select the second Spine (Pod-1/Node-212) and add Router ID (Loopback) as shown in Figure 8 and click Update.
10. From the drop-down list for OSPF Profile For Sub-Interfaces, select Create OSPF Interface Policy.
11. Provide a Name for the Policy (P2P).
12. Select Network Type Point-to-Point.
13. Check Advertise Subnet and MTU ignore.
14. Click Submit.
15. Click + to add Routed Sub-Interfaces.
16. Add all four interfaces (Path) and their respective IP addresses connecting the first site's spine switches to the IPN devices (Figure 8).
17. Click Finish.
18. Browse to Tenants > Infra.
19. In the left pane, expand the Networking > External Routed Networks and click multipod.
20. From the External Routed Domain drop-down list on the main page, select MultiPod-L3.
21. Click Submit.
The 10GbE site spine configuration is now complete. Log into the IPN devices to verify OSPF routing and the neighbor relationships, as shown in the example below.
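For example, on each Pod 1 Nexus 7004, the sub-interfaces facing the Pod 1 spines should now show OSPF adjacencies in the FULL state, and the Pod 1 TEP pool (10.12.0.0/16 in this guide) should be learned through OSPF. A minimal sketch of the NX-OS checks:
show ip ospf neighbors
show ip route 10.12.0.0/16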
With the Multi-Pod network configured on first site, the spine switches on the second site should now be visible under the Fabric > Inventory > Fabric Membership.
1. Log into the APIC GUI and follow Fabric > Inventory > Fabric Membership.
2. In the main window, double click and update Pod ID (11), Node ID (1111 and 1112) and Node Names for both the new Spines.
1. After adding the Spines to the Fabric, the next step is to configure the Spines on the second site to correctly communicate with the IPN devices at the same site. The leaf switches from this site will not be visible until this step is complete.
2. Log into the APIC GUI and follow Fabric > Access Policies.
3. From the left pane, expand Switch Policies > Profiles > Spine Profiles.
4. Select the previously created Spine Profile (Spine-Prof).
5. In the main window, click + to add additional (second site) spines.
6. Provide a Name and select the Node IDs for the two spines (1111-1112).
7. Click Update.
1. Log into the APIC GUI and follow Fabric > Inventory.
2. In the left pane, right-click Pod Fabric Setup Policy and select Create Routed Outside for A Pod.
3. Click + to add first Spine.
4. Select the first Spine (Pod-11/Node-1111) and add Router ID (Loopback) as shown in Figure 8 and click Update.
5. Click + to add second Spine.
6. Select the second Spine (Pod-11/Node-1112) and add Router ID (Loopback) as shown in Figure 8 and click Update.
7. Click + to add Routed Sub-Interfaces.
8. Add all four interfaces (Path) and their respective IP addresses connecting the Spines to IPN devices (Figure 8).
9. Click Submit.
The second site spine configuration is now complete. Log into the IPN devices to verify OSPF routing and neighbor relationship.
With the Multi-Pod network configured on the second Site, all the Leaf switches on the site should now be visible under the Fabric > Inventory > Fabric Membership. In the main window, double-click and update Pod ID (11), Node ID, and Node Names for the leaf devices.
With the discovery and addition of all the ACI devices, the Multi-Pod portion of the configuration is now complete.
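At this point, every switch in both pods should be registered and active in the fabric. As a sanity check (a sketch, assuming access to the APIC bash shell), the fabric node objects can be listed and filtered to confirm that the Pod 11 spines and leaves appear with the expected node IDs:
moquery -c fabricNode | grep pod-11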
This section details the additional steps required during the initial setup of the Cisco ACI Fabric, where
· Optional Software upgrade of the 40GbE Site devices.
· Out of band management IPs are assigned to the new devices.
· BGP route reflectors are configured.
Repeat the software upgrade process outlined earlier in this document to upgrade the leaf and spine devices in the recently added site.
To set up out-of-band management IP addresses, complete the following steps:
1. To add out-of-band management interfaces for all the switches in the ACI Fabric, select Tenants > mgmt.
2. Expand Tenant mgmt on the left. Right-click Node Management Addresses and select Create Static Node Management Addresses.
3. Enter the node number (or range) for the new switches as defined in the previous steps.
4. Select the checkbox for Out-of-Band Addresses.
5. Select default for Out-of-Band Management EPG.
6. Considering that the IPs will be applied in a consecutive range, enter a starting IP address and netmask in the Out-Of-Band IPV4 Address field.
7. Enter the out of band management gateway address in the Gateway field.
8. Click SUBMIT, then click YES.
9. On the left, expand Node Management Addresses and select Static Node Management Addresses. Verify the mapping of IPs to switching nodes.
10. On successful completion of this step, direct out-of-band access to the switches is available using SSH, as shown in the example below.
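A quick way to confirm that the new addresses are active is to open an SSH session to one of the newly added switches; the placeholder below stands for the out-of-band address assigned in the previous steps (a sketch, not specific to this validation):
ssh admin@<Pod11-switch-oob-ip>
show version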
In this ACI deployment, all the spine switches are set up as BGP route-reflectors to distribute the leaf routes throughout the fabric. To verify and set the BGP Route Reflector, complete the following steps:
1. Select and expand System > System Settings > BGP Route Reflector.
2. Verify that a unique Autonomous System Number has been selected for this ACI fabric. If not already defined, use the + sign on the right to add the spine nodes to the list of Route Reflector Nodes. Click Submit to complete configuring the BGP Route Reflector.
In the screen shot above, all four spines at both the datacenters have been added as BGP route reflectors.
3. To verify or enable the BGP Route Reflector, select and expand Fabric > Fabric Policies > Pods > Policy Groups. Under Policy Groups make sure a policy group has been created and selected. The BGP Route Reflector Policy field should show “default.”
4. If a Policy Group has not been created, on the left, right-click Policy Groups under Pods and select Create Pod Policy Group. In the Create Pod Policy Group window, name the Policy Group ppg-Pod1. Select the default BGP Route Reflector Policy.
5. Click Submit to complete creating the Policy Group.
6. On the left expand Profiles under Pods and select Pod Profile default > default.
7. Verify that the ppg-Pod1 or the Fabric Policy Group identified above is selected. If it is not selected, use the drop-down list to select it and click Submit. The route reflector sessions can then be checked from a leaf CLI, as shown below.
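The fabric BGP process runs in the infra VRF (overlay-1), so each leaf should show established BGP sessions to the spine route reflectors. The commands below are a verification sketch based on common ACI troubleshooting practice, run from any leaf CLI:
show bgp sessions vrf overlay-1
show bgp vpnv4 unicast summary vrf overlay-1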
This section details the steps to create various access policies that define parameters for CDP, LLDP, LACP, and so on. These policies are used during vPC and VMM domain creation. In an existing fabric, these policies may already exist. To define the fabric access policies, complete the following steps:
1. Log into the APIC GUI.
2. In the APIC UI, select and expand Fabric > Access Policies > Policies > Interfaces.
This procedure will create link level policies for setting up the 1Gbps, 10Gbps, and 40Gbps link speeds. To create the link level policies, complete the following steps:
1. In the left pane, right-click Link Level and select Create Link Level Policy.
2. Name the policy as “1Gbps-Auto” and select the 1Gbps Speed.
3. Click Submit to complete creating the policy.
4. In the left pane, right-click Link Level and select Create Link Level Policy.
5. Name the policy “10Gbps-Auto” and select the 10Gbps Speed.
6. Click Submit to complete creating the policy.
7. In the left pane, right-click Link Level and select Create Link Level Policy.
8. Name the policy “40Gbps-Auto” and select the 40Gbps Speed.
9. Click Submit to complete creating the policy.
This procedure creates policies to enable or disable CDP on a link. To create a CDP policy, complete the following steps:
1. In the left pane, right-click CDP interface and select Create CDP Interface Policy.
2. Name the policy as “CDP-Enabled” and enable the Admin State.
3. Click Submit to complete creating the policy.
4. In the left pane, right-click the CDP Interface and select Create CDP Interface Policy.
5. Name the policy “CDP-Disabled” and disable the Admin State.
6. Click Submit to complete creating the policy.
This procedure will create policies to enable or disable LLDP on a link. To create an LLDP Interface policy, complete the following steps:
1. In the left pane, right-click LLDP interface and select Create LLDP interface Policy.
2. Name the policy as “LLDP-Enabled” and enable both Transmit State and Receive State.
3. Click Submit to complete creating the policy.
4. In the left, right-click the LLDP interface and select Create LLDP Interface Policy.
5. Name the policy as “LLDP-Disabled” and disable both the Transmit State and Receive State.
6. Click Submit to complete creating the policy.
This procedure will create policies to set LACP active mode configuration and the MAC-Pinning mode configuration. To create the Port Channel policy, complete the following steps:
1. In the left pane, right-click Port Channel and select Create Port Channel Policy.
2. Name the policy as “LACP-Active” and select LACP Active for the Mode. Do not change any of the other values.
3. Click Submit to complete creating the policy.
4. In the left pane, right-click Port Channel and select Create Port Channel Policy.
5. Name the policy as “MAC-Pinning” and select MAC Pinning-Physical-NIC-load for the Mode. Do not change any of the other values.
6. Click Submit to complete creating the policy.
This procedure will create policies to enable or disable BPDU filter and guard. To create a BPDU filter/Guard policy, complete the following steps:
1. In the left pane, right-click Spanning Tree Interface and select Create Spanning Tree Interface Policy.
2. Name the policy as “BPDU-FG-Enabled” and select both the BPDU filter enabled and BPDU Guard enabled Interface Controls.
3. Click Submit to complete creating the policy.
4. In the left pane, right-click Spanning Tree Interface and select Create Spanning Tree Interface Policy.
5. Name the policy as “BPDU-FG-Disabled” and make sure both the BPDU filter enabled and BPDU Guard enabled Interface Controls are cleared.
6. Click Submit to complete creating the policy.
To create policies to enable port local scope for all the VLANs, complete the following steps:
1. In the left pane, right-click the L2 Interface and select Create L2 Interface Policy.
2. Name the policy as VLAN-Scope-Port-Local and make sure Port Local scope is selected for VLAN Scope. Do not change any of the other values.
3. Click Submit to complete creating the policy.
4. Repeat above steps to create a VLAN-Scope-Global Policy and make sure Global scope is selected for VLAN Scope. Do not change any of the other values. See below.
When using the unified configuration wizard (“Creating Interface and Switch Profiles and a vCenter Domain Profile Using the GUI”), Cisco APIC applies the firewall policy in the mode you choose: Learning, Enabled, or Disabled. Since this CVD does not cover distributed firewall configuration, a firewall policy is added to disable the distributed firewall. To create this policy, complete the following steps:
1. In the left pane, right-click Firewall and select Create Firewall Policy.
2. Name the policy “Firewall-Disabled” and select Disabled for Mode. Do not change any of the other values.
3. Click Submit to complete creating the policy.
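As an optional spot-check, the interface policies created above can be listed from the APIC CLI using moquery with the standard ACI object classes for each policy type (a quick sketch; output is trimmed to the name attribute):
apic1# moquery -c fabricHIfPol | grep name
apic1# moquery -c cdpIfPol | grep name
apic1# moquery -c lldpIfPol | grep name
apic1# moquery -c lacpLagPol | grep name
apic1# moquery -c stpIfPol | grep name
apic1# moquery -c l2IfPol | grep name
The output should include the policy names created in this section, for example 40Gbps-Auto, CDP-Enabled, LLDP-Disabled, LACP-Active, MAC-Pinning, BPDU-FG-Enabled, and VLAN-Scope-Port-Local.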
This section details the steps to setup VPCs for connectivity to the Cisco UCS and the NetApp Storage.
The single-site FlexPod with ACI CVD included the configuration of In-Band Management switch connectivity through a VPC. In the Multi-Pod design, the In-Band Management network is connected to the existing non-ACI network using a Shared L3 Out connection; therefore, the L2 mapping to an external switch using a VPC is no longer required.
To setup VPCs for connectivity to the UCS Fabric Interconnects at the two sites, complete the following steps:
Figure 9 Cisco UCS Connectivity – POD 11
Table 4 VLANs for Cisco UCS Hosts
Name | VLAN
Native | <2>
IB-Mgmt | <213>
vMotion | <3000>
NFS | <3050>
iSCSI-A | <3010>
iSCSI-B | <3020>
1. In the APIC GUI, select Fabric > Access Policies > Quick Start.
2. In the right pane, select Configure an interface, PC, and VPC.
3. In the configuration window, configure a VPC domain between the 93180 leaf switches by clicking + under VPC Switch Pairs.
4. Enter a VPC Domain ID (21 in this example).
5. From the drop-down list, select 93180 Switch A and 93180 Switch B IDs to select the two leaf switches.
6. Click Save.
7. Click the + under Configured Switch Interfaces.
8. Select the two Nexus 93180 switches under the Switches drop-down list.
9. Click the + to add switch interfaces.
10. Configure various fields by utilizing the policies created earlier as shown in the figure below. In this screenshot, port 1/21 on both leaf switches is connected to UCS Fabric Interconnect A using 40Gbps links.
Ports 25-32 of the 93180LC-EX switch are set up as uplink ports by default. If these ports need to be used as downlink ports in a customer environment, a reconfiguration within APIC is required.
The VLANs specified (not pictured) for the above Domain within the Attached Device are: 213,3010,3020,3050 (from Table 4).
11. Click Save.
12. Click Save again to finish configuring the switch interfaces.
13. Click Submit.
14. From the right pane, select Configure an interface, PC, and VPC.
15. Select the switches configured in the last step under Configured Switch Interfaces.
16. Click the + on the right to add switch interfaces.
17. Configure various fields as shown in the screenshot. In this screenshot, port 1/24 on both leaf switches is connected to UCS Fabric Interconnect B using 40Gbps links. Instead of creating a new domain, the External Bridged Domain created in the last step (UCS) is also used for FI-B as shown below.
Specifying the previously created External Bridged Domain makes re-entering the VLANs unnecessary.
18. Click Save.
19. Click Save again to finish configuring the switch interfaces.
20. Click Submit.
21. Repeat this procedure to configure the UCS domain in POD 1. For a uniform configuration, the same External Bridge Domain (UCS) will be utilized for all the Fabric Interconnects. Use the switch and port information in Figure 10 as a reference.
Figure 10 Cisco UCS Connectivity – POD 1
Complete the following steps to setup VPCs for connectivity to the NetApp AFF storage controllers. The VLANs configured for NetApp are shown in Table 5.
Figure 11 NetApp AFF A700 Connectivity – POD 1
Table 5 VLANs for NetApp Storage
Name | VLAN
IB-MGMT | <213>
InterCluster | <113>
NFS | <3050>
iSCSI-A | <3010>
iSCSI-B | <3020>
1. In the APIC GUI, select Fabric > Access Policies > Quick Start.
2. In the right pane, click Configure an interface, PC, and VPC.
3. Under Configured Switch Interfaces, select the paired Nexus 93180 switches configured in the previous step.
4. Click the + on the right to add switch interfaces.
5. Configure various fields as shown in the screenshot below. In this screenshot, port 1/23 on both leaf switches is connected to Storage Controller 1 using 40Gbps links.
6. Click Save.
7. Click Save again to finish configuring the switch interfaces.
8. Click Submit.
9. From the right pane, select Configure an interface, PC, and VPC.
10. Select the previously configured paired Nexus 93180 switches.
11. Click the + to add switch interfaces.
12. Configure various fields as shown in the screenshot below. In this screenshot, port 1/24 on both leaf switches is connected to Storage Controller 2 using 40Gbps links. Instead of creating a new domain, the Bare Metal Device created in the previous step (NetApp-AFF) is attached to the storage controller 2 as shown below.
13. Click Save.
14. Click Save again to finish configuring the switch interfaces.
15. Click Submit.
16. Repeat this procedure to configure the NetApp AFF storage controller at the second site (POD 1). For a uniform configuration, the same Bare Metal Domain (NetApp-AFF) will be utilized for all the Storage Controllers. Use the switch and port information in Figure 12 as a reference.
Figure 12 NetApp AFF 700 Connectivity – POD 1
Ports 49 and 50 on the Cisco Nexus 93180YC-EX switch are configured as uplink ports by default. These ports need to be converted to downlink ports for NetApp connectivity. Refer to https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/aci-fundamentals/b_ACI-Fundamentals/b_ACI-Fundamentals_chapter_010011.html for more details about port types and port conversion.
This section details the steps to setup in-band management access in the Tenant common. This design will allow all the other tenant EPGs to access the common management segment for Core Services VMs such as AD/DNS.
1. In the APIC GUI, select Tenants > common.
2. In the left pane, expand Tenant common and Networking.
To create VRFs, complete the following steps:
1. Right-click VRFs and select Create VRF.
2. Enter “FPV-Common-IB” as the name of the VRF.
3. Uncheck Create A Bridge Domain.
4. Click Finish.
To create the in-band IB-MGMT bridge domain for Core-Services, complete the following steps:
1. In the left pane, expand Tenant common and Networking.
2. Right-click Bridge Domains and select Create Bridge Domain.
3. Name the Bridge Domain as “FPV-Common-IB”.
4. Select “FPV-Common-IB” from the VRF drop-down list.
5. Select Custom under Forwarding and enable the flooding as shown in the screenshot below.
6. Click Next.
7. Under L3 Configurations, make sure Limit IP Learning to Subnet is selected. Click Next.
8. No changes are needed for Advanced/Troubleshooting. Click Finish.
To create an Application profile, complete the following steps:
1. In the left pane, expand Tenant common and Application Profiles.
2. Right-click Application Profiles and select Create Application Profile.
3. Enter “FPV-Common-IB-Mgmt” as the name of the application profile.
4. Click Submit.
To create the FPV-Common-Services EPG, complete the following steps:
1. Expand the “FPV-Common-IB-Mgmt” Application Profile and right-click Application EPGs.
2. Select Create Application EPG.
3. Enter “FPV-Common-Services” as the name of the EPG.
4. Select “FPV-Common-IB” from the drop-down list for Bridge Domain.
5. Click Finish.
To set Domains, complete the following steps:
1. Expand the newly created EPG and click Domains.
2. Right-click Domains and select Add L2 External Domain Association.
3. Select the “FPV-UCS-L2” as the L2 External Domain Profile. This domain was created when configuring the VPC for Cisco UCS.
4. Click Submit.
5. Right-click Domains and select Add Physical Domain Association.
6. Select the NetApp as the Physical Domain Profile. This domain was created when configuring a VPC for NetApp controllers.
7. Click Submit.
To set Static Ports, complete the following steps:
1. In the left pane, right-click Static Ports.
2. Select Deploy Static EPG on PC, VPC, or Interface.
3. In the next screen, for the Path Type, select Virtual Port Channel and from the Path drop-down list, select the VPC for UCS Fabric Interconnect A configured earlier.
4. Enter the Common-Services VLAN under Port Encap.
5. Change Deployment Immediacy to Immediate.
6. Leave the Mode set to Trunk.
7. Click Submit.
8. In the left pane, right-click Static Ports.
9. Select Deploy Static EPG on PC, VPC, or Interface.
10. In the next screen, for the Path Type, select Virtual Port Channel and from the Path drop-down list, select the VPC for UCS Fabric Interconnect B configured earlier.
11. Enter the Common-Services VLAN under Port Encap.
12. Change Deployment Immediacy to Immediate.
13. Leave the Mode set to Trunk.
14. Click Submit.
15. In the left pane, right-click Static Ports.
16. Select Deploy Static EPG on PC, VPC, or Interface.
17. In the next screen, for the Path Type, select Virtual Port Channel and from the Path drop-down list, select the first VPC for the AFF A700 configured earlier.
18. Enter the Common-Services VLAN under Port Encap.
19. Change Deployment Immediacy to Immediate.
20. Leave the Mode set to Trunk.
21. Click Submit.
22. In the left pane, right-click Static Ports.
23. Select Deploy Static EPG on PC, VPC, or Interface.
24. In the next screen, for the Path Type, select Virtual Port Channel and from the Path drop-down list, select the second VPC for the AFF A700 configured earlier.
25. Enter the Common-Services VLAN under Port Encap.
26. Change Deployment Immediacy to Immediate.
27. Leave the Mode set to Trunk.
28. Click Submit.
A subnet gateway for this Common Services EPG provides Layer 3 connectivity to Tenant subnets. To create an EPG Subnet, complete the following steps:
1. In the left pane, right-click Subnets and select Create EPG Subnet.
2. In CIDR notation, enter an IP address and subnet mask to serve as the gateway within the ACI fabric for routing between the Common Services subnet and Tenant subnets. In this lab validation, 10.2.156.254/24 will be used for the EPG subnet gateway. Set the Scope of the subnet to Shared between VRFs, and Advertised Externally.
3. Click Submit to create the Subnet.
To create Provided Contract, complete the following steps:
1. In the left pane, within the FPV Common-Services EPG, right-click Contracts and select Add Provided Contract.
2. In the Add Provided Contract window, select Create Contract from the drop-down list.
3. Name the Contract FPV-Allow-Common-Services.
4. Set the Scope to Global.
5. Click + to add a Subject to the Contract.
The following steps create a contract to allow all the traffic between various tenants and the common management segment. You are encouraged to limit the traffic by setting restrictive filters.
6. Name the subject “Allow-All-Traffic”.
7. Click + under Filter Chain to add a Filter.
8. From the drop-down Name list, select common/default.
9. In the Create Contract Subject window, click Update to add the Filter Chain to the Contract Subject.
10. Click OK to add the Contract Subject.
The Contract Subject Filter Chain can be modified later.
11. Click Submit to finish creating the Contract.
12. Click Submit to finish adding a Provided Contract.
Security filters defined in the “common” tenant can be used by contracts in all the tenants of the ACI fabric. To create Security Filters for NFSv3 with NetApp Storage and for iSCSI, complete the following steps.
This section can also be used to set up other filters necessary in any environment. Customers can choose to define these filters within the appropriate tenants if they do not want to make these filters available system wide.
1. In the APIC GUI, at the top select Tenants > common.
2. On the left, expand Tenant common > Contracts > Filters.
3. Right-click Filters and select Create Filter.
4. Name the filter “Allow-All”.
5. Click the + sign to add an Entry to the Filter.
6. Name the Entry “Allow-All” and select EtherType IP.
7. Leave the IP Protocol set at Unspecified.
8. Click Update to add the Entry.
9. Click Submit to complete adding the Filter.
10. Right-click Filters and select Create Filter.
11. Name the filter “NFS”.
12. Click the + sign to add an Entry to the Filter.
13. Name the Entry “tcp-111” and select EtherType IP.
14. Select the tcp IP Protocol and enter 111 for From and To under the Destination Port / Range by backspacing over Unspecified and entering the number.
15. Click Update to add the Entry.
16. Click the + sign to add another Entry to the Filter.
17. Name the Entry “tcp-635” and select EtherType IP.
18. Select the tcp IP Protocol and enter 635 for From and To under the Destination Port / Range by backspacing over Unspecified and entering the number.
19. Click Update to add the Entry.
20. Click the + sign to add another Entry to the Filter.
21. Name the Entry “tcp-2049” and select EtherType IP.
22. Select the tcp IP Protocol and enter 2049 for From and To under the Destination Port / Range by backspacing over Unspecified and entering the number.
23. Click Update to add the Entry.
24. Click Submit to complete adding the Filter.
25. Right-click Filters and select Create Filter.
26. Name the filter “iSCSI”.
27. Click the + sign to add an Entry to the Filter.
28. Name the Entry “iSCSI” and select EtherType IP.
29. Select the tcp IP Protocol and enter 3260 for From and To under the Destination Port / Range by backspacing over Unspecified and entering the number.
30. Click Update to add the Entry.
31. Click Submit to complete adding the Filter.
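The filters and their entries can optionally be verified from the APIC CLI; vzFilter and vzEntry are the standard ACI classes for filters and filter entries (a quick sketch, output trimmed):
apic1# moquery -c vzFilter | grep name
apic1# moquery -c vzEntry | egrep 'name|dToPort'
The NFS filter should show entries with destination ports 111, 635, and 2049, and the iSCSI filter should show port 3260.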
This section details the steps for creating the FPV-Foundation Tenant in the ACI Fabric. This tenant will host infrastructure connectivity for the compute (VMware ESXi on UCS nodes) and the storage environments. To deploy the FPV-Foundation Tenant, complete the following steps:
1. In the APIC GUI, select Tenants > Add Tenant.
2. Name the Tenant “FPV-Foundation”.
3. For the VRF Name, enter “FPV-Foundation”. Keep the check box “Take me to this tenant when I click finish” checked.
4. Click Submit to finish creating the Tenant.
To create an application profile for host connectivity, complete the following steps:
1. In the left pane, under the Tenant FPV-Foundation, right-click Application Profiles and select Create Application Profile.
2. Name the Profile “Host-Conn” and click Submit to complete adding the Application Profile.
3. Repeat these two steps to create an Application Profile named “MetroCluster”.
The following EPGs and the corresponding mappings will be created under these application profiles.
Refer to Table 6 for the information required during the following configuration. Note that since all storage interfaces on a single Interface Group on a NetApp AFF A700 share the same MAC address, different bridge domains must be used for each storage EPG.
Table 6 EPGs and Mappings for Application Profile Host-Connectivity
EPG Name | Bridge Domain | Domain | VLAN | Static Ports
vMotion | bd-FPV-Foundation | L2 External: FPV-UCS-L2 | 3000 | vPC UCS-A & UCS-B
iSCSI-A | bd-FPV-Foundation-iSCSI-A | L2 External: FPV-UCS-L2; Physical: NetApp-A700-phys | 3010 | vPC UCS-A & UCS-B; vPC A700-A & A700-B
iSCSI-B | bd-FPV-Foundation-iSCSI-B | L2 External: FPV-UCS-L2; Physical: NetApp-A700-phys | 3020 | vPC UCS-A & UCS-B; vPC A700-A & A700-B
NFS | bd-FPV-Foundation-NFS | L2 External: FPV-UCS-L2; Physical: NetApp-A700-phys | 3050 | vPC UCS-A & UCS-B; vPC A700-A & A700-B
Table 7 EPG and Mappings for Application Profile MetroCluster
EPG Name | Bridge Domain | Domain | VLAN | Static Ports
InterCluster | bd-InterCluster | Physical: NetApp-A700-phys | 113 | vPC A700-A & A700-B
To create bridge domains and EPGs, complete the following steps:
1. For each row in the tables above, in the left pane, under Tenant FPV-Foundation, expand Networking > Bridge Domains.
2. Right-click Bridge Domains and select Create Bridge Domain.
3. Name the Bridge Domain {bd-FPV-Foundation}.
4. Select FPV-Foundation from the VRF drop-down list.
5. Select Custom under Forwarding and enable flooding.
6. Click Next.
7. Under L3 Configurations, make sure Limit IP Learning to Subnet is selected and select EP Move Detection Mode – GARP based detection. Select Next.
8. No changes are needed for Advanced/Troubleshooting.
9. Click Finish to finish creating the Bridge Domain.
10. Repeat the above steps to add all Bridge Domains listed in Table 6 and Table 7.
11. In the left pane, expand Application Profiles > Host-Conn. Right-click Application EPGs and select Create Application EPG.
12. Name the EPG {vMotion}.
13. From the Bridge Domain drop-down list, select the Bridge Domain from the table.
14. Click Finish to complete creating the EPG.
15. In the left pane, expand the Application EPGs and EPG {vMotion}.
16. Right-click Domains and select Add L2 External Domain Association.
17. From the drop-down list, select the previously defined {FPV-UCS-L2} L2 External Domain Profile.
18. Click Submit to complete the L2 External Domain Association.
19. Repeat the Domain Association steps (16-18) to add the appropriate EPG-specific domains from Table 6 and Table 7.
20. Right-click Static Ports and select Deploy EPG on PC, VPC, or Interface.
21. In the Deploy Static EPG on PC, VPC, Or Interface Window, select the Virtual Port Channel Path Type.
22. From the drop-down list, select the appropriate VPCs.
23. Enter the VLAN from Table 6 ({3000} for the vMotion EPG) for Port Encap.
24. Select Immediate for Deployment Immediacy and for Mode select Trunk.
25. Click Submit to complete adding the Static Path Mapping.
26. Repeat these steps for each EPG listed in Table 6 and Table 7.
You may also add subnets (default GW) for each EPG created that can be pinged for troubleshooting purposes.
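Before moving on to the Shared L3 Out, the tenant objects created in this section can be spot-checked from the APIC CLI (a sketch; fvTenant, fvBD, and fvAEPg are the standard ACI classes for tenants, bridge domains, and EPGs):
apic1# moquery -c fvTenant | grep name
apic1# moquery -c fvBD | grep name
apic1# moquery -c fvAEPg | grep name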
This section describes the procedure for deploying the ACI Shared Layer 3 Out. The FlexPod Datacenter with Cisco ACI Multi-Pod, NetApp MetroCluster IP and VMware vSphere 6.7 solution utilizes two separate geographically dispersed datacenters where each datacenter leverages its own connection to the external network domain for optimal routing. The connection is defined in the tenant common and provides a contract to be consumed by all the application EPGs that require access to non-ACI networks. Tenant network routes can be shared with the Nexus 7004 switches using OSPF and external routes from the Nexus 7000s are shared with the tenant.
In this CVD, a dedicated pair of Nexus 9372 leaf switches at each site was utilized to provide external L3 connectivity to show the flexibility and future expandability of the design. Customers can use the existing Nexus 93180 to achieve the same goal.
This section provides a detailed procedure for setting up the Shared Layer 3 Out in Tenant common to existing customer Nexus 7000 routers using sub-interfaces and VRF aware OSPF. Some highlights of this connectivity are:
· A new bridge domain and associated VRF is configured in Tenant common for external connectivity.
· The shared Layer 3 Out created in Tenant common “provides” an external connectivity contract that can be “consumed” from any tenant.
· Routes to tenant EPG subnets connected by contract are shared across VRFs with the Nexus 7000 core routers using OSPF.
· The Nexus 7000s’ default gateway is shared with the ACI fabric using OSPF.
· Each of the two Nexus 7000s is connected to each of the two Nexus 9000 leaf switches.
· Sub-interfaces are configured and used for external connectivity.
· The Nexus 7000s are configured to originate and send a default route to the Nexus 9000 leaf switches.
The following configuration is a sample from the virtual device contexts (VDCs) of the four Nexus 7004s. Interfaces and a default route from the four Nexus 7000s also need to be set up, but they are not shown here because that configuration is customer specific.
Figure 13 POD11 - ACI Shared Layer 3 Out Connectivity Details
feature ospf
feature lldp
!
interface Ethernet4/16
description TO POD11-7004-2 E4/16
no switchport
ip address 10.251.251.1/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
no shutdown
!
interface Ethernet4/4
description TO POD11-9372-1 E1/47
no switchport
no shutdown
!
interface Ethernet4/4.305
encapsulation dot1q 305
ip address 10.251.231.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.10
no shutdown
!
interface Ethernet4/8
description TO POD11-9372-2 E1/47
no switchport
no shutdown
!
interface Ethernet4/8.307
encapsulation dot1q 307
ip address 10.251.233.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.10
no shutdown
!
interface loopback0
ip address 10.251.255.21/32
ip router ospf 10 area 0.0.0.0
!
router ospf 10
router-id 10.251.255.21
area 0.0.0.10 nssa no-summary no-redistribution default-information-originate
!
feature ospf
feature lldp
!
interface Ethernet4/4
description TO POD11-9372-1 E1/48
no switchport
no shutdown
!
interface Ethernet4/4.306
encapsulation dot1q 306
ip address 10.251.232.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.10
no shutdown
!
interface Ethernet4/8
description TO POD11-9372-2 E1/48
no switchport
no shutdown
!
interface Ethernet4/8.308
encapsulation dot1q 308
ip address 10.251.234.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.10
no shutdown
!
interface Ethernet4/16
description TO POD11-7004-1 E4/16
no switchport
ip address 10.251.251.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
no shutdown
!
interface loopback0
ip address 10.251.255.22/32
ip router ospf 10 area 0.0.0.0
!
router ospf 10
router-id 10.251.255.22
area 0.0.0.10 nssa no-summary no-redistribution default-information-originate
!
Figure 14 POD1 - ACI Shared Layer 3 Out Connectivity Details
feature ospf
feature lldp
!
interface Ethernet4/4
description TO POD1-9372-1 E1/47
no switchport
no shutdown
!
interface Ethernet4/4.301
encapsulation dot1q 301
ip address 10.252.231.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.10
no shutdown
!
interface Ethernet4/8
description To POD1-9372-2 E1/48
no switchport
no shutdown
interface Ethernet4/8.303
encapsulation dot1q 303
ip address 10.252.233.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.10
no shutdown
!
interface Ethernet4/16
description TO POD1-7004-2 E3/16
no switchport
ip address 10.252.251.1/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
!
interface loopback0
ip address 10.252.255.21/32
ip router ospf 10 area 0.0.0.0
!
router ospf 10
router-id 10.252.255.21
area 0.0.0.10 nssa no-summary no-redistribution default-information-originate
!
feature ospf
feature lldp
!
interface Ethernet3/4
description TO POD1-9372-1 E1/48
no shutdown
!
interface Ethernet3/4.302
encapsulation dot1q 302
ip address 10.252.232.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.10
no shutdown
!
interface Ethernet3/8
description TO POD1-9372-2 E1/47
no shutdown
interface Ethernet3/8.304
encapsulation dot1q 304
ip address 10.252.234.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.10
no shutdown
!
interface Ethernet3/16
description TO POD1-7004-1 E4/16
ip address 10.252.251.2/30
ip ospf network point-to-point
ip ospf mtu-ignore
ip router ospf 10 area 0.0.0.0
!
interface loopback0
ip address 10.252.255.22/32
ip router ospf 10 area 0.0.0.0
!
router ospf 10
router-id 10.252.255.22
area 0.0.0.10 nssa no-summary no-redistribution default-information-originate
!
1. Log into Cisco APIC. At the top, select Fabric > Access Policies.
2. On the left, expand Physical and External Domains.
3. Right-click External Routed Domains and select Create Layer 3 Domain.
4. Name the Domain SharedL3Out.
5. Use the Associated Attachable Entity Profile drop-down list to select Create Attachable Entity Profile.
6. Name the Profile aep-Shared-L3Out and click Next.
7. Click Finish to continue without specifying interfaces.
8. Back in the Create Layer 3 Domain window, use the VLAN Pool drop-down list to select Create VLAN Pool.
9. Name the VLAN Pool L3Out-vlans and select Static Allocation.
10. Click the + sign to add an Encap Block.
11. In the Create Ranges window, enter the From and To VLAN IDs for the Shared-L3-Out VLAN range (301-308). Select Static Allocation.
12. Click OK to complete adding the VLAN range.
13. Click Submit to complete creating the VLAN Pool.
14. Click Submit to complete creating the Layer 3 Domain.
15. At the top, select Fabric > Access Policies.
16. On the left, select Quick Start. Under Steps, select Configure an interface, PC, or VPC.
17. In the center pane, click the green plus sign to select switches.
18. Using the Switches pull-down, select the first leaf switch connected to the Nexus 7000.
19. Click the green plus sign to configure switch interfaces.
20. Next to interfaces, enter the port identifiers for the port connected to the Nexus 7000 and used for Shared-L3-Out. Fill in the policies, Attached Device Type, and External Route Domain as covered below.
21. On the lower right, click Save. Click Save again and then click Submit.
Repeat these steps to configure all the leaf switch interfaces connected to Nexus 7000s at both sites.
22. At the top, select Tenants > common.
23. On the left, expand Tenant common and Networking.
24. Right-click VRFs and select Create VRF.
25. Name the VRF “vrf-common-outside”. Select default for both the End Point Retention Policy and Monitoring Policy. Un-select the Create A Bridge Domain check box.
26. Click Finish to complete creating the VRF.
27. On the left, right-click External Routed Networks and select Create Routed Outside.
28. Name the Routed Outside N7K-SharedL3Out.
29. Select the checkbox next to OSPF.
30. Enter 0.0.0.10 (configured in the Nexus 7000s) as the OSPF Area ID.
31. Using the VRF drop-down list, select vrf-common-outside.
32. Using the External Routed Domain drop-down list, select SharedL3Out.
33. Click the + sign to the right of Nodes and Interfaces Protocol Profiles to add a Node Profile.
34. Name the Node Profile Nodes-1101-1102 for the Nexus 9000 Leaf Switches.
35. Click the + sign to the right of Nodes to add a Node.
36. In the select Node window, select Leaf switch 1101.
37. Provide a Router ID IP address that will also be used as the Loopback Address (10.251.255.1).
38. Click OK to complete selecting the Node.
39. Click the + sign to the right of Nodes to add a Node.
40. In the select Node window, select Leaf switch 1102.
41. Provide a Router ID IP address that will also be used as the Loopback Address (10.251.255.2).
42. Click OK to complete selecting the Node.
43. Click the + sign to the right of OSPF Interface Profiles to create an OSPF Interface Profile.
44. Name the profile “Node-1101-1102-IntfPol”.
45. Click Next.
46. Using the OSPF Policy drop-down list, select Create OSPF Interface Policy.
47. Name the policy “ospf-Nexus-7K”.
48. Select the Point-to-Point Network Type.
49. Select the Advertise subnet and MTU ignore Interface Controls.
50. Click SUBMIT to complete creating the policy.
51. Click Next.
52. Select Routed Sub-Interface under Interfaces.
53. Click the + sign to the right of Routed Sub-Interfaces to add a routed sub-interface.
54. In the Select Routed Sub-Interface window, select the interface on Node 1101 that is connected to Nexus 7000 as shown in Figure 13.
55. Enter VLAN <305> for Encap.
56. Enter the IPv4 Primary Address (10.251.231.1/30)
57. Leave the MTU set to inherit.
58. Click OK to complete creating the routed sub-interface.
59. Repeat these steps to add the remaining three sub-interfaces as shown in Figure 13.
60. Click OK to complete creating the Node Interface Profile.
61. Click OK to complete creating the Node Profile.
62. Click + under Nodes and Interface Protocol Profiles and repeat steps 33-61 to add the Nodes 201 and 202 at POD1. Use Figure 14 as a reference for interface, VLAN and IP address information.
63. Click Next.
64. Click the + sign under External EPG Networks to create an External EPG Network.
65. Name the External Network Default-Route.
66. Click the + sign to add a Subnet.
67. Enter 0.0.0.0/0 as the IP Address. Select the checkboxes for External Subnets for the External EPG, Shared Route Control Subnet, and Shared Security Import Subnet.
68. Click OK to complete creating the subnet.
69. Click OK to complete creating the external network.
70. Click Finish to complete creating the N7K-SharedL3Out.
71. On the left, right-click Contracts and select Create Contract.
72. Name the contract “Allow-Shared-L3Out”.
73. Select the Global Scope to allow the contract to be consumed from all tenants.
74. Click the + sign to the right of Subjects to add a contract subject.
75. Name the subject “Allow-All”.
76. Click the + sign to the right of Filters to add a filter.
77. Use the drop-down list to select the Allow-All filter from Tenant common.
78. Click Update.
79. Click OK to complete creating the contract subject.
80. Click Submit to complete creating the contract.
81. On the left, expand Tenant common, Networking, External Routed Networks, N7K-SharedL3Out, and Networks. Select Default-Route.
82. On the right, under Policy, select Contracts.
83. Click the + sign to the right of Provided Contracts to add a Provided Contract.
84. Select the common/Allow-Shared-L3Out contract and click Update.
Tenant EPGs can now consume the Allow-Shared-L3Out contract and connect outside of the fabric. More restrictive contracts can be built and provided here for controlled access to the outside.
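Once the contract is provided, OSPF adjacency and route exchange can be verified from the Nexus 7000 side with standard NX-OS show commands; the process tag (10) matches the sample configuration shown earlier, and output is not shown here because it is environment specific.
show ip ospf neighbors
show ip route ospf-10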
The Installation and Setup Instructions for the NetApp AFF A700 systems are available here: https://library.netapp.com/ecm/ecm_download_file/ECMP12522110
The Disk Shelf Documentation to plan the Disk Shelf connectivity and initial configuration is available here: https://mysupport.netapp.com/documentation/productlibrary/index.html?productID=30147
Refer to the NetApp Hardware Universe to plan the physical location of the storage arrays in both data centers, available here:
· https://hwu.netapp.com/
The NetApp Hardware Universe (HWU) application provides supported hardware and software components for any specific ONTAP version. It provides configuration information for all the NetApp storage appliances currently supported by ONTAP software. It also provides a table of component compatibilities.
Refer to NetApp IMT to get a list of hardware, software, and firmware that are interoperable in a MetroCluster solution, available here:
· http://mysupport.netapp.com/matrix/
It is recommended to check IMT prior to first time deployment and subsequent upgrades and refreshes.
In a MetroCluster IP configuration the network switches will be configured to function as the Cluster Interconnects and also as the backend MetroCluster IP network between the two sites.
To download the Reference Configuration Files for the IP network switches, complete the following steps:
1. Browse to the URL: https://mysupport.netapp.com/NOW/download/software/metrocluster_ip/rcfs/download.shtml
2. Download the RCF zip file that corresponds to the switch model that you intend to use, Cisco Nexus 3132Q-V in this case.
The zip file contains four RCFs, two for each site. Each RCF must be applied to its corresponding switch; applying the wrong RCF will result in an incorrect configuration.
To Initialize the IP Network Switches in both the sites, complete the following steps:
1. Establish a console connection to the switch.
2. Erase the existing configuration:
write erase
3. Reload the switch:
reload
The switch will reboot and enter the configuration wizard.
4. Setup the switch once it enters the configuration wizard:
Abort Auto Provisioning and continue with normal setup?(yes/no) [n]: yes
---- System Admin Account Setup ----
Do you want to enforce secure password standard (yes/no) [y]: yes
Enter the password for "admin": <password>
Confirm the password for "admin": <password>
---- Basic System Configuration Dialog VDC: 1 ----
This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.
Please register Cisco Nexus3000 Family devices promptly with your
supplier. Failure to register may affect response times for initial
service calls. Nexus3000 devices must be registered to receive
entitled support services.
Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]: Enter
Configure read-only SNMP community string (yes/no) [n]: Enter
Configure read-write SNMP community string (yes/no) [n]: Enter
Enter the switch name : <<switch-name>>
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: Enter
Mgmt0 IPv4 address : <<management-IP-address>>
Mgmt0 IPv4 netmask : <<management-IP-netmask>>
Configure the default gateway? (yes/no) [y]: Enter
IPv4 address of the default gateway : <<gateway-IP-address>>
Configure advanced IP options? (yes/no) [n]: Enter
Enable the telnet service? (yes/no) [n]: Enter
Enable the ssh service? (yes/no) [y]: Enter
Type of ssh key you would like to generate (dsa/rsa) [rsa]: Enter
Number of rsa key bits <1024-2048> [1024]: Enter
Configure the ntp server? (yes/no) [n]: Enter
Configure default interface layer (L3/L2) [L2]: L2
Configure default switchport interface state (shut/noshut) [noshut]: shut
Configure CoPP system profile (strict/moderate/lenient/dense) [strict]: strict
Would you like to edit the configuration? (yes/no) [n]: Enter
Use this configuration and save it? (yes/no) [y]: Enter
To upgrade the NX-OS to the latest version on the switches in both the sites, refer to the Cluster Network Compatibility Matrix: https://mysupport.netapp.com/NOW/download/software/cm_switches/
Table 8 Switch to RCF File Mapping
Switch | Site | RCF File
IP_switch_A_1 | Site A | switch-model_RCF_v1.2-MetroCluster-IPswitch-A-1.txt
IP_switch_A_2 | Site A | switch-model_RCF_v1.2-MetroCluster-IPswitch-A-2.txt
IP_switch_B_1 | Site B | switch-model_RCF_v1.2-MetroCluster-IPswitch-B-1.txt
IP_switch_B_2 | Site B | switch-model_RCF_v1.2-MetroCluster-IPswitch-B-2.txt
1. Download the supported NX-OS software from https://software.cisco.com/download/home/283734368
2. Copy the NX-OS software to the switch using any supported transfer protocol, for example:
copy sftp://root@server-ip-address/tftpboot/NX-OS-file-name bootflash: vrf management
3. Refer to Table 8 and copy the appropriate RCF file to the switch, for example:
copy sftp://root@FTP-server-IP-address/tftpboot/RCF-filename bootflash: vrf management
4. Verify that the RCF and NX-OS files are present on the switch’s bootflash directory:
dir bootflash:
5. Install the switch software:
install all nxos bootflash:nxos.version-number.bin
Switch will be reloaded for disruptive upgrade.
Do you want to continue with the installation (y/n)? [n] y
The switch will reload (reboot) automatically after the switch software has been installed.
6. Merge the RCF with the running-config on the switch:
copy bootflash:RCF-filename running-config
The switch should not be modified unless the cabling deviates from the configuration provided in the RCF.
7. Save the running config on the switch:
copy running-config startup-config
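After the reload and RCF merge, a brief sanity check can be run on each switch with standard NX-OS commands (output omitted):
show version
show interface brief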
With ONTAP 9.4, MetroCluster IP configurations support new installations with AFF systems using Advanced Disk Partitioning (ADP). It is recommended to have a minimum of eight SAS disk shelves (four shelves at each site) to allow automatic disk ownership on a per-shelf basis. If the recommendation cannot be met, a minimum of four disk shelves are required (two shelves at each site) for an initial deployment and the ownership must be set manually.
For new systems, the disk and partition assignments are configured in the factory. For existing systems that are being repurposed for MetroCluster IP, disk and partition auto-assignment can be used if the system meets the minimum recommendation of four shelves per site or if the total number of shelves is a multiple of four.
If the disk shelf count does not meet either criterion, ownership must be assigned manually.
Pool 0 always contains the disks that are found on the same site as the storage system that owns them.
Pool 1 always contains the disks that are found on the remote site.
To configure the partitions and ownership manually, complete the following steps:
1. Delete all the existing partitions on the Nodes and remove ownership of all the drives.
2. Run “set-defaults” on the LOADER prompt.
3. From the Boot Menu select option 4.
4. From the Maintenance Mode, delete the root aggregate and remove ownership of all the partitions.
5. From the Maintenance Mode Assign Disk and Partition ownership to the Nodes.
6. At the LOADER prompt, set the root aggregate disk limit using the command: setenv root-configuration "-d # -p # -s #"
7. From the Boot Menu select option 4.
For more information, see: MetroCluster IP Installation and Configuration Guide.
In this section, the variable Site-A refers to the 40GbE Site and Site-B refers to 10GbE site. These variables are used to improve the readability of the commands.
Perform the following operation on both clusters of the MetroCluster configuration.
1. Establish a console session to the Storage Node and complete the following steps:
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
This system will send event messages and periodic reports to NetApp Technical
Support. To disable this feature, enter autosupport modify -support disable
within 24 hours.
Enabling AutoSupport can significantly speed problem determination and
resolution should a problem occur on your system.
For further information on AutoSupport, see:
http://support.netapp.com/autosupport/
Type yes to confirm and continue {yes}: yes
Enter the node management interface port [e0M]: Enter
Enter the node management interface IP address: <node_01_mgmt_ip>
Enter the node management interface netmask: <node_mgmt_netmask>
Enter the node management interface default gateway: <node_mgmt_gateway>
A node management interface on port e0M with IP address <node_01_mgmt_ip> has been created.
Use your web browser to complete cluster setup by accessing
https://<node_01_mgmt_ip>
Otherwise, press Enter to complete cluster setup using the command line
interface:
Do you want to create a new cluster or join an existing cluster? {create, join}: create
Do you intend for this node to be used as a single node cluster? {yes, no} [no]: no
Will the cluster network be configured to use network switches? [yes]: yes
Existing cluster interface configuration found:
Port MTU IP Netmask
e4a 9000 169.254.153.174 255.255.0.0
e4e 9000 169.254.130.68 255.255.0.0
Do you want to use this configuration? {yes, no} [yes]: yes
Enter the cluster administrator's (username "admin") password: <password>
Retype the password: <password>
Step 1 of 5: Create a Cluster
You can type "back", "exit", or "help" at any question.
2. Enter the cluster name.
Enter the cluster name: <cluster_name>
3. Enter the cluster base license key.
Enter the cluster base license key: <cluster_base_license>
4. Provide any additional license keys that will be required for this solution.
In this deployment, the NFS, iSCSI, SnapMirror and FlexClone licenses were added. After you have added all the necessary licenses, press Enter to proceed to the next step.
Step 2 of 5: Add Feature License Keys
You can type "back", "exit", or "help" at any question.
Enter an additional license key []: Enter
5. Configure the cluster management interface.
Step 3 of 5: Set Up a Vserver for Cluster Administration
You can type "back", "exit", or "help" at any question.
Enter the cluster management interface port: e0M
Enter the cluster management interface IP address: <cluster_mgmt>
Enter the cluster management interface netmask: <cluster_mgmt_netmask>
Enter the cluster management interface default gateway [<node_mgmt_gateway>]: Enter
A cluster management interface on port e0M with IP address <cluster_mgmt> has been created. You can use this address to connect to and manage the cluster.
6. Configure the DNS.
Enter the DNS domain names: <dns_domain_names>
Enter the name server IP addresses: <dns_name_server_IPs>
DNS lookup for the admin Vserver will use the <dns_domain_names> domain.
7. Configure storage failover.
Step 4 of 5: Configure Storage Failover (SFO)
You can type "back", "exit", or "help" at any question.
SFO will be enabled when the partner joins the cluster.
8. Enter the controller’s physical location.
Where is the controller located []: <Location>
Perform this operation on both clusters in the MetroCluster configuration.
1. Establish a console session to the storage node and complete the following steps.
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
This system will send event messages and periodic reports to NetApp Technical
Support. To disable this feature, enter autosupport modify -support disable
within 24 hours.
Enabling AutoSupport can significantly speed problem determination and
resolution should a problem occur on your system.
For further information on AutoSupport, see:
http://support.netapp.com/autosupport/
Type yes to confirm and continue {yes}: yes
Enter the node management interface port [e0M]:
Enter the node management interface IP address: <node_02_mgmt_ip>
Enter the node management interface netmask: <node_mgmt_netmask>
Enter the node management interface default gateway: <node_mgmt_gateway>
A node management interface on port e0M with IP address <node_02_mgmt_ip> has been created.
Use your web browser to complete cluster setup by accessing
https://<node_02_mgmt_ip>
Otherwise, press Enter to complete cluster setup using the command line
interface:
This node's storage failover partner is already a member of a cluster.
Storage failover partners must be members of the same cluster.
The cluster setup wizard will default to the cluster join dialog.
2. Initiate the Cluster Join process.
Do you want to create a new cluster or join an existing cluster? {join}: Enter
Existing cluster interface configuration found:
Port MTU IP Netmask
e4a 9000 169.254.234.170 255.255.0.0
e4e 9000 169.254.230.183 255.255.0.0
Do you want to use this configuration? {yes, no} [yes]: yes
3. Join the existing cluster.
Step 1 of 3: Join an Existing Cluster
You can type "back", "exit", or "help" at any question.
Enter the name of the cluster you would like to join [<cluster_name>]: Enter
This node has joined the cluster <cluster_name>.
Step 2 of 3: Configure Storage Failover (SFO)
You can type "back", "exit", or "help" at any question.
SFO is enabled.
Step 3 of 3: Set Up the Node
You can type "back", "exit", or "help" at any question.
This node has been joined to cluster "<cluster_name>".
To make sure that the HA state of the controllers and chassis is configured for MetroCluster, complete the following steps:
1. From the Loader prompt, enter the maintenance mode.
LOADER-A> boot_ontap maint
Continue with boot? yes
2. Verify the HA config settings from the maintenance mode.
*> ha-config show
The controller module and chassis should show the value mccip.
3. If the displayed system state of the controller is not mccip, set the HA state for the controller.
ha-config modify controller mccip
4. If the displayed system state of the chassis is not mccip, set the HA state for the chassis.
ha-config modify chassis mccip
Run the following command on all the nodes in the MetroCluster only if your configuration has fewer than four storage shelves per site or if you need to manually assign drives:
storage disk option modify -node * -autoassign off
Make sure that the local disks on each site are visible and assigned correctly:
disk show -fields bay,shelf,owner,pool
To configure HA on each storage node in the MetroCluster, complete the following steps:
1. Establish an SSH session to the cluster management IP <cluster_mgmt>.
2. Query the status of HA.
storage failover show -fields mode
3. Configure HA, if it is not already enabled.
storage failover modify -mode ha -node <node_name>
To configure the service processors on each storage node in the MetroCluster, complete the following steps.
1. Establish an SSH session to the cluster management IP <cluster_mgmt>.
2. List the status of the service processors in the cluster.
sp show
3. Configure the service processor IP address.
sp network modify -node <node_name> -address-family IPv4 -enable true -ip-address <sp_ip> -netmask <sp_netmask> -gateway <sp_gateway>
Zero all the spare disks in the cluster by running the following command in both the clusters:
disk zerospares
To set the auto-revert parameter on the cluster management interface, complete the following step on both clusters:
network interface modify -vserver <cluster_name> -lif cluster_mgmt -auto-revert true
Configure the time synchronization on the MetroCluster by completing the following steps on both clusters.
1. Set the time zone for the cluster.
timezone <timezone>
For example, in the eastern United States, the time zone is America/New_York.
2. Set the date for the cluster.
date <ccyymmddhhmm.ss>
The format for the date is <[Century][Year][Month][Day][Hour][Minute].[Second]>. For example, 201808081735.17.
3. Configure the Network Time Protocol (NTP) servers for the cluster.
cluster time-service ntp server create -server <switch-a-ntp-ip>
cluster time-service ntp server create -server <switch-b-ntp-ip>
You can use your existing NTP server IP addresses to keep the time on the storage system in sync.
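To confirm the NTP and time configuration on each cluster, the following ONTAP commands can be used (output omitted):
cluster time-service ntp server show
cluster date show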
To configure the Simple Network Management Protocol (SNMP), complete the following steps in both the clusters:
1. Configure basic SNMP information, such as the location and contact. When polled, this information is visible as the sysLocation and sysContact variables in SNMP.
snmp contact <snmp-contact>
snmp location “<snmp-location>”
snmp init 1
options snmp.enable on
2. Configure SNMP traps to send to remote hosts such as a DFM server or another fault management system.
snmp traphost add <oncommand-um-server-fqdn>
To configure SNMPv1 access, set the shared, secret plaintext password (called a community):
snmp community add ro <snmp-community>
To enable the Cisco Discovery Protocol (CDP) on the NetApp storage controllers, run the following command to enable CDP on all nodes in the MetroCluster:
node run -node * options cdpd.enable on
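After CDP has been enabled and the connected switch ports are up, the discovered neighbors can be listed on each cluster with the following ONTAP command (output omitted):
network device-discovery show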
NetApp recommends disabling flow control on all 40GbE and UTA2 ports that are connected to external devices. To disable flow control, run the following commands on each node in the MetroCluster.
network port modify -node * -port e4a,e4e,e8a,e8e -flowcontrol-admin none
Warning: Changing the network port settings will cause a several second
interruption in carrier.
Do you want to continue? {y|n}: y
Set the administrative speed of the cluster and data ports on the storage nodes to 40Gbps.
net port modify -node * -port e8* -speed-admin 40000
net port modify -node * -port e4* -speed-admin 40000
All interfaces of the storage nodes are by default placed in the Default broadcast domain. To use these interfaces, they need to be removed from the Default domain.
broadcast-domain remove-ports -broadcast-domain Default -ports node1:e8a,node1:e8e,node2:e8a,node2:e8e
If Jumbo Frames are required for the solution, run the following commands to configure them in the broadcast domains in both clusters.
broadcast-domain create -broadcast-domain Infra_NFS -mtu 9000
broadcast-domain create -broadcast-domain Infra_iSCSI-A -mtu 9000
broadcast-domain create -broadcast-domain Infra_iSCSI-B -mtu 9000
broadcast-domain create -broadcast-domain Inter-Cluster -mtu 9000
Create a broadcast domain for the in-band management traffic and set the MTU to 1500.
broadcast-domain create -broadcast-domain IB-MGMT -mtu 1500
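The newly created broadcast domains and their MTU values can be verified with:
broadcast-domain show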
Create LACP interface groups for the 40GbE data interfaces on all the nodes in the MetroCluster.
To configure the interface groups on both nodes of the clusters, run the following commands:
ifgrp create -node <node01> -ifgrp a0a -distr-func port -mode multimode_lacp
ifgrp add-port -node <node01> -ifgrp a0a -port e8a
ifgrp add-port -node <node01> -ifgrp a0a -port e8e
ifgrp create -node <node02> -ifgrp a0a -distr-func port -mode multimode_lacp
ifgrp add-port -node <node02> -ifgrp a0a -port e8a
ifgrp add-port -node <node02> -ifgrp a0a -port e8e
ifgrp show
To configure jumbo frames on the interface groups on both nodes of the clusters, run the following commands:
network port modify -node <node01> -port a0a -mtu 9000
network port modify -node <node02> -port a0a -mtu 9000
Create the VLANs that are required for this solution and add the VLAN interfaces to the respective broadcast domains.
IB-MGMT VLAN
network port vlan create -node <node01> -vlan-name a0a-<ib-mgmt-vlan-id>
network port vlan create -node <node02> -vlan-name a0a-<ib-mgmt-vlan-id>
broadcast-domain add-ports -broadcast-domain IB-MGMT -ports <node01>:a0a-<ib-mgmt-vlan-id>,<node02>:a0a-<ib-mgmt-vlan-id>
Inter-Cluster VLAN
network port vlan create -node <node01> -vlan-name a0a-<intercluster-vlan-id>
network port vlan create -node <node02> -vlan-name a0a-<intercluster-vlan-id>
broadcast-domain add-ports -broadcast-domain Inter-Cluster -ports <node01>:a0a-<intercluster-vlan-id>,<node02>:a0a-<intercluster-vlan-id>
NFS VLAN
network port vlan create -node <node01> -vlan-name a0a-<nfs-vlan-id>
network port vlan create -node <node02> -vlan-name a0a-<nfs-vlan-id>
broadcast-domain add-ports -broadcast-domain Infra_NFS -ports <node01>:a0a-<nfs-vlan-id>,<node02>:a0a-<nfs-vlan-id>
iSCSI-A VLAN
network port vlan create -node <node01> -vlan-name a0a-<iscsi-a-vlan-id>
network port vlan create -node <node02> -vlan-name a0a-<iscsi-a-vlan-id>
broadcast-domain add-ports -broadcast-domain Infra_iSCSI-A -ports <node01>:a0a-<iscsi-a-vlan-id>,<node02>:a0a-<iscsi-a-vlan-id>
iSCSI-B VLAN
network port vlan create -node <node01> -vlan-name a0a-<iscsi-b-vlan-id>
network port vlan create -node <node02> -vlan-name a0a-<iscsi-b-vlan-id>
broadcast-domain add-ports -broadcast-domain Infra_iSCSI-B -ports <node01>:a0a-<iscsi-b-vlan-id>,<node02>:a0a-<iscsi-b-vlan-id>
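To confirm that the VLAN ports were created and added to the correct broadcast domains, run the following commands on both clusters:
network port vlan show
broadcast-domain show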
To create the intercluster LIFs on both the nodes in both the clusters, run the following commands:
net int create -vserver <cluster_name> -lif <node_01_ic> -role intercluster -address <node_01_ic_lif_ip> -netmask <ic_netmask> -home-node <node_01> -home-port a0a-<intercluster-vlan-id> -failover-policy disabled -status-admin up
net int create -vserver <cluster_name> -lif <node_02_ic> -role intercluster -address <node_02_ic_lif_ip> -netmask <ic_netmask> -home-node <node_02> -home-port a0a-<intercluster-vlan-id> -failover-policy disabled -status-admin up
Make a note of the Intercluster LIF IP addresses from both the clusters, which will be used to set up the cluster-peer relationship.
Site | Node | LIF Name | IP
A | Node 01 | node_01_ic | node_01_ic_lif_ip
A | Node 02 | node_02_ic | node_02_ic_lif_ip
B | Node 01 | node_01_ic | node_01_ic_lif_ip
B | Node 02 | node_02_ic | node_02_ic_lif_ip
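The values for the table above can be collected from each cluster by filtering the network interfaces on the intercluster role (output omitted):
network interface show -role intercluster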
If the deployment has fewer than four storage shelves per site, disable automatic drive assignment on all the nodes.
storage disk option modify -node node_name -autoassign off
To set up the cluster-peer relationship between the two clusters in the MetroCluster configuration, run the following command:
40GbE Site
cluster peer create -address-family ipv4 -peer-addrs <10GbE-Site>:<node_01_ic_lif_ip>,<10GbE-Site>:<node_02_ic_lif_ip>
Enter the passphrase: <passphrase>
Confirm the passphrase: <passphrase>
Notice: Now use the same passphrase in the "cluster peer create" command in the other cluster.
10GbE Site
cluster peer create -address-family ipv4 -peer-addrs <40GbE-Site>:<node_01_ic_lif_ip>,<40GbE-Site>:<node_02_ic_lif_ip>
Enter the passphrase: <passphrase>
Confirm the passphrase: <passphrase>
Enter the same passphrase that was created when setting up the storage system at the 40GbE site.
Verify whether the cluster-peer relationship is created successfully by running the following commands on either site:
cluster peer show -instance
cluster peer health show
To create the Disaster Recovery (DR) group relationships between the clusters, complete the following steps.
The DR relationships cannot be changed after the DR group is created.
1. Verify that the nodes in both clusters are ready for creation of the DR group.
metrocluster configuration-settings show-status
The command output should indicate that both nodes in both the clusters are ready for creation of the DR group.
2. Create the DR group. You can run the following command from either cluster in the MetroCluster configuration.
metrocluster configuration-settings dr-group create -partner-cluster <partner-cluster-name> -local-node <local-node_01> -remote-node <remote-node_01>
Complete the following steps to configure the MetroCluster IP interfaces for replication of node storage and nonvolatile cache.
The MetroCluster IP addresses must be chosen carefully because they cannot be changed after assignment. Two interfaces will be created on each node.
Use the following table to record the IP addresses that you plan to use.
Site | Node | Interface | IP Address | Subnet
A | Node_01 | MetroCluster IP interface 1 | |
A | Node_01 | MetroCluster IP interface 2 | |
A | Node_02 | MetroCluster IP interface 1 | |
A | Node_02 | MetroCluster IP interface 2 | |
B | Node_01 | MetroCluster IP interface 1 | |
B | Node_01 | MetroCluster IP interface 2 | |
B | Node_02 | MetroCluster IP interface 1 | |
B | Node_02 | MetroCluster IP interface 2 | |
1. If there are at least two shelves connected to each node, ensure that each node has disk auto assignment enabled.
storage disk option show
Look for the value in the Auto Assign column.
2. Check whether the nodes are ready for interface creation.
metrocluster configuration-settings show-status
3. Create two interfaces on each node using the ports e5a and e5b.
metrocluster configuration-settings interface create -cluster-name <cluster-name> -home-node <node-name> -home-port e5a -address <ip-address> -netmask <netmask>
metrocluster configuration-settings interface create -cluster-name <cluster-name> -home-node <node-name> -home-port e5b -address <ip-address> -netmask <netmask>
Port numbers might vary based on the controller model. This document refers to the AFF A700 storage systems.
4. Verify that the interfaces have been configured.
metrocluster configuration-settings interface show
To establish the connection between the two sites using the MetroCluster IP interfaces created previously, complete the following steps:
1. Verify that the nodes are ready to connect.
metrocluster configuration-settings show-status
2. Establish the connection.
metrocluster configuration-settings connection connect
Do you want to continue? {y|n}: y
Issue the above command from either cluster in the MetroCluster configuration.
3. Verify that the connection is established.
metrocluster configuration-settings show-status
4. Verify that the iSCSI connections are established.
set -privilege advanced
Do you want to continue? {y|n}: y
storage iscsi-initiator show
set -privilege admin
If drives are being auto assigned, verify pool 1 assignment. If you are using manual assignment, assign drives to pool 1.
For more information, see the MetroCluster IP Installation and Configuration Guide.
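As a minimal sketch of manual assignment, the current ownership and pool membership can be reviewed and a remote-site drive assigned to pool 1 as shown below; the -pool parameter of storage disk assign is assumed here, and the exact disk names and partition handling depend on the shelf layout and ONTAP release, so verify the procedure against the MetroCluster IP Installation and Configuration Guide before use.
disk show -fields owner,pool
storage disk assign -disk <disk_name> -pool 1 -owner <node_name>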
If automatic drive assignment was disabled as directed previously, re-enable it on all nodes.
storage disk option modify -node node_name -autoassign on
Mirror the root aggregates on each node in the MetroCluster configuration to provide data protection.
storage aggregate mirror aggr0_<node_01>
storage aggregate mirror aggr0_<node_02>
Create mirrored data aggregates in Node_01 and Node_02 at both sites.
storage aggr create -aggregate aggr1_<node_01> -node <node_01> -diskcount 10 -mirror true
storage aggr create -aggregate aggr1_<node_02> -node <node_02> -diskcount 10 -mirror true
You can create mirrored data aggregates on any node in the DR group based on your requirement.
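The mirrored aggregates can be confirmed from either cluster with the standard aggregate listing; mirrored aggregates report a RAID status of raid_dp, mirrored.
storage aggregate show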
Make sure that the following are met before configuring the MetroCluster:
· At least two non-root mirrored data aggregates on each cluster
· The ha-config state of the controllers and chassis must be mccip
1. Run the following command from any one node in the MetroCluster deployment.
metrocluster configure -node-name <local_node_01>
2. Verify the networking status on both sites.
network port show
3. Verify the MetroCluster configuration from both sites.
metrocluster show
1. Run the following command from either cluster to check whether the components and relationships are working correctly.
metrocluster check run
Component Result
------------------- ---------
nodes ok
lifs ok
config-replication ok
aggregates ok
clusters ok
connections ok
6 entries were displayed.
2. Run the following commands for more detailed results from the most recent metrocluster check run.
metrocluster check aggregate show
metrocluster check cluster show
metrocluster check config-replication show
metrocluster check lif show
metrocluster check node show
Before moving the MetroCluster deployment into production, it is recommended that you verify the switchover, healing, and switchback operations.
For detailed operational procedures, see the MetroCluster Management and Disaster Recovery Guide.
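As a high-level outline only (the exact steps, prechecks, and privilege levels depend on the ONTAP release and are documented in the guide referenced above), a negotiated switchover test generally follows this sequence, run from the site that remains online:
metrocluster check run
metrocluster switchover
metrocluster operation show
metrocluster heal -phase aggregates
metrocluster heal -phase root-aggregates
metrocluster switchback
metrocluster operation show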
To create an SVM at both sites, complete the following steps:
1. Create the SVM.
vserver create -vserver Infra_<site_id> -rootvolume infra_<site_id>_root -aggregate aggr1_<node_01> -rootvolume-security-style unix
2. Remove unused protocols from the SVM.
vserver remove-protocols -vserver Infra_<site_id> -protocols fcp,cifs,ndmp
3. Add the data aggregates to the aggregate list.
vserver add-aggregates -vserver Infra_<site_id> -aggregates aggr1_<node_01>,aggr1_<node_02>
To create the NFS service on both sites, complete the following steps:
1. Enable and run the NFS protocol in the SVM.
nfs create -vserver Infra_<site_id> -udp disabled
2. Enable the SVM vstorage parameter for the NetApp NFS VAAI plug-in.
vserver nfs modify -vserver Infra_<site_id> -vstorage enabled
vserver nfs show
Create the iSCSI service on the SVMs in both sites.
iscsi create -vserver Infra_<site_id>
iscsi show
To create load-sharing mirrors of the SVM root volumes in both sites, complete the following steps:
1. Create a volume on each node to be the load-sharing mirror of the SVM root volume.
40GbE Site
volume create –vserver Infra_A –volume rootvol_A_m01 –aggregate aggr1_<node_01> –size 1GB –type DP
volume create –vserver Infra_A –volume rootvol_A_m02 –aggregate aggr1_<node_02> –size 1GB –type DP
10GbE Site
volume create –vserver Infra_B –volume rootvol_B_m01 –aggregate aggr1_<node_01> –size 1GB –type DP
volume create –vserver Infra_B –volume rootvol_B_m02 –aggregate aggr1_<node_02> –size 1GB –type DP
2. Create a job schedule at both sites to update the root volume mirror relationships every 15 minutes.
job schedule interval create -name 15min -minutes 15
3. Create the mirroring relationships.
40GbE Site
snapmirror create –source-path Infra_A:infra_A_root –destination-path Infra_A:rootvol_A_m01 –type LS -schedule 15min
snapmirror create –source-path Infra_A:infra_A_root –destination-path Infra_A:rootvol_A_m02 –type LS -schedule 15min
10GbE Site
snapmirror create –source-path Infra_B:infra_B_root –destination-path Infra_B:rootvol_B_m01 –type LS -schedule 15min
snapmirror create –source-path Infra_B:infra_B_root –destination-path Infra_B:rootvol_B_m02 –type LS -schedule 15min
4. Initialize the mirroring relationships at both sites.
snapmirror initialize-ls-set –source-path Infra_<site_id>:infra_<site_id>_root
snapmirror show
To configure secure access to the storage controllers, complete the following steps at both sites:
1. Increase the privilege level to access the certificate commands.
set -privilege diag
Do you want to continue? {y|n}: y
2. Generally, a self-signed certificate is already in place. Verify the certificate and obtain parameters (for example, <serial-number>) by running the following command:
security certificate show
3. For each SVM shown, the certificate common name should match the DNS FQDN of the SVM. Delete the default certificates and replace them with either self-signed certificates or certificates from a certificate authority (CA). To delete the default certificates, run the following command:
security certificate delete -vserver Infra_<site_id> -common-name Infra_<site_id> -ca Infra_<site_id> -type server -serial <serial-number>
Deleting expired certificates before creating new certificates is a best practice. Run the security certificate delete command to delete the expired certificates. In the preceding command, use TAB completion to select and delete each default certificate.
4. To generate and install self-signed certificates, run the following commands as one-time commands. Generate a server certificate for the Infra_<site_id> and the cluster SVMs. Use TAB completion to aid in the completion of these commands.
security certificate create -common-name <cert-common-name> -type server -size 2048 -country <certcountry> -state <cert-state> -locality <cert-locality> -organization <cert-org> -unit <cert-unit> -email-addr <cert-email> -expire-days <cert-days> -protocol SSL -hash-function SHA256 -vserver Infra_<site_id>
5. Enable each certificate that was created by using the server-enabled true and client-enabled false parameters. Use TAB completion to aid in the completion of these commands.
security ssl modify -vserver <clustername> -server-enabled true -client-enabled false -ca <cert-ca> -serial <cert-serial> -common-name <cert-common-name>
To obtain the values for the parameters required in this step (<cert-ca> and <cert-serial>), run the security certificate show command.
6. Disable HTTP cluster management access.
system services firewall policy delete -policy mgmt -service http –vserver <clustername>
It is normal for some of these commands to return an error message stating that the entry does not exist.
7. Revert to the normal admin privilege level and set up the system to allow SVM logs to be available by web.
set –privilege admin
vserver services web modify –name spi|ontapi|compat –vserver * -enabled true
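For reference, the following is a filled-in example of the certificate creation command in step 4; every value shown is a hypothetical placeholder and should be replaced with values appropriate for your environment.
security certificate create -common-name infra-a.flexpod.local -type server -size 2048 -country US -state California -locality "San Jose" -organization FlexPod -unit IT -email-addr admin@flexpod.local -expire-days 365 -protocol SSL -hash-function SHA256 -vserver Infra_A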
To configure NFSv3 on the SVM, complete the following steps on both sites:
1. Create a rule for the infrastructure NFS subnet in the default export policy.
vserver export-policy rule create –vserver Infra_<site_id> -policyname default –ruleindex 1 –protocol nfs -clientmatch <infra-nfs-subnet-cidr> -rorule sys –rwrule sys -superuser sys –allow-suid false
2. Assign the FlexPod export policy to the infrastructure SVM root volume.
volume modify –vserver Infra_<site_id> –volume infra_<site_id>_root –policy default
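Optionally, verify the export policy rule and the policy assignment on the root volume; this is a minimal check using the same placeholders as above.
vserver export-policy rule show -vserver Infra_<site_id> -policyname default
volume show -vserver Infra_<site_id> -volume infra_<site_id>_root -fields policy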
To create volumes, run the following commands on both sites.
40GbE Site
volume create -vserver Infra_A -volume infra_A_datastore_1 -aggregate aggr1_<node_01> -size 500g -state online -policy default -junction-path /infra_A_datastore_1 -space-guarantee none -percent-snapshot-space 0
volume create -vserver Infra_A -volume esxi_boot_A -aggregate aggr1_<node_02> -size 100g -state online -policy default -space-guarantee none -percent-snapshot-space 0
10GbE Site
volume create -vserver Infra_B -volume infra_B_datastore_1 -aggregate aggr1_<node_01> -size 500g -state online -policy default -junction-path /infra_B_datastore_1 -space-guarantee none -percent-snapshot-space 0
volume create -vserver Infra_B -volume esxi_boot_B -aggregate aggr1_<node_02> -size 100g -state online -policy default -space-guarantee none -percent-snapshot-space 0
To create NFS volumes, run the following command on both sites:
40GbE Site
volume create -vserver Infra_A -volume site_A_heartbeat -aggregate aggr1_<node_01> -size 10g -state online -policy default -junction-path /site_A_heartbeat -space-guarantee none -percent-snapshot-space 0 -snapshot-policy none
10GbE Site
volume create -vserver Infra_B -volume site_B_heartbeat -aggregate aggr1_<node_01> -size 10g -state online -policy default -junction-path /site_B_heartbeat -space-guarantee none -percent-snapshot-space 0 -snapshot-policy none
To create volumes to host LUNs for datastore heartbeats, run the following commands on both sites:
40GbE Site
volume create -vserver Infra_A -volume site_A_LUN_heartbeat -aggregate aggr1_<node_02> -size 10g -state online -policy default -space-guarantee none -percent-snapshot-space 0
10GbE Site
volume create -vserver Infra_B -volume site_B_LUN_heartbeat -aggregate aggr1_<node_02> -size 10g -state online -policy default -space-guarantee none -percent-snapshot-space 0
To create ESXi boot LUNs, run the following command on both sites.
40GbE Site
lun create -vserver Infra_A -volume esxi_boot_A -lun VM-Host-Infra-A-01 -size 15g -ostype vmware -space-reserve disabled
lun create -vserver Infra_A -volume esxi_boot_A -lun VM-Host-Infra-A-02 -size 15g -ostype vmware -space-reserve disabled
10GbE Site
lun create -vserver Infra_B -volume esxi_boot_B -lun VM-Host-Infra-B-01 -size 15g -ostype vmware -space-reserve disabled
lun create -vserver Infra_B -volume esxi_boot_B -lun VM-Host-Infra-B-02 -size 15g -ostype vmware -space-reserve disabled
To create LUNs for datastore heartbeats, run the following command on both sites:
40GbE Site
lun create -vserver Infra_A -volume site_A_LUN_heartbeat -lun site_A_heartbeat -size 5g -ostype vmware -space-reserve disabled
10GbE Site
lun create -vserver Infra_B -volume site_B_LUN_heartbeat -lun site_B_heartbeat -size 5g -ostype vmware -space-reserve disabled
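Optionally, verify that the boot and heartbeat LUNs are online. The example below is for the 40GbE site; adjust the SVM name for the 10GbE site.
lun show -vserver Infra_A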
On NetApp AFF systems, deduplication is enabled by default. To schedule deduplication, complete the following steps on both sites:
1. Enable deduplication on the ESXi boot volumes.
efficiency on -vserver Infra_<site_id> -volume esxi_boot_<site_id>
2. Assign a deduplication schedule to the ESXi boot volumes. The sun-sat@0 schedule used below runs deduplication every day at midnight.
efficiency modify –vserver Infra_<site_id> –volume esxi_boot_<site_id> –schedule sun-sat@0
3. Create a cron schedule that runs every minute.
cron create -name 1min -minute 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59
4. Create a policy to always run deduplication.
efficiency policy create -vserver Infra_<site_id> -policy Always_On_Deduplication -type scheduled -schedule 1min -qos-policy background -enabled true
5. Enable deduplication on the infrastructure data volumes.
efficiency on -vserver Infra_<site_id> -volume infra_<site_id>_datastore_1
6. Assign the policy to always run deduplication.
efficiency modify –vserver Infra_<site_id> –volume infra_<site_id>_datastore_1 -policy Always_On_Deduplication
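Optionally, confirm that efficiency is enabled and that the expected schedule and policy are assigned; this is a minimal check using the same placeholders as above.
efficiency show -vserver Infra_<site_id> -volume esxi_boot_<site_id>
efficiency show -vserver Infra_<site_id> -volume infra_<site_id>_datastore_1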
Refer to the following table and run the commands below to create the iSCSI data LIFs on both nodes at each site.
Site | SVM Name | Home Node | LIF Name | LIF Address | LIF Netmask
A | Infra_A | <node01> | iscsi_A_lif01a | <node01_iscsi_A_lif01a_ip> | <node01_iscsi_A_lif01a_mask>
A | Infra_A | <node01> | iscsi_A_lif01b | <node01_iscsi_A_lif01b_ip> | <node01_iscsi_A_lif01b_mask>
A | Infra_A | <node02> | iscsi_A_lif02a | <node02_iscsi_A_lif02a_ip> | <node02_iscsi_A_lif02a_mask>
A | Infra_A | <node02> | iscsi_A_lif02b | <node02_iscsi_A_lif02b_ip> | <node02_iscsi_A_lif02b_mask>
B | Infra_B | <node01> | iscsi_B_lif01a | <node01_iscsi_B_lif01a_ip> | <node01_iscsi_B_lif01a_mask>
B | Infra_B | <node01> | iscsi_B_lif01b | <node01_iscsi_B_lif01b_ip> | <node01_iscsi_B_lif01b_mask>
B | Infra_B | <node02> | iscsi_B_lif02a | <node02_iscsi_B_lif02a_ip> | <node02_iscsi_B_lif02a_mask>
B | Infra_B | <node02> | iscsi_B_lif02b | <node02_iscsi_B_lif02b_ip> | <node02_iscsi_B_lif02b_mask>
40GbE Site
network interface create -vserver Infra_A -lif iscsi_A_lif01a -role data -data-protocol iscsi -home-node <node01> -home-port a0a-<iscsi-a-vlan-id> -address <node01_iscsi_A_lif01a_ip> -netmask <node01_iscsi_A_lif01a_mask> -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
network interface create -vserver Infra_A -lif iscsi_A_lif01b -role data -data-protocol iscsi -home-node <node01> -home-port a0a-<iscsi-b-vlan-id> -address <node01_iscsi_A_lif01b_ip> -netmask <node01_iscsi_A_lif01b_mask> -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
network interface create -vserver Infra_A -lif iscsi_A_lif02a -role data -data-protocol iscsi -home-node <node02> -home-port a0a-<iscsi-a-vlan-id> -address <node02_iscsi_A_lif02a_ip> -netmask <node02_iscsi_A_lif02a_mask> -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
network interface create -vserver Infra_A -lif iscsi_A_lif02b -role data -data-protocol iscsi -home-node <node02> -home-port a0a-<iscsi-b-vlan-id> -address <node02_iscsi_A_lif02b_ip> -netmask <node02_iscsi_A_lif02b_mask> -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
network interface show
10 GbE Site
network interface create -vserver Infra_B -lif iscsi_B_lif01a -role data -data-protocol iscsi -home-node <node01> -home-port a0a-<iscsi-a-vlan-id> -address <node01_iscsi_B_lif01a_ip> -netmask <node01_iscsi_B_lif01a_mask> -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
network interface create -vserver Infra_B -lif iscsi_B_lif01b -role data -data-protocol iscsi -home-node <node01> -home-port a0a-<iscsi-b-vlan-id> -address <node01_iscsi_B_lif01b_ip> -netmask <node01_iscsi_B_lif01b_mask> -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
network interface create -vserver Infra_B -lif iscsi_B_lif02a -role data -data-protocol iscsi -home-node <node02> -home-port a0a-<iscsi-a-vlan-id> -address <node02_iscsi_B_lif02a_ip> -netmask <node02_iscsi_B_lif02a_mask> -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
network interface create -vserver Infra_B -lif iscsi_B_lif02b -role data -data-protocol iscsi -home-node <node02> -home-port a0a-<iscsi-b-vlan-id> -address <node02_iscsi_B_lif02b_ip> -netmask <node02_iscsi_B_lif02b_mask> -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
network interface show
Refer to the table below and run the following commands to create the NFS data LIFs on both nodes in each site.
Site | SVM Name | Home Node | LIF Name | LIF Address | LIF Netmask
A | Infra_A | <node01> | nfs_A_lif01 | <node01_nfs_A_lif01_ip> | <node01_nfs_A_lif01_mask>
A | Infra_A | <node02> | nfs_A_lif02 | <node02_nfs_A_lif02_ip> | <node02_nfs_A_lif02_mask>
B | Infra_B | <node01> | nfs_B_lif01 | <node01_nfs_B_lif01_ip> | <node01_nfs_B_lif01_mask>
B | Infra_B | <node02> | nfs_B_lif02 | <node02_nfs_B_lif02_ip> | <node02_nfs_B_lif02_mask>
40 GbE Site
network interface create -vserver Infra_A -lif nfs_A_lif01 -role data -data-protocol nfs -home-node <node01> -home-port a0a-<nfs-vlan-id> -address <node01_nfs_A_lif01_ip> -netmask <node01_nfs_A_lif01_mask> -status-admin up -failover-policy broadcast-domain-wide -firewall-policy data -auto-revert true
network interface create -vserver Infra_A -lif nfs_A_lif02 -role data -data-protocol nfs -home-node <node02> -home-port a0a-<nfs-vlan-id> -address <node02_nfs_A_lif02_ip> -netmask <node02_nfs_A_lif02_mask> -status-admin up -failover-policy broadcast-domain-wide -firewall-policy data -auto-revert true
10 GbE Site
network interface create -vserver Infra_B -lif nfs_B_lif01 -role data -data-protocol nfs -home-node <node01> -home-port a0a-<nfs-vlan-id> -address <node01_nfs_B_lif01_ip> -netmask <node01_nfs_B_lif01_mask> -status-admin up -failover-policy broadcast-domain-wide -firewall-policy data -auto-revert true
network interface create -vserver Infra_B -lif nfs_B_lif02 -role data -data-protocol nfs -home-node <node02> -home-port a0a-<nfs-vlan-id> -address <node02_nfs_B_lif02_ip> -netmask <node02_nfs_B_lif02_mask> -status-admin up -failover-policy broadcast-domain-wide -firewall-policy data -auto-revert true
To add the infrastructure SVM administrator and the SVM administration LIF in the out-of-band management network, complete the following steps at both sites:
1. Create a LIF for SVM management.
network interface create –vserver Infra_<site_id> –lif Infra_<site_id>_svm_mgmt –role data –data-protocol none –home-node <node02> -home-port a0a-<ib-mgmt-vlan-id> -address Infra_<site_id>_svm_mgmt_ip -netmask Infra_<site_id>_svm_mgmt_netmask -status-admin up –failover-policy broadcast-domain-wide –firewall-policy mgmt –auto-revert true
The SVM management IP in this step should be in the same subnet as the storage cluster management IP.
2. Create a default route to allow the SVM management interface to reach the outside world.
network route create –vserver Infra_<site_id> -destination 0.0.0.0/0 –gateway Infra_<site_id>_svm_mgmt_gw
network route show
3. Set a password for the SVM vsadmin user and unlock the user.
security login password –username vsadmin –vserver Infra_<site_id>
Enter a new password: <password>
Enter it again: <password>
security login unlock –username vsadmin –vserver Infra_<site_id>
This section describes the configuration steps for the Cisco UCS 6332-16UP Fabric Interconnects (FI) in a design that supports iSCSI boot to the NetApp AFF through the Cisco ACI fabric. Fibre Channel (FC) boot is not covered in this design because FC traffic is not supported over the Inter-Pod Network (IPN).
The procedure outlined here must be repeated to set up the Cisco UCS 6248 Fabric Interconnect, with appropriate adjustments to the UUID, IQN, IP, and MAC address pools. The VLANs required for storage, management, and vMotion traffic shown in Table 9 stay the same across the two UCS domains.
Table 9 Lab Validation Infrastructure (FPV-Foundation) Tenant Configuration
EPG | VLAN | Subnet / Gateway | Bridge Domain
IB-Mgmt | 213 | 10.2.156.254/24 | BD-FP-common-Core-Services
iSCSI-A | 3010 | 192.168.10.0/24 – L2 | BD-FPV-Foundation-iSCSI-A
iSCSI-B | 3020 | 192.168.20.0/24 – L2 | BD-FPV-Foundation-iSCSI-B
NFS | 3050 | 192.168.50.0/24 – L2 | BD-FPV-Foundation-NFS
vMotion | 3000/DVS | 192.168.100.0/24 – L2 | BD-Internal
Native | 2 | N/A | N/A
VMware vDS Pool | 1100-1150 | Varies | Varies
This section provides detailed steps to configure the Cisco Unified Computing System (Cisco UCS) for use in a FlexPod environment. The steps are necessary to provision the Cisco UCS B-Series and C-Series servers and should be followed precisely to avoid improper configuration.
To configure the Cisco UCS for use in a FlexPod environment, complete the following steps:
1. Connect to the console port on the first Cisco UCS fabric interconnect.
Enter the configuration method: gui
Physical switch Mgmt0 IP address: <ucsa-mgmt-ip>
Physical switch Mgmt0 IPv4 netmask: <ucsa-mgmt-mask>
IPv4 address of the default gateway: <ucsa-mgmt-gateway>
2. Using a supported web browser, connect to http://<ucsa-mgmt-ip>, accept the security prompts, and click the ‘Express Setup’ link.
3. Select Initial Setup and click Submit.
4. Select Enable clustering, Fabric A, and IPv4.
5. Fill in the Virtual IP Address with the UCS cluster IP.
6. Completely fill in the System setup section. For system name, use the overall UCS system name. For the Mgmt IP Address, use <ucsa-mgmt-ip>.
7. Click Submit.
To configure the second Fabric Interconnect, complete the following steps:
1. Connect to the console port on the second Cisco UCS fabric interconnect.
Enter the configuration method: gui
Physical switch Mgmt0 IP address: <ucsb-mgmt-ip>
Physical switch Mgmt0 IPv4 netmask: <ucsb-mgmt-mask>
IPv4 address of the default gateway: <ucsb-mgmt-gateway>
2. Using a supported web browser, connect to http://<ucsb-mgmt-ip>, accept the security prompts, and click the ‘Express Setup’ link.
3. Under System setup, enter the Admin Password entered above and click Submit.
4. Enter <ucsb-mgmt-ip> for the Mgmt IP Address and click Submit.
To log in to the Cisco Unified Computing System (UCS) environment, complete the following steps:
1. Open a web browser and navigate to the Cisco UCS fabric interconnect cluster address.
You may need to wait at least 5 minutes after configuring the second fabric interconnect for Cisco UCS Manager to come up.
2. Click the Launch UCS Manager link under HTML to launch Cisco UCS Manager.
3. If prompted to accept security certificates, accept as necessary.
4. When prompted, enter admin as the user name and enter the administrative password.
5. Click Login to log in to Cisco UCS Manager.
This document assumes the use of Cisco UCS 3.2(3d) release. To upgrade the Cisco UCS Manager software and the Cisco UCS Fabric Interconnect software to version 3.2(3d), refer to Cisco UCS Manager Install and Upgrade Guides.
To create anonymous reporting, complete the following step:
1. In the Anonymous Reporting window, select whether to send anonymous data to Cisco for improving future products. If you select Yes, enter the IP address of your SMTP Server. Click OK.
Cisco highly recommends configuring Call Home in Cisco UCS Manager because it accelerates the resolution of support cases. To configure Call Home, complete the following steps:
1. In Cisco UCS Manager, click the Admin icon on the left.
2. Select All > Communication Management > Call Home.
3. Change the State to On.
4. Fill in all the fields according to your Management preferences and click Save Changes and OK to complete configuring Call Home.
To create a block of IP addresses for in-band server Keyboard, Video, Mouse (KVM) access in the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN icon on the left.
2. Expand Pools > root > IP Pools.
3. Right-click IP Pool ext-mgmt and select Create Block of IPv4 Addresses.
4. Enter the starting IP address of the block, number of IP addresses required, and the subnet mask and gateway information.
5. Click OK to create the block.
6. Click OK in the confirmation message.
To synchronize the Cisco UCS environment to the NTP servers in the Nexus switches, complete the following steps:
1. In Cisco UCS Manager, click the Admin icon on the left.
2. Expand All > Time Zone Management.
3. Select Timezone.
4. In the Properties pane, select the appropriate time zone in the Timezone menu.
5. Click Save Changes, and then click OK.
6. Click Add NTP Server.
7. Enter <oob-ntp-ip> and click OK. Click OK on the confirmation.
8. Repeat if there is a second NTP server to be added.
If the UCS Port Auto-Discovery Policy is enabled, server ports will be discovered automatically. To enable the Port Auto-Discovery Policy, complete the following steps:
1. In Cisco UCS Manager, click the Equipment icon on the left and select Equipment in the second list.
2. In the right pane, click the Policies tab.
3. Under Policies, select the Port Auto-Discovery Policy tab.
4. Under Properties, set Auto Configure Server Port to Enabled.
5. Click Save Changes.
6. Click OK.
Setting the discovery policy simplifies the addition of Cisco UCS B-Series chassis and of additional fabric extenders for further Cisco UCS C-Series connectivity. To modify the chassis discovery policy, complete the following steps:
1. In Cisco UCS Manager, click the Equipment icon on the left and select Equipment in the second list.
2. In the right pane, click the Policies tab.
3. Under Global Policies, set the Chassis/FEX Discovery Policy to match the minimum number of uplink ports that are cabled between the chassis or fabric extenders (FEXes) and the fabric interconnects.
4. Set the Link Grouping Preference to Port Channel. If Backplane Speed Preference appears, select 40G if you have servers with VIC 1340s and Port Expander cards; otherwise, select 4x10G.
5. Click Save Changes.
6. Click OK.
To enable server and uplink ports, complete the following steps:
1. In Cisco UCS Manager, click the Equipment icon on the left.
2. Expand Equipment > Fabric Interconnects > Fabric Interconnect A (primary) > Fixed Module.
3. Expand and select Ethernet Ports.
4. On the right, verify that the ports that are connected to the chassis, Cisco FEX, and direct connect UCS C-Series servers are configured as Server ports. If any Server ports are not configured correctly, right-click them, and select “Configure as Server Port.” Click Yes to confirm server ports and click OK.
In lab testing, it has at times been necessary to manually configure Server ports for C220 M4 servers with VIC 1385 PCIe cards.
5. Select the ports that are connected to the Cisco Nexus 93180 switches, right-click them, and select Configure as Uplink Port.
The last 6 ports (ALE) of the Cisco UCS 6332 and UCS 6332-16UP FIs require the use of active (optical) or AOC cables when connecting to Nexus 9000 switches.
6. Click Yes to confirm uplink ports and click OK.
7. Select Equipment > Fabric Interconnects > Fabric Interconnect B (subordinate) > Fixed Module.
8. Expand and select Ethernet Ports.
9. Repeat the steps above to configure server and uplink ports.
To acknowledge all Cisco UCS chassis and any external 2232 FEX modules, complete the following steps:
1. In Cisco UCS Manager, click the Equipment icon on the left.
2. Expand Chassis and select each chassis that is listed.
3. Right-click each chassis and select Acknowledge Chassis.
4. Click Yes and then click OK to complete acknowledging the chassis.
5. If Nexus FEX are part of the configuration, expand Rack Mounts and FEX.
6. Right-click each FEX that is listed and select Acknowledge FEX.
7. Click Yes and then click OK to complete acknowledging the FEX.
To configure the necessary port channels out of the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN icon on the left.
In this procedure, two port channels are created: one from fabric A to both Cisco Nexus 93180 switches and one from fabric B to both Cisco Nexus 93180 switches.
2. Under LAN > LAN Cloud, expand the Fabric A tree.
3. Right-click Port Channels.
4. Select Create Port Channel.
5. Enter 119 as the unique ID of the port channel.
Port Channel ID 119 was selected because the connecting upstream ACI leaf switches both connect on port 1/19. This is not a requirement, but it can be helpful in associating the connections; otherwise, use any Port Channel ID that is unique within the UCS domain.
6. Enter Po119-ACI as the name of the port channel.
7. Click Next.
8. Select the ports connected to the Nexus switches to be added to the port channel:
a. Click >> to add the ports to the port channel.
b. Click Finish to create the port channel.
c. Click OK.
9. Expand Port Channels and select Port-Channel 119. Since the vPC has already been configured in the ACI fabric, this port channel should come up. Note that it may take a few minutes for the port channel to come up.
10. In the navigation pane, under LAN > LAN Cloud, expand the fabric B tree.
11. Right-click Port Channels.
12. Select Create Port Channel.
13. Enter 120 as the unique ID of the port channel.
As with the previous port channel, Port Channel ID 120 was selected because the connecting upstream ACI leaf switches both connect on port 1/20. This is not a requirement, but it can be helpful in associating the connections; otherwise, use any Port Channel ID that is unique within the UCS domain.
14. Enter Po120-ACI as the name of the port channel.
15. Click Next.
16. Select the ports connected to the Nexus switches to be added to the port channel:
a. Click >> to add the ports to the port channel.
b. Click Finish to create the port channel.
c. Click OK.
17. Expand Port Channels and select Port-Channel 120. Since the vPC has already been configured in the ACI fabric, this port channel should come up. Note that it may take a few minutes for the port channel to come up.
To create a UCS Organization to contain unique parameters for this particular FlexPod, complete the following steps on Cisco UCS Manager.
1. Select the Servers icon on the left.
2. Under Servers > Service-Profiles > root, right-click Sub-Organizations and select Create Organization.
3. Name the Organization “FPV-FlexPod”, enter an optional Description, and click OK.
4. Click OK for the confirmation.
To configure the necessary IQN pool for the Cisco UCS environment, complete the following steps on Cisco UCS Manager.
1. Select the SAN icon on the left.
2. Select Pools > root.
3. Right-click IQN Pools under the root organization.
4. Select Create IQN Suffix Pool to create the IQN pool.
5. Enter IQN-Pool for the name of the IQN pool.
6. Optional: Enter a description for the IQN pool.
7. Enter iqn.2010-11.com.flexpod for the Prefix.
8. Select Sequential for Assignment Order.
9. Click Next.
10. Click Add.
11. Enter a name to identify the individual UCS host for the Suffix.
12. Enter 1 for the From field.
13. Specify a size of the IQN block sufficient to support the available server resources.
14. Click OK.
15. Click Finish and OK to complete creating the IQN pool.
To configure the necessary iSCSI IP Address pools for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN icon on the left.
2. Select and expand Pools > root > Sub-Organizations > FPV-FlexPod.
In this procedure, two IP pools are created, one for each switching fabric.
3. Right-click IP Pools under the FPV-FlexPod organization.
4. Select Create IP Pool to create the IP pool.
5. Enter iSCSI-IP-Pool-A as the name of the first IP pool.
6. Optional: Enter a description for the IP pool.
7. Select Sequential for Assignment Order.
8. Click Next.
9. Click Add to add a Block of IPs to the pool.
10. Specify a starting IP address and subnet mask in the subnet <192.168.10.0/24> for iSCSI boot on Fabric A. It is not necessary to specify the Default Gateway or DNS server addresses.
11. Specify a size for the IP pool that is sufficient to support the available blade or server resources.
12. Click OK.
13. Click Next.
14. Click Finish.
15. In the confirmation message, click OK.
16. Right-click IP Pools under the FPV-FlexPod organization.
17. Select Create IP Pool to create the IP pool.
18. Enter iSCSI-IP-Pool-B as the name of the second IP pool.
19. Optional: Enter a description for the IP pool.
20. Select Sequential for Assignment Order
21. Click Next.
22. Click Add to add a Block of IPs to the pool.
23. Specify a starting IP address and subnet mask in the subnet <192.168.20.0/24> for iSCSI boot on Fabric B. It is not necessary to specify the Default Gateway or DNS server addresses.
24. Specify a size for the IP pool that is sufficient to support the available blade or server resources.
25. Click OK.
26. Click Next.
27. Click Finish.
28. In the confirmation message, click OK.
To configure the necessary MAC address pools for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN icon on the left.
2. Select Pools > root.
In this procedure, two MAC address pools are created, one for each switching fabric.
3. Right-click MAC Pools under the root organization.
4. Select Create MAC Pool to create the MAC address pool.
5. Enter MAC-Pool-A as the name of the MAC pool.
6. Optional: Enter a description for the MAC pool.
7. Select Sequential as the option for Assignment Order.
8. Click Next.
9. Click Add.
10. Specify a starting MAC address.
For the FlexPod solution, the recommendation is to place 0A in the next-to-last octet of the starting MAC address to identify all of the MAC addresses as fabric A addresses. In our example, we also embedded the cabinet number (13) information giving us 00:25:B5:13:0A:00 as our first MAC address.
11. Specify a size for the MAC address pool that is sufficient to support the available blade or server resources assuming that multiple vNICs can be configured on each server.
12. Click OK.
13. Click Finish.
14. In the confirmation message, click OK.
15. Right-click MAC Pools under the root organization.
16. Select Create MAC Pool to create the MAC address pool.
17. Enter MAC-Pool-B as the name of the MAC pool.
18. Optional: Enter a description for the MAC pool.
19. Select Sequential as the option for Assignment Order.
20. Click Next.
21. Click Add.
22. Specify a starting MAC address.
For the FlexPod solution, the recommendation is to place 0B in the next-to-last octet of the starting MAC address to identify all of the MAC addresses as fabric B addresses. In our example, we have also embedded the cabinet number (13) information giving us 00:25:B5:13:0B:00 as our first MAC address.
23. Specify a size for the MAC address pool that is sufficient to support the available blade or server resources.
24. Click OK.
25. Click Finish.
26. In the confirmation message, click OK.
To configure the necessary universally unique identifier (UUID) suffix pool for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers icon on the left.
2. Select Pools > root.
3. Right-click UUID Suffix Pools.
4. Select Create UUID Suffix Pool.
5. Enter UUID-Pool as the name of the UUID suffix pool.
6. Optional: Enter a description for the UUID suffix pool.
7. Keep the prefix at the Derived option.
8. Select Sequential for the Assignment Order.
9. Click Next.
10. Click Add to add a block of UUIDs.
11. Keep the From field at the default setting. Optionally, specify identifiers such as UCS location.
12. Specify a size for the UUID block that is sufficient to support the available blade or server resources.
13. Click OK.
14. Click Finish.
15. Click OK.
To configure the necessary server pool for the VMware management environment, complete the following steps:
Consider creating unique server pools to achieve the granularity that is required in your environment.
1. In Cisco UCS Manager, click the Servers icon on the left.
2. Expand Pools > root > Sub-Organizations > FPV-FlexPod.
3. Right-click Server Pools under the FPV-FlexPod Organization.
4. Select Create Server Pool.
5. Enter FPV-MGMT-Pool as the name of the server pool.
6. Optional: Enter a description for the server pool.
7. Click Next.
8. Select two (or more) servers to be used for the VMware management cluster and click >> to add them to the FPV-MGMT-Pool server pool.
9. Click Finish.
10. Click OK.
To configure the necessary virtual local area networks (VLANs) for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN icon on the left.
In this procedure, six individual VLANs and a pool of 50 sequential VLANs are created. See Table 9.
2. Select LAN > LAN Cloud.
3. Right-click VLANs.
4. Select Create VLANs.
5. Enter “Native-VLAN” as the name of the VLAN to be used as the native VLAN.
6. Keep the Common/Global option selected for the scope of the VLAN.
7. Enter the native VLAN ID <2>.
8. Keep the Sharing Type as None.
9. Click OK and then click OK again.
10. Expand the list of VLANs in the navigation pane, right-click the newly created “Native-VLAN” and select Set as Native VLAN.
11. Click Yes and then click OK.
12. Right-click VLANs.
13. Select Create VLANs.
14. Enter “IB-MGMT” as the name of the VLAN to be used for management traffic.
15. Keep the Common/Global option selected for the scope of the VLAN.
16. Enter the UCS In-Band management VLAN ID <213>.
17. Keep the Sharing Type as None.
18. Click OK, and then click OK again.
19. Right-click VLANs.
20. Select Create VLANs.
21. Enter “NFS” as the name of the VLAN to be used for infrastructure NFS.
22. Keep the Common/Global option selected for the scope of the VLAN.
23. Enter the UCS Infrastructure NFS VLAN ID <3150>.
24. Keep the Sharing Type as None.
25. Click OK, and then click OK again.
26. Right-click VLANs.
27. Select Create VLANs.
28. Enter “iSCSI-A” as the name of the VLAN to be used for UCS Fabric A iSCSI boot.
29. Keep the Common/Global option selected for the scope of the VLAN.
30. Enter the UCS Fabric A iSCSI boot VLAN ID <3110>.
31. Keep the Sharing Type as None.
32. Click OK, and then click OK again.
33. Right-click VLANs.
34. Select Create VLANs.
35. Enter “iSCSI-B” as the name of the VLAN to be used for UCS Fabric B iSCSI boot.
36. Keep the Common/Global option selected for the scope of the VLAN.
37. Enter the UCS Fabric B iSCSI boot VLAN ID <3120>.
38. Keep the Sharing Type as None.
39. Click OK and then click OK again.
40. Right-click VLANs.
41. Select Create VLANs.
42. Enter “vMotion” as the name of the VLAN to be used for VMware vMotion.
43. Keep the Common/Global option selected for the scope of the VLAN.
44. Enter the vMotion VLAN ID <3000>.
45. Keep the Sharing Type as None.
46. Click OK and then click OK again.
47. Right-click VLANs.
48. Select Create VLANs.
49. Enter “FPV-vSwitch-Pool” as the prefix for this VLAN pool.
50. Keep the Common/Global option selected for the scope of the VLAN.
51. Enter a range of 50 VLANs for VLAN ID. <1101-1150> was used in this validation.
52. Keep the Sharing Type as None.
53. Click OK and then click OK again.
Firmware management policies allow the administrator to select the corresponding packages for a given server configuration. These policies often include packages for adapter, BIOS, board controller, FC adapters, host bus adapter (HBA) option ROM, and storage controller properties.
To specify the UCS 3.2(3d) release for the Default firmware management policy for a given server configuration in the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers icon on the left.
2. Select Policies > root.
3. Expand Host Firmware Packages.
4. Select default.
5. In the Actions pane, select Modify Package Versions.
6. Select the version 3.2(3d)B for the Blade Package, and 3.2(3d)C (optional) for the Rack Package.
7. Leave Excluded Components with only Local Disk selected.
8. Click OK then click OK again to modify the host firmware package.
To configure jumbo frames and enable the base quality of service in the Cisco UCS fabric, complete the following steps:
1. In Cisco UCS Manager, click the LAN icon on the left.
2. Select LAN > LAN Cloud > QoS System Class.
3. In the right pane, click the General tab.
4. On the Best Effort row, enter 9216 in the box under the MTU column.
5. Click Save Changes in the bottom of the window.
6. Click OK.
A local disk configuration for the Cisco UCS environment is necessary if the servers in the environment do not have a local disk.
This policy should not be used on servers that contain local disks.
To create a local disk configuration policy, complete the following steps:
1. In Cisco UCS Manager, click the Servers icon on the left.
2. Select Policies > root.
3. Right-click Local Disk Config Policies.
4. Select Create Local Disk Configuration Policy.
5. Enter SAN-Boot as the local disk configuration policy name.
6. Change the mode to No Local Storage.
7. Click OK to create the local disk configuration policy.
8. Click OK.
To create a network control policy that enables CDP and LLDP on virtual network ports, complete the following steps:
1. In Cisco UCS Manager, click the LAN icon on the left.
2. Select Policies > root.
3. Right-click Network Control Policies.
4. Select Create Network Control Policy.
5. Enter Enable-CDP-LLDP as the policy name.
6. For CDP, select the Enabled option.
7. For LLDP, scroll down and select Enabled for both Transmit and Receive.
8. Click OK to create the network control policy.
9. Click OK.
To create a power control policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers icon on the left.
2. Select Policies > root.
3. Right-click Power Control Policies.
4. Select Create Power Control Policy.
5. Enter No-Power-Cap as the power control policy name.
6. Change the power capping setting to No Cap.
7. Click OK to create the power control policy.
8. Click OK.
To create a server BIOS policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers icon on the left.
2. Select Policies > root.
3. Right-click BIOS Policies.
4. Select Create BIOS Policy.
5. Enter Virtual-Host as the BIOS policy name.
6. Click OK then OK again.
7. Expand BIOS Policies and select Virtual-Host.
8. Set the following within the Main tab:
a. CDN Control > Enabled
b. Quiet Boot > Disabled
9. Click Save Changes and OK.
10. Click the Advanced tab and then select the Processor sub-tab.
11. Set the following within the Processor sub-tab:
a. DRAM Clock Throttling > Performance
b. Frequency Floor Override > Enabled
c. Processor C State > Disabled
d. Processor C1E > Disabled
e. Processor C3 Report > Disabled
f. Processor C7 Report > Disabled
g. Energy Performance > Performance
12. Click Save Changes and OK.
13. Click the RAS Memory sub-tab and select:
a. LV DDR Mode > Performance-Mode
14. Click Save Changes and OK.
To create the option VMware-High-Traffic Ethernet Adapter policy to provide higher vNIC performance, complete the following steps:
1. In Cisco UCS Manager, click the Servers icon on the left.
2. Select Policies > root.
3. Right-click Adapter Policies and select Create Ethernet Adapter Policy.
4. Name the policy VMware-HighTrf.
5. Expand Resources and set the values as shown below.
6. Expand Options and select Enabled for Receive Side Scaling (RSS).
7. Click OK, then OK again to complete creating the Ethernet Adapter Policy.
To update the default Maintenance Policy, complete the following steps:
1. In Cisco UCS Manager, click the Servers icon on the left.
2. Select Policies > root.
3. Select Maintenance Policies > default.
4. Change the Reboot Policy to User Ack.
5. Select “On Next Boot” to delegate maintenance windows to server administrators.
6. Click OK to save changes.
7. Click OK to accept the change.
To create multiple virtual network interface card (vNIC) templates for the Cisco UCS environment, complete the following steps. A total of 6 vNIC Templates will be created.
1. In Cisco UCS Manager, click the LAN icon on the left.
2. Expand Policies > root > Sub-Organizations > FPV-FlexPod.
3. Right-click vNIC Templates under FPV-FlexPod.
4. Select Create vNIC Template.
5. Enter Infra-A as the vNIC template name.
6. Keep Fabric A selected.
7. Do not select the Enable Failover checkbox.
8. Select Primary Template for Redundancy Type.
9. Leave the Peer Redundancy Template set to <not set>.
10. Under Target, make sure that only the Adapter checkbox is selected.
11. Select Updating Template as the Template Type.
12. Under VLANs, select the checkboxes for the IB-MGMT, Native, NFS, and vMotion VLANs.
13. Set Native-VLAN as the native VLAN.
14. Select vNIC Name for the CDN Source.
15. For MTU, enter 9000.
16. In the MAC Pool list, select MAC-Pool-A.
17. In the Network Control Policy list, select Enable-CDP-LLDP.
18. Click OK to create the vNIC template.
19. Click OK.
To create the Infra-B vNIC template, complete the following steps:
1. Select the LAN icon on the left.
2. Expand Policies > root > Sub-Organizations > FPV-FlexPod.
3. Right-click vNIC Templates under FPV-FlexPod.
4. Select Create vNIC Template.
5. Enter Infra-B as the vNIC template name.
6. Select Fabric B.
7. Do not select the Enable Failover checkbox.
8. Set Redundancy Type to Secondary Template.
9. Select Infra-A for the Peer Redundancy Template.
10. In the MAC Pool list, select MAC-Pool-B. The MAC Pool is all that needs to be selected for the Secondary Template.
11. Click OK to create the vNIC template.
12. Click OK.
To create iSCSI Boot vNICs, complete the following steps:
1. In Cisco UCS Manager, click the LAN icon on the left.
2. Expand Policies > root > Sub-Organizations > FPV-FlexPod.
3. Right-click vNIC Templates under FPV-FlexPod.
4. Select Create vNIC Template.
5. Enter iSCSI-A as the vNIC template name.
6. Keep Fabric A selected.
7. Do not select the Enable Failover checkbox.
8. Select No Redundancy for Redundancy Type.
9. Under Target, make sure that only the Adapter checkbox is selected.
10. Select Updating Template as the Template Type.
11. Under VLANs, select the checkbox for iSCSI-A.
12. Set iSCSI-A as the native VLAN.
13. Select vNIC Name for the CDN Source.
14. For MTU, enter 9000.
15. In the MAC Pool list, select MAC-Pool-A.
16. In the Network Control Policy list, select Enable-CDP-LLDP.
17. Click OK to create the vNIC template.
18. Click OK.
To create the iSCSI-B vNIC template, complete the following steps:
1. Select the LAN icon on the left.
2. Expand Policies > root > Sub-Organizations > FPV-FlexPod.
3. Right-click vNIC Templates under FPV-FlexPod.
4. Select Create vNIC Template.
5. Enter iSCSI-B as the vNIC template name.
6. Select Fabric B.
7. Do not select the Enable Failover checkbox.
8. Select No Redundancy for Redundancy Type.
9. Under Target, make sure that only the Adapter checkbox is selected.
10. Select Updating Template as the Template Type.
11. Under VLANs, select the checkbox for iSCSI-B.
12. Set iSCSI-B as the native VLAN.
13. Select vNIC Name for the CDN Source.
14. For MTU, enter 9000.
15. In the MAC Pool list, select MAC-Pool-B.
16. In the Network Control Policy list, select Enable-CDP-LLDP.
17. Click OK to create the vNIC template.
18. Click OK.
To create vNIC templates for APIC-controlled vDS, complete the following steps:
1. In Cisco UCS Manager, click the LAN icon on the left.
2. Expand Policies > root > Sub-Organizations > FPV-FlexPod.
3. Right-click vNIC Templates under FPV-FlexPod.
4. Select Create vNIC Template.
5. Enter APIC-vDS-A as the vNIC template name.
6. Keep Fabric A selected.
7. Do not select the Enable Failover checkbox.
8. Select Primary Template for Redundancy Type.
9. Leave the Peer Redundancy Template set to <not set>.
10. Under Target, make sure that only the Adapter checkbox is selected.
11. Select Updating Template as the Template Type.
12. Under VLANs, select the checkboxes for the 50 FPV-vSwitch-Pool VLANs.
13. Do not set a native VLAN.
14. Select vNIC Name for the CDN Source.
15. For MTU, enter 9000.
16. In the MAC Pool list, select MAC-Pool-A.
17. In the Network Control Policy list, select Enable-CDP-LLDP.
18. Click OK to create the vNIC template.
19. Click OK.
To create the APIC-vDS-B vNIC template, complete the following steps:
1. Select the LAN icon on the left.
2. Expand Policies > root > Sub-Organizations > FPV-FlexPod.
3. Right-click vNIC Templates under FPV-FlexPod.
4. Select Create vNIC Template:
a. Enter APIC-vDS-B as the vNIC template name.
b. Select Fabric B.
c. Do not select the Enable Failover checkbox.
d. Set Redundancy Type to Secondary Template.
e. Select APIC-vDS-A for the Peer Redundancy Template.
f. In the MAC Pool list, select MAC-Pool-B. The MAC Pool is all that needs to be selected for the Secondary Template.
g. Click OK to create the vNIC template.
5. Click OK.
To configure the necessary Infrastructure LAN Connectivity Policy, complete the following steps:
1. In Cisco UCS Manager, click the LAN icon on the left.
2. Expand Policies > root > Sub-Organizations > FPV-FlexPod.
3. Right-click LAN Connectivity Policies under FPV-FlexPod.
4. Select Create LAN Connectivity Policy.
5. Enter iSCSI-Boot as the name of the policy.
6. Click the upper Add button to add a vNIC.
7. In the Create vNIC dialog box, enter 00-Infra-A as the name of the vNIC.
8. Select the Use vNIC Template checkbox.
9. In the vNIC Template list, select Infra-A.
10. In the Adapter Policy list, select VMware.
11. Click OK to add this vNIC to the policy.
12. Click the upper Add button to add another vNIC to the policy.
13. In the Create vNIC box, enter 01-Infra-B as the name of the vNIC.
14. Select the Use vNIC Template checkbox.
15. In the vNIC Template list, select Infra-B.
16. In the Adapter Policy list, select VMware.
17. Click OK to add the vNIC to the policy.
18. Click the upper Add button to add another vNIC to the policy.
19. In the Create vNIC box, enter 02-iSCSI-A as the name of the vNIC.
20. Select the Use vNIC Template checkbox.
21. In the vNIC Template list, select iSCSI-A.
22. In the Adapter Policy list, select VMware.
23. Click OK to add the vNIC to the policy.
24. Click the upper Add button to add another vNIC to the policy.
25. In the Create vNIC box, enter 03-iSCSI-B as the name of the vNIC.
26. Select the Use vNIC Template checkbox.
27. In the vNIC Template list, select iSCSI-B.
28. In the Adapter Policy list, select VMware.
29. Click OK to add the vNIC to the policy.
30. Click the upper Add button to add another vNIC to the policy.
31. In the Create vNIC box, enter 04-APIC-vDS-A as the name of the vNIC.
32. Select the Use vNIC Template checkbox.
33. In the vNIC Template list, select APIC-vDS-A.
34. In the Adapter Policy list, select VMware. Optionally, select the VMware-HighTrf Adapter Policy.
35. Click OK to add the vNIC to the policy.
36. Click the upper Add button to add another vNIC to the policy.
37. In the Create vNIC box, enter 05-APIC-vDS-B as the name of the vNIC.
38. Select the Use vNIC Template checkbox.
39. In the vNIC Template list, select APIC-vDS-B.
40. In the Adapter Policy list, select VMware. Optionally, select the VMware-HighTrf Adapter Policy.
41. Click OK to add the vNIC to the policy.
42. Expand the Add iSCSI vNICs section.
43. Click the lower Add button to add an iSCSI boot vNIC to the policy.
44. In the Create iSCSI vNIC box, enter iSCSI-Boot-A as the name of the vNIC.
45. Select 02-iSCSI-A for the Overlay vNIC.
46. Select the default iSCSI Adapter Policy.
47. The iSCSI-A VLAN (native) should be selected as the VLAN.
48. Do not select anything for MAC Address Assignment.
49. Click OK to add the vNIC to the policy.
50. Click the lower Add button to add an iSCSI boot vNIC to the policy.
51. In the Create iSCSI vNIC box, enter iSCSI-Boot-B as the name of the vNIC.
52. Select 03-iSCSI-B for the Overlay vNIC.
53. Select the default iSCSI Adapter Policy.
54. The iSCSI-B VLAN (native) should be selected as the VLAN.
55. Do not select anything for MAC Address Assignment.
56. Click OK to add the vNIC to the policy.
57. Click OK, then OK again to create the LAN Connectivity Policy.
This procedure applies to a Cisco UCS environment in which two iSCSI logical interfaces (LIFs) are on storage cluster node 1 (iscsi_lif01a and iscsi_lif01b) and two iSCSI LIFs are on storage cluster node 2 (iscsi_lif02a and iscsi_lif02b).
This boot policy configures the primary target to be iscsi_lif01a with four SAN paths.
To create a boot policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers icon on the left.
2. Expand Policies > root > Sub-Organizations > FPV-FlexPod.
3. Right-click Boot Policies under FPV-FlexPod.
4. Select Create Boot Policy.
5. Enter iSCSI-Boot as the name of the boot policy.
6. Optional: Enter a description for the boot policy.
7. Keep the Reboot on Boot Order Change option cleared.
8. Expand the Local Devices drop-down list and select Add Remote CD/DVD.
9. Expand the iSCSI vNICs drop-down list and select Add iSCSI Boot.
10. Enter iSCSI-Boot-A in the iSCSI vNIC field.
11. Click OK.
12. From the iSCSI vNICs drop-down list, select Add iSCSI Boot.
13. Enter iSCSI-Boot-B in the iSCSI vNIC field.
14. Click OK.
15. Click OK, then click OK again to create the boot policy.
In this procedure, one service profile template is created for Fabric A boot.
To create the service profile template, complete the following steps:
1. In Cisco UCS Manager, click the Servers icon on the left.
2. Expand Service Profile Templates > root > Sub-Organizations > FPV-FlexPod.
3. Select and right-click FPV-FlexPod.
4. Select Create Service Profile Template to open the Create Service Profile Template wizard.
5. Enter iSCSI-Boot-A as the name of the service profile template. This service profile template is configured to boot from storage node 1 on fabric A.
6. Select the “Updating Template” option.
7. Under UUID, select UUID-Pool as the UUID pool.
8. Click Next.
1. If you have servers with no physical disks, click the Local Disk Configuration Policy and select the SAN-Boot Local Storage Policy. Otherwise, select the default Local Storage Policy.
2. Click Next.
1. Keep the default setting for Dynamic vNIC Connection Policy.
2. Select the “Use Connectivity Policy” option to configure the LAN connectivity.
3. Select iSCSI-Boot from the LAN Connectivity Policy drop-down list.
4. Select IQN-Pool from the Initiator Name Assignment drop-down list.
5. Click Next.
1. Select the No vHBAs option for the “How would you like to configure SAN connectivity?” field.
2. Click Next.
1. Configure no Zoning Options and click Next.
Configure vNIC/HBA Placement
1. In the “Select Placement” list, leave the placement policy as “Let System Perform Placement.”
2. Click Next.
1. Do not select a vMedia Policy.
2. Click Next.
1. Select iSCSI-Boot for Boot Policy.
2. Under Boot Order, expand Boot Order and select the iSCSI-Boot-A row.
3. Select the Set iSCSI Boot Parameters button.
4. Select iSCSI-IP-Pool-A for the Initiator IP Address Policy.
5. Scroll to the bottom of the window and click Add.
6. Enter the IQN (Target Name) of the Infra-A SVM iSCSI target.
To get this IQN, SSH into the storage cluster management interface and run “iscsi show”.
7. For IPv4 address, enter the IP address of iscsi_a_lif02a from the Infra-A SVM.
To get this IP address, SSH into the storage cluster management interface and run “network interface show –vserver Infra-A”.
8. Click OK to complete configuring the iSCSI target.
9. Click Add to add a second target.
10. Enter the previously captured IQN (Target Name) again.
11. For IPv4 address, enter the IP address of iscsi_a_lif01a from the Infra-A SVM.
12. Click OK to complete configuring the iSCSI target.
13. Click OK to complete setting the iSCSI Boot Parameters for Fabric A Boot.
14. Under Boot Order, select the iSCSI-Boot-B row.
15. Select the Set iSCSI Boot Parameters button.
16. Select iSCSI-IP-Pool-B for the Initiator IP Address Policy.
17. Scroll to the bottom of the window and click Add.
18. Enter the previously captured IQN (Target Name).
19. For IPv4 address, enter the IP address of iscsi_lif02b from the Infra-A SVM.
20. Click OK to complete configuring the iSCSI target.
21. Click Add to add a second target.
22. Enter the previously captured IQN (Target Name).
23. For IPv4 address, enter the IP address of iscsi_lif01b from the Infra-A SVM.
24. Click OK to complete configuring the iSCSI target.
25. Click OK to complete setting the iSCSI Boot Parameters for Fabric B Boot.
26. Click Next.
To configure the Maintenance Policy, complete the following steps:
1. Change the Maintenance Policy to default.
2. Click Next.
To configure server assignment, complete the following steps:
1. In the Pool Assignment list, select FPV-MGMT-Pool.
2. Select Down as the power state to be applied when the profile is associated with the server.
3. Expand Firmware Management at the bottom of the page and select the default policy.
4. Click Next.
To configure the operational policies, complete the following steps:
1. In the BIOS Policy list, select Virtual-Host.
2. Expand Power Control Policy Configuration and select No-Power-Cap in the Power Control Policy list.
3. Click Finish to create the service profile template.
4. Click OK in the confirmation message.
To create service profiles from the service profile template, complete the following steps:
1. Connect to UCS Manager and click the Servers icon on the left.
2. Select Service Profile Templates > root > Sub-Organizations > FPV-FlexPod > Service Template iSCSI-Boot-A.
3. Right-click iSCSI-Boot-A and select Create Service Profiles from Template.
4. Enter fpv-esxi-0 as the service profile prefix.
5. Enter 1 as “Name Suffix Starting Number.”
6. Enter 2 as the “Number of Instances.”
7. Click OK to create the service profiles.
8. Click OK in the confirmation message.
All of the steps in this section must be repeated to configure the Cisco UCS 6248 Fabric Interconnects. Make sure that unique values are used for the various configuration parameters, including:
· Management IP address pools
· UUID pool
· IQN range in the IQN pool
· iSCSI IP pools
· MAC address pools
When UCS configuration is complete, the next step is to configure boot from SAN.
After the Cisco UCS service profiles have been created, each infrastructure server in the environment will have a unique configuration. To proceed with the UCS boot from SAN deployment, specific information must be gathered from each Cisco UCS server and from the NetApp controllers. Insert the required information into Table 10 and Table 11.
In the tables below, the Infra-A SVM is defined on the NetApp A700 system in POD11 (40GbE site), while the Infra-B SVM is defined on the NetApp A700 system in POD1 (10GbE site). The boot LUNs for the first two ESXi servers, fpv-esxi-01 and fpv-esxi-02, are created in the Infra-A SVM, while the boot LUNs for the other two ESXi servers, fpv-esxi-03 and fpv-esxi-04, are created in the Infra-B SVM.
Table 10 iSCSI LIFs for iSCSI IQN
SVM | NetApp A700 System | Target IQN
Infra-A | NetApp A700 system in 40GbE site (POD11) |
Infra-B | NetApp A700 system in 10GbE site (POD1) |
To obtain the iSCSI IQN, run the “iscsi show” command on the storage cluster management interface.
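If you want to limit the output to a single SVM, the command can also be scoped with the -vserver parameter; for example, for the Infra-A SVM:
iscsi show -vserver Infra-A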
Table 11 vNIC iSCSI IQNs for Fabric A and Fabric B
Cisco UCS Service Profile Name | Cisco UCS Domain | iSCSI IQN | Variable
fpv-esxi-01 | UCS 6332 in 40GbE site (POD11) | | <fpv-esxi-01-iqn>
fpv-esxi-02 | UCS 6332 in 40GbE site (POD11) | | <fpv-esxi-02-iqn>
fpv-esxi-03 | UCS 6248 in 10GbE site (POD1) | | <fpv-esxi-03-iqn>
fpv-esxi-04 | UCS 6248 in 10GbE site (POD1) | | <fpv-esxi-04-iqn>
To obtain the iSCSI vNIC IQN information in Cisco UCS Manager GUI, go to Servers > Service Profiles > root. Click each service profile and then click the “iSCSI vNICs” tab on the right. The “Initiator Name” is displayed at the top of the page under the “Service Profile Initiator Name.”
1. To create igroups for the first two ESXi servers in the 40GbE site, run the following commands on the NetApp A700 system at that site:
igroup create –vserver Infra-A –igroup fpv-esxi-01 –protocol iscsi –ostype vmware –initiator <fpv-esxi-01-iqn>
igroup create –vserver Infra-A –igroup fpv-esxi-02 –protocol iscsi –ostype vmware –initiator <fpv-esxi-02-iqn>
igroup create –vserver Infra-A –igroup MGMT-Hosts-All –protocol iscsi –ostype vmware –initiator <fpv-esxi-01-iqn>,<fpv-esxi-02-iqn>
igroup show –vserver Infra-A
2. To create igroups for the second two ESXi servers in the 10GbE site, run the following commands on the NetApp A700 system:
igroup create -vserver Infra-B -igroup fpv-esxi-03 -protocol iscsi -ostype vmware -initiator <fpv-esxi-03-iqn>
igroup create -vserver Infra-B -igroup fpv-esxi-04 -protocol iscsi -ostype vmware -initiator <fpv-esxi-04-iqn>
igroup create -vserver Infra-B -igroup MGMT-Hosts-All -protocol iscsi -ostype vmware -initiator <fpv-esxi-03-iqn>,<fpv-esxi-04-iqn>
igroup show -vserver Infra-B
1. To map LUNs to igroups, run the following commands at the 40GbE site:
lun map -vserver Infra-A -volume esxi_a_boot -lun VM-Host-Infra-A-01 -igroup fpv-esxi-01 -lun-id 0
lun map -vserver Infra-A -volume esxi_a_boot -lun VM-Host-Infra-A-02 -igroup fpv-esxi-02 -lun-id 0
lun show -vserver Infra-A -m
2. To map LUNs to igroups, run the following commands at the 10GbE site:
lun map -vserver Infra-B -volume esxi_b_boot -lun VM-Host-Infra-B-01 -igroup fpv-esxi-03 -lun-id 0
lun map -vserver Infra-B -volume esxi_b_boot -lun VM-Host-Infra-B-02 -igroup fpv-esxi-04 -lun-id 0
lun show -vserver Infra-B -m
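After the ESXi hosts have booted from SAN later in this procedure, the logged-in iSCSI initiators can optionally be confirmed from each cluster. A minimal sketch, using the SVM and cluster names defined above:
flexpod-a::> vserver iscsi initiator show -vserver Infra-A
flexpod-b::> vserver iscsi initiator show -vserver Infra-B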
This section provides detailed instructions for installing VMware ESXi 6.7 in a FlexPod environment. After the procedures are completed, two ESXi hosts will be provisioned in each site.
Several methods exist for installing ESXi in a VMware environment. These procedures focus on how to use the built-in keyboard, video, mouse (KVM) console and mapped CD/DVD in Cisco UCS Manager to map remote installation media to individual servers and connect to their boot logical unit numbers (LUNs).
If the VMware ESXi custom image has not been downloaded, complete the following steps to download:
1. Click the following link: VMware vSphere Hypervisor (ESXi) 6.7.
2. You will need a user id and password on vmware.com to download this software.
3. Download the .iso file.
The Cisco UCS IP KVM enables the administrator to begin the installation of the operating system (OS) through remote media. It is necessary to log in to the Cisco UCS environment to run the IP KVM.
To log in to the Cisco UCS environment, complete the following steps:
1. Open a web browser and enter the IP address for the Cisco UCS cluster address. This step launches the Cisco UCS Manager application.
2. Click the Launch UCS Manager link under HTML to launch the HTML 5 UCS Manager GUI.
3. If prompted to accept security certificates, accept as necessary.
4. When prompted, enter admin as the user name and enter the administrative password.
5. To log in to Cisco UCS Manager, click Login.
6. From the main menu, click Servers on the left.
7. Select the fpv-esxi-xx Service Profile (xx represents the servers 01-04).
8. On the right, under the General tab, click the >> to the right of KVM Console.
9. Follow the prompts to launch the KVM console.
Skip this section if you are using vMedia policies (not covered in this document). The ISO file should already be connected to the KVM if vMedia is configured.
To prepare the server for the OS installation, complete the following steps on each ESXi host:
1. In the KVM window, click Virtual Media in the top right corner.
Hovering the mouse over the icons in the top right corner displays the function associated with each button.
2. Click Activate Virtual Devices.
3. If prompted to accept an Unencrypted KVM session, accept as necessary.
4. Click Virtual Media and select Map CD/DVD.
5. Browse to the ESXi installer ISO image file and click Open.
6. Click Map Device.
7. Click the KVM tab to monitor the server boot.
To install VMware ESXi to the iSCSI-bootable LUN of the hosts, complete the following steps on all the hosts:
1. Boot the server by selecting Boot Server and clicking OK, then click OK two more times.
2. On reboot, the machine detects the presence of the ESXi installation media. Select the ESXi installer from the boot menu that is displayed.
3. After the installer is finished loading, press Enter to continue with the installation.
4. Read and accept the end-user license agreement (EULA). Press F11 to accept and continue.
5. Select the LUN that was previously set up as the installation disk for ESXi and press Enter to continue with the installation.
6. Select the appropriate keyboard layout and press Enter.
7. Enter and confirm the root password and press Enter.
8. The installer issues a warning that the selected disk will be repartitioned. Press F11 to continue with the installation.
9. After the installation is complete, press Enter to reboot the server. The mapped ISO will be unmapped automatically.
Adding a management network for each VMware host is necessary for managing the host. To add a management network for the VMware hosts, complete the steps in the following subsections.
To configure each ESXi host with access to the management network, complete the following steps:
1. After the server has finished rebooting, press F2 to customize the system.
2. Log in as root, enter the password set in the last step, and press Enter to log in.
3. Select Troubleshooting Options and press Enter.
4. Select Enable ESXi Shell and press Enter.
5. Select Enable SSH and press Enter.
6. Press Esc to exit the Troubleshooting Options menu.
7. Select the Configure Management Network option and press Enter.
8. Select Network Adapters and press Enter.
9. Verify that the numbers in the Hardware Label field match the numbers in the Device Name field.
10. Use the arrow keys and spacebar to highlight and select vmnic1.
11. Press Enter.
12. Select the VLAN (Optional) option and press Enter.
13. Enter the UCS <ib-mgmt-vlan-id> <213> and press Enter.
14. Select IPv4 Configuration and press Enter.
15. Select the Set static IPv4 address and network configuration option by using the space bar.
16. Enter the IP address for managing the ESXi host.
17. Enter the subnet mask for the ESXi host.
18. Enter the default gateway for the ESXi host.
19. Press Enter to accept the changes to the IP configuration.
20. Select the DNS Configuration option and press Enter.
Because the IP address is assigned manually, the DNS information must also be entered manually.
21. Enter the IP address of the primary DNS server.
22. Optional: Enter the IP address of the secondary DNS server.
23. Enter the fully qualified domain name (FQDN) for the ESXi host.
24. Press Enter to accept the changes to the DNS configuration.
25. Press Esc to exit the Configure Management Network menu. Enter Y to Apply changes and restart management network.
26. Select Test Management Network to verify that the management network is set up correctly and press Enter.
27. Press Enter to run the test, and press Enter again once the test has completed. Review the environment if there is a failure.
28. Re-select the Configure Management Network and press Enter.
29. Select the IPv6 Configuration option and press Enter.
30. Using the spacebar, select Disable IPv6 (restart required) and press Enter.
31. Press Esc to exit the Configure Management Network submenu.
32. Press Y to confirm the changes and reboot the ESXi host.
To log in to the fpv-esxi-xx (xx is server number 01-04) ESXi host by using the VMware Host Client, complete the following steps:
1. Open a web browser on the management workstation and navigate to the fpv-esxi-xx management IP address. Respond to any security prompts.
2. Enter root for the user name.
3. Enter the root password.
4. Click Login to connect.
5. Repeat this process to log in to all the ESXi hosts in separate browser tabs or windows.
To set up the VMkernel ports and the virtual switches on all the ESXi hosts, complete the following steps:
1. From the Host Client, select Networking on the left.
2. In the center pane, select the Virtual switches tab.
3. Highlight vSwitch0.
4. Select Edit settings.
5. Change the MTU to 9000.
6. Expand NIC teaming and highlight vmnic1. Select Mark active.
7. Click Save.
8. Select Networking on the left.
9. In the center pane, select the Virtual switches tab.
10. Highlight iScsiBootvSwitch.
11. Select Edit settings.
12. Change the MTU to 9000.
13. Click Save.
14. Select Add standard virtual switch.
15. Name the vSwitch iScsiBootvSwitch-B.
16. Set the MTU to 9000.
17. Select vmnic3 for the Uplink.
18. Click Add.
19. Select the VMkernel NICs tab.
20. Highlight vmk1 iScsiBootPG.
21. Select Edit settings.
22. Change the MTU to 9000.
23. Expand IPv4 settings and change the IP address to an address outside of the UCS iSCSI-IP-Pool-A.
To avoid IP address conflicts if the Cisco UCS iSCSI IP Pool addresses are ever reassigned, it is recommended to use different IP addresses in the same subnet for the iSCSI VMkernel ports.
24. Click Save.
25. Select Add VMkernel NIC.
26. Specify a New port group name of iScsiBootPG-B.
27. Select iScsiBootvSwitch-B for Virtual switch.
28. Set the MTU to 9000. Do not enter a VLAN ID since the iSCSI-B VLAN is also the native VLAN on this vNIC.
29. Select Static for the IPv4 settings and expand the option to provide the Address and Subnet Mask within the Configuration.
30. Click Create.
31. Select Add VMkernel NIC.
32. Specify a New port group name of VMkernel-Infra-NFS.
33. Select vSwitch0 for Virtual switch.
34. Enter the UCS Foundation Tenant NFS VLAN id <3050>.
35. Set the MTU to 9000.
36. Select Static for the IPv4 settings and expand the option to provide the Address and Subnet Mask in the Foundation Tenant NFS subnet <192.168.50.0/24>.
37. Click Create.
38. Select Add VMkernel NIC.
39. Specify a New port group name of VMkernel-vMotion.
40. Select vSwitch0 for Virtual switch.
41. Enter the UCS vMotion VLAN id <3000>.
42. Set the MTU to 9000.
43. Select Static for the IPv4 settings and expand the option to provide the Address and Subnet Mask in the vMotion subnet <192.168.100.0/24>.
44. Select the vMotion stack for the TCP/IP stack.
45. Click Create.
46. Optionally, if you have 40GE vNICs in this FlexPod, create two more vMotion VMkernel ports in the same subnet and VLAN. These will need to be in new port groups.
47. On the left, select Networking, then select the Port groups tab.
48. In the center pane, right-click VM Network and select Remove.
49. Click Remove to complete removing the port group.
50. In the center pane, select Add port group.
51. Name the port group IB-MGMT Network, enter <ib-mgmt-vlan-id> <213> in the VLAN ID field, and make sure Virtual switch vSwitch0 is selected.
52. Click Add to finalize the edits for the IB-MGMT Network.
53. Highlight the VMkernel-vMotion Port group and select Edit settings.
54. Expand NIC teaming and select the radio button next to Override failover order.
55. Select vmnic0 and then select Mark standby to pin vMotion traffic to UCS Fabric Interconnect B (vmnic1) with failover.
56. Click Save.
Repeat steps 53-56 to pin all vMotion traffic to UCS Fabric Interconnect B (vmnic1) with failover if a secondary vMotion VMkernel and Port Group were created.
57. Select the Virtual switches tab, then select vSwitch0 and review its properties to confirm the VMkernel NICs created above.
58. Select the VMkernel NICs tab to confirm the configured virtual adapters.
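The vSwitch and VMkernel settings above can also be spot-checked from the ESXi shell. The following is a minimal sketch, assuming the switch and VMkernel names used in this procedure (vSwitch0, iScsiBootvSwitch, vmk1) and a placeholder for a reachable iSCSI LIF address:
# Confirm the 9000 MTU and uplink assignments on the standard vSwitches
esxcli network vswitch standard list
# List the VMkernel interfaces and their port groups
esxcli network ip interface list
# Verify jumbo frames end to end from the iSCSI-A VMkernel port (substitute a real LIF address)
vmkping -I vmk1 -d -s 8972 <iscsi_a_lif01a-ip>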
To set up iSCSI multipathing on all the ESXi hosts, complete the following steps:
1. From each Host Client, select Storage on the left.
2. In the center pane, click Adapters.
3. Select the iSCSI software adapter and click Configure iSCSI.
4. Under Dynamic targets, click Add dynamic target.
5. Enter the IP Address of NetApp storage iscsi_a_lif01a and press Enter.
6. Repeat this step to add the IP addresses of iscsi_a_lif01b, iscsi_a_lif02a, and iscsi_a_lif02b.
7. Click Save configuration.
To get all the iscsi_lif IP addresses, log in to the NetApp storage cluster management interface at each site and run "network interface show".
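For example, the LIF addresses can be listed with commands similar to the following (cluster and SVM names follow this document; adjust for your environment):
flexpod-a::> network interface show -vserver Infra-A
flexpod-b::> network interface show -vserver Infra-B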
The host automatically rescans the storage adapter, and 4 targets are added to the Static targets list. This can be verified by selecting Configure iSCSI again.
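The discovered targets can also be checked from the ESXi shell; a minimal sketch:
# Identify the software iSCSI adapter (the vmhba number varies per host)
esxcli iscsi adapter list
# List the target portals discovered through the dynamic targets
esxcli iscsi adapter target portal list
# Rescan the storage adapters if new targets or LUNs do not appear
esxcli storage core adapter rescan --all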
To mount the required datastores, complete the following steps on each ESXi host by entering the appropriate IP addresses from the NetApp controllers at both the sites:
1. From the Host Client, select Storage on the left.
2. In the center pane, select the Datastores tab.
3. In the center pane, select New Datastore to add a new datastore.
4. In the New datastore popup, select Mount NFS datastore and click Next.
5. Enter “infra_a_datastore_1” for the datastore name.
6. Enter the IP address for the nfs_lif01 LIF for the NFS server from A700 in 40GbE site.
7. Enter “/infra_a_datastore_1” for the NFS share. Leave the NFS version set at NFS 3.
8. Click Next.
9. Repeat this process until all four NFS datastores (infra_a_datastore_1, infra_a_datastore_2, infra_b_datastore_1, and infra_b_datastore_2) are added to the ESXi server. Make sure the correct IP address from the appropriate storage controller is used.
10. Mount these datastores on all the ESXi hosts at both sites.
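If preferred, the same NFS datastores can also be mounted from the ESXi shell; a minimal sketch for the first datastore, using a placeholder for the 40GbE-site nfs_lif01 address:
# Mount infra_a_datastore_1 over NFSv3
esxcli storage nfs add -H <nfs_lif01-ip> -s /infra_a_datastore_1 -v infra_a_datastore_1
# Confirm the mounted NFS datastores
esxcli storage nfs list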
To configure Network Time Protocol (NTP) on the ESXi hosts, complete the following steps on each host:
1. From the Host Client, select Manage on the left.
2. In the center pane, select the Time & Date tab.
3. Click Edit settings.
4. Make sure Use Network Time Protocol (enable NTP client) is selected.
5. Use the drop-down list to select Start and stop with host.
6. Enter the appropriate NTP address(es) in the NTP servers box separated by a comma.
7. Click Save to save the configuration changes.
8. Select Actions > NTP service > Start.
9. Verify that NTP service is now running and the clock is now set to approximately the correct time.
The NTP server time sync may take a few minutes.
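NTP operation can also be confirmed from the ESXi shell; a minimal sketch:
# Verify the NTP daemon is running and check the current host time
/etc/init.d/ntpd status
date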
To install necessary ESXi patches and updated device drivers, complete the following steps on each host:
1. Download VMware ESXi patch ESXi670-201806001 from https://my.vmware.com/group/vmware/patch#search.
2. Download the NetApp NFS Plug-in for VMware VAAI 1.1.2 offline bundle from https://mysupport.netapp.com/NOW/download/software/nfs_plugin_vaai_esxi5.5/1.1.2/.
3. From the Host Client, select Storage on the left.
4. In the center pane, select infra_a_datastore_1 and then select Datastore browser.
5. In the Datastore browser, select Create directory and create a Drivers folder in the datastore.
6. Select the Drivers folder.
7. Use Upload to upload the two downloaded items above to the Drivers folder on infra_a_datastore_1. Since infra_a_datastore_1 is accessible to all the ESXi hosts, this upload only needs to be done on the first host.
8. Use ssh to connect to each ESXi host as the root user.
9. Enter the following commands on each host.
cd /vmfs/volumes/infra_a_datastore_1/Drivers
esxcli software vib update -d /vmfs/volumes/infra_a_datastore_1/Drivers/ESXi670-201806001.zip
esxcli software vib install -d /vmfs/volumes/infra_a_datastore_1/Drivers/NetAppNasPlugin.v23.zip
reboot
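After the hosts reboot, the patch level and the NetApp plug-in installation can be confirmed from the ESXi shell; a minimal sketch:
# Confirm the running ESXi version and build after the patch
vmware -v
# Confirm the NetApp NFS VAAI plug-in VIB is present
esxcli software vib list | grep -i netapp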
The procedures in the following subsections provide detailed instructions for installing the VMware vCenter 6.7 Server Appliance in an environment. After the procedures are completed, a VMware vCenter Server will be configured.
The VCSA deployment consists of 2 stages: install and configuration. To build the VMware vCenter virtual machine, complete the following steps:
1. Locate and copy the VMware-VCSA-all-6.7.0-8832884.iso file to the desktop of the management workstation. This ISO is for the VMware vSphere 6.7 vCenter Server Appliance.
2. Using ISO mounting software, mount the ISO image as a disk on the management workstation. (For example, with the Mount command in Windows Server 2012).
3. In the mounted disk directory, navigate to the vcsa-ui-installer > win32 directory and double-click installer.exe. The vCenter Server Appliance Installer wizard appears.
4. Click Install to start the vCenter Server Appliance deployment wizard.
5. Click Next in the Introduction section.
6. Read and accept the license agreement and click Next.
7. In the “Select deployment type” section, select vCenter Server with an Embedded Platform Services Controller and click Next.
8. In the “Appliance deployment target”, enter the ESXi host name or IP address for fpv-esxi-01, User name and Password. Click Next.
9. Click Yes to accept the certificate.
10. Enter the Appliance name and password details in the “Set up appliance VM” section. Click Next.
11. In the “Select deployment size” section, select the Deployment size and Storage size. For example, the “Small” Deployment size was selected in this CVD.
12. Click Next.
13. Select the preferred datastore; for example, “infra_a_datastore_2” was selected in this CVD. Click Next.
14. In the “Network Settings” section, configure the following settings:
a. Choose a Network: Core-Services Network
b. IP version: IPV4
c. IP assignment: static
d. System name: <vcenter-fqdn>
e. IP address: <vcenter-ip>
f. Subnet mask or prefix length: <vcenter-subnet-mask>
g. Default gateway: <vcenter-gateway>
h. DNS Servers: <dns-server>
15. Click Next.
16. Review all values and click Finish to complete the installation.
17. The vCenter appliance installation will take a few minutes to complete.
18. Click Continue to proceed with stage 2 configuration.
19. Click Next.
20. In the Appliance Configuration, configure the below settings:
a. Time Synchronization Mode: Synchronize time with the ESXi host.
Since the ESXi host has been configured to synchronize its time with an NTP server, vCenter time can be synced to the ESXi host. Customers can choose a different time synchronization setting.
b. SSH access: Enabled.
21. Click Next.
22. Complete the SSO configuration as shown below.
23. Click Next.
24. If preferred, select Join the VMware Customer Experience Improvement Program (CEIP).
25. Click Next.
26. Review the configuration and click Finish.
27. Click OK.
28. Make note of the access URL shown in the completion screen.
29. Click Close.
To set up the VMware vCenter Server, complete the following steps:
1. Using a web browser, navigate to https://<vcenter-ip>/vsphere-client.
2. Log in using the Single Sign-On username (Administrator@flexpod.local) and password created during the vCenter installation.
3. Click “Create Datacenter” in the center pane.
4. Type a name for the FlexPod Datacenter <FPV-FlexPod-DC> in the Datacenter name field.
5. Click OK.
6. Right-click the data center just created and select New Cluster.
7. Name the cluster FPV-Foundation.
8. Check the box to turn on DRS. Leave the default values.
9. Check the box to turn on vSphere HA. Leave the default values.
10. Click OK to create the new cluster.
11. On the left pane, expand the Datacenter.
12. Right-click the FPV-Foundation cluster and select Add Host.
13. In the Host field, enter either the IP address or the FQDN name of one of the VMware ESXi hosts. Click Next.
14. Type root as the user name and the root password. Click Next to continue.
15. Click Yes to accept the certificate.
16. Review the host details and click Next to continue.
17. Assign a license or leave in evaluation mode and click Next to continue.
18. Click Next to continue.
19. Click Next to continue.
20. Review the configuration parameters. Click Finish to add the host.
21. Repeat steps 12 to 20 to add the remaining VMware ESXi hosts from both sites to the cluster.
All four VMware ESXi hosts across the two sites are added to the same cluster to enable VMware HA and other cluster related features.
ESXi hosts booted with iSCSI using the VMware iSCSI software initiator need to be configured to do core dumps to the ESXi Dump Collector that is part of vCenter. The Dump Collector is not enabled by default on the vCenter Appliance. To setup the ESXi Dump Collector, complete the following steps:
1. Log into the vSphere web client as Administrator@flexpod.local.
2. In the vSphere web client, select Home.
3. In the center pane, click System Configuration.
4. In the left pane, select Services.
5. Under services, click VMware vSphere ESXi Dump Collector.
6. In the center pane, click the green start icon to start the service.
7. In the Actions menu, click Edit Startup Type.
8. Select Automatic.
9. Click OK.
10. Connect to each ESXi host via ssh as root
11. Run the following commands:
esxcli system coredump network set -v vmk0 -j <vcenter-ip>
esxcli system coredump network set -e true
esxcli system coredump network check
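The resulting Dump Collector configuration on each host can also be displayed for verification; a minimal sketch:
esxcli system coredump network get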
The APIC-integrated vDS is an integration between the Cisco ACI fabric and VMware that allows EPGs created in the ACI fabric to be pushed into the vDS as port groups. The Virtual Machine Manager (VMM) domain in the APIC is configured with a pool of VLANs (50 in this validation); a VLAN from this pool is consumed each time an EPG is assigned as a port group by associating the VMM domain with the EPG. These VLANs are already assigned to the UCS server vNICs and trunked through Cisco UCS.
1. In the APIC GUI, select Virtual Networking > Inventory.
2. On the left, expand VMM Domains > VMware.
3. Right-click VMware and select Create vCenter Domain.
4. Name the Virtual Switch fpv-vc-vDS. Leave VMware vSphere Distributed Switch selected.
5. Select the “FPV-UCS-L2_AttEntityP” Associated Attachable Entity Profile.
6. Under VLAN Pool, select Create VLAN Pool.
7. Name the VLAN Pool FPV-VC-DS. Leave Dynamic Allocation selected.
8. Click the “+” to add a block of VLANs to the pool.
9. Enter the VLAN range <1100-1150> and click OK.
10. Click Submit to complete creating the VLAN Pool.
11. Click the “+” to the right of vCenter Credentials to add credentials for the vCenter.
12. For Name, enter the vCenter hostname <fpv-vc>. For the username and password, use the vCenter administrator account or an AD-established account with the appropriate privileges.
13. Click OK to complete creating the vCenter credentials.
14. Click the “+” to the right of vCenter to add the vCenter linkage.
15. Enter the vCenter hostname for Name. Enter the vCenter FQDN or IP address.
16. Leave DVS Version as vCenter Default.
17. Enable Stats Collection.
18. For Datacenter, enter the exact Datacenter name specified in vCenter.
19. Do not select a Management EPG.
20. For Associated Credential, select the vCenter credentials entered in step 13.
21. Click OK to complete the vCenter linkage.
22. For Port Channel Mode, select MAC Pinning-Physical-NIC-load.
23. For vSwitch Policy, select LLDP.
24. Click Submit to complete creating the vCenter Domain.
The vDS should now appear in vCenter. Since both the UCS domains are associated with the same Attachable Entity Profile, FPV-UCS-L2_AttEntityP, there is no need to configure the UCS domains separately.
1. In the APIC GUI, select Tenants > common.
2. Under Tenant common, expand Application Profiles > FPV-Common-IB-Mgmt > Application EPGs > FPV-Common-Services.
3. Under the FPV-Common-Services EPG, right-click Domains and select Add VMM Domain Association.
4. Use the drop-down list to select the fpv-vc-vDS VMM Domain Profile. Select On Demand Deploy Immediacy and Pre-provision Resolution Immediacy.
5. Click Submit to create the FPV-Common-Services port group in the vDS.
1. Log into the vCenter vSphere Web Client with the Admin user.
2. Under the Navigator on the left, select the Networking icon.
3. Expand the vCenter, Datacenter, and vDS folder. Right-click the vDS and select Add and Manage Hosts.
4. Select Add hosts and click Next.
5. Click the green + icon (New hosts) to add new hosts.
6. Select all four hosts and click OK.
7. Click Next.
8. Select only Manage physical adapters and click Next.
9. On each host, assign vmnic4 as uplink1 and vmnic5 as uplink2. Click Next.
10. Click Next and Finish to complete adding all the ESXi hosts to the vDS. VMs can now be assigned to the FPV-Common-Services port group in the vDS.
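Membership in the APIC-created vDS can also be confirmed from each ESXi host; a minimal sketch, assuming the vDS name fpv-vc-vDS created earlier:
# List the distributed vSwitches this host has joined and their uplinks
esxcli network vswitch dvs vmware list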
The following table lists the VLANs, Subnets, and Bridge Domains for the sample application tenant called FPV-App-A set up as part of this lab validation:
Table 12 Tenant FPV-App-A Configuration
EPG | Storage VLAN | UCS VLAN | Subnet / Gateway | Bridge Domain
Web | N/A | Virtual Switch | 172.16.0.254/24 | BD-Internal
App | N/A | Virtual Switch | 172.16.1.254/24 | BD-Internal
DB | N/A | Virtual Switch | 172.16.2.254/24 | BD-Internal
This tenant will host the three application tiers of the sample three-tier application. This tenant will be hosted on the existing ESXi servers and will utilize the infrastructure datastore.
In the single-site FlexPod with ACI CVD, a separate Storage Virtual Machine (SVM) was configured to host the tenant VMs. While this CVD does not cover creating a separate SVM for the application tenant, customers are encouraged to evaluate the benefits of a dedicated SVM in their application deployment environments and proceed accordingly.
To deploy the FPV-App-A Tenant, complete the following steps:
1. In the APIC GUI, at the top select Tenants > Add Tenant.
2. Name the Tenant FPV-App-A. Select the default Monitoring Policy.
3. For the VRF Name, also enter FPV-App-A. Leave the Take me to this tenant when I click finish checkbox checked.
4. Click Submit to finish creating the Tenant.
The Application Profile described below provides Web, App and DB EPGs to deploy a sample 3-tier application.
1. On the left, right-click Application Profiles and select Create Application Profile.
2. Name the Application Profile 3-Tier-App and select the default Monitoring Policy.
3. Click Submit to complete creating the Application Profile.
4. Expand 3-Tier-App, right-click Application EPGs under 3-Tier-App and select Create Application EPG.
5. Name the EPG Web and leave Intra EPG Isolation set at Unenforced.
6. Use the Bridge Domain drop-down list to select create Bridge Domain.
7. Name the Bridge Domain BD-Internal and select the FPV-App-A VRF.
8. Change Forwarding to Custom, leave Hardware Proxy for L2 Unknown Unicast, and leave Flood for L3 Unknown Multicast Flooding.
9. Click Next.
10. Leave L3 Configurations settings at their defaults, and click Next.
11. Select the default Monitoring Policy and click Finish to complete creating the Bridge Domain.
12. Click Finish to complete creating the EPG.
13. On the left expand 3-Tier-App, Application EPGs, and EPG Web.
14. Under EPG Web, right-click Domains and select Add VMM Domain Association.
15. Select the fpv-vc-vDS VMM Domain Profile.
16. Select Immediate for both the Deploy Immediacy and the Resolution Immediacy. Leave VLAN Mode set at Dynamic.
17. Click Submit to complete adding the VMM Domain Association.
18. On the left under EPG Web, right-click Contracts and select Add Provided Contract.
19. In the Add Provided Contract window, use the Contract drop-down list to select Create Contract.
20. Name the Contract Allow-Web-App. Select the Application Profile Scope.
21. Click the “+” sign to add a Contract Subject.
22. Name the subject Allow-All.
23. Click the “+” sign to add a Contract filter.
24. Use the drop-down list to select the Allow-All filter from Tenant common. Click Update.
25. Click OK to complete creating the Contract Subject.
26. Click Submit to complete creating the Contract.
27. Click Submit to complete adding the Provided Contract.
28. Right-click Contracts and select Add Consumed Contract.
29. In the Add Consumed Contract window, use the Contract drop-down list to select the common/Allow-Shared-L3Out contract.
30. Click Submit to complete adding the Consumed Contract.
31. Optionally, repeat steps 28-30 to add the common/FPV-Allow-Common-Services Consumed Contract.
32. On the left under EPG Web, right-click Subnets and select Create EPG Subnet.
33. For the Default Gateway IP, enter a gateway IP address and mask from a subnet in the Supernet (172.16.0.0/16) that was set up for assigning Tenant IP addresses.
34. For scope, select Advertised Externally and Shared between VRFs.
35. Click Submit to complete creating the EPG Subnet.
36. Right-click Application EPGs under 3-Tier-App and select Create Application EPG.
37. Name the EPG App and leave Intra EPG Isolation set at Unenforced.
38. Use the Bridge Domain drop-down list to select BD-Internal within the current Tenant. Select the default Monitoring Policy.
39. Click Finish to complete creating the EPG.
40. On the left expand 3-Tier-App, Application EPGs, and EPG App.
41. Under EPG App, right-click Domains and select Add VMM Domain Association.
42. Select the fpv-vc-vDS VMM Domain Profile.
43. Select Immediate for both the Deploy Immediacy and the Resolution Immediacy. Select the Dynamic VLAN Mode.
44. Click Submit to complete adding the VMM Domain Association.
45. On the left under EPG App, right-click Contracts and select Add Provided Contract.
46. In the Add Provided Contract window, use the Contract drop-down list to select Create Contract.
47. Name the Contract Allow-App-DB. Select the Application Profile Scope.
48. Click the “+” sign to add a Contract Subject.
49. Name the subject Allow-All.
50. Click the “+” sign to add a Contract filter.
51. Use the drop-down list to select the Allow-All filter from Tenant common. Click Update.
52. Click OK to complete creating the Contract Subject.
53. Click Submit to complete creating the Contract.
54. Click Submit to complete adding the Provided Contract.
55. Right-click Contracts and select Add Consumed Contract.
56. In the Add Consumed Contract window, use the Contract drop-down list to select the Allow-Web-App contract in the current tenant.
57. Click Submit to complete adding the Consumed Contract.
58. Optionally, repeat steps 55-57 to add the common/FPV-Allow-Common-Services Consumed Contract.
59. On the left under EPG App, right-click Subnets and select Create EPG Subnet.
60. For the Default Gateway IP, enter a gateway IP address and mask from a subnet in the Supernet (172.16.0.0/16) that was set up for assigning Tenant IP addresses.
61. If this EPG was connected to FPV-Allow-Common-Services by contract, select only the Shared between VRFs scope. Otherwise, if the tenant SVM management interface will only be accessed from EPGs within the tenant, leave only the Private to VRF Scope selected.
62. Click Submit to complete creating the EPG Subnet.
63. Right-click Application EPGs under 3-Tier-App and select Create Application EPG.
64. Name the EPG DB and leave Intra EPG Isolation set at Unenforced.
65. Use the Bridge Domain drop-down list to select BD-Internal in the current tenant. Select the default Monitoring Policy.
66. Click Finish to complete creating the EPG.
67. On the left expand 3-Tier-App, Application EPGs, and EPG DB.
68. Under EPG DB, right-click Domains and select Add VMM Domain Association.
69. Select the fpv-vc-vDS VMM Domain Profile.
70. Select Immediate for both the Deploy Immediacy and the Resolution Immediacy. Select the Dynamic VLAN Mode.
71. Click Submit to complete adding the VMM Domain Association.
72. On the left under EPG DB, right-click Contracts and select Add Consumed Contract.
73. In the Add Consumed Contract window, use the Contract drop-down list to select the Allow-App-DB contract in the current tenant.
74. Click Submit to complete adding the Consumed Contract.
If required, the steps above can be repeated to add the common/FPV-Allow-Common-Services contract to the Consumed Contracts of the current tenant.
75. On the left under EPG DB, right-click Subnets and select Create EPG Subnet.
76. For the Default Gateway IP, enter a gateway IP address and mask from a subnet in the Supernet (172.16.0.0/16) that was set up for assigning Tenant IP addresses.
77. If the FPV-Allow-Common-Services contract was consumed, select only the Shared between VRFs scope. Otherwise, select only the Private to VRF scope.
78. Click Submit to complete creating the EPG Subnet.
All three EPGs configured for hosting the 3-Tier application are attached to the Virtual Machine Manager domain. At the end of these configuration steps, three port-groups will be available on the APIC controlled vDS. Customers can deploy VMs in the appropriate EPGs by attaching the VMs in VMware environment with the correct port-groups.
The validation of the environment was centered on resiliency tests of the Multi-Pod environment. Storage, compute, and network failover simulations were run under fair load to verify application continuity when possible. The current FlexPod Datacenter system, like previous FlexPod Datacenter solutions, is highly redundant, and the system survived with little or no impact to applications under most of the testing scenarios outlined in Table 13. Workload was generated using IOmeter while the failures were introduced via cable pulls, interface shutdowns, and powering off equipment. The various failure test scenarios covered during validation are shown below. Functionality verification test cases are not covered in Table 13:
Table 13 Validation Test Scenarios
Test ID | Compute to Storage Connectivity Test Cases | Observations
1.1 | Storage vMotion of VMs from 40GbE site to 10GbE site datastores | VMs migrated without issues across ESXi hosts and the datastores. VMs could easily run in one site while the datastore was in the other site.
1.2 | Verify SAN access and boot when primary path to local storage array is unavailable | iSCSI path A was brought down. The access through iSCSI path B stayed up and access to the LUNs was not impacted. Boot from SAN worked as well through the iSCSI-B path.
1.3 | SAN boot after service profile migration to a new blade | The service profile was disassociated from a blade and re-associated with another blade. The blade booted up and worked without any issues.
1.4 | Verify single storage controller connectivity failure in a DC | The connections from the leaf switches to one of the controllers were disconnected. A momentary drop in IOPS was observed, but performance returned to previous levels almost immediately.
1.5 | Verify single host failure in a DC | VMs from the failed host restarted within the VMware cluster.
1.6 | Compute isolation from storage in a DC | NFS and iSCSI paths were disconnected at the 10GbE site for about 5 minutes. Datastores for all the hosts in the 10GbE site were inaccessible and eventually VMs became inaccessible as well. ESXi hosts showed the error “lost connectivity to device…”. Rebooting the hosts and restarting the VMs after storage access was restored brought the environment back to normal.
1.7 | VMs run from the data center opposite the location of their datastore | The VMs worked fine across the DCs. However, IOmeter reported a loss of IOPS due to added latency.
 | MetroCluster Validation Test Cases |
2.1 | Perform a negotiated MetroCluster switchover | The VMs lost connectivity for a few seconds on both switchover and switchback, but the VMs stayed up and kept functioning.
2.2 | ISL link failure between the Nexus 3Ks | The links between the sites were brought down. The MetroCluster configuration could not take any action. Each node continued to serve data normally, but the mirrors were not updated in the respective disaster recovery sites because of the loss of connectivity.
2.3 | Site-wide controller power down | Automatic switchover is not part of the solution. VMs and data on the failed site are not available until a forced switchover is initiated.
 | Multi-Pod Validation Test Cases |
3.1 | Loss of a single leaf switch (one after another) | Minimal loss of traffic was observed while the network converged. If the traffic was not going through the leaf under test (dual leaf switches), there was no traffic loss.
3.2 | Loss of a single spine (one after another) | Minimal loss of traffic was observed while the network converged. If the traffic was not going through the spine under test (dual spines), there was no traffic loss.
3.3 | Loss of an IPN device (one after another) | Minimal loss of traffic was observed while the network converged. If the traffic was not going through the IPN device under test (dual IPN devices), there was no traffic loss.
3.4 | Loss of a single path between the IPN devices (one after another) | Minimal loss of traffic was observed while the network converged. If the traffic was not going through a particular IPN link, there was no traffic loss.
 | Datacenter Validation Test Cases |
4.1 | Datacenter isolation and recovery (split brain) | VMs kept running on their individual sites but all communication between the two sites stopped. When the links were restored, the storage as well as the VMware systems reconnected and operations returned to normal.
4.2 | Datacenter maintenance use case: move all workloads to a single DC | The VMs moved seamlessly between the datacenters. IOmeter reported lower IOPS because initially the worker VMs were moved across the WAN away from their datastores. Storage vMotions to the new datacenter solved the latency and IOPS issues.
4.3 | Datacenter failure and recovery: complete power-out failure of a DC | Covered below.
One of the major failure scenarios covered during the FlexPod Datacenter with Cisco ACI Multi-Pod and NetApp MetroCluster IP CVD validation was simulating a power outage in one of the datacenters, resulting in the applications failing over to the second site. Since this failure scenario covers network, compute, virtualization, and storage failure, various observations are outlined in this section for a better customer understanding.
Before any system failure occurs, the validation environment is in the following state:
· All physical components were up and working as expected.
· POD11 was considered the primary site for the application being tested, and this site would be powered off.
· POD1 was considered the secondary site, and this site would be used to restore the application VMs from the primary site.
· The vCenter and the backup AD were hosted in the secondary site.
· The primary AD and the application VMs (Linux based) were hosted in the primary site.
In the primary site, power was shut down for all the devices.
From the vCenter hosted in the secondary site, the hosts, VMs and datastores in the primary site are no longer accessible as shown in Figure 15 and Figure 16:
Figure 15 Primary Site Hosts and VMs Inaccessible
Figure 16 Primary Site Datastores Inaccessible
From the AFF A700 in the secondary site, the primary site is not reachable:
flexpod-b::> metrocluster show
Configuration: IP-fabric
Cluster Entry Name State
------------------------------ ---------------------- ---------------------
Local: flexpod-b
Configuration State configured
Mode normal
AUSO Failure Domain auso-disabled
Remote: flexpod-a
Configuration State not-reachable
Mode -
AUSO Failure Domain not-reachable
To restore the application, the datastores from primary site need to be brought up at the secondary site. To perform the storage system recovery, the MetroCluster failover has to be initiated manually. The command to initiate the switchover is entered on the secondary site AFF A700.
flexpod-b::> metrocluster switchover -forced-on-disaster true
Warning: MetroCluster switchover is a disaster recovery operation that could cause some data loss. The nodes on the other site must either be prevented from serving data or be simply powered off.
Do you want to continue? {y|n}: y
[Job 963] Job succeeded: Switchover is successful.
flexpod-b::> metrocluster show
Configuration: IP-fabric
Cluster Entry Name State
------------------------------ ---------------------- ---------------------
Local: flexpod-b
Configuration State configured
Mode switchover
AUSO Failure Domain auso-disabled
Remote: flexpod-a
Configuration State not-reachable
Mode -
AUSO Failure Domain not-reachable
When the datastores are available at the secondary site, vSphere HA kicks in and the application VMs and the primary AD VM are brought up on the secondary site hosts.
After the application running in the primary site is recovered and is running in the secondary site, the validation environment is in the following state:
· All physical components in POD11 (40GbE site) are down.
· All the VMs including vCenter and AD are hosted in the secondary site.
· ESXi hosts and datastores from the primary site (POD 11) are inaccessible.
With the application and storage from the primary site running on the secondary site, the primary site is powered back up. When communication is restored, devices from the two datacenters start communicating with each other. On initial power-up of the primary site AFF A700, the MetroCluster IP state changes to “waiting-for-switchback”:
flexpod-b::> metrocluster show
Configuration: IP-fabric
Cluster Entry Name State
------------------------------ ---------------------- ---------------------
Local: flexpod-b
Configuration State configured
Mode switchover
AUSO Failure Domain auso-disabled
Remote: flexpod-a
Configuration State configured
Mode waiting-for-switchback
AUSO Failure Domain auso-disabled
Within the primary site, the AFF A700 will be ready after a few minutes of internal checks and boot up processes:
flexpod-a::> cluster show
Node Health Eligibility
--------------------- ------- ------------
flexpod-a-01 true true
flexpod-a-02 true true
2 entries were displayed.
Giveback from the secondary site can be sped up with the following commands:
flexpod-a::> storage failover show
Takeover
Node Partner Possible State Description
-------------- -------------- -------- -------------------------------------
flexpod-a-01 flexpod-a-02 - Waiting for giveback
flexpod-a-02 flexpod-a-01 false In takeover, Auto giveback will be
initiated in 527 seconds
2 entries were displayed.
flexpod-a::> storage failover giveback -ofnode flexpod-a-01
Info: Run the storage failover show-giveback command to check giveback status.
flexpod-a::> storage failover show-giveback
Partner
Node Aggregate Giveback Status
-------------- ----------------- ---------------------------------------------
flexpod-a-01
No aggregates to give back
flexpod-a-02
No aggregates to give back
2 entries were displayed.
With the primary site AFF A700 ready, the switchback recovery needs to be initiated:
flexpod-b::> metrocluster node show
DR Configuration DR
Group Cluster Node State Mirroring Mode
----- ------- ------------------ -------------- --------- --------------------
1 flexpod-b
flexpod-b-01 configured enabled switchover completed
flexpod-b-02 configured enabled switchover completed
flexpod-a
flexpod-a-01 configured enabled waiting for switchback recovery
flexpod-a-02 configured enabled waiting for switchback recovery
4 entries were displayed.
flexpod-b::> metrocluster heal -phase aggregates
[Job 964] Job succeeded: Heal Aggregates is successful.
flexpod-b::>
flexpod-b::> metrocluster heal -phase root-aggregates
[Job 965] Job succeeded: Heal Root Aggregates completed with warnings. Use the "metrocluster operation show" command to view the warnings.
Switchback of primary site datastore back from the secondary site:
flexpod-b::> metrocluster switchback
Warning: switchback is about to start. It will stop all the switched over data Vservers on cluster
"flexpod-b" and automatically restart them on cluster "flexpod-a".
Do you want to continue? {y|n}: y
[Job 966] Synchronizing cluster configuration from the local cluster to the remote cluster (attempt 1 o
[Job 966] Synchronizing Vserver configuration from the local cluster to the remote cluster (attempt 1 o
[Job 966] Waiting for the "MetroCluster Switchback Continuation Agent" job to complete on the remote cl
[Job 966] Job succeeded: Switchback is successful.
flexpod-b::> metrocluster show
Configuration: IP-fabric
Cluster Entry Name State
------------------------------ ---------------------- ---------------------
Local: flexpod-b
Configuration State configured
Mode normal
AUSO Failure Domain auso-disabled
Remote: flexpod-a
Configuration State configured
Mode normal
AUSO Failure Domain auso-disabled
The switchback brings a full return to functionality. The VMs will need to be vMotioned back to the 40GbE site when the datastore services via NFS have returned to normal.
You can configure host affinity rules in VMware HA to move the VMs back to the preferred site.
After the environment is fully restored and VMs are migrated back to their original sites, the validation environment is back to the same state where:
· All physical components were up and working as expected.
· The vCenter and the backup AD are hosted in the secondary site
· Primary AD and the application VMs (Linux based) are hosted in the primary site.
In this failure scenario, various components of the solution were successfully validated, including:
· Multi-Pod configuration and design
· Routing inside and outside ACI fabric
· Layer-2 extension between the two datacenters
· MetroCluster failover and recovery
· VMware HA
· Storage and VM vMotion
The FlexPod Datacenter with Cisco ACI Multi-Pod, NetApp MetroCluster IP, and VMware vSphere 6.7 solution allows customers to interconnect and centrally manage two or more ACI fabrics deployed in separate, geographically dispersed datacenters. NetApp MetroCluster IP provides a synchronous replication solution between two NetApp controllers, providing storage high availability and disaster recovery in a campus or metropolitan area. This validated design enables customers to quickly and reliably deploy a VMware vSphere based private cloud on a distributed integrated infrastructure, thereby delivering a unified solution that enables multiple sites to behave in much the same way as a single site.
The validated solution achieves the following core design goals:
· Campus-wide and metro-wide protection and WAN-based disaster recovery
· Design supporting active/active deployment use case
· Common management layer across multiple (two) datacenters for deterministic deployment
· Consistent policy and seamless workload migration across the sites
Cisco Unified Computing System:
http://www.cisco.com/en/US/products/ps10265/index.html
Cisco UCS 6200 Series Fabric Interconnects:
http://www.cisco.com/en/US/products/ps11544/index.html
Cisco UCS 5100 Series Blade Server Chassis:
http://www.cisco.com/en/US/products/ps10279/index.html
Cisco UCS B-Series Blade Servers:
Cisco UCS C-Series Rack Servers:
http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-c-series-rack-servers/index.html
Cisco UCS Adapters:
http://www.cisco.com/en/US/products/ps10277/prod_module_series_home.html
Cisco UCS Manager:
http://www.cisco.com/en/US/products/ps10281/index.html
Cisco Nexus 9000 Series Switches:
Cisco Application Centric Infrastructure:
VMware vCenter Server:
http://www.vmware.com/products/vcenter-server/overview.html
NetApp Data ONTAP:
http://www.netapp.com/us/products/platform-os/ontap/index.aspx
NetApp AFF A700:
https://www.netapp.com/us/products/storage-systems/all-flash-array/aff-a-series.aspx
NetApp MetroCluster:
https://www.netapp.com/us/products/backup-recovery/metrocluster-bcdr.aspx
Cisco UCS Hardware Compatibility Matrix:
https://ucshcltool.cloudapps.cisco.com/public/
VMware Compatibility Guide:
http://www.vmware.com/resources/compatibility
NetApp Interoperability Matrix Tool:
http://mysupport.netapp.com/matrix/
Haseeb Niazi, Technical Marketing Engineer, Cisco Systems, Inc.
Haseeb Niazi has over 19 years of experience at Cisco in Data Center, Enterprise, and Service Provider Solutions and Technologies. As a member of various solution teams and Advanced Services, Haseeb has helped many enterprise and service provider customers evaluate and deploy a wide range of Cisco solutions. As a technical marketing engineer in the Cisco UCS Solutions group, Haseeb focuses on network, compute, virtualization, storage, and orchestration aspects of various Compute Stacks. Haseeb holds a master's degree in Computer Engineering from the University of Southern California and is a Cisco Certified Internetwork Expert (CCIE 7848).
Ramesh Isaac, Technical Marketing Engineer, Cisco Systems, Inc.
Ramesh Isaac is a Technical Marketing Engineer in the Cisco UCS Data Center Solutions Group. Ramesh has worked in data center and mixed-use lab settings since 1995. He started in information technology supporting Unix environments and focused on designing and implementing multi-tenant virtualization solutions in Cisco labs before entering Technical Marketing. Ramesh holds certifications from Cisco, VMware, and Red Hat.
Arvind Ramakrishnan, Solutions Architect, NetApp, Inc.
Arvind Ramakrishnan works for the NetApp Infrastructure and Cloud Engineering team. He is focused on the development, validation and implementation of Cloud Infrastructure solutions that include NetApp products. Arvind has more than 10 years of experience in the IT industry specializing in Data Management, Security and Data Center technologies. Arvind holds a bachelor’s degree in Electronics and Communication.
· John George, Technical Marketing Engineer, Cisco
· Archana Sharma, Technical Marketing Engineer, Cisco
· Aaron Kirk, Product Manager, NetApp