Cisco Validated Design for a 6000 Seat Virtual Desktop Infrastructure Built on Cisco UCS B200 M5, Cisco UCS Manager 3.2 with NetApp AFF A-Series using Citrix XenDesktop/XenApp 7.15, and VMware vSphere ESXi 6.5 Update 1 Hypervisor Platform
Last Updated: June 26, 2018
About the Cisco Validated Design Program
The Cisco Validated Design (CVD) program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information, visit:
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unified Computing System (Cisco UCS), Cisco UCS B-Series Blade Servers, Cisco UCS C-Series Rack Servers, Cisco UCS S-Series Storage Servers, Cisco UCS Manager, Cisco UCS Management Software, Cisco Unified Fabric, Cisco Application Centric Infrastructure, Cisco Nexus 9000 Series, Cisco Nexus 7000 Series. Cisco Prime Data Center Network Manager, Cisco NX-OS Software, Cisco MDS Series, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2018 Cisco Systems, Inc. All rights reserved.
Table of Contents
Cisco Desktop Virtualization Solutions: Datacenter
Cisco Desktop Virtualization Focus
Cisco Unified Computing System
Cisco Unified Computing System Components
Cisco UCS B200 M5 Blade Server
Cisco UCS VIC1340 Converged Network Adapter
Cisco Nexus 93180YC-FX Switches
Cisco MDS 9148S Fibre Channel Switch
Improved Database Flow and Configuration
Multiple Notifications before Machine Updates or Scheduled Restarts
API Support for Managing Session Roaming
API Support for Provisioning VMs from Hypervisor Templates
Support for New and Additional Platforms
Citrix Provisioning Services 7.15
Benefits for Citrix XenApp and Other Server Farm Administrators
Benefits for Desktop Administrators
Citrix Provisioning Services Solution
Citrix Provisioning Services Infrastructure
NetApp Storage Virtual Machine (SVM)
Architecture and Design Considerations for Desktop Virtualization
Understanding Applications and Data
Project Planning and Solution Sizing Sample Questions
Citrix XenDesktop Design Fundamentals
Example XenDesktop Deployments
Distributed Components Configuration
Designing a XenDesktop Environment for a Mixed Workload
Deployment Hardware and Software
Cisco Unified Computing System Configuration
Cisco UCS Manager Software Version 3.2(3c)
Configure Fabric Interconnects at Console
Base Cisco UCS System Configuration
Set Fabric Interconnects to Fibre Channel End Host Mode
Configure Fibre Channel Uplink Ports
Enable Server and Ethernet Uplink Ports
Create Uplink Port Channels to Cisco Nexus 93180YC-FX Switches
Create Required Shared Resource Pools
Set Jumbo Frames in Cisco UCS Fabric
Create Network Control Policy for Cisco Discovery Protocol
Cisco UCS System Configuration for Cisco UCS B-Series
Configuration of AFF A300 with NetApp ONTAP 9.3
NetApp A300 Storage System Configuration
NetApp All Flash FAS A300 Controllers
Complete Configuration Worksheet
Set Onboard Unified Target Adapter 2 Port Personality
Set Auto-Revert on Cluster Management
Set Up Management Broadcast Domain
Set Up Service Processor Network Interface
Disable Flow Control on 10GbE and 40GbE Ports
Disable Unused FCoE Capability on CNA Ports
Configure Network Time Protocol
Configure Simple Network Management Protocol
Enable Cisco Discovery Protocol
Create Jumbo Frame MTU Broadcast Domains in ONTAP
Create Storage Virtual Machine
Create Load-Sharing Mirrors of SVM Root Volume
Create Block Protocol (FC) Service
Add Infrastructure SVM Administrator
NetApp A300 Storage Configuration
Network Port Interface Group Settings
Network Port Broadcast Domains
NAS Logical Interface Settings
FCP Logical Interface Settings
Storage Efficiency and Space Management
NetApp Storage Configuration for CIFS Shares
Create Storage Volumes for PVS vDisks
NetApp VDI Write-Cache Volume Considerations
Deduplication on Write Cache Volumes
Thin-Provision the Write Cache Volumes
Hypervisor Considerations with Write Cache Volumes
Configure Feature for MDS Switch A and MDS Switch B
Configure VSANs for MDS Switch A and MDS Switch B
Installing and Configuring VMware ESXi 6.5
Download Cisco Custom Image for ESXi 6.5 Update 1
Set Up VMware ESXi Installation
Download VMware vSphere Client
Log in to VMware ESXi Hosts by using VMware vSphere Client
Download and Install Updated Cisco VIC eNIC Drivers
Install and Configure VMware vCenter Appliance
ESXi Dump Collector Setup for SAN-Booted Hosts
FlexPod VMware vSphere Distributed Switch (vDS)
Building the Virtual Machines and Environment
Software Infrastructure Configuration
Installing and Configuring XenDesktop and XenApp
Install XenDesktop Delivery Controller, Citrix Licensing, and StoreFront
Installing Citrix License Server
Additional XenDesktop Controller Configuration
Configure the XenDesktop Site Hosting Connection
Configure the XenDesktop Site Administrators
Installing and Configuring StoreFront
Additional StoreFront Configuration
Installing and Configuring Citrix Provisioning Server 7.15
Install Additional PVS Servers
Install XenDesktop Virtual Desktop Agents
Install the Citrix Provisioning Server Target Device Software
Create Citrix Provisioning Server vDisks
Provision Virtual Desktop Machines
Citrix Provisioning Services XenDesktop Setup Wizard
Citrix Machine Creation Services
Citrix XenDesktop Policies and Profile Management
Configure Citrix XenDesktop Policies
Configuring User Profile Management
Install and Configure NVIDIA P6 Card
Physical Install of P6 Card into B200 M5 Server
Install the NVIDIA VMware VIB Driver
Install the GPU Drivers inside your Windows VM
Configure NVIDIA Grid License Server on Virtual Machine
Installing Cisco UCS Performance Manager
Configure the Control Center Host Mode
Enabling Access to Browser Interfaces
Deploy Cisco UCS Performance Manager
Setting up Cisco UCS Performance Manager
Add Nexus 9000 Series Switches
Cisco UCS Performance Manager Sample Test Data
Cisco UCS Test Configuration for Single Blade Scalability
Cisco UCS Configuration for Cluster Testing
Cisco UCS Configuration for Full Scale Testing
Testing Methodology and Success Criteria
Server-Side Response Time Measurements
Single-Server Recommended Maximum Workload
Single-Server Recommended Maximum Workload Testing
Single-Server Recommended Maximum Workload for RDS with 270 Users
Single-Server Recommended Maximum Workload for VDI Non-Persistent with 205 Users
Single-Server Recommended Maximum Workload for VDI Persistent with 205 Users
Cluster Workload Testing with 1900 RDS Users
Cluster Workload Testing with 2050 Non-Persistent Desktop Users
Cluster Workload Testing with 2050 Persistent Desktop Users
Full Scale Mixed Workload Testing with 6000 Users
AFF A300 Storage Detailed Test Results for Cluster Scalability Test
AFF A300 Storage Test Results for 1900 Citrix HSD (XenApp) Windows 2016 Sessions
2050 Users Persistent Desktops Cluster Test
2050 Users PVS Non-Persistent Desktops Cluster Test
NetApp AFF A300 Storage Test Results for 6000 User Full Scale, Mixed Workload Scalability
Scalability Considerations and Guidelines
NetApp FAS Storage Guidelines for Mixed Desktop Virtualization Workloads
Scalability of Citrix XenDesktop 7.15 Configuration
Appendix A Cisco Switch Configuration
Cisco MDS 9148S - A Configuration
Cisco MDS 9148S - B Configuration
Appendix B NetApp AFF A300 Monitoring with PowerShell Scripts
NetApp AFF A300 Monitoring with PowerShell Scripts
Provisioning Persistent Desktops PowerShell Script that Utilizes Storage Copy Offload with VAAI
Creating User Home Directory Folders with a PowerShell Script
Appendix C Full Scale Mixed Workload Test Results
Cisco UCS Manager Configuration Guides
Cisco UCS Virtual Interface Cards
Cisco Nexus Switching References
Cisco MDS 9000 Service Switch References
Cisco Validated Designs include systems and solutions that are designed, tested, and documented to facilitate and improve customer deployments. These designs incorporate a wide range of technologies and products into a portfolio of solutions that have been developed to address the business needs of customers. Cisco and NetApp have partnered to deliver this document, which serves as a specific step-by-step guide for implementing this solution. This Cisco Validated Design provides an efficient architectural design that is based on customer requirements. The solution that follows is a validated approach to deploying Cisco, NetApp, Citrix, and VMware technologies as a shared, high-performance, resilient virtual desktop infrastructure.
This document provides a Reference Architecture for a virtual desktop and application design using Citrix XenApp/XenDesktop 7.15 built on Cisco UCS with a NetApp® All Flash FAS (AFF) A300 storage and the VMware vSphere ESXi 6.5 hypervisor platform.
The landscape of desktop and application virtualization is changing constantly. The new M5 high-performance Cisco UCS Blade Servers and Cisco UCS Unified Fabric combined as part of the FlexPod proven Infrastructure, with the latest generation NetApp AFF storage result in a more compact, more powerful, more reliable and more efficient platform.
This document provides the architecture and design of a virtual desktop infrastructure for up to 6000 mixed use-case users. The solution is virtualized on fifth-generation Cisco UCS B200 M5 blade servers, booting VMware vSphere 6.5 Update 1 through FC SAN from the AFF A300 storage array. The virtual desktops are powered by Citrix Provisioning Server 7.15 and Citrix XenApp/XenDesktop 7.15, with a mix of RDS hosted shared desktops (1900), pooled non-persistent hosted virtual Windows 10 desktops (2050), and persistent hosted virtual Windows 10 desktops provisioned with Citrix Machine Creation Services (2050) to support the user population. Where applicable, the document provides best practice recommendations and sizing guidelines for customer deployments of this solution.
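The seat mix described above sums to the full 6000-seat target. A quick sketch (the counts are taken from this document) makes the per-technology split explicit:

```python
# Seat mix for the 6000-seat mixed-workload design (counts from this document).
seat_mix = {
    "RDS hosted shared desktops (PVS, Windows Server 2016)": 1900,
    "Pooled non-persistent Windows 10 desktops (PVS)": 2050,
    "Persistent Windows 10 full clones (MCS)": 2050,
}

total_seats = sum(seat_mix.values())

for name, seats in seat_mix.items():
    print(f"{name}: {seats} seats ({seats / total_seats:.1%})")

print(f"Total: {total_seats} seats")
```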
The solution is fully capable of supporting hardware-accelerated graphics workloads. The Cisco UCS B200 M5 server supports up to two NVIDIA P6 cards for high-density, high-performance graphics workload support. See our Cisco Graphics White Paper for details about integrating NVIDIA GPUs with Citrix XenDesktop.
This solution provides an outstanding virtual desktop end-user experience, as measured by the Login VSI 4.1.25.6 Knowledge Worker workload running in benchmark mode. The 6000-seat solution provides a large-scale building block that can be replicated to confidently scale out to tens of thousands of users.
The current industry trend in data center design is toward shared infrastructures. By using virtualization along with pre-validated IT platforms, enterprise customers have embarked on the journey to the cloud by moving away from application silos and toward shared infrastructure that can be quickly deployed, thereby increasing agility and reducing costs. Cisco, NetApp, and VMware have partnered to deliver this Cisco Validated Design, which uses best-of-breed storage, server, and network components to serve as the foundation for desktop virtualization workloads, enabling efficient architectural designs that can be quickly and confidently deployed.
The audience for this document includes, but is not limited to: sales engineers, field consultants, professional services, IT managers, partner engineers, and customers who want to take advantage of an infrastructure built to deliver IT efficiency and enable IT innovation.
This document provides a step-by-step design, configuration and implementation guide for the Cisco Validated Design for a large-scale Citrix XenDesktop 7.15 mixed workload solution with NetApp AFF A300, Cisco UCS Blade Servers, Cisco Nexus 9000 series Ethernet switches and Cisco MDS 9000 series fibre channel switches.
This is the first Citrix XenDesktop desktop virtualization Cisco Validated Design with Cisco UCS 5th generation servers and a NetApp AFF A-Series system.
It incorporates the following features:
· Cisco UCS B200 M5 blade servers with Intel Xeon Scalable Family processors and 2666 MHz memory
· Validation of Cisco Nexus 9000 with NetApp AFF A300 system
· Validation of Cisco MDS 9000 with NetApp AFF A300 system
· Support for the Cisco UCS 3.2(3d) release and Cisco UCS B200-M5 servers
· Support for the latest release of NetApp AFF A300 hardware and NetApp ONTAP® 9.3
· VMware vSphere 6.5 U1 Hypervisor
· Citrix XenDesktop 7.15 Server 2016 RDS hosted shared virtual desktops
· Citrix XenDesktop 7.15 non-persistent hosted virtual Windows 10 desktops provisioned with Citrix Provisioning Services
· Citrix XenDesktop 7.15 persistent full clones hosted virtual Windows 10 desktops provisioned with Citrix Machine Creation Services
The datacenter market segment is shifting toward heavily virtualized private, hybrid and public cloud computing models running on industry-standard systems. These environments require uniform design points that can be repeated for ease of management and scalability.
These factors have led to the need for predesigned computing, networking and storage building blocks optimized to lower the initial design cost, simplify management, and enable horizontal scalability and high levels of utilization.
The use cases include:
· Enterprise Datacenter
· Service Provider Datacenter
· Large Commercial Datacenter
This Cisco Validated Design prescribes a defined set of hardware and software that serves as an integrated foundation for both Citrix XenDesktop Microsoft Windows 10 virtual desktops and Citrix XenApp server desktop sessions based on Microsoft Server 2016.
The mixed workload solution includes NetApp AFF A300 storage, Cisco Nexus® and MDS networking, the Cisco Unified Computing System (Cisco UCS®), Citrix XenDesktop and VMware vSphere software in a single package. The design is space optimized such that the network, compute, and storage required can be housed in one data center rack. Switch port density enables the networking components to accommodate multiple compute and storage configurations of this kind.
The infrastructure is deployed to provide Fibre Channel-booted hosts with access to shared storage using NFS mounts. The reference architecture reinforces the "wire-once" strategy because as additional storage is added to the architecture, no re-cabling is required from the hosts to the Cisco UCS fabric interconnect.
The combination of technologies from Cisco Systems, Inc., NetApp Inc., Citrix Inc., and VMware Inc. produced a highly efficient, robust and affordable desktop virtualization solution for a hosted virtual desktop and hosted shared desktop mixed deployment supporting different use cases. Key components of the solution include the following:
· More power, same size. The Cisco UCS B200 M5 half-width blade with dual 18-core 2.3 GHz Intel® Xeon® Gold 6140 processors and 768 GB of memory supports more virtual desktop workloads than the previous processor generation on the same hardware. The 18-core 2.3 GHz Intel® Xeon® Gold 6140 processors used in this study provide a balance between increased per-blade capacity and cost.
· Fault-tolerance with high availability built into the design. The various designs are based on using one Unified Computing System chassis with multiple Cisco UCS B200 M5 blades for virtualized desktop and infrastructure workloads. The design provides N+1 server fault tolerance for hosted virtual desktops, hosted shared desktops and infrastructure services.
· Stress-tested to the limits during simulated login storms. All 6000 simulated users logged in and ran workloads up to steady state within 48 minutes without overwhelming the processors, exhausting memory, or saturating the storage subsystems, providing customers with a desktop virtualization system that can easily handle the most demanding login and startup storms.
· Ultra-condensed computing for the datacenter. The rack space required to support the system is a single 42U rack, conserving valuable data center floor space.
· All Virtualized: This CVD presents a validated design that is 100 percent virtualized on VMware ESXi 6.5. All of the virtual desktops, user data, profiles, and supporting infrastructure components, including Active Directory, SQL Servers, Citrix XenDesktop components, XenDesktop VDI desktops and XenApp servers were hosted as virtual machines. This provides customers with complete flexibility for maintenance and capacity additions because the entire system runs on the FlexPod converged infrastructure with stateless Cisco UCS Blade servers and NetApp FC storage.
· Cisco maintains industry leadership with the new Cisco UCS Manager 3.2(3d) software that simplifies scaling, guarantees consistency, and eases maintenance. Cisco's ongoing development efforts with Cisco UCS Manager, Cisco UCS Central, and Cisco UCS Director ensure that customer environments are consistent locally, across Cisco UCS domains, and across the globe. Our software suite offers increasingly simplified operational and deployment management, and it continues to widen the span of control for customer organizations' subject matter experts in compute, storage, and network.
· Our 40G unified fabric story gets additional validation on Cisco UCS 6300 Series Fabric Interconnects as Cisco runs more challenging workload testing, while maintaining unsurpassed user response times.
· NetApp AFF A300 array provides industry-leading storage solutions that efficiently handle the most demanding I/O bursts (for example, login storms), profile management, and user data management, deliver simple and flexible business continuance, and help reduce storage cost per desktop.
· NetApp AFF A300 array provides a simple to understand storage architecture for hosting all user data components (VMs, profiles, user data) on the same storage array.
· NetApp clustered Data ONTAP software enables administrators to seamlessly add, upgrade, or remove storage from the infrastructure to meet the needs of the virtual desktops.
· Citrix XenDesktop and XenApp Advantage. XenApp and XenDesktop are virtualization solutions that give IT control of virtual machines, applications, licensing, and security while providing anywhere access for any device.
· XenApp and XenDesktop provide the following:
- End users can run applications and desktops independently of the device's operating system and interface.
- Administrators can manage the network and control access from selected devices or from all devices.
- Administrators can manage an entire network from a single data center.
XenApp and XenDesktop share a unified architecture called FlexCast Management Architecture (FMA). FMA's key features are the ability to run multiple versions of XenApp or XenDesktop from a single Site and integrated provisioning.
· Optimized to achieve the best possible performance and scale. For hosted shared desktop sessions, the best performance was achieved when the number of vCPUs assigned to the XenApp virtual machines did not exceed the number of hyper-threaded (logical) cores available on the server. In other words, maximum performance is obtained when not overcommitting the CPU resources for the virtual machines running virtualized RDS systems.
· Provisioning desktop machines made easy. Citrix provides two core provisioning methods for XenDesktop and XenApp virtual machines: Citrix Provisioning Services for pooled virtual desktops and XenApp virtual servers and Citrix Machine Creation Services for pooled or persistent virtual desktops. This paper provides guidance on how to use each method and documents the performance of each technology.
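The hosted shared desktop sizing rule above (keep total XenApp vCPUs at or below the host's logical core count) can be expressed as a small calculation. This is a sketch under stated assumptions: the B200 M5 in this design has two 18-core processors with hyper-threading enabled, and the vCPUs-per-VM value below is a hypothetical example rather than the validated VM configuration:

```python
def max_xenapp_vms(sockets: int, cores_per_socket: int, vcpus_per_vm: int,
                   threads_per_core: int = 2) -> int:
    """Sizing rule from this design: the total vCPUs assigned to XenApp VMs
    on a host should not exceed the host's logical (hyper-threaded) cores."""
    logical_cores = sockets * cores_per_socket * threads_per_core
    return logical_cores // vcpus_per_vm

# B200 M5 in this study: 2 x 18-core Xeon Gold 6140, HT enabled = 72 logical cores.
# 8 vCPUs per XenApp VM is a hypothetical example value.
print(max_xenapp_vms(sockets=2, cores_per_socket=18, vcpus_per_vm=8))
```

With 72 logical cores and hypothetical 8-vCPU XenApp VMs, this rule caps the host at nine such VMs before CPU resources are overcommitted.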
Today’s IT departments are facing a rapidly evolving workplace environment. The workforce is becoming increasingly diverse and geographically dispersed, including offshore contractors, distributed call center operations, knowledge and task workers, partners, consultants, and executives connecting from locations around the world at all times.
This workforce is also increasingly mobile, conducting business in traditional offices, conference rooms across the enterprise campus, home offices, on the road, in hotels, and at the local coffee shop. This workforce wants to use a growing array of client computing and mobile devices that they can choose based on personal preference. These trends are increasing pressure on IT to ensure the protection of corporate data and prevent data leakage or loss through any combination of user, endpoint device, and desktop access scenarios (Figure 1).
These challenges are compounded by desktop refresh cycles to accommodate aging PCs and bounded local storage and migration to new operating systems, specifically Microsoft Windows 10 and productivity tools, specifically Microsoft Office 2016.
Figure 1 Cisco Data Center Partner Collaboration
Some of the key drivers for desktop virtualization are increased data security and reduced TCO through increased control and reduced management costs.
Cisco focuses on three key elements to deliver the best desktop virtualization data center infrastructure: simplification, security, and scalability. The software combined with platform modularity provides a simplified, secure, and scalable desktop virtualization platform.
Cisco UCS provides a radical new approach to industry-standard computing and provides the core of the data center infrastructure for desktop virtualization. Among the many features and benefits of Cisco UCS are the drastic reduction in the number of servers needed and in the number of cables used per server, and the capability to rapidly deploy or reprovision servers through Cisco UCS service profiles. With fewer servers and cables to manage and with streamlined server and virtual desktop provisioning, operations are significantly simplified. Thousands of desktops can be provisioned in minutes with Cisco UCS Manager service profiles and Cisco storage partners’ storage-based cloning. This approach accelerates the time to productivity for end users, improves business agility, and allows IT resources to be allocated to other tasks.
Cisco UCS Manager (UCSM) automates many mundane, error-prone data center operations such as configuration and provisioning of server, network, and storage access infrastructure. In addition, Cisco UCS B-Series Blade Servers and C-Series Rack Servers with large memory footprints enable high desktop density that helps reduce server infrastructure requirements.
Simplification also leads to more successful desktop virtualization implementations. Cisco and its technology partners VMware, NetApp, and Citrix have developed integrated, validated architectures, including predefined converged infrastructure packages such as FlexPod. Cisco Desktop Virtualization Solutions have been tested with VMware vSphere and Citrix XenDesktop.
Although virtual desktops are inherently more secure than their physical predecessors, they introduce new security challenges. Mission-critical web and application servers using a common infrastructure such as virtual desktops are now at a higher risk for security threats. Inter–virtual machine traffic now poses an important security consideration that IT managers need to address, especially in dynamic environments in which virtual machines, using VMware vMotion, move across the server infrastructure.
Desktop virtualization, therefore, significantly increases the need for the virtual machine–level awareness of policy and security, especially given the dynamic and fluid nature of virtual machine mobility across an extended computing infrastructure. The ease with which new virtual desktops can proliferate magnifies the importance of a virtualization-aware network and security infrastructure. Cisco data center infrastructure (Cisco UCS and Cisco Nexus Family solutions) for desktop virtualization provides strong data center, network, and desktop security, with comprehensive security from the desktop to the hypervisor. Security is enhanced with segmentation of virtual desktops, virtual machine–aware policies and administration, and network security across the LAN and WAN infrastructure.
The growth of a desktop virtualization solution is all but inevitable, so a solution must be able to scale, and scale predictably, with that growth. The Cisco Desktop Virtualization Solutions built on FlexPod Datacenter infrastructure supports high virtual-desktop density (desktops per server), and additional servers and storage scale with near-linear performance. FlexPod Datacenter provides a flexible platform for growth and improves business agility. Cisco UCS Manager service profiles allow on-demand desktop provisioning and make it just as easy to deploy dozens of desktops as it is to deploy thousands of desktops.
Cisco UCS servers provide near-linear performance and scale. Cisco UCS implements the patented Cisco Extended Memory Technology to offer large memory footprints with fewer sockets (with scalability to up to 1 terabyte (TB) of memory with 2- and 4-socket servers). Using unified fabric technology as a building block, Cisco UCS server aggregate bandwidth can scale to up to 80 Gbps per server, and the northbound Cisco UCS fabric interconnect can output 2 terabits per second (Tbps) at line rate, helping prevent desktop virtualization I/O and memory bottlenecks. Cisco UCS, with its high-performance, low-latency unified fabric-based networking architecture, supports high volumes of virtual desktop traffic, including high-resolution video and communications traffic. In addition, Cisco storage partner NetApp helps maintain data availability and optimal performance during boot and login storms as part of the Cisco Desktop Virtualization Solutions. Recent Cisco Validated Designs for End User Computing based on FlexPod solutions have demonstrated scalability and performance, with up to 6000 desktops up and running in less than 30 minutes.
FlexPod Datacenter provides an excellent platform for growth, with transparent scaling of server, network, and storage resources to support desktop virtualization, data center applications, and cloud computing.
The simplified, secure, scalable Cisco data center infrastructure for desktop virtualization solutions saves time and money compared to alternative approaches. Cisco UCS enables faster payback and ongoing savings (better ROI and lower TCO) and provides the industry’s greatest virtual desktop density per server, reducing both capital expenditures (CapEx) and operating expenses (OpEx). The Cisco UCS architecture and Cisco Unified Fabric also enables much lower network infrastructure costs, with fewer cables per server and fewer ports required. In addition, storage tiering and deduplication technologies decrease storage costs, reducing desktop storage needs by up to 50 percent.
The simplified deployment of Cisco UCS for desktop virtualization accelerates the time to productivity and enhances business agility. IT staff and end users are more productive more quickly, and the business can respond to new opportunities quickly by deploying virtual desktops whenever and wherever they are needed. The high-performance Cisco systems and network deliver a near-native end-user experience, allowing users to be productive anytime and anywhere.
The ultimate measure of desktop virtualization for any organization is its efficiency and effectiveness in both the near term and the long term. The Cisco Desktop Virtualization Solutions are very efficient, allowing rapid deployment, requiring fewer devices and cables, and reducing costs. The solutions are also very effective, providing the services that end users need on their devices of choice while improving IT operations, control, and data security. Success is bolstered through Cisco’s best-in-class partnerships with leaders in virtualization and storage, and through tested and validated designs and services to help customers throughout the solution lifecycle. Long-term success is enabled through the use of Cisco’s scalable, flexible, and secure architecture as the platform for desktop virtualization.
Figure 2 illustrates the physical architecture.
Figure 2 Physical Architecture
The reference hardware configuration includes:
· Two Cisco Nexus 93180YC-FX switches
· Two Cisco MDS 9148S 16GB Fibre Channel switches
· Two Cisco UCS 6332-16UP Fabric Interconnects
· Four Cisco UCS 5108 Blade Chassis
· Two Cisco UCS B200 M4 Blade Servers (two infrastructure servers hosting infrastructure VMs)
· 30 Cisco UCS B200 M5 Blade Servers (for workload)
· One NetApp AFF A300 Storage System
· One NetApp DS224C Disk Shelf
For desktop virtualization, the deployment includes Citrix XenDesktop 7.15 running on VMware vSphere 6.5.
The design is intended to provide a large-scale building block for XenDesktop mixed workloads consisting of RDS Windows Server 2016 hosted shared desktop sessions and Windows 10 non-persistent and persistent hosted desktops in the following ratio:
· 1900 Random Hosted Shared Windows Server 2016 user sessions with Office 2016 (PVS)
· 2050 Random Pooled Windows 10 Desktops with Office 2016 (PVS)
· 2050 Static Full Copy Windows 10 Desktops with Office 2016 (MCS)
The data provided in this document will allow our customers to adjust the mix of hosted shared (HSD) and hosted virtual desktops to suit their environment. For example, additional blade servers and chassis can be deployed to increase compute capacity, additional disk shelves can be deployed to improve I/O capability and throughput, and special hardware or software features can be added to introduce new features. This document guides you through the detailed steps for deploying the base architecture, covering everything from physical cabling to network, compute, and storage device configurations.
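When adjusting the workload mix as described above, blade count scales with users per cluster. The following is a minimal N+1 sizing sketch: the per-blade densities are the single-server recommended maximums reported later in this document (270 RDS sessions or 205 VDI desktops per B200 M5 blade), and the simple ceiling-plus-spare rule is an illustration, not the exact blade layout validated here:

```python
import math

def hosts_required(users: int, users_per_host: int, spares: int = 1) -> int:
    """Ceiling-plus-spare rule: enough hosts to carry the workload, plus
    N+1 spare capacity so the load still fits if one host fails."""
    return math.ceil(users / users_per_host) + spares

# Per-blade densities are the single-server recommended maximums from this
# document: 270 RDS sessions or 205 VDI desktops per B200 M5 blade.
print(hosts_required(1900, 270))   # RDS hosted shared sessions
print(hosts_required(2050, 205))   # non-persistent Windows 10 desktops (PVS)
print(hosts_required(2050, 205))   # persistent Windows 10 desktops (MCS)
```

Running at the single-server maximum with a spare per cluster is conservative; the validated configuration tunes density per cluster, so actual blade counts may differ slightly.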
This Cisco Validated Design provides details for deploying a fully redundant, highly available 6000-seat mixed-workload virtual desktop solution with VMware on a FlexPod Datacenter architecture. The configuration guidelines indicate which redundant component is being configured in each step. For example, storage controller 01 and storage controller 02 identify the two AFF A300 storage controllers provisioned in this document, Cisco Nexus A and Cisco Nexus B identify the pair of Cisco Nexus switches that are configured, and Cisco MDS A and Cisco MDS B identify the pair of Cisco MDS switches that are configured.
The Cisco UCS 6332-16UP Fabric Interconnects are similarly configured. Additionally, this document details the steps for provisioning multiple Cisco UCS hosts, and these are identified sequentially: VM-Host-Infra-01, VM-Host-Infra-02, VM-Host-RDSH-01, VM-Host-VDI-01 and so on. Finally, to indicate that you should include information pertinent to your environment in a given step, <text> appears as part of the command structure.
This section describes the components used in the solution outlined in this study.
FlexPod is a best practice data center architecture that includes the following components:
· Cisco Unified Computing System
· Cisco Nexus switches
· Cisco MDS switches
· NetApp All Flash FAS (AFF) systems
Figure 3 FlexPod Component Families
These components are connected and configured according to the best practices of both Cisco and NetApp to provide an ideal platform for running a variety of enterprise workloads with confidence. FlexPod can scale up for greater performance and capacity (adding compute, network, or storage resources individually as needed), or it can scale out for environments that require multiple consistent deployments (such as rolling out additional FlexPod stacks). The reference architecture covered in this document uses the Cisco Nexus 9000 Series for the network switching element and the Cisco MDS 9000 Series for the SAN switching component.
One of the key benefits of FlexPod is its ability to maintain consistency during scale. Each of the component families shown (Cisco UCS, Cisco Nexus, and NetApp AFF) offers platform and resource options to scale the infrastructure up or down, while supporting the same features and functionality that are required under the configuration and connectivity best practices of FlexPod.
The following lists the benefits of FlexPod:
· Consistent Performance and Scalability
- Consistent sub-millisecond latency with 100 percent flash storage
- Consolidate hundreds of enterprise-class applications in a single rack
- Scales easily, without disruption
- Continuous growth through multiple FlexPod CI deployments
· Operational Simplicity
- Fully tested, validated, and documented for rapid deployment
- Reduced management complexity
- Auto-aligned 512B architecture removes storage alignment issues
- No storage tuning or tiers necessary
· Lowest TCO
- Dramatic savings in power, cooling, and space with 100 percent Flash
- Industry leading data reduction
· Enterprise-Grade Resiliency
- Highly available architecture with no single point of failure
- Nondisruptive operations with no downtime
- Upgrade and expand without downtime or performance loss
- Native data protection: snapshots and replication
- Suitable for even large resource-intensive workloads such as real-time analytics or heavy transactional databases
Cisco UCS Manager provides unified, embedded management of all software and hardware components of the Cisco Unified Computing System™ (Cisco UCS) through an intuitive GUI, a command-line interface (CLI), and an XML API. The manager provides a unified management domain with centralized management capabilities and can control multiple chassis and thousands of virtual machines.
Cisco UCS is a next-generation data center platform that unites computing, networking, and storage access. The platform, optimized for virtual environments, is designed using open industry-standard technologies and aims to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 40 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. It is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain.
The main components of Cisco UCS are:
· Compute: The system is based on an entirely new class of computing system that incorporates blade servers based on Intel® Xeon® processor E5-2600/4600 v3 and E7-2800 v3 family CPUs.
· Network: The system is integrated on a low-latency, lossless, 40-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing (HPC) networks, which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables needed, and by decreasing the power and cooling requirements.
· Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.
· Storage access: The system provides consolidated access to local storage, SAN storage, and network-attached storage (NAS) over the unified fabric. With storage access unified, Cisco UCS can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Small Computer System Interface over IP (iSCSI) protocols. This capability provides customers with a choice for storage access and investment protection. In addition, server administrators can preassign storage-access policies for system connectivity to storage resources, simplifying storage connectivity and management and helping increase productivity.
· Management: Cisco UCS uniquely integrates all system components, enabling the entire solution to be managed as a single entity by Cisco UCS Manager. The manager has an intuitive GUI, a CLI, and a robust API for managing all system configuration processes and operations.
Figure 4 Cisco Data Center Overview
Cisco UCS is designed to deliver:
· Reduced TCO and increased business agility
· Increased IT staff productivity through just-in-time provisioning and mobility support
· A cohesive, integrated system that unifies the technology in the data center; the system is managed, serviced, and tested as a whole
· Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand
· Industry standards supported by a partner ecosystem of industry leaders
Cisco UCS Manager provides unified, embedded management of all software and hardware components of the Cisco Unified Computing System across multiple chassis, rack servers, and thousands of virtual machines. Cisco UCS Manager manages Cisco UCS as a single entity through an intuitive GUI, a command-line interface (CLI), or an XML API for comprehensive access to all Cisco UCS Manager functions.
The Cisco UCS 6300 Series Fabric Interconnects are a core part of Cisco UCS, providing both network connectivity and management capabilities for the system. The Cisco UCS 6300 Series offers line-rate, low-latency, lossless 40 Gigabit Ethernet, FCoE, and Fibre Channel functions.
The fabric interconnects provide the management and communication backbone for the Cisco UCS B-Series Blade Servers and Cisco UCS 5100 Series Blade Server Chassis. All chassis, and therefore all blades, attached to the fabric interconnects become part of a single, highly available management domain. In addition, by supporting unified fabric, the Cisco UCS 6300 Series provides both LAN and SAN connectivity for all blades in the domain.
For networking, the Cisco UCS 6300 Series uses a cut-through architecture, supporting deterministic, low-latency, line-rate 40 Gigabit Ethernet on all ports, more than 2.4 terabits per second (Tbps) of switching capacity, and 320 Gbps of bandwidth per chassis IOM, independent of packet size and enabled services. The product series supports Cisco low-latency, lossless, 40 Gigabit Ethernet unified network fabric capabilities, increasing the reliability, efficiency, and scalability of Ethernet networks. The fabric interconnects support multiple traffic classes over a lossless Ethernet fabric, from the blade server through the interconnect. Significant TCO savings come from an FCoE-optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated.
Figure 5 Cisco UCS 6300 Series Fabric Interconnect
The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of the Cisco Unified Computing System, delivering a scalable and flexible blade server chassis. The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors.
Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for Cisco UCS Fabric Extenders. A passive mid-plane provides up to 40 Gbps of I/O bandwidth per server slot from each Fabric Extender. The chassis is capable of supporting 40 Gigabit Ethernet standards.
Figure 6 Cisco UCS 5108 Blade Chassis Front and Rear Views
The Cisco UCS B200 M5 Blade Server (Figure 7 and Figure 8) is a density-optimized, half-width blade server that supports two CPU sockets for Intel Xeon Gold 6140 processors and up to 24 DDR4 DIMMs. It supports one modular LAN-on-motherboard (LOM) dedicated slot for a Cisco virtual interface card (VIC) and one mezzanine adapter. In addition, the Cisco UCS B200 M5 supports an optional storage module that accommodates up to two SAS or SATA hard disk drives (HDDs) or solid-state disk (SSD) drives. You can install up to eight Cisco UCS B200 M5 servers in a chassis, mixing them with other models of Cisco UCS blade servers in the chassis if desired.
Figure 7 Cisco UCS B200 M5 Front View
Figure 8 Cisco UCS B200 M5 Back View
Cisco UCS combines Cisco UCS B-Series Blade Servers and C-Series Rack Servers with networking and storage access into a single converged system with simplified management, greater cost efficiency and agility, and increased visibility and control. The Cisco UCS B200 M5 Blade Server is one of the newest servers in the Cisco UCS portfolio.
The Cisco UCS B200 M5 delivers performance, flexibility, and optimization for data centers and remote sites. This enterprise-class server offers market-leading performance, versatility, and density without compromise for workloads ranging from web infrastructure to distributed databases. The Cisco UCS B200 M5 can quickly deploy stateless physical and virtual workloads with the programmable ease of use of the Cisco UCS Manager software and simplified server access with Cisco® SingleConnect technology. Based on the Intel Xeon Gold 6140 processor family, it offers up to 3 TB of memory using 128-GB DIMMs, up to two disk drives, and up to 320 Gbps of I/O throughput. The Cisco UCS B200 M5 offers exceptional levels of performance, flexibility, and I/O throughput to run your most demanding applications.
In addition, Cisco UCS has the architectural advantage of not having to power and cool excess switches, NICs, and HBAs in each blade server chassis. With a larger power budget per blade server, it provides uncompromised expandability and capabilities, as in the new Cisco UCS B200 M5 server with its leading memory-slot capacity and drive capacity.
The Cisco UCS B200 M5 Blade Server delivers performance, flexibility, and optimization for deployments in data centers, in the cloud, and at remote sites. This enterprise-class server offers market-leading performance, versatility, and density without compromise for workloads including Virtual Desktop Infrastructure (VDI), web infrastructure, distributed databases, converged infrastructure, and enterprise applications such as Oracle and SAP HANA. The Cisco UCS B200 M5 server can quickly deploy stateless physical and virtual workloads through programmable, easy-to-use Cisco UCS Manager software and simplified server access through Cisco SingleConnect technology. It includes the following:
· Latest Intel® Xeon® Scalable processors with up to 28 cores per socket
· Up to 24 DDR4 DIMMs for improved performance
· Intel 3D XPoint-ready support, with built-in support for next-generation nonvolatile memory technology
· Two GPUs
· Two Small-Form-Factor (SFF) drives
· Two Secure Digital (SD) cards or M.2 SATA drives
· Up to 80 Gbps of I/O throughput
The Cisco UCS B200 M5 server is a half-width blade. Up to eight servers can reside in the 6-Rack-Unit (6RU) Cisco UCS 5108 Blade Server Chassis, offering one of the highest densities of servers per rack unit of blade chassis in the industry. You can configure the Cisco UCS B200 M5 to meet your local storage requirements without having to buy, power, and cool components that you do not need.
The Cisco UCS B200 M5 provides these main features:
· Up to two Intel Xeon Scalable CPUs with up to 28 cores per CPU
· 24 DIMM slots for industry-standard DDR4 memory at speeds up to 2666 MHz, with up to 3 TB of total memory when using 128-GB DIMMs
· Modular LAN On Motherboard (mLOM) card with Cisco UCS Virtual Interface Card (VIC) 1340, a 2-port, 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)–capable mLOM mezzanine adapter
· Optional rear mezzanine VIC with two 40-Gbps unified I/O ports or two sets of 4 x 10-Gbps unified I/O ports, delivering 80 Gbps to the server; adapts to either 10- or 40-Gbps fabric connections
· Two optional, hot-pluggable, hard-disk drives (HDDs), solid-state drives (SSDs), or NVMe 2.5-inch drives with a choice of enterprise-class RAID or passthrough controllers
· Cisco FlexStorage local drive storage subsystem, which provides flexible boot and local storage capabilities and allows you to boot from dual, mirrored SD cards
· Support for up to two optional GPUs
· Support for up to one rear storage mezzanine card
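The 3 TB memory figure cited for the B200 M5 follows directly from the slot count and the largest supported DIMM size; a quick arithmetic check (assuming the conventional 1 TB = 1024 GB for DIMM capacities):

```python
dimm_slots = 24     # DIMM slots on the Cisco UCS B200 M5
dimm_size_gb = 128  # largest DDR4 DIMM size cited in this design

total_gb = dimm_slots * dimm_size_gb
print(f"Maximum memory: {total_gb} GB = {total_gb / 1024:.0f} TB")  # 3072 GB = 3 TB
```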
Table 1 Product Specifications
Table 2 System Requirements
Table 3 Ordering Information
Table 4 Capabilities and Features
For detailed information, refer to the Cisco UCS B200 M5 Blade Server Spec Sheet and the Cisco UCS B200 M5 Blade Server Data Sheet.
The Cisco UCS Virtual Interface Card (VIC) 1340 (Figure 9) is a 2-port 40-Gbps Ethernet or dual 4 x 10-Gbps Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) designed exclusively for the M5 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional port expander, the Cisco UCS VIC 1340 is enabled for two ports of 40-Gbps Ethernet.
The Cisco UCS VIC 1340 enables a policy-based, stateless, agile server infrastructure that can present over 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1340 supports Cisco® Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment and management.
Figure 10 illustrates the Cisco UCS VIC 1340 Virtual Interface Card deployed in the Cisco UCS B200 M5 Blade Server.
Figure 10 Cisco UCS VIC 1340 Deployed in the Cisco UCS B200 M5
The Cisco Nexus 93180YC-FX Switch has 48 1/10/25-Gbps Small Form Pluggable Plus (SFP+) ports and 6 40/100-Gbps Quad SFP+ (QSFP+) uplink ports. All ports are line rate, delivering 3.6 Tbps of throughput in a 1-rack-unit (1RU) form factor.
· Includes top-of-rack, fabric extender aggregation, or middle-of-row fiber-based server access connectivity for traditional and leaf-spine architectures
· Includes leaf node support for Cisco ACI architecture
· Increase scale and simplify management through Cisco Nexus 2000 Fabric Extender support
· Enhanced Cisco NX-OS Software is designed for performance, resiliency, scalability, manageability, and programmability
· ACI-ready infrastructure helps users take advantage of automated policy-based systems management
· Virtual extensible LAN (VXLAN) routing provides network services
· Rich traffic flow telemetry with line-rate data collection
· Real-time buffer utilization per port and per queue, for monitoring traffic micro-bursts and application traffic patterns
· Cisco Tetration Analytics Platform support with built-in hardware sensors for rich traffic flow telemetry and line-rate data collection
· Cisco Nexus Data Broker support for network traffic monitoring and analysis
· High-performance, non-blocking architecture
· Easily deployed in either a hot-aisle or a cold-aisle configuration
· Redundant, hot-swappable power supplies and fan trays
· Pre-boot execution environment (PXE) and Power-On Auto Provisioning (POAP) support allows for simplified software upgrades and configuration file installation
· Automate and configure switches with DevOps tools like Puppet, Chef, and Ansible
· An intelligent API offers switch management through remote procedure calls (RPCs), using JSON or XML encoding, over an HTTP/HTTPS infrastructure
· Python scripting gives programmatic access to the switch command-line interface (CLI)
· Includes hot and cold patching and online diagnostics
· A Cisco 40-Gb bidirectional transceiver allows for reuse of an existing 10 Gigabit Ethernet multimode cabling plant for 40 Gigabit Ethernet
· Support for 10-Gb and 25-Gb access connectivity and 40-Gb and 100-Gb uplinks facilitate data centers migrating switching infrastructure to faster speeds
· 1.44 Tbps of bandwidth in a 1 RU form factor
· 48 fixed 1/10-Gbps SFP+ ports
· 6 fixed 40-Gbps QSFP+ for uplink connectivity that can be turned into 10 Gb ports through a QSFP to SFP or SFP+ Adapter (QSA)
· Latency of 1 to 2 microseconds
· Front-to-back or back-to-front airflow configurations
· 1+1 redundant hot-swappable 80 Plus Platinum-certified power supplies
· Hot swappable 2+1 redundant fan tray
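The API and Python-scripting bullets above refer to NX-API, which accepts NX-OS CLI commands as JSON-RPC calls over HTTP/HTTPS. The sketch below builds a request body of the documented JSON-RPC shape; the switch hostname and credentials are placeholders, and the HTTP request itself is shown only as a comment rather than executed:

```python
import json

def nxapi_jsonrpc_payload(commands):
    """Build an NX-API JSON-RPC request body for a list of CLI commands."""
    return [
        {
            "jsonrpc": "2.0",
            "method": "cli",                         # execute a CLI command
            "params": {"cmd": cmd, "version": 1},
            "id": i + 1,                             # unique id per command
        }
        for i, cmd in enumerate(commands)
    ]

payload = nxapi_jsonrpc_payload(["show version", "show interface brief"])
body = json.dumps(payload)

# To send this to a switch (hostname and credentials are placeholders):
#   import requests
#   requests.post("https://<switch>/ins", data=body,
#                 headers={"Content-Type": "application/json-rpc"},
#                 auth=("<user>", "<password>"))
print(body)
```

NX-API must first be enabled on the switch (the `feature nxapi` configuration command); the response mirrors the request ids, which makes batching multiple show commands in one call straightforward.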
Figure 11 Cisco Nexus 93180YC-FX Switch
The Cisco MDS 9148S 16G Multilayer Fabric Switch is the next generation of the highly reliable Cisco MDS 9100 Series Switches. It includes up to 48 auto-sensing, line-rate, 16-Gbps Fibre Channel ports in a compact, easy-to-deploy-and-manage 1-rack-unit (1RU) form factor. In all, the Cisco MDS 9148S is a powerful and flexible switch that delivers high performance and comprehensive enterprise-class features at an affordable price.
The MDS 9148S uses a pay-as-you-grow licensing model: you can scale from a 12-port base license to all 48 ports by adding incremental 12-port licenses, so customers pay for and activate only the ports they need.
The MDS 9148S has dual power supplies and fan trays for physical redundancy. Software features such as In-Service Software Upgrade (ISSU) and In-Service Software Downgrade (ISSD) allow code to be upgraded or downgraded without reloading the switch and without interrupting live traffic.
Figure 12 Cisco MDS 9148S Fibre Channel Switch
Benefits
· Flexibility for growth and virtualization
· Easy deployment and management
· Optimized bandwidth utilization and reduced downtime
· Enterprise-class features and reliability at low cost
Features
· PowerOn Auto Provisioning and intelligent diagnostics
· In-Service Software Upgrade and dual redundant hot-swappable power supplies for high availability
· Role-based authentication, authorization, and accounting services to support regulatory requirements
· High-performance interswitch links with multipath load balancing
· Smart zoning and virtual output queuing
· Hardware-based slow port detection and recovery
Performance and Port Configuration
· 2/4/8/16-Gbps auto-sensing with 16 Gbps of dedicated bandwidth per port
· Up to 256 buffer credits per group of 4 ports (64 per port default, 253 maximum for a single port in the group)
· Supports configurations of 12, 24, 36, or 48 active ports, with pay-as-you-grow, on-demand licensing
Advanced Functions
· Virtual SAN (VSAN)
· Inter-VSAN Routing (IVR)
· PortChannel with multipath load balancing
· Flow-based and zone-based QoS
This Cisco Validated Design includes VMware vSphere 6.5.
VMware provides virtualization software. VMware’s enterprise-class hypervisors for servers, VMware vSphere ESX and vSphere ESXi, are bare-metal hypervisors that run directly on server hardware without requiring an additional underlying operating system. VMware vCenter Server for vSphere provides central management and complete control and visibility into clusters, hosts, virtual machines, storage, networking, and other critical elements of your virtual infrastructure.
VMware vSphere 6.5 introduces many enhancements to vSphere Hypervisor, VMware virtual machines, vCenter Server, virtual storage, and virtual networking, further extending the core capabilities of the vSphere platform.
vSphere 6.5 is one of the most feature-rich releases of vSphere in some time, and the vCenter Server Appliance takes a leading role in this release. The installer has been overhauled with a modern look and feel, and it is now supported on Linux and macOS in addition to Microsoft Windows. The vCenter Server Appliance also gains several exclusive features:
· Migration
· Improved Appliance Management
· VMware Update Manager
· Native High Availability
· Built-in Backup / Restore
vSphere 6.5 includes a fully supported version of the HTML5-based vSphere Client that runs alongside the vSphere Web Client. The vSphere Client is built into vCenter Server 6.5 (both Windows and Appliance) and is enabled by default. While the vSphere Client does not yet have full feature parity with the Web Client, it prioritizes many of the day-to-day tasks of administrators, and VMware continues to seek feedback on what is missing so that customers can use it full-time. The vSphere Web Client remains accessible via “http://<vcenter_fqdn>/vsphere-client”, while the vSphere Client is reachable via “http://<vcenter_fqdn>/ui”. VMware will also periodically update the vSphere Client outside the normal vCenter Server release cycle; to make staying up to date simple, the vSphere Client can be updated without affecting the rest of vCenter Server.
The following are some of the benefits of the new vSphere Client:
· Clean, consistent UI built on VMware’s new Clarity UI standards (to be adopted across our portfolio)
· Built on HTML5 so it is truly a cross-browser and cross-platform application
· No browser plugins to install/manage
· Integrated into vCenter Server for 6.5 and fully supported
· Fully supports Enhanced Linked Mode
· Users of the earlier Fling release have been extremely positive about its performance
vSphere 6.5 introduces a number of new features in the hypervisor:
· Scalability Improvements
ESXi 6.5 continues the dramatic scalability increases of the platform. Since vSphere Hypervisor 6.0, clusters can scale to as many as 64 hosts, up from 32 in earlier releases, and with 64 hosts a single cluster can support 8000 virtual machines. This capability enables greater consolidation ratios, more efficient use of VMware vSphere Distributed Resource Scheduler (DRS), and fewer clusters that must be separately managed. Each vSphere Hypervisor 6.5 instance can support up to 480 logical CPUs, 12 terabytes (TB) of RAM, and 1024 virtual machines. By using the newest hardware advances, ESXi 6.5 enables the virtualization of applications that previously had been thought to be non-virtualizable.
· Security Enhancements
- Account management: ESXi 6.5 enables management of local accounts on the ESXi server using new ESXi CLI commands. The capability to add, list, remove, and modify accounts across all hosts in a cluster can be centrally managed using a vCenter Server system. Previously, the account and permission management functions for ESXi hosts were available only for direct host connections. The setup, removal, and listing of local permissions on ESXi servers can also be centrally managed.
- Account lockout: ESXi Host Advanced System Settings have two new options for the management of failed local account login attempts and account lockout duration. These parameters affect Secure Shell (SSH) and vSphere Web Services connections, but not ESXi direct console user interface (DCUI) or console shell access.
- Password complexity rules: In previous versions of ESXi, password complexity changes had to be made by manually editing the /etc/pam.d/passwd file on each ESXi host. In vSphere 6.0, an entry in Host Advanced System Settings enables changes to be centrally managed for all hosts in a cluster.
- Improved auditability of ESXi administrator actions: Prior to vSphere 6.0, actions at the vCenter Server level by a named user appeared in ESXi logs with the vpxuser username: for example, [user=vpxuser]. In vSphere 6.5, all actions at the vCenter Server level for an ESXi server appear in the ESXi logs with the vCenter Server username: for example, [user=vpxuser: DOMAIN\User]. This approach provides a better audit trail for actions run on a vCenter Server instance that conducted corresponding tasks on the ESXi hosts.
- Flexible lockdown modes: Prior to vSphere 6.5, only one lockdown mode was available. Feedback from customers indicated that this lockdown mode was inflexible in some use cases. With vSphere 6.5, two lockdown modes are available:
o In normal lockdown mode, DCUI access is not stopped, and users on the DCUI access list can access the DCUI.
o In strict lockdown mode, the DCUI is stopped.
- Exception users: vSphere 6.0 introduced a function called exception users. Exception users are local accounts or Microsoft Active Directory accounts with permissions defined locally on the host to which these users have access. Exception users are not recommended for general user accounts, but they are recommended for third-party applications (service accounts, for example) that need host access when either normal or strict lockdown mode is enabled. Permissions on these accounts should be set to the bare minimum required for the application to perform its task; use read-only permissions on the ESXi host where possible.
- Smart card authentication to DCUI: This function is for U.S. federal customers only. It enables DCUI login access using a Common Access Card (CAC) and Personal Identity Verification (PIV). The ESXi host must be part of an Active Directory domain.
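The account-lockout behavior described above is governed by two ESXi advanced options, Security.AccountLockFailures and Security.AccountUnlockTime. As an illustrative sketch only (the option paths are standard in ESXi 6.x, but verify them against your build, and the chosen values here are examples, not recommendations from this design), the helper below assembles the esxcli commands an administrator could run per host:

```python
def lockout_commands(max_failures, unlock_seconds):
    """Return esxcli commands that set the ESXi account-lockout policy.

    max_failures   -- failed login attempts before the account locks
    unlock_seconds -- seconds before a locked account unlocks automatically
    """
    return [
        f"esxcli system settings advanced set -o /Security/AccountLockFailures -i {max_failures}",
        f"esxcli system settings advanced set -o /Security/AccountUnlockTime -i {unlock_seconds}",
    ]

# Example values only: lock after 5 failures, unlock after 15 minutes.
for cmd in lockout_commands(5, 900):
    print(cmd)
```

Because these are per-host settings, the same values can also be applied centrally through Host Advanced System Settings in vCenter Server, as the text above notes.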
Enterprise IT organizations are tasked with the challenge of provisioning Microsoft Windows apps and desktops while managing cost, centralizing control, and enforcing corporate security policy. Deploying Windows apps to users in any location, regardless of the device type and available network bandwidth, enables a mobile workforce that can improve productivity. With Citrix XenDesktop 7.15, IT can effectively control app and desktop provisioning while securing data assets and lowering capital and operating expenses.
The XenDesktop 7.15 release offers these benefits:
· Comprehensive virtual desktop delivery for any use case. The XenDesktop 7.15 release incorporates the full power of XenApp, delivering full desktops or just applications to users. Administrators can deploy both XenApp published applications and desktops (to maximize IT control at low cost) or personalized VDI desktops (with simplified image management) from the same management console. Citrix XenDesktop 7.15 leverages common policies and cohesive tools to govern both infrastructure resources and user access.
· Simplified support and choice of BYO (Bring Your Own) devices. XenDesktop 7.15 brings thousands of corporate Microsoft Windows-based applications to mobile devices with a native-touch experience and optimized performance. HDX technologies create a “high-definition” user experience, even for graphics-intensive design and engineering applications.
· Lower cost and complexity of application and desktop management. XenDesktop 7.15 helps IT organizations take advantage of agile and cost-effective cloud offerings, allowing the virtualized infrastructure to flex and meet seasonal demands or the need for sudden capacity changes. IT organizations can deploy XenDesktop application and desktop workloads to private or public clouds.
· Protection of sensitive information through centralization. XenDesktop decreases the risk of corporate data loss, enabling access while securing intellectual property and centralizing applications since assets reside in the datacenter.
· Virtual Delivery Agent improvements. Universal print server and driver enhancements and support for HDX 3D Pro graphics acceleration for Windows 10 are key additions in XenDesktop 7.15.
· Improved high-definition user experience. XenDesktop 7.15 continues the evolutionary display protocol leadership with enhanced Thinwire display remoting protocol and Framehawk support for HDX 3D Pro.
Citrix XenApp and XenDesktop are application and desktop virtualization solutions built on a unified architecture so they're simple to manage and flexible enough to meet the needs of all your organization's users. XenApp and XenDesktop have a common set of management tools that simplify and automate IT tasks. You use the same architecture and management tools to manage public, private, and hybrid cloud deployments as you do for on premises deployments.
Citrix XenApp delivers:
· XenApp published apps, also known as server-based hosted applications: These are applications hosted from Microsoft Windows servers to any type of device, including Windows PCs, Macs, smartphones, and tablets. Some XenApp editions include technologies that further optimize the experience of using Windows applications on a mobile device by automatically translating native mobile-device display, navigation, and controls to Windows applications; enhancing performance over mobile networks; and enabling developers to optimize any custom Windows application for any mobile environment.
· XenApp published desktops, also known as server-hosted desktops: These are inexpensive, locked-down Windows virtual desktops hosted from Windows server operating systems. They are well suited for users, such as call center employees, who perform a standard set of tasks.
· Virtual machine–hosted apps: These are applications hosted from machines running Windows desktop operating systems for applications that can’t be hosted in a server environment.
· Windows applications delivered with Microsoft App-V: These applications use the same management tools that you use for the rest of your XenApp deployment.
· Citrix XenDesktop: Includes significant enhancements to help customers deliver Windows apps and desktops as mobile services while addressing management complexity and associated costs. Enhancements in this release include:
· Unified product architecture for XenApp and XenDesktop: The FlexCast Management Architecture (FMA). This release supplies a single set of administrative interfaces to deliver both hosted-shared applications (RDS) and complete virtual desktops (VDI). Unlike earlier releases that separately provisioned Citrix XenApp and XenDesktop farms, the XenDesktop 7.15 release allows administrators to deploy a single infrastructure and use a consistent set of tools to manage mixed application and desktop workloads.
· Support for extending deployments to the cloud. This release provides the ability for hybrid cloud provisioning from Microsoft Azure, Amazon Web Services (AWS) or any Cloud Platform-powered public or private cloud. Cloud deployments are configured, managed, and monitored through the same administrative consoles as deployments on traditional on-premises infrastructure.
Citrix XenDesktop delivers:
· VDI desktops: These virtual desktops each run a Microsoft Windows desktop operating system rather than running in a shared, server-based environment. They can provide users with their own desktops that they can fully personalize.
· Hosted physical desktops: This solution is well suited for providing secure access to powerful physical machines, such as blade servers, from within your data center.
· Remote PC access: This solution allows users to log in to their physical Windows PC from anywhere over a secure XenDesktop connection.
· Server VDI: This solution is designed to provide hosted desktops in multitenant, cloud environments.
· Capabilities that allow users to continue to use their virtual desktops: These capabilities let users continue to work while not connected to your network.
This product release includes the following new and enhanced features:
Some XenDesktop editions include the features available in XenApp.
Deployments that span widely-dispersed locations connected by a WAN can face challenges due to network latency and reliability. Configuring zones can help users in remote regions connect to local resources without forcing connections to traverse large segments of the WAN. Using zones allows effective Site management from a single Citrix Studio console, Citrix Director, and the Site database. This saves the costs of deploying, staffing, licensing, and maintaining additional Sites containing separate databases in remote locations.
Zones can be helpful in deployments of all sizes. You can use zones to keep applications and desktops closer to end users, which improves performance.
For more information, see the Zones article.
When you configure the databases during Site creation, you can now specify separate locations for the Site, Logging, and Monitoring databases. Later, you can specify different locations for all three databases. In previous releases, all three databases were created at the same address, and you could not specify a different address for the Site database later.
You can now add more Delivery Controllers when you create a Site, as well as later. In previous releases, you could add more Controllers only after you created the Site.
For more information, see the Databases and Controllers articles.
Configure application limits to help manage application use. For example, you can use application limits to manage the number of users accessing an application simultaneously. Similarly, application limits can be used to manage the number of simultaneous instances of resource-intensive applications; this can help maintain server performance and prevent deterioration in service.
For more information, see the Manage applications article.
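As a sketch, application limits can also be set from the Citrix Broker PowerShell SDK. The application name below is a hypothetical placeholder; the cmdlets and parameters are from the Broker SDK:

```powershell
# Hedged example: cap a resource-intensive published application at 50
# concurrent instances site-wide and one instance per user.
# "CAD Viewer" is a hypothetical application name.
Add-PSSnapin Citrix.Broker.Admin.V2

Get-BrokerApplication -Name "CAD Viewer" |
    Set-BrokerApplication -MaxTotalInstances 50 -MaxPerUserInstances 1
```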
You can now choose to repeat a notification message that is sent to affected machines before the following types of actions begin:
· Updating machines in a Machine Catalog using a new master image
· Restarting machines in a Delivery Group according to a configured schedule
If you indicate that the first message should be sent to each affected machine 15 minutes before the update or restart begins, you can also specify that the message is repeated every five minutes until the update/restart begins.
For more information, see the Manage Machine Catalogs and Manage Delivery Groups articles.
By default, sessions roam between client devices with the user. When the user launches a session and then moves to another device, the same session is used and applications are available on both devices. The applications follow, regardless of the device or whether current sessions exist. Similarly, printers and other resources assigned to the application follow.
You can now use the PowerShell SDK to tailor session roaming. This was an experimental feature in the previous release.
For more information, see the Sessions article.
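As an illustration of tailoring session roaming with the PowerShell SDK, the entitlement rule for a Delivery Group can be modified as follows. The rule name is a hypothetical placeholder; `SessionReconnection` accepts `Always`, `DisconnectedOnly`, or `SameEndpointOnly`:

```powershell
# Hedged example: restrict reconnection so a desktop session follows the
# user only back to the same endpoint. "HR-Desktops" is a hypothetical
# entitlement policy rule name.
Add-PSSnapin Citrix.Broker.Admin.V2

Set-BrokerEntitlementPolicyRule -Name "HR-Desktops" `
    -SessionReconnection SameEndpointOnly
```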
When using the PowerShell SDK to create or update a Machine Catalog, you can now select a template from other hypervisor connections. This is in addition to the currently-available choices of VM images and snapshots.
See the System requirements article for full support information. Information about support for third-party product versions is updated periodically.
By default, SQL Server 2014 SP2 Express is installed when installing the Controller, if an existing supported SQL Server installation is not detected.
You can install Studio or VDAs for Windows Desktop OS on machines running Windows 10.
You can create connections to Microsoft Azure virtualization resources.
Figure 13 Logical Architecture of Citrix XenDesktop
Most enterprises struggle to keep up with the proliferation and management of computers in their environments. Each computer, whether it is a desktop PC, a server in a data center, or a kiosk-type device, must be managed as an individual entity. The benefits of distributed processing come at the cost of distributed management. It costs time and money to set up, update, support, and ultimately decommission each computer. The initial cost of the machine is often dwarfed by operating costs.
Citrix PVS takes a very different approach from traditional imaging solutions by fundamentally changing the relationship between hardware and the software that runs on it. By streaming a single shared disk image (vDisk) rather than copying images to individual machines, PVS enables organizations to reduce the number of disk images that they manage, even as the number of machines continues to grow, simultaneously providing the efficiency of centralized management and the benefits of distributed processing.
In addition, because machines stream disk data dynamically and in real time from a single shared image, machine image consistency is essentially ensured. At the same time, the configuration, applications, and even the OS of large pools of machines can be completely changed in the time it takes the machines to reboot.
Using PVS, any vDisk can be configured in standard-image mode. A vDisk in standard-image mode allows many computers to boot from it simultaneously, greatly reducing the number of images that must be maintained and the amount of storage that is required. The vDisk is in read-only format, and the image cannot be changed by target devices.
If you manage a pool of servers that work as a farm, such as Citrix XenApp servers or web servers, maintaining a uniform patch level on your servers can be difficult and time consuming. With traditional imaging solutions, you start with a clean golden master image, but as soon as a server is built with the master image, you must patch that individual server along with all the other individual servers. Rolling out patches to individual servers in your farm is not only inefficient, but the results can also be unreliable. Patches often fail on an individual server, and you may not realize you have a problem until users start complaining or the server has an outage. After that happens, getting the server resynchronized with the rest of the farm can be challenging, and sometimes a full reimaging of the machine is required.
With Citrix PVS, patch management for server farms is simple and reliable. You start by managing your golden image, and you continue to manage that single golden image. All patching is performed in one place and then streamed to your servers when they boot. Server build consistency is assured because all your servers use a single shared copy of the disk image. If a server becomes corrupted, simply reboot it, and it is instantly back to the known good state of your master image. Upgrades are extremely fast to implement. After you have your updated image ready for production, you simply assign the new image version to the servers and reboot them. You can deploy the new image to any number of servers in the time it takes them to reboot. Just as important, rollback can be performed in the same way, so problems with new images do not need to take your servers or your users out of commission for an extended period of time.
Because Citrix PVS is part of Citrix XenDesktop, desktop administrators can use PVS’s streaming technology to simplify, consolidate, and reduce the costs of both physical and virtual desktop delivery. Many organizations are beginning to explore desktop virtualization. Although virtualization addresses many of IT’s needs for consolidation and simplified management, deploying it also requires deployment of supporting infrastructure. Without PVS, storage costs can make desktop virtualization too costly for the IT budget. However, with PVS, IT can reduce the amount of storage required for VDI by as much as 90 percent. And with a single image to manage instead of hundreds or thousands of desktops, PVS significantly reduces the cost, effort, and complexity for desktop administration.
Different types of workers across the enterprise need different types of desktops. Some require simplicity and standardization, and others require high performance and personalization. XenDesktop can meet these requirements in a single solution using Citrix FlexCast delivery technology. With FlexCast, IT can deliver every type of virtual desktop, each specifically tailored to meet the performance, security, and flexibility requirements of each individual user.
Not all desktop applications can be supported by virtual desktops. For these scenarios, IT can still reap the benefits of consolidation and single-image management. Desktop images are stored and managed centrally in the data center and streamed to physical desktops on demand. This model works particularly well for standardized desktops, such as those in lab and training environments, call centers, and thin-client devices used to access virtual desktops.
Citrix PVS streaming technology allows computers to be provisioned and re-provisioned in real time from a single shared disk image. With this approach, administrators can completely eliminate the need to manage and patch individual systems. Instead, all image management is performed on the master image. The local hard drive of each system can be used for runtime data caching or, in some scenarios, removed from the system entirely, which reduces power use, system failure rate, and security risk.
The PVS solution’s infrastructure is based on software-streaming technology. After PVS components are installed and configured, a vDisk is created from a device’s hard drive by taking a snapshot of the OS and application image and then storing that image as a vDisk file on the network. A device used for this process is referred to as a master target device. The devices that use the vDisks are called target devices. vDisks can exist on a Provisioning Server, on a file share, or, in larger deployments, on a storage system with which PVS can communicate (iSCSI, SAN, network-attached storage [NAS], or Common Internet File System [CIFS]). vDisks can be assigned to a single target device in private-image mode, or to multiple target devices in standard-image mode.
The Citrix PVS infrastructure design directly relates to administrative roles within a PVS farm. The PVS administrator role determines which components that administrator can manage or view in the console.
A PVS farm contains several components. Figure 14 provides a high-level view of a basic PVS infrastructure and shows how PVS components might appear within that implementation.
Figure 14 Logical Architecture of Citrix Provisioning Services
The following new features are available with Provisioning Services 7.15:
· Linux streaming
· XenServer proxy using PVS-Accelerator
With the new NetApp A-Series All Flash FAS (AFF) controller lineup, NetApp provides industry-leading performance while continuing to provide a full suite of enterprise-grade data management and data protection features. The A-Series lineup offers double the IOPS while decreasing latency. The AFF A-Series lineup includes the A200, A300, A700, and A700s. These controllers and their specifications are listed in Table 5. For more information about the A-Series AFF controllers, see:
· http://www.netapp.com/us/products/storage-systems/all-flash-array/aff-a-series.aspx
· https://hwu.netapp.com/Controller/Index
Table 5 NetApp A-Series Controller Specifications
| Specification | AFF A200 | AFF A300 | AFF A700 | AFF A700s |
| NAS Scale-out | 2-8 nodes | 2-24 nodes | 2-24 nodes | 2-24 nodes |
| SAN Scale-out | 2-8 nodes | 2-12 nodes | 2-12 nodes | 2-12 nodes |
| Per HA Pair Specifications (Active-Active Dual Controller) | | | | |
| Maximum SSDs | 144 | 384 | 480 | 216 |
| Maximum Raw Capacity | 2.2PB | 5.9PB | 7.3PB | 3.3PB |
| Effective Capacity | 8.8PB | 23.8PB | 29.7PB | 13PB |
| Chassis Form Factor | 2U chassis with two HA controllers and 24 SSD slots | 3U chassis with two HA controllers | 8U chassis with two HA controllers | 4U chassis with two HA controllers and 24 SSD slots |
This solution utilizes the NetApp AFF A300, shown in Figure 15 and Figure 16. This controller provides the high-performance benefits of 40GbE and all-flash SSDs, offering better performance than previous models while occupying only 3U of rack space, versus 6U with the AFF8040. When combined with a 2U disk shelf of 3.8TB disks, this solution provides ample horsepower and over 90TB of raw capacity while occupying only 5U of valuable rack space. This makes it an ideal controller for a shared-workload converged infrastructure. The A700s would be an ideal fit for situations where more performance is needed.
The FlexPod reference architecture supports a variety of NetApp FAS controllers, such as the FAS9000, FAS8000, FAS2600, and FAS2500; AFF platforms such as the AFF8000; and legacy NetApp storage.
For more information about the AFF A-Series product family, see: http://www.netapp.com/us/products/storage-systems/all-flash-array/aff-a-series.aspx
The 40GbE cards are installed in expansion slot 2; the ports are e2a and e2e.
Figure 15 NetApp AFF A300 Front View
Figure 16 NetApp AFF A300 Rear View
Storage efficiency has always been a primary architectural design point of ONTAP. A wide array of features allows businesses to store more data using less space. In addition to deduplication and compression, businesses can store their data more efficiently by using features such as unified storage, multi-tenancy, thin provisioning, and NetApp Snapshot® technology.
Starting with ONTAP 9, NetApp guarantees that the use of NetApp storage efficiency technologies on AFF systems reduces the total logical capacity needed to store customer data by 75 percent, a data reduction ratio of 4:1. This space reduction comes from a combination of several technologies, such as deduplication, compression, and compaction, which add to the basic efficiency features provided by ONTAP.
Compaction, introduced in ONTAP 9, is the latest patented storage efficiency technology released by NetApp. In the NetApp WAFL® file system, all I/O takes up 4KB of space, even if it does not actually require 4KB of data. Compaction combines multiple blocks that are not using their full 4KB of space into one block, which can be stored on disk more efficiently to save space. This process is illustrated in Figure 17.
A cluster serves data through at least one and possibly multiple storage virtual machines (SVMs, formerly called Vservers). An SVM is a logical abstraction that represents the set of physical resources of the cluster. Data volumes and network logical interfaces (LIFs) are created and assigned to an SVM and may reside on any node in the cluster to which the SVM has been given access. An SVM may own resources on multiple nodes concurrently, and those resources can be moved non-disruptively from one node to another. For example, a flexible volume can be non-disruptively moved to a new node and aggregate, or a data LIF can be transparently reassigned to a different physical network port. The SVM abstracts the cluster hardware and it is not tied to any specific physical hardware.
An SVM can support multiple data protocols concurrently. Volumes within the SVM can be joined together to form a single NAS namespace, which makes all of an SVM's data available through a single share or mount point to NFS and CIFS clients. SVMs also support block-based protocols, and LUNs can be created and exported by using iSCSI, FC, or FCoE. Any or all of these data protocols can be configured for use within a given SVM.
Because it is a secure entity, an SVM is only aware of the resources that are assigned to it and has no knowledge of other SVMs and their respective resources. Each SVM operates as a separate and distinct entity with its own security domain. Tenants can manage the resources allocated to them through a delegated SVM administration account. Each SVM can connect to unique authentication zones such as Active Directory, LDAP, or NIS. A NetApp cluster can contain multiple SVMs. If you have multiple SVMs, you can delegate an SVM to a specific application. This allows administrators of the application to access only the dedicated SVMs and associated storage, increasing manageability and reducing risk.
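As a sketch, creating an SVM from the ONTAP CLI might look like the following. The SVM, volume, and aggregate names are hypothetical placeholders; delegation is then handled by unlocking the SVM's built-in vsadmin account:

```
vserver create -vserver svm-vdi -rootvolume svm_vdi_root -aggregate aggr1_node1 -rootvolume-security-style unix
```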
NetApp recommends implementing SAN boot for Cisco UCS servers in the FlexPod Datacenter solution. Doing so enables the operating system to be safely secured by the NetApp All Flash FAS storage system, providing better performance. In this design, FC SAN boot is validated.
In FC SAN boot, each Cisco UCS server boots from a LUN on the NetApp All Flash FAS storage through the Cisco MDS switch. The 16G FC storage ports, in this example 0g and 0h, are connected to the Cisco MDS switch. The FC LIFs are created on the physical ports, and each FC LIF is uniquely identified by its target WWPN. The storage system target WWPNs can be zoned with the server initiator WWPNs in the Cisco MDS switches. The FC boot LUN is exposed to the servers through the FC LIF using the MDS switch; this enables only the authorized server to have access to the boot LUN. Refer to Figure 18 for the port and LIF layout.
Figure 18 FC - SVM ports and LIF layout
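The zoning described above can be sketched as a Cisco MDS configuration fragment. The zone name, zoneset name, VSAN number, and WWPNs are all hypothetical placeholders:

```
! Hypothetical example: one UCS server initiator vHBA zoned with the two
! AFF A300 FC LIF target WWPNs (LIFs on ports 0g and 0h); all values are
! placeholders
zone name VDI-Host-01-A vsan 100
  member pwwn 20:00:00:25:b5:aa:00:01
  member pwwn 20:01:00:a0:98:aa:bb:01
  member pwwn 20:02:00:a0:98:aa:bb:02
zoneset name VDI-Fabric-A vsan 100
  member VDI-Host-01-A
zoneset activate name VDI-Fabric-A vsan 100
```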
Unlike NAS network interfaces, SAN network interfaces are not configured to fail over during a failure. Instead, if a network interface becomes unavailable, the host chooses a new optimized path to an available network interface. Asymmetric Logical Unit Access (ALUA) is a standard, supported by NetApp, that provides information about SCSI targets and allows a host to identify the best path to the storage.
ONTAP 9.3 brought an innovation in scale-out NAS file systems: NetApp FlexGroup volumes.
With FlexGroup volumes, a storage administrator can easily provision a massive single namespace in a matter of seconds. FlexGroup volumes have virtually no capacity or file count constraints outside of the physical limits of hardware or the total volume limits of ONTAP. Limits are determined by the overall number of constituent member volumes that work in collaboration to dynamically balance load and space allocation evenly across all members. There is no required maintenance or management overhead with a FlexGroup volume. You simply create the FlexGroup volume and share it with your NAS clients and ONTAP does the rest.
Figure 19 Illustration of FlexGroups
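Provisioning a FlexGroup volume from the ONTAP CLI can be sketched as follows. The SVM, volume, aggregate names, size, and junction path are hypothetical placeholders; ONTAP creates the constituent member volumes (here, four per listed aggregate) and balances load across them automatically:

```
volume create -vserver svm-vdi -volume fg_data -aggr-list aggr1_node1,aggr1_node2 -aggr-list-multiplier 4 -size 400TB -junction-path /fg_data
```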
There are many reasons to consider a virtual desktop solution, such as an ever-growing and diverse base of user devices, complexity in management of traditional desktops, security, and even Bring Your Own Device (BYOD) to work programs. The first step in designing a virtual desktop solution is to understand the user community and the types of tasks that are required to successfully execute their roles. The following user classifications are provided:
· Knowledge Workers today do not just work in their offices all day – they attend meetings, visit branch offices, work from home, and even coffee shops. These anywhere workers expect access to all of their same applications and data wherever they are.
· External Contractors are increasingly part of your everyday business. They need access to certain portions of your applications and data, yet administrators still have little control over the devices they use and the locations they work from. Consequently, IT is stuck making trade-offs on the cost of providing these workers a device vs. the security risk of allowing them access from their own devices.
· Task Workers perform a set of well-defined tasks. These workers access a small set of applications and have limited requirements from their PCs. However, since these workers are interacting with your customers, partners, and employees, they have access to your most critical data.
· Mobile Workers need access to their virtual desktop from everywhere, regardless of their ability to connect to a network. In addition, these workers expect the ability to personalize their PCs, by installing their own applications and storing their own data, such as photos and music, on these devices.
· Shared Workstation users are often found in state-of-the-art university and business computer labs, conference rooms, or training centers. Shared workstation environments have a constant requirement to re-provision desktops with the latest operating systems and applications as the needs of the organization change.
After the user classifications have been identified and the business requirements for each user classification have been defined, it becomes essential to evaluate the types of virtual desktops that are needed based on user requirements. There are essentially five potential desktop environments for each user:
· Traditional PC: A traditional PC is what typically constitutes a desktop environment: a physical device with a locally installed operating system.
· Hosted Shared Desktop: A hosted, server-based desktop is a desktop where the user interacts through a delivery protocol. With hosted, server-based desktops, a single installed instance of a server operating system, such as Microsoft Windows Server 2016, is shared by multiple users simultaneously. Each user receives a desktop "session" and works in an isolated memory space.
· Hosted Virtual Desktop: A hosted virtual desktop is a virtual desktop running on a virtualization layer (such as ESXi). The user does not work with and sit in front of the physical machine; instead, the user interacts with the desktop through a delivery protocol.
· Published Applications: Published applications run entirely on the Citrix XenApp server virtual machines and the user interacts through a delivery protocol. With published applications, a single installed instance of an application, such as Microsoft Office, is shared by multiple users simultaneously. Each user receives an application "session" and works in an isolated memory space.
· Streamed Applications: Streamed desktops and applications run entirely on the user's local client device and are sent from a server on demand. The user interacts with the application or desktop directly, but the resources may only be available while the device is connected to the network.
· Local Virtual Desktop: A local virtual desktop is a desktop running entirely on the user's local device and continues to operate when disconnected from the network. In this case, the user's local device is used as a type 1 hypervisor and is synced with the data center when the device is connected to the network.
When the desktop user groups and sub-groups have been identified, the next task is to catalog group application and data requirements. This can be one of the most time-consuming processes in the VDI planning exercise, but is essential for the VDI project’s success. If the applications and data are not identified and co-located, performance will be negatively affected.
The process of analyzing the variety of application and data pairs for an organization will likely be complicated by the inclusion of cloud applications, for example, SalesForce.com. This application and data analysis is beyond the scope of this Cisco Validated Design, but should not be omitted from the planning process. There are a variety of third-party tools available to assist organizations with this crucial exercise.
Now that user groups, their applications, and their data requirements are understood, some key project and solution sizing questions may be considered.
General project questions should be addressed at the outset, including:
§ Has a VDI pilot plan been created based on the business analysis of the desktop groups, applications, and data?
§ Is there infrastructure and budget in place to run the pilot program?
§ Are the required skill sets to execute the VDI project available? Can we hire or contract for them?
§ Do we have end user experience performance metrics identified for each desktop sub-group?
§ How will we measure success or failure?
§ What is the future implication of success or failure?
Below is a short, non-exhaustive list of sizing questions that should be addressed for each user sub-group:
§ What is the desktop OS planned? Windows 8 or Windows 10?
§ 32 bit or 64 bit desktop OS?
§ How many virtual desktops will be deployed in the pilot? In production? All Windows 8/10?
§ How much memory per target desktop group desktop?
§ Are there any rich media, Flash, or graphics-intensive workloads?
§ Are there any applications installed? Which application delivery methods will be used: Installed, Streamed, Layered, Hosted, or Local?
§ What is the OS planned for RDS Server Roles? Windows Server 2012 or Server 2016?
§ What is the hypervisor for the solution?
§ What is the storage configuration in the existing environment?
§ Are there sufficient IOPS available for the write-intensive VDI workload?
§ Will there be storage dedicated and tuned for VDI service?
§ Is there a voice component to the desktop?
§ Is anti-virus a part of the image?
§ What is the SQL server version for the database? SQL server 2012 or 2016?
§ Is user profile management (for example, non-roaming profile based) part of the solution?
§ What is the fault tolerance, failover, disaster recovery plan?
§ Are there additional desktop sub-group specific questions?
VMware vSphere has been identified as the hypervisor for both HSD Sessions and HVD based desktops.
VMware vSphere comprises the management infrastructure or virtual center server software and the hypervisor software that virtualizes the hardware resources on the servers. It offers features like Distributed Resource Scheduler, vMotion, high availability, Storage vMotion, VMFS, and a multi-pathing storage layer. More information on vSphere can be obtained at the VMware website: http://www.vmware.com/products/datacentervirtualization/vsphere/overview.html.
For this CVD, the hypervisor used was VMware ESXi 6.5 Update 1.
An ever growing and diverse base of user devices, complexity in management of traditional desktops, security, and even Bring Your Own (BYO) device to work programs are prime reasons for moving to a virtual desktop solution.
Citrix XenDesktop 7.15 integrates Hosted Shared and VDI desktop virtualization technologies into a unified architecture that enables a scalable, simple, efficient, and manageable solution for delivering Windows applications and desktops as a service.
Users can select applications from an easy-to-use “store” that is accessible from tablets, smartphones, PCs, Macs, and thin clients. XenDesktop delivers a native touch-optimized experience with HDX high-definition performance, even over mobile networks.
Collections of identical Virtual Machines (VMs) or physical computers are managed as a single entity called a Machine Catalog. In this CVD, VM provisioning relies on Citrix Provisioning Services to make sure that the machines in the catalog are consistent. In this CVD, machines in the Machine Catalog are configured to run either a Windows Server OS (for RDS hosted shared desktops) or a Windows Desktop OS (for hosted pooled VDI desktops).
To deliver desktops and applications to users, you create a Machine Catalog and then allocate machines from the catalog to users by creating Delivery Groups. Delivery Groups provide desktops, applications, or a combination of desktops and applications to users. Creating a Delivery Group is a flexible way of allocating machines and applications to users. In a Delivery Group, you can:
· Use machines from multiple catalogs
· Allocate a user to multiple machines
· Allocate multiple users to one machine
As part of the creation process, you specify the following Delivery Group properties:
· Users, groups, and applications allocated to Delivery Groups
· Desktop settings to match users' needs
· Desktop power management options
Figure 20 illustrates how users access desktops and applications through machine catalogs and delivery groups.
The Server OS and Desktop OS Machines configured in this CVD support the hosted shared desktops and hosted virtual desktops (both non-persistent and persistent).
Figure 20 Access Desktops and Applications through Machine Catalogs and Delivery Groups
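The catalog-to-Delivery-Group flow above can also be sketched with the Broker PowerShell SDK. The group, catalog, rule, and user-group names below are hypothetical placeholders:

```powershell
# Hedged example: create a Delivery Group for RDS hosted shared desktops,
# add machines from an existing catalog, and entitle a user group.
# "HSD-Std", "RDS-2016", and "DOMAIN\VDI-Users" are hypothetical names.
Add-PSSnapin Citrix.Broker.Admin.V2

$dg = New-BrokerDesktopGroup -Name "HSD-Std" -DesktopKind Shared `
    -SessionSupport MultiSession -DeliveryType DesktopsOnly

Get-BrokerMachine -CatalogName "RDS-2016" |
    Add-BrokerMachine -DesktopGroup $dg

New-BrokerEntitlementPolicyRule -Name "HSD-Std-Users" `
    -DesktopGroupUid $dg.Uid `
    -IncludedUserFilterEnabled $true `
    -IncludedUsers "DOMAIN\VDI-Users"
```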
Citrix XenDesktop 7.15 can be deployed with or without Citrix Provisioning Services (PVS). The advantage of using Citrix PVS is that it allows virtual machines to be provisioned and re-provisioned in real time from a single shared-disk image. In this way, administrators can completely eliminate the need to manage and patch individual systems and reduce the number of disk images that they manage, even as the number of machines continues to grow, simultaneously providing the efficiencies of centralized management with the benefits of distributed processing.
The Provisioning Services solution’s infrastructure is based on software-streaming technology. After installing and configuring Provisioning Services components, a single shared disk image (vDisk) is created from a device’s hard drive by taking a snapshot of the OS and application image, and then storing that image as a vDisk file on the network. A device that is used during the vDisk creation process is the Master target device. Devices or virtual machines that use the created vDisks are called target devices.
When a target device is turned on, it is set to boot from the network and to communicate with a Provisioning Server. Unlike thin-client technology, processing takes place on the target device.
Figure 21 Citrix Provisioning Services Functionality
The target device downloads the boot file from a Provisioning Server (Step 2) and boots. Based on the boot configuration settings, the appropriate vDisk is mounted on the Provisioning Server (Step 3). The vDisk software is then streamed to the target device as needed, appearing as a regular hard drive to the system.
Instead of immediately pulling all the vDisk contents down to the target device (as with traditional imaging solutions), the data is brought across the network in real time, as needed. This approach allows a target device to get a completely new operating system and set of software in the time it takes to reboot, dramatically decreasing the amount of network bandwidth required and making it possible to support a larger number of target devices on a network without impacting performance.
Citrix PVS can create desktops as Pooled or Private:
· Pooled Desktop: A pooled virtual desktop uses Citrix PVS to stream a standard desktop image to multiple desktop instances upon boot.
· Private Desktop: A private desktop is a single desktop assigned to one distinct user.
The alternative to Citrix Provisioning Services for pooled desktop deployments is Citrix Machine Creation Services (MCS), which is integrated with the XenDesktop Studio console.
When considering a PVS deployment, there are some design decisions that need to be made regarding the write cache for the target devices that leverage provisioning services. The write cache is a cache of all data that the target device has written. If data is written to the PVS vDisk in a caching mode, the data is not written back to the base vDisk. Instead, it is written to a write cache file in one of the following locations:
· Cache on device hard drive. Write cache exists as a file in NTFS format, located on the target-device’s hard drive. This option frees up the Provisioning Server since it does not have to process write requests and does not have the finite limitation of RAM.
· Cache on device hard drive persisted. (Experimental Phase) This is the same as “Cache on device hard drive,” except that the cache persists. At this time, this method is an experimental feature only, and is supported only for NT 6.1 or later (Windows 7 and Windows Server 2008 R2 and later). This method also requires a different bootstrap.
· Cache in device RAM. Write cache can exist as a temporary file in the target device’s RAM. This provides the fastest method of disk access since memory access is always faster than disk access.
· Cache in device RAM with overflow on hard disk. This method uses VHDX differencing format and is only available for Windows 10 and Server 2008 R2 and later. When RAM is zero, the target device write cache is only written to the local disk. When RAM is not zero, the target device write cache is written to RAM first. When RAM is full, the least recently used block of data is written to the local differencing disk to accommodate newer data on RAM. The amount of RAM specified is the non-paged kernel memory that the target device will consume.
· Cache on a server. Write cache can exist as a temporary file on a Provisioning Server. In this configuration, all writes are handled by the Provisioning Server, which can increase disk I/O and network traffic. For additional security, the Provisioning Server can be configured to encrypt write cache files. Since the write-cache file persists on the hard drive between reboots, encrypted data provides data protection in the event a hard drive is stolen.
· Cache on server persisted. This cache option allows for the saved changes between reboots. Using this option, a rebooted target device is able to retrieve changes made from previous sessions that differ from the read only vDisk image. If a vDisk is set to this method of caching, each target device that accesses the vDisk automatically has a device-specific, writable disk file created. Any changes made to the vDisk image are written to that file, which is not automatically deleted upon shutdown.
In this CVD, Provisioning Server 7.15 was used to manage Pooled/Non-Persistent VDI Machines and XenApp RDS Machines with “Cache in device RAM with Overflow on Hard Disk” for each virtual machine. This design enables good scalability to many thousands of desktops. Provisioning Server 7.15 was used for Active Directory machine account creation and management as well as for streaming the shared disk to the hypervisor hosts.
Two examples of typical XenDesktop deployments are the following:
· A distributed components configuration
· A multiple site configuration
Since XenApp and XenDesktop 7.15 are based on a unified architecture, combined they can deliver a combination of Hosted Shared Desktops (HSDs, using a Server OS machine) and Hosted Virtual Desktops (HVDs, using a Desktop OS).
You can distribute the components of your deployment among a greater number of servers, or provide greater scalability and failover by increasing the number of controllers in your site. You can install management consoles on separate computers to manage the deployment remotely. A distributed deployment is necessary for an infrastructure based on remote access through NetScaler Gateway (formerly called Access Gateway).
Figure 22 shows an example of a distributed components configuration. A simplified version of this configuration is often deployed for an initial proof-of-concept (POC). The CVD described in this document deploys Citrix XenDesktop in a configuration that resembles this distributed components configuration. Two Cisco UCS B200 M4 blade servers host the required infrastructure services (AD, DNS, DHCP, License Server, SQL, Citrix XenDesktop management, and StoreFront servers).
Figure 22 Example of a Distributed Components Configuration
If you have multiple regional sites, you can use Citrix NetScaler to direct user connections to the most appropriate site and StoreFront to deliver desktops and applications to users.
Figure 23 depicts a multiple site configuration in which a site is created in each of two data centers. Having two sites globally, rather than just one, minimizes the amount of unnecessary WAN traffic.
You can use StoreFront to aggregate resources from multiple sites to provide users with a single point of access with NetScaler. A separate Studio console is required to manage each site; sites cannot be managed as a single entity. You can use Director to support users across sites.
Citrix NetScaler accelerates application performance, load balances servers, increases security, and optimizes the user experience. In this example, two NetScalers are used to provide a high availability configuration. The NetScalers are configured for Global Server Load Balancing and positioned in the DMZ to provide a multi-site, fault-tolerant solution.
Citrix Cloud services make it easy to deliver the Citrix portfolio of products as a service, simplifying the delivery and management of Citrix technologies, extending existing on-premises software deployments, and creating hybrid workspace services.
· Fast: Deploy apps and desktops, or complete secure digital workspaces in hours, not weeks.
· Adaptable: Choose to deploy on any cloud or virtual infrastructure — or a hybrid of both.
· Secure: Keep all proprietary information for your apps, desktops, and data under your control.
· Simple: Implement a fully integrated Citrix portfolio through a single management plane to simplify administration.
With Citrix XenDesktop 7.15, the method you choose to provide applications or desktops to users depends on the types of applications and desktops you are hosting and available system resources, as well as the types of users and user experience you want to provide.
Server OS machines | You want: Inexpensive server-based delivery to minimize the cost of delivering applications to a large number of users, while providing a secure, high-definition user experience. Your users: Perform well-defined tasks and do not require personalization or offline access to applications. Users may include task workers such as call center operators and retail workers, or users that share workstations. Application types: Any application. |
Desktop OS machines | You want: A client-based application delivery solution that is secure, provides centralized management, and supports a large number of users per host server (or hypervisor), while providing users with applications that display seamlessly in high-definition. Your users: Are internal, external contractors, third-party collaborators, and other provisional team members. Users do not require off-line access to hosted applications. Application types: Applications that might not work well with other applications or might interact with the operating system, such as .NET framework. These types of applications are ideal for hosting on virtual machines. Applications running on older operating systems such as Windows XP or Windows Vista, and older architectures, such as 32-bit or 16-bit. By isolating each application on its own virtual machine, if one machine fails, it does not impact other users. |
Remote PC Access | You want: Employees with secure remote access to a physical computer without using a VPN. For example, the user may be accessing their physical desktop PC from home or through a public Wi-Fi hotspot. Depending upon the location, you may want to restrict the ability to print or copy and paste outside of the desktop. This method enables BYO device support without migrating desktop images into the datacenter. Your users: Employees or contractors that have the option to work from home, but need access to specific software or data on their corporate desktops to perform their jobs remotely. Host: The same as Desktop OS machines. Application types: Applications that are delivered from an office computer and display seamlessly in high definition on the remote user's device. |
For the Cisco Validated Design described in this document, a mix of Windows Server 2016 based Hosted Shared Desktop sessions (RDS) and Windows 10 Hosted Virtual desktops (Statically assigned Persistent and Random Pooled) were configured and tested.
The architecture deployed is highly modular. While each customer’s environment might vary in its exact configuration, once the reference architecture contained in this document is built, it can easily be scaled as requirements and demands change. This includes scaling both up (adding additional resources within a Cisco UCS Domain) and out (adding additional Cisco UCS Domains and NetApp AFF Storage platform).
The Citrix solution includes Cisco networking, Cisco UCS, and NetApp AFF storage, which efficiently fits into a single data center rack, including the access layer network switches.
This validated design document details the deployment of multiple configurations scaling to 6000 users for a mixed XenDesktop workload featuring the following software:
· Citrix XenApp 7.15 Hosted Shared Virtual Desktops (HSD) with PVS write cache on NFS storage
· Citrix XenDesktop 7.15 Non-Persistent Hosted Virtual Desktops (HVD) with PVS write cache on NFS storage
· Citrix XenDesktop 7.15 Persistent Hosted Virtual Desktops (VDI) provisioned with MCS and stored on NFS storage
· Citrix Provisioning Server 7.15
· Citrix User Profile Manager 7.15
· Citrix StoreFront 7.15
· VMware vSphere ESXi 6.5 Update 1 Hypervisor
· Microsoft Windows Server 2016 and Windows 10 64-bit virtual machine Operating Systems
· Microsoft SQL Server 2016
Figure 24 Virtual Desktop and Application Workload Architecture
The workload contains the following hardware as shown in Figure 24:
· Two Cisco Nexus 93180YC-FX Layer 2 Access Switches
· Four Cisco UCS 5108 Blade Server Chassis with two built-in UCS-IOM-2208XP IO Modules
· Two Cisco UCS B200 M4 Blade servers with Intel Xeon E5-2660v3 2.60-GHz 10-core processors, 128GB 2133MHz RAM, and one Cisco VIC1340 mezzanine card for the hosted infrastructure, providing N+1 server fault tolerance
· Eight Cisco UCS B200 M5 Blade Servers with Intel Xeon Gold 6140 2.30-GHz 18-core processors, 768GB 2666MHz RAM, and one Cisco VIC1340 mezzanine card for the Hosted Shared Desktop workload, providing N+1 server fault tolerance at the workload cluster level
· Eleven Cisco UCS B200 M5 Blade Servers with Intel Xeon Gold 6140 2.30-GHz 18-core processors, 768GB 2666MHz RAM, and one Cisco VIC1340 mezzanine card for the Random Pooled desktops workload, providing N+1 server fault tolerance at the workload cluster level
· Nine Cisco UCS B200 M5 Blade Servers with Intel Xeon Gold 6140 2.30-GHz 18-core processors, 768GB 2666MHz RAM, and one Cisco VIC1340 mezzanine card for the Static (Full Clones) desktops workload, providing N+1 server fault tolerance at the workload cluster level
· NetApp AFF A300 Storage System with dual redundant controllers, 1x DS224C disk shelf, and 24x 3.8 TB solid-state drives providing storage and FC/NFS/CIFS connectivity.
The Login VSI test infrastructure is not part of this solution. The NetApp AFF A300 configuration is detailed later in this document.
Figure 25 outlines the logical architecture of the validated solution, which is designed to support up to 6000 users within a single 42U rack containing 32 blades in 4 chassis, with physical redundancy for the blade servers of each workload type.
Figure 25 Logical Architecture Overview
Table 6 lists the software versions of the primary products installed in the environment.
Vendor | Product | Version |
Cisco | UCS Component Firmware | 3.2(3d) bundle release |
Cisco | UCS Manager | 3.2(3d) bundle release |
Cisco | UCS B200 M4 Blades | 3.2(3d) bundle release |
Cisco | VIC 1340 | 4.1(1d) |
Cisco | UCS B200 M5 Blades | 3.2(3d) bundle release |
Cisco | UCS Performance Manager | 2.0.3 |
Citrix | XenApp VDA | 7.15 |
Citrix | XenDesktop VDA | 7.15 |
Citrix | XenDesktop Controller | 7.15 |
Citrix | Provisioning Services | 7.15 |
Citrix | StoreFront Services | 7.15 |
VMware | vCenter Server Appliance | 6.5.0.5973321 |
VMware | vSphere ESXi 6.5 Update 1a | 6.5.0.7967591 |
NetApp | Clustered Data ONTAP | 9.3 |
The Citrix XenDesktop solution described in this document provides details for configuring a fully redundant, highly available configuration. The configuration guidelines indicate which redundant component (A or B) is being configured with each step. For example, Nexus A and Nexus B identify the pair of Cisco Nexus switches that are configured. The Cisco UCS Fabric Interconnects are configured similarly.
This document is intended to allow the reader to configure the Citrix XenDesktop 7.15 customer environment as a stand-alone solution.
The VLAN configuration recommended for the environment includes a total of eight VLANs as outlined in Table 7.
VLAN Name | VLAN ID | VLAN Purpose |
Default | 1 | Native VLAN |
In-Band-Mgmt | 60 | VLAN for in-band management |
Infra-Mgmt | 61 | VLAN for Virtual Infrastructure |
CIFS | 62 | VLAN for CIFS traffic |
NFS | 63 | VLAN for Infrastructure NFS traffic |
vMotion | 66 | VLAN for VMware vMotion |
VDI | 102 | VLAN for Desktop traffic |
OB-Mgmt | 164 | VLAN for out-of-band management |
Five VMware Clusters were utilized in one vCenter datacenter to support the solution and testing environment:
· VDI Cluster FlexPod Data Center with Cisco UCS
- Infrastructure: Infra VMs (vCenter, Active Directory, DNS, DHCP, SQL Server, XenDesktop Controllers, Provisioning Servers, and NetApp VSC, VSMs, etc.)
- HSD-CLSTR: XenApp RDS VMs (Windows Server 2016 streamed with PVS)
- HVDNonPersistent-CLSTR: XenDesktop VDI VMs (Windows 10 64-bit non-persistent virtual desktops streamed with PVS)
- HVDPersistent-CLSTR: XenDesktop VDI VMs (Windows 10 64-bit persistent virtual desktops)
· VSI Launchers and Launcher Cluster
- LVS-Launcher-CLSTR: Login VSI Cluster (The Login VSI launcher infrastructure was connected using the same set of switches and vCenter instance, but was hosted on separate storage and servers.)
Figure 26 vCenter Data Center and Clusters Deployed
This section details the configuration and tuning that was performed on the individual components to produce a complete, validated solution.
Figure 27 Component Layers for the FlexPod Data Center with Cisco UCS
Figure 27 shows the architectural diagram for this study. The architecture is divided into three distinct layers:
· Cisco UCS Compute Platform
· Network Access layer and LAN
· Storage Access to the NetApp AFF A300
Figure 28 illustrates the physical connectivity configuration of the Citrix XenDesktop 7.15 environment.
Figure 28 Cabling Diagram of the FlexPod Datacenter with Cisco UCS
Table 8 through Table 13 list the details of all the connections in use.
Table 8 Cisco Nexus A Cabling Information
Local Device | Local Port | Connection | Remote Device | Remote Port |
Cisco Nexus 93180 A
| Eth1/49 | 40GbE | Cisco Nexus B | Eth1/49 |
Eth1/50 | 40GbE | Cisco Nexus B | Eth1/50 | |
Eth1/51 | 40GbE | Cisco UCS fabric interconnect A | Eth1/35 | |
Eth1/52 | 40GbE | Cisco UCS fabric interconnect B | Eth1/36 | |
Eth1/53 | 40GbE | NetApp Controller 1 | e2a | |
Eth1/54 | 40GbE | NetApp Controller 2 | e2e | |
MGMT0 | GbE | GbE management switch | Any |
For devices requiring GbE connectivity, use the GbE copper SFPs (GLC-T=).
Table 9 Cisco Nexus B Cabling Information
Local Device | Local Port | Connection | Remote Device | Remote Port |
Cisco Nexus 93180 B
| Eth1/49 | 40GbE | Cisco Nexus A | Eth1/49 |
Eth1/50 | 40GbE | Cisco Nexus A | Eth1/50 | |
Eth1/51 | 40GbE | Cisco UCS fabric interconnect B | Eth1/35 | |
Eth1/52 | 40GbE | Cisco UCS fabric interconnect A | Eth1/36 | |
Eth1/53 | 40GbE | NetApp Controller 1 | e2e | |
Eth1/54 | 40GbE | NetApp Controller 2 | e2a | |
MGMT0 | GbE | GbE management switch | Any |
Table 10 Cisco MDS A Cabling Information
Local Device | Local Port | Connection | Remote Device | Remote Port |
Cisco MDS 9148-A | FC1/37 | 16Gb | NetApp Controller 1 | e0g |
FC1/38 | 16Gb | NetApp Controller 2 | e0g | |
FC1/43 | 16Gb | Cisco UCS fabric interconnect A | FC1/1 | |
FC1/44 | 16Gb | Cisco UCS fabric interconnect A | FC1/2 | |
FC1/45 | 16Gb | Cisco UCS fabric interconnect A | FC1/3 | |
FC1/46 | 16Gb | Cisco UCS fabric interconnect A | FC1/4 |
When the term e0M is used, the physical Ethernet port to which the table is referring is the port indicated by a wrench icon on the rear of the chassis.
Table 11 Cisco MDS B Cabling Information
Local Device | Local Port | Connection | Remote Device | Remote Port |
Cisco MDS 9148-B | FC1/37 | 16Gb | NetApp Controller 1 | e0h |
FC1/38 | 16Gb | NetApp Controller 2 | e0h | |
FC1/43 | 16Gb | Cisco UCS fabric interconnect B | FC1/1 | |
FC1/44 | 16Gb | Cisco UCS fabric interconnect B | FC1/2 | |
FC1/45 | 16Gb | Cisco UCS fabric interconnect B | FC1/3 | |
FC1/46 | 16Gb | Cisco UCS fabric interconnect B | FC1/4 |
Table 12 Cisco UCS Fabric Interconnect A Cabling Information
Local Device | Local Port | Connection | Remote Device | Remote Port |
Cisco UCS 6332-16 A
| FC1/1 | 16Gb | Cisco MDS9148-A | FC1/43 |
FC1/2 | 16Gb | Cisco MDS9148-A | FC1/44 | |
FC1/3 | 16Gb | Cisco MDS9148-A | FC1/45 | |
FC1/4 | 16Gb | Cisco MDS9148-A | FC1/46 | |
Eth1/17 | 40GbE | Cisco UCS Chassis1 FEX A | IOM 1/1 | |
Eth1/18 | 40GbE | Cisco UCS Chassis1 FEX A | IOM 1/2 | |
Eth1/19 | 40GbE | Cisco UCS Chassis2 FEX A | IOM 1/1 | |
Eth1/20 | 40GbE | Cisco UCS Chassis2 FEX A | IOM 1/2 | |
Eth1/21 | 40GbE | Cisco UCS Chassis3 FEX A | IOM 1/1 | |
Eth1/22 | 40GbE | Cisco UCS Chassis3 FEX A | IOM 1/2 | |
Eth1/23 | 40GbE | Cisco UCS Chassis4 FEX A | IOM 1/1 | |
Eth1/24 | 40GbE | Cisco UCS Chassis4 FEX A | IOM 1/2 | |
Eth1/35 | 40GbE | Cisco Nexus 93180 A | Eth1/51 | |
Eth1/36 | 40GbE | Cisco Nexus 93180 B | Eth1/52 | |
MGMT0 | GbE | GbE management switch | Any | |
L1 | GbE | Cisco UCS fabric interconnect B | L1 | |
L2 | GbE | Cisco UCS fabric interconnect B | L2 |
Table 13 Cisco UCS Fabric Interconnect B Cabling Information
Local Device | Local Port | Connection | Remote Device | Remote Port |
Cisco UCS 6332-16 B
| FC1/1 | 16Gb | Cisco MDS9148-B | FC1/43 |
FC1/2 | 16Gb | Cisco MDS9148-B | FC1/44 | |
FC1/3 | 16Gb | Cisco MDS9148-B | FC1/45 | |
FC1/4 | 16Gb | Cisco MDS9148-B | FC1/46 | |
Eth1/17 | 40GbE | Cisco UCS Chassis1 FEX B | IOM 1/1 | |
Eth1/18 | 40GbE | Cisco UCS Chassis1 FEX B | IOM 1/2 | |
Eth1/19 | 40GbE | Cisco UCS Chassis2 FEX B | IOM 1/1 | |
Eth1/20 | 40GbE | Cisco UCS Chassis2 FEX B | IOM 1/2 | |
Eth1/21 | 40GbE | Cisco UCS Chassis3 FEX B | IOM 1/1 | |
Eth1/22 | 40GbE | Cisco UCS Chassis3 FEX B | IOM 1/2 | |
Eth1/23 | 40GbE | Cisco UCS Chassis4 FEX B | IOM 1/1 | |
Eth1/24 | 40GbE | Cisco UCS Chassis4 FEX B | IOM 1/2 | |
Eth1/35 | 40GbE | Cisco Nexus 93180 B | Eth1/51 | |
Eth1/36 | 40GbE | Cisco Nexus 93180 A | Eth1/52 | |
MGMT0 | GbE | GbE management switch | Any | |
L1 | GbE | Cisco UCS fabric interconnect A | L1 | 
L2 | GbE | Cisco UCS fabric interconnect A | L2
This section details the Cisco UCS configuration performed as part of the infrastructure build-out. The racking, power, and installation of the chassis are described in the installation guide (see www.cisco.com/c/en/us/support/servers-unified-computing/ucs-manager/products-installation-guides-list.html) and are beyond the scope of this document.
For more information about each step, refer to the Cisco UCS Manager configuration guides (GUI and Command Line Interface): Cisco UCS Manager Configuration Guides.
This document assumes the use of Cisco UCS Manager software version 3.2(3d). To upgrade the Cisco UCS Manager software and the Cisco UCS 6332-16UP Fabric Interconnect software to a higher version of the firmware, refer to the Cisco UCS Manager Install and Upgrade Guides.
To configure the fabric interconnect, complete the following steps:
1. Connect a console cable to the console port on what will become the primary fabric interconnect.
2. If the fabric interconnect was previously deployed and you want to erase it in order to redeploy, follow these steps:
a. Login with the existing username and password
b. Enter: connect local-mgmt
c. Enter: erase config
d. Enter: yes to confirm
3. After the fabric interconnect restarts, the out-of-box first-time installation prompt appears. Type “console” and press Enter.
4. Type “setup” at the setup/restore prompt, then press Enter.
5. Type “y” then press Enter to confirm the setup.
6. Type “y” or “n” depending on your organization’s security policies, then press Enter.
7. Enter and confirm the password and enter switch Fabric A.
8. Complete the setup dialog questions.
9. Review the selections and type “yes”.
10. Connect a console to the second fabric interconnect, select console as the configuration method, and provide the requested inputs.
11. Open a web browser and go to the Virtual IP address configured above.
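For reference, the console setup dialog on the primary fabric interconnect resembles the following sketch. All values shown (system name, IP addresses, DNS server) are illustrative placeholders, not the addresses used in this validation; the exact prompts can vary slightly between firmware releases:

```
Enter the configuration method. (console/gui) ? console
Enter the setup mode; setup newly or restore from backup. (setup/restore) ? setup
You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Enforce strong password? (y/n) [y]: y
Enter the password for "admin":
Confirm the password for "admin":
Is this Fabric interconnect part of a cluster(select 'no' for standalone)? (yes/no) [n]: yes
Enter the switch fabric (A/B) []: A
Enter the system name: UCS-FI
Physical Switch Mgmt0 IP address : 10.10.60.11
Physical Switch Mgmt0 IPv4 netmask : 255.255.255.0
IPv4 address of the default gateway : 10.10.60.1
Cluster IPv4 address : 10.10.60.10
Configure the DNS Server IP address? (yes/no) [n]: yes
  DNS IP address : 10.10.60.2
Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
```

The secondary fabric interconnect detects its peer over the L1/L2 links and asks only for the admin password and its own Mgmt0 IP address.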
To configure the Cisco Unified Computing System, complete the following steps:
1. Open a web browser and navigate to the Cisco UCS 6332-16UP Fabric Interconnect cluster address.
2. Click the Launch UCS Manager link to download the Cisco UCS Manager software.
3. If prompted to accept security certificates, accept as necessary.
4. When prompted, enter admin as the username and enter the administrative password.
5. To log in to Cisco UCS Manager, click Login.
To set the Fabric Interconnects to the Fibre Channel End Host Mode, complete the following steps:
1. On the Equipment tab, expand the Fabric Interconnects node and click Fabric Interconnect A.
2. On the General tab in the Actions pane, click Set FC End Host mode.
3. Follow the dialogs to complete the change.
Both Fabric Interconnects automatically reboot sequentially when you confirm you want to operate in this mode.
To configure the Fibre Channel Uplink Ports, complete the following steps:
1. After the restarts are complete, from the General tab, Actions pane, click Configure Unified ports.
2. Click Yes to confirm in the pop-up window.
3. Move the slider to the right to configure the first six ports as FC ports.
Ports to the left of the slider become FC ports. For this study, the first six ports were configured as FC ports.
4. Click OK, then click Yes to confirm. This action will cause a reboot of the Fabric Interconnect.
After the reboot, your FC Ports configuration should look like the screenshot below:
5. Repeat this procedure for Fabric Interconnect B.
6. Insert Cisco SFP 16 Gbps FC (DS-SFP-FC16-SW) modules into ports 1 through 4 on both Fabric Interconnects and cable as prescribed later in this document.
Four FC Uplinks from Fabric A are connected to Cisco MDS 9148S switch A and four FC Uplinks from Fabric B to Cisco MDS 9148S switch B.
Setting the discovery policy simplifies the addition of B-Series Cisco UCS chassis.
To modify the chassis discovery policy, complete the following steps:
1. In Cisco UCS Manager, in the navigation pane, click the Equipment node and select Equipment in the list on the left.
2. In the right pane, click the Policies tab.
3. Under Global Policies, set the Chassis/FEX Discovery Policy to 2-link.
4. Set the Link Grouping Preference to Port Channel.
5. Click Save Changes.
6. Click OK.
To acknowledge all Cisco UCS chassis, complete the following steps:
1. In Cisco UCS Manager, in the navigation pane, click the Equipment tab.
2. Expand Chassis and select each chassis that is listed.
3. Right-click each chassis and select Acknowledge Chassis.
4. Click Yes and then click OK to complete acknowledging the chassis.
5. Repeat for each of the remaining chassis.
To synchronize the Cisco UCS environment to the NTP server, complete the following steps:
1. In Cisco UCS Manager, in the navigation pane, click the Admin tab.
2. Select All > Timezone Management.
3. In the Properties pane, select the appropriate time zone in the Timezone menu.
4. Click Save Changes and then click OK.
5. Click Add NTP Server.
6. Enter the NTP server IP address and click OK.
7. Click OK.
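The same timezone and NTP settings can also be applied from the Cisco UCS Manager CLI. A minimal sketch, where the NTP server address and timezone are placeholder values for your environment:

```
UCS-A# scope system
UCS-A /system # scope services
UCS-A /system/services # set timezone America/Los_Angeles
UCS-A /system/services # create ntp-server 10.10.60.2
UCS-A /system/services* # commit-buffer
```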
To enable server and uplink ports, complete the following steps:
1. In Cisco UCS Manager, in the navigation pane, click the Equipment tab.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A > Physical Ports > Ethernet Ports.
3. Expand Fixed Module.
4. Select ports 17 through 24 that are connected to the Cisco IO Modules of the four B-Series 5108 Chassis, right-click them and select Configure as Server Port.
5. Click Yes to confirm the server ports and click OK.
6. In the left pane, navigate to Fabric Interconnect A. In the right pane, navigate to the Physical Ports tab > Ethernet Ports tab. Confirm that ports have been configured correctly in the Role column.
7. Repeat the above steps for Fabric Interconnect B. The screenshot below shows the server ports for Fabric B.
To configure the network ports used to uplink the Fabric Interconnects to the Cisco Nexus 93180YC-FX switches, complete the following steps:
1. In Cisco UCS Manager, in the navigation pane, click the Equipment tab.
2. Select Equipment > Fabric Interconnects > Fabric Interconnect A > Physical Ports > Ethernet Ports.
3. Expand Fixed Module.
4. Select ports 35 through 36 that are connected to the Nexus 93180YC-FX switches, right-click them and select Configure as Uplink Port.
5. Click Yes to confirm ports and click OK.
6. Verify the Ports connected to Cisco Nexus upstream switches are now configured as network ports.
7. Successful configuration should result in ports 35-36 configured as network ports as shown in the screenshot below:
8. Repeat the above steps for Fabric Interconnect B. The screenshot shows the network uplink ports for Fabric B.
In this procedure, two port channels are created: one from Fabric A to both Cisco Nexus 93180YC-FX switches and one from Fabric B to both Cisco Nexus 93180YC-FX switches.
To configure the necessary port channels in the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Under LAN > LAN Cloud, expand node Fabric A tree:
3. Right-click Port Channels.
4. Select Create Port Channel.
5. Enter 11 as the unique ID of the port channel.
6. Enter FI-A-Uplink as the name of the port channel.
7. Click Next.
8. Select Ethernet ports 35-36 for the port channel.
9. Click Finish.
Repeat steps 1-9 for Fabric Interconnect B, substituting 12 for the port channel number and FI-B-Uplink for the name. The configuration should look like the screenshot below:
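On the upstream side, each Cisco Nexus 93180YC-FX switch terminates these uplinks in a matching LACP port channel. The following NX-OS sketch shows one member interface of port channel 11 (interface and channel numbers follow Table 8; the allowed-VLAN list matches Table 7 and is an assumption about the trunk configuration, not a captured configuration):

```
interface port-channel11
  description FI-A-Uplink
  switchport mode trunk
  switchport trunk allowed vlan 1,60-63,66,102,164
  spanning-tree port type edge trunk
  mtu 9216

interface Ethernet1/51
  description To-FI-A-Eth1/35
  switchport mode trunk
  switchport trunk allowed vlan 1,60-63,66,102,164
  channel-group 11 mode active
```

In a vPC topology, the peer Nexus switch carries the second member of each port channel, so both fabric uplinks remain active across both switches.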
This section details how to create the MAC address, management IP, WWPN, and UUID suffix pools, as well as the server pools.
To configure the necessary MAC address pools for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Pools > root.
3. Right-click MAC Pools under the root organization.
4. Select Create MAC Pool to create the MAC address pool.
5. Enter MAC_Pool_A as the name for the MAC pool.
6. Optional: Enter a description for the MAC pool.
7. Enter the seed MAC address and provide the number of MAC addresses to be provisioned.
8. Click OK, then click Finish.
9. In the confirmation message, click OK.
An IP address pool on the out of band management network must be created to facilitate KVM access to each compute node in the Cisco UCS domain.
To create the pool, complete the following steps:
1. Click the LAN tab in UCS Manager, expand the Pools node, expand the root node, right-click IP Pools, then click Create IP Pool.
2. Provide a Name, choose Default or Sequential, and then click Next.
3. Click the green + sign to add an IPv4 address block.
4. Complete the starting IP address, size, subnet mask, default gateway, primary and secondary DNS values for your network, then click OK.
5. Click Finish.
6. Click OK.
To configure the necessary WWPN pools for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
2. Select Pools > root.
3. Under WWPN Pools, right-click WWPN Pools and select Create WWPN Pool.
4. Assign a name and optional description.
5. Assignment order can remain Default.
6. Click Next.
7. Click Add to add a block of Ports.
8. Specify a WWPN block size sufficient to support four fully populated chassis.
9. Click Finish.
To configure the necessary universally unique identifier (UUID) suffix pool for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Pools > root.
3. Right-click UUID Suffix Pools.
4. Select Create UUID Suffix Pool.
5. Enter UUID_Pool-VDI as the name of the UUID suffix pool.
6. Optional: Enter a description for the UUID suffix pool.
7. Keep the prefix at the derived option.
8. Click Next.
9. Click Add to add a block of UUIDs.
10. Create a starting point UUID seed for your environment.
11. Specify a size for the UUID block that is sufficient to support the available blade or server resources.
To configure the necessary server pool for the Cisco UCS environment, complete the following steps:
Consider creating unique server pools to achieve the granularity that is required in your environment.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Pools > root.
3. Right-click Server Pools.
4. Select Create Server Pool.
5. Enter Infra_Pool as the name of the server pool.
6. Optional: Enter a description for the server pool.
7. Click Next.
8. Select two (or more) servers to be used for the VMware management cluster and click >> to add them to the Infra_Pool server pool.
9. Click Finish.
10. Click OK.
11. Create additional server pools for the persistent, non-persistent, and RDS hosts.
To configure the necessary virtual local area networks (VLANs) for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
In this procedure, eight unique VLANs are created. Refer to Table 14.
VLAN Name | VLAN ID | VLAN Purpose | vNIC Assignment |
Default | 1 | Native VLAN | vNIC-Template-A vNIC-Template-B |
In-Band-Mgmt | 60 | VLAN for in-band management interfaces | vNIC-Template-A vNIC-Template-B |
Infra-Mgmt | 61 | VLAN for Virtual Infrastructure | vNIC-Template-A vNIC-Template-B |
CIFS-Vlan | 62 | VLAN for CIFS Share User Profiles | vNIC-Template-A vNIC-Template-B |
NFS-Vlan | 63 | VLAN for NFS Share | vNIC-Template-A vNIC-Template-B |
vMotion | 66 | VLAN for VMware vMotion | vNIC-Template-A vNIC-Template-B |
VDI | 102 | Virtual Desktop traffic | vNIC-Template-A vNIC-Template-B |
OB-Mgmt | 164 | VLAN for out-of-band management interfaces | vNIC-Template-A vNIC-Template-B |
2. Select LAN > LAN Cloud.
3. Right-click VLANs.
4. Select Create VLANs
5. Enter MGMT as the name of the VLAN to be used for in-band management traffic.
6. Keep the Common/Global option selected for the scope of the VLAN.
7. Enter 60 as the ID of the management VLAN.
8. Keep the Sharing Type as None.
9. Click OK and then click OK again.
10. Repeat the above steps to create all VLANs and configure the Default VLAN as native.
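Equivalently, each VLAN can be created from the Cisco UCS Manager CLI. A minimal sketch for the in-band management VLAN, using the name from Table 14 (repeat the create/commit pair for each VLAN in the table):

```
UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan In-Band-Mgmt 60
UCS-A /eth-uplink/vlan* # commit-buffer
```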
To configure the necessary virtual storage area networks (VSANs) for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
In this procedure, two VSANs are created. When these VSANs are created, be sure to add them to the uplink FC Interfaces created earlier.
2. Select SAN > SAN Cloud.
3. Under Fabric A, right-click VSANs.
4. Select Create VSANs.
5. Enter VSAN-400-A as the name of the VSAN to be used for in-band management traffic.
6. Select Fabric A for the scope of the VSAN.
7. Enter 400 as the ID of the VSAN.
8. Click OK and then click OK again.
9. Repeat the above steps on Fabric B with VSAN-401-B to create the VSANs necessary for this solution.
VSANs 400 and 401 are now configured.
10. After configuring the VSANs on both fabrics, open the port channels created earlier in the 'Create Uplinks for MDS 9148S' section and add the respective VSAN to each port channel. In this study, VSAN 400 is assigned to Fabric A and VSAN 401 is assigned to Fabric B.
VSAN400 should only be on Fabric A and VSAN401 on Fabric B.
11. Go to the Uplink FC interfaces for each Fabric and assign the VSAN appropriately to each FC Interface.
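The VSAN creation can likewise be scripted from the UCSM CLI; a sketch only, which assumes the FCoE VLAN ID is set equal to the VSAN ID (verify both the scope names and the FCoE VLAN choice for your environment):

```shell
# UCS Manager CLI sketch: create vsan takes name, VSAN ID, FCoE VLAN ID.
scope fc-uplink
scope fabric a
create vsan VSAN-400-A 400 400
exit
scope fabric b
create vsan VSAN-401-B 401 401
exit
commit-buffer
```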
Firmware management policies allow the administrator to select the corresponding packages for a given server configuration. These policies often include packages for adapter, BIOS, board controller, FC adapters, host bus adapter (HBA) option ROM, and storage controller properties.
To create a firmware management policy for a given server configuration in the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Host Firmware Packages.
4. Select Create Host Firmware Package.
5. Enter VM-Host as the name of the host firmware package.
6. Leave Simple selected.
7. Select version 3.2(3d) for the Blade Package.
8. Click OK to create the host firmware package.
To configure jumbo frames and enable quality of service in the Cisco UCS fabric, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select LAN > LAN Cloud > QoS System Class.
3. In the right pane, click the General tab.
4. On the Best Effort row, enter 9216 in the box under the MTU column.
5. Click Save Changes in the bottom of the window.
6. Click OK.
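The 9216-byte system class MTU leaves headroom above the 9000-byte MTU configured later on the vNIC templates for Layer 2 framing overhead; a quick sanity check (the overhead values below are standard Ethernet figures, shown for illustration):

```python
# Sanity check: a 9000-byte vNIC MTU must fit inside the 9216-byte
# QoS system class MTU once Ethernet framing overhead is added.
VNIC_MTU = 9000            # payload MTU set on the vNIC templates
ETH_HEADER = 14            # destination MAC + source MAC + EtherType
DOT1Q_TAG = 4              # 802.1Q VLAN tag
FCS = 4                    # frame check sequence
SYSTEM_CLASS_MTU = 9216    # Best Effort class MTU set in UCS

frame_size = VNIC_MTU + ETH_HEADER + DOT1Q_TAG + FCS
assert frame_size <= SYSTEM_CLASS_MTU
print(frame_size)  # 9022
```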
To create a network control policy that enables Cisco Discovery Protocol (CDP) on virtual network ports, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Policies > root.
3. Right-click Network Control Policies.
4. Select Create Network Control Policy.
5. Enter Enable_CDP as the policy name.
6. For CDP, select the Enabled option.
7. Click OK to create the network control policy.
To create a power control policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click Power Control Policies.
4. Select Create Power Control Policy.
5. Enter No-Power-Cap as the power control policy name.
6. Change the power capping setting to No Cap.
7. Click OK to create the power control policy.
To create a server BIOS policy for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Right-click BIOS Policies.
4. Select Create BIOS Policy.
5. Enter B200-M5-BIOS as the BIOS policy name.
6. Configure the remaining BIOS policies as follows and click Finish.
7. Click Finish.
To update the default Maintenance Policy, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Select Policies > root.
3. Select Maintenance Policies > default.
4. Change the Reboot Policy to User Ack.
5. Check the On Next Boot check box.
6. Click Save Changes.
7. Click OK to accept the change.
To create multiple virtual network interface card (vNIC) templates for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Select Policies > root.
3. Right-click vNIC Templates.
4. Select Create vNIC Template.
5. Enter vNIC_Template_A as the vNIC template name.
6. Keep Fabric A selected.
7. Do not select the Enable Failover checkbox.
8. Under Target, make sure that the VM checkbox is not selected.
9. Select Updating Template as the Template Type.
10. Under VLANs, select the checkboxes for MGMT, Default, Infra, VDI, and vMotion.
11. Set the Default VLAN as the native VLAN.
12. For MTU, enter 9000.
13. In the MAC Pool list, select MAC_Pool_A.
14. In the Network Control Policy list, select Enable_CDP.
15. Click OK to create the vNIC template.
16. Click OK.
17. In the navigation pane, select the LAN tab.
18. Select Policies > root.
19. Right-click vNIC Templates.
20. Select Create vNIC Template.
21. Enter vNIC_Template_B as the vNIC template name.
22. Select Fabric B.
23. Do not select the Enable Failover checkbox.
24. Under Target, make sure the VM checkbox is not selected.
25. Select Updating Template as the template type.
26. Under VLANs, select the checkboxes for MGMT, Default, VDI, Infra, and vMotion.
27. Set the Default VLAN as the native VLAN.
28. For MTU, enter 9000.
29. In the MAC Pool list, select MAC_Pool_B.
30. In the Network Control Policy list, select Enable_CDP.
31. Click OK to create the vNIC template.
32. Click OK.
To create multiple virtual host bus adapter (vHBA) templates for the Cisco UCS environment, complete the following steps:
1. In Cisco UCS Manager, click the SAN tab in the navigation pane.
2. Select Policies > root.
3. Right-click vHBA Templates.
4. Select Create vHBA Template.
5. Enter vHBA-FAB-A as the vHBA template name.
6. Keep Fabric A selected.
7. Select VSAN-400-A for Fabric A from the drop down.
8. Change to Updating Template.
9. Keep the Max Data Field Size at 2048.
10. Select VDI-WWPN (created earlier) for the WWPN Pool.
11. Leave the remaining as is.
12. Click OK.
13. In the navigation pane, select the SAN tab.
14. Select Policies > root.
15. Right-click vHBA Templates.
16. Select Create vHBA Template.
17. Enter vHBA-FAB-B as the vHBA template name.
18. Select Fabric B.
19. Select VSAN-401-B for Fabric B from the drop down.
20. Change to Updating Template.
21. Keep the Max Data Field Size at 2048.
22. Select VDI-WWPN (created earlier) for the WWPN Pool.
23. Leave the remaining as is.
24. Click OK.
All ESXi hosts were set to boot from SAN for the Cisco Validated Design as part of the Service Profile template. The benefits of booting from SAN are numerous: simplified disaster recovery, lower cooling and power requirements for each server because a local drive is not required, and better performance, to name just a few.
To create a boot from SAN policy, complete the following steps:
1. In UCS Manager, right-click the Boot Policies node and select Create Boot Policy.
2. Name the boot policy and expand the vHBAs menu.
3. Select the Add SAN Boot option and add the primary vHBA. Note that the vHBA name needs to match exactly. We will use the vHBA templates created in the previous step.
4. Repeat the steps to add a secondary SAN Boot option.
5. Add the SAN Boot Targets to the primary and secondary. The SAN boot targets will also include primary and secondary options in order to maximize resiliency and number of paths.
6. Using the following command, find and record the WWPN for each FC LIF:
network interface show -vserver <vserver> -data-protocol fcp
7. When the AFF A300 WWPNs have been recorded, use fcp_01a for the first Boot Target WWPN.
8. Add a secondary SAN Boot Target by clicking Add SAN Boot Target to SAN Primary while the primary SAN Boot option is highlighted. This time enter the AFF A300 WWPN for fcp_02a.
9. Repeat these steps for the secondary SAN boot target and use WWPN fcp_01b and fcp_02b in the primary and secondary SAN boot options.
10. For information about configuring boot and data LUNs on the NetApp A300 storage system, please refer to section NetApp A300 Storage System Configuration.
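When transcribing WWPNs into the boot target dialogs, a malformed entry is easy to miss; the following is a small, hypothetical helper (not part of the validated design) that checks whether a recorded value is a valid 64-bit WWPN in colon-separated notation:

```python
import re

# A WWPN is a 64-bit identifier, conventionally written as eight
# colon-separated hex octets, e.g. 20:01:00:a0:98:aa:bb:cc.
WWPN_RE = re.compile(r"^([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}$")

def is_valid_wwpn(wwpn: str) -> bool:
    """Return True if wwpn looks like a colon-separated 64-bit WWPN."""
    return bool(WWPN_RE.match(wwpn))

# Placeholder values, not the WWPNs from this design.
print(is_valid_wwpn("20:01:00:a0:98:aa:bb:cc"))  # True
print(is_valid_wwpn("20:01:00:a0:98:aa:bb"))     # False (only 7 octets)
```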
To create service profile templates for the Cisco UCS B-Series environment, complete the following steps:
1. In UCS Manager, under the Servers tab, select Service Profile Templates.
2. Right-click and select Create Service Profile Template.
3. Name the template B-Series.
4. Select the UUID pool created earlier from the dropdown in the UUID Assignment dialog.
5. Click Next.
6. Click Next through Storage Provisioning.
7. Under Networking, in the “How would you like to configure LAN connectivity?” dialog, select the Expert radio button.
8. Click Add.
9. Name it vNIC-A.
10. Select check box for Use vNIC Template.
11. Under vNIC Template, select vNIC_Template_A.
12. For Adapter Policy select VMware.
13. Repeat networking steps for vNIC-B.
14. Click Next.
15. Click Next.
16. Under SAN Connectivity, select the Expert radio button in the “How would you like to configure SAN connectivity?” dialog.
17. Select WWNN Assignment from the Pool created earlier.
18. Click Add.
19. Name the adapter vHBA-A.
20. Click Use vHBA Template.
21. Select vHBA Template: vHBA-FAB-A.
22. Select Adapter Policy: VMware.
23. Repeat steps for vHBA-B on Fabric B.
24. No Zoning will be used. Click Next.
25. Click Next through vNIC/vHBA Placement policy.
26. Click Next through vMedia Policy.
27. Use the Boot Policy drop down to select the Boot Policy created earlier, then click Finish.
28. Select the Maintenance Policy and Server Assignment as needed.
29. Click Finish and complete the Service Profile creation.
To create service profiles for each of the blades in the NetApp solution, complete the following steps:
1. From the Servers tab in UCS Manager, under the Service Profile Templates node, right-click the Service Profile Template created in the previous step, then click Create Service Profiles from Template.
2. Provide a naming prefix, a starting number, and the number of service profiles to create, then click OK.
The requested number of service profiles (for example, 25) are created in the Service Profiles root organization.
This design includes instructions on the steps necessary to perform initial setup and configuration of the NetApp A300 storage system. Specific details of the configuration as tested can be found in the NetApp A300 Storage Configuration and NetApp ONTAP 9.3 sections below.
See the following sections (NetApp Hardware Universe) for planning the physical location of the storage systems:
· Site Preparation
· System Connectivity Requirements
· Circuit Breaker, Power Outlet Balancing, System Cabinet Power Cord Plugs, and Console Pinout Requirements
· AFF Systems
The NetApp Hardware Universe (HWU) application provides supported hardware and software components for any specific ONTAP version. It provides configuration information for all the NetApp storage appliances currently supported by ONTAP software. It also provides a table of component compatibilities.
§ Confirm that the hardware and software components that you would like to use are supported with the version of ONTAP that you plan to install by using the HWU application at the NetApp Support site.
§ Access the HWU application to view the System Configuration guides. Click the Controllers tab to view the compatibility between different version of the ONTAP software and the NetApp storage appliances with your desired specifications.
Follow the physical installation procedures for the controllers found in the AFF A300 Series product documentation at the NetApp Support site.
NetApp storage systems support a wide variety of disk shelves and disk drives. The complete list of disk shelves that are supported by the AFF A300 is available at the NetApp Support site.
Before running the setup script, review the configuration worksheets in the ONTAP 9.1 Software Setup Guide to learn about configuring ONTAP, and complete the cluster setup worksheet. You must have access to the NetApp Support site to open the cluster setup worksheet. Table 15 lists the information needed to configure two ONTAP nodes. Customize the cluster detail values with the information applicable to your deployment.
Table 15 ONTAP Software Installation Prerequisites
Cluster Detail | Cluster Detail Value |
Cluster node 01 IP address | <node01-mgmt-ip> |
Cluster node 01 netmask | <node01-mgmt-mask> |
Cluster node 01 gateway | <node01-mgmt-gateway> |
Cluster node 02 IP address | <node02-mgmt-ip> |
Cluster node 02 netmask | <node02-mgmt-mask> |
Cluster node 02 gateway | <node02-mgmt-gateway> |
ONTAP 9.1 URL | <url-boot-software> |
To configure node 01, complete the following steps:
1. Connect to the storage system console port. You should see a Loader-A prompt. However, if the storage system is in a reboot loop, press Ctrl-C to exit the autoboot loop when the following message displays:
Starting AUTOBOOT press Ctrl-C to abort…
2. Allow the system to boot up.
autoboot
3. Press Ctrl-C when prompted.
If ONTAP 9.1 is not the version of software being booted, continue with the following steps to install new software. If ONTAP 9.1 is the version being booted, select option 8 and y to reboot the node. Then continue with step 14.
4. To install new software, select option 7.
5. Enter y to perform an upgrade.
6. Select e0M for the network port you want to use for the download.
7. Enter y to reboot now.
8. Enter the IP address, netmask, and default gateway for e0M.
<node01-mgmt-ip> <node01-mgmt-mask> <node01-mgmt-gateway>
9. Enter the URL where the software can be found.
This web server must be pingable.
<url-boot-software>
10. Press Enter for the user name, indicating no user name.
11. Enter y to set the newly installed software as the default to be used for subsequent reboots.
12. Enter y to reboot the node.
When installing new software, the system might perform firmware upgrades to the BIOS and adapter cards, causing reboots and possible stops at the Loader-A prompt. If these actions occur, the system might deviate from this procedure.
13. Press Ctrl-C when the following message displays:
Press Ctrl-C for Boot Menu
14. Select option 4 for Clean Configuration and Initialize All Disks.
15. Enter y to zero disks, reset config, and install a new file system.
16. Enter y to erase all the data on the disks.
The initialization and creation of the root aggregate can take 90 minutes or more to complete, depending on the number and type of disks attached. When initialization is complete, the storage system reboots. Note that SSDs take considerably less time to initialize. You can continue with the node 02 configuration while the disks for node 01 are zeroing.
To configure node 02, complete the following steps:
1. Connect to the storage system console port. You should see a Loader-A prompt. However, if the storage system is in a reboot loop, press Ctrl-C to exit the autoboot loop when the following message displays:
Starting AUTOBOOT press Ctrl-C to abort…
2. Allow the system to boot up.
autoboot
3. Press Ctrl-C when prompted.
If ONTAP 9.1 is not the version of software being booted, continue with the following steps to install new software. If ONTAP 9.1 is the version being booted, select option 8 and y to reboot the node, then continue with step 14.
4. To install new software, select option 7.
5. Enter y to perform an upgrade.
6. Select e0M for the network port you want to use for the download.
7. Enter y to reboot now.
8. Enter the IP address, netmask, and default gateway for e0M.
<node02-mgmt-ip> <node02-mgmt-mask> <node02-mgmt-gateway>
9. Enter the URL where the software can be found.
This web server must be pingable.
<url-boot-software>
10. Press Enter for the user name, indicating no user name.
11. Enter y to set the newly installed software as the default to be used for subsequent reboots.
12. Enter y to reboot the node.
When installing new software, the system might perform firmware upgrades to the BIOS and adapter cards, causing reboots and possible stops at the Loader-A prompt. If these actions occur, the system might deviate from this procedure.
13. Press Ctrl-C when you see this message:
Press Ctrl-C for Boot Menu
14. Select option 4 for Clean Configuration and Initialize All Disks.
15. Enter y to zero disks, reset config, and install a new file system.
16. Enter y to erase all the data on the disks.
The initialization and creation of the root aggregate can take 90 minutes or more to complete, depending on the number and type of disks attached. When initialization is complete, the storage system reboots. Note that SSDs take considerably less time to initialize.
From a terminal program attached to the storage controller A (node 01) console port, run the node setup script. This script appears when ONTAP 9.1 boots on the node for the first time.
1. Follow the prompts to set up node 01:
Welcome to node setup.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing “cluster setup”.
To accept a default or omit a question, do not enter a value.
This system will send event messages and weekly reports to NetApp Technical Support.
To disable this feature, enter "autosupport modify -support disable" within 24 hours.
Enabling AutoSupport can significantly speed problem determination and resolution should a problem occur on your system.
For further information on AutoSupport, see:
http://support.netapp.com/autosupport/
Type yes to confirm and continue {yes}: yes
Enter the node management interface port [e0M]: Enter
Enter the node management interface IP address: <node01-mgmt-ip>
Enter the node management interface netmask: <node01-mgmt-mask>
Enter the node management interface default gateway: <node01-mgmt-gateway>
A node management interface on port e0M with IP address <node01-mgmt-ip> has been created
Use your web browser to complete cluster setup by accessing https://<node01-mgmt-ip>
Otherwise press Enter to complete cluster setup using the command line interface:
2. To complete the cluster setup, open a web browser and navigate to https://<node01-mgmt-ip>.
Table 16 Cluster Create in ONTAP Prerequisites
Cluster Detail | Cluster Detail Value |
Cluster name | <clustername> |
ONTAP base license | <cluster-base-license-key> |
Cluster management IP address | <clustermgmt-ip> |
Cluster management netmask | <clustermgmt-mask> |
Cluster management gateway | <clustermgmt-gateway> |
Cluster node 01 IP address | <node01-mgmt-ip> |
Cluster node 01 netmask | <node01-mgmt-mask> |
Cluster node 01 gateway | <node01-mgmt-gateway> |
Cluster node 02 IP address | <node02-mgmt-ip> |
Cluster node 02 netmask | <node02-mgmt-mask> |
Cluster node 02 gateway | <node02-mgmt-gateway> |
Node 01 service processor IP address | <node01-SP-ip> |
Node 02 service processor IP address | <node02-SP-ip> |
DNS domain name | <dns-domain-name> |
DNS server IP address | <dns-ip> |
NTP server IP address | <ntp-ip> |
Cluster setup can also be performed using the command line interface. This document describes the cluster setup using the NetApp System Manager guided setup.
3. Click Guided Setup on the Welcome screen.
4. In the Cluster screen, do the following:
a. Enter the cluster and node names.
b. Select the cluster configuration.
c. Enter and confirm the password.
d. (Optional) Enter the cluster base and feature licenses.
The nodes are discovered automatically; if they are not, click the Refresh link. By default, the cluster interfaces are created on all newly factory-shipped storage controllers.
If all the nodes are not discovered, then configure the cluster using the command line.
Cluster license and feature licenses can also be installed after completing the cluster creation.
5. Click Submit.
6. On the Network page, complete the following sections:
· Cluster Management: Enter the IP address, netmask, gateway, and port details.
· Node Management: Enter the node management IP addresses and port details for all the nodes.
· Service Processor Management: Enter the IP addresses for all the nodes.
· DNS Details: Enter the DNS domain names and server addresses.
· NTP Details: Enter the primary and alternate NTP servers.
7. Click Submit.
8. On the Support page, configure the AutoSupport and Event Notifications sections.
9. Click Submit.
10. In the Summary page, review the configuration details if needed.
The node management interface can be on the same subnet as the cluster management interface, or it can be on a different subnet. In this document, we assume that it is on the same subnet.
To log in to the cluster, complete the following steps:
1. Open an SSH connection to either the cluster IP or host name.
2. Log in as the admin user with the password you provided earlier.
To zero all spare disks in the cluster, run the following command:
disk zerospares
Advanced Data Partitioning creates a root partition and two data partitions on each SSD drive in an All Flash FAS configuration. Disk auto assign should have assigned one data partition to each node in an HA pair. If a different disk assignment is required, disk auto assignment must be disabled on both nodes in the HA pair by running the disk option modify command. Spare partitions can then be moved from one node to another by running the disk removeowner and disk assign commands.
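The commands named above follow the general shape below; this is a sketch only, with placeholder node and disk names, to be adapted only if a non-default assignment is required:

```shell
# Disable disk auto-assignment on both nodes of the HA pair.
disk option modify -node <st-node01> -autoassign off
disk option modify -node <st-node02> -autoassign off
# Move a spare data partition from one node to the other.
disk removeowner -disk <disk-name> -data true
disk assign -disk <disk-name> -owner <st-node02> -data true
```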
To set the personality of the onboard unified target adapter 2 (UTA2), complete the following steps:
1. Verify the Current Mode and Current Type properties of the ports by running the ucadmin show command:
ucadmin show
                     Current Current Pending Pending Admin
Node         Adapter Mode    Type    Mode    Type    Status
------------ ------- ------- ------- ------- ------- -----------
<st-node01>  0e      cna     target  -       -       online
<st-node01>  0f      cna     target  -       -       online
<st-node01>  0g      fc      target  -       -       online
<st-node01>  0h      fc      target  -       -       online
<st-node02>  0e      cna     target  -       -       online
<st-node02>  0f      cna     target  -       -       online
<st-node02>  0g      fc      target  -       -       online
<st-node02>  0h      fc      target  -       -       online
8 entries were displayed.
2. Verify that the Current Mode and Current Type properties for all ports are set properly. Ports 0g and 0h are used for FC connectivity and should be set to mode fc if not already configured. The port type for all protocols should be set to target. Change the port personality by running the following command:
ucadmin modify -node <home-node-of-the-port> -adapter <port-name> -mode fc -type target
The ports must be offline to run this command. To take an adapter offline, run the fcp adapter modify -node <home-node-of-the-port> -adapter <port-name> -state down command. Ports must be converted in pairs (for example, 0e and 0f).
After conversion, a reboot is required. After the reboot, bring the ports online by running fcp adapter modify -node <home-node-of-the-port> -adapter <port-name> -state up.
To set the auto-revert parameter on the cluster management interface, run the following command:
A storage virtual machine (SVM) is referred to as a Vserver (or vserver) in the GUI and CLI.
network interface modify -vserver <clustername> -lif cluster_mgmt -auto-revert true
By default, all network ports are included in the default broadcast domain. Network ports used for data services (for example, e0d, e2a, and e2e) should be removed from the default broadcast domain, leaving just the management network ports (e0c and e0M). To perform this task, run the following commands:
broadcast-domain remove-ports -broadcast-domain Default -ports <st-node01>:e0d, <st-node01>:e0e, <st-node01>:e0f, <st-node01>:e2a, <st-node01>:e2e, <st-node02>:e0d, <st-node02>:e0e, <st-node02>:e0f, <st-node02>:e2a, <st-node02>:e2e
broadcast-domain show
To assign a static IPv4 address to the service processor on each node, run the following commands:
system service-processor network modify -node <st-node01> -address-family IPv4 -enable true -dhcp none -ip-address <node01-sp-ip> -netmask <node01-sp-mask> -gateway <node01-sp-gateway>
system service-processor network modify -node <st-node02> -address-family IPv4 -enable true -dhcp none -ip-address <node02-sp-ip> -netmask <node02-sp-mask> -gateway <node02-sp-gateway>
The service processor IP addresses should be in the same subnet as the node management IP addresses.
An aggregate containing the root volume is created during the ONTAP setup process. To create additional aggregates, determine the aggregate name, the node on which to create it, and the number of disks it should contain.
This solution was validated using 1 data aggregate on each controller with 23 data partitions per aggregate. To create the required aggregates, run the following commands:
aggr create -aggregate aggr1_node01 -node <st-node01> -diskcount 23
aggr create -aggregate aggr1_node02 -node <st-node02> -diskcount 23
You should have the minimum number of hot spare disks for hot spare disk partitions recommended for your aggregate.
For all flash aggregates, you should have a minimum of one hot spare disk or disk partition. For nonflash homogenous aggregates, you should have a minimum of two hot spare disks or disk partitions. For NetApp Flash Pool™ aggregates, you should have a minimum of two hot spare disks or disk partitions for each disk type.
The aggregate cannot be created until disk zeroing completes. Run the aggr show command to display aggregate creation status. Do not proceed until both aggr1_node01 and aggr1_node02 are online.
aggr show
(Optional) Rename the root aggregate:
aggr rename -aggregate aggr0 -newname <node01-rootaggrname>
To confirm that storage failover is enabled, run the following commands for a failover pair:
1. Verify the status of the storage failover.
storage failover show
Both <st-node01> and <st-node02> must be capable of performing a takeover. Continue with step 3 if the nodes are capable of performing a takeover.
2. Enable failover on one of the two nodes.
storage failover modify -node <st-node01> -enabled true
Enabling failover on one node enables it for both nodes.
3. Verify the HA status for a two-node cluster.
This step is not applicable for clusters with more than two nodes.
cluster ha show
4. If high availability is not configured, enable HA mode by running the following command. Continue with step 5 if high availability is already configured.
Only enable HA mode for two-node clusters. Do not run this command for clusters with more than two nodes because it causes problems with failover.
cluster ha modify -configured true
Do you want to continue? {y|n}: y
5. Verify that hardware assist is correctly configured and, if needed, modify the partner IP address.
storage failover hwassist show
storage failover modify -hwassist-partner-ip <node02-mgmt-ip> -node <st-node01>
storage failover modify -hwassist-partner-ip <node01-mgmt-ip> -node <st-node02>
NetApp recommends disabling flow control on all the 10GbE and UTA2 ports that are connected to external devices. To disable flow control, complete the following steps:
1. Run the following commands to configure node 01:
network port modify -node <st-node01> -port e0a,e0b,e0e,e0f,e0g,e0h,e2a,e2e -flowcontrol-admin none
Warning: Changing the network port settings will cause a several second interruption in carrier.
Do you want to continue? {y|n}: y
2. Run the following commands to configure node 02:
network port modify -node <st-node02> -port e0a,e0b,e0e,e0f,e0g,e0h,e2a,e2e -flowcontrol-admin none
Warning: Changing the network port settings will cause a several second interruption in carrier.
Do you want to continue? {y|n}: y
network port show -fields flowcontrol-admin
If a UTA2 port is set to CNA mode and is only expected to handle Ethernet data traffic (for example, NFS), the unused FCoE capability of the port should be disabled by setting the corresponding FCP adapter to state down with the fcp adapter modify command. Here are some examples:
fcp adapter modify -node <st-node01> -adapter 0e -status-admin down
fcp adapter modify -node <st-node01> -adapter 0f -status-admin down
fcp adapter modify -node <st-node02> -adapter 0e -status-admin down
fcp adapter modify -node <st-node02> -adapter 0f -status-admin down
fcp adapter show -fields status-admin
To configure time synchronization on the cluster, complete the following steps:
1. Set the time zone for the cluster.
timezone <timezone>
For example, in the eastern United States, the time zone is America/New_York.
2. Set the date for the cluster.
date <ccyymmddhhmm.ss>
The format for the date is <[Century][Year][Month][Day][Hour][Minute].[Second]> (for example, 201703231549.30).
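The date string can be generated rather than typed by hand; a small sketch using Python's strftime, where the pattern %Y%m%d%H%M.%S matches ONTAP's <ccyymmddhhmm.ss> layout:

```python
from datetime import datetime

def ontap_date_string(dt: datetime) -> str:
    """Format a datetime as ONTAP's ccyymmddhhmm.ss date argument."""
    return dt.strftime("%Y%m%d%H%M.%S")

# The example from the text: March 23, 2017, 15:49:30.
print(ontap_date_string(datetime(2017, 3, 23, 15, 49, 30)))  # 201703231549.30
```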
3. Configure the Network Time Protocol (NTP) servers for the cluster.
cluster time-service ntp server create -server <switch-a-ntp-ip>
cluster time-service ntp server create -server <switch-b-ntp-ip>
To configure the Simple Network Management Protocol (SNMP), complete the following steps:
1. Configure basic SNMP information, such as the location and contact. When polled, this information is visible as the sysLocation and sysContact variables in SNMP.
snmp contact <snmp-contact>
snmp location "<snmp-location>"
snmp init 1
options snmp.enable on
2. Configure SNMP traps to send to remote hosts, such as a DFM server or another fault management system.
snmp traphost add <oncommand-um-server-fqdn>
To configure SNMPv1 access, set the shared, secret plain-text password (called a community):
snmp community add ro <snmp-community>
NetApp AutoSupport® sends support summary information to NetApp through HTTPS. To configure AutoSupport, run the following command:
system node autosupport modify -node * -state enable -mail-hosts <mailhost> -transport https -support enable -noteto <storage-admin-email>
To enable the Cisco Discovery Protocol (CDP) on the NetApp storage controllers, run the following command to enable CDP on ONTAP:
node run -node * options cdpd.enable on
To be effective, CDP must also be enabled on directly connected networking equipment such as switches and routers.
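On Cisco Nexus switches running NX-OS, CDP is enabled by default; if it has been disabled, it can be re-enabled globally and per interface. A sketch only (the interface name is a placeholder):

```shell
configure terminal
cdp enable
interface Ethernet1/1
  cdp enable
end
```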
To create an NFS data broadcast domain with an MTU of 9000, run the following command on ONTAP:
broadcast-domain create -broadcast-domain Infra_NFS -mtu 9000
To create the LACP interface groups for the 10GbE data interfaces, run the following commands:
ifgrp create -node <st-node01> -ifgrp a0a -distr-func port -mode multimode_lacp
ifgrp add-port -node <st-node01> -ifgrp a0a -port e2a
ifgrp add-port -node <st-node01> -ifgrp a0a -port e2e
ifgrp create -node <st-node02> -ifgrp a0a -distr-func port -mode multimode_lacp
ifgrp add-port -node <st-node02> -ifgrp a0a -port e2a
ifgrp add-port -node <st-node02> -ifgrp a0a -port e2e
ifgrp show
To create NFS VLAN ports and add them to the NFS broadcast domain, run the following commands:
network port modify -node <st-node01> -port a0a -mtu 9000
network port modify -node <st-node02> -port a0a -mtu 9000
network port vlan create -node <st-node01> -vlan-name a0a-<infra-nfs-vlan-id>
network port vlan create -node <st-node02> -vlan-name a0a-<infra-nfs-vlan-id>
broadcast-domain add-ports -broadcast-domain Infra_NFS -ports <st-node01>:a0a-<infra-nfs-vlan-id>, <st-node02>:a0a-<infra-nfs-vlan-id>
To create an infrastructure SVM, complete the following steps:
1. Run the vserver create command:
vserver create -vserver Infra-SVM -rootvolume rootvol -aggregate aggr1_node01 -rootvolume-security-style unix
2. Remove the unused data protocols (CIFS, iSCSI, and NDMP) from the SVM:
vserver remove-protocols -vserver Infra-SVM -protocols iscsi,cifs,ndmp
3. Add the two data aggregates to the Infra-SVM aggregate list for the NetApp Virtual Storage Console (VSC):
vserver modify -vserver Infra-SVM -aggr-list aggr1_node01,aggr1_node02
4. Enable and run the NFS protocol in the Infra-SVM:
nfs create -vserver Infra-SVM -udp disabled
If the NFS license was not installed during cluster configuration, make sure to install the license before starting the NFS service.
5. Turn on the SVM vstorage parameter for the NetApp NFS VAAI plug-in:
vserver nfs modify -vserver Infra-SVM -vstorage enabled
vserver nfs show
To create a load-sharing mirror of an SVM root volume, complete the following steps:
1. Create a volume to be the load-sharing mirror of the infrastructure SVM root volume on each node.
volume create -vserver Infra-SVM -volume rootvol_m01 -aggregate aggr1_node01 -size 1GB -type DP
volume create -vserver Infra-SVM -volume rootvol_m02 -aggregate aggr1_node02 -size 1GB -type DP
2. Create a job schedule to update the root volume mirror relationships every 15 minutes.
job schedule interval create -name 15min -minutes 15
3. Create the mirroring relationships.
snapmirror create -source-path Infra-SVM:rootvol -destination-path Infra-SVM:rootvol_m01 -type LS -schedule 15min
snapmirror create -source-path Infra-SVM:rootvol -destination-path Infra-SVM:rootvol_m02 -type LS -schedule 15min
4. Initialize the mirroring relationship.
snapmirror initialize-ls-set -source-path Infra-SVM:rootvol
snapmirror show
Run the following command to create the FCP service on each SVM. This command also starts the FCP service and sets the WWN for the SVM.
fcp create -vserver Infra-SVM
fcp show
If the FC license was not installed during cluster configuration, make sure to install the license before creating the FC service.
To configure secure access to the storage controller, complete the following steps:
1. Increase the privilege level to access the certificate commands.
set -privilege diag
Do you want to continue? {y|n}: y
2. Generally, a self-signed certificate is already in place. Verify the certificate and obtain parameters (for example, <serial-number>) by running the following command:
security certificate show
For each SVM shown, the certificate common name should match the DNS FQDN of the SVM. Delete the two default certificates and replace them with either self-signed certificates or certificates from a certificate authority (CA). To delete the default certificates, run the following commands:
security certificate delete -vserver Infra-SVM -common-name Infra-SVM -ca Infra-SVM -type server -serial <serial-number>
Deleting expired certificates before creating new certificates is a best practice. Run the security certificate delete command to delete the expired certificates. In the following command, use TAB completion to select and delete each default certificate.
3. To generate and install self-signed certificates, run the following commands as one-time commands. Generate a server certificate for the Infra-SVM and the cluster SVM. Use TAB completion to aid in the completion of these commands.
security certificate create -common-name <cert-common-name> -type server -size 2048 -country <cert-country> -state <cert-state> -locality <cert-locality> -organization <cert-org> -unit <cert-unit> -email-addr <cert-email> -expire-days <cert-days> -protocol SSL -hash-function SHA256 -vserver Infra-SVM
4. To obtain the values for the parameters required in step 5 (<cert-ca> and <cert-serial>), run the security certificate show command.
5. Enable each certificate that was just created by using the -server-enabled true and -client-enabled false parameters. Use TAB completion to aid in the completion of these commands.
security ssl modify -vserver <clustername> -server-enabled true -client-enabled false -ca <cert-ca> -serial <cert-serial> -common-name <cert-common-name>
6. Disable HTTP cluster management access.
system services firewall policy delete -policy mgmt -service http -vserver <clustername>
It is normal for some of these commands to return an error message stating that the entry does not exist.
7. Change back to the normal admin privilege level and set up the system so that SVM logs are accessible through the web.
set -privilege admin
vserver services web modify -name spi|ontapi|compat -vserver * -enabled true
To configure NFSv3 on the SVM, complete the following steps:
1. Create a new rule for the infrastructure NFS subnet in the default export policy.
vserver export-policy rule create -vserver Infra-SVM -policyname default -ruleindex 1 -protocol nfs -clientmatch <infra-nfs-subnet-cidr> -rorule sys -rwrule sys -superuser sys -allow-suid false
2. Assign the FlexPod export policy to the infrastructure SVM root volume.
volume modify -vserver Infra-SVM -volume rootvol -policy default
The following information is required to create a NetApp FlexVol® volume:
· The volume name
· The volume size
· The aggregate on which the volume exists
FlexVol volumes are created to house boot LUNs for ESXi servers and NFS datastore volumes for virtual desktops, RDS hosts, and Citrix PVS VDI desktops. For specific details about the volumes created during this validation, see the Storage Configuration section below.
To create a FlexVol volume, run the following command(s):
volume create -vserver Infra-SVM -volume vdi_nfs_01 -aggregate aggr1_AFF300_01 -size 10TB -state online -policy default -space-guarantee none -percent-snapshot-space 0
The following information is required to create a NetApp FlexGroup volume:
· The volume name
· The volume size
· The aggregate on which the volume exists
FlexGroup volumes are created to house the CIFS shares hosting user profile data and PVS vDisks. For specific details about the volumes created during this validation, see the Storage Configuration section below.
To create a FlexGroup volume, run the following command(s):
volume flexgroup deploy -vserver Infra -volume vdi_cifs -size 10TB -space-guarantee none -type RW
Boot LUNs are created for each ESXi host, and data LUNs are created to host virtual desktop and RDS host VMs. For specific details about the LUNs created during this validation, see the Storage Configuration section below. To create boot and data LUNs, run the following commands:
lun create -vserver Infra-SVM -volume esxi_boot -lun VDI-1 -size 10GB -ostype vmware -space-reserve disabled
lun create -vserver Infra-SVM -volume esxi_boot -lun VDI-2 -size 10GB -ostype vmware -space-reserve disabled
Igroups are created to map host initiators to the LUNs they are allowed to access. Igroups can use the FCP protocol, the iSCSI protocol, or both. An igroup is created for each ESXi host to grant it access to its boot LUN. A separate igroup is created for the entire ESXi cluster to map all data LUNs to every node in the cluster.
1. To create igroups, run the following commands:
igroup create -vserver Infra-SVM -igroup VDI-1 -protocol fcp -ostype vmware -initiator <vm-host-VDI-1-iqn-a>,<vm-host-VDI-1-iqn-b>
igroup create -vserver Infra-SVM -igroup VDI-2 -protocol fcp -ostype vmware -initiator <vm-host-VDI-2-iqn-a>,<vm-host-VDI-2-iqn-b>
igroup create -vserver Infra-SVM -igroup VDI-3 -protocol fcp -ostype vmware -initiator <vm-host-VDI-3-iqn-a>,<vm-host-VDI-3-iqn-b>
2. To view igroups, type igroup show.
To allow access to specific LUNs by specific hosts, map each LUN to the appropriate igroup. For specific details about the LUNs created during this validation, see the Storage Configuration section below. To map LUNs to igroups, run the following commands:
lun map -vserver Infra-SVM -volume esxi_boot -lun VDI-1 -igroup VDI-1 -lun-id 0
lun map -vserver Infra-SVM -volume esxi_boot -lun VDI-2 -igroup VDI-2 -lun-id 0
lun map -vserver Infra-SVM -volume esxi_boot -lun VDI-3 -igroup VDI-3 -lun-id 0
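Because the igroup-create and LUN-map commands repeat per ESXi host, a small generator can render them for any number of hosts for review before pasting into the cluster shell. This is an illustrative helper, not an ONTAP tool; the host names and WWPN placeholders are assumptions standing in for the real initiator addresses collected from UCS Manager:

```python
def boot_lun_mapping(svm, volume, hosts):
    """hosts: dict of host/igroup name -> list of initiator WWPNs."""
    cmds = []
    for host, initiators in hosts.items():
        cmds.append(
            f"igroup create -vserver {svm} -igroup {host} -protocol fcp "
            f"-ostype vmware -initiator {','.join(initiators)}")
        # Boot LUNs are presented at LUN ID 0 so the ESXi host finds its
        # boot device first.
        cmds.append(
            f"lun map -vserver {svm} -volume {volume} -lun {host} "
            f"-igroup {host} -lun-id 0")
    return cmds

for c in boot_lun_mapping("Infra-SVM", "esxi_boot",
                          {"VDI-1": ["<wwpn-a>", "<wwpn-b>"]}):
    print(c)
```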
On NetApp All Flash FAS systems, deduplication is enabled by default. To schedule deduplication, complete the following step:
1. After the volumes are created, assign a once-a-day deduplication schedule to esxi_boot, infra_datastore_1 and infra_datastore_2:
efficiency modify -vserver Infra-SVM -volume esxi_boot -schedule sun-sat@0
efficiency modify -vserver Infra-SVM -volume infra_datastore_1 -schedule sun-sat@0
efficiency modify -vserver Infra-SVM -volume infra_datastore_2 -schedule sun-sat@0
Run the following commands to create four FC LIFs (two on each node):
network interface create -vserver Infra-SVM -lif fcp_lif01a -role data -data-protocol fcp -home-node <st-node01> -home-port 0g -status-admin up
network interface create -vserver Infra-SVM -lif fcp_lif01b -role data -data-protocol fcp -home-node <st-node01> -home-port 0h -status-admin up
network interface create -vserver Infra-SVM -lif fcp_lif02a -role data -data-protocol fcp -home-node <st-node02> -home-port 0g -status-admin up
network interface create -vserver Infra-SVM -lif fcp_lif02b -role data -data-protocol fcp -home-node <st-node02> -home-port 0h -status-admin up
network interface show
To create an NFS LIF, run the following commands:
network interface create -vserver Infra-SVM -lif nfs_lif01 -role data -data-protocol nfs -home-node <st-node01> -home-port a0a-<infra-nfs-vlan-id> -address <node01-nfs_lif01-ip> -netmask <node01-nfs_lif01-mask> -status-admin up -failover-policy broadcast-domain-wide -firewall-policy data -auto-revert true
network interface create -vserver Infra-SVM -lif nfs_lif02 -role data -data-protocol nfs -home-node <st-node02> -home-port a0a-<infra-nfs-vlan-id> -address <node02-nfs_lif02-ip> -netmask <node02-nfs_lif02-mask> -status-admin up -failover-policy broadcast-domain-wide -firewall-policy data -auto-revert true
network interface show
To add the infrastructure SVM administrator and SVM administration LIF in the out-of-band management network, complete the following steps:
1. Run the following commands:
network interface create -vserver Infra-SVM -lif svm-mgmt -role data -data-protocol none -home-node <st-node02> -home-port e0c -address <svm-mgmt-ip> -netmask <svm-mgmt-mask> -status-admin up -failover-policy broadcast-domain-wide -firewall-policy mgmt -auto-revert true
The SVM management IP in this step should be in the same subnet as the storage cluster management IP.
2. Create a default route to allow the SVM management interface to reach the outside world.
network route create -vserver Infra-SVM -destination 0.0.0.0/0 -gateway <svm-mgmt-gateway>
network route show
3. Set a password for the SVM vsadmin user and unlock the user.
security login password -username vsadmin -vserver Infra-SVM
Enter a new password: <password>
Enter it again: <password>
security login unlock -username vsadmin -vserver Infra-SVM
A cluster serves data through at least one and possibly several SVMs. We have just gone through creating a single SVM. If you would like to configure your environment with multiple SVMs, this is a good time to create additional SVMs.
The storage components for this reference architecture are composed of one AFF A300 HA pair and one DS224C disk shelf with 24x 3.8TB solid-state drives. This configuration delivers 65TB of usable storage and over 200TB of effective storage with deduplication, compression, and compaction, and the potential for over 300,000 IOPS depending on the application workload.
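The ~83.8TB raw figure reported in the capacity tables below follows from the drive count when decimal-TB drive sizes are expressed in binary TiB by the storage OS. A back-of-the-envelope check (the 3.84 TB nominal per-drive size is an assumption used for the arithmetic):

```python
# 24 drives marketed in decimal terabytes, reported by ONTAP in binary TiB.
DRIVES = 24
NOMINAL_TB = 3.84                       # decimal terabytes per drive (assumed)
BYTES_PER_DRIVE = NOMINAL_TB * 10**12   # marketing TB = 10^12 bytes
raw_tib = DRIVES * BYTES_PER_DRIVE / 2**40  # binary TiB = 2^40 bytes

print(f"raw capacity: {raw_tib:.2f} TiB")   # ~83.8 TiB, matching the tables
```

Usable capacity is lower still because of RAID-DP parity, spares, and the root aggregates, which is why the design quotes 65TB usable from 83.84TB raw.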
This section contains details on the specific storage system configuration used in this validation. This section does not include all possible configuration options, only those necessary to support this solution.
A cluster consists of one or more nodes grouped into HA pairs to form a scalable cluster. Creating a cluster enables the nodes to pool their resources and distribute work across the cluster, while presenting administrators with a single entity to manage. Clustering also enables continuous service to end users if individual nodes go offline.
Table 17 lists the cluster details.
Cluster Name | ONTAP Version | Node Count | Data SVM Count | Cluster Raw Capacity |
AFF A300 | 9.3P2 | 2 | 1 | 83.84TB |
Table 18 lists the storage details for each HA pair.
Node Names | Shelf Count | Disk Count | Disk Capacity | Raw Capacity |
AFF-A300-01 AFF-A300-02 | DS224-12: 1 | SSD: 24 | SSD: 83.84TB | 83.84TB |
Raw capacity is not the same as usable capacity.
Table 19 lists the drive allocation details for each node.
Table 19 Drive Allocation Details
Node Name | Total Disk Count | Allocated Disk Count | Disk Type | Raw Capacity | Spare Disk Count |
AFF-A300-01 | 12 | 12 | 3.8TB_SSD | 41.92TB | 0 |
AFF-A300-02 | 12 | 12 | 3.8TB_SSD | 41.92TB | 0 |
Raw capacity is not the same as usable capacity.
Table 20 lists the adapter cards present in each node.
Node Name | System Model | Slot Number | Part Number | Description |
AFF-A300-01 | AFF A300 | 1 | X2069 | PMC PM8072; PCI-E quad-port SAS (PM8072) |
AFF-A300-01 | AFF A300 | 2 | X1144A | NIC,2x40GbE,QSFP |
AFF-A300-02 | AFF A300 | 1 | X2069 | PMC PM8072; PCI-E quad-port SAS (PM8072) |
AFF-A300-02 | AFF A300 | 2 | X1144A | NIC,2x40GbE,QSFP |
Table 21 lists the relevant firmware details for each node.
Node Name | Node Firmware | Shelf Firmware | Drive Firmware | Remote Mgmt Firmware |
AFF-A300-01 | AFF-A300: 11.1 | IOM12: A:0210, B:0210 | X357_S163A3T8ATE: NA51 | SP: 5.1 |
AFF-A300-02 | AFF-A300: 11.1 | IOM12: A:0210, B:0210 | X357_S163A3T8ATE: NA51 | SP: 5.0X21 |
You can modify the MTU, auto-negotiation, duplex, flow control, and speed settings of a physical network port or interface group.
Table 22 lists the network port settings.
Table 22 Network Port Settings for ONTAP
Node Name | Port Name | Link Status | Port Type | MTU Size | Flow Control (Admin/Oper) | IPspace Name | Broadcast Domain |
AFF-A300-01 | a0a | up | if_group | 9000 | full/- | Default |
AFF-A300-01 | a0a-61 | up | vlan | 1500 | full/- | Default | IB
AFF-A300-01 | a0a-62 | up | vlan | 1500 | full/- | Default | cifs
AFF-A300-01 | a0a-63 | up | vlan | 9000 | full/- | Default | nfs
AFF-A300-01 | e0a | up | physical | 9000 | none/none | Cluster | Cluster
AFF-A300-01 | e0b | up | physical | 9000 | none/none | Cluster | Cluster
AFF-A300-01 | e0c | down | physical | 1500 | none/none | Default | Default
AFF-A300-01 | e0d | down | physical | 1500 | none/none | Default |
AFF-A300-01 | e0e | up | physical | 1500 | none/none | Default |
AFF-A300-01 | e0f | up | physical | 1500 | none/none | Default |
AFF-A300-01 | e0M | up | physical | 1500 | full/full | Default | Default
AFF-A300-01 | e2a | up | physical | 9000 | none/none | Default |
AFF-A300-01 | e2e | up | physical | 9000 | none/none | Default |
AFF-A300-02 | a0a | up | if_group | 9000 | full/- | Default |
AFF-A300-02 | a0a-61 | up | vlan | 1500 | full/- | Default | IB
AFF-A300-02 | a0a-62 | up | vlan | 1500 | full/- | Default | cifs
AFF-A300-02 | a0a-63 | up | vlan | 9000 | full/- | Default | nfs
AFF-A300-02 | e0a | up | physical | 9000 | none/none | Cluster | Cluster
AFF-A300-02 | e0b | up | physical | 9000 | none/none | Cluster | Cluster
AFF-A300-02 | e0c | down | physical | 1500 | none/none | Default | Default
AFF-A300-02 | e0d | down | physical | 1500 | none/none | Default |
AFF-A300-02 | e0e | up | physical | 1500 | none/none | Default |
AFF-A300-02 | e0f | up | physical | 1500 | none/none | Default |
AFF-A300-02 | e0M | up | physical | 1500 | full/full | Default | Default
AFF-A300-02 | e2a | up | physical | 9000 | none/none | Default |
AFF-A300-02 | e2e | up | physical | 9000 | none/none | Default |
An interface group (ifgrp) is a port aggregate containing two or more physical ports that acts as a single trunk port. Expanded capabilities include increased resiliency, increased availability, and load distribution. You can create three different types of interface groups on your storage system: single-mode, static multimode, and dynamic multimode. Each interface group provides different levels of fault tolerance. Multimode interface groups provide methods for load balancing network traffic.
Table 23 lists the network port ifgrp settings.
Table 23 Network Port Ifgrp Settings
Node Name | Ifgrp Name | Mode | Distribution Function | Ports |
AFF-A300-01 | a0a | multimode_lacp | port | e2a, e2e |
AFF-A300-02 | a0a | multimode_lacp | port | e2a, e2e |
You control how LIFs in an SVM use your network for outbound traffic by configuring routing tables and static routes.
· Routing tables. Routes are configured for each SVM and identify the SVM, subnet, and destination. Because routing tables are for each SVM, routing changes to one SVM do not alter the route table of another SVM.
Routes are created in an SVM when a service or application is configured for the SVM. Like data SVMs, the admin SVM of each IPspace has its own routing table because LIFs can be owned by admin SVMs and might need route configurations different from those on data SVMs.
If you have defined a default gateway when creating a subnet, a default route to that gateway is added automatically to the SVM that uses a LIF from that subnet.
· Static route. A defined route between a LIF and a specific destination IP address. The route can use a gateway IP address.
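Route selection within an SVM's table follows the usual longest-prefix-match rule, with the 0.0.0.0/0 default route matching any destination not covered by a more specific entry. A minimal sketch of that lookup, using the per-SVM default routes from this design (the route data is a sample, not pulled live from the cluster):

```python
import ipaddress

# Each SVM keeps its own routing table; changes to one do not affect another.
routes = {
    "Infra": [("0.0.0.0/0", "10.10.62.1")],
    "AFF-A300": [("0.0.0.0/0", "10.29.164.1")],
}

def next_hop(svm, dest_ip):
    """Return the gateway of the most specific matching route for this SVM."""
    best = None
    for prefix, gw in routes[svm]:
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(dest_ip) in net:
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, gw)
    return best[1] if best else None

print(next_hop("Infra", "8.8.8.8"))   # 10.10.62.1 via the default route
```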
Table 24 lists the network routes for Data ONTAP 8.3 or later.
Cluster Name | SVM Name | Destination Address | Gateway Address | Metric | LIF Names |
AFF-A300 | AFF-A300 | 0.0.0.0/0 | 10.29.164.1 | 20 | AFF-A300-01_mgmt1 AFF-A300-02_mgmt1 cluster_mgmt |
AFF-A300 | Infra | 0.0.0.0/0 | 10.10.62.1 | 20 | CIFS1-01 CIFS2-02 |
Broadcast domains enable you to group network ports that belong to the same layer 2 network. The ports in the group can then be used by an SVM for data or management traffic. A broadcast domain resides in an IPspace.
During cluster initialization, the system creates two default broadcast domains:
· The default broadcast domain contains ports that are in the default IPspace. These ports are used primarily to serve data. Cluster management and node management ports are also in this broadcast domain.
· The cluster broadcast domain contains ports that are in the cluster IPspace. These ports are used for cluster communication and include all cluster ports from all nodes in the cluster.
Table 25 lists the network port broadcast domains for Data ONTAP 8.3 or later.
Table 25 Network Port Broadcast Domains
Cluster Name | Broadcast Domain | IPspace Name | MTU Size | Subnet Names | Port List | Failover Group Names |
AFF-A300 | Cifs | Default | 1500 | | AFF-A300-01:a0a-62 AFF-A300-02:a0a-62 | cifs
AFF-A300 | Cluster | Cluster | 9000 | | AFF-A300-01:e0a AFF-A300-01:e0b AFF-A300-02:e0a AFF-A300-02:e0b | Cluster
AFF-A300 | Default | Default | 1500 | | AFF-A300-01:e0c AFF-A300-01:e0M AFF-A300-02:e0c AFF-A300-02:e0M | Default
AFF-A300 | IB | Default | 1500 | | AFF-A300-01:a0a-61 AFF-A300-02:a0a-61 | IB
AFF-A300 | nfs | Default | 9000 | | AFF-A300-01:a0a-63 AFF-A300-02:a0a-63 | nfs
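Because every port in a broadcast domain must carry that domain's MTU, a quick consistency check across the port table can catch jumbo-frame mismatches before they surface as NFS or cluster-interconnect problems. An illustrative sketch (the sample rows mirror a subset of the tables above; this is not an ONTAP command):

```python
# Expected MTU per broadcast domain, taken from this design.
expected_mtu = {"nfs": 9000, "cifs": 1500, "IB": 1500, "Cluster": 9000}

# (node, port, broadcast domain, configured MTU) rows to validate.
ports = [
    ("AFF-A300-01", "a0a-63", "nfs", 9000),
    ("AFF-A300-01", "a0a-62", "cifs", 1500),
    ("AFF-A300-02", "a0a-63", "nfs", 9000),
]

def mtu_mismatches(ports, expected):
    """Return (node, port) pairs whose MTU disagrees with their domain."""
    return [(node, port) for node, port, domain, mtu in ports
            if expected.get(domain) not in (None, mtu)]

print(mtu_mismatches(ports, expected_mtu))  # [] when all ports match
```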
Aggregates are containers for the disks managed by a node. You can use aggregates to isolate workloads with different performance demands, to tier data with different access patterns, or to segregate data for regulatory purposes.
· For business-critical applications that need the lowest possible latency and the highest possible performance, you might create an aggregate consisting entirely of SSDs.
· To tier data with different access patterns, you can create a hybrid aggregate, deploying flash as high-performance cache for a working data set, while using lower-cost HDDs or object storage for less frequently accessed data. A Flash Pool consists of both SSDs and HDDs. A Fabric Pool consists of an all-SSD aggregate with an attached object store.
· If you need to segregate archived data from active data for regulatory purposes, you can use an aggregate consisting of capacity HDDs, or a combination of performance and capacity HDDs.
Table 26 contains all aggregate configuration information.
Table 26 Aggregate Configuration
Aggregate Name | Home Node Name | State | RAID Status | RAID Type | Disk Count (By Type) | RG Size (HDD / SSD) | HA Policy | Has Mroot | Mirrored | Size Nominal |
aggr0_A300_01 | AFF-A300-01 | online | normal | raid_dp | 11@3.8TB_SSD (Shared) | 24 | cfo | True | False | 414.47GB |
aggr0_A300_02 | AFF-A300-02 | online | normal | raid_dp | 10@3.8TB_SSD (Shared) | 24 | cfo | True | False | 368.42GB |
aggr1_AFF300_01 | AFF-A300-01 | online | normal | raid_dp | 23@3.8TB_SSD (Shared) | 24 | sfo | False | False | 32.51TB |
aggr1_AFF300_02 | AFF-A300-02 | online | normal | raid_dp | 23@3.8TB_SSD (Shared) | 24 | sfo | False | False | 32.51TB |
An SVM is a secure virtual storage server that contains data volumes and one or more LIFs through which it serves data to the clients. An SVM appears as a single dedicated server to the clients. Each SVM has a separate administrator authentication domain and can be managed independently by its SVM administrator.
In a cluster, an SVM facilitates data access. A cluster must have at least one SVM to serve data. SVMs use the storage and network resources of the cluster. However, the volumes and LIFs are exclusive to the SVM. Multiple SVMs can coexist in a single cluster without being bound to any node in a cluster. However, they are bound to the physical cluster on which they exist.
Table 27 lists the SVM configuration.
Cluster Name | SVM Name | Type | Subtype | State | Allowed Protocols | Name Server Switch | Name Mapping Switch | Comment |
AFF-A300 | Infra | data | default | running | cifs, fcp | | |
Table 28 lists the SVM storage configuration.
Table 28 SVM Storage Configuration
Cluster Name | SVM Name | Root Volume Security Style | Language | Root Volume | Root Aggregate | Aggregate List |
AFF-A300 | Infra | unix | c.utf_8 | svm_root | aggr1_AFF300_01 | aggr1_AFF300_01, aggr1_AFF300_02 |
A FlexVol volume is a data container associated with an SVM. It gets its storage from a single associated aggregate, which it might share with other FlexVol volumes or infinite volumes. It can be used to contain files in a NAS environment or LUNs in a SAN environment.
Table 29 lists the FlexVol configuration.
Table 29 FlexVol Configuration
Cluster Name | SVM Name | Volume Name | Containing Aggregate | Type | Snapshot Policy | Export Policy | Security Style | Size Nominal |
AFF-A300 | Infra | esxi_boot | aggr1_AFF300_01 | RW | default | default | unix | 500.00GB |
AFF-A300 | Infra | home | aggr1_AFF300_02 | RW | default | default | ntfs | 1.00TB |
AFF-A300 | Infra | infra_nfs_ds01 | aggr1_AFF300_01 | RW | default | default | unix | 6.00TB |
AFF-A300 | Infra | vdi_cifs (FlexGroup) | aggr1_AFF300_01 & 02 | RW | default | default | unix | 1.00TB |
AFF-A300 | Infra | vdi_cifs_vDisk | aggr1_AFF300_02 | RW | default | default | unix | 500GB
AFF-A300 | Infra | vdi_nfs_ds01 | aggr1_AFF300_01 | RW | default | default | unix | 10.00TB |
AFF-A300 | Infra | vdi_nfs_ds02 | aggr1_AFF300_02 | RW | default | default | unix | 10.00TB |
AFF-A300 | Infra | vdi_nfs_ds03 | aggr1_AFF300_01 | RW | default | default | unix | 10.00TB |
AFF-A300 | Infra | vdi_nfs_ds04 | aggr1_AFF300_02 | RW | default | default | unix | 10.00TB |
AFF-A300 | Infra | vdi_nfs_ds05 | aggr1_AFF300_01 | RW | default | default | unix | 10.00TB |
AFF-A300 | Infra | vdi_nfs_ds06 | aggr1_AFF300_02 | RW | default | default | unix | 10.00TB
AFF-A300 | Infra | vdi_nfs_ds07 | aggr1_AFF300_01 | RW | default | default | unix | 10.00TB |
AFF-A300 | Infra | vdi_nfs_ds08 | aggr1_AFF300_02 | RW | default | default | unix | 10.00TB |
ONTAP can be accessed by clients using the Common Internet File System (CIFS)/Server Message Block (SMB) and Network File System (NFS) protocols.
Clients can access all files on an SVM regardless of the protocol they connect with or the type of authentication they require.
A LIF is an IP address with associated characteristics, such as a role, a home port, a home node, a routing group, a list of ports to fail over to, and a firewall policy. You can configure LIFs on ports over which the cluster sends and receives communications over the network.
LIFs can be hosted on the following ports:
· Physical ports that are not part of interface groups
· Interface groups
· VLANs
· Physical ports or interface groups that host VLANs
When configuring SAN protocols such as FC on a LIF, the LIF role determines the kind of traffic that is supported over the LIF, along with the failover rules that apply and the firewall restrictions that are in place.
LIF failover refers to the automatic migration of a LIF in response to a link failure on the LIF's current network port. When such a port failure is detected, the LIF is migrated to a working port.
A failover group contains a set of network ports (physical, VLANs, and interface groups) on one or more nodes. A LIF can subscribe to a failover group. The network ports that are present in the failover group define the failover targets for the LIF.
Table 30 lists the NAS LIF settings.
Cluster Name | SVM Name | Interface Name | Status (Admin/Oper) | IP Address | Current Node | Current Port | Is Home |
AFF-A300 | Infra | CIFS1-01 | up/up | 10.10.62.10/24 | AFF-A300-01 | a0a-62 | False |
AFF-A300 | Infra | CIFS2-02 | up/up | 10.10.62.11/24 | AFF-A300-02 | a0a-62 | True |
AFF-A300 | Infra | mgmt2 | up/up | 10.10.61.26/24 | AFF-A300-01 | a0a-61 | True |
AFF-A300 | Infra | NFS1-01 | up/up | 10.10.63.10/24 | AFF-A300-01 | a0a-63 | False |
AFF-A300 | Infra | NFS2-02 | up/up | 10.10.63.11/24 | AFF-A300-02 | a0a-63 | True |
You can enable and configure a CIFS SVM to let SMB clients access files on your SVM. Each data SVM in the cluster can be bound to only one Active Directory domain; however, the data SVMs do not need to be bound to the same domain. Each SVM can be bound to a unique Active Directory domain. Additionally, a CIFS SVM can be used to tunnel cluster administration authentication, which can be bound to only one Active Directory domain.
CIFS clients can access files on an SVM using the CIFS protocol, provided ONTAP can properly authenticate the user.
Table 31 lists CIFS server configuration information.
Cluster Name | SVM Name | CIFS Server | Domain | Domain NetBIOS Name | WINS Servers | Preferred DC |
AFF-A300 | Infra | INFRA | VDILAB.LOCAL | VDILAB | |
Most of these options are only available starting with Data ONTAP 8.2.
Table 32 lists CIFS options.
Cluster Name | SVM Name | SMB v2 Enabled | SMB v3 Enabled | Export Policy Enabled | Copy Offload Enabled | Local Users and Groups Enabled | Referral Enabled | Shadow Copy Enabled |
AFF-A300 | Infra | True | True | False | True | True | False | True |
You can create local users and groups on the SVM. The CIFS server can use local users for CIFS authentication and can use both local users and groups for authorization when determining both share and file and directory access rights.
Local group members can be local users, domain users and groups, and domain machine accounts.
Local users and groups can also be assigned privileges. Privileges control access to SVM resources and can override the permissions that are set on objects. A user or member of a group that is assigned a privilege is granted the specific rights that the privilege allows.
Privileges do not provide ONTAP general administrative capabilities.
A CIFS share is a named access point in a volume and/or namespace that enables CIFS clients to view, browse, and manipulate files on an SVM.
Table 33 lists the CIFS shares.
Cluster Name | SVM Name | Share Name | Path | Share Properties | Symlink Properties | Share ACL |
AFF-A300 | Infra | %w | %w | homedirectory | symlinks | Everyone:Full Control
AFF-A300 | Infra | admin$ | / | browsable | | UTD
AFF-A300 | Infra | c$ | / | oplocks browsable changenotify show_previous_versions | symlinks | Administrators:Full Control
AFF-A300 | Infra | HomeDirs$ | /vdi_cifs/HomeDirs | oplocks browsable changenotify show_previous_versions | symlinks | Everyone:Full Control
AFF-A300 | Infra | ipc$ | / | browsable | | UTD
AFF-A300 | Infra | Profile$ | /vdi_cifs/Profiles | oplocks browsable changenotify show_previous_versions | symlinks | Everyone:Full Control
AFF-A300 | Infra | vDisk$ | /vdi_cifs_vDisk | oplocks browsable changenotify show_previous_versions | symlinks | Everyone:Full Control
ONTAP home directories enable you to configure an SMB share that maps to different directories based on the user that connects to it and a set of variables. Instead of having to create separate shares for each user, you can configure a single share with a few home directory parameters to define a user's relationship between an entry point (the share) and their home directory (a directory on the SVM).
The home directory search paths are a set of absolute paths from the root of the SVM that direct where ONTAP searches for home directories. You specify one or more search paths by using the vserver cifs home-directory search-path add command. If you specify multiple search paths, ONTAP tries them in the order specified until it finds a valid path.
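The behavior described above can be sketched as a small resolution function: the %w variable in the share name and path is replaced by the connecting user's name, and each configured search path is tried in order until a matching directory is found. This is an illustrative model of the logic, not the actual CIFS server implementation; the directory names are assumptions based on Table 34:

```python
def resolve_home(user, search_paths, existing_dirs):
    """Resolve a %w home-directory share to a concrete path, or None."""
    for base in search_paths:          # search paths are tried in order
        candidate = f"{base}/{user}"   # %w expands to the Windows user name
        if candidate in existing_dirs:
            return candidate
    return None                        # no valid home directory found

dirs = {"/home/LoginVSI/user0001"}
print(resolve_home("user0001", ["/home/LoginVSI"], dirs))
# /home/LoginVSI/user0001
```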
Table 34 lists the CIFS home directory search paths.
Table 34 CIFS Home Directory Search Paths
Cluster Name | SVM Name | Position | Path |
AFF-A300 | Infra | 1 | /home/LoginVSI |
Storage area network (SAN) is a term used to describe a purpose-built storage controller that provides block-based data access. ONTAP supports traditional FC as well as iSCSI and FCoE within a unified architecture.
LUNs are created and exist within a given FlexVol volume and are used to store data which is presented to servers or clients. LUNs provide storage for block-based protocols such as FC or iSCSI.
Table 35 lists the LUN details.
Cluster Name | SVM Name | Path | Mapped | Online | Protocol Type | Read Only | Size |
AFF-A300 | Infra | /vol/esxi_boot/VDI-1 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-2 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-3 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-4 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-5 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-6 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-7 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-9 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-10 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-11 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-12 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-13 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-14 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-15 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-17 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-18 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-19 | True | True | vmware | False | 10.00GB
AFF-A300 | Infra | /vol/esxi_boot/VDI-20 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-21 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-22 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-23 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-24 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-25 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-26 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-27 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-28 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-29 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-30 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-31 | True | True | vmware | False | 10.00GB |
AFF-A300 | Infra | /vol/esxi_boot/VDI-32 | True | True | vmware | False | 10.00GB |
Initiator groups (igroups) are tables of FC protocol host WWPNs or iSCSI host node names. You can define igroups and map them to LUNs to control which initiators have access to LUNs.
Typically, you want all of the host’s initiator ports or software initiators to have access to a LUN. If you are using multipathing software or have clustered hosts, each initiator port or software initiator of each clustered host needs redundant paths to the same LUN.
You can create igroups that specify which initiators have access to the LUNs either before or after you create LUNs, but you must create igroups before you can map a LUN to an igroup.
Initiator groups can have multiple initiators, and multiple igroups can have the same initiator. However, you cannot map a LUN to multiple igroups that have the same initiator. An initiator cannot be a member of igroups of differing OS types.
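The mapping constraint above — a LUN may not be mapped to multiple igroups that share an initiator — can be expressed as a simple validation pass over a proposed configuration. A sketch (the function and sample names are hypothetical; ONTAP enforces this rule itself at map time):

```python
def conflicting_mappings(lun_to_igroups, igroup_initiators):
    """Find (lun, initiator, igroup1, igroup2) tuples that violate the
    rule that no initiator may reach one LUN through two igroups."""
    conflicts = []
    for lun, igroups in lun_to_igroups.items():
        seen = {}  # initiator -> first igroup that contributed it
        for ig in igroups:
            for init in igroup_initiators[ig]:
                if init in seen:
                    conflicts.append((lun, init, seen[init], ig))
                else:
                    seen[init] = ig
    return conflicts

igroups = {"VDI-1": ["wwpn-a1", "wwpn-b1"], "VDI-2": ["wwpn-a2", "wwpn-b2"]}
# Each boot LUN is mapped to exactly one per-host igroup, so no conflicts.
print(conflicting_mappings({"/vol/esxi_boot/VDI-1": ["VDI-1"]}, igroups))
# []
```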
Table 36 lists the igroups that have been created.
Cluster Name | SVM Name | Initiator Group Name | Protocol | OS Type | ALUA | Initiators Logged In |
AFF-A300 | Infra | VDI-1 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-2 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-3 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-4 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-5 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-6 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-7 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-9 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-10 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-11 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-12 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-13 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-14 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-15 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-17 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-18 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-19 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-20 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-21 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-22 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-23 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-24 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-25 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-26 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-27 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-28 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-29 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-30 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-31 | fcp | vmware | True | full |
AFF-A300 | Infra | VDI-32 | fcp | vmware | True | full |
Table 37 lists the FCP LIF settings.
Cluster Name | SVM Name | Interface Name | Status (Admin/Oper) | Port Name | Current Node | Current Port | Is Home |
AFF-A300 | Infra | fcp_01a | up/up | 20:01:00:a0:98:af:bd:e8 | AFF-A300-01 | 0g | True |
AFF-A300 | Infra | fcp_01b | up/up | 20:02:00:a0:98:af:bd:e8 | AFF-A300-01 | 0h | True |
AFF-A300 | Infra | fcp_02a | up/up | 20:03:00:a0:98:af:bd:e8 | AFF-A300-02 | 0g | True |
AFF-A300 | Infra | fcp_02b | up/up | 20:04:00:a0:98:af:bd:e8 | AFF-A300-02 | 0h | True |
FCP is a licensed service on the storage system that enables you to export LUNs and transfer block data to hosts using the SCSI protocol over an FC fabric.
Table 38 lists the FCP service configuration details.
Table 38 FCP Service Configuration
Cluster Name | SVM Name | Node Name | Available |
AFF-A300 | Infra | 20:00:00:a0:98:af:bd:e8 | True |
You can use storage controller onboard FC ports as both initiators and targets. You can also add storage controller FC ports on expansion adapters and use them as initiators or targets, depending on the type of expansion adapter installed.
Table 39 lists the details of the storage controller target ports and the WWPN address of each.
Table 39 FCP Adapter Configuration
Node Name | Adapter Name | State | Data Link Rate | Media Type | Speed | Port Name |
AFF-A300-01 | 0e | offlined by user/system | 0 | ptp | auto | 50:0a:09:82:80:13:41:27 |
AFF-A300-01 | 0f | offlined by user/system | 0 | ptp | auto | 50:0a:09:81:80:13:41:27 |
AFF-A300-01 | 0g | Online | 8 | ptp | auto | 50:0a:09:84:80:13:41:27 |
AFF-A300-01 | 0h | Online | 8 | ptp | auto | 50:0a:09:83:80:13:41:27 |
AFF-A300-02 | 0e | offlined by user/system | 0 | ptp | auto | 50:0a:09:82:80:d3:67:d3 |
AFF-A300-02 | 0f | offlined by user/system | 0 | ptp | auto | 50:0a:09:81:80:d3:67:d3 |
AFF-A300-02 | 0g | Online | 8 | ptp | auto | 50:0a:09:84:80:d3:67:d3 |
AFF-A300-02 | 0h | Online | 8 | ptp | auto | 50:0a:09:83:80:d3:67:d3 |
ONTAP offers a wide range of storage-efficiency technologies in addition to Snapshot. Key technologies include thin provisioning, deduplication, compression, and FlexClone volumes, files, and LUNs. Like Snapshot, all are built on ONTAP WAFL.
You can run deduplication, data compression, and data compaction together or independently on a FlexVol volume or an infinite volume to achieve optimal space savings. Deduplication eliminates duplicate data blocks and data compression compresses the data blocks to reduce the amount of physical storage that is required. Data compaction stores more data in less space to increase storage efficiency.
Beginning with ONTAP 9.2, all inline storage-efficiency features, such as inline deduplication and inline compression, are enabled by default on AFF volumes.
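These defaults can be confirmed, or enabled on volumes that predate the upgrade, from the ONTAP CLI. The following is a hedged sketch using one of the design's volume names; verify the option names against your ONTAP release:

```
# Enable storage efficiency on the volume and attach the inline-only policy
volume efficiency on -vserver Infra -volume vdi_nfs_ds01
volume efficiency modify -vserver Infra -volume vdi_nfs_ds01 -policy inline-only -inline-compression true
# Display the resulting efficiency settings (compare with Table 40)
volume efficiency show -vserver Infra -volume vdi_nfs_ds01
```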
Table 40 lists the volume efficiency settings.
Table 40 Volume Efficiency Settings
Cluster Name | SVM Name | Volume Name | Space Guarantee | Dedupe | Schedule Or Policy Name | Compression | Inline Compression |
AFF-A300 | Infra | esxi_boot | none | True | sun-sat@1 | True | True |
AFF-A300 | Infra | infra_nfs_ds01 | none | True | inline-only | True | True |
AFF-A300 | Infra | svm_root | volume | - | - | - | - |
AFF-A300 | Infra | vdi_cifs | none | True | inline-only | True | True |
AFF-A300 | Infra | vdi_cifs_vDisk | none | True | inline-only | True | True |
AFF-A300 | Infra | vdi_nfs_ds01 | none | True | inline-only | True | True |
AFF-A300 | Infra | vdi_nfs_ds02 | none | True | inline-only | True | True |
AFF-A300 | Infra | vdi_nfs_ds03 | none | True | inline-only | True | True |
AFF-A300 | Infra | vdi_nfs_ds04 | none | True | inline-only | True | True |
AFF-A300 | Infra | vdi_nfs_ds05 | none | True | inline-only | True | True |
AFF-A300 | Infra | vdi_nfs_ds06 | none | True | inline-only | True | True |
AFF-A300 | Infra | vdi_nfs_ds07 | none | True | inline-only | True | True |
AFF-A300 | Infra | vdi_nfs_ds08 | none | True | inline-only | True | True |
Thin provisioning enables storage administrators to provision more storage on a LUN than is physically present on the volume. By overprovisioning the volume, storage administrators can increase the capacity utilization of that volume. As the blocks are written to the LUN, ONTAP adds more space to the LUN from available space on the volume.
With thin provisioning, you can present more storage space to the hosts connecting to the SVM than what is actually available on the SVM. Storage provisioning with thinly provisioned LUNs enables storage administrators to provide actual storage that the LUN needs. As ONTAP writes blocks to the LUN, the LUN increases in size automatically.
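A boot LUN with the settings shown in Table 41 could be created roughly as follows from the ONTAP CLI. The size and path mirror the design, but treat this as a sketch rather than the exact provisioning commands used:

```
# Create a 10 GB thin-provisioned LUN (space reservation disabled)
lun create -vserver Infra -path /vol/esxi_boot/VDI-1 -size 10GB -ostype vmware -space-reserve disabled
# Verify the reservation and allocation settings
lun show -vserver Infra -path /vol/esxi_boot/VDI-1 -fields space-reserve,space-allocation
```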
Table 41 lists the LUN efficiency settings.
Table 41 LUN Efficiency Settings
Cluster Name | SVM Name | Path | Space Reservation Enabled | Space Allocation Enabled |
AFF-A300 | Infra | /vol/esxi_boot/VDI-1 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-2 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-3 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-4 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-5 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-6 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-7 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-9 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-10 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-11 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-12 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-13 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-14 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-15 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-17 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-18 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-19 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-20 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-21 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-22 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-23 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-24 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-25 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-26 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-27 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-28 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-29 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-30 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-31 | False | False |
AFF-A300 | Infra | /vol/esxi_boot/VDI-32 | False | False |
NetApp is the leader in providing a fully functional CIFS storage server. NetApp has provided CIFS server functionality since SMB 1.0 and supports SMB 2.0, 2.1, and 3.0. The benefit of using the integrated CIFS functionality within the storage array is that it removes the need to process the I/O twice. In a Windows File Server environment, the data is processed at the Windows File Server layer and then passed on to be processed by the storage array. With NetApp’s CIFS functionality, the client maps the share on the NetApp storage cluster directly; therefore, the I/O is processed only at the storage array level. The NetApp CIFS model removes the requirement for separate Windows file servers, which in turn removes the overhead of having the data processed at the Windows File Server layer.
Windows File Services Features in Clustered Data ONTAP 9.3
Clustered Data ONTAP 9.3 contains the following new CIFS features:
· Microsoft Management Console (MMC) support for viewing and managing open files, open sessions, and shares
· NetBIOS aliases
· Storage-Level Access Guard (SLAG)
· Native file-access auditing for user logon and logoff
· Group Policy object (GPO) security policy support
· NetApp FPolicy pass-through read support
· Offloaded data transfer (ODX) enhancements
· Support for Microsoft Dynamic Access Control (DAC)
Table 42 presents a complete list of CIFS features.
Table 42 9.3 CIFS Features in Clustered Data ONTAP
CIFS Features |
Support for Microsoft DAC (Dynamic Access Control) |
AES 128/256 for CIFS Kerberos authentication |
ODX direct-copy |
MMC support for viewing and managing open files and sessions |
NetBIOS aliases |
SLAG |
Native auditing for logon and logoff to shares |
UNIX character mapping |
GPO security policy support |
FPolicy pass-through read support |
CIFS restrict anonymous capability |
Control bypass traverse checking |
CIFS home directory show user command |
Control of CIFS home directory access for admins |
Multidomain user mapping |
LDAP over SSL (start-TLS) |
Offbox antivirus |
Separate AD licensing |
SMB 3.0, SMB 2.1, and SMB 2.0 |
Copy offload |
SMB autolocation |
BranchCache |
Local users and groups |
FSecurity |
FPolicy |
Roaming profiles and folder redirection |
Access-based enumeration (ABE) |
Offline folders |
SMB signing |
Remote VSS |
File access auditing or file access monitoring |
Best Practices |
· Use CIFS shares on the NetApp storage cluster instead of a Windows File Server VM.
· Use CIFS shares on the NetApp storage cluster for VDI home directories, VDI profiles, and other VDI CIFS data.
Best Practices |
· Use deduplication and compression for end-user data files stored in home directories to achieve storage efficiency. NetApp strongly recommends storing user data on the CIFS home directory in the NetApp storage cluster.
· Use Microsoft DFS to manage CIFS shares. NetApp supports client DFS to locate directories and files.
· Use the NetApp home directory share feature to minimize the number of shares on the system.
· Use SMB3 for home directories.
The second type of user data is the user profile (personal data). This data allows the user to have a customized desktop environment when using a non-persistent virtual desktop. User profiles are typically stored in C:\Users on a Microsoft Windows physical machine and can be redirected to a CIFS share for use in a non-persistent, virtual desktop environment.
Many profile management solutions on the market simplify management, improve reliability, and reduce network requirements when compared with standard Windows folder redirection. A profile management solution speeds the migration process from a physical desktop or laptop by first virtualizing the profile and then virtualizing the desktop. This improves login times compared with using folder redirection alone and centralizes the end-user profile for better backup and recovery and disaster recovery of data.
Profile management provides an easy, reliable, and high-performance way to manage user personalization settings in virtual or physical Windows OS environments. It requires minimal infrastructure and administration and provides users with fast logons and logoffs. A Windows user profile is a collection of folders, files, registry settings, and configuration settings that define the environment for a user who logs on with a particular user account. These settings can be customized by the user, depending on the administrative configuration. Examples of settings that can be customized are:
· Desktop settings such as wallpaper and screen saver
· Shortcuts and start menu settings
· Internet Explorer favorites and homepage
· Microsoft Outlook signature
· Printers
Some user settings and data can be redirected by means of folder redirection. However, these settings are stored within the user profile if folder redirection is not used.
The first stage in planning a profile-management deployment is to decide on a set of policy settings that together form a suitable configuration for your environment and users. The automatic configuration feature simplifies some of this for XenDesktop deployments.
Best Practices |
For a faster login:
· NetApp recommends All Flash FAS (solid-state drives) for profiles and PVS vDisks.
· NetApp recommends the Citrix User Profile Manager (UPM) software to eliminate unnecessary file copying during login and to allow users to personalize their desktops.
· NetApp recommends folder redirection in Microsoft GPOs to eliminate an enlarged profile state, which slows down login time.
· NetApp recommends SMB3 shares on the NetApp storage with NetApp FlexGroup volumes, which eliminates the CIFS I/O being processed by a Windows server and then by the storage array; the data is written directly to the storage array across multiple volumes on multiple clustered storage nodes.
Citrix recommends placing the PVS vDisks on a Microsoft SMB3 share to centralize all Golden Master VDI template images for all PVS servers in the cluster. This prevents the need to copy Golden Masters to each PVS server and avoids the issue of outdated or mixed-version Golden Master templates. NetApp storage can host the PVS vDisks on an SMB3 share residing on the NetApp storage for centralization, enterprise backups, and the functional benefits of using enterprise storage. In this reference architecture, we placed the PVS vDisks on a NetApp SMB3 share on a FlexGroup volume. We successfully tested failover of the storage nodes while the PVS server was running and the PVS vDisks resided on the NetApp A300 storage array. FlexGroups do not support the Microsoft Continuous Availability (CA) feature of SMB3 (persistent handles); therefore, there is no longer a need to enable NetApp’s CA shares feature on the PVS vDisks SMB3 share when using a FlexGroup volume. This was a previous recommendation and is no longer required from ONTAP 9.3 onward.
NetApp OnCommand System Manager can be used to set up CIFS volumes, shares and LIFs for PVS vDisks. Although LIFs can be created and managed through the command line, this section focuses on the NetApp OnCommand System Manager GUI. Note that System Manager 3.0 or later is required to perform these steps.
In clustered Data ONTAP 9.3, you can use either Microsoft Management Console (MMC) or NetApp System Manager to create shares. You can also use NetApp System Manager to configure the CIFS server in a NetApp SVM as well as the CIFS shares.
In this reference architecture, we used NetApp System Manager to create and configure CIFS for the VDI environment, including the CIFS shares. The following steps show how to create and configure CIFS shares with the NetApp System Manager GUI tool. To configure CIFS, complete the following steps:
1. To configure CIFS, sign into the System Manager Tool and go to the SVM menu.
2. Click the SVM menu and then click the SVM that will contain the CIFS volumes and thus require the CIFS configuration.
3. In the left pane, select Configuration > Protocols > CIFS.
4. Add the CIFS licenses and enable the CIFS service, as described earlier. To configure the Preferred Domain Controllers, click the line in the bottom window, add the preferred DC’s IP address and FQDN, and click Save. Repeat this step for each DC that is local to your site and that you want on your preferred list.
5. Enable the built-in administrator account by selecting Users and Groups in the Configuration menu. Then click Windows. In the right pane, select the local administrator account and click Edit.
6. Deselect Disable This Account and click Modify.
The Account Disabled column should read No.
7. To configure Windows-to-Unix and Unix-to-Windows name mapping, select Name Mapping within the Users and Groups menu.
8. Click Add and then add the following:
- Unix to Windows: ID=1, Pattern=root, Replacement=Domain administrator
- Windows to Unix: ID=1, Pattern=Domain administrator, Replacement=root
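The same two mappings can be created from the ONTAP CLI. The following is a sketch assuming the SVM is named Infra and using a placeholder Windows domain named DOMAIN; note that the Windows-to-UNIX pattern is a regular expression, so the backslash is doubled:

```
# UNIX-to-Windows: present UNIX root as the domain administrator
vserver name-mapping create -vserver Infra -direction unix-win -position 1 -pattern root -replacement "DOMAIN\\administrator"
# Windows-to-UNIX: map the domain administrator back to root
# (the backslash in the pattern is regex-escaped)
vserver name-mapping create -vserver Infra -direction win-unix -position 1 -pattern "DOMAIN\\\\administrator" -replacement root
```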
As a part of the CIFS share design, we chose to utilize NetApp Qtrees to provide quotas at a lower level within the volume. A Qtree is a folder that is created within the NetApp volume and yet is maintained at the storage level, not the hypervisor level. The hypervisor has access to the Qtree, which appears as a normal mount point within the volume. The Qtree folder provides granular quota functions within the volume. A Qtree folder must be created prior to creating the CIFS share because we will export the Qtree folder as the CIFS share.
1. To create a Qtree, sign into the System Manager tool and go to the SVM menu. Expand the SVM menu and select Storage > Qtrees.
2. In the right pane, click Create to create a Qtree.
3. Enter the Qtree folder name, choose the storage volume, select Enable Oplocks for Files and Directories in This Qtree, and enter the export policy. (You can create the export policy prior to this step or by clicking Create Export Policy.) Then click the Quota tab.
4. Select Limit Total Space Usage Within This Qtree and enter the space usage limit in TB or GB. Then select the Limit Total Space Usage for Users of This Qtree and enter the space usage limit in TB or GB. Click Create.
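Steps 1 through 4 have a CLI equivalent; the following sketch uses illustrative qtree, export policy, and quota limit values rather than values from this deployment:

```
# Create the qtree with oplocks enabled and an export policy attached
volume qtree create -vserver Infra -volume vdi_cifs -qtree home_dirs -oplock-mode enable -export-policy cifs_policy
# Tree quota: cap total space used inside the qtree
volume quota policy rule create -vserver Infra -policy-name default -volume vdi_cifs -type tree -target home_dirs -disk-limit 2TB
# Activate quota enforcement on the volume
volume quota on -vserver Infra -volume vdi_cifs
```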
Several tools can be used to create CIFS shares on NetApp storage, including:
· Microsoft Management Console as of cDOT 8.3
· The NetApp clustered Data ONTAP CLI
· NetApp System Manager
For this reference architecture, NetApp System Manager is used to take advantage of the NetApp User Home Directory Shares feature.
To create CIFS shares, complete the following steps:
1. Within System Manager, select SVM menu, expand the SVM, and select Storage > Shares in the left pane.
2. Click Create to create the CIFS share.
3. Enter the folder to share (the Qtree path). The CIFS share name is the advertised SMB share name mapped by the VDI clients. Enter a comment and, if needed, select Enable Continuous Availability for Hyper-V and SQL. Click Create.
Selecting Enable Continuous Availability for Hyper-V and SQL enables Microsoft Persistent Handles support on the NetApp SMB3 CIFS share. This feature is only available with SMB3 clients (Windows 2012 Servers) that map the NetApp CIFS share. Therefore, this feature is not available for Windows 2008 or 2003 servers.
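The equivalent share creation from the ONTAP CLI might look like the following (the share name and qtree path are illustrative):

```
# Advertise the qtree path as an SMB share
vserver cifs share create -vserver Infra -share-name home_dirs -path /vdi_cifs/home_dirs
# Continuous availability, if required for Hyper-V/SQL workloads, is a share property:
# vserver cifs share properties add -vserver Infra -share-name home_dirs -share-properties continuously-available
```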
Since it is Citrix PVS best practice to install the PVS vDisk on a CIFS share to eliminate the need for multiple copies of the Golden Templates, it is NetApp best practice to activate continuous availability (Persistent Handles) on the PVS vDisk share. If using FlexVol volumes for the PVS vDisks, not selecting the Continuous Availability (CA) option for the PVS vDisk share on the NetApp storage may result in a loss of PVS service if a storage controller failover event occurs (one storage node failing over its resources to another storage controller node). With FlexGroups, this is not the case.
In this reference architecture, we used FlexGroups for the CIFS requirements, and FlexGroups do not support the Continuous Availability (CA) option. We conducted several successful tests of storage node failover while the PVS servers were running and did not experience any failures. Therefore, the CA option is not needed for PVS servers if the PVS vDisks are placed on FlexGroup volumes.
Best Practices |
· Use FlexGroup volumes for VDI CIFS requirements, including the PVS vDisks.
· If using FlexGroup volumes for the PVS vDisks, Continuous Availability is not supported and not needed.
Previously, it was NetApp best practice to disable deduplication on volumes that contain write-cache disks. With the advent of NetApp All Flash FAS (flash media) and the Citrix RAM Cache Plus Overflow feature in XenDesktop 7.15 and later, NetApp recommends enabling deduplication on write-cache volumes. The two primary reasons for the change are the need to reduce the amount of stale data in the write-cache disks and the need for capacity reduction.
The Citrix RAM Cache Plus Overflow feature removes the majority of IOPS from centrally shared storage but still requires the full capacity of the write-cache disks. This creates a requirement for high-capacity, low-IOPS write-cache disk volumes, which takes excellent advantage of the NetApp inline and post-process deduplication features. Storage deduplication is very beneficial in a low-IOPS, high-capacity environment.
Write-cache disks can build up stale data. The write-cache disk cache data is cleared when a VDI desktop is rebooted or deleted. However, in many VDI environments, customers have persistent VDI desktops or VDI desktops that do not get rebooted regularly. Therefore, stale data builds up in the write cache disks. Deduplication reduces the amount of stale data that resides on these disks.
Because capacity is an important consideration in an all-flash environment, in this reference architecture we enabled deduplication on the write-cache volumes to reduce the amount of capacity required. In addition, write-cache disks may vary in size from user to user, so it is uncertain how much capacity is needed for each user’s write-cache disk.
Another option for conserving capacity is to utilize NetApp volume thin provisioning. Thin provisioning on the write-cache volumes takes the guesswork out of allocating capacity for each user’s write-cache disk file and prevents overprovisioning the environment. It is a best practice to enable storage thin provisioning on the write-cache volumes.
When creating VMware VMDK disks in a Citrix VDI environment, NetApp recommends provisioning them as thick eager-zeroed. The reasoning behind this option is to remove the extra VMDK formatting write I/O during production time. With thick eager-zeroed disks, the VMDK files are pre-formatted with zeros at creation time, and the NetApp inline deduplication feature reduces those zeros to a couple of bytes. A VMDK configured in this manner displays as a 100 GB drive, for example, but occupies only a couple of bytes on the NetApp storage.
With thin-provisioned VMDK disks, the disks are formatted when a write I/O is requested during production, which adds I/O to the storage during production usage. VDI environments are roughly 90 percent writes and 10 percent reads, so creating the VMware VMDK disks as thick eager-zeroed reduces the overall write I/O.
On the other hand, with NFS volumes, creating thick-eager zero vmdk disks does not allow a VMware vmdk to realize the space savings with storage deduplication. The NetApp storage realizes the space savings but the VMware vmdk will not see the space savings derived from the storage deduplication feature. If seeing the vmdk capacity savings within VMware is a higher priority than performance, you may want to create the VMware vmdk disks as thin-provisioned.
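A thick eager-zeroed VMDK of the kind recommended above can be created on an ESXi host with vmkfstools; the following is a sketch with an illustrative datastore path and size:

```
# Create a 100 GB eager-zeroed thick VMDK; the zero fill is written once at
# creation time and is collapsed by the array's inline deduplication
vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/vdi_nfs_ds01/desktop01/desktop01.vmdk
```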
Best Practices |
· Enable NetApp storage deduplication on the write-cache volumes.
· Enable thin provisioning on the write-cache volumes.
· Create VMware VMDK disks as thick eager-zeroed disks for Citrix XenDesktop environments.
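Applied to a write-cache datastore volume, the first two best practices amount to two ONTAP CLI settings; the volume name below is illustrative:

```
# Thin provision the write-cache volume (no space guarantee)
volume modify -vserver Infra -volume vdi_nfs_ds02 -space-guarantee none
# Enable deduplication on the write-cache volume
volume efficiency on -vserver Infra -volume vdi_nfs_ds02
```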
In this solution, we utilized Cisco MDS 9148S switches for Fibre Channel switching. For racking, cabling, and initial setup of the MDS switches, refer to the Quick Start Guide:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/hw/9148/quick/guide/MDS_9148_QSG.pdf
For this solution, Fabric Interconnect A ports 1-4 connected to MDS switch A ports 43-46, and MDS switch A ports 37 and 38 connected to NetApp A300 controller A and controller B ports 0g. Similarly, Fabric Interconnect B ports 1-4 connected to MDS switch B ports 43-46, and MDS switch B ports 37 and 38 connected to NetApp A300 controller A and controller B ports 0h. All ports carry 16 Gb/s FC traffic.
For this design, two separate fabrics were created, each with its own unique VSAN: the Fabric A side was configured for VSAN 400, and the Fabric B side for VSAN 401.
Figure 29 VSAN 400 Configured for Fabric A
Figure 30 VSAN 401 Configured for Fabric B
Figure 31 Fibre Channel Cable Connectivity from NetApp AFF A300 to Cisco MDS 9148S to Cisco 6332-16UP Fabric Interconnects
All connections are 16Gb FC links.
To set the required features on the MDS switches, complete the following steps on both MDS switches:
1. Login as admin user into MDS Switch A.
config terminal
feature npiv
switchname MDS-A
copy running-config startup-config
2. Login as admin user into MDS Switch B. Repeat the steps above on MDS Switch B.
The next steps are to configure the VSANs, ports, and zones on the MDS switches. The commands listed below show how to do this. The entire MDS 9148S FC switch configuration is included in Appendix A of this document.
config terminal
vsan database
vsan 400
vsan 400 interface fc {interface 1/X}
exit
interface fc {interface 1/X}
switchport trunk allowed vsan 400
switchport trunk mode off
port-license acquire
no shutdown
exit
zoneset name AFF-A300_VDI vsan 400
member {ESXi hostname-fc0}
exit
zoneset activate name AFF-A300_VDI vsan 400
zone commit vsan 400
exit
copy running-config startup-config
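As a concrete illustration of the placeholders above, a single-host zone on Fabric A could look like the following sketch. The host WWPN is an illustrative placeholder; the two target members use the 0g FCP LIF WWPNs from Table 37:

```
config terminal
zone name VDI-Host-01-fc0 vsan 400
member pwwn 20:00:00:25:b5:aa:00:00
member pwwn 20:01:00:a0:98:af:bd:e8
member pwwn 20:03:00:a0:98:af:bd:e8
exit
zoneset name AFF-A300_VDI vsan 400
member VDI-Host-01-fc0
exit
zoneset activate name AFF-A300_VDI vsan 400
zone commit vsan 400
copy running-config startup-config
```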
config terminal
vsan database
vsan 401
vsan 401 interface fc {interface 1/X}
exit
interface fc {interface 1/X}
switchport trunk allowed vsan 401
switchport trunk mode off
port-license acquire
no shutdown
exit
zoneset name AFF-A300_VDI vsan 401
member {ESXi hostname-fc1}
exit
zoneset activate name AFF-A300_VDI vsan 401
zone commit vsan 401
exit
copy running-config startup-config
This section provides detailed instructions for installing VMware ESXi 6.5 Update 1 in this environment. After the procedures are completed, two booted ESXi hosts will be provisioned.
Several methods exist for installing ESXi in a VMware environment. These procedures focus on how to use the built-in keyboard, video, mouse (KVM) console and virtual media features in Cisco UCS Manager to map remote installation media to individual servers and connect to their boot logical unit numbers (LUNs).
To download the Cisco Custom Image for ESXi 6.5 Update 1, complete the following steps:
1. Open a web browser and go to the VMware login page.
2. Type your email or customer number and the password and then click Log in.
3. Click the following link:
https://my.vmware.com/web/vmware/details?productId=614&downloadGroup=ESXI65U1
4. Click Download Now.
5. Save it to your destination folder.
To log in to the Cisco UCS environment, complete the following steps:
1. Log in to Cisco UCS Manager.
2. The IP KVM enables the administrator to begin the installation of the operating system (OS) through remote media. It is necessary to log in to the Cisco UCS environment to run the IP KVM.
3. Open a Web browser and enter the IP address for the Cisco UCS cluster address. This step launches the Cisco UCS Manager application.
4. Log in to Cisco UCS Manager by using the admin user name and password.
5. From the main menu, click the Servers tab.
6. Select Servers > Service Profiles > root > VM-Host-01.
7. Right-click VM-Host-01 and select KVM Console.
8. Repeat steps 6 and 7 for all host servers.
To prepare the server for the OS installation, complete the following steps on each ESXi host:
1. In the KVM window, click the Virtual Media tab.
2. Click Add Image.
3. Browse to the ESXi installer ISO image file and click Open.
4. Select the Mapped checkbox to map the newly added image.
5. Click the KVM tab to monitor the server boot.
6. Boot the server by selecting Boot Server and click OK, then click OK again.
To install VMware ESXi to the SAN-bootable LUN of the hosts, complete the following steps on each host:
1. On reboot, the machine detects the presence of the ESXi installation media. Select the ESXi installer from the menu that is displayed.
2. After the installer is finished loading, press Enter to continue with the installation.
3. Read and accept the end-user license agreement (EULA). Press F11 to accept and continue.
4. Select the AFF A300 boot LUN.
5. Verify that the NetApp boot LUN (NetApp LUN C-Mode (naa.600a098038304331395d4), 10 GB) that was previously set up as the installation disk for ESXi is selected, and press Enter to continue with the installation.
6. Select the appropriate keyboard layout and press Enter.
7. Enter and confirm the root password and press Enter.
8. The installer issues a warning that existing partitions will be removed from the volume. Press F11 to continue with the installation.
9. After the installation is complete, clear the Mapped checkbox (located in the Virtual Media tab of the KVM console) to unmap the ESXi installation image.
The ESXi installation image must be unmapped to make sure that the server reboots into ESXi and not into the installer.
10. The Virtual Media window might issue a warning stating that it is preferable to eject the media from the guest. Because the media cannot be ejected and it is read-only, click Yes to unmap the image.
11. From the KVM tab, press Enter to reboot the server.
Adding a management network for each VMware host is necessary for managing the host.
To configure the ESXi host with access to the management network, complete the following steps:
1. After the server has finished rebooting, press F2 to customize the system.
2. Log in as root and enter the corresponding password.
3. Select the Configure the Management Network option and press Enter.
4. Select the VLAN (Optional) option and press Enter.
5. Enter the VLAN in-band management ID and press Enter.
6. From the Configure Management Network menu, select IP Configuration and press Enter.
7. Select the Set Static IP Address and Network Configuration option by using the space bar.
8. Enter the IP address for managing the first ESXi host.
9. Enter the subnet mask for the first ESXi host.
10. Enter the default gateway for the first ESXi host.
11. Press Enter to accept the changes to the IP configuration.
12. Select the IPv6 Configuration option and press Enter.
13. Using the spacebar, unselect Enable IPv6 (restart required) and press Enter.
14. Select the DNS Configuration option and press Enter.
Since the IP address is assigned manually, the DNS information must also be entered manually.
15. Enter the IP address of the primary DNS server.
16. Optional: Enter the IP address of the secondary DNS server.
17. Enter the fully qualified domain name (FQDN) for the first ESXi host.
18. Press Enter to accept the changes to the DNS configuration.
19. Press Esc to exit the Configure Management Network submenu.
20. Press Y to confirm the changes and return to the main menu.
21. The ESXi host reboots. After reboot, press F2 and log back in as root.
22. Select Test Management Network to verify that the management network is set up correctly and press Enter.
23. Press Enter to run the test.
24. Press Enter to exit the window.
25. Press Esc to log out of the VMware console.
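The DCUI steps above also have ESXi shell equivalents, which can be useful for scripted builds. The following is a hedged sketch; the VLAN ID and addresses are illustrative, and vmk0 is assumed to be the default management VMkernel interface:

```
# Tag the management port group with the in-band management VLAN
esxcli network vswitch standard portgroup set -p "Management Network" -v 70
# Assign a static IPv4 address to the management VMkernel interface
esxcli network ip interface ipv4 set -i vmk0 -t static -I 10.10.70.11 -N 255.255.255.0
# Configure the default gateway and a DNS server
esxcli network ip route ipv4 add -n default -g 10.10.70.1
esxcli network ip dns server add -s 10.10.70.5
```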
To download the VMware vSphere Client, complete the following steps:
1. Open a web browser on the management workstation and navigate to the VM-Host-01 management IP address.
2. Download and install the vSphere Client.
This application is downloaded from the VMware website and Internet access is required on the management workstation.
To download VMware vSphere CLI 6.5, complete the following steps:
1. Click the following link: https://my.vmware.com/en/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vsphere/6_5
2. Select your OS and click Download.
3. Save it to your destination folder.
4. Run the VMware-vSphere-CLI.exe
5. Click Next.
6. Accept the terms for the license and click Next.
7. Click Next on the Destination Folder screen.
8. Click Install.
9. Click Finish.
Install VMware vSphere CLI 6.5 on the management workstation.
Log in to VMware ESXi Hosts by Using VMware vSphere Client
To log in to the VM-Host-01 ESXi host by using the VMware vSphere Client, complete the following steps:
1. Open the recently downloaded VMware vSphere Client and enter the IP address of VM-Host-01 as the host you are trying to connect to: <<var_vm_host_01_ip>>.
2. Enter root for the user name.
3. Enter the root password.
4. Click Login to connect.
The Cisco VIC drivers for the VMware ESXi hypervisor may require an update to match the current Cisco Hardware and Software Interoperability Matrix.
Figure 32 Cisco UCS Hardware and Software Interoperability Matrix Recommendation for vSphere 6.5 U1 and Cisco UCS B200 M5 on Cisco UCS Manager v3.2(3)
1. Download the recommended Cisco Virtual Interface Card (VIC) eNIC and fNIC drivers:
The nenic driver version 1.0.16.0 and the fnic driver version 1.6.0.37 were used in this configuration.
2. Open a Web browser on the management workstation and navigate to www.cisco.com.
3. Download the Cisco eNIC and fNIC driver bundle.
4. Open the nenic driver bundle. This bundle includes the VMware offline bundle that will be uploaded to the ESXi hosts.
5. Open the fnic driver bundle. This bundle also includes the VMware offline bundle that will be uploaded to the ESXi hosts.
6. Save the location of these driver bundles for uploading to ESXi in the next section.
Go to www.cisco.com for the latest ISO images of Cisco UCS-related drivers.
To install VMware VIC Drivers on the ESXi host servers, complete the following steps:
1. From the vSphere Client, select the host in the inventory.
2. Click the Summary tab to view the environment summary.
3. From Resources > Storage, right-click datastore1 and select Browse Datastore.
4. Click the fourth button and select Upload File.
5. Navigate to the saved location for each downloaded VIC driver and select:
a. VMware ESXi 6.5 NIC nenic 1.0.16.0 Driver for Cisco nenic
b. VMware ESXi 6.5 fnic 1.6.0.37 FC Driver for Cisco
6. Click Open on each and click Yes to upload the file to datastore1.
7. Click the fourth button and select Upload File.
8. Repeat the process until the files have been uploaded to all ESXi hosts.
9. From the management workstation, open the VMware vSphere Remote CLI that was previously installed.
10. At the command prompt, run the following commands to account for each host:
To get the host thumbprint, type the command without the --thumbprint option, then copy and paste the thumbprint into the command.
esxcli -s <<var_vm_host_ip>> -u root -p <<var_password>> --thumbprint <host_thumbprint> software vib update -d /vmfs/volumes/datastore1/VMW-ESX-6.5.0-nenic-1.0.16.0-offline_bundle-7643104.zip
esxcli -s <<var_vm_host_ip>> -u root -p <<var_password>> --thumbprint <host_thumbprint> software vib update -d /vmfs/volumes/datastore1/fnic_driver_1.6.0.37-offline_bundle-7765239.zip
11. Back in the vSphere Client for each host, right click the host and select Reboot.
12. Click Yes and OK to reboot the host.
13. Log back into each host with vSphere Client.
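The note above says to obtain each host's thumbprint by first running the command without the --thumbprint option. Another way to retrieve it is sketched below, under the assumption that openssl is available on the management workstation and the host presents its certificate on port 443; the helper names are hypothetical.

```shell
# strip_fp: pull the hex thumbprint out of openssl's "SHA1 Fingerprint=..." line
strip_fp() { sed 's/^.*Fingerprint=//'; }

# get_thumbprint <host>: fetch the ESXi host certificate and print its SHA1
# thumbprint in the colon-separated form that esxcli expects
get_thumbprint() {
  echo | openssl s_client -connect "$1:443" 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha1 \
    | strip_fp
}
```

The result can then be substituted directly, for example: `--thumbprint "$(get_thumbprint <<var_vm_host_ip>>)"`.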
Verify the installed driver versions by entering vmkload_mod -s nenic and vmkload_mod -s fnic at the command prompt.
Figure 33 Verify the neNIC Driver Version
Figure 34 Verify the fNIC Driver Version
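The installed VIB versions can also be confirmed remotely from the vSphere CLI. This is a read-only sketch using the same placeholder conventions as the install commands:

```shell
# Sketch: confirm the installed driver VIB versions from the vSphere CLI
esxcli -s <<var_vm_host_ip>> -u root -p <<var_password>> software vib get -n nenic
esxcli -s <<var_vm_host_ip>> -u root -p <<var_password>> software vib get -n fnic
# The Version fields should read 1.0.16.0 (nenic) and 1.6.0.37 (fnic)
```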
Log in to the VM-Host-01 ESXi host by using the VMware vSphere Client and complete the following steps:
1. Open the recently downloaded VMware vSphere Client and enter the IP address of VM-Host-01 as the host you are trying to connect to.
2. Enter root for the user name.
3. Enter the root password.
4. Click Login to connect.
To build the VMware vCenter VM, complete the following steps:
1. From the vSphere 6 download page on the VMware Web site, download the vCenter ISO file for the vCenter Server appliance onto your system.
2. Mount the vSphere ISO file via Windows Explorer, navigate to the folder vcsa-ui-installer/win32, and click the installer file to start the VCSA installer.
3. Click Install.
4. Click Next.
5. Follow the onscreen prompts. Accept EULA.
6. Select Install vCenter Server with an Embedded Platform Services Controller (unless your environment already has a PSC).
7. Click Next.
8. Provide Host IP or FQDN and User Name, Password Credentials of the Host to connect.
9. Click Next to continue.
10. Click Yes to accept Certificate Warning.
11. Provide a VM name and a root password for the vCenter appliance.
12. Click Next to continue.
13. Select the proper appliance size for your deployment. In our study, Large was selected.
14. Click Next to continue
15. Select appropriate Data store. Check Enable Thin Disk Mode.
16. Click Next to continue.
17. Provide Network Settings for the appliance.
It is important to note that you should create a DNS A record for the appliance prior to running the installation. The services will fail to start up, and the installation will fail, if the appliance name cannot be resolved.
18. Click Next.
19. Review Settings and click Finish.
20. When the deployment has completed, click Continue to proceed with setup.
21. Click Next to start setting up the vCenter Server Appliance with an Embedded PSC.
22. Configure NTP time synchronization for the appliance.
23. Create a new SSO domain (unless your environment already has an SSO domain; multiple SSO domains can co-exist).
24. Provide Single Sign On Password and Site Name Credentials.
25. Click Next.
26. Configure participation in the VMware Customer Experience Improvement Program (CEIP).
27. Click Next.
28. Review Settings and click Finish.
29. Click Close upon a successful set up.
30. Log in to the vSphere Web Client (for example, https://vcenter65.vdilab.local/vsphere-client) using the Single Sign-On username and password created during the vCenter installation.
31. Click Create Datacenter in the center pane.
32. Type VDI-DC as the Datacenter name.
33. Click OK to continue.
34. Right-click Datacenters > VDI-DC in the list in the center pane, then click New Cluster.
35. Name the cluster Infrastructure.
36. Check the box to turn on DRS. Leave the default values.
Set DRS to Manual for clusters hosting non-persistent desktops.
37. Check the box to turn on vSphere HA. Leave the default values.
38. Click OK to create the new cluster.
If mixing Cisco UCS B200 M5 servers with other Cisco UCS server generations within a vCenter cluster, it is necessary to enable VMware Enhanced vMotion Compatibility (EVC) mode. For more information about setting up EVC mode, refer to Enhanced vMotion Compatibility (EVC) Processor Support.
39. Right-click Infrastructure in the left pane.
40. Select Add Host.
41. Type the host IP address and click Next.
42. Type root as the user name and the root password as the password. Click Next to continue.
43. Click Yes to accept the certificate.
44. Review the host details and click Next to continue.
45. Assign a license and click Next to continue.
46. Click Next to continue.
47. Click Next to continue.
48. Review the configuration parameters then click Finish to add the host.
49. Repeat this for the other hosts and clusters.
50. When completed, the vCenter cluster configuration comprises the following clusters, including a cluster to manage the workload launcher hosts:
To configure Network Time Protocol (NTP) on the ESXi hosts, complete the following steps on each host:
1. From each vSphere Client, select the host in the inventory.
2. Click the Configuration tab to enable configurations.
3. Click Time Configuration in the Software pane.
4. Click Properties at the upper right side of the window.
5. At the bottom of the Time Configuration dialog box, click Options.
6. In the NTP Daemon Options dialog box, complete the following steps:
- Click General in the left pane and select Start and stop with host.
- Click NTP Settings in the left pane and click Add.
7. In the Add NTP Server dialog box, enter <<var_global_ntp_server_ip>> as the IP address of the NTP server and click OK.
8. In the NTP Daemon Options dialog box, select the Restart NTP Service to Apply Changes checkbox and click OK.
9. In the Time Configuration dialog box, complete the following steps:
- Select the NTP Client Enabled checkbox and click OK.
- Verify that the clock is now set to approximately the correct time.
The NTP server time may vary slightly from the host time.
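The same NTP configuration can also be scripted from the ESXi shell. This is a rough sketch that assumes the ESXi 6.5 service layout (/etc/ntp.conf and the ntpd init script); verify the paths on your hosts before use.

```shell
# Sketch (ESXi shell): point ntpd at the NTP server and tie it to the host
echo "server <<var_global_ntp_server_ip>>" >> /etc/ntp.conf
chkconfig ntpd on          # start and stop with host
/etc/init.d/ntpd restart   # restart the NTP service to apply the change
```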
ESXi hosts booted from SAN need to be configured to do core dumps to the ESXi Dump Collector that is part of vCenter. The Dump Collector is not enabled by default on the vCenter Appliance. To setup the ESXi Dump Collector, complete the following steps:
1. In the vSphere web client, select Home > Administration.
2. In the left hand pane, click System Configuration.
3. In the left hand pane, click Services > VMware vSphere ESXi Dump Collector.
4. In the Actions menu, choose Start.
5. In the Actions menu, click Edit Startup Type.
6. Select Automatic.
7. Click OK.
8. On the Management Workstation, open the VMware vSphere CLI command prompt.
9. Set each SAN-booted ESXi Host to coredump to the ESXi Dump Collector by running the following commands:
esxcli -s <<var_vm_host_ip>> -u root -p <<var_password>> --thumbprint <host_thumbprint> system coredump network set --interface-name vmk0 --server-ipv4 <<var_vcenter_server_ip>> --server-port 6500
To get the host thumbprint, type the command without the --thumbprint option, then copy and paste the thumbprint into the command.
esxcli -s <<var_vm_host_ip>> -u root -p <<var_password>> --thumbprint <host_thumbprint> system coredump network set --enable true
esxcli -s <<var_vm_host_ip>> -u root -p <<var_password>> --thumbprint <host_thumbprint> system coredump network check
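The three esxcli commands above must be run once per SAN-booted host. A small wrapper loop can reduce typing; the host list and environment variables below are illustrative placeholders, and the function defaults to a dry run (printing the commands) until RUN is cleared.

```shell
# push_coredump_config: apply the ESXi Dump Collector settings to every host
# in $HOSTS. Dry-run by default; set RUN="" to execute for real.
# ESX_PASS, THUMB, and VCENTER_IP are assumed environment variables.
RUN=${RUN:-echo}
HOSTS=${HOSTS:-"10.10.60.11 10.10.60.12"}   # hypothetical management IPs

push_coredump_config() {
  for H in $HOSTS; do
    $RUN esxcli -s "$H" -u root -p "$ESX_PASS" --thumbprint "$THUMB" \
      system coredump network set --interface-name vmk0 \
      --server-ipv4 "$VCENTER_IP" --server-port 6500
    $RUN esxcli -s "$H" -u root -p "$ESX_PASS" --thumbprint "$THUMB" \
      system coredump network set --enable true
    $RUN esxcli -s "$H" -u root -p "$ESX_PASS" --thumbprint "$THUMB" \
      system coredump network check
  done
}
```

Run it once with the default dry-run to review the generated commands, then again with RUN="" to apply them.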
This section provides detailed procedures for installing the VMware vDS on the FlexPod ESXi Desktop Workload Hosts.
In the Cisco UCS Configuration section of this document one set of vNICs (A and B) was setup. The vmnic ports associated with the A and B vNICs will be migrated to VMware vDS in this procedure. The critical infrastructure VLAN interfaces and vMotion interfaces will be placed on the vDS.
To configure the vDS from VMware vSphere Web Client, complete the following steps:
1. After logging into the VMware vSphere Web Client, select Networking under the Home tab.
2. Right-click the VDI-DC datacenter and select Distributed Switch > New Distributed Switch.
3. Give the Distributed Switch a name VDI-DVS and click Next.
4. Make sure Distributed switch: 6.5.0 is selected and click Next.
5. Change the Number of uplinks to 2. Leave Network I/O Control set to Enabled (or set it to Disabled if you do not plan to use it). Do not check Create a default port group. Click Next.
6. Review the information and click Finish to complete creating the vDS.
7. On the left, expand the VDI-DC datacenter and the newly created vDS. Select the newly created vDS.
8. In the center pane, select the New Distributed Port Group icon. Configure the following port groups:
9. Edit the newly created port groups: go to Teaming and failover and, using the arrows, place Uplink 1 and Uplink 2 in the list of Active uplinks.
10. Select the vDS on the left. Click the Edit distributed switch settings icon on the right.
11. On the left in the Edit Settings window, select Advanced.
12. Change the MTU to 9000. Click OK.
After the vSphere VDI-DVS distributed switch is created, add hosts from each vSphere cluster and their physical adapters to create the FlexPod virtual network, using the following steps:
1. Right-click VDI-DVS distributed switch in the vSphere Web Client and select Add and Manage Hosts.
2. On the Select tasks page, select Add hosts.
3. Click Next.
4. On the Select hosts page, click New hosts. The Select new hosts dialog box opens. Select a host from the list and click OK.
5. Click Next.
6. Select Manage physical network adapters and Manage VMkernel adapters.
7. On the Manage physical network adapters page, configure physical NICs on the distributed switch.
- From the On other switches/unclaimed list, select a physical NIC.
If you select physical NICs that are already connected to other switches, they are migrated to the current distributed switch.
- Click Assign uplink.
- Select an uplink and click OK.
8. Click Next.
9. On the Manage VMkernel network adapters page, configure VMkernel adapters.
- Select Host management VMkernel adapter and click Assign port group.
- Select a DV-Mgmt distributed port group and click OK.
10. Repeat the process for the vMotion VMkernel adapter. Use appropriate distributed port group (DV-vMotion).
11. Click Next.
12. Review the impacted services as well as the level of impact.
13. Click Next.
14. On the Ready to complete page, review the settings you selected and click Finish.
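With the hosts added to the vDS, it can be worth confirming from each host that the switch settings (notably the 9000 MTU configured earlier) are in effect. A read-only check, assuming the standard esxcli namespace and the document's placeholder conventions:

```shell
# Sketch: list the distributed switches visible to a host and their settings
esxcli -s <<var_vm_host_ip>> -u root -p <<var_password>> \
  network vswitch dvs vmware list
# For VDI-DVS, the MTU field should report 9000 and both vmnic uplinks
# should appear in the Uplinks list
```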
This section details how to configure the software infrastructure components that comprise this solution.
Install and configure the infrastructure virtual machines by following the process provided in Table 43.
Table 43 Test Infrastructure Virtual Machine Configuration
Configuration | Citrix XenDesktop Controllers Virtual Machines | Citrix Provisioning Servers Virtual Machines |
Operating system | Microsoft Windows Server 2016 | Microsoft Windows Server 2016 |
Virtual CPU amount | 6 | 8 |
Memory amount | 8 GB | 16 GB |
Network | VMXNET3 Infra-Mgmt | VMXNET3 VDI |
Disk-1 (OS) size | 40 GB | 40 GB |
Configuration | Microsoft Active Directory DCs Virtual Machines | vCenter Server Appliance Virtual Machine |
Operating system | Microsoft Windows Server 2016 | VCSA (VMware Photon OS) |
Virtual CPU amount | 2 | 16 |
Memory amount | 4 GB | 32 GB |
Network | VMXNET3 Infra-Mgmt | VMXNET3 In-Band-Mgmt |
Disk size | 40 GB | 599 GB (across 12 VMDKs) |
Configuration | Microsoft SQL Server Virtual Machine | Citrix StoreFront Controller Virtual Machine |
Operating system | Microsoft Windows Server 2016 Microsoft SQL Server 2012 SP1 | Microsoft Windows Server 2016 |
Virtual CPU amount | 6 | 4 |
Memory amount | 24 GB | 8 GB |
Network | VMXNET3 Infra-Mgmt | VMXNET3 Infra-Mgmt |
Disk-1 (OS) size | 40 GB | 40 GB |
Disk-2 size | 100 GB SQL Databases\Logs | - |
This section provides guidance around creating the golden (or master) images for the environment. VMs for the master targets must first be installed with the software components needed to build the golden images. For this CVD, the images contain the basics needed to run the Login VSI workload.
To prepare the master VMs for the Hosted Virtual Desktops (HVDs) and Hosted Shared Desktops (HSDs), there are three major steps: installing the PVS Target Device x64 software, installing the Virtual Delivery Agents (VDAs), and installing application software.
The master target Hosted Virtual Desktop (HVD) and Hosted Shared Desktop (HSD) VMs were configured as detailed in Table 44:
Table 44 VDI and RDS Configurations
Configuration | HVD Virtual Machines | HSD Virtual Machines |
Operating system | Microsoft Windows 10 64-bit | Microsoft Windows Server 2016 |
Virtual CPU amount | 2 | 9 |
Memory amount | 2 GB reserve for all guest memory | 24 GB reserve for all guest memory |
Network | VMXNET3 DV-VDI | VMXNET3 DV-VDI |
Citrix PVS vDisk size | 24 GB (dynamic) | 40 GB (dynamic) |
Full clone disk size | 100 GB | - |
Citrix PVS write cache Disk size | 6 GB | 30 GB |
Citrix PVS write cache RAM cache size | 64 MB | 1024 MB |
Additional software used for testing | Microsoft Office 2016 Login VSI 4.1.25 (Knowledge Worker Workload) | Microsoft Office 2016 Login VSI 4.1.25 (Knowledge Worker Workload) |
This section details the installation of the core components of the XenDesktop/XenApp 7.15 system. This CVD installs two XenDesktop Delivery Controllers to support hosted shared desktops (HSD), non-persistent hosted virtual desktops (HVD), and persistent hosted virtual desktops (HVD).
Citrix recommends that you use Secure HTTP (HTTPS) and a digital certificate to protect vSphere communications. Citrix recommends that you use a digital certificate issued by a certificate authority (CA) according to your organization's security policy. Otherwise, if security policy allows, use the VMware-installed self-signed certificate.
To install vCenter Server self-signed Certificate, complete the following steps:
1. Add the FQDN of the computer running vCenter Server to the hosts file on that server, located at SystemRoot/WINDOWS/system32/Drivers/etc/. This step is required only if the FQDN of the computer running vCenter Server is not already present in DNS.
2. Open Internet Explorer and enter the address of the computer running vCenter Server (for example, https://FQDN as the URL).
3. Accept the security warnings.
4. Click the Certificate Error in the Security Status bar and select View certificates.
5. Click Install certificate, select Local Machine, and then click Next.
6. Select Place all certificates in the following store and then click Browse.
7. Select Show physical stores.
8. Select Trusted People.
9. Click Next and then click Finish.
10. Perform the above steps on all Delivery Controllers and Provisioning Servers.
The process of installing the XenDesktop Delivery Controller also installs other key XenDesktop software components, including Studio, which is used to create and manage infrastructure components, and Director, which is used to monitor performance and troubleshoot problems.
Dedicated StoreFront and License servers should be implemented for large scale deployments.
To install the Citrix License Server, complete the following steps:
1. To begin the installation, connect to the first Citrix License server and launch the installer from the Citrix XenDesktop 7.15 ISO.
2. Click Start.
3. Click “Extend Deployment – Citrix License Server.”
4. Read the Citrix License Agreement.
5. If acceptable, indicate your acceptance of the license by selecting the “I have read, understand, and accept the terms of the license agreement” radio button.
6. Click Next.
7. Click Next.
8. Select the default ports and automatically configured firewall rules.
9. Click Next.
10. Click Install.
11. Click Finish to complete installation.
To install the Citrix Licenses, complete the following steps:
1. Copy the license files to the default location (C:\Program Files (x86)\Citrix\Licensing\MyFiles) on the license server.
2. Restart the server or Citrix licensing services so that the licenses are activated.
3. Run the application Citrix License Administration Console.
4. Confirm that the license files have been read and enabled correctly.
1. To begin the installation, connect to the first XenDesktop server and launch the installer from the Citrix XenDesktop 7.15 ISO.
2. Click Start.
The installation wizard presents a menu with three subsections.
3. Click “Get Started - Delivery Controller.”
4. Read the Citrix License Agreement.
5. If acceptable, indicate your acceptance of the license by selecting the “I have read, understand, and accept the terms of the license agreement” radio button.
6. Click Next.
7. Select the components to be installed on the first Delivery Controller Server:
- Delivery Controller
- Studio
- Director
8. Click Next.
9. Since a dedicated SQL Server will be used to store the database, leave “Install Microsoft SQL Server 2012 SP1 Express” unchecked.
10. Click Next.
11. Select the default ports and automatically configured firewall rules.
12. Click Next.
13. Click Install to begin the installation.
14. (Optional) Configure Smart Tools/Call Home participation.
15. Click Next.
16. Click Finish to complete the installation.
17. (Optional) Check Launch Studio to launch Citrix Studio Console.
After the first controller is completely configured and the Site is operational, you can add additional controllers. In this CVD, we created two Delivery Controllers.
To configure additional XenDesktop controllers, complete the following steps:
1. To begin the installation of the second Delivery Controller, connect to the second XenDesktop server and launch the installer from the Citrix XenDesktop 7.15 ISO.
2. Click Start.
3. Click Delivery Controller.
4. Repeat the same steps used to install the first Delivery Controller, including the step of importing an SSL certificate for HTTPS between the controller and vSphere.
5. Review the Summary configuration.
6. Click Install.
7. (Optional) Configure Smart Tools/Call Home participation.
8. Click Next.
9. Verify the components installed successfully.
10. Click Finish.
Citrix Studio is a management console that allows you to create and manage infrastructure and resources to deliver desktops and applications. Replacing Desktop Studio from earlier releases, it provides wizards to set up your environment, create workloads to host applications and desktops, and assign applications and desktops to users.
Citrix Studio launches automatically after the XenDesktop Delivery Controller installation, or if necessary, it can be launched manually. Studio is used to create a Site, which is the core XenDesktop 7.15 environment consisting of the Delivery Controller and the Database.
To configure XenDesktop, complete the following steps:
1. From Citrix Studio, click the Deliver applications and desktops to your users button.
2. Select the “An empty, unconfigured Site” radio button.
3. Enter a site name.
4. Click Next.
5. Provide the Database Server Locations for each data type and click Next.
For an AlwaysOn Availability Group, use the group’s listener DNS name.
6. Click Select to specify additional controllers.
7. Click Add, specify controller FQDN, and click OK.
8. Click Save.
9. Click Next.
10. Provide the FQDN of the license server.
11. Click Connect to validate and retrieve any licenses from the server.
If no licenses are available, you can use the 30-day free trial or activate a license file.
12. Select the appropriate product edition using the license radio button.
13. Click Next.
14. Verify information on the Summary page.
15. Click Finish.
1. From Configuration > Hosting in Studio click Add Connection and Resources in the right pane.
2. Select the Connection type of VMware vSphere®.
3. Enter the FQDN of the vCenter server (in Server_FQDN/sdk format).
4. Enter the username (in domain\username format) for the vSphere account.
5. Provide the password for the vSphere account.
6. Provide a connection name.
7. Check the Studio tools radio button, which is required to support desktop provisioning tasks through this connection.
8. Click Next.
9. Accept the certificate and click OK to trust the hypervisor connection.
10. Select the cluster that will be used by this connection.
11. Check the Use storage shared by hypervisors radio button.
12. Click Next.
13. Select the storage to be used by this connection; use all of the NFS datastores provisioned for desktops.
14. Click Next.
15. Select the network to be used by this connection.
16. Click Next.
17. Review Site configuration Summary and click Finish.
To add resources to the additional vCenter clusters, complete the following steps:
1. From Configuration > Hosting in Studio click Add Connection and Resources in the right pane.
2. Select Use an existing Connection and use the connection previously created for the FlexPod environment.
3. Click Next.
4. Select the cluster you are adding to this connection.
5. Check the Use storage shared by hypervisors radio button.
6. Click Next.
7. Select the storage to be used by this connection; use all of the NFS datastores provisioned for desktops.
8. Click Next.
9. Select the network to be used by this connection.
10. Click Next.
11. Review the Site configuration Summary and click Finish.
12. Repeat these steps to add all additional clusters (Figure 35).
Figure 35 FlexPod Hosting Connection in Studio with Three Clusters
1. Connect to the XenDesktop server and open Citrix Studio Management console.
2. From the Configuration menu, right-click Administrator and select Create Administrator from the drop-down list.
3. Select/Create appropriate scope and click Next.
4. Choose an appropriate Role.
5. Review the Summary, check Enable administrator and click Finish.
Citrix StoreFront stores aggregate desktops and applications from XenDesktop sites, making resources readily available to users. In this CVD, we created two StoreFront servers on dedicated virtual machines.
1. To begin the installation of the StoreFront, connect to the first StoreFront server and launch the installer from the Citrix XenDesktop 7.15 ISO.
2. Click Start.
3. Click “Extend Deployment – Citrix StoreFront.”
4. If acceptable, indicate your acceptance of the license by selecting the “I have read, understand, and accept the terms of the license agreement” radio button.
5. Click Next.
6. Click Next.
7. Select the default ports and automatically configured firewall rules.
8. Click Next.
9. Click Install.
10. (Optional) Click “I want to participate in Call Home.”
11. Click Next.
12. Check “Open the StoreFront Management Console."
13. Click Finish.
14. Click Create a new deployment.
15. Specify the URL of the StoreFront server.
For a multiple-server deployment, enter the load-balanced URL for the server group in the Base URL box.
16. Click Next.
17. Specify a name for your store.
18. Click Next.
19. Add the required Delivery Controllers to the store.
20. Click Next.
21. Specify how connecting users can access the resources. In this environment, only local users on the internal network are able to access the store.
22. Click Next.
23. On the “Authentication Methods” page, select the methods your users will use to authenticate to the store. The following methods were configured in this deployment:
- Username and password: Users enter their credentials and are authenticated when they access their stores.
- Domain passthrough: Users authenticate to their domain-joined Windows computers and their credentials are used to log them on automatically when they access their stores.
24. Click Next.
25. Configure the XenApp Service URL for users who use PNAgent to access the applications and desktops.
26. Click Create.
27. After creating the store, click Finish.
After the first StoreFront server is completely configured and the Store is operational, you can add additional servers.
To configure additional StoreFront server, complete the following steps:
1. To begin the installation of the second StoreFront, connect to the second StoreFront server and launch the installer from the Citrix XenDesktop 7.15 ISO.
2. Click Start.
3. Click “Extend Deployment – Citrix StoreFront.”
4. Repeat the same steps used to install the first StoreFront.
5. Review the Summary configuration.
6. Click Install.
7. (Optional) Click “I want to participate in Call Home.”
8. Click Next.
9. (Optional) check “Open the StoreFront Management Console."
10. Click Finish.
To configure the second StoreFront server (if used), complete the following steps:
1. From the StoreFront Console on the second server select “Join existing server group.”
2. In the Join Server Group dialog, enter the name of the first Storefront server.
3. Before the additional StoreFront server can join the server group, you must connect to the first Storefront server, add the second server, and obtain the required authorization information.
4. Connect to the first StoreFront server.
5. Using the StoreFront menu on the left, you can scroll through the StoreFront management options.
6. Select Server Group from the menu.
7. To add the second server and generate the authorization information that allows the additional StoreFront server to join the server group, select Add Server from Actions pane.
8. Copy the Authorization code from the Add Server dialog.
9. Connect to the second Storefront server and paste the Authorization code into the Join Server Group dialog.
10. Click Join.
11. A message appears when the second server has joined successfully.
12. Click OK.
13. Verify Server Group status on the first StoreFront Server.
The second StoreFront is now in the Server Group.
In most implementations, there is a single vDisk providing the standard image for multiple target devices. Thousands of target devices can use a single vDisk shared across multiple Provisioning Services (PVS) servers in the same farm, simplifying virtual desktop management. This section describes the installation and configuration tasks required to create a PVS implementation.
The PVS server can have many stored vDisks, and each vDisk can be several gigabytes in size. Your streaming performance and manageability can be improved using a RAID array, SAN, or NAS. PVS software and hardware requirements are available in the Provisioning Services 7.15 document.
Set the following Scope Options on the DHCP server hosting the PVS target machines (for example, VDI, RDS).
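As an illustration, the PVS boot options can be set on a Windows DHCP server from an elevated command prompt. The scope, TFTP virtual IP, and bootstrap file name below are hypothetical examples (ARDBP32.BIN is the default PVS bootstrap name); verify the netsh syntax and your scope layout before use.

```shell
rem Sketch: set DHCP option 66 (boot server) and 67 (bootfile) on the PVS scope
rem 10.10.62.0 (scope) and 10.10.61.20 (TFTP VIP) are placeholder values
netsh dhcp server scope 10.10.62.0 set optionvalue 066 STRING "10.10.61.20"
netsh dhcp server scope 10.10.62.0 set optionvalue 067 STRING "ARDBP32.BIN"
```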
The Boot Server IP was configured for load balancing by NetScaler VPX to support high availability of the TFTP service.
To configure TFTP load balancing, complete the following steps:
1. Create Virtual IP for TFTP Load Balancing.
2. Configure servers that are running TFTP (your Provisioning Servers).
3. Define TFTP service for the servers (Monitor used: udp-ecv).
4. Configure TFTP for load balancing.
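On the NetScaler VPX side, the four steps above roughly correspond to the CLI sketch below. All entity names and IP addresses are hypothetical, and the exact monitor and TFTP service-type syntax should be checked against your NetScaler release.

```shell
# Sketch (NetScaler CLI): load balance TFTP across two PVS servers
add lb monitor mon_tftp UDP-ECV                  # UDP monitor for the TFTP service
add service svc_pvs1_tftp 10.10.61.21 TFTP 69    # PVS server 1 (placeholder IP)
add service svc_pvs2_tftp 10.10.61.22 TFTP 69    # PVS server 2 (placeholder IP)
bind service svc_pvs1_tftp -monitorName mon_tftp
bind service svc_pvs2_tftp -monitorName mon_tftp
add lb vserver vs_pvs_tftp TFTP 10.10.61.20 69   # virtual IP used for DHCP option 66
bind lb vserver vs_pvs_tftp svc_pvs1_tftp
bind lb vserver vs_pvs_tftp svc_pvs2_tftp
```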
As a Citrix best practice cited in this CTX article, apply the following registry setting to both the PVS servers and target machines:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters\
Key: "DisableTaskOffload" (dword)
Value: "1"
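In script form, the setting above can be applied from an elevated Windows command prompt on each PVS server and target device. This is a sketch of the standard reg.exe syntax:

```shell
rem Apply DisableTaskOffload=1 (run elevated on PVS servers and target devices)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters" /v DisableTaskOffload /t REG_DWORD /d 1 /f
```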
Only one MS SQL database is associated with a farm. You can choose to install the Provisioning Services database software on an existing SQL database, if that machine can communicate with all Provisioning Servers within the farm, or with a new SQL Express database machine, created using the SQL Express software that is free from Microsoft.
The following databases are supported: Microsoft SQL Server 2008 SP3 through 2016 (x86, x64, and Express editions). Microsoft SQL 2016 was installed separately for this CVD.
High availability will be available for the databases once they are added to the SQL AlwaysOn Availability Group (see CTX201203).
To install and configure Citrix Provisioning Service 7.15, complete the following steps:
1. Insert the Citrix Provisioning Services 7.15 ISO and let AutoRun launch the installer.
2. Click the Console Installation button.
3. Click Install to install the required prerequisites.
4. Click Next to start console installation.
5. Read the Citrix License Agreement.
6. If acceptable, select the radio button labeled “I accept the terms in the license agreement.”
7. Click Next.
8. Optionally provide User Name and Organization.
9. Click Next.
10. Accept the default path.
11. Click Next.
12. Click Install to start the console installation.
13. Click Finish after successful installation.
14. From the main installation screen, select Server Installation.
15. The installation wizard will check to resolve dependencies and then begin the PVS server installation process.
16. Click Install on the prerequisites dialog.
17. Click Yes when prompted to install the SQL Native Client.
18. Click Next when the Installation wizard starts.
19. Review the license agreement terms.
20. If acceptable, select the radio button labeled “I accept the terms in the license agreement.”
21. Click Next.
22. Provide User Name, and Organization information. Select who will see the application.
23. Click Next.
24. Accept the default installation location.
25. Click Next.
26. Click Install to begin the installation.
27. Click Finish when the install is complete.
28. The PVS Configuration Wizard starts automatically.
29. Click Next.
30. Since the PVS server is not the DHCP server for the environment, select the radio button labeled, “The service that runs on another computer.”
31. Click Next.
32. Since DHCP boot options 66 and 67 are used for TFTP services, select the radio button labeled, “The service that runs on another computer.”
33. Click Next.
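The DHCP side of this step can be sketched as follows, assuming a Windows DHCP server managed with the DhcpServer PowerShell module; the scope, server address, and bootstrap filename below are placeholders (ARDBP32.BIN is the standard PVS bootstrap file).

```shell
# Hypothetical example: set PXE boot options 66/67 on a Windows DHCP server
# (run in an elevated PowerShell session; scope and addresses are placeholders).
Set-DhcpServerv4OptionValue -ScopeId 10.10.60.0 -OptionId 66 -Value "10.10.60.11"   # Boot Server Host Name (PVS TFTP server)
Set-DhcpServerv4OptionValue -ScopeId 10.10.60.0 -OptionId 67 -Value "ARDBP32.BIN"   # Bootfile Name (PVS bootstrap)
```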
34. Since this is the first server in the farm, select the radio button labeled, “Create farm.”
35. Click Next.
36. Enter the FQDN of the SQL server.
37. Click Next.
38. Provide the Database, Farm, Site, and Collection names.
39. Click Next.
40. Provide the vDisk Store details.
41. Click Next.
For large-scale PVS environments, it is recommended to create the share using CIFS/SMB3 support on an enterprise-ready file server.
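One way to publish such a vDisk store share is sketched below, assuming a Windows file server; the share name, path, and service account are placeholders for your environment.

```shell
# Hypothetical sketch: publish the vDisk store as an SMB share on a
# Windows file server (SMB3 is negotiated by default on Server 2012 and later).
New-SmbShare -Name "vDiskStore" -Path "E:\vDiskStore" -FullAccess "DOMAIN\svc-pvs"
```

Grant the PVS service account full access at both the share and NTFS levels so the stream and SOAP services can read and write vDisk files.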
42. Provide the FQDN of the license server.
43. Optionally, provide a port number if changed on the license server.
44. Click Next.
If an Active Directory service account is not already set up for the PVS servers, create that account prior to clicking Next on this dialog.
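Creating that service account can be sketched as follows, assuming the ActiveDirectory PowerShell module is available on a domain-joined management host; the account name is a placeholder.

```shell
# Hypothetical sketch: create a PVS service account in Active Directory
# (account name is a placeholder; run with domain admin rights).
New-ADUser -Name "svc-pvs" -SamAccountName "svc-pvs" `
  -AccountPassword (Read-Host -AsSecureString "Enter password") `
  -Enabled $true -PasswordNeverExpires $false
```

Leave password expiration enabled if you plan to let PVS manage periodic password updates, as configured in the next step.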
45. Select the Specified user account radio button.
46. Complete the User name, Domain, Password, and Confirm password fields, using the PVS account information created earlier.
47. Click Next.
48. Set the Days between password updates to 7.
This interval will vary per environment; 7 days was appropriate for testing purposes.
49. Click Next.
50. Keep the defaults for the network cards.
51. Click Next.
52. Select Use the Provisioning Services TFTP service checkbox.
53. Click Next.
54. Make sure that the IP Addresses for all PVS servers are listed in the Stream Servers Boot List.
55. Click Next.
56. If Soap Server is used, provide details.
57. Click Next.
58. If desired, fill in the Problem Report Configuration.
59. Click Next.
60. Click Finish to start installation.
61. When the installation is completed, click Done.
Complete the installation steps on the additional PVS servers, up to the configuration step that asks you to Create or Join a farm. In this CVD, we repeated the procedure to deploy a total of three PVS servers. To install an additional PVS server, complete the following steps:
1. On the Farm Configuration dialog, select “Join existing farm.”
2. Click Next.
3. Provide the FQDN of the SQL Server.
4. Click Next.
5. Accept the Farm Name.
6. Click Next.
7. Accept the Existing Site.
8. Click Next.
9. Accept the existing vDisk store.
10. Click Next.
11. Provide the PVS service account information.
12. Click Next.
13. Set the Days between password updates to 7.
14. Click Next.
15. Accept the network card settings.
16. Click Next.
17. Select Use the Provisioning Services TFTP service checkbox.
18. Click Next.
19. Make sure that the IP Addresses for all PVS servers are listed in the Stream Servers Boot List.
20. Click Next.
21. If Soap Server is used, provide details.
22. Click Next.
23. If desired, fill in the Problem Report Configuration.
24. Click Next.
25. Click Finish to start the installation process.
26. Click Done when the installation finishes.
You can optionally install the Provisioning Services console on the additional PVS servers following the procedure in the section Installing Provisioning Services.
After completing the steps to install the additional PVS servers, launch the Provisioning Services Console to verify that the PVS Servers and Stores are configured and that DHCP boot options are defined.
27. Launch Provisioning Services Console and select Connect to Farm.
28. Enter localhost for the PVS1 server.
29. Click Connect.
30. Select Store Properties from the drop-down list.
31. In the Store Properties dialog, add the Default store path to the list of Default write cache paths.
32. Click Validate. If the validation is successful, click Close and OK to continue.
Virtual Delivery Agents (VDAs) are installed on the server and workstation operating systems, and enable connections for desktops and apps. The following procedure was used to install VDAs for both HVD and HSD environments.
By default, when you install the Virtual Delivery Agent, Citrix User Profile Management is installed silently on master images. (Using profile management as a profile solution is optional but was used for this CVD, and is described in a later section.)
To install XenDesktop Virtual Desktop Agents, complete the following steps:
1. Launch the XenDesktop installer from the XenDesktop 7.15 ISO.
2. Click Start on the Welcome Screen.
3. To install the VDA for the Hosted Virtual Desktops (VDI), select Virtual Delivery Agent for Windows Desktop OS.
4. After the VDA is installed for Hosted Virtual Desktops, repeat the procedure to install the VDA for Hosted Shared Desktops (RDS). In this case, select Virtual Delivery Agent for Windows Server OS and follow the same basic steps.
5. Select “Create a Master Image.”
6. Click Next.
7. For the VDI vDisk, select “No, install the standard VDA.”
Select Yes, install in HDX 3D Pro Mode if the VM will be used with vGPU. For more information, see the section Configure VM with vGPU.
8. Click Next.
9. Optional: Do not select Citrix Receiver.
10. Click Next.
11. Select additional components required for your image. In this design, only UPM and MCS components were installed on the image.
Deselect Citrix Machine Identity Service when building a master image for use with Citrix Provisioning Services.
12. Click Next.
13. Do not configure Delivery Controllers at this time.
14. Click Next.
15. Accept the default features.
16. Click Next.
17. Allow the firewall rules to be configured Automatically.
18. Click Next.
19. Verify the Summary and click Install.
The machine will reboot automatically during installation.
20. (Optional) Configure Smart Tools/Call Home participation.
21. Click Next.
22. Check “Restart Machine.”
23. Click Finish and the machine will reboot automatically.
The Master Target Device refers to the target device from which a hard disk image is built and stored on a vDisk. Provisioning Services then streams the contents of the vDisk created to other target devices. This procedure installs the PVS Target Device software that is used to build the RDS and VDI golden images.
To install the Citrix Provisioning Server Target Device software, complete the following steps:
The instructions below outline the installation procedure to configure a vDisk for VDI desktops. When you have completed these installation steps, repeat the procedure to configure a vDisk for RDS.
1. Launch the PVS installer from the Provisioning Services 7.15 ISO.
2. Click the Target Device Installation button.
The installation wizard will check to resolve dependencies and then begin the PVS target device installation process.
3. Click Next.
4. Indicate your acceptance of the license by selecting the “I have read, understand, and accept the terms of the license agreement” radio button.
5. Click Next.
6. Optionally, provide the Customer information.
7. Click Next.
8. Accept the default installation path.
9. Click Next.
10. Click Install.
11. Deselect the checkbox to launch the Imaging Wizard and click Finish.
12. Click Yes to reboot the machine.
The PVS Imaging Wizard automatically creates a base vDisk image from the master target device. To create the Citrix Provisioning Server vDisks, complete the following steps:
The instructions below describe the process of creating a vDisk for VDI desktops. When you have completed these steps, repeat the procedure to build a vDisk for RDS.
1. The PVS Imaging Wizard's Welcome page appears.
2. Click Next.
3. The Connect to Farm page appears. Enter the name or IP address of a Provisioning Server within the farm to connect to and the port to use to make that connection.
4. Use the Windows credentials (default) or enter different credentials.
5. Click Next.
6. Select Create new vDisk.
7. Click Next.
8. The Add Target Device page appears.
9. Select the Target Device Name, the MAC address associated with one of the NICs that was selected when the target device software was installed on the master target device, and the Collection to which you are adding the device.
10. Click Next.
11. The New vDisk dialog displays. Enter the name of the vDisk.
12. Select the Store where the vDisk will reside. Select the vDisk type, either Fixed or Dynamic, from the drop-down list.
This CVD used Dynamic rather than Fixed vDisks.
13. Click Next.
14. On the Microsoft Volume Licensing page, select the volume license option to use for target devices. For this CVD, volume licensing is not used, so the None button is selected.
15. Click Next.
16. Select Image entire boot disk on the Configure Image Volumes page.
17. Click Next.
18. Select Optimize for hard disk again for Provisioning Services before imaging on the Optimize Hard Disk for Provisioning Services page.
19. Click Next.
20. Select Create on the Summary page.
21. Review the configuration and click Continue.
22. When prompted, click No to shut down the machine.
23. Edit the VM settings and select Force BIOS Setup under Boot Options.
24. Configure the BIOS/VM settings for PXE/network boot, putting Network boot from VMware VMXNET3 at the top of the boot device list.
25. Select Exit Saving Changes.
After restarting the VM, log into the HVD or HSD master target. The PVS imaging process begins, copying the contents of the C: drive to the PVS vDisk located on the server.
26. If prompte