Cisco Validated Design for a 5,000 Seat Virtual Desktop Infrastructure using Citrix XenDesktop/XenApp 7.7. Built on Cisco UCS and Cisco Nexus 9000 Series with NetApp AFF 8080EX and the VMware vSphere ESXi 6.0 Update 1 Hypervisor Platform
Last Updated: May 15, 2018
About Cisco Validated Designs
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit:
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2016 Cisco Systems, Inc. All rights reserved.
Table of Contents
FlexPod Data Center with Cisco UCS
Benefits of Cisco Unified Computing System
Benefits of Cisco Nexus Physical and Virtual Switching
Benefits of NetApp Cluster Data ONTAP Storage Controllers
Benefits of VMware vSphere ESXi 6.0
Benefits of Citrix XenApp and XenDesktop 7.7
Cisco Unified Computing System
Cisco Unified Computing System Components
Cisco UCS B200 M4 Blade Server
Cisco UCS VIC1340 Converged Network Adapter
Cisco Nexus 1000V Distributed Virtual Switch
Important Differentiators for the Cisco Nexus 1000V for VMware vSphere
Citrix XenApp and XenDesktop 7.7
Improved Database Flow and Configuration
Multiple Notifications before Machine Updates or Scheduled Restarts
API Support for Managing Session Roaming
API Support for Provisioning VMs from Hypervisor Templates
Support for New and Additional Platforms
Citrix Provisioning Services 7.7
NetApp AFF8080EX-A Used in Testing
Cisco Desktop Virtualization Solutions: Data Center
Cisco Desktop Virtualization Focus
Understanding Applications and Data
Project Planning and Solution Sizing Sample Questions
Desktop Virtualization Design Fundamentals
Example XenDesktop Deployments
Designing a XenDesktop Environment for a Mixed Workload
High-Level Architecture Design
NetApp Architecture Design Best Practice Guidelines
Storage Architecture Design Layout
Network Port Settings (8.3 or Later)
Network Port Broadcast Domains
Storage Efficiency and Space Management
Deployment Hardware and Software
Cisco Unified Computing System Configuration
Configure Fabric Interconnect at Console
Base Cisco UCS System Configuration
Enable Server Uplink and Storage Ports
Set Jumbo Frames in Cisco UCS Fabric
Create Local Disk Configuration Policy (Optional)
Create Network Control Policy for Cisco Discovery Protocol
Create Server Pool Qualification Policy (Optional)
Create vNIC/vHBA Placement Policy for Virtual Machine Infrastructure Hosts
Configure Update Default Maintenance Policy
Create Service Profile Templates
Configuration of AFF8080EX-A with Clustered Data ONTAP
Clustered Data ONTAP 8.3 ADP and Active–Active Configuration
Configuring Boot from iSCSI on NetApp AFF8080EX-A
Storage Considerations for PVS vDisks
Create Storage Volumes for PVS vDisks
NetApp Storage Configuration for CIFS Shares
Create User Home Directory Shares in Clustered Data ONTAP
Configuring Boot from SAN Overview
Configuring Boot from iSCSI on NetApp AFF8080
Clustered Data ONTAP iSCSI Configuration
iSCSI SAN Configuration on Cisco UCS Manager
Installing and Configuring VMware ESXi 6.0
Download Cisco Custom Image for ESXi 6.0 Update 1
Set Up VMware ESXi Installation
Set Up Management Networking for ESXi Hosts
Download VMware vSphere Client and vSphere Remote CLI
Log in to VMware ESXi Hosts by using VMware vSphere Client
Download Updated Cisco VIC eNIC Drivers
Load Updated Cisco VIC eNIC Drivers
Set Up VMkernel Ports and Virtual Switch
Install and Configure vCenter 6.0
FlexPod VMware vCenter Appliance
Install the Client Integration Plug-in
ESXi Dump Collector Setup for iSCSI-Booted Hosts
Installing and Configuring NetApp Virtual Storage Console (VSC)
FlexVol Volumes in Clustered Data ONTAP
FlexPod Cisco Virtual Switch Update Manager and Nexus 1000V
Installing Cisco Virtual Switch Update Manager
Install Cisco Virtual Switch Update Manager
Install Cisco Nexus 1000V using Cisco VSUM
Perform Base Configuration of the Primary VSM
Add VMware ESXi Hosts to Cisco Nexus 1000V
Migrate ESXi Host Redundant Network Ports to Cisco Nexus 1000V
Building the Virtual Machines and Environment
Software Infrastructure Configuration
Installing and Configuring XenDesktop and XenApp
Install XenDesktop Delivery Controller, Citrix Licensing and StoreFront
Additional XenDesktop Controller Configuration
Add the Second Delivery Controller to the XenDesktop Site
Create Host Connections with Citrix Studio
Installing and Configuring Citrix Provisioning Server 7.7
Install Additional PVS Servers
Install XenDesktop Virtual Desktop Agents
Install the Citrix Provisioning Server Target Device Software
Create Citrix Provisioning Server vDisks
Provision Virtual Desktop Machines
Configure User Profile Manager Share on NetApp AFF8080
Citrix XenDesktop Policies and Profile Management
Configure Citrix XenDesktop Policies
Configuring User Profile Management
Install and Configure NVIDIA M6 Card
Physical Install of M6 Card into B200 M4 Server
Install the NVIDIA VMware VIB Driver
Install the GPU Drivers inside your Windows VM
Install and Configure NVIDIA Grid License Server
Installing Cisco UCS Performance Manager
Configure the Control Center Host Mode
Enabling Access to Browser Interfaces
Deploy Cisco UCS Performance Manager
Setting up Cisco UCS Performance Manager
Add Nexus 9000 Series Switches
Cisco UCS Performance Manager Sample Test Data
Cisco UCS Test Configuration for Single Blade Scalability
Cisco UCS Configuration for Cluster Testing
Cisco UCS Configuration for Full Scale Testing
Testing Methodology and Success Criteria
Server-Side Response Time Measurements
Single-Server Recommended Maximum Workload
Single-Server Recommended Maximum Workload Testing
Single-Server Recommended Maximum Workload for RDS with 240 Users
Single-Server Recommended Maximum Workload for VDI Non-Persistent with 195 Users
Single-Server Recommended Maximum Workload for VDI Persistent with 195 Users
Cluster Workload Testing with 2600 RDS Users
Key NetApp AFF8080EX Performance Metrics During RDS Cluster Workload Testing
Key Infrastructure VM Server Performance Metrics During RDS Cluster Workload Testing
Cluster Workload Testing with 1200 Non-Persistent Desktop Users
Key NetApp AFF8080EX-A Performance Metrics during VDI Non-Persistent Cluster Workload Testing
Key Infrastructure VM Server Performance Metrics during VDI Non-Persistent Cluster Workload Testing
Cluster Workload Testing with 1200 Persistent Desktop Users
Key NetApp AFF8080EX Performance Metrics during VDI Persistent Cluster Workload Testing
Key Infrastructure VM Server Performance Metrics during VDI Persistent Cluster Testing
Full Scale Mixed Workload Testing with 5000 Users
Key NetApp AFF8080EX Performance Metrics during Full Scale Testing
Key Infrastructure VM Server Performance Metrics during Full Scale Testing
Scalability Considerations and Guidelines
NetApp FAS Storage Guidelines for Mixed Desktop Virtualization Workloads
Scalability of Citrix XenDesktop 7.7 Configuration
Appendix A Cisco Nexus 9372 Configuration
Appendix B NetApp AFF8080 Monitoring with PowerShell Scripts
Creating User Home Directory Folders with a PowerShell Script
Appendix C Additional Test Results
Login VSI Test Report for Full Scale Mixed Testing
This document provides a reference architecture for a virtual desktop and application design using Citrix XenApp/XenDesktop 7.7, built on Cisco UCS with a NetApp AFF 8080EX and the VMware vSphere ESXi 6.0 Update 1 hypervisor platform.
The landscape of desktop and application virtualization is changing constantly. New, high performance Cisco UCS Blade Servers and Cisco UCS unified fabric combined as part of the FlexPod Proven Infrastructure with the latest generation NetApp AFF storage result in a more compact, more powerful, more reliable and more efficient platform.
In addition, the advances in the Citrix XenApp/XenDesktop 7.7 system, which now incorporates traditional hosted virtual Windows 7, Windows 8, or Windows 10 desktops, hosted applications, and hosted shared Server 2008 R2 or Server 2012 R2 server desktops, provide unparalleled scale and management simplicity while extending the Citrix HDX FlexCast models to additional mobile devices.
This document provides the architecture and design of a virtual desktop infrastructure for up to 5000 mixed use-case users. The infrastructure is 100 percent virtualized on VMware ESXi 6.0 U1 with fourth-generation Cisco UCS B-Series B200 M4 blade servers booting via iSCSI from a NetApp AFF 8080EX storage array. The virtual desktops are powered using Citrix Provisioning Server 7.7 and Citrix XenApp/XenDesktop 7.7, with a mix of RDS hosted shared desktops (2600), pooled/non-persistent hosted virtual Windows 7 desktops (1200), and persistent hosted virtual Windows 7 desktops provisioned by NetApp Virtual Storage Console (1200) to support the user population. Where applicable, the document provides best practice recommendations and sizing guidelines for customer deployments of this solution.
The data center market segment is shifting toward heavily virtualized private, hybrid, and public cloud computing models running on industry-standard systems. These environments require uniform design points that can be repeated for ease of management and scalability.
These factors have led to the need for predesigned computing, networking, and storage building blocks optimized to lower the initial design cost, simplify management, and enable horizontal scalability and high levels of utilization.
Use cases include:
§ Enterprise Data Center (small failure domains)
§ Service Provider Data Center (small failure domains)
The FlexPod® Data Center solution combines NetApp® storage systems, Cisco® Unified Computing System servers, and Cisco Nexus fabric into a single, flexible architecture. FlexPod Data Center can scale up for greater performance and capacity or scale out for environments that need consistent, multiple deployments; FlexPod also has the flexibility to be sized and optimized to accommodate different use cases including app workloads such as MS SQL Server, Exchange, MS SharePoint, SAP, Red Hat, VDI, or Secure Multi-tenancy (SMT) environments. FlexPod Data Center delivers:
§ Faster Infrastructure, Workload and Application provisioning
§ Improved IT Staff Productivity
§ Reduced Downtime
§ Reduced Cost of Data Center Facilities, Power, and Cooling
§ Improved Utilization of Compute Resources
§ Improved Utilization of Storage Resources
The FlexPod Data Center with Cisco UCS allows IT departments to address Data Center infrastructure challenges using a streamlined architecture following compute, network and storage best practices.
For more design and use case details, refer to the FlexPod Datacenter with VMware vSphere 6.0 Design Guide.
This document describes the architecture and deployment procedures for an infrastructure composed of Cisco, NetApp, VMware hypervisor, and Citrix desktop/application virtualization products. The intended audience of this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers deploying the core FlexPod architecture with NetApp clustered Data ONTAP to run Citrix XenApp/XenDesktop workloads.
This solution is Cisco’s Desktop Virtualization Converged Design with FlexPod, providing our customers with a turnkey physical and virtual infrastructure specifically designed to support up to 5000 desktop users in a highly available, proven design. This architecture is well suited for large-scale enterprise deployments of virtual desktop infrastructure.
The combination of technologies from Cisco Systems, Inc., Citrix Systems, Inc., NetApp, and VMware Inc. produced a highly efficient, robust and affordable desktop virtualization solution for a hosted virtual desktop and hosted shared desktop mixed deployment supporting different use cases. Key components of the solution include the following:
§ More power, same size. The Cisco UCS B200 M4 half-width blade with dual 12-core 2.5 GHz Intel Xeon (E5-2680 v3) processors and 384 GB of memory for Citrix XenApp and XenDesktop hosts supports more virtual desktop workloads than the previous generation of processors on the same hardware. The Intel Xeon E5-2680 v3 12-core processors used in this study provided a balance between increased per-blade capacity and cost.
§ Fault-tolerance with high availability built into the design. The various designs are based on using one Unified Computing System chassis with multiple Cisco UCS B200 M4 blades for virtualized desktop and infrastructure workloads. The design provides N+1 server fault tolerance for hosted virtual desktops, hosted shared desktops and infrastructure services.
§ Stress-tested to the limits during aggressive boot scenario. The 5000-user mixed hosted virtual desktop and hosted shared desktop environment booted and registered with the XenDesktop 7.7 Delivery Controllers in under 15 minutes, providing our customers with an extremely fast, reliable cold-start desktop virtualization system.
§ Stress-tested to the limits during simulated login storms. All 5000 simulated users logged in and started running workloads up to steady state in 48 minutes without overwhelming the processors, exhausting memory, or exhausting the storage subsystems, providing customers with a desktop virtualization system that can easily handle the most demanding login and startup storms.
§ Ultra-condensed computing for the datacenter. The rack space required to support the system is less than a single rack, conserving valuable data center floor space.
§ Pure Virtualization: This CVD presents a validated design that is 100 percent virtualized on VMware ESXi 6.0. All of the virtual desktops, user data, profiles, and supporting infrastructure components, including Active Directory, Provisioning Servers, SQL Servers, XenDesktop Delivery Controllers, XenDesktop VDI desktops, and XenApp RDS servers were hosted as virtual machines. This provides customers with complete flexibility for maintenance and capacity additions because the entire system runs on the FlexPod converged infrastructure with stateless Cisco UCS Blade servers, and NetApp unified storage.
§ Cisco maintains industry leadership with the new Cisco UCS Manager 3.1(1) software that simplifies scaling, guarantees consistency, and eases maintenance. Cisco’s ongoing development efforts with Cisco UCS Manager, Cisco UCS Central, and Cisco UCS Director ensure that customer environments are consistent locally, across Cisco UCS domains, and across the globe. Our software suite offers increasingly simplified operational and deployment management, and it continues to widen the span of control for customer organizations’ subject matter experts in compute, storage, and network.
§ Our 10-Gbps unified fabric receives additional validation on Cisco UCS 6200 Series Fabric Interconnects as Cisco runs ever more challenging workload testing, while maintaining unsurpassed user response times.
§ NetApp® AFF with clustered Data ONTAP® provides industry-leading storage solutions that efficiently handle the most demanding I/O bursts (for example, login storms), profile management, and user data management, deliver simple and flexible business continuance, and help reduce storage cost per desktop.
§ NetApp AFF provides a simple to understand storage architecture for hosting all user data components (VMs, profiles, user data) on the same storage array.
§ The NetApp clustered Data ONTAP system enables you to seamlessly add, upgrade, or remove storage from the infrastructure to meet the needs of the virtual desktops.
§ NetApp Virtual Storage Console (VSC) for the VMware vSphere hypervisor has deep integration with vSphere, providing easy-button automation for key storage tasks such as datastore provisioning, storage resizing, and data deduplication, directly from vCenter.
§ Latest and greatest virtual desktop and application product. Citrix XenApp™ and XenDesktop™ 7.7 follows a new unified product architecture that supports both hosted-shared desktops and applications (RDS) and complete virtual desktops (VDI). This new XenDesktop release simplifies tasks associated with large-scale VDI management. This modular solution supports seamless delivery of Windows apps and desktops as the number of users increase. In addition, HDX enhancements help to optimize performance and improve the user experience across a variety of endpoint device types, from workstations to mobile devices including laptops, tablets, and smartphones.
§ Optimized to achieve the best possible performance and scale. For hosted shared desktop sessions, the best performance was achieved when the number of vCPUs assigned to the XenApp 7.7 RDS virtual machines did not exceed the number of hyper-threaded (logical) cores available on the server. In other words, maximum performance is obtained when not overcommitting the CPU resources for the virtual machines running virtualized RDS systems. (A brief sizing sketch follows this list.)
§ Provisioning desktop machines made easy. Citrix Provisioning Services 7.7 created hosted virtual desktops as well as hosted shared desktops for this solution using a single method for both: the “PVS XenDesktop Setup Wizard.” The addition of the “Cache in RAM with overflow on hard disk” feature greatly reduced the IOPS load on the storage.
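To make the CPU sizing guideline above concrete, the short PowerShell sketch below works through the arithmetic for a B200 M4 host with two 12-core Intel Xeon E5-2680 v3 processors and Hyper-Threading enabled. The RDS virtual machine count and per-VM vCPU assignment shown are illustrative assumptions only, not a statement of the tested configuration.
# Illustrative arithmetic only: verify that the total vCPUs assigned to XenApp RDS
# VMs on a host do not exceed the host's logical (hyper-threaded) core count.
$sockets        = 2
$coresPerSocket = 12                                # Intel Xeon E5-2680 v3
$logicalCores   = $sockets * $coresPerSocket * 2    # Hyper-Threading: 48 logical cores
$rdsVmsPerHost  = 6                                 # assumed VM count (example)
$vCpuPerRdsVm   = 8                                 # assumed vCPU assignment (example)
$totalVcpu      = $rdsVmsPerHost * $vCpuPerRdsVm    # 48 vCPUs
if ($totalVcpu -le $logicalCores) {
    "OK: $totalVcpu vCPUs assigned, $logicalCores logical cores available (no CPU overcommit)."
} else {
    "Warning: $totalVcpu vCPUs exceed $logicalCores logical cores (CPU overcommitted)."
}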
Each of the components of the overall solution materially contributes to the value of functional design contained in this document.
Cisco Unified Computing System™ (UCS) is the first converged data center platform that combines industry-standard, x86-architecture servers with networking and storage access into a single converged system. The system is entirely programmable using unified, model-based management to simplify and speed deployment of enterprise-class applications and services running in bare-metal, virtualized, and cloud computing environments.
The benefits of the Cisco Unified Computing System include:
Architectural Flexibility
§ Cisco UCS B-Series blade servers for infrastructure and virtual workload hosting
§ Cisco UCS C-Series rack-mount servers for infrastructure and virtual workload hosting
§ Cisco UCS 6200 Series second generation fabric interconnects provide unified blade, network and storage connectivity
§ Cisco UCS 5108 Blade Chassis provide the perfect environment for multi-server type, multi-purpose workloads in a single containment
Infrastructure Simplicity
§ Converged, simplified architecture drives increased IT productivity
§ Cisco UCS management results in flexible, agile, high performance, self-integrating information technology with faster ROI
§ Fabric Extender technology reduces the number of system components to purchase, configure and maintain
§ Standards-based, high bandwidth, low latency virtualization-aware unified fabric delivers high density, excellent virtual desktop user-experience
Business Agility
§ Model-based management means faster deployment of new capacity for rapid and accurate scalability
§ Scale up to 20 chassis and up to 160 blades in a single Cisco UCS management domain
§ Scale to multiple Cisco UCS Domains with Cisco UCS Central within and across data centers globally
§ Leverage Cisco UCS Management Packs for VMware vCenter 5.1 for integrated management
The Cisco Nexus product family includes lines of physical unified-port Layer 2 10-Gbps switches, fabric extenders, and virtual distributed switching technologies. In our study, we utilized Cisco Nexus 9300 Series physical switches and Cisco Nexus 1000V distributed virtual switches to deliver an excellent end-user experience while extending connectivity control.
With the release of the NetApp® clustered Data ONTAP® storage operating system, NetApp was the first to market with enterprise-ready, unified scale-out storage. Developed from a solid foundation of proven Data ONTAP technology and innovation, clustered Data ONTAP is the basis for virtualized shared storage infrastructures that are architected for nondisruptive operations over the lifetime of the system. For details on how to configure clustered Data ONTAP with the VMware vSphere hypervisor, see the FlexPod Datacenter with VMware vSphere 6.0 Design Guide.
All clustering technologies follow a common set of guiding principles:
§ Nondisruptive operation. Configuring a cluster so that it cannot fail is the key to efficiency and the basis of clustering.
§ Virtualized cluster access. Steady-state operations are abstracted from the storage nodes, allowing the user to interact with the cluster as a single entity. It is only during the initial configuration of the cluster that direct node access is necessary.
§ Data mobility and container transparency. A collection of independent storage nodes work together and are presented as one holistic solution. Therefore, data moves freely and nondisruptively within the boundaries of the cluster regardless of disk type, disk size, or data location.
§ Load balancing. Loads are balanced across clustered storage controller nodes with no interruption to the end user.
§ Hardware flexibility. A cluster can contain different hardware models for scaling up or scaling out. You can start a cluster with inexpensive storage controllers and then add more expensive, high-performance controllers when business demand requires them without sacrificing previous investments.
§ Delegated management. In large complex clusters, workloads can be isolated by delegating or segmenting features and functions into containers that can be acted upon independently of the cluster. Notably, the cluster architecture itself must not create these isolations. This principle should not be confused with security concerns regarding the content being accessed.
Data centers require agility. In a data center, each storage controller has CPU, memory, and disk shelves limits. With scale out, additional controllers can be added seamlessly to the resource pool residing on a shared storage infrastructure as the storage environment grows. Host and client connections as well as storage repositories can be moved seamlessly and nondisruptively anywhere within the resource pool.
Scale-out provides the following benefits:
§ Nondisruptive operations
§ No downtime when adding thousands of users to virtual desktop environments
§ Operational simplicity and flexibility
NetApp clustered Data ONTAP is the first product that offers a complete scale-out solution in an intelligent, adaptable, always-available storage infrastructure, utilizing proven storage efficiency for today's highly virtualized environments. Figure 1 depicts the organization of the NetApp scale-out solution.
Multiprotocol unified architecture supports multiple data access protocols concurrently in the same storage system over a whole range of different controller and disk storage types. Data ONTAP 7G and Data ONTAP operating in 7-Mode have long supported multiple protocols, and now clustered Data ONTAP supports an even wider range of data access protocols. Clustered Data ONTAP 8.3 supports the following protocols:
§ NFS v3, v4, and v4.1, including pNFS
§ SMB 1, 2, 2.1, and 3, including support for non-disruptive failover in Microsoft Hyper-V and Citrix PVS vDisk
§ iSCSI
§ Fibre Channel
§ FCoE
Isolated servers and data storage can result in low utilization, gross inefficiency, and an inability to respond to changing business needs. Cloud architecture and delivering IT as a service (ITaaS) can overcome these limitations while reducing future IT expenditure.
The storage virtual machine (SVM; formerly called Vserver) is the primary cluster logical component. Each SVM can create volumes, logical interfaces, and protocol access. With clustered Data ONTAP, each tenant's virtual desktops and data can be placed on different SVMs. The administrator of each SVM has the rights to provision volumes and other SVM-specific operations. This is particularly advantageous for service providers or any multitenant environments for which workload separation is desired.
Figure 2 shows the multitenancy concept in clustered Data ONTAP.
For complete and consistent management of storage and SAN infrastructure, NetApp recommends using the tools listed in Table 1, unless specified otherwise.
Table 1 Cluster Management Tools
Task | Management Tools
SVM management | NetApp OnCommand® System Manager
Switch management and zoning | Switch vendor GUI or CLI interfaces
Volume and LUN provisioning and management | NetApp Virtual Storage Console for VMware vSphere
The following key terms are used throughout the remainder of this document:
§ Cluster. The information boundary and domain in which information travels. The cluster is where SVMs operate and also where high availability is defined between the physical nodes.
§ Node. A physical storage entity running Data ONTAP. This physical entity can be a traditional NetApp FAS controller; a supported third-party array front ended by a V-Series controller; or a NetApp virtual storage appliance (VSA) running Data ONTAP-v™.
§ SVM. A secure, virtualized storage controller that appears to end users as a physical entity (similar to a virtual machine [VM]). It is connected to one or more nodes through internal networking relationships (covered later in this document). An SVM is the highest element visible to an external consumer, and it abstracts the layer of interaction from the physical nodes. In other words, an SVM is used to provision cluster resources and can be compartmentalized in a secured manner to prevent access to other parts of the cluster.
The physical interfaces on a node are called ports, and IP addresses are assigned to logical interfaces (LIFs). LIFs are logically connected to a port in much the same way that VM virtual network adapters and VMkernel ports are connected to physical adapters, except without the need for virtual switches and port groups. Physical ports can be grouped into interface groups, and VLANs can be created on top of physical ports or interface groups. LIFs can be associated with a port, interface group, or VLAN.
Figure 3 shows the clustered Data ONTAP networking concept.
Figure 3 Example of ports and LIFs
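As a quick, read-only way to inspect this port-to-LIF relationship on a live system, the hedged sketch below uses the NetApp Data ONTAP PowerShell Toolkit; the module name, cluster management address, and credentials are assumptions for illustration, and the same information is available from the clustered Data ONTAP CLI or OnCommand System Manager.
# Hedged, read-only illustration: list the physical and logical network objects so
# the port -> interface group/VLAN -> LIF relationships described above can be seen.
# The cluster management address below is a placeholder.
Import-Module DataONTAP
Connect-NcController "192.168.1.50" -Credential (Get-Credential)
Get-NcNetPort      | Format-Table -AutoSize   # physical ports, interface groups, VLAN ports
Get-NcNetInterface | Format-Table -AutoSize   # LIFs with their home nodes and ports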
Most desktop virtualization implementations deploy thousands of desktops from a small number of golden VM images, resulting in a large amount of duplicate data. This is particularly the case with the VM operating system.
The NetApp All Flash FAS (AFF) solution includes built-in thin provisioning, data deduplication, compression, and zero-cost cloning with NetApp FlexClone® technology. Customers get multilevel storage efficiency across virtual desktop data, installed applications, and user data. This comprehensive storage efficiency can significantly reduce the storage footprint for virtualized desktop implementations. Capacity can realistically be reduced by up to 10:1 or 90%, based on existing customer deployments and NetApp solutions lab validation.
The following features make this storage efficiency possible:
§ Thin provisioning allows multiple applications to share a single pool of on-demand storage. This capability eliminates the need to provision more storage for one application when another application still has plenty of allocated but unused storage. Thin provisioning is not really a storage efficiency technology because thin-provisioned VMs do not necessarily remain thin over time, but it can help increase utilization.
§ Inline deduplication saves space on primary storage by removing redundant copies of blocks in a volume that hosts hundreds of virtual desktops prior to writing the data to disks. This process is transparent to the application and the user, and it can be enabled and disabled dynamically by volume. To eliminate any potential concerns about deduplication causing additional wear on the SSDs, NetApp provides up to seven years of support at point-of-sale pricing. This support includes three years of standard and two plus two years of extended support, regardless of the number of drive writes per day. With AFF, deduplication can be run in an inline configuration to maintain storage efficiency over time.
§ FlexClone technology offers hardware-assisted rapid creation of space-efficient, writable, point-in-time images of individual VM files, LUNs, or flexible volumes. It is fully integrated with VMware vSphere vStorage APIs for Array Integration (VAAI) and Microsoft offloaded data transfer (ODX). The use of FlexClone technology in virtual desktop infrastructure deployments provides high levels of scalability and significant savings in cost, space, and time.
Both file-level and volume-level cloning are tightly integrated with the VMware vCenter Server. Integration is supported by the NetApp Virtual Storage Console Provisioning and Cloning vCenter plug-in and native VM cloning offload with VMware VAAI and Microsoft ODX. The Virtual Storage Console provides the flexibility to rapidly provision and redeploy thousands of VMs with hundreds of VMs in each datastore.
§ Inline pattern matching occurs when data is written to the storage system. Incoming data is received and hashed against existing data on the system. If the data is similar, it is marked for bit-for-bit comparison. Any zeros written to the system are removed through inline deduplication.
§ Inline zero deduplication saves space and improves performance by not writing zeroes. Doing so improves storage efficiency by eliminating the need to deduplicate the zeroes. This feature improves cloning time for eager-zeroed thick disk files and eliminates the zeroing of virtual machine disks (VMDKs) that require zeroing before data write, thus increasing SSD life expectancy. This feature is available in Data ONTAP 8.3 and later.
§ Inline compression saves space by compressing data as it enters the storage controller. Inline compression can be beneficial for the different data types that make up a virtual desktop environment. Each of these data types has different capacity and performance requirements, so some types might be better suited for inline compression than others. Using inline compression and deduplication together can significantly increase storage efficiency over using each independently.
§ Advanced drive partitioning distributes the root file system across multiple disks in an HA pair. It allows higher overall capacity utilization by removing the need for dedicated root and spare disks. This feature is available in Data ONTAP 8.3 and later.
VMware vSphere® 6.0, the industry-leading virtualization platform, empowers users to virtualize any application with confidence, redefines availability, and simplifies the virtual data center. The result is a highly available, resilient, on-demand infrastructure that is the ideal foundation for any cloud environment. This blockbuster release contains the following new features and enhancements, many of which are industry-first features.
The following are some key features included with vSphere 6.0:
§ Increased Scalability
§ Expanded Support
§ Extended Graphics Support
§ Instant Clone
§ Storage Policy-Based Management
§ Network IO Control
§ Multicast Snooping
§ vMotion Enhancements
§ Expanded Software-Based Fault Tolerance
§ Enhanced User Interface
Enterprise IT organizations are tasked with the challenge of provisioning Microsoft Windows apps and desktops while managing cost, centralizing control, and enforcing corporate security policy. Deploying Windows apps to users in any location, regardless of the device type and available network bandwidth, enables a mobile workforce that can improve productivity. With Citrix XenDesktop™ 7.7, IT can effectively control app and desktop provisioning while securing data assets and lowering capital and operating expenses.
The XenDesktop™ 7.7 release offers these benefits:
§ Comprehensive virtual desktop delivery for any use case. The XenDesktop 7.7 release incorporates the full power of XenApp, delivering full desktops or just applications to users. Administrators can deploy both XenApp published applications and desktops (to maximize IT control at low cost) or personalized VDI desktops (with simplified image management) from the same management console. Citrix XenDesktop 7.7 leverages common policies and cohesive tools to govern both infrastructure resources and user access.
§ Simplified support and choice of BYO (Bring Your Own) devices. XenDesktop 7.7 brings thousands of corporate Microsoft Windows-based applications to mobile devices with a native-touch experience and optimized performance. HDX technologies create a “high definition” user experience, even for graphics intensive design and engineering applications.
§ Lower cost and complexity of application and desktop management. XenDesktop 7.7 helps IT organizations take advantage of agile and cost-effective cloud offerings, allowing the virtualized infrastructure to flex and meet seasonal demands or the need for sudden capacity changes. IT organizations can deploy XenDesktop application and desktop workloads to private or public clouds.
§ Protection of sensitive information through centralization. XenDesktop decreases the risk of corporate data loss by centralizing applications and data in the data center, enabling broad access while securing intellectual property.
This section describes the infrastructure components used in the solution outlined in this study.
Cisco UCS Manager provides unified, embedded management of all software and hardware components of the Cisco Unified Computing System™ (Cisco UCS) through an intuitive GUI, a command-line interface (CLI), and an XML API. The manager provides a unified management domain with centralized management capabilities and can control multiple chassis and thousands of virtual machines.
Cisco UCS is a next-generation data center platform that unites computing, networking, and storage access. The platform, optimized for virtual environments, is designed using open industry-standard technologies and aims to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. It is an integrated, scalable, multichassis platform in which all resources participate in a unified management domain.
The main components of Cisco UCS (Figure 4) are:
§ Computing: The system is based on an entirely new class of computing system that incorporates blade servers based on Intel® Xeon® processor E5-2600/4600 v3 and E7-2800 v3 family CPUs.
§ Network: The system is integrated on a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing (HPC) networks, which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables needed, and by decreasing the power and cooling requirements.
§ Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.
§ Storage access: The system provides consolidated access to local storage, SAN storage, and network-attached storage (NAS) over the unified fabric. With storage access unified, Cisco UCS can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Small Computer System Interface over IP (iSCSI) protocols. This capability provides customers with choice for storage access and investment protection. In addition, server administrators can preassign storage-access policies for system connectivity to storage resources, simplifying storage connectivity and management and helping increase productivity.
§ Management: Cisco UCS uniquely integrates all system components, enabling the entire solution to be managed as a single entity by Cisco UCS Manager. The manager has an intuitive GUI, a CLI, and a robust API for managing all system configuration processes and operations.
Figure 4 Cisco Unified Computing System Components
Cisco UCS is designed to deliver:
§ Reduced TCO and increased business agility
§ Increased IT staff productivity through just-in-time provisioning and mobility support
§ A cohesive, integrated system that unifies the technology in the data center; the system is managed, serviced, and tested as a whole
§ Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand
§ Industry standards supported by a partner ecosystem of industry leaders
Cisco UCS Manager release 3.1 is a unified software release for all supported Cisco UCS hardware platforms. The release adds support for an HTML5 interface in addition to the Java interface, both of which are available across all platforms.
The Cisco UCS 6200 Series Fabric Interconnects are a core part of Cisco UCS, providing both network connectivity and management capabilities for the system. The Cisco UCS 6200 Series offers line-rate, low-latency, lossless 10 Gigabit Ethernet, FCoE, and Fibre Channel functions.
The fabric interconnects provide the management and communication backbone for the Cisco UCS B-Series Blade Servers and Cisco UCS 5100 Series Blade Server Chassis. All chassis, and therefore all blades, attached to the fabric interconnects become part of a single, highly available management domain. In addition, by supporting unified fabric, the Cisco UCS 6200 Series provides both LAN and SAN connectivity for all blades in the domain.
For networking, the Cisco UCS 6200 Series uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10 Gigabit Ethernet on all ports, 1-terabit (Tb) switching capacity, and 160 Gbps of bandwidth per chassis, independent of packet size and enabled services. The product series supports Cisco low-latency, lossless, 10 Gigabit Ethernet unified network fabric capabilities, increasing the reliability, efficiency, and scalability of Ethernet networks. The fabric interconnects support multiple traffic classes over a lossless Ethernet fabric, from the blade server through the interconnect. Significant TCO savings come from an FCoE-optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated.
Figure 5 Cisco UCS 6200 Series Fabric Interconnect
The Cisco UCS B200 M4 Blade Server (Figure 6 and Figure 7) is a density-optimized, half-width blade server that supports two CPU sockets for Intel Xeon processor E5-2600 v3 series CPUs and up to 24 DDR4 DIMMs. It supports one modular LAN-on-motherboard (LOM) dedicated slot for a Cisco virtual interface card (VIC) and one mezzanine adapter. In addition, the Cisco UCS B200 M4 supports an optional storage module that accommodates up to two SAS or SATA hard disk drives (HDDs) or solid-state disk (SSD) drives. You can install up to eight Cisco UCS B200 M4 servers in a chassis, mixing them with other models of Cisco UCS blade servers in the chassis if desired.
Figure 6 Cisco UCS B200 M4 Front View
Figure 7 Cisco UCS B200 M4 Back View
Cisco UCS combines Cisco UCS B-Series Blade Servers and C-Series Rack Servers with networking and storage access into a single converged system with simplified management, greater cost efficiency and agility, and increased visibility and control. The Cisco UCS B200 M4 Blade Server is one of the newest servers in the Cisco UCS portfolio.
The Cisco UCS B200 M4 delivers performance, flexibility, and optimization for data centers and remote sites. This enterprise-class server offers market-leading performance, versatility, and density without compromise for workloads ranging from web infrastructure to distributed databases. The Cisco UCS B200 M4 can quickly deploy stateless physical and virtual workloads with the programmable ease of use of the Cisco UCS Manager software and simplified server access with Cisco® Single Connect technology. Based on the Intel Xeon processor E5-2600 v3 product family, it offers up to 768 GB of memory using 32-GB DIMMs, up to two disk drives, and up to 80 Gbps of I/O throughput. The Cisco UCS B200 M4 offers exceptional levels of performance, flexibility, and I/O throughput to run your most demanding applications.
In addition, Cisco UCS has the architectural advantage of not having to power and cool excess switches, NICs, and HBAs in each blade server chassis. With a larger power budget per blade server, it provides uncompromised expandability and capabilities, as in the new Cisco UCS B200 M4 server with its leading memory-slot capacity and drive capacity.
The Cisco UCS B200 M4 provides:
§ Up to two multicore Intel Xeon processor E5-2600 v3 series CPUs for up to 36 processing cores
§ 24 DIMM slots for industry-standard DDR4 memory at speeds up to 2133 MHz, and up to 768 GB of total memory when using 32-GB DIMMs
§ Two optional, hot-pluggable SAS and SATA HDDs or SSDs
§ Cisco UCS VIC 1340, a 2-port, 40 Gigabit Ethernet and FCoE–capable modular (mLOM) mezzanine adapter
— Provides two 40-Gbps unified I/O ports or two sets of four 10-Gbps unified I/O ports
— Delivers 80 Gbps to the server
— Adapts to either 10- or 40-Gbps fabric connections
§ Cisco FlexStorage local drive storage subsystem, with flexible boot and local storage capabilities that allow you to:
— Configure the Cisco UCS B200 M4 to meet your local storage requirements without having to buy, power, and cool components that you do not need
— Choose an enterprise-class RAID controller, or go without any controller or drive bays if you are not using local drives
— Easily add, change, and remove Cisco FlexStorage modules
The Cisco UCS B200 M4 server is a half-width blade. Up to eight can reside in the 6-rack-unit (6RU) Cisco UCS 5108 Blade Server Chassis, offering one of the highest densities of servers per rack unit of blade chassis in the industry.
The Cisco UCS B200 M4 server is well suited for a broad spectrum of IT workloads, including:
§ IT and web infrastructure
§ Virtualized workloads
§ Consolidating applications
§ Virtual desktops
§ Middleware
§ Enterprise resource planning (ERP) and customer-relationship management (CRM) applications
The Cisco UCS B200 M4 is one member of the Cisco UCS B-Series Blade Servers platform. As part of Cisco UCS, Cisco UCS B-Series servers incorporate many innovative Cisco technologies to help customers handle their most challenging workloads. Cisco UCS B-Series servers within a Cisco UCS management framework incorporate a standards-based unified network fabric, Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) virtualization support, Cisco UCS Manager, Cisco UCS Central Software, Cisco UCS Director software, and Cisco fabric extender architecture.
The Cisco UCS B200 M4 Blade Server delivers:
§ Suitability for a wide range of applications and workload requirements
§ Highest-performing CPU and memory options without constraints in configuration, power, or cooling
§ Half-width form factor that offers industry-leading benefits
§ Latest features of Cisco UCS VICs
For more information about the Cisco UCS B200 M4, see http://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-b200-m4-blade-server/model.html
The Cisco UCS Virtual Interface Card (VIC) 1340 (Figure 8) is a 2-port 40-Gbps Ethernet or dual 4 x 10-Gbps Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) adapter designed exclusively for the M4 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional port expander, the capabilities of the Cisco UCS VIC 1340 are extended to two ports of 40-Gbps Ethernet.
The Cisco UCS VIC 1340 enables a policy-based, stateless, agile server infrastructure that can present over 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1340 supports Cisco® Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment and management.
Figure 9 The Cisco UCS VIC 1340 Virtual Interface Cards Deployed in the Cisco UCS B-Series B200 M4 Blade Servers
The Cisco Nexus 9372PX/9372PX-E Switches have 48 1/10-Gbps Small Form Pluggable Plus (SFP+) ports and 6 Quad SFP+ (QSFP+) uplink ports. All ports are line rate, delivering 1.44 Tbps of throughput in a 1-rack-unit (1RU) form factor. Cisco Nexus 9372PX benefits are listed below.
Architectural Flexibility
§ Includes top-of-rack or middle-of-row fiber-based server access connectivity for traditional and leaf-spine architectures
§ Leaf node support for Cisco ACI architecture is provided in the roadmap
§ Increase scale and simplify management through Cisco Nexus 2000 Fabric Extender support
Feature Rich
§ Enhanced Cisco NX-OS Software is designed for performance, resiliency, scalability, manageability, and programmability
§ ACI-ready infrastructure helps users take advantage of automated policy-based systems management
§ Virtual Extensible LAN (VXLAN) routing provides network services
§ Cisco Nexus 9372PX-E supports IP-based endpoint group (EPG) classification in ACI mode
Highly Available and Efficient Design
§ High-density, non-blocking architecture
§ Easily deployed into either a hot-aisle or cold-aisle configuration
§ Redundant, hot-swappable power supplies and fan trays
Simplified Operations
§ Power-On Auto Provisioning (POAP) support allows for simplified software upgrades and configuration file installation
§ An intelligent API offers switch management through remote procedure calls (RPCs, JSON, or XML) over an HTTP/HTTPS infrastructure
§ Python Scripting for programmatic access to the switch command-line interface (CLI)
§ Hot and cold patching, and online diagnostics
Investment Protection
§ A Cisco 40 Gb bidirectional transceiver allows for reuse of an existing 10 Gigabit Ethernet multimode cabling plant for 40 Gigabit Ethernet
§ Support for 1 Gb and 10 Gb access connectivity for data centers migrating access switching infrastructure to faster speed
Supports
§ 1.44 Tbps of bandwidth in a 1 RU form factor
§ 48 fixed 1/10-Gbps SFP+ ports
§ 6 fixed 40-Gbps QSFP+ for uplink connectivity that can be turned into 10 Gb ports through a QSFP to SFP or SFP+ Adapter (QSA)
§ Latency of 1 to 2 microseconds
§ Front-to-back or back-to-front airflow configurations
§ 1+1 redundant hot-swappable 80 Plus Platinum-certified power supplies
§ Hot swappable 2+1 redundant fan tray
Get highly secure, multitenant services by adding virtualization intelligence to your data center network with the Cisco Nexus 1000V Switch for VMware vSphere. This switch does the following:
§ Extends the network edge to the hypervisor and virtual machines
§ Is built to scale for cloud networks
§ Forms the foundation of virtual network overlays for the Cisco Open Network Environment and Software Defined Networking (SDN)
Important Differentiators for the Cisco Nexus 1000V for VMware vSphere
§ Extensive virtual network services built on Cisco advanced service insertion and routing technology
§ Support for vCloud Director and vSphere hypervisor
§ Feature and management consistency for easy integration with the physical infrastructure
§ Exceptional policy and control features for comprehensive networking functionality
§ Policy management and control by the networking team instead of the server virtualization team (separation of duties)
The Cisco Nexus 1000V Switch optimizes the use of Layer 4 - 7 virtual networking services in virtual machine and cloud environments through Cisco vPath architecture services.
Cisco vPath 2.0 supports service chaining so you can use multiple virtual network services as part of a single traffic flow. For example, you can simply specify the network policy, and vPath 2.0 can direct traffic:
§ Through the Cisco Virtual Security Gateway for the Cisco Nexus 1000V Switch for zone-based firewall enforcement
In addition, Cisco vPath works on VXLAN to support movement between servers in different Layer 2 domains. Together, these features promote highly secure policy, application, and service delivery in the cloud.
Citrix XenApp and XenDesktop are application and desktop virtualization solutions built on a unified architecture, so they're simple to manage and flexible enough to meet the needs of all your organization's users. XenApp and XenDesktop have a common set of management tools that simplify and automate IT tasks. You use the same architecture and management tools to manage public, private, and hybrid cloud deployments as you do for on-premises deployments.
Citrix XenApp delivers the following:
§ XenApp published apps, also known as server-based hosted applications: These are applications hosted from Microsoft Windows servers to any type of device, including Windows PCs, Macs, smartphones, and tablets. Some XenApp editions include technologies that further optimize the experience of using Windows applications on a mobile device by automatically translating native mobile-device display, navigation, and controls to Windows applications; enhancing performance over mobile networks; and enabling developers to optimize any custom Windows application for any mobile environment.
§ XenApp published desktops, also known as server-hosted desktops: These are inexpensive, locked-down Windows virtual desktops hosted from Windows server operating systems. They are well suited for users, such as call center employees, who perform a standard set of tasks.
§ Virtual machine–hosted apps: These are applications hosted from machines running Windows desktop operating systems for applications that can’t be hosted in a server environment.
§ Windows applications delivered with Microsoft App-V: These applications use the same management tools that you use for the rest of your XenApp deployment.
§ Citrix XenDesktop 7.7: Includes significant enhancements to help customers deliver Windows apps and desktops as mobile services while addressing management complexity and associated costs. Enhancements in this release include:
§ Unified product architecture for XenApp and XenDesktop: the FlexCast Management Architecture (FMA). This release supplies a single set of administrative interfaces to deliver both hosted-shared applications (RDS) and complete virtual desktops (VDI). Unlike earlier releases that separately provisioned Citrix XenApp and XenDesktop farms, the XenDesktop 7.7 release allows administrators to deploy a single infrastructure and use a consistent set of tools to manage mixed application and desktop workloads.
§ Support for extending deployments to the cloud. This release provides the ability for hybrid cloud provisioning from Microsoft Azure, Amazon Web Services (AWS) or any Cloud Platform-powered public or private cloud. Cloud deployments are configured, managed, and monitored through the same administrative consoles as deployments on traditional on-premises infrastructure.
Citrix XenDesktop delivers:
§ VDI desktops: These virtual desktops each run a Microsoft Windows desktop operating system rather than running in a shared, server-based environment. They can provide users with their own desktops that they can fully personalize.
§ Hosted physical desktops: This solution is well suited for providing secure access to powerful physical machines, such as blade servers, from within your data center.
§ Remote PC access: This solution allows users to log in to their physical Windows PC from anywhere over a secure XenDesktop connection.
§ Server VDI: This solution is designed to provide hosted desktops in multitenant, cloud environments.
§ Capabilities that allow users to continue to use their virtual desktops: These capabilities let users continue to work while not connected to your network.
This product release includes the following new and enhanced features:
Some XenDesktop editions include the features available in XenApp.
Deployments that span widely-dispersed locations connected by a WAN can face challenges due to network latency and reliability. Configuring zones can help users in remote regions connect to local resources without forcing connections to traverse large segments of the WAN. Using zones allows effective Site management from a single Citrix Studio console, Citrix Director, and the Site database. This saves the costs of deploying, staffing, licensing, and maintaining additional Sites containing separate databases in remote locations.
Zones can be helpful in deployments of all sizes. You can use zones to keep applications and desktops closer to end users, which improves performance.
For more information, see the Zones article.
When you configure the databases during Site creation, you can now specify separate locations for the Site, Logging, and Monitoring databases. Later, you can specify different locations for all three databases. In previous releases, all three databases were created at the same address, and you could not specify a different address for the Site database later.
You can now add more Delivery Controllers when you create a Site, as well as later. In previous releases, you could add more Controllers only after you created the Site.
For more information, see the Databases and Controllers articles.
Configure application limits to help manage application use. For example, you can use application limits to manage the number of users accessing an application simultaneously. Similarly, application limits can be used to manage the number of simultaneous instances of resource-intensive applications; this can help maintain server performance and prevent deterioration in service.
For more information, see the Manage applications article.
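As a hedged illustration of how these limits surface in the Citrix Broker PowerShell SDK, the snippet below caps the total and per-user instances of a published application; the application name and limit values are examples only, and the parameter names should be verified against the SDK documentation for your release.
# Example only: limit a resource-intensive published application to 50 concurrent
# instances site-wide and 1 instance per user. Names and values are placeholders.
Add-PSSnapin Citrix.Broker.Admin.V2
Get-BrokerApplication -Name "ResourceHeavyApp" |
    Set-BrokerApplication -MaxTotalInstances 50 -MaxPerUserInstances 1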
You can now choose to repeat a notification message that is sent to affected machines before the following types of actions begin:
§ Updating machines in a Machine Catalog using a new master image
§ Restarting machines in a Delivery Group according to a configured schedule
If you indicate that the first message should be sent to each affected machine 15 minutes before the update or restart begins, you can also specify that the message be repeated every five minutes until the update/restart begins.
For more information, see the Manage Machine Catalogs and Manage machines in Delivery Groups articles.
By default, sessions roam between client devices with the user. When the user launches a session and then moves to another device, the same session is used and applications are available on both devices. The applications follow, regardless of the device or whether current sessions exist. Similarly, printers and other resources assigned to the application follow.
You can now use the PowerShell SDK to tailor session roaming. This was an experimental feature in the previous release.
For more information, see the Sessions article.
When using the PowerShell SDK to create or update a Machine Catalog, you can now select a template from other hypervisor connections. This is in addition to the currently-available choices of VM images and snapshots.
See the System requirements article for full support information. Information about support for third-party product versions is updated periodically.
By default, SQL Server 2012 Express SP2 is installed when you install the Delivery Controller. SP1 is no longer installed.
The component installers now automatically deploy newer Microsoft Visual C++ runtime versions: 32-bit and 64-bit Microsoft Visual C++ 2013, 2010 SP1, and 2008 SP1. Visual C++ 2005 is no longer deployed.
You can install Studio or VDAs for Windows Desktop OS on machines running Windows 10.
You can create connections to Microsoft Azure virtualization resources.
Figure 10 Logical Architecture of Citrix XenDesktop
Most enterprises struggle to keep up with the proliferation and management of computers in their environments. Each computer, whether it is a desktop PC, a server in a data center, or a kiosk-type device, must be managed as an individual entity. The benefits of distributed processing come at the cost of distributed management. It costs time and money to set up, update, support, and ultimately decommission each computer. The initial cost of the machine is often dwarfed by operating costs.
Citrix PVS takes a very different approach from traditional imaging solutions by fundamentally changing the relationship between hardware and the software that runs on it. By streaming a single shared disk image (vDisk) rather than copying images to individual machines, PVS enables organizations to reduce the number of disk images that they manage, even as the number of machines continues to grow, simultaneously providing the efficiency of centralized management and the benefits of distributed processing.
In addition, because machines are streaming disk data dynamically and in real time from a single shared image, machine image consistency is essentially ensured. At the same time, the configuration, applications, and even the OS of large pools of machines can be completely changed in the time it takes the machines to reboot.
Using PVS, any vDisk can be configured in standard-image mode. A vDisk in standard-image mode allows many computers to boot from it simultaneously, greatly reducing the number of images that must be maintained and the amount of storage that is required. The vDisk is in read-only format, and the image cannot be changed by target devices.
If you manage a pool of servers that work as a farm, such as Citrix XenApp servers or web servers, maintaining a uniform patch level on your servers can be difficult and time consuming. With traditional imaging solutions, you start with a clean golden master image, but as soon as a server is built with the master image, you must patch that individual server along with all the other individual servers. Rolling out patches to individual servers in your farm is not only inefficient, but the results can also be unreliable. Patches often fail on an individual server, and you may not realize you have a problem until users start complaining or the server has an outage. After that happens, getting the server resynchronized with the rest of the farm can be challenging, and sometimes a full reimaging of the machine is required.
With Citrix PVS, patch management for server farms is simple and reliable. You start by managing your golden image, and you continue to manage that single golden image. All patching is performed in one place and then streamed to your servers when they boot. Server build consistency is assured because all your servers use a single shared copy of the disk image. If a server becomes corrupted, simply reboot it, and it is instantly back to the known good state of your master image. Upgrades are extremely fast to implement. After you have your updated image ready for production, you simply assign the new image version to the servers and reboot them. You can deploy the new image to any number of servers in the time it takes them to reboot. Just as important, rollback can be performed in the same way, so problems with new images do not need to take your servers or your users out of commission for an extended period of time.
Because Citrix PVS is part of Citrix XenDesktop, desktop administrators can use PVS’s streaming technology to simplify, consolidate, and reduce the costs of both physical and virtual desktop delivery. Many organizations are beginning to explore desktop virtualization. Although virtualization addresses many of IT’s needs for consolidation and simplified management, deploying it also requires deployment of supporting infrastructure. Without PVS, storage costs can make desktop virtualization too costly for the IT budget. However, with PVS, IT can reduce the amount of storage required for VDI by as much as 90 percent. And with a single image to manage instead of hundreds or thousands of desktops, PVS significantly reduces the cost, effort, and complexity for desktop administration.
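The storage effect of a single shared vDisk can be illustrated with a rough back-of-the-envelope calculation. The following Python sketch compares fully cloned desktops with a PVS standard-mode vDisk plus per-desktop write cache; the image and cache sizes are assumptions chosen for illustration, not measured values from this CVD.

# Rough storage comparison: full clones vs. one shared PVS vDisk (illustrative only)
DESKTOPS = 5000          # scale target of this design
BASE_IMAGE_GB = 40       # assumed size of the golden Windows image
WRITE_CACHE_GB = 6       # assumed per-desktop write cache allocation

full_clones_gb = DESKTOPS * BASE_IMAGE_GB
pvs_gb = BASE_IMAGE_GB + DESKTOPS * WRITE_CACHE_GB
savings = 1 - pvs_gb / full_clones_gb

print(f"Full clones: {full_clones_gb / 1024:.1f} TB")
print(f"PVS vDisk  : {pvs_gb / 1024:.1f} TB")
print(f"Reduction  : {savings:.0%}")

With these assumed values the reduction works out to roughly 85 percent, which is consistent with the order of magnitude described above.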
Different types of workers across the enterprise need different types of desktops. Some require simplicity and standardization, and others require high performance and personalization. XenDesktop can meet these requirements in a single solution using Citrix FlexCast delivery technology. With FlexCast, IT can deliver every type of virtual desktop, each specifically tailored to meet the performance, security, and flexibility requirements of each individual user.
Not all desktop applications can be supported by virtual desktops. For these scenarios, IT can still reap the benefits of consolidation and single-image management. Desktop images are stored and managed centrally in the data center and streamed to physical desktops on demand. This model works particularly well for standardized desktops, such as those in lab and training environments, call centers, and thin-client devices used to access virtual desktops.
Citrix PVS streaming technology allows computers to be provisioned and re-provisioned in real time from a single shared disk image. With this approach, administrators can completely eliminate the need to manage and patch individual systems. Instead, all image management is performed on the master image. The local hard drive of each system can be used for runtime data caching or, in some scenarios, removed from the system entirely, which reduces power use, system failure rate, and security risk.
The PVS solution’s infrastructure is based on software-streaming technology. After PVS components are installed and configured, a vDisk is created from a device’s hard drive by taking a snapshot of the OS and application image and then storing that image as a vDisk file on the network. A device used for this process is referred to as a master target device. The devices that use the vDisks are called target devices. vDisks can exist on a Provisioning Server, on a file share, or, in larger deployments, on a storage system with which PVS can communicate (iSCSI, SAN, network-attached storage [NAS], or Common Internet File System [CIFS]). vDisks can be assigned to a single target device in private-image mode or to multiple target devices in standard-image mode.
The Citrix PVS infrastructure design directly relates to administrative roles within a PVS farm. The PVS administrator role determines which components that administrator can manage or view in the console.
A PVS farm contains several components. Figure 11 provides a high-level view of a basic PVS infrastructure and shows how PVS components might appear within that implementation.
Figure 11 Logical Architecture of Citrix Provisioning Services
The following new features are available with Provisioning Services 7.7:
§ Streaming VHDX formatted disks
§ Support for Microsoft Windows 10 Enterprise and Professional editions
§ Support for Unified Extensible Firmware Interface (UEFI) enhancements
§ The licensing grace period for Provisioning Services has changed from 96 hours to 30 days, for consistency with XenApp and XenDesktop
§ Enhancements to API
§ vGPU-enabled XenDesktop machines can be provisioned using the Provisioning Services XenDesktop Setup Wizard
§ Support for System Center Virtual Machine Manager Generation 2 VMs
§ FIPS support
§ XenApp Session Recording Enabled by Default
The Challenge
§ Enabling Data-Driven Business
— As technology has expanded to cover key business operations and back-office functions, IT leaders have had to rethink the way they architect storage. Traditional requirements such as storage uptime, scalability, and cost efficiency are still critical, but so are factors such as cloud integration, unified support for SAN and NAS, and simplified data mining for competitive advantage.
— Many enterprises struggle and are held back by structural limitations in legacy storage and data architectures. Traditional storage arrays might deliver on basic needs, but they are nonetheless incapable of meeting advanced service requirements and adapting to new IT models such as the cloud.
The Solution
§ Accelerate Business Operations with Unified Scale-Out Storage
— The demands of a data-driven business require a fundamentally new approach to storage, including an integrated combination of high-performance hardware and adaptive, scalable storage software. This new approach must support existing workloads as well as adapt and scale quickly to address new applications and evolving IT models.
— NetApp FAS8000 enterprise storage systems are specifically engineered to address these needs. Powered by Data ONTAP and optimized for scale-out, the FAS8000 series unifies your SAN and NAS storage infrastructure. With proven data management capabilities, the FAS8000 has the flexibility to keep up with changing business needs while delivering on core IT requirements.
— The FAS8000 features a multiprocessor Intel chip set and leverages high-performance memory modules, NVRAM to accelerate and optimize writes, and an I/O-tuned PCIe gen3 architecture that maximizes application throughput. Building on a decade of multicore optimization, Data ONTAP drives the latest cores and increased core counts to keep up with continuous growth in storage demands. The result is a flexible, efficient I/O design capable of supporting large numbers of high-speed network connections and massive capacity scaling.
— FAS8000 scale-out storage systems offer exceptional flexibility and expandability. Integrated unified target adapter (UTA2) ports support 16Gb Fibre Channel, 10GbE, and FCoE, so your storage is ready on day one for whatever changes the future requires.
The FAS8000 provides the following key benefits:
§ Support more workloads. Run SAN and NAS workloads simultaneously with the industry’s only unified scale-out storage.
§ Consolidate infrastructure. Expand scaling up to 103PB and include existing storage with NetApp FlexArray™ storage virtualization software.
§ Accelerate I/O-intensive apps. Reduce latency and speed operations with up to 1.7PB of hybrid flash.
§ Maximize uptime. Experience >99.999% availability and nondisruptive operations that eliminate planned downtime.
§ Realize superior value. Deliver up to two times the price/performance of the previous generation.
§ Optimize for the hybrid cloud. Easily implement a service-oriented IT architecture that spans on-premises and off-site resources.
Figure 12 NetApp AFF8000 Controllers
For performance-intensive environments where maximum scale and I/O throughput are necessary to drive business-critical applications, NetApp offers the FAS8080 EX. For more information, see the AFF8080 EX datasheet.
§ Get More From Existing Storage Investments
Simplify your IT operations and deliver more value from existing storage with the only unified storage virtualization solution. FlexArray virtualization software running on a FAS8000 extends the capabilities of Data ONTAP to include storage capacity from EMC, Hitachi, HP, and NetApp E-Series arrays. Consolidate management of your existing storage to increase efficiency and provide superior functionality. This software creates a single storage management architecture that supports both SAN and NAS while simplifying management and cloud integration.
§ Scale and Adapt to Meet Changing Needs
Your business is constantly changing, and your storage infrastructure should adapt and scale right along with it. With FAS8000 unified scale-out storage, you can optimize and accelerate your storage environment as needed. All FAS8000 storage is designed to scale as performance and capacity requirements change. You can scale up by adding capacity, adding flash acceleration, and upgrading controllers, and you can also scale out by adding nodes to the cluster. A single cluster can accommodate up to 24 nodes and 103PB of capacity with ease. You can nondisruptively add or replace storage systems and components and mix and match different FAS models. Therefore, scaling occurs without maintenance windows or the challenge of coordinating downtime across teams.
§ Unlock the Full Power of Flash
Flash-accelerated FAS8000 storage systems deliver twice the performance of our previous generation of storage, boosting throughput, lowering latency, and meeting stringent service levels with predictable high performance. Data ONTAP on the FAS8000 simplifies flash management, resulting in more powerful hybrid storage.
In hybrid FAS8000 configurations, flash functions as a self-managing virtual storage tier with up to 144TB of flash per HA pair and 1.7PB per cluster. Hot data is automatically promoted to flash in real time, so you get the full benefit of flash performance. The NetApp AFF family of flash arrays is optimized for applications that require high performance, low latency, and rich data management. For more information, see the All Flash FAS datasheet.
§ Enable Innovation and Empower Users
In a data-driven business, performance and capacity alone are not enough. You must leverage data for competitive advantage and assign resources dynamically for more effective operations. The NetApp OnCommand® storage management software portfolio is composed of a range of products for use with the NetApp FAS8000, including device-level management, automation, integration, and enterprise storage resource management.
NetApp OnCommand software provides flexibility, scalability, simplified provisioning, and data protection to meet business needs today and changing needs in the future.
§ Achieve Unparalleled Availability and Nondisruptive Operations
FAS8000 enterprise storage is engineered to meet the most demanding availability requirements. All models are designed to deliver 99.999% or greater availability through a comprehensive approach that combines highly reliable hardware, innovative software, and sophisticated service analytics. Software and firmware updates, hardware repair and replacement, load balancing, and tech refresh happen without planned downtime.
NetApp Integrated Data Protection technologies protect your data, accelerate recovery, and integrate with leading backup applications for easier management. Advanced service analytics software prevents issues from becoming outages. Risk signatures are constantly monitored, and your administrators and/or NetApp service staff are alerted to proactively address issues that might affect operations.
NetApp MetroCluster™ high-availability and disaster recovery software expands data protection to eliminate risk of data loss by synchronously mirroring data between locations for continuous availability of information. A MetroCluster storage array can exist in a single data center or in two different data centers that are located across a campus, across a metropolitan area, or in different cities altogether. MetroCluster provides data protection combined with continuous data availability. This means that no matter what happens, your data can be protected from loss and is continuously available to meet the most business-critical needs.
§ Build the Right Long-Term Platform
When it comes to long-term storage infrastructure investments, total cost of ownership and the ability to accommodate new IT initiatives are critical. FAS8000 enterprise storage systems unlock the power of your data and your people. In addition to a significant price and performance benefit—up to two times that of the previous generation—the FAS8000 platform delivers industry-leading storage efficiency technologies such as deduplication, compression, thin provisioning, and space-efficient Snapshot® copies. This reduces your cost per effective gigabyte of storage.
§ Optimize Hybrid Cloud Deployment
Organizations today are focusing on service-oriented IT architectures in which cloud IT models enhance return on investment and assets. A FAS8000 running Data ONTAP is optimized for private and hybrid cloud computing environments with secure multi-tenancy, quality of service (QoS), nondisruptive operations, and easily defined tiers of service. A FAS8000 tightly integrated with the industry standard OpenStack cloud infrastructure enables you to build a private cloud that delivers a leading service-oriented architecture and meets the significant demands of enterprise applications.
For organizations that require an enterprise-class hybrid cloud with predictable performance and availability, the FAS8000 can be used in a NetApp Private Storage (NPS) for Cloud solution. With NPS for Cloud you can directly connect to multiple clouds by using a private, high-bandwidth, low-latency connection. You can connect to industry-leading clouds such as Amazon Web Services (AWS), Microsoft Azure, or SoftLayer and switch between them at any time. NPS for Cloud delivers complete control of your data on your dedicated, private FAS8000.
With NetApp technologies, you get the elasticity of the public cloud and critical protection for your data that you understand and trust. For maximum flexibility, NetApp Cloud ONTAP® provides superior data portability at the best ROI. Cloud ONTAP is a software-defined storage version of Data ONTAP that runs in AWS and provides the storage efficiency, availability, and scalability of Data ONTAP. This storage solution enables quick and easy movement of data between your on-premises FAS8000 and AWS environments by using NetApp SnapMirror® data replication software.
Support for business-critical requirements in financial modeling, engineering analysis, and data warehousing that were thought impossible only five years ago is now achievable with the NetApp® FAS8080 EX. Designed to deliver superior levels of performance, availability, and scalability, the FAS8080 EX transforms storage into a strategic operational resource for generating revenue more quickly.
Powered by NetApp Data ONTAP®, the FAS8080 EX is optimized at every level for enterprise applications. Forty processor cores, 256GB of high-speed DRAM, capacity for 1,440 drives, and 144TB of hybrid flash acceleration are balanced together in a high-availability (HA) pair to reliably process the massive amounts of data in a modern enterprise. With 16 onboard I/O ports as well as 24 PCIe 3.0 expansion slots, serving data to applications has never been easier. Combine this with world-class data management from Data ONTAP, and you have a system that can reduce the time it takes to complete critical operations, increase your organization’s competitive advantage, and keep your enterprise applications running at top speed 24/7/365.
The AFF8080 EX is available for workloads requiring the high performance and low latency of an all flash array with the enterprise reliability and extensive data management capabilities of Data ONTAP. See the All Flash FAS datasheet for details.
Our leading Intel multiprocessor architecture with its high bandwidth DDR3 memory system maximizes throughput for business-critical workloads such as SAP, SQL Server, and Oracle databases as well as computational modeling and logistics management applications. Building on a decade of multicore optimizations, Data ONTAP takes advantage of the FAS8080 EX’s 40 processor cores to keep pace with growth in storage I/O demands for your most intensive SAN and NAS applications. Integrated unified target adapter (UTA2) ports support 16Gb FC and 10GbE (FCoE, iSCSI, SMB, NFS) so you’re ready for whatever the future holds. Twenty-four I/O-tuned PCIe gen3 slots support Flash Cache™ cards or up to 48 extra 10GbE or 16Gb FC ports for demanding data-processing installations.
For OLTP databases and other business-critical workloads that require a storage solution with rich data management features, the FAS8080 EX delivers increased performance and lower latency. The response times of OLTP and other business-critical workloads can be significantly improved with a FAS8080 EX hybrid storage array. Proven examples demonstrate that shifting an OLTP database workload from an all-SAS to a hybrid array with flash and SATA can lower cost per TB by more than 40 percent, lower cost per IOPS by almost 20 percent, and reduce power consumption by over 25 percent. Hybrid FAS8080 EX systems boost performance and lower latency for your IT operations, so they are more predictable to meet stringent service-level objectives. Flash functions as a self-managing virtual storage tier with up to 144TB of hybrid flash per HA pair and 1.7PB per cluster. The FAS8080 EX offers flexibility to configure a system for optimal productivity and predictability to keep operations running smoothly.
Figure 13 Multi-Tenant Virtual Infrastructure
Storage scale-out to more than 17,000 drives and 4M IOPS of performance combined with the industry-leading management capabilities of Data ONTAP enables the FAS8080 EX to process petabytes of data for complex resource exploration, faster semiconductor chip design, or optimally managing logistics for billion-dollar-plus product operations. 103PB of data in a 24-node cluster can be managed from a single console to reduce the cost of operations and simplify warehousing years of business-critical data.
As with other business-critical arrays, the FAS8080 EX is designed to deliver 99.999 percent or greater availability through a comprehensive approach that combines highly reliable hardware, innovative software, and sophisticated service analytics. Advanced hardware features, including alternate control path (ACP), persistent NVRAM write logs, and an integrated service processor, provide the utmost reliability to protect your investment. All I/O devices, including embedded ports, can be independently reset, allowing the FAS8080 EX to detect, contain, and recover from faults.
NetApp builds upon the resilient hardware platform with advanced software capabilities that further improve uptime. Meet the most demanding SLOs with Data ONTAP nondisruptive operations (NDO), quality of service (QoS), and integrated data protection capabilities. NDO enables software and firmware updates, hardware repair and replacement, load balancing, and tech refresh to happen without planned downtime. QoS makes sure that applications have access to the resources they need. NetApp Integrated Data Protection technologies protect your data, accelerate recovery, and integrate with leading backup applications for easier management.
At NetApp, we use advanced service analytics to identify patterns in billions of rows of log data gathered from thousands of deployed NetApp systems. Risk signatures are constantly monitored, and your administrators and/or NetApp service staff are alerted to proactively address issues that might affect operations. NetApp MetroCluster™ expands data protection to eliminate risk of data loss by synchronously mirroring data between locations for continuous availability of information. A MetroCluster storage array can exist in a single data center or in two different data centers that are located across a campus, across a metropolitan area, or in different cities altogether. MetroCluster provides data protection combined with continuous data availability. This means that no matter what happens, your data can be protected from loss and is continuously available to meet the most business-critical needs.
Extend the capabilities of the FAS8080 EX to include EMC, Hitachi, HP, and NetApp E-Series arrays with FlexArray® virtualization software, the only unified storage virtualization solution. Consolidate management of your existing storage to simplify operations while increasing efficiency and providing superior functionality. With FlexArray, you create a single storage management architecture that supports both SAN and NAS while simplifying management and cloud integration.
Organizations today are focusing on service-oriented IT architectures where cloud IT models are leveraged to enhance return on investment and assets. The FAS8080 EX is optimized for private and hybrid cloud with secure multi-tenancy, QoS, nondisruptive operations, and easily defined tiers of service. A FAS8080 EX tightly integrated with the industry standard OpenStack cloud infrastructure enables an organization to build a private cloud that delivers a leading service-oriented IT architecture and meets the demanding needs of enterprise applications. For organizations that need an enterprise-class hybrid cloud with predictable performance and availability, the FAS8080 EX can be used in a NetApp Private Storage (NPS) for Cloud solution.
With NPS for Cloud you can directly connect to multiple clouds using a private, high-bandwidth, low-latency connection. Connect to industry-leading clouds such as Amazon Web Services (AWS), Microsoft Azure, or SoftLayer and switch between them at any time, all while maintaining complete control of your data on your dedicated, private FAS8080 EX. You get the elasticity of the public cloud and protect your data with NetApp technologies that you understand and trust.
For maximum flexibility, Cloud ONTAP™ provides superior data portability at the best ROI. Cloud ONTAP is a software-defined storage version of Data ONTAP that runs in AWS and provides the storage efficiency, availability, and scalability of Data ONTAP. It allows quick and easy movement of data between your on-premises FAS8080 EX and AWS environments with NetApp SnapMirror® data replication software.
Whether you are planning your next-generation environment, need specialized know-how for a major deployment, or want to get the most from your current storage, NetApp and our certified partners can help. We collaborate with you to enhance your IT capabilities through a full portfolio of services that covers your IT lifecycle with:
§ Strategy services to align IT with your business goals
§ Design services to architect your best storage environment
§ Deploy and transition services to implement validated architectures and prepare your storage environment
§ Operations services to deliver continuous operations while driving operational excellence and efficiency
In addition, NetApp provides in-depth knowledge transfer and education services that give you access to our global technical resources and intellectual property.
VMware provides virtualization software. VMware’s enterprise software hypervisors for servers—VMware vSphere ESX, vSphere ESXi, and vSphere—are bare-metal hypervisors that run directly on server hardware without requiring an additional underlying operating system. VMware vCenter Server for vSphere provides central management and complete control and visibility into clusters, hosts, virtual machines, storage, networking, and other critical elements of your virtual infrastructure.
VMware vSphere 6.0 introduces many enhancements to vSphere Hypervisor, VMware virtual machines, vCenter Server, virtual storage, and virtual networking, further extending the core capabilities of the vSphere platform.
vSphere 6.0 introduces a number of new features in the hypervisor:
§ Scalability Improvements
ESXi 6.0 dramatically increases the scalability of the platform. With vSphere Hypervisor 6.0, clusters can scale to as many as 64 hosts, up from 32 in previous releases. With 64 hosts in a cluster, vSphere 6.0 can support 8000 virtual machines in a single cluster. This capability enables greater consolidation ratios, more efficient use of VMware vSphere Distributed Resource Scheduler (DRS), and fewer clusters that must be separately managed. Each vSphere Hypervisor 6.0 instance can support up to 480 logical CPUs, 12 terabytes (TB) of RAM, and 1024 virtual machines. By using the newest hardware advances, ESXi 6.0 enables the virtualization of applications that previously had been thought to be nonvirtualizable.
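As a quick sanity check against these maximums, the short Python sketch below estimates how many hosts and clusters a given desktop count requires. The per-host density is an assumption to be replaced with results from your own workload testing; the host and cluster limits are the vSphere 6.0 figures cited above.

import math

# vSphere 6.0 maximums cited above
CLUSTER_MAX_HOSTS = 64
CLUSTER_MAX_VMS = 8000
HOST_MAX_VMS = 1024

DESKTOPS = 5000            # design target
DESKTOPS_PER_HOST = 160    # assumed density; validate with workload testing

assert DESKTOPS_PER_HOST <= HOST_MAX_VMS, "per-host density exceeds the ESXi maximum"

hosts_needed = math.ceil(DESKTOPS / DESKTOPS_PER_HOST)
clusters_needed = max(math.ceil(hosts_needed / CLUSTER_MAX_HOSTS),
                      math.ceil(DESKTOPS / CLUSTER_MAX_VMS))

print(f"{hosts_needed} hosts in {clusters_needed} cluster(s) for {DESKTOPS} desktops")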
§ Security Enhancements
— ESXi 6.0 offers these security enhancements:
o Account management: ESXi 6.0 enables management of local accounts on the ESXi server using new ESXi CLI commands. The capability to add, list, remove, and modify accounts across all hosts in a cluster can be centrally managed using a vCenter Server system. Previously, the account and permission management functions for ESXi hosts were available only for direct host connections. The setup, removal, and listing of local permissions on ESXi servers can also be centrally managed.
o Account lockout: ESXi Host Advanced System Settings have two new options for the management of failed local account login attempts and account lockout duration. These parameters affect Secure Shell (SSH) and vSphere Web Services connections, but not ESXi direct console user interface (DCUI) or console shell access.
o Password complexity rules: In previous versions of ESXi, password complexity changes had to be made by manually editing the /etc/pam.d/passwd file on each ESXi host. In vSphere 6.0, an entry in Host Advanced System Settings enables changes to be centrally managed for all hosts in a cluster (a scripted example appears after this list).
o Improved auditability of ESXi administrator actions: Prior to vSphere 6.0, actions at the vCenter Server level by a named user appeared in ESXi logs with the vpxuser username: for example, [user=vpxuser]. In vSphere 6.0, all actions at the vCenter Server level for an ESXi server appear in the ESXi logs with the vCenter Server username: for example, [user=vpxuser: DOMAIN\User]. This approach provides a better audit trail for actions run on a vCenter Server instance that conducted corresponding tasks on the ESXi hosts.
o Flexible lockdown modes: Prior to vSphere 6.0, only one lockdown mode was available. Feedback from customers indicated that this lockdown mode was inflexible in some use cases. With vSphere 6.0, two lockdown modes are available:
§ In normal lockdown mode, DCUI access is not stopped, and users on the DCUI access list can access the DCUI.
§ In strict lockdown mode, the DCUI is stopped.
o Exception users: vSphere 6.0 offers a new function called exception users. Exception users are local accounts or Microsoft Active Directory accounts with permissions defined locally on the host to which these users have host access. These exception users are not recommended for general user accounts, but they are recommended for use by third-party applications—for service accounts, for example—that need host access when either normal or strict lockdown mode is enabled. Permissions on these accounts should be set to the bare minimum required for the application to perform its task; an account that needs only read-only permissions on the ESXi host should be granted no more than that.
o Smart card authentication to DCUI: This function is for U.S. federal customers only. It enables DCUI login access using a Common Access Card (CAC) and Personal Identity Verification (PIV). The ESXi host must be part of an Active Directory domain.
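Because the password complexity rules are now an ordinary Host Advanced System Setting, they can be pushed to every host in a cluster from a script. The following Python sketch uses the pyVmomi SDK to set the Security.PasswordQualityControl setting on each host managed by a vCenter Server. It is an illustration only and is not part of this CVD; the vCenter address, credentials, and the example policy string are placeholders that should be validated in a lab before use.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only TLS handling; use verified certificates in production
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",           # placeholder
                  user="administrator@vsphere.local",     # placeholder
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    for host in hosts:
        # Host Advanced System Setting that controls password complexity rules
        opt = vim.option.OptionValue(
            key="Security.PasswordQualityControl",
            value="retry=3 min=disabled,disabled,disabled,7,7")   # example policy only
        host.configManager.advancedOption.UpdateOptions(changedValue=[opt])
        print(f"Updated password policy on {host.name}")
finally:
    Disconnect(si)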
Today’s IT departments are facing a rapidly evolving workplace environment. The workforce is becoming increasingly diverse and geographically dispersed, including offshore contractors, distributed call center operations, knowledge and task workers, partners, consultants, and executives connecting from locations around the world at all times.
This workforce is also increasingly mobile, conducting business in traditional offices, conference rooms across the enterprise campus, home offices, on the road, in hotels, and at the local coffee shop. This workforce wants to use a growing array of client computing and mobile devices that they can choose based on personal preference. These trends are increasing pressure on IT to ensure protection of corporate data and prevent data leakage or loss through any combination of user, endpoint device, and desktop access scenarios (Figure 14).
These challenges are compounded by desktop refresh cycles to accommodate aging PCs and bounded local storage, and by migration to new operating systems, specifically Microsoft Windows 10.
Figure 14 Cisco Desktop Virtualization Solutions
Some of the key drivers for desktop virtualization are increased data security and reduced TCO through increased control and reduced management costs.
Cisco focuses on three key elements to deliver the best desktop virtualization data center infrastructure: simplification, security, and scalability. The software combined with platform modularity provides a simplified, secure, and scalable desktop virtualization platform.
Cisco UCS provides a radical new approach to industry-standard computing and provides the core of the data center infrastructure for desktop virtualization. Among the many features and benefits of Cisco UCS are the drastic reduction in the number of servers needed and in the number of cables used per server, and the capability to rapidly deploy or reprovision servers through Cisco UCS service profiles. With fewer servers and cables to manage and with streamlined server and virtual desktop provisioning, operations are significantly simplified. Thousands of desktops can be provisioned in minutes with Cisco UCS Manager service profiles and Cisco storage partners’ storage-based cloning. This approach accelerates the time to productivity for end users, improves business agility, and allows IT resources to be allocated to other tasks.
Cisco UCS Manager automates many mundane, error-prone data center operations such as configuration and provisioning of server, network, and storage access infrastructure. In addition, Cisco UCS B-Series Blade Servers and C-Series Rack Servers with large memory footprints enable high desktop density that helps reduce server infrastructure requirements.
Simplification also leads to more successful desktop virtualization implementation. Cisco and its technology partners, such as Citrix (XenDesktop) and NetApp (NetApp FAS), have developed integrated, validated architectures, including predefined converged architecture infrastructure packages such as FlexPod. Cisco Desktop Virtualization Solutions have been tested with all the leading hypervisors, including VMware vSphere, Citrix XenServer, and Microsoft Hyper-V.
Although virtual desktops are inherently more secure than their physical predecessors, they introduce new security challenges. Mission-critical web and application servers using a common infrastructure such as virtual desktops are now at a higher risk for security threats. Inter–virtual machine traffic now poses an important security consideration that IT managers need to address, especially in dynamic environments in which virtual machines, using VMware vMotion, move across the server infrastructure.
Desktop virtualization, therefore, significantly increases the need for virtual machine–level awareness of policy and security, especially given the dynamic and fluid nature of virtual machine mobility across an extended computing infrastructure. The ease with which new virtual desktops can proliferate magnifies the importance of a virtualization-aware network and security infrastructure. Cisco data center infrastructure (Cisco UCS and Cisco Nexus Family solutions) for desktop virtualization provides strong data center, network, and desktop security, with comprehensive security from the desktop to the hypervisor. Security is enhanced with segmentation of virtual desktops, virtual machine–aware policies and administration, and network security across the LAN and WAN infrastructure.
Growth of a desktop virtualization solution is all but inevitable, so a solution must be able to scale, and scale predictably, with that growth. The Cisco Desktop Virtualization Solutions support high virtual-desktop density (desktops per server), and additional servers scale with near-linear performance. Cisco data center infrastructure provides a flexible platform for growth and improves business agility. Cisco UCS Manager service profiles allow on-demand desktop provisioning and make it just as easy to deploy dozens of desktops as it is to deploy thousands of desktops.
Cisco UCS servers provide near-linear performance and scale. Cisco UCS implements the patented Cisco Extended Memory Technology to offer large memory footprints with fewer sockets (with scalability to up to 1 terabyte (TB) of memory with 2- and 4-socket servers). Using unified fabric technology as a building block, Cisco UCS server aggregate bandwidth can scale to up to 80 Gbps per server, and the northbound Cisco UCS fabric interconnect can output 2 terabits per second (Tbps) at line rate, helping prevent desktop virtualization I/O and memory bottlenecks. Cisco UCS, with its high-performance, low-latency unified fabric-based networking architecture, supports high volumes of virtual desktop traffic, including high-resolution video and communications traffic. In addition, Cisco storage partner NetApp helps maintain data availability and optimal performance during boot and login storms as part of the Cisco Desktop Virtualization Solutions. Recent Cisco Validated Designs based on Citrix, Cisco UCS, and NetApp joint solutions have demonstrated scalability and performance, with up to 5000 desktops up and running in 30 minutes.
Cisco UCS and Cisco Nexus data center infrastructure provides an excellent platform for growth, with transparent scaling of server, network, and storage resources to support desktop virtualization, data center applications, and cloud computing.
The simplified, secure, scalable Cisco data center infrastructure for desktop virtualization solutions saves time and money compared to alternative approaches. Cisco UCS enables faster payback and ongoing savings (better ROI and lower TCO) and provides the industry’s greatest virtual desktop density per server, reducing both capital expenditures (CapEx) and operating expenses (OpEx). The Cisco UCS architecture and Cisco Unified Fabric also enable much lower network infrastructure costs, with fewer cables per server and fewer ports required. In addition, storage tiering and deduplication technologies decrease storage costs, reducing desktop storage needs by up to 50 percent.
The simplified deployment of Cisco UCS for desktop virtualization accelerates the time to productivity and enhances business agility. IT staff and end users are more productive more quickly, and the business can respond to new opportunities quickly by deploying virtual desktops whenever and wherever they are needed. The high-performance Cisco systems and network deliver a near-native end-user experience, allowing users to be productive anytime and anywhere.
The ultimate measure of desktop virtualization for any organization is its efficiency and effectiveness in both the near term and the long term. The Cisco Desktop Virtualization Solutions are very efficient, allowing rapid deployment, requiring fewer devices and cables, and reducing costs. The solutions are also very effective, providing the services that end users need on their devices of choice while improving IT operations, control, and data security. Success is bolstered through Cisco’s best-in-class partnerships with leaders in virtualization and storage, and through tested and validated designs and services to help customers throughout the solution lifecycle. Long-term success is enabled through the use of Cisco’s scalable, flexible, and secure architecture as the platform for desktop virtualization.
§ Healthcare: Mobility between desktops and terminals, compliance, and cost
§ Federal government: Teleworking initiatives, business continuance, continuity of operations (COOP), and training centers
§ Financial: Retail banks reducing IT costs, insurance agents, compliance, and privacy
§ Education: K-12 student access, higher education, and remote learning
§ State and local governments: IT and service consolidation across agencies and interagency security
§ Retail: Branch-office IT cost reduction and remote vendors
§ Manufacturing: Task and knowledge workers and offshore contractors
§ Microsoft Windows 7 migration
§ Security and compliance initiatives
§ Opening of remote and branch offices or offshore facilities
§ Mergers and acquisitions
This section describes the Cisco networking infrastructure components used in the configuration.
The Cisco Nexus 9372TX Switch has 48 1/10GBase-T ports that can operate at 100 Mbps, 1 Gbps, and 10 Gbps speeds and six Quad Small Form Pluggable Plus (QSFP+) uplink ports. All ports are line rate, delivering 1.44 Tbps of throughput in a 1-rack-unit (1RU) form factor. Nexus 9372TX benefits are listed below:
Architectural Flexibility
§ Provides top-of-rack or middle-of-row copper-based server access connectivity
§ Can be deployed as a leaf node in a spine-leaf architecture for low-latency traffic flow and reduced convergence time in the event of a failure
Feature-Rich
§ Includes an enhanced version of Cisco NX-OS Software designed for performance, resiliency, scalability, manageability, and programmability
§ ACI-ready hardware infrastructure allows users to take full advantage of an automated, policy-based, systems management approach
§ Supports virtual extensible LAN (VXLAN) routing
§ An additional 25 MB buffer is included, surpassing competing switch offerings
Highly Available and Efficient Design
§ High-density, non-blocking architecture
§ Easily deployed in either a hot-aisle or a cold-aisle configuration
§ Redundant, hot-swappable power supplies and fan trays
Simplified Operations
§ Power-On Auto Provisioning (POAP) support allows for simplified software upgrades and configuration file installation
§ An intelligent API offers switch management through remote procedure calls (RPCs, JSON, or XML) over an HTTP/HTTPS infrastructure (a minimal example follows this list)
§ Python scripting for programmatic access to the switch command-line interface (CLI)
§ Hot and cold patching, and online diagnostics
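The NX-API referenced in the list above accepts CLI commands as JSON-RPC calls over HTTP/HTTPS. The following Python sketch is illustrative only: the switch address and credentials are placeholders, and NX-API must first be enabled on the switch (feature nxapi) before this will work.

import requests

NXAPI_URL = "https://n9k-a.example.local/ins"   # placeholder management address
AUTH = ("admin", "changeme")                    # placeholder credentials

# One JSON-RPC call that runs a single show command on the switch
payload = [{
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {"cmd": "show version", "version": 1},
    "id": 1,
}]

resp = requests.post(NXAPI_URL, json=payload, auth=AUTH,
                     headers={"content-type": "application/json-rpc"},
                     verify=False)              # lab only; verify certificates in production
resp.raise_for_status()
print(resp.json())                              # structured "show version" output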
Investment Protection
§ A Cisco 40 Gb bidirectional transceiver allows for reuse of an existing 10 Gigabit Ethernet multimode cabling plant for 40 Gigabit Ethernet
§ Support for 100 Mb, 1 Gb, and 10 Gb access connectivity for data centers migrating access switching infrastructure to faster speeds
Specifications At-a-Glance
§ 1.44 Tbps of bandwidth in a 1 RU form factor
§ 48 fixed 1/10-Gbps BASE-T ports that can operate at 100 Mbps, 1 Gbps, or 10 Gbps speeds
§ 6 fixed 40-Gbps QSFP+ ports for uplink connectivity that can be converted to 10-Gbps ports through a QSFP-to-SFP+ (QSA) adapter
§ Latency of 1 to 2 microseconds
§ Front-to-back or back-to-front airflow configurations
§ 1+1 redundant hot-swappable 80 Plus Platinum-certified power supplies
§ Hot-swappable 2+1 redundant fan trays
Figure 15 Cisco Nexus 9372TX Switch
There are many reasons to consider a virtual desktop solution, such as an ever-growing and diverse base of user devices, complexity in management of traditional desktops, security, and even Bring Your Own Computer (BYOC) to work programs. The first step in designing a virtual desktop solution is to understand the user community and the type of tasks that are required to successfully execute their roles. The following user classifications are provided:
§ Knowledge Workers today do not just work in their offices all day – they attend meetings, visit branch offices, work from home, and even coffee shops. These anywhere workers expect access to all of their same applications and data wherever they are.
§ External Contractors are increasingly part of your everyday business. They need access to certain portions of your applications and data, yet administrators still have little control over the devices they use and the locations they work from. Consequently, IT is stuck making trade-offs on the cost of providing these workers a device vs. the security risk of allowing them access from their own devices.
§ Task Workers perform a set of well-defined tasks. These workers access a small set of applications and have limited requirements from their PCs. However, since these workers are interacting with your customers, partners, and employees, they have access to your most critical data.
§ Mobile Workers need access to their virtual desktop from everywhere, regardless of their ability to connect to a network. In addition, these workers expect the ability to personalize their PCs, by installing their own applications and storing their own data, such as photos and music, on these devices.
§ Shared Workstation users are often found in state-of-the-art university and business computer labs, conference rooms, or training centers. A constant requirement in shared workstation environments is the need to re-provision desktops with the latest operating systems and applications as the needs of the organization change.
After the user classifications have been identified and the business requirements for each user classification have been defined, it becomes essential to evaluate the types of virtual desktops that are needed based on user requirements. There are essentially six potential desktop environments for each user:
§ Traditional PC: A traditional PC is what “typically” constituted a desktop environment: a physical device with a locally installed operating system.
§ Hosted Shared Desktop: A hosted, server-based desktop is a desktop where the user interacts through a delivery protocol. With hosted, server-based desktops, a single installed instance of a server operating system, such as Microsoft Windows Server 2012, is shared by multiple users simultaneously. Each user receives a desktop "session" and works in an isolated memory space. Changes made by one user could impact the other users.
§ Hosted Virtual Desktop: A hosted virtual desktop is a virtual desktop running either on a virtualization layer (such as ESXi) or on bare-metal hardware. The user does not work at the machine locally; instead, the user interacts with the desktop through a delivery protocol.
§ Published Applications: Published applications run entirely on the XenApp RDS server and the user interacts through a delivery protocol. With published applications, a single installed instance of an application, such as Microsoft Office, is shared by multiple users simultaneously. Each user receives an application "session" and works in an isolated memory space.
§ Streamed Applications: Streamed desktops and applications run entirely on the user’s local client device and are sent from a server on demand. The user interacts with the application or desktop directly, but the resources may only be available while the device is connected to the network.
§ Local Virtual Desktop: A local virtual desktop is a desktop running entirely on the user’s local device and continues to operate when disconnected from the network. In this case, the user’s local device is used as a type 1 hypervisor and is synced with the data center when the device is connected to the network.
For the purposes of the validation represented in this document, both XenDesktop hosted virtual desktops and hosted shared server desktops were validated. The following sections provide the fundamental design decisions for this environment.
When the desktop user groups and sub-groups have been identified, the next task is to catalog each group’s application and data requirements. This can be one of the most time-consuming processes in the VDI planning exercise, but it is essential for the VDI project’s success. If the applications and data are not identified and co-located, performance will be negatively affected.
The process of analyzing the variety of application and data pairs for an organization will likely be complicated by the inclusion of cloud applications, such as Salesforce.com. This application and data analysis is beyond the scope of this Cisco Validated Design, but it should not be omitted from the planning process. There are a variety of third-party tools available to assist organizations with this crucial exercise.
Now that user groups, their applications and their data requirements are understood, some key project and solution sizing questions may be considered.
General project questions should be addressed at the outset, including:
§ Has a VDI pilot plan been created based on the business analysis of the desktop groups, applications and data?
§ Is there infrastructure and budget in place to run the pilot program?
§ Are the required skill sets to execute the VDI project available? Can we hire or contract for them?
§ Do we have end user experience performance metrics identified for each desktop sub-group?
§ How will we measure success or failure?
§ What is the future implication of success or failure?
Below is a short, non-exhaustive list of sizing questions that should be addressed for each user sub-group (a simple sizing sketch follows the list):
§ What is the desktop OS planned? Windows 7, Windows 8, or Windows 10?
§ 32 bit or 64 bit desktop OS?
§ How many virtual desktops will be deployed in the pilot? In production? All Windows 7/8/10?
§ How much memory per target desktop group desktop?
§ Are there any rich media, Flash, or graphics-intensive workloads?
§ What is the end point graphics processing capability?
§ Will XenApp RDS be used for Hosted Shared Server Desktops or exclusively XenDesktop HVD?
§ Are there XenApp hosted applications planned? Are they packaged or installed?
§ Will Provisioning Server, Machine Creation Services, or NetApp VSC be used for virtual desktop deployment?
§ What is the hypervisor for the solution?
§ What is the storage configuration in the existing environment?
§ Are there sufficient IOPS available for the write-intensive VDI workload?
§ Will there be storage dedicated and tuned for VDI service?
§ Is there a voice component to the desktop?
§ Is anti-virus a part of the image?
§ Is user profile management (e.g., non-roaming profile based) part of the solution?
§ What is the fault tolerance, failover, disaster recovery plan?
§ Are there additional desktop sub-group specific questions?
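Once these questions are answered for a sub-group, the answers feed a first-pass sizing estimate. The Python sketch below shows the arithmetic only; every input value is an assumption to be replaced with data gathered during the assessment, not validated guidance from this design.

# First-pass sizing for one desktop sub-group (all inputs are assumptions)
desktops        = 2000     # desktops planned for this sub-group
mem_per_desktop = 2.0      # GB per pooled Windows 10 desktop
steady_iops     = 10       # steady-state IOPS per desktop
write_ratio     = 0.8      # VDI workloads are typically write-heavy
host_usable_gb  = 512      # usable memory per blade after hypervisor overhead

total_mem_gb  = desktops * mem_per_desktop
total_iops    = desktops * steady_iops
write_iops    = total_iops * write_ratio
hosts_for_mem = -(-int(total_mem_gb) // host_usable_gb)   # ceiling division

print(f"Memory: {total_mem_gb:.0f} GB, at least {hosts_for_mem} hosts by memory alone")
print(f"Steady state: {total_iops:.0f} IOPS ({write_iops:.0f} writes)")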
Citrix XenDesktop is hypervisor-agnostic, so any of the following three hypervisors can be used to host RDS- and VDI-based desktops:
§ VMware vSphere: VMware vSphere comprises the management infrastructure or virtual center server software and the hypervisor software that virtualizes the hardware resources on the servers. It offers features like Distributed Resource Scheduler, vMotion, high availability, Storage vMotion, VMFS, and a multi-pathing storage layer. More information on vSphere can be obtained at the VMware web site: http://www.vmware.com/products/datacenter-virtualization/vsphere/overview.html.
§ Hyper-V: Microsoft Windows Server with Hyper-V is available in Standard, Server Core, and free Hyper-V Server versions. More information on Hyper-V can be obtained at the Microsoft web site: http://www.microsoft.com/en-us/server-cloud/windows-server/default.aspx.
§ XenServer: Citrix® XenServer® is a complete, managed server virtualization platform built on the powerful Xen® hypervisor. Xen technology is widely acknowledged as the fastest and most secure virtualization software in the industry. XenServer is designed for efficient management of Windows and Linux virtual servers and delivers cost-effective server consolidation and business continuity. More information on XenServer can be obtained at the web site: http://www.citrix.com/products/xenserver/overview.html.
For this CVD, the hypervisor used was VMware ESXi 6.0 Update 1a.
An ever growing and diverse base of user devices, complexity in management of traditional desktops, security, and even Bring Your Own (BYO) device to work programs are prime reasons for moving to a virtual desktop solution.
Citrix XenDesktop 7.7 integrates Hosted Shared and VDI desktop virtualization technologies into a unified architecture that enables a scalable, simple, efficient, and manageable solution for delivering Windows applications and desktops as a service.
Users can select applications from an easy-to-use “store” that is accessible from tablets, smartphones, PCs, Macs, and thin clients. XenDesktop delivers a native touch-optimized experience with HDX high-definition performance, even over mobile networks.
Collections of identical Virtual Machines (VMs) or physical computers are managed as a single entity called a Machine Catalog. In this CVD, VM provisioning relies on Citrix Provisioning Services to make sure that the machines in the catalog are consistent. In this CVD, machines in the Machine Catalog are configured to run either a Windows Server OS (for RDS hosted shared desktops) or a Windows Desktop OS (for hosted pooled VDI desktops).
To deliver desktops and applications to users, you create a Machine Catalog and then allocate machines from the catalog to users by creating Delivery Groups. Delivery Groups provide desktops, applications, or a combination of desktops and applications to users. Creating a Delivery Group is a flexible way of allocating machines and applications to users (a conceptual sketch of these relationships follows the lists below). In a Delivery Group, you can:
§ Use machines from multiple catalogs
§ Allocate a user to multiple machines
§ Allocate multiple users to one machine
As part of the creation process, you specify the following Delivery Group properties:
§ Users, groups, and applications allocated to Delivery Groups
§ Desktop settings to match users' needs
§ Desktop power management options
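The relationships among machines, Machine Catalogs, Delivery Groups, and users can be summarized with a small conceptual model. The Python sketch below is purely illustrative; it is not the Citrix object model or SDK, and the catalog, group, and account names are invented. In practice these objects are created in Citrix Studio or through the XenDesktop PowerShell SDK.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MachineCatalog:
    name: str
    os_type: str                               # "Server OS" or "Desktop OS"
    machines: List[str] = field(default_factory=list)

@dataclass
class DeliveryGroup:
    name: str
    catalogs: List[MachineCatalog]             # a group can draw from multiple catalogs
    users: List[str] = field(default_factory=list)

    def machines(self) -> List[str]:
        return [m for catalog in self.catalogs for m in catalog.machines]

# Invented example objects mirroring the two machine types used in this CVD
rds = MachineCatalog("RDS-2012R2", "Server OS", [f"RDS-{i:02d}" for i in range(1, 5)])
vdi = MachineCatalog("Win10-Pooled", "Desktop OS", [f"VDI-{i:04d}" for i in range(1, 9)])

hsd_group = DeliveryGroup("HostedSharedDesktops", [rds], users=["DOMAIN\\TaskWorkers"])
vdi_group = DeliveryGroup("PooledVDI", [vdi], users=["DOMAIN\\KnowledgeWorkers"])
print(hsd_group.machines())
print(vdi_group.machines())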
Figure 16 shows how users access desktops and applications through machine catalogs and delivery groups.
Server OS and Desktop OS machines were configured in this CVD to support hosted shared desktops and hosted virtual desktops (both non-persistent and persistent).
Figure 16 Access Desktops and Applications through Machine Catalogs and Delivery Groups
Citrix XenDesktop 7.7 can be deployed with or without Citrix Provisioning Services (PVS). The advantage of using Citrix PVS is that it allows virtual machines to be provisioned and re-provisioned in real time from a single shared-disk image. In this way, administrators can completely eliminate the need to manage and patch individual systems and can reduce the number of disk images that they manage, even as the number of machines continues to grow, simultaneously providing the efficiency of centralized management with the benefits of distributed processing.
The Provisioning Services solution’s infrastructure is based on software-streaming technology. After installing and configuring Provisioning Services components, a single shared disk image (vDisk) is created from a device’s hard drive by taking a snapshot of the OS and application image, and then storing that image as a vDisk file on the network. A device that is used during the vDisk creation process is the Master target device. Devices or virtual machines that use the created vDisks are called target devices.
When a target device is turned on, it is set to boot from the network and to communicate with a Provisioning Server. Unlike thin-client technology, processing takes place on the target device (Step 1).
Figure 17 Citrix Provisioning Services Functionality
The target device downloads the boot file from a Provisioning Server (Step 2) and boots. Based on the boot configuration settings, the appropriate vDisk is mounted on the Provisioning Server (Step 3). The vDisk software is then streamed to the target device as needed, appearing as a regular hard drive to the system.
Instead of immediately pulling all the vDisk contents down to the target device (as with traditional imaging solutions), the data is brought across the network in real time as needed. This approach allows a target device to get a completely new operating system and set of software in the time it takes to reboot. It also dramatically decreases the amount of network bandwidth required, making it possible to support a larger number of target devices on a network without impacting performance.
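A rough sense of the streaming bandwidth involved can be obtained from a simple calculation. In the Python sketch below, the per-target boot payload and the boot window are assumptions for illustration only; actual values depend on the vDisk contents and the caching configuration.

# Rough PVS boot-storm bandwidth estimate (all inputs are assumptions)
targets         = 2500    # target devices booting in the window
boot_payload_mb = 300     # MB streamed per target during boot
boot_window_min = 30      # time allowed to bring all targets online

total_gb = targets * boot_payload_mb / 1024
avg_gbps = (targets * boot_payload_mb * 8) / (boot_window_min * 60) / 1000

print(f"Total streamed: {total_gb:.0f} GB")
print(f"Average load  : {avg_gbps:.2f} Gbps on the PVS streaming network")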
Citrix PVS can create desktops as Pooled or Private:
§ Pooled Desktop: A pooled virtual desktop uses Citrix PVS to stream a standard desktop image to multiple desktop instances upon boot.
§ Private Desktop: A private desktop is a single desktop assigned to one distinct user.
The alternative to Citrix Provisioning Services for pooled desktop deployments is Citrix Machine Creation Services (MCS), which is integrated with the XenDesktop Studio console.
When considering a PVS deployment, there are some design decisions that need to be made regarding the write cache for the target devices that leverage provisioning services. The write cache is a cache of all data that the target device has written. If data is written to the PVS vDisk in a caching mode, the data is not written back to the base vDisk. Instead it is written to a write cache file in one of the following locations:
§ Cache on device hard drive. Write cache exists as a file in NTFS format, located on the target-device’s hard drive. This option frees up the Provisioning Server since it does not have to process write requests and does not have the finite limitation of RAM.
§ Cache on device hard drive persisted. (Experimental Phase) This is the same as “Cache on device hard drive”, except that the cache persists. At this time, this method is an experimental feature only, and is only supported for NT6.1 or later (Windows 7 and Windows 2008 R2 and later). This method also requires a different bootstrap.
§ Cache in device RAM. Write cache can exist as a temporary file in the target device’s RAM. This provides the fastest method of disk access since memory access is always faster than disk access.
§ Cache in device RAM with overflow on hard disk. This method uses VHDX differencing format and is only available for Windows 7 and Server 2008 R2 and later. When RAM is zero, the target device write cache is only written to the local disk. When RAM is not zero, the target device write cache is written to RAM first. When RAM is full, the least recently used block of data is written to the local differencing disk to accommodate newer data on RAM. The amount of RAM specified is the non-paged kernel memory that the target device will consume.
§ Cache on a server. Write cache can exist as a temporary file on a Provisioning Server. In this configuration, all writes are handled by the Provisioning Server, which can increase disk I/O and network traffic. For additional security, the Provisioning Server can be configured to encrypt write cache files. Since the write-cache file persists on the hard drive between reboots, encrypted data provides data protection in the event a hard drive is stolen.
§ Cache on server persisted. This cache option allows for the saved changes between reboots. Using this option, a rebooted target device is able to retrieve changes made from previous sessions that differ from the read only vDisk image. If a vDisk is set to this method of caching, each target device that accesses the vDisk automatically has a device-specific, writable disk file created. Any changes made to the vDisk image are written to that file, which is not automatically deleted upon shutdown.
In this CVD, Provisioning Server 7.7 was used to manage Pooled/Non-Persistent VDI Machines and XenApp RDS Machines with “Cache in device RAM with overflow on hard disk” for each virtual machine. This design enables good scalability to many thousands of desktops. Provisioning Server 7.7 was used for Active Directory machine account creation and management as well as for streaming the shared disk to the hypervisor hosts.
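To make the behavior of the "Cache in device RAM with overflow on hard disk" mode concrete, the following minimal Python sketch models the write and read paths described above. It is an illustration only, not PVS code: the block granularity, the in-memory dictionary, and the overflow dictionary standing in for the VHDX differencing disk are all simplifying assumptions.

# Conceptual sketch (not PVS code): writes land in a bounded RAM cache and the
# least recently used blocks spill to a local differencing disk when RAM fills.
from collections import OrderedDict

class RamOverflowWriteCache:
    def __init__(self, ram_blocks, base_vdisk):
        self.ram_blocks = ram_blocks      # non-paged RAM budget, in blocks
        self.ram = OrderedDict()          # block_id -> data, most recently used last
        self.overflow = {}                # stands in for the local differencing disk
        self.base_vdisk = base_vdisk      # read-only shared image (block_id -> data)

    def write(self, block_id, data):
        # With a zero RAM budget, writes go straight to the local disk.
        if self.ram_blocks == 0:
            self.overflow[block_id] = data
            return
        self.ram[block_id] = data
        self.ram.move_to_end(block_id)
        # When RAM is full, spill the least recently used block to disk.
        while len(self.ram) > self.ram_blocks:
            lru_id, lru_data = self.ram.popitem(last=False)
            self.overflow[lru_id] = lru_data

    def read(self, block_id):
        # Cached writes mask the base vDisk; otherwise stream from the shared image.
        if block_id in self.ram:
            self.ram.move_to_end(block_id)
            return self.ram[block_id]
        if block_id in self.overflow:
            return self.overflow[block_id]
        return self.base_vdisk[block_id]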
Two examples of typical XenDesktop deployments are the following:
§ A distributed components configuration
§ A multiple site configuration
Since XenApp and XenDesktop 7.7 are based on a unified architecture, combined they can deliver a combination of Hosted Shared Desktops (HSDs, using a Server OS machine) and Hosted Virtual Desktops (HVDs, using a Desktop OS).
You can distribute the components of your deployment among a greater number of servers, or provide greater scalability and failover by increasing the number of controllers in your site. You can install management consoles on separate computers to manage the deployment remotely. A distributed deployment is necessary for an infrastructure based on remote access through NetScaler Gateway (formerly called Access Gateway).
Figure 18 shows an example of a distributed components configuration. A simplified version of this configuration is often deployed for an initial proof-of-concept (POC) deployment. The CVD described in this document deploys Citrix XenDesktop in a configuration that resembles this distributed components configuration shown.
Figure 18 Example of a Distributed Components Configuration
If you have multiple regional sites, you can use Citrix NetScaler to direct user connections to the most appropriate site and StoreFront to deliver desktops and applications to users.
Figure 19 depicts a multiple-site deployment in which a separate site was created in each of two data centers. Having two sites globally, rather than just one, minimizes the amount of unnecessary WAN traffic. Two Cisco blade servers host the required infrastructure services (AD, DNS, DHCP, Profile, SQL, Citrix XenDesktop management, and web servers).
You can use StoreFront to aggregate resources from multiple sites to provide users with a single point of access with NetScaler. A separate Studio console is required to manage each site; sites cannot be managed as a single entity. You can use Director to support users across sites.
Citrix NetScaler accelerates application performance, load balances servers, increases security, and optimizes the user experience. In this example, two NetScalers are used to provide a high availability configuration. The NetScalers are configured for Global Server Load Balancing and positioned in the DMZ to provide a multi-site, fault-tolerant solution.
With Citrix XenDesktop 7.7, the method you choose to provide applications or desktops to users depends on the types of applications and desktops you are hosting and available system resources, as well as the types of users and user experience you want to provide.
Server OS machines | You want: Inexpensive server-based delivery to minimize the cost of delivering applications to a large number of users, while providing a secure, high-definition user experience. Your users: Perform well-defined tasks and do not require personalization or offline access to applications. Users may include task workers such as call center operators and retail workers, or users that share workstations. Application types: Any application. |
Desktop OS machines | You want: A client-based application delivery solution that is secure, provides centralized management, and supports a large number of users per host server (or hypervisor), while providing users with applications that display seamlessly in high-definition. Your users: Are internal, external contractors, third-party collaborators, and other provisional team members. Users do not require off-line access to hosted applications. Application types: Applications that might not work well with other applications or might interact with the operating system, such as .NET framework. These types of applications are ideal for hosting on virtual machines. Applications running on older operating systems such as Windows XP or Windows Vista, and older architectures, such as 32-bit or 16-bit. By isolating each application on its own virtual machine, if one machine fails, it does not impact other users. |
Remote PC Access | You want: Employees with secure remote access to a physical computer without using a VPN. For example, the user may be accessing their physical desktop PC from home or through a public WIFI hotspot. Depending upon the location, you may want to restrict the ability to print or copy and paste outside of the desktop. This method enables BYO device support without migrating desktop images into the datacenter. Your users: Employees or contractors that have the option to work from home, but need access to specific software or data on their corporate desktops to perform their jobs remotely. Host: The same as Desktop OS machines. Application types: Applications that are delivered from an office computer and display seamlessly in high definition on the remote user's device. |
For the Cisco Validated Design described in this document, a mix of Hosted Shared Desktops (HSDs) using RDS-based Server OS machines and Hosted Virtual Desktops (HVDs) using VDI-based Desktop OS machines was configured and tested. The following sections discuss design decisions relative to the Citrix XenDesktop deployment, including the CVD test environment.
Virtual desktop solutions deliver OS management, corporate application management, and user profile and data management.
NetApp recommends implementing virtual layering technologies to separate the various components of a desktop, including the base OS image, user profiles and settings, corporate apps, user-installed apps, and user data, into manageable entities called layers. Layering yields the lowest storage cost per desktop because you do not need to size all storage for peak load; for example, Snapshot-based backup and recovery can be applied to the different layers of the desktop to improve storage efficiency.
The key benefits of virtual desktop layering are as follows:
§ Ease of virtual desktop infrastructure (VDI) image management. Individual desktops no longer need to be patched or updated individually. This feature results in cost savings because you do not need to size the storage array for write I/O storms.
§ Efficient data management. Separating the different desktop components into layers allows for the application of intelligent data management policies, such as deduplication, NetApp Snapshot backups, and so on, on different layers as required. For example, you can enable deduplication on storage volumes that host Citrix personal vDisks and user data.
§ Ease of application rollout and updates. Layering improves the roll out of new applications and updates to existing applications.
§ Improved end-user experience. Layering allows users to install applications and permits the persistence of these applications during updates to the desktop OS or applications.
This section outlines the recommended storage architecture for deploying a mix of various XenDesktop FlexCast delivery models on the same NetApp clustered Data ONTAP storage array. These models include hosted VDI, hosted-shared desktops, and intelligent VDI layering, such as profile management and user data management.
For hosted and shared desktops and the hosted VDI, the following recommendations are best practices for the OS vDisk, the write cache disk, profile management, user data management, and application virtualization:
§ Provisioning Services (PVS) vDisk. CIFS/SMB 3 is used to host the PVS vDisk. CIFS/SMB 3 allows the same vDisk to be shared among multiple PVS servers while still maintaining resilience during storage node failover. This process results in significant operational savings and architecture simplicity. SMB 3 is available in Windows Server 2012 or higher and provides persistent handles. SMB 3 persistent handles prevent the PVS server from crashing during a storage node failover. Therefore, Windows Server 2012 or later is required for the PVS server to ensure a stable PVS implementation.
§ PVS write cache file. For simplicity and scalability, the PVS write cache file is hosted on NFS storage repositories. Deduplication should not be enabled on this volume, because the rate of change is too high. The PVS write cache file should be set for thin provisioning at the storage layer.
§ Profile management. To make sure that user profiles and settings are preserved, you should leverage the profile management software Citrix UPM to redirect the user profiles to the CIFS home directories.
§ User data management. NetApp recommends hosting user data on CIFS home directories to preserve data upon VM reboot or redeploy.
§ Monitoring and management. NetApp recommends using OnCommand Balance and Citrix Desktop Director to provide end-to-end monitoring and management of the solution.
The following section is a compilation of NetApp best practices that have been discussed previously in this document.
§ Use System Manager to create the SVM.
§ Verify that you are using the latest release of clustered Data ONTAP.
§ Segregate the SVMs by tenant, solution, or apps administrator or alternatively by protocol.
§ Do not configure too many SVMs. Because volume move is isolated within an SVM, configuring too many SVMs has the potential to limit the use of volume move or copy. Configuring too many SVMs can also make administration more complex.
§ Create load-sharing mirrors for all of the SVM root volumes.
§ Make sure to use advanced drive partitioning (root-data partitioning) to create the root aggregates.
§ Create one data aggregate per controller.
§ Create volumes in multiples of four volumes per storage node (4, 8, 12, 16 and so on).
§ Create a minimum of four volumes on each storage controller (node) for Remote Desktop Services (RDS) in groups of four volumes per storage node.
§ Create a minimum of four volumes on each storage controller (node) for PVS nonpersistent VDI in groups of four volumes per storage node.
§ Create a minimum of four volumes on each storage controller (node) for Machine Creation Services (MCS) persistent VDI in groups of four volumes per storage node.
§ Make sure that inline deduplication is configured on storage volumes that benefit from deduplication.
§ Make sure that the deduplication policy Inline Only is set on all volumes.
§ Set both the Inline Compression option and the Inline Deduplication option to True for all VDI, Infrastructure, and CIFS volumes.
§ Set both the Inline Compression option and the Inline Deduplication option to False for all swap volumes and volumes with high rate of changes (for example, database logs).
§ If you are using a switchless storage cluster, make sure that the Switchless option is set.
§ Create load-sharing mirrors for all SVM root volumes.
§ Create a minimum of one LIF per volume (storage repository) if possible.
§ Create LIF failover groups and assign them to LIFs.
§ NAS LIFs are the only LIFs that migrate. Therefore, make sure that Asymmetric Logical Unit Access (ALUA) is configured and working for block protocol LIFs.
§ Assign the same port on each clustered storage node to the same LIF.
§ Use the latest release of clustered Data ONTAP.
§ Use the latest release of shelf firmware and disk firmware.
§ Switch ports connected to the NetApp storage controllers must be set to edge ports to turn spanning tree off. Also, make sure that portfast is enabled.
§ Set flow control to None on the switch, storage controller, and hypervisor host ports.
§ Make sure that Suspend-Individual is set to No on the switch.
§ Use jumbo frames on the NFS data network.
§ Use the Link Aggregation Control Protocol (LACP) or the virtual port channel (VPC) for port teaming.
§ Make sure that the load-balancing method is set to Port.
§ The NFS data network should be nonroutable.
§ Segregate the CIFS network and NFS data network on different ports or interface groups to eliminate the possibility of maximum transmission unit (MTU) mismatch errors.
§ Use FCoE or iSCSI for the boot LUNs and configure Boot from SAN.
§ Separate iSCSI paths with VLANs.
§ Use Virtual Storage Console (VSC) for creating datastores.
§ Use NFS volumes for the datastores.
§ If you are using NFS 4.1, separate NFS dual paths with VLANs.
§ Create thick eager-zeroed VMDKs.
§ Use iSCSI for the boot LUNs and configure Boot from SAN.
§ Use NetApp VSC to provision Citrix persistent desktops.
§ Use the VSC for resizing or applying deduplication to the storage datastores.
§ Use NFS volumes for the storage datastores.
§ Do not configure Always-On Deduplication with In-Line Deduplication on the volume.
§ Use In-Line Deduplication on the infrastructure volumes.
§ Thin provision the write cache infrastructure volumes at the storage layer.
§ Use SMB3 for the PVS CIFS share with the NetApp continuously available shares feature.
§ Use a profile manager for profiles and CIFS. NetApp recommends Citrix UPM.
§ Use redirected folders for the home directories on the CIFS shares.
§ Do not locate the redirected folders in the profiles folder. This will cause login performance issues.
§ NetApp recommends UCS Director for servers, storage, and switch infrastructure.
§ NetApp recommends OnCommand Insight to monitor VDI I/O from guests to storage.
§ Have a NetApp sales engineer or a NetApp partner use the NetApp SPM tool to size the virtual desktop solution. When sizing CIFS, NetApp recommends sizing with a heavy user workload. We assumed 80% concurrency and 10GB per user for home directory space with 35% deduplication space savings. Each VM used 2GB of RAM. PVS write cache is sized at 5GB per desktop for nonpersistent and pooled desktops and 2GB for persistent desktops with personal vDisk.
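The following back-of-the-envelope Python sketch shows one way to turn the sizing assumptions in the bullet above into capacity estimates. The arithmetic, including the choice to apply the deduplication savings only to home directory capacity and to compute write cache only for the nonpersistent desktops, is illustrative and is not the NetApp SPM sizing method; always size with SPM as recommended above.

# Illustrative capacity arithmetic using the stated assumptions:
# 80% concurrency, 10 GB home directory per user, 35% deduplication savings,
# and 5 GB of PVS write cache per nonpersistent/pooled desktop.
users_total       = 5000
concurrency       = 0.80              # heavy-user concurrency assumption
home_dir_gb       = 10                # per-user home directory space
dedupe_savings    = 0.35
pvs_nonpersistent = 1200              # nonpersistent desktops in the tested mix
write_cache_gb    = 5                 # per nonpersistent/pooled desktop

concurrent_users     = int(users_total * concurrency)                  # 4000 users
home_dir_capacity_gb = users_total * home_dir_gb * (1 - dedupe_savings)
write_cache_total_gb = pvs_nonpersistent * write_cache_gb

print(f"Concurrent users:                {concurrent_users}")
print(f"CIFS home directory capacity:    {home_dir_capacity_gb/1024:.1f} TB")   # ~31.7 TB
print(f"PVS write cache (nonpersistent): {write_cache_total_gb/1024:.2f} TB")   # ~5.86 TB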
The storage components for this reference architecture were composed of two AFF8080EX-A nodes with 48 800GB SSD drives. We used clustered Data ONTAP 8.3.2.
To support the differing security, backup, performance, and data sharing needs of your users, group the physical data storage resources on your storage system into one or more aggregates. You can design and configure your aggregates to provide the appropriate level of performance and redundancy for your storage requirements.
Each aggregate has its own RAID configuration, plex structure, and set of assigned disks or array LUNs. The aggregate provides storage, based on its configuration, to its associated NetApp FlexVol® volumes or Infinite Volumes. Aggregates have the following characteristics:
§ They can be composed of disks or array LUNs.
§ They can be mirrored or unmirrored.
§ If they are composed of disks, they can be single-tier (composed of only HDDs or only SSDs), or they can be Flash Pools, which include both of those storage types in two separate tiers.
The cluster administrator can assign one or more aggregates to an SVM, in which case only those aggregates can contain volumes for that SVM.
Unless you are using SyncMirror, all of your aggregates are unmirrored. Unmirrored aggregates have only one plex (copy of their data), which contains all of the RAID groups belonging to that aggregate. Mirrored aggregates have two plexes (copies of their data) that use the SyncMirror functionality to duplicate the data to provide redundancy.
A Flash Pool aggregate combines SSDs and HDDs to provide performance and capacity, respectively. This combination creates a high performance aggregate more economically than an SSD-only aggregate. The SSDs provide a high-performance cache for the active data set of the data volumes provisioned on the Flash Pool aggregate. Random read operations and repetitive random write operations are offloaded to improve response times and overall throughput for disk I/O-bound data access operations. Performance is not significantly increased for predominately sequential workloads.
For information about best practices for working with aggregates, see Technical Report 3437: Storage Subsystem Resiliency Guide.
Table 2 contains all aggregate configuration information.
Table 2 Aggregate Configuration
Aggregate Name | Owner Node Name | State | RAID Status | RAID Type | Disk Count (By Type) | RG Size (HDD / SSD) | HA Policy | Has Mroot | Mirrored | Size Nominal |
aff_aggr0_root_node01 | aff-cluster-01-01 | online | normal | raid_dp | 22@800GB_SSD (Shared) | 23 | cfo | True | False | 368.42 GB |
aff_aggr0_root_node02 | aff-cluster-01-02 | online | normal | raid_dp | 22@800GB_SSD (Shared) | 23 | cfo | True | False | 368.42 GB |
aff_aggr1_data_node01 | aff-cluster-01-01 | online | normal | raid_dp | 23@800GB_SSD (Shared) | 23 | sfo | False | False | 13.35 TB |
aff_aggr1_data_node02 | aff-cluster-01-02 | online | normal | raid_dp | 23@800GB_SSD (Shared) | 23 | sfo | False | False | 13.35 TB |
Volumes are data containers that allow you to partition and manage your data. Understanding the types of volumes and their associated capabilities enables you to design your storage architecture for maximum storage efficiency and ease of administration. Volumes are the highest-level logical storage object. Unlike aggregates, which are composed of physical storage resources, volumes are completely logical objects.
Clustered Data ONTAP provides two types of volumes: FlexVol volumes and Infinite Volumes. There are also volume variations, such as FlexClone volumes, NetApp FlexCache® volumes, data protection mirrors, extended data protection mirrors and load-sharing mirrors. Not all volume variations are supported for both types of volumes. Compression and deduplication, the Data ONTAP efficiency capabilities, are supported for all types of volumes.
Volumes contain file systems in a NAS environment and LUNs in a SAN environment. Also, volumes are always associated with one SVM. The SVM is a virtual management entity, or server, that consolidates various cluster resources into a single manageable unit. When you create a volume, you specify the SVM that it is associated with. The type of the volume (FlexVol volume or Infinite Volume) and its language are determined by immutable SVM attributes.
Volumes depend on their associated aggregates for their physical storage; they are not directly associated with any physical storage objects, such as disks or RAID groups. If the cluster administrator has assigned specific aggregates to an SVM, then only those aggregates can be used to provide storage to the volumes associated with that SVM. This impacts volume creation, and also copying and moving FlexVol volumes between aggregates.
A node's root volume is a FlexVol volume that is installed at the factory and reserved for system files, log files, and core files. The directory name is /mroot, which is accessible only through the systemshell with guidance from technical support.
Every SVM has a root volume that contains the paths where the data volumes are junctioned into the namespace. Data access for NAS clients is dependent on the root volume namespace, and SAN data access for SAN clients is not dependent on the root volume namespace.
The root volume serves as the entry point to the namespace provided by that SVM. The root volume of an SVM is a FlexVol volume that resides at the top level of the namespace hierarchy. The root volume contains the directories that are used as mount points (paths where data volumes are junctioned into the namespace).
Table 3 lists the node and SVM root volumes configuration.
Table 3 Root Volume Configuration
Cluster Name | SVM Name | Volume Name | Containing Aggregate | Root Type | State | Snapshot Policy | Export Policy | Security Style | Size Nominal |
aff-cluster-01 | aff-cluster-01-01 | vol0 | aff_aggr0_root_node01 | Node | online | | | | 348.62 GB |
aff-cluster-01 | aff-cluster-01-02 | vol0 | aff_aggr0_root_node02 | Node | online | | | | 348.62 GB |
aff-cluster-01 | San_Boot | San_Boot_root | aff_aggr1_data_node01 | Vserver (RW) | online | default | default | UNIX | 1.00 GB |
aff-cluster-01 | VDI | VDI_root | aff_aggr1_data_node02 | Vserver (RW) | online | default | default | UNIX | 1.00 GB |
A FlexVol volume is a data container associated with a NetApp Storage Virtual Machine (SVM). A FlexVol volume accesses storage from a single associated aggregate that it might share with other FlexVol volumes or Infinite Volumes. A FlexVol volume can be used to contain files in a NAS environment or LUNs in a SAN environment. Table 4 lists the configuration of FlexVol volumes.
Table 4 FlexVol Volume Configuration
Cluster Name | SVM Name | Volume Name | Containing Aggregate | Type | Snapshot Policy | Export Policy | Security Style | Size Nominal |
aff-cluster-01 | San_Boot | San_Boot01 | aff_aggr1_data_node01 | RW | none | default | UNIX | 300.00 GB |
aff-cluster-01 | San_Boot | San_Boot02 | aff_aggr1_data_node02 | RW | none | default | UNIX | 300.00 GB |
aff-cluster-01 | VDI | CIFS_HomeDir01 | aff_aggr1_data_node01 | RW | default | CIFS | NTFS | 200.00 GB |
aff-cluster-01 | VDI | CIFS_HomeDir02 | aff_aggr1_data_node02 | RW | default | CIFS | NTFS | 200.00 GB |
aff-cluster-01 | VDI | CIFS_HomeDir03 | aff_aggr1_data_node01 | RW | default | CIFS | NTFS | 200.00 GB |
aff-cluster-01 | VDI | CIFS_HomeDir04 | aff_aggr1_data_node02 | RW | default | CIFS | NTFS | 200.00 GB |
aff-cluster-01 | VDI | CIFS_HomeDir05 | aff_aggr1_data_node01 | RW | default | CIFS | NTFS | 200.00 GB |
aff-cluster-01 | VDI | CIFS_HomeDir06 | aff_aggr1_data_node02 | RW | default | CIFS | NTFS | 200.00 GB |
aff-cluster-01 | VDI | CIFS_HomeDir07 | aff_aggr1_data_node01 | RW | default | CIFS | NTFS | 200.00 GB |
aff-cluster-01 | VDI | CIFS_HomeDir08 | aff_aggr1_data_node02 | RW | default | CIFS | NTFS | 200.00 GB |
aff-cluster-01 | VDI | CIFS_HomeDir09 | aff_aggr1_data_node01 | RW | default | CIFS | NTFS | 200.00 GB |
aff-cluster-01 | VDI | CIFS_HomeDir10 | aff_aggr1_data_node02 | RW | default | CIFS | NTFS | 200.00 GB |
aff-cluster-01 | VDI | CIFS_PVS_vDisk01 | aff_aggr1_data_node02 | RW | default | CIFS | NTFS | 250.00 GB |
aff-cluster-01 | VDI | CIFS_RDSH01 | aff_aggr1_data_node01 | RW | default | CIFS | NTFS | 200.00 GB |
aff-cluster-01 | VDI | CIFS_VDI01 | aff_aggr1_data_node02 | RW | default | CIFS | NTFS | 200.00 GB |
aff-cluster-01 | VDI | Infra01 | aff_aggr1_data_node01 | RW | none | NFS | UNIX | 1.95 TB |
aff-cluster-01 | VDI | MCS_PERS01 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | MCS_PERS02 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | MCS_PERS03 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | MCS_PERS04 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | MCS_PERS05 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | MCS_PERS06 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | MCS_PERS07 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | MCS_PERS08 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | PVSWC_NON_PERS01 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | PVSWC_NON_PERS02 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | PVSWC_NON_PERS03 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | PVSWC_NON_PERS04 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | PVSWC_NON_PERS05 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | PVSWC_NON_PERS06 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | PVSWC_NON_PERS07 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | PVSWC_NON_PERS08 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 500.00 GB |
aff-cluster-01 | VDI | PVSWC_RDSH01 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 200.00 GB |
aff-cluster-01 | VDI | PVSWC_RDSH02 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 200.00 GB |
aff-cluster-01 | VDI | PVSWC_RDSH03 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 200.00 GB |
aff-cluster-01 | VDI | PVSWC_RDSH04 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 200.00 GB |
aff-cluster-01 | VDI | PVSWC_RDSH05 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 200.00 GB |
aff-cluster-01 | VDI | PVSWC_RDSH06 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 200.00 GB |
aff-cluster-01 | VDI | PVSWC_RDSH07 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 200.00 GB |
aff-cluster-01 | VDI | PVSWC_RDSH08 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 200.00 GB |
aff-cluster-01 | VDI | VM_Infra01 | aff_aggr1_data_node01 | RW | default | NFS | UNIX | 2.00 TB |
aff-cluster-01 | VDI | VM_Swap01 | aff_aggr1_data_node02 | RW | default | NFS | UNIX | 100.00 GB |
aff-cluster-01 | VDI | VMSWP | aff_aggr1_data_node02 | RW | none | NFS | UNIX | 100.00 GB |
We created 47 volumes to support the individual software component tests (RDS, PVS nonpersistent, and MCS persistent), which we call cluster testing. These volumes also supported the full-scale testing of all 5,000 users in a mixed workload environment.
The mixed workload environment consisted of 2,600 RDS (hosted shared desktop) users, 1,200 PVS nonpersistent desktop users, and 1,200 MCS persistent desktop users. Although the persistent desktop users were managed by the Citrix MCS broker, they were provisioned with NetApp VSC. Later in this document we discuss in detail the benefits of using NetApp VSC to provision persistent desktops. Also, note that we adhered to four volumes per controller per software component.
You can group HA pairs of nodes together to form a scalable cluster. Creating a cluster enables the nodes to pool their resources and distribute work across the cluster while presenting administrators with a single entity to manage. Clustering also enables continuous service to end users if individual nodes go offline.
A cluster can contain up to 24 nodes (or up to 10 nodes if it contains an SVM with an Infinite Volume) for NAS-based clusters and up to 8 nodes for SAN based clusters (as of Data ONTAP 8.2). Each node in the cluster can view and manage the same volumes as any other node in the cluster. The total file-system namespace, which comprises all of the volumes and their resultant paths, spans the cluster.
If you have a two-node cluster, you must configure cluster HA. For more information, see the Clustered Data ONTAP High-Availability Configuration Guide.
The nodes in a cluster communicate over a dedicated, physically isolated, dual-fabric, secure Ethernet network. The cluster LIFs on each node in the cluster must be on the same subnet. For information about network management for cluster and nodes, see the Clustered Data ONTAP Network Management Guide. For information about setting up a cluster or joining a node to the cluster, see the Clustered Data ONTAP Software Setup Guide.
Table 5 shows the cluster details.
Cluster Name | Data ONTAP Version | Node Count | Data SVM Count | Cluster Raw Capacity |
aff-cluster-01 | 8.3.2 | 2 | 2 | 34.20 TB |
A node is a controller in a cluster that is connected to other nodes in the cluster over a cluster network. It is also connected to the disk shelves that provide physical storage for the Data ONTAP system or to third-party storage arrays that provide array LUNs for Data ONTAP use.
Table 6 shows the node details.
Cluster Name | Node Name | System Model | Serial Number | HA Partner Node Name | Data ONTAP Version |
aff-cluster-01 | aff-cluster-01-01 | AFF8080 | 721544000374 | aff-cluster-01-02 | 8.3.2 |
aff-cluster-01 | aff-cluster-01-02 | AFF8080 | 721544000373 | aff-cluster-01-01 | 8.3.2 |
Table 7 shows the storage configuration for each node.
Node Name | Shelf Connectivity | ACP Connectivity | Cluster HA Configured | SFO Enabled | Takeover Possible |
aff-cluster-01-01 | Multi-Path HA | Full Connectivity | True | True | True |
aff-cluster-01-02 | Multi-Path HA | Full Connectivity | True | True | True |
Table 8 shows the storage details for each HA pair.
Node Names | Shelf Count | Disk Count | Disk Capacity | Raw Capacity |
aff-cluster-01-01 aff-cluster-01-02 | DS2246: 2 | SSD: 47 | SSD: 34.20 TB | 34.20 TB |
Raw capacity is not the same as usable capacity.
Table 9 shows the shelf details for each HA pair.
Cluster Name | Node Names | Shelf ID | Shelf State | Shelf Model | Shelf Type | Drive Slot Count |
aff-cluster-01 | aff-cluster-01-01 aff-cluster-01-02 | 10 | online | DS2246 | IOM6 | 24 |
aff-cluster-01 | aff-cluster-01-01 aff-cluster-01-02 | 11 | online | DS2246 | IOM6 | 24 |
Table 10 shows the drive allocation details for each node.
Table 10 Drive Allocation Details
Node Name | Total Disk Count | Allocated Disk Count | Disk Type | Raw Capacity | Spare Disk Count |
– | 1 | 0 | – | 0 | 0 |
aff-cluster-01-01 | 23 | 23 | 800GB_SSD | 16.74 TB | 0 |
aff-cluster-01-02 | 24 | 24 | 800GB_SSD | 17.47 TB | 0 |
Raw capacity is not the same as usable capacity.
Table 11 shows the adapter cards present in each node.
Node Name | System Model | Adapter Card |
aff-cluster-01-01 | AFF8080 | slot 1: X1117A: Intel Dual 10G IX1-SFP+ NIC slot 3: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003) slot 4: X1117A: Intel Dual 10G IX1-SFP+ NIC |
aff-cluster-01-02 | AFF8080 | slot 1: X1117A: Intel Dual 10G IX1-SFP+ NIC slot 3: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003) slot 4: X1117A: Intel Dual 10G IX1-SFP+ NIC |
You can manage a node remotely by using a remote management device, which can be the SP or the RLM, depending on the platform model. The device stays operational regardless of the operating state of the node.
The RLM is included in the 31xx, 6040, and 6080 platforms. The SP is included in all other platform models.
Table 12 lists the remote management devices.
Table 12 Remote Management Devices
Cluster Name | Node Name | Type | Status | IP Address | Gateway |
aff-cluster-01 | aff-cluster-01-01 | SP | online | 10.29.164.75/24 | 10.29.164.1 |
aff-cluster-01 | aff-cluster-01-02 | SP | online | 10.29.164.76/24 | 10.29.164.1 |
Table 13 shows the relevant firmware details for each node.
Node Name | Node Firmware | Shelf Firmware | Drive Firmware | Remote Mgmt Firmware |
aff-cluster-01-01 | AFF8080: 9.3 | IOM6: A:0181, B:0181 | X447_1625800MCSG: NA03 X447_S1633800AMD: NA01 | SP: 3.1.2 |
aff-cluster-01-02 | AFF8080: 9.3 | IOM6: A:0181, B:0181 | X447_1625800MCSG: NA03 X447_S1633800AMD: NA01 | SP: 3.1.2 |
Clustered Data ONTAP provides features for network file service, multiprotocol file and block sharing, data storage management, data organization management, data access management, data migration management, data protection management, and AutoSupport.
AutoSupport proactively monitors the health of your system and automatically sends email messages to NetApp technical support, your internal support organization, and a support partner. Only the cluster administrator can perform AutoSupport management. The SVM administrator has no access to AutoSupport. AutoSupport is enabled by default when you configure your storage system for the first time.
AutoSupport begins sending messages to technical support 24 hours after AutoSupport is enabled. You can cut short the 24-hour period by upgrading or reverting the system, modifying the AutoSupport configuration, or changing the time of the system to be outside of the 24-hour period.
You can disable AutoSupport at any time, but you should leave it enabled. Enabling AutoSupport can significantly help speed problem determination and resolution should a problem occur on your storage system. By default, the system collects AutoSupport information and stores it locally even if you disable AutoSupport.
Although AutoSupport messages to technical support are enabled by default, you need to set the correct options and have a valid mail host to have messages sent to your internal support organization.
For more information about AutoSupport, see the NetApp Support Site.
Table 14 lists the AutoSupport settings.
Node Name | Enabled | Support Enabled | Performance Data Enabled | Private Data Removed | Throttle Enabled |
aff-cluster-01-01 | True | True | True | False | True |
aff-cluster-01-02 | True | True | True | False | True |
Problems can occur when the cluster time is inaccurate. Although you can manually set the time zone, date, and time on the cluster, you should configure the Network Time Protocol (NTP) servers to synchronize the cluster time. NTP is always enabled. However, configuration is still required for the cluster to synchronize with an external time source.
Table 15 lists the system time settings for Data ONTAP 8.3 or later.
Table 15 System Time Settings for Data ONTAP 8.3
Cluster Name | Server | Version | Preferred | Public Default |
aff-cluster-01 | 10.29.164.66 | auto | False | False |
Clustered Data ONTAP supports two methods for host-name resolution: DNS and hosts table. Cluster administrators can configure DNS and hosts file naming services for host-name lookup in the admin SVM.
Table 16 lists the cluster DNS settings.
Cluster Name | State | Domain Names | Servers |
aff-cluster-01 | enabled | dvpod2.local | 10.10.61.30 10.10.61.31 |
An SVM is a secure virtual storage server that contains data volumes and one or more LIFs through which it serves data to the clients. An SVM appears as a single dedicated server to the clients. Each SVM has a separate administrator authentication domain and can be managed independently by its SVM administrator.
In a cluster, an SVM facilitates data access. A cluster must have at least one SVM to serve data. SVMs use the storage and network resources of the cluster. However, the volumes and LIFs are exclusive to the SVM. Multiple SVMs can coexist in a single cluster without being bound to any node in a cluster. However, they are bound to the physical cluster on which they exist.
In Data ONTAP 8.1.1, an SVM can either contain one or more FlexVol volumes, or a single Infinite Volume. A cluster can either have one or more SVMs with FlexVol volumes or one SVM with an Infinite Volume.
Table 17 lists the SVM configuration.
Cluster Name | SVM Name | Type | Subtype | State | Allowed Protocols | Name Server Switch | Name Mapping Switch | Comment |
aff-cluster-01 | San_Boot | data | default | running | iscsi | file | file | |
aff-cluster-01 | VDI | data | default | running | nfs, cifs | file | file | |
Table 18 lists the SVM storage configuration.
Table 18 SVM Storage Configuration
Cluster Name | SVM Name | Root Volume Security Style | Language | Root Volume | Root Aggregate | Aggregate List |
aff-cluster-01 | San_Boot | UNIX | en_us | San_Boot_root | aff_aggr1_data_node01 | |
aff-cluster-01 | VDI | UNIX | en_us | VDI_root | aff_aggr1_data_node02 | |
Table 19 lists the SVM default policy settings.
Cluster Name | SVM Name | Snapshot Policy | Quota Policy | Antivirus On-Access Policy | QOS Policy Group |
aff-cluster-01 | San_Boot | default | default | default | |
aff-cluster-01 | VDI | default | default | default | |
A cluster administrator or an SVM administrator can configure DNS for host-name lookup in an SVM.
Starting with Data ONTAP 8.1, each SVM has its own DNS configuration. Each SVM communicates with its DNS server in the SVM's context, which includes the SVM's LIF and routing tables. Client requests are authenticated and authorized by using the specific SVM network configuration. DNS configuration is mandatory when CIFS is used for data access.
Table 20 lists the SVM DNS settings.
Cluster Name | SVM Name | DNS State | Domain Names | DNS Servers |
aff-cluster-01 | San_Boot | enabled | dvpod2.local | 10.10.61.30 10.10.61.31 |
aff-cluster-01 | VDI | enabled | dvpod2.local | 10.10.61.30 10.10.61.31 |
An SVM administrator can administer an SVM and its resources, such as volumes, protocols, and services, depending on the capabilities assigned by the cluster administrator.
Table 21 lists the SVM administrative users.
Table 21 SVM Administrative Users
Cluster Name | SVM Name | Username | Application | Authentication Method | Role Name |
aff-cluster-01 | San_Boot | vsadmin | ontapi | password | vsadmin |
aff-cluster-01 | San_Boot | vsadmin | ssh | password | vsadmin |
aff-cluster-01 | VDI | vsadmin | ontapi | password | vsadmin |
aff-cluster-01 | VDI | vsadmin | ssh | password | vsadmin |
Data ONTAP controls access to files according to the authentication-based and file-based restrictions that you specify. To properly manage file access control, Data ONTAP must communicate with external services such as NIS, LDAP and Active Directory servers. Configuring a storage system for file access using CIFS or NFS requires setting up the appropriate services depending on your environment.
Communication with external services usually occurs over the data LIF of the SVM. In some situations, communication over the data LIF might fail or must be made on a node that does not host data LIFs for the SVM. In this case, the storage system attempts to use node and cluster management LIFs instead. For these reasons, you must ensure that the SVM has a data LIF properly configured to reach all required external services. In addition, all management LIFs in the cluster must be able to reach these external services.
Data ONTAP supports LDAP for user authentication, file access authorization, user lookup and mapping services between NFS and CIFS. If the SVM is set up to use LDAP as a name service using the -ns-switch ldap option or for name mapping using the -nm-switch ldap option, you should create an LDAP configuration for it. Clustered Data ONTAP supports only the RFC 2307 schema for LDAP authentication of SVM accounts. It does not support any other schemas, such as Active Directory Identity Management for UNIX (AD-IDMU) and Active Directory Services for UNIX (AD-SFU).
A job is defined as any asynchronous task. Jobs are typically long-running volume operations such as copy, move, and mirror. Jobs are placed into a job queue and run when resources are available. If a job is consuming too many system resources, you can pause or stop it until there is less demand on the system.
Many tasks—for example, volume snapshots and mirror replications—can be configured to run on specified schedules by using one of the system-wide defined job schedules. Schedules that run at specific times are known as cron schedules because of their similarity to UNIX cron schedules.
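As a minimal illustration of how the day-time cron form used in Table 22 maps to actual run times, the following hypothetical Python helper (not part of Data ONTAP) expands a schedule written as "@H:MM,H:MM,..." such as the 8hour schedule and returns the next run time after a given moment. It handles only that daily form; interval and weekly schedules are not covered.

# Sketch only: interprets "@H:MM,H:MM,..." as a list of daily run times.
from datetime import datetime, timedelta

def next_run(schedule, now):
    """schedule: for example "@2:15,10:15,18:15" (daily run times)."""
    times = []
    for token in schedule.lstrip("@").split(","):
        hour, minute = (int(part) for part in token.split(":"))
        times.append((hour, minute))
    candidates = [now.replace(hour=h, minute=m, second=0, microsecond=0)
                  for h, m in sorted(times)]
    for run in candidates:
        if run > now:
            return run
    return candidates[0] + timedelta(days=1)   # wrap to the first run time tomorrow

print(next_run("@2:15,10:15,18:15", datetime(2016, 3, 1, 11, 0)))  # 2016-03-01 18:15:00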
Table 22 lists the configured job schedules.
Cluster Name | Schedule Name | Type | Description |
aff-cluster-01 | 1min | cron | @:00,:01,:02,:03,:04,:05,:06,:07,:08,:09,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,:33,:34,:35,:36,:37,:38,:39,:40,:41,:42,:43,:44,:45,:46,:47,:48,:49,:50,:51,:52,:53,:54,:55,:56,:57,:58,:59 |
aff-cluster-01 | 5min | cron | @:00,:05,:10,:15,:20,:25,:30,:35,:40,:45,:50,:55 |
aff-cluster-01 | 8hour | cron | @2:15,10:15,18:15 |
aff-cluster-01 | Auto Balance Aggregate Scheduler | interval | Every 1h |
aff-cluster-01 | daily | cron | @0:10 |
aff-cluster-01 | hourly | cron | @:05 |
aff-cluster-01 | RepositoryBalanceMonitorJobSchedule | interval | Every 10m |
aff-cluster-01 | weekly | cron | Sun@0:15 |
Clustered Data ONTAP utilizes policies for many configuration items. These policies are created at the cluster level and are then available for use by the SVMs. For example, Snapshot policies are created and then applied to the FlexVol volumes for which they take snapshots. Specific Snapshot configurations are not defined on a per FlexVol volume basis.
You can create a snapshot policy to specify the frequency and maximum number of automatically created Snapshot copies.
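The keep counts in a snapshot policy translate into a simple retention rule: for each schedule, only the configured number of automatic copies is retained and older copies age out. The sketch below (illustrative Python, not Data ONTAP code) applies that rule to a list of snapshots; the tuple layout and the policy dictionary are assumptions made for the example.

# Sketch of snapshot-policy retention: keep at most 'count' copies per schedule.
from collections import defaultdict

def snapshots_to_delete(snapshots, policy):
    """snapshots: list of (name, schedule, created) tuples.
    policy: dict of schedule -> keep count, for example the 'default' policy in
    Table 23: {"hourly": 6, "daily": 2, "weekly": 2}."""
    by_schedule = defaultdict(list)
    for snap in snapshots:
        by_schedule[snap[1]].append(snap)
    to_delete = []
    for schedule, snaps in by_schedule.items():
        keep = policy.get(schedule, 0)
        snaps.sort(key=lambda s: s[2])                     # oldest first
        to_delete.extend(snaps[:max(len(snaps) - keep, 0)])
    return to_delete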
Table 23 lists the snapshot policy settings.
Cluster Name | SVM Name | Policy Name | Enabled | Total Schedules | Schedule Name | Schedule Prefix | Schedule Count |
aff-cluster-01 | aff-cluster-01 | default | True | 3 | daily hourly weekly | daily hourly weekly | 2 6 2 |
aff-cluster-01 | aff-cluster-01 | default-1weekly | True | 3 | daily hourly weekly | daily hourly weekly | 2 6 1 |
aff-cluster-01 | aff-cluster-01 | none | False | 0 | | | |
A SnapMirror policy is either a cluster-wide or SVM-wide policy that defines the SnapMirror parameters between the source and destination volumes and is applied to SnapMirror relationships. The SVM parameter defines if the policy is applicable to the entire cluster or a specific SVM.
SnapMirror policies are only available starting with clustered Data ONTAP 8.2.
Table 24 lists the SnapMirror policy settings.
Cluster Name | SVM Name | Policy Name | Policy Type | Policy Owner | Tries Limit | Total Rules | SnapMirror Label | Keep | Preserve | Warn |
aff-cluster-01 | aff-cluster-01 | DPDefault | async_mirror | cluster_admin | 8 | 1 | sm_created | 1 | False | 0 |
aff-cluster-01 | aff-cluster-01 | MirrorAllSnapshots | async_mirror | cluster_admin | 8 | 2 | all_source_snapshots sm_created | 1 1 | False False | 0 0 |
aff-cluster-01 | aff-cluster-01 | MirrorAndVault | mirror_vault | cluster_admin | 8 | 3 | daily sm_created weekly | 7 1 52 | False False False | 0 0 0 |
aff-cluster-01 | aff-cluster-01 | MirrorLatest | async_mirror | cluster_admin | 8 | 1 | sm_created | 1 | False | 0 |
aff-cluster-01 | aff-cluster-01 | XDPDefault | vault | cluster_admin | 8 | 2 | daily weekly | 7 52 | False False | 0 0 |
Export policies enable you to restrict access to volumes to clients that match specific IP addresses and specific authentication types. An export policy with export rules must exist on an SVM for clients to access data. Each volume is associated with one export policy. Each export policy is identified by a unique name and a unique numeric ID. A Data ONTAP cluster can contain up to 1,024 export policies.
Export policies consist of individual export rules. An export policy can contain a large number of rules (approximately 4,000). Each rule specifies access permissions to volumes for one or more clients. The clients can be specified by host name, IP address, or netgroup. Rules are processed in the order in which they appear in the export policy. The rule order is dictated by the rule index number.
The rule also specifies the authentication types that are required for both read-only and read/write operations. To have any access to a volume, matching clients must authenticate with the authentication type specified by the read-only rule. To have write access to the volume, matching clients must authenticate with the authentication type specified by the read/write rule.
If a client makes an access request that is not permitted by the applicable export policy, the request fails with a permission-denied message. If a client IP address does not match any rule in the volume's export policy, then access is denied. If an export policy is empty, then all accesses are implicitly denied. Export rules can use host entries from a netgroup.
You can modify an export policy dynamically on a system running Clustered Data ONTAP. Starting with clustered Data ONTAP 8.2, export policies are only needed to control access to NFS clients. Access to Windows clients is controlled and managed by access control lists (ACLs) defined on the CIFS shares.
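The rule evaluation just described can be summarized in a short Python sketch: rules are checked in index order, the first client match decides, and an empty policy or no match denies access. This is a simplified illustration of the behavior, not Data ONTAP code; superuser and anonymous-user handling are omitted, and only the "never" value is treated as a denial in the RO/RW rule.

# Minimal sketch of export-policy rule evaluation as described above.
import ipaddress

def evaluate_export_policy(rules, client_ip, protocol, want_write):
    """rules: list of dicts sorted by 'index', shaped like the rows in Table 25."""
    client = ipaddress.ip_address(client_ip)
    for rule in sorted(rules, key=lambda r: r["index"]):
        if protocol not in rule["protocols"]:
            continue
        if client not in ipaddress.ip_network(rule["client_match"], strict=False):
            continue
        # First matching rule decides; 'never' in the RO/RW rule denies access.
        allowed = rule["rw_rule"] if want_write else rule["ro_rule"]
        return allowed != "never"
    return False   # no matching rule (or an empty policy) means access is denied

# Example using one of the NFS rules from Table 25:
rules = [{"index": 1, "client_match": "10.10.63.117/32",
          "protocols": {"nfs"}, "ro_rule": "sys", "rw_rule": "sys"}]
print(evaluate_export_policy(rules, "10.10.63.117", "nfs", want_write=True))  # True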
Table 25 lists the export policy rules.
Cluster Name | SVM Name | Policy Name | Rule Index | Client Match | Protocol | RO Rule | RW Rule | Anon Userid | Super User |
aff-cluster-01 | VDI | CIFS | 1 | 10.10.62.0/24 | cifs | any | any | 65534 | any |
aff-cluster-01 | VDI | default | 1 | 10.10.63.116 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 2 | 10.10.63.115 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 3 | 10.10.63.114 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 4 | 10.10.63.123 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 5 | 10.10.63.122 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 6 | 10.10.63.124 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 7 | 10.10.63.121 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 8 | 10.10.63.118 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 9 | 10.10.63.117 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 10 | 10.10.63.129 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 11 | 10.10.63.128 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 12 | 10.10.63.125 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 13 | 10.10.63.120 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 14 | 10.10.63.119 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 15 | 10.10.63.108 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 16 | 10.10.63.109 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 17 | 10.10.63.110 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 18 | 10.10.63.112 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 19 | 10.10.63.111 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 20 | 10.10.63.102 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 21 | 10.10.63.103 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 22 | 10.10.63.100 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 23 | 10.10.63.101 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 24 | 10.10.63.107 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 25 | 10.10.63.104 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 26 | 10.10.63.105 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 27 | 10.10.63.106 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 28 | 10.10.63.113 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | default | 30 | 10.10.62.0/24 | cifs | any | any | 65534 | any |
aff-cluster-01 | VDI | NFS | 1 | 10.10.63.117 | nfs | sys | sys | 65534 | any |
aff-cluster-01 | VDI | NFS | 2 | 10.10.63.116 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 3 | 10.10.63.115 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 4 | 10.10.63.114 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 5 | 10.10.63.123 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 6 | 10.10.63.122 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 7 | 10.10.63.124 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 8 | 10.10.63.121 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 9 | 10.10.63.118 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 10 | 10.10.63.117 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 11 | 10.10.63.129 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 12 | 10.10.63.128 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 13 | 10.10.63.125 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 14 | 10.10.63.120 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 15 | 10.10.63.119 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 16 | 10.10.63.108 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 17 | 10.10.63.109 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 18 | 10.10.63.110 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 19 | 10.10.63.112 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 20 | 10.10.63.111 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 21 | 10.10.63.102 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 22 | 10.10.63.103 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 23 | 10.10.63.100 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 24 | 10.10.63.101 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 25 | 10.10.63.107 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 26 | 10.10.63.104 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 27 | 10.10.63.105 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 28 | 10.10.63.106 | nfs | sys | sys | 65534 | sys |
aff-cluster-01 | VDI | NFS | 29 | 10.10.63.113 | nfs | sys | sys | 65534 | sys |
Setting up a firewall enhances the security of the cluster and helps prevent unauthorized access to the storage system. By default, the firewall service allows remote systems access to a specific set of default services for data, management, and intercluster LIFs.
Firewall policies can be used to control access to management service protocols such as SSH, HTTP, HTTPS, Telnet, NTP, NDMP, NDMPS, RSH, DNS, or SNMP. Firewall policies cannot be set for data protocols such as NFS or CIFS.
Table 26 lists the firewall policy settings for clustered Data ONTAP 8.3 or later.
Table 26 Firewall Policies for Clustered Data ONTAP 8.3
Cluster Name | SVM Name | Policy Name | Service Name | IPspace Name | AllowList |
aff-cluster-01 | aff-cluster-01 | data | dns | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | data | ndmp | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | data | ndmps | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | intercluster | https | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | intercluster | ndmp | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | intercluster | ndmps | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | mgmt | dns | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | mgmt | http | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | mgmt | https | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | mgmt | ndmp | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | mgmt | ndmps | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | mgmt | ntp | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | mgmt | snmp | Default | 0.0.0.0/0 |
aff-cluster-01 | aff-cluster-01 | mgmt | ssh | Default | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | data | dns | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | data | ndmp | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | data | ndmps | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | intercluster | https | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | intercluster | ndmp | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | intercluster | ndmps | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | mgmt | dns | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | mgmt | http | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | mgmt | https | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | mgmt | ndmp | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | mgmt | ndmps | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | mgmt | ntp | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | mgmt | snmp | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | iSCSI | mgmt | ssh | iSCSI | 0.0.0.0/0 |
aff-cluster-01 | NAS | data | dns | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | data | ndmp | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | data | ndmps | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | intercluster | https | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | intercluster | ndmp | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | intercluster | ndmps | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | mgmt | dns | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | mgmt | http | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | mgmt | https | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | mgmt | ndmp | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | mgmt | ndmps | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | mgmt | ntp | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | mgmt | snmp | NAS | 0.0.0.0/0 |
aff-cluster-01 | NAS | mgmt | ssh | NAS | 0.0.0.0/0 |
Volume efficiency policies can be used to define the deduplication schedule and duration on a FlexVol volume or an Infinite Volume.
Table 27 lists the volume efficiency policy settings.
Table 27 Volume Efficiency Policies
Cluster Name | SVM Name | Policy Name | Enabled | Schedule | Duration (Hours) | QoS Policy |
aff-cluster-01 | San_Boot | default | True | daily | | best_effort |
aff-cluster-01 | San_Boot | inline-only | | | | |
aff-cluster-01 | VDI | Always_On | True | 1min | - | background |
aff-cluster-01 | VDI | default | True | daily | | best_effort |
aff-cluster-01 | VDI | inline-only | | | | |
Your storage system supports physical network interfaces, such as Ethernet and Converged Network Adapter (CNA) ports, and virtual network interfaces, such as interface groups and VLANs. Physical and virtual network interfaces have user-definable attributes such as MTU, speed, and flow control.
LIFs are virtual network interfaces associated with SVMs and are assigned to failover groups, which are made up of physical ports, interface groups, and/or VLANs. A LIF is an IP address with associated characteristics, such as a role, a home port, a home node, a routing group, a list of ports to fail over to and a firewall policy.
IPv4 and IPv6 are supported on all storage platforms starting with clustered Data ONTAP 8.2.
Your storage system might support the following types of physical network interfaces depending on the platform:
§ 10/100/1000 Ethernet
§ 10 Gigabit Ethernet
Most storage system models have a physical network interface named e0M. This is a low-bandwidth interface of 100 Mbps that is used only for Data ONTAP management activities, such as running a Telnet, SSH, or RSH session. This physical Ethernet port is also shared by the storage controllers’ out-of-band remote management port (platform dependent), which is known by one of the following names: Baseboard Management Controller (BMC), Remote LAN Management (RLM), or Service Processor (SP).
Ports are either physical ports (NICs) or virtualized ports, such as interface groups or VLANs. Interface groups treat several physical ports as a single port, while VLANs subdivide a physical port into multiple separate virtual ports.
You can modify the MTU, autonegotiation, duplex, flow control, and speed settings of a physical network port or interface group.
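For example, the MTU and administrative flow control of a port can be changed with the network port modify command. The commands below are a sketch that mirrors the values reported in Table 28; adjust the node, port, and settings as required for your environment.
network port modify -node aff-cluster-01-01 -port e0e -mtu 9000 -flowcontrol-admin none
network port modify -node aff-cluster-01-01 -port a0a -mtu 9000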
Table 28 lists the network port settings for clustered Data ONTAP 8.3 or later.
Table 28 Network Port Settings for Clustered Data ONTAP 8.3
Node Name | Port Name | Link Status | Port Type | MTU Size | Flow Control (Admin/Oper) | IPspace Name | Broadcast Domain |
aff-cluster-01-01 | a0a | up | if_group | 9000 | full/- | Default | |
aff-cluster-01-01 | a0a-63 | up | vlan | 9000 | full/- | NAS | NFS |
aff-cluster-01-01 | a0b | up | if_group | 9000 | full/- | Default | |
aff-cluster-01-01 | a0b-62 | up | vlan | 9000 | full/- | NAS | CIFS |
aff-cluster-01-01 | a0b-64 | up | vlan | 9000 | full/- | iSCSI | iSCSI_A |
aff-cluster-01-01 | a0b-65 | up | vlan | 9000 | full/- | iSCSI | iSCSI_B |
aff-cluster-01-01 | e0a | up | physical | 9000 | none/none | Cluster | Cluster |
aff-cluster-01-01 | e0b | up | physical | 9000 | none/none | Cluster | Cluster |
aff-cluster-01-01 | e0c | up | physical | 9000 | none/none | Cluster | Cluster |
aff-cluster-01-01 | e0d | up | physical | 9000 | none/none | Cluster | Cluster |
aff-cluster-01-01 | e0e | up | physical | 9000 | none/none | Default | |
aff-cluster-01-01 | e0f | up | physical | 9000 | none/none | Default | |
aff-cluster-01-01 | e0g | up | physical | 9000 | none/none | Default | |
aff-cluster-01-01 | e0h | up | physical | 9000 | none/none | Default | |
aff-cluster-01-01 | e0i | up | physical | 1500 | none/none | Default | Default |
aff-cluster-01-01 | e0j | down | physical | 1500 | none/none | Default | |
aff-cluster-01-01 | e0k | down | physical | 1500 | none/none | Default | |
aff-cluster-01-01 | e0l | down | physical | 1500 | none/none | Default | |
aff-cluster-01-01 | e0M | up | physical | 1500 | none/none | Default | Default |
aff-cluster-01-01 | e1a | up | physical | 9000 | none/none | Default | |
aff-cluster-01-01 | e1b | up | physical | 9000 | none/none | Default | |
aff-cluster-01-01 | e4a | up | physical | 9000 | none/none | Default | |
aff-cluster-01-01 | e4b | up | physical | 9000 | none/none | Default | |
aff-cluster-01-02 | a0a | up | if_group | 9000 | full/- | Default | |
aff-cluster-01-02 | a0a-63 | up | vlan | 9000 | full/- | NAS | NFS |
aff-cluster-01-02 | a0b | up | if_group | 9000 | full/- | Default | |
aff-cluster-01-02 | a0b-62 | up | vlan | 9000 | full/- | NAS | CIFS |
aff-cluster-01-02 | a0b-64 | up | vlan | 9000 | full/- | iSCSI | iSCSI_A |
aff-cluster-01-02 | a0b-65 | up | vlan | 9000 | full/- | iSCSI | iSCSI_B |
aff-cluster-01-02 | e0a | up | physical | 9000 | none/none | Cluster | Cluster |
aff-cluster-01-02 | e0b | up | physical | 9000 | none/none | Cluster | Cluster |
aff-cluster-01-02 | e0c | up | physical | 9000 | none/none | Cluster | Cluster |
aff-cluster-01-02 | e0d | up | physical | 9000 | none/none | Cluster | Cluster |
aff-cluster-01-02 | e0e | up | physical | 9000 | none/none | Default | |
aff-cluster-01-02 | e0f | up | physical | 9000 | none/none | Default | |
aff-cluster-01-02 | e0g | up | physical | 9000 | none/none | Default | |
aff-cluster-01-02 | e0h | up | physical | 9000 | none/none | Default | |
aff-cluster-01-02 | e0i | up | physical | 1500 | none/none | Default | Default |
aff-cluster-01-02 | e0j | down | physical | 1500 | none/none | Default | |
aff-cluster-01-02 | e0k | down | physical | 1500 | none/none | Default | |
aff-cluster-01-02 | e0l | down | physical | 1500 | none/none | Default | |
aff-cluster-01-02 | e0M | up | physical | 1500 | none/none | Default | Default |
aff-cluster-01-02 | e1a | up | physical | 9000 | none/none | Default | |
aff-cluster-01-02 | e1b | up | physical | 9000 | none/none | Default | |
aff-cluster-01-02 | e4a | up | physical | 9000 | none/none | Default | |
aff-cluster-01-02 | e4b | up | physical | 9000 | none/none | Default | |
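The IPspace and broadcast domain assignments shown in Table 28 are defined before the data LIFs are created. The commands below are a sketch of the general sequence for clustered Data ONTAP 8.3 or later; the port lists are abbreviated to the a0a-63 and a0b-64 VLAN ports from the table and should be extended to match the full configuration.
network ipspace create -ipspace NAS
network ipspace create -ipspace iSCSI
network port broadcast-domain create -ipspace NAS -broadcast-domain NFS -mtu 9000 -ports aff-cluster-01-01:a0a-63,aff-cluster-01-02:a0a-63
network port broadcast-domain create -ipspace iSCSI -broadcast-domain iSCSI_A -mtu 9000 -ports aff-cluster-01-01:a0b-64,aff-cluster-01-02:a0b-64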
An interface group (ifgrp) is a port aggregate of two or more physical ports that acts as a single trunk port, providing increased resiliency, increased availability, and load distribution. You can create three different types of interface groups on your storage system: single-mode, static multimode, and dynamic multimode. Each type provides a different level of fault tolerance, and multimode interface groups also provide methods for load balancing network traffic.
Table 29 lists the network port ifgrp settings.
Table 29 Network Port Ifgrp Settings
Node Name | Ifgrp Name | Mode | Distribution Function | Ports |
aff-cluster-01-01 | a0a | multimode_lacp | port | e0e, e0g, e4a, e4b |
aff-cluster-01-01 | a0b | multimode_lacp | port | e0f, e0h, e1a, e1b |
aff-cluster-01-02 | a0a | multimode_lacp | port | e0e, e0g, e1a, e1b |
aff-cluster-01-02 | a0b | multimode_lacp | port | e0f, e0h, e4a, e4b |
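The interface groups in Table 29 can be created with the network port ifgrp commands. The sketch below covers only the a0a group on node aff-cluster-01-01; repeat the same pattern for the remaining groups listed in the table.
network port ifgrp create -node aff-cluster-01-01 -ifgrp a0a -distr-func port -mode multimode_lacp
network port ifgrp add-port -node aff-cluster-01-01 -ifgrp a0a -port e0e
network port ifgrp add-port -node aff-cluster-01-01 -ifgrp a0a -port e0g
network port ifgrp add-port -node aff-cluster-01-01 -ifgrp a0a -port e4a
network port ifgrp add-port -node aff-cluster-01-01 -ifgrp a0a -port e4b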
VLANs provide logical segmentation of networks by creating separate broadcast domains. A VLAN can span multiple physical network segments. The end stations belonging to a VLAN are related by function or application.
Table 30 lists the network port VLAN settings.
Table 30 Network Port VLAN Settings
Node Name | Interface Name | VLAN ID | Parent Interface |
aff-cluster-01-01 | a0a-63 | 63 | a0a |
aff-cluster-01-01 | a0b-62 | 62 | a0b |
aff-cluster-01-01 | a0b-64 | 64 | a0b |
aff-cluster-01-01 | a0b-65 | 65 | a0b |
aff-cluster-01-02 | a0a-63 | 63 | a0a |
aff-cluster-01-02 | a0b-62 | 62 | a0b |
aff-cluster-01-02 | a0b-64 | 64 | a0b |
aff-cluster-01-02 | a0b-65 | 65 | a0b |
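The VLAN interfaces in Table 30 follow the <parent interface>-<VLAN ID> naming convention and can be created with the network port vlan command, as in the following sketch for node aff-cluster-01-01 (repeat for the second node):
network port vlan create -node aff-cluster-01-01 -vlan-name a0a-63
network port vlan create -node aff-cluster-01-01 -vlan-name a0b-62
network port vlan create -node aff-cluster-01-01 -vlan-name a0b-64
network port vlan create -node aff-cluster-01-01 -vlan-name a0b-65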
A LIF is an IP address with associated characteristics, such as a role, a home port, a home node, a routing group, a list of ports to fail over to, and a firewall policy. You can configure LIFs on ports over which the cluster sends and receives communications over the network.
LIFs can be hosted on the following ports:
§ Physical ports that are not part of interface groups
§ Interface groups
§ VLANs
§ Physical ports or interface groups that host VLANs
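As an illustration, an NFS data LIF homed on the a0a-63 VLAN interface could be created as sketched below. The LIF name nfs_lif01 and the address and netmask placeholders are hypothetical; the SVM name VDI is taken from the efficiency policy table earlier in this section.
network interface create -vserver VDI -lif nfs_lif01 -role data -data-protocol nfs -home-node aff-cluster-01-01 -home-port a0a-63 -address <NFS_LIF_IP> -netmask <netmask> -status-admin up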
While configuring SAN protocols such as FC on a LIF, the LIF role determines the kind o