FlashStack Cisco UCS X-Series and Pure Storage for Citrix Virtual Apps and Desktops Design Guide

Updated: December 21, 2023

Bias-Free Language

The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.



Published: December 2023

In partnership with:

About the Cisco Validated Design Program

The Cisco Validated Design (CVD) program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information, go to: http://www.cisco.com/go/designzone.

Executive Summary

Cisco Design Guides consist of systems and solutions that are designed, tested, and documented to facilitate and improve customer deployments. These designs incorporate a wide range of technologies and products into a portfolio of solutions that have been developed to address the business needs of our customers.

This document details the design of the FlashStack Virtual Desktop Infrastructure for Citrix Virtual Apps and Desktops 2203 and VMware vSphere 8.0 U2, a validated Converged Infrastructure (CI) jointly developed by Cisco and Pure Storage.

The solution explains the deployment of a predesigned, best-practice data center architecture with:

   Citrix Virtual Apps and Desktops

   VMware vSphere.

   Cisco Unified Computing System (Cisco UCS) incorporating the Cisco X-Series modular platform.

   Cisco Nexus 9000 family of switches.

   Cisco MDS 9000 family of Fibre Channel switches.

   Pure Storage FlashArray//X70 all-flash array supporting Fibre Channel storage access.

Additionally, this FlashStack solution is delivered as Infrastructure as Code (IaC) to eliminate error-prone manual tasks, allowing quicker and more consistent solution deployments. The Cisco Intersight cloud platform delivers monitoring, orchestration, workload optimization, and lifecycle management capabilities for the FlashStack solution.

When deployed, the architecture presents a robust infrastructure viable for a wide range of application workloads implemented as a Virtual Desktop Infrastructure (VDI).

If you’re interested in understanding the FlashStack design and deployment details, including the configuration of various elements of design and associated best practices, refer to the Cisco Validated Designs for FlashStack, here: Data Center Design Guides - FlashStack Platforms

Solution Overview

This chapter contains the following:

   Audience

   Purpose of this Document

   What’s New in this Release?

The current industry trend in data center design is towards shared infrastructures. By using virtualization along with pre-validated IT platforms, enterprise customers have embarked on the journey to the cloud by moving away from application silos and toward shared infrastructure that can be quickly deployed, thereby increasing agility and reducing costs. Cisco, Pure Storage, Citrix, and VMware have partnered to deliver this Cisco Validated Design, which uses best-of-breed storage, server, and network components to serve as the foundation for desktop virtualization workloads, enabling efficient architectural designs that can be quickly and confidently deployed.

Audience

The intended audience for this document includes, but is not limited to, IT architects, sales engineers, field consultants, professional services, IT managers, IT engineers, partners, and customers who are interested in learning about and deploying Virtual Desktop Infrastructure (VDI).

Purpose of this Document

This document provides a step-by-step design, configuration, and implementation guide for the Cisco Validated Design for:

   Large-scale Citrix Virtual Apps and Desktops 2203 VDI.

   Pure Storage FlashArray//X array.

   Cisco UCS X210c M7 Blade Servers running VMware vSphere 8.0 U2.

   Cisco Nexus 9000 Series Ethernet Switches.

   Cisco MDS 9100 Series Multilayer Fibre Channel Switches.

What’s New in this Release?

Highlights for this design include:

   Support for the Cisco UCS X9508 chassis with Cisco UCS X210c M7 compute nodes.

   Support for Pure Storage FlashArray//X70 with Purity version 6.3.14.

   Citrix Virtual Apps and Desktops 2203 LTSR.

   Support for VMware vSphere 8.0 U2.

   Support for VMware vCenter 8.0 U2 to set up and manage the virtual infrastructure as well as integration of the virtual environment with Cisco Intersight software.

   Support for Cisco Intersight platform to deploy, maintain, and support the FlashStack components.

   Support for Cisco Intersight Assist virtual appliance to help connect the Pure Storage FlashArray and VMware vCenter with the Cisco Intersight platform.

   Fully automated solution deployment covering the FlashStack infrastructure and VMware vSphere virtualization.

These factors have led to the need for predesigned computing, networking, and storage building blocks optimized to lower the initial design cost, simplify management, and enable horizontal scalability and high levels of utilization.

The use cases include:

   Enterprise Data Center

   Service Provider Data Center

   Large Commercial Data Center

Technology Overview

This chapter contains the following:

   FlashStack

   Cisco Unified Computing System

   Cisco UCS Fabric Interconnect

   Cisco UCS Virtual Interface Cards (VICs)

   Cisco Switches

   Citrix Virtual Apps and Desktops

   Citrix Virtual Apps and Desktops RDS Sessions and Windows 10 Desktops

   Citrix Virtual Apps and Desktops Design Fundamentals

   VMware vSphere 8.0 Update 2

   Red Hat Ansible

   Cisco Intersight Assist Device Connector for VMware vCenter and Pure Storage FlashArray

   Pure Storage for VDI

   Purity for FlashArray

   Pure1

Cisco and Pure Storage have partnered to deliver several Cisco Validated Designs, which use best-in-class storage, server, and network components to serve as the foundation for virtualized workloads such as Virtual Desktop Infrastructure (VDI), enabling efficient architectural designs that you can deploy quickly and confidently.

FlashStack

The FlashStack architecture was jointly developed by Cisco and Pure Storage. All FlashStack components are integrated, allowing customers to deploy the solution quickly and economically while eliminating many of the risks associated with researching, designing, building, and deploying similar solutions from the foundation. One of the main benefits of FlashStack is its ability to maintain consistency at scale. Each of the component families shown in Figure 1 (Cisco UCS, Cisco Nexus, Cisco MDS, and Pure Storage FlashArray systems) offers platform and resource options to scale up or scale out the infrastructure while supporting the same features and functions.

Figure 1.          FlashStack components


Cisco Unified Computing System

Cisco Unified Computing System (Cisco UCS) is a next-generation data center platform that integrates computing, networking, storage access, and virtualization resources into a cohesive system designed to reduce total cost of ownership and increase business agility. The system integrates a low-latency, lossless 10-100 Gigabit Ethernet unified network fabric with enterprise-class, x86-architecture servers. The system is an integrated, scalable, multi-chassis platform with a unified management domain for managing all resources.

Cisco Unified Computing System consists of the following subsystems:

   Compute: The compute piece of the system incorporates servers based on Intel Xeon Scalable processors. Servers are available in blade and rack form factors, managed by Cisco UCS Manager.

   Network: The integrated network fabric in the system provides a low-latency, lossless, 10/25/40/100 Gbps Ethernet fabric. Networks for LAN, SAN and management access are consolidated within the fabric. The unified fabric uses the innovative Single Connect technology to lower costs by reducing the number of network adapters, switches, and cables. This in turn lowers the power and cooling needs of the system.

   Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtual environments to support evolving business needs.

   Storage access: Cisco UCS system provides consolidated access to both SAN storage and Network Attached Storage over the unified fabric. This provides customers with storage choices and investment protection. Also, the server administrators can pre-assign storage-access policies to storage resources, for simplified storage connectivity and management leading to increased productivity.

   Management: The system uniquely integrates compute, network, and storage access subsystems, enabling it to be managed as a single entity through Cisco UCS Manager software. Cisco UCS Manager increases IT staff productivity by enabling storage, network, and server administrators to collaborate on Service Profiles that define the desired physical configurations and infrastructure policies for applications. Service Profiles increase business agility by enabling IT to automate and provision resources in minutes instead of days.

Cisco UCS Differentiators

Cisco Unified Computing System is revolutionizing the way servers are managed in the datacenter. The following are the unique differentiators of Cisco Unified Computing System and Cisco UCS Manager:

   Embedded Management: In Cisco UCS, the servers are managed by the embedded firmware in the Fabric Interconnects, eliminating the need for any external physical or virtual devices to manage the servers.

   Unified Fabric: In Cisco UCS, from blade server chassis or rack servers to FI, there is a single Ethernet cable used for LAN, SAN, and management traffic. This converged I/O results in reduced cables, SFPs and adapters – reducing capital and operational expenses of the overall solution.

   Auto Discovery: By simply inserting the blade server in the chassis or connecting the rack server to the fabric interconnect, discovery and inventory of compute resources occurs automatically without any management intervention. The combination of unified fabric and auto-discovery enables the wire-once architecture of Cisco UCS, where compute capability of Cisco UCS can be extended easily while keeping the existing external connectivity to LAN, SAN, and management networks.

   Policy Based Resource Classification: Once Cisco UCS Manager discovers a compute resource, it can be automatically classified to a given resource pool based on policies defined. This capability is useful in multi-tenant cloud computing. This CVD showcases the policy-based resource classification of Cisco UCS Manager.

   Combined Rack and Blade Server Management: Cisco UCS Manager can manage Cisco UCS B-series blade servers and Cisco UCS C-series rack servers under the same Cisco UCS domain. This feature, along with stateless computing makes compute resources truly hardware form factor agnostic.

   Model based Management Architecture: The Cisco UCS Manager architecture and management database is model based, and data driven. An open XML API is provided to operate on the management model. This enables easy and scalable integration of Cisco UCS Manager with other management systems.

   Policies, Pools, Templates: The management approach in Cisco UCS Manager is based on defining policies, pools, and templates, instead of cluttered configuration, which enables a simple, loosely coupled, data driven approach in managing compute, network, and storage resources.

   Loose Referential Integrity: In Cisco UCS Manager, a service profile, port profile, or policy can refer to other policies or logical resources with loose referential integrity. A referred policy need not exist at the time the referring policy is authored, and a referred policy can be deleted even though other policies refer to it. This allows different subject matter experts to work independently from each other, providing great flexibility where experts from different domains, such as network, storage, security, server, and virtualization, work together to accomplish a complex task.

   Policy Resolution: In Cisco UCS Manager, a tree structure of organizational unit hierarchy can be created that mimics real-life tenant and/or organization relationships. Various policies, pools, and templates can be defined at different levels of the organization hierarchy. A policy referring to another policy by name is resolved in the organizational hierarchy with the closest policy match. If no policy with the specific name is found in the hierarchy up to the root organization, then the special policy named "default" is searched. This policy resolution practice enables automation-friendly management APIs and provides great flexibility to owners of different organizations.

   Service Profiles and Stateless Computing: A service profile is a logical representation of a server, carrying its various identities and policies. This logical server can be assigned to any physical compute resource as long as it meets the resource requirements. Stateless computing enables procurement of a server within minutes, which used to take days in legacy server management systems.

   Built-in Multi-Tenancy Support: The combination of policies, pools and templates, loose referential integrity, policy resolution in the organizational hierarchy and a service profiles-based approach to compute resources makes Cisco UCS Manager inherently friendly to multi-tenant environments typically observed in private and public clouds.

   Extended Memory: The enterprise-class Cisco UCS Blade Server extends the capabilities of the Cisco Unified Computing System portfolio in a half-width blade form factor. It harnesses the power of the latest Intel Xeon Scalable Series processor family CPUs and Intel Optane DC Persistent Memory (DCPMM) with up to 18TB of RAM (using 256GB DDR4 DIMMs and 512GB DCPMM).

   Simplified QoS: Even though Fibre Channel and Ethernet are converged in the Cisco UCS fabric, built-in support for QoS and lossless Ethernet makes it seamless. Network Quality of Service (QoS) is simplified in Cisco UCS Manager by representing all system classes in one GUI panel.

Cisco Intersight

Cisco Intersight is a lifecycle management platform for your infrastructure, regardless of where it resides. In your enterprise data center, at the edge, in remote and branch offices, at retail and industrial sites—all these locations present unique management challenges and have typically required separate tools. Cisco Intersight Software as a Service (SaaS) unifies and simplifies your experience of the Cisco Unified Computing System (Cisco UCS).

Cisco Intersight software delivers a new level of cloud-powered intelligence that supports lifecycle management with continuous improvement. It is tightly integrated with the Cisco Technical Assistance Center (TAC). Expertise and information flow seamlessly between Cisco Intersight and IT teams, providing global management of Cisco infrastructure, anywhere. Remediation and problem resolution are supported with automated upload of error logs for rapid root-cause analysis.

Figure 2.          Cisco Intersight


   Automate your infrastructure.

Cisco has a strong track record for management solutions that deliver policy-based automation to daily operations. Intersight SaaS is a natural evolution of our strategies. Cisco designed Cisco UCS to be 100 percent programmable. Cisco Intersight simply moves the control plane from the network into the cloud. Now you can manage your Cisco UCS and infrastructure wherever it resides through a single interface.

   Deploy your way.

If you need to control how your management data is handled, comply with data locality regulations, or consolidate the number of outbound connections from servers, you can use the Cisco Intersight Virtual Appliance for an on-premises experience. Cisco Intersight Virtual Appliance is continuously updated just like the SaaS version, so regardless of which approach you implement, you never have to worry about whether your management software is up to date.

   DevOps ready.

If you are implementing DevOps practices, you can use the Cisco Intersight API with either the cloud-based or virtual appliance offering. Through the API you can configure and manage infrastructure as code—you are not merely configuring an abstraction layer; you are managing the real thing. Through the API and support of cloud-based RESTful API, Terraform providers, Microsoft PowerShell scripts, or Python software, you can automate the deployment of settings and software for both physical and virtual layers. Using the API, you can simplify infrastructure lifecycle operations and increase the speed of continuous application delivery (a brief Python sketch follows this list).

   Pervasive simplicity.

Simplify the user experience by managing your infrastructure regardless of where it is installed.

   Actionable intelligence.

   Use best practices to enable faster, proactive IT operations.

   Gain actionable insight for ongoing improvement and problem avoidance.

   Manage anywhere.

   Deploy in the data center and at the edge with massive scale.

   Get visibility into the health and inventory detail for your Intersight Managed environment on-the-go with the Cisco Intersight Mobile App.
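As a brief illustration of the API access described in this section, the following minimal Python sketch polls Cisco Intersight for its physical server inventory. The resource path /api/v1/compute/PhysicalSummaries is part of the public Intersight REST API; the intersight_signed_session() helper is a hypothetical placeholder for Intersight's HTTP-signature authentication (API key ID plus secret key) and is not implemented here.

# Minimal sketch: query Cisco Intersight for compute inventory.
# intersight_signed_session() is a hypothetical placeholder that would return a
# requests.Session pre-configured with Intersight's HTTP-signature authentication.
import requests

BASE_URL = "https://intersight.com/api/v1"

def intersight_signed_session(api_key_id: str, secret_key_path: str) -> requests.Session:
    """Placeholder: implement HTTP message signing per the Intersight API documentation."""
    raise NotImplementedError

def list_physical_servers(session: requests.Session) -> list:
    # compute/PhysicalSummaries returns one record per managed server
    resp = session.get(BASE_URL + "/compute/PhysicalSummaries",
                       params={"$select": "Name,Model,Serial,OperPowerState"})
    resp.raise_for_status()
    return resp.json().get("Results", [])

if __name__ == "__main__":
    session = intersight_signed_session("your-api-key-id", "SecretKey.txt")
    for server in list_physical_servers(session):
        print(server.get("Name"), server.get("Model"), server.get("Serial"), server.get("OperPowerState"))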

For more information about Cisco Intersight and the different deployment options, go to: Cisco Intersight – Manage your systems anywhere.

Cisco UCS Fabric Interconnect

The Cisco UCS Fabric Interconnect (FI) is a core part of the Cisco Unified Computing System, providing both network connectivity and management capabilities for the system. Depending on the model chosen, the Cisco UCS Fabric Interconnect offers line-rate, low-latency, lossless 10 Gigabit, 25 Gigabit, 40 Gigabit, or 100 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE) and Fibre Channel connectivity. Cisco UCS Fabric Interconnects provide the management and communication backbone for the Cisco UCS C-Series, Cisco UCS X-Series, Cisco UCS B-Series Blade Servers, and Cisco UCS Chassis. All servers and chassis, and therefore all Compute Nodes, attached to the Cisco UCS Fabric Interconnects become part of a single, highly available management domain. In addition, by supporting unified fabrics, the Cisco UCS Fabric Interconnects provide both the LAN and SAN connectivity for all servers within its domain.

For networking performance, the Cisco UCS 6536 uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10/25/40/100 Gigabit Ethernet ports, a switching capacity of 7.42 Tbps per FI and 14.84 Tbps per unified fabric domain, independent of packet size and enabled services. It enables 1600Gbps bandwidth per X9508 chassis with X9108-IFM-100G in addition to enabling end-to-end 100G ethernet and 200G aggregate bandwidth per X210c compute node. With the X9108-IFM-25G and the IOM 2408, it enables 400Gbps bandwidth per chassis per FI domain. The product family supports Cisco® low-latency, lossless 10/25/40/100 Gigabit Ethernet unified network fabric capabilities, which increases the reliability, efficiency, and scalability of Ethernet networks. The 6536 Fabric Interconnect supports multiple traffic classes over a lossless Ethernet fabric from the server through the fabric interconnect. Significant TCO savings come from the Unified Fabric optimized server design in which Network Interface Cards (NICs), Host Bus Adapters (HBAs), cables, and switches can be consolidated.
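To make these bandwidth figures concrete, the short Python check below reproduces the per-chassis arithmetic, assuming two IFMs per Cisco UCS X9508 chassis and eight fabric uplinks per IFM (per the respective IFM data sheets); it is a sketch for illustration only.

# Back-of-the-envelope check of the per-chassis bandwidth figures quoted above.
# Assumes 2 IFMs per X9508 chassis and 8 fabric uplinks per IFM.
IFMS_PER_CHASSIS = 2
UPLINKS_PER_IFM = 8

def chassis_bandwidth_gbps(uplink_speed_gbps):
    return IFMS_PER_CHASSIS * UPLINKS_PER_IFM * uplink_speed_gbps

print(chassis_bandwidth_gbps(100))  # X9108-IFM-100G: 1600 Gbps per chassis
print(chassis_bandwidth_gbps(25))   # X9108-IFM-25G / IOM 2408: 400 Gbps per chassis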

Cisco UCS 6536 Fabric Interconnect

The Cisco UCS Fabric Interconnects (FIs) provide a single point for connectivity and management for the entire Cisco Unified Computing System. Typically deployed as an active/active pair, the system’s FIs integrate all components into a single, highly available management domain controlled by Cisco Intersight. Cisco UCS FIs provide a single unified fabric for the system, with low-latency, lossless, cut-through switching that supports LAN, SAN, and management traffic using a single set of cables.

Figure 3.          Cisco UCS 6536 Fabric Interconnects


The Cisco UCS 6536 utilized in this design is a 36-port Fabric Interconnect. This single-RU device provides 36 10/25/40/100-Gbps Ethernet ports, with 16 8/16/32-Gbps Fibre Channel ports available through 128-Gbps to 4x32-Gbps breakouts on ports 33-36. All 36 ports support breakout cables or QSA interfaces.

Cisco UCS X210c M7 Compute Node

The Cisco UCS X9508 Chassis is designed to host up to eight Cisco UCS X210c M7 Compute Nodes. The hardware details of the Cisco UCS X210c M7 Compute Nodes are shown in Figure 4.

Figure 4.          Cisco UCS X210c M7 Compute Node


The Cisco UCS X210c M7 features:

   CPU: Up to 2x 4th Gen Intel Xeon Scalable Processors with up to 60 cores per processor and up to 2.625 MB Level 3 cache per core and up to 112.5 MB per CPU.

   Memory: Up to 8TB of main memory with 32x 256 GB DDR5-4800 DIMMs.

   Disk storage: Up to six hot-pluggable, solid-state drives (SSDs) or non-volatile memory express (NVMe) 2.5-inch drives with a choice of enterprise-class redundant array of independent disks (RAID) or pass-through controllers, plus up to two M.2 SATA drives with optional hardware RAID.

   Optional front mezzanine GPU module: The Cisco UCS front mezzanine GPU module is a passive PCIe Gen 4.0 front mezzanine option with support for up to two U.2 NVMe drives and two HHHL GPUs.

   mLOM virtual interface cards:

   Cisco UCS Virtual Interface Card (VIC) 15420 occupies the server's modular LAN on motherboard (mLOM) slot, enabling up to 50 Gbps of unified fabric connectivity to each of the chassis intelligent fabric modules (IFMs) for 100 Gbps connectivity per server.

   Cisco UCS Virtual Interface Card (VIC) 15231 occupies the server's modular LAN on motherboard (mLOM) slot, enabling up to 100 Gbps of unified fabric connectivity to each of the chassis intelligent fabric modules (IFMs) for 100 Gbps connectivity per server.

   Optional mezzanine card:

   Cisco UCS 5th Gen Virtual Interface Card (VIC) 15422 can occupy the server's mezzanine slot at the bottom rear of the chassis. This card's I/O connectors link to Cisco UCS X-Fabric technology. An included bridge card extends this VIC's 2x 50 Gbps of network connections through IFM connectors, bringing the total bandwidth to 100 Gbps per fabric (for a total of 200 Gbps per server).

   Cisco UCS PCI Mezz card for X-Fabric can occupy the server's mezzanine slot at the bottom rear of the chassis. This card's I/O connectors link to Cisco UCS X-Fabric modules and enable connectivity to the Cisco UCS X440p PCIe Node.

   All VIC mezzanine cards also provide I/O connections from the Cisco UCS X210c M7 compute node to the X440p PCIe Node.

   Security: The server supports an optional Trusted Platform Module (TPM). Additional security features include a secure boot FPGA and ACT2 anticounterfeit provisions.

Cisco UCS Virtual Interface Cards (VICs)

The Cisco UCS VIC 15000 series is designed for Cisco UCS X-Series M6/M7 Blade Servers, Cisco UCS B-Series M6 Blade Servers, and Cisco UCS C-Series M6/M7 Rack Servers. The adapters are capable of supporting 10/25/40/50/100/200-Gigabit Ethernet and Fibre Channel over Ethernet (FCoE). They incorporate Cisco's next-generation Converged Network Adapter (CNA) technology and offer a comprehensive feature set, providing investment protection for future feature releases.

Cisco UCS VIC 15231

The Cisco UCS VIC 15231 (Figure 5) is a 2x100-Gbps Ethernet/FCoE-capable modular LAN on motherboard (mLOM) adapter designed exclusively for the Cisco UCS X210c Compute Node. The Cisco UCS VIC 15231 enables a policy-based, stateless, agile server infrastructure that can present to the host PCIe standards-compliant interfaces that can be dynamically configured as either NICs or HBAs.

Figure 5.          Cisco UCS VIC 15231


Figure 6.          Cisco UCS VIC 15231 Infrastructure


Cisco Switches

Cisco Nexus 93180YC-FX Switches

The Cisco Nexus 93180YC-EX Switch provides a flexible line-rate Layer 2 and Layer 3 feature set in a compact form factor. Designed with Cisco Cloud Scale technology, it supports highly scalable cloud architectures. With the option to operate in Cisco NX-OS or Application Centric Infrastructure (ACI) mode, it can be deployed across enterprise, service provider, and Web 2.0 data centers.

   Architectural Flexibility

   Includes top-of-rack or middle-of-row fiber-based server access connectivity for traditional and leaf-spine architectures.

   Leaf node support for Cisco ACI architecture is provided in the roadmap.

   Increase scale and simplify management through Cisco Nexus 2000 Fabric Extender support.

   Feature Rich

   Enhanced Cisco NX-OS Software is designed for performance, resiliency, scalability, manageability, and programmability.

   ACI-ready infrastructure helps you take advantage of automated policy-based systems management.

   Virtual Extensible LAN (VXLAN) routing provides network services.

   Rich traffic flow telemetry with line-rate data collection.

   Real-time buffer utilization per port and per queue, for monitoring traffic micro-bursts and application traffic patterns.

   Highly Available and Efficient Design

   High-density, non-blocking architecture.

   Easily deployed into either a hot-aisle and cold-aisle configuration.

   Redundant, hot-swappable power supplies and fan trays.

   Simplified Operations

   Power-On Auto Provisioning (POAP) support allows for simplified software upgrades and configuration file installation.

   An intelligent API offers switch management through remote procedure calls (RPCs, JSON, or XML) over an HTTP/HTTPS infrastructure (a brief Python sketch follows this feature list).

   Python Scripting for programmatic access to the switch command-line interface (CLI).

   Hot and cold patching, and online diagnostics.

   Investment Protection

A Cisco 40-GbE bidirectional transceiver allows reuse of an existing 10 Gigabit Ethernet multimode cabling plant for 40 Gigabit Ethernet. Support for 1-GbE and 10-GbE access connectivity is provided for data centers migrating access switching infrastructure to faster speeds. The following is supported:

   1.8 Tbps of bandwidth in a 1 RU form factor.

   48 fixed 1/10/25-GbE SFP+ ports.

   6 fixed 40/100-GbE QSFP+ ports for uplink connectivity.

   Latency of less than 2 microseconds.

   Front-to-back or back-to-front airflow configurations.

   1+1 redundant hot-swappable 80 Plus Platinum-certified power supplies.

   Hot swappable 3+1 redundant fan trays.
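The NX-API capability called out in the feature list above can be exercised with a few lines of Python. The sketch below posts a show command to the switch's JSON-RPC endpoint; it assumes NX-API has been enabled on the switch (feature nxapi), and the hostname and credentials are placeholders.

# Minimal NX-API (JSON-RPC) sketch: run a show command against a Cisco Nexus 9000.
# Assumes "feature nxapi" is enabled on the switch; address and credentials are placeholders.
import requests

SWITCH = "https://nexus-a.example.com"   # placeholder management address
AUTH = ("admin", "password")             # placeholder credentials

payload = [{
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {"cmd": "show vlan brief", "version": 1},
    "id": 1,
}]

resp = requests.post(SWITCH + "/ins",
                     json=payload,
                     headers={"content-type": "application/json-rpc"},
                     auth=AUTH,
                     verify=False)   # lab sketch only; validate certificates in production
resp.raise_for_status()

data = resp.json()
result = data[0]["result"] if isinstance(data, list) else data["result"]
print(result["body"])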

Figure 7.          Cisco Nexus 93180YC-EX Switch


Cisco MDS 9132T 32-Gb Fibre Channel Switch

The next-generation Cisco MDS 9132T 32-Gb 32-Port Fibre Channel Switch (Figure 8) provides high-speed Fibre Channel connectivity from the server rack to the SAN core. It empowers small, midsize, and large enterprises that are rapidly deploying cloud-scale applications using extremely dense virtualized servers, providing the dual benefits of greater bandwidth and consolidation.

Small-scale SAN architectures can be built from the foundation using this low-cost, low-power, non-blocking, line-rate, and low-latency, bi-directional airflow capable, fixed standalone SAN switch connecting both storage and host ports.

Medium-size to large-scale SAN architectures built with SAN core directors can expand 32-Gb connectivity to the server rack using these switches either in switch mode or Network Port Virtualization (NPV) mode.

Additionally, investing in this switch for the lower-speed (4-, 8-, or 16-Gb) server rack gives you the option to upgrade to 32-Gb server connectivity in the future using the 32-Gb Host Bus Adapters (HBAs) that are available today. The Cisco MDS 9132T 32-Gb 32-Port Fibre Channel switch also provides unmatched flexibility through a unique port expansion module (Figure 9) that provides a robust, cost-effective, field-swappable port upgrade option.

This switch also offers state-of-the-art SAN analytics and telemetry capabilities that have been built into this next-generation hardware platform. This new state-of-the-art technology couples the next-generation port ASIC with a fully dedicated Network Processing Unit designed to complete analytics calculations in real time. The telemetry data extracted from the inspection of the frame headers are calculated on board (within the switch) and, using an industry-leading open format, can be streamed to any analytics-visualization platform. This switch also includes a dedicated 10/100/1000BASE-T telemetry port to maximize data delivery to any telemetry receiver including Cisco Data Center Network Manager.

Figure 8.          Cisco MDS 9132T 32-Gb Fibre Channel Switch


Figure 9.          Cisco MDS 9132T 32-Gb 16-Port Fibre Channel Port Expansion Module


   Features

   High performance: Cisco MDS 9132T architecture, with chip-integrated nonblocking arbitration, provides consistent 32-Gb low-latency performance across all traffic conditions for every Fibre Channel port on the switch.

   Capital Expenditure (CapEx) savings: The 32-Gb ports allow you to deploy them on existing 16- or 8-Gb transceivers, reducing initial CapEx with an option to upgrade to 32-Gb transceivers and adapters in the future.

   High availability: Cisco MDS 9132T switches continue to provide the same outstanding availability and reliability as the previous-generation Cisco MDS 9000 Family switches by providing optional redundancy on all major components such as the power supply and fan. Dual power supplies also facilitate redundant power grids.

   Pay-as-you-grow: The Cisco MDS 9132T Fibre Channel switch provides an option to deploy as few as eight 32-Gb Fibre Channel ports in the entry-level variant, which can grow by 8 ports to 16 ports, and thereafter, with a 16-port expansion module of 32-Gb ports, to up to 32 ports. This approach results in lower initial investment and power consumption for entry-level configurations of up to 16 ports compared to a fully loaded switch. Upgrading through an expansion module also reduces the overhead of managing multiple instances of port activation licenses on the switch. This unique combination of port upgrade options allows four possible configurations: 8 ports, 16 ports, 24 ports, and 32 ports.

   Next-generation Application-Specific Integrated Circuit (ASIC): The Cisco MDS 9132T Fibre Channel switch is powered by the same high-performance 32-Gb Cisco ASIC with an integrated network processor that powers the Cisco MDS 9700 48-Port 32-Gb Fibre Channel Switching Module. Among all the advanced features that this ASIC enables, one of the most notable is inspection of Fibre Channel and Small Computer System Interface (SCSI) headers at wire speed on every flow in the smallest form-factor Fibre Channel switch without the need for any external taps or appliances. The recorded flows can be analyzed on the switch and also exported using a dedicated 10/100/1000BASE-T port for telemetry and analytics purposes.

   Intelligent network services: Slow-drain detection and isolation, VSAN technology, Access Control Lists (ACLs) for hardware-based intelligent frame processing, smart zoning, and fabric-wide Quality of Service (QoS) enable migration from SAN islands to enterprise-wide storage networks. Traffic encryption is optionally available to meet stringent security requirements.

   Sophisticated diagnostics: The Cisco MDS 9132T provides intelligent diagnostics tools such as Inter-Switch Link (ISL) diagnostics, read diagnostic parameters, protocol decoding, network analysis tools, and integrated Cisco Call Home capability for greater reliability, faster problem resolution, and reduced service costs.

   Virtual machine awareness: The Cisco MDS 9132T provides visibility into all virtual machines logged into the fabric. This feature is available through HBAs capable of priority tagging the Virtual Machine Identifier (VMID) on every FC frame. Virtual machine awareness can be extended to intelligent fabric services such as analytics[1] to visualize performance of every flow originating from each virtual machine in the fabric.

   Programmable fabric: The Cisco MDS 9132T provides powerful Representational State Transfer (REST) and Cisco NX-API capabilities to enable flexible and rapid programming of utilities for the SAN as well as polling point-in-time telemetry data from any external tool.

   Single-pane management: The Cisco MDS 9132T can be provisioned, managed, monitored, and troubleshot using Cisco Data Center Network Manager (DCNM), which currently manages the entire suite of Cisco data center products.

   Self-contained advanced anticounterfeiting technology: The Cisco MDS 9132T uses on-board hardware that protects the entire system from malicious attacks by securing access to critical components such as the bootloader, system image loader and Joint Test Action Group (JTAG) interface.

Citrix Virtual Apps and Desktops

When you want to keep workloads on premises, Citrix Virtual Apps and Desktops is the way to go. Whether you're a corporate security team facing strict compliance standards or need to stay in the datacenter for operational reasons, Citrix makes it easy to deliver IT-managed VDI. It's app and desktop virtualization, done your way, with a wide range of features to boost productivity and increase security.

For more information, go to: Citrix Virtual Apps and Desktops.

Citrix Virtual Apps and Desktops RDS Sessions and Windows 10 Desktops

The virtual app and desktop solution is designed for an exceptional experience.

Today's employees spend more time than ever working remotely, causing companies to rethink how IT services should be delivered. To modernize infrastructure and maximize efficiency, many are turning to desktop as a service (DaaS) to enhance their physical desktop strategy, or they are updating on-premises virtual desktop infrastructure (VDI) deployments. Managed in the cloud, these deployments are high-performance virtual instances of desktops and apps that can be delivered from any datacenter or public cloud provider.

DaaS and VDI capabilities provide corporate data protection as well as an easily accessible hybrid work solution for employees. Because all data is stored securely in the cloud or datacenter, rather than on devices, end-users can work securely from anywhere, on any device, and over any network—all with a fully IT-provided experience. IT also gains the benefit of centralized management, so they can scale their environments quickly and easily. By separating endpoints and corporate data, resources stay protected even if the devices are compromised.

As a leading VDI and DaaS provider, Citrix provides the capabilities organizations need for deploying virtual apps and desktops to reduce downtime, increase security, and alleviate the many challenges associated with traditional desktop management.

For more information, go to:

https://docs.citrix.com/en-us/citrix-virtual-apps-desktops

https://docs.citrix.com/en-us/tech-zone/toc/by-product/citrix-virtual-apps-and-desktops/design-guidance.html

Citrix Virtual Apps and Desktops Design Fundamentals

An ever-growing and diverse base of user devices, complexity in management of traditional desktops, security, and even Bring Your Own Device (BYOD) programs are prime reasons for moving to a virtual desktop solution.

Citrix Virtual Apps and Desktops 7 integrates Hosted Shared and VDI desktop virtualization technologies into a unified architecture that enables a scalable, simple, efficient, and manageable solution for delivering Windows applications and desktops as a service.

You can select applications from an easy-to-use “store” that is accessible from tablets, smartphones, PCs, Macs, and thin clients. Virtual Apps and Desktops delivers a native touch-optimized experience with HDX high-definition performance, even over mobile networks.

Machine Catalogs

Collections of identical virtual machines or physical computers are managed as a single entity called a Machine Catalog. In this CVD, virtual machine provisioning relies on Citrix Provisioning Services and Machine Creation Services to make sure that the machines in the catalog are consistent. In this CVD, machines in the Machine Catalog are configured to run either a Multi-session OS VDA (Windows Server OS) or a Single-session OS VDA (Windows Desktop OS). 

Delivery Groups

To deliver desktops and applications to users, you create a Machine Catalog and then allocate machines from the catalog to users by creating Delivery Groups. Delivery Groups provide desktops, applications, or a combination of desktops and applications to users. Creating a Delivery Group is a flexible way of allocating machines and applications to users. In a Delivery Group, you can:

   Use machines from multiple catalogs

   Allocate a user to multiple machines

   Allocate multiple users to one machine

As part of the creation process, you specify the following Delivery Group properties:

   Users, groups, and applications allocated to Delivery Groups

   Desktop settings to match users' needs

   Desktop power management options

Figure 10 illustrates how users access desktops and applications through machine catalogs and delivery groups.

Figure 10.       Access Desktops and Applications through Machine Catalogs and Delivery Groups


Citrix Provisioning Services

Citrix Virtual Apps and Desktops 7 can be deployed with or without Citrix Provisioning Services (PVS). The advantage of using Citrix PVS is that it allows virtual machines to be provisioned and re-provisioned in real time from a single shared-disk image. In this way, administrators can completely eliminate the need to manage and patch individual systems and reduce the number of disk images that they manage, even as the number of machines continues to grow, simultaneously providing the efficiencies of centralized management with the benefits of distributed processing.

The Provisioning Services solution’s infrastructure is based on software-streaming technology. After installing and configuring Provisioning Services components, a single shared disk image (vDisk) is created from a device’s hard drive by taking a snapshot of the OS and application image, and then storing that image as a vDisk file on the network. A device that is used during the vDisk creation process is the Master target device. Devices or virtual machines that use the created vDisks are called target devices.

When a target device is turned on, it is set to boot from the network and to communicate with a Provisioning Server. Unlike thin-client technology, processing takes place on the target device.

Figure 11.       Citrix Provisioning Services Functionality


The target device downloads the boot file from a Provisioning Server (Step 2) and boots. Based on the boot configuration settings, the appropriate vDisk is mounted on the Provisioning Server (Step 3). The vDisk software is then streamed to the target device as needed, appearing as a regular hard drive to the system.

Instead of immediately pulling all the vDisk contents down to the target device (as with traditional imaging solutions), the data is brought across the network in real time as needed. This approach allows a target device to get a completely new operating system and set of software in the time it takes to reboot, and it dramatically decreases the amount of network bandwidth required, making it possible to support a larger number of target devices on a network without impacting performance.

Citrix PVS can create desktops as Pooled or Private:

   Pooled Desktop: A pooled virtual desktop uses Citrix PVS to stream a standard desktop image to multiple desktop instances upon boot.

   Private Desktop: A private desktop is a single desktop assigned to one distinct user.

   The alternative to Citrix Provisioning Services for pooled desktop deployments is Citrix Machine Creation Services (MCS), which is integrated with the Virtual Apps and Desktops Studio console.

Locating the PVS Write Cache

When considering a PVS deployment, there are some design decisions that need to be made regarding the write cache for the target devices that leverage provisioning services. The write cache is a cache of all data that the target device has written. If data is written to the PVS vDisk in a caching mode, the data is not written back to the base vDisk. Instead, it is written to a write cache file in one of the following locations:

   Cache in device RAM. Write cache can exist as a temporary file in the target device’s RAM. This provides the fastest method of disk access since memory access is always faster than disk access.

   Cache in device RAM with overflow on hard disk. This method uses VHDX differencing format and is only available for Windows 11 and Server 2008 R2 and later. When RAM is zero, the target device write cache is only written to the local disk. When RAM is not zero, the target device write cache is written to RAM first. When RAM is full, the least recently used block of data is written to the local differencing disk to accommodate newer data on RAM. The amount of RAM specified is the non-paged kernel memory that the target device will consume.

   Cache on a server. Write cache can exist as a temporary file on a Provisioning Server. In this configuration, all writes are handled by the Provisioning Server, which can increase disk I/O and network traffic. For additional security, the Provisioning Server can be configured to encrypt write cache files. Since the write-cache file persists on the hard drive between reboots, encrypted data provides data protection in the event a hard drive is stolen.

   Cache on server persisted. This cache option allows for the saved changes between reboots. Using this option, a rebooted target device is able to retrieve changes made from previous sessions that differ from the read only vDisk image. If a vDisk is set to this method of caching, each target device that accesses the vDisk automatically has a device-specific, writable disk file created. Any changes made to the vDisk image are written to that file, which is not automatically deleted upon shutdown.

Note:      In this CVD, Provisioning Server 2022 was used to manage Pooled/Non-Persistent Single-session OS Machines with “Cache in device RAM with Overflow on Hard Disk” for each virtual machine. This design enables good scalability to many thousands of desktops. Provisioning Server 2022 was used for Active Directory machine account creation and management as well as for streaming the shared disk to the hypervisor hosts.

Example Citrix Virtual Apps and Desktops Deployments

Two examples of typical Virtual Apps and Desktops deployments are as follows:

   A distributed components configuration

   A multiple site configuration

Distributed Components Configuration

You can distribute the components of your deployment among a greater number of servers or provide greater scalability and failover by increasing the number of controllers in your site. You can install management consoles on separate computers to manage the deployment remotely. A distributed deployment is necessary for an infrastructure based on remote access through NetScaler Gateway (formerly called Access Gateway).

Figure 12 shows an example of a distributed components configuration. A simplified version of this configuration is often deployed for an initial proof-of-concept (POC) deployment. The CVD described in this document deploys Citrix Virtual Apps and Desktops in a configuration that resembles this distributed component configuration shown.

Figure 12.       Example of a Distributed Components Configuration


Multiple Site Configuration

If you have multiple regional sites, you can use Citrix NetScaler to direct user connections to the most appropriate site and StoreFront to deliver desktops and applications to users.

Figure 13 depicts an example with multiple sites; a site was created in each of two data centers. Having two sites globally, rather than just one, minimizes the amount of unnecessary WAN traffic.

Figure 13.       Multiple Sites


You can use StoreFront to aggregate resources from multiple sites to provide users with a single point of access with NetScaler. A separate Studio console is required to manage each site; sites cannot be managed as a single entity. You can use Director to support users across sites.

Citrix NetScaler accelerates application performance, load balances servers, increases security, and optimizes the user experience. In this example, two NetScalers are used to provide a high availability configuration. The NetScalers are configured for Global Server Load Balancing and positioned in the DMZ to provide a multi-site, fault-tolerant solution.

Note:      This CVD is based on a single site and did not use NetScaler in its infrastructure and testing.

Citrix Cloud Services

Easily deliver the Citrix portfolio of products as a service. Citrix Cloud services simplify the delivery and management of Citrix technologies extending existing on-premises software deployments and creating hybrid workspace services.

   Fast: Deploy apps and desktops, or complete secure digital workspaces in hours, not weeks.

   Adaptable: Choose to deploy on any cloud or virtual infrastructure — or a hybrid of both.

   Secure: Keep all proprietary information for your apps, desktops, and data under your control.

   Simple: Implement a fully integrated Citrix portfolio through a single management plane to simplify administration.

Designing a Virtual Apps and Desktops Environment for Different Workloads

With Citrix Virtual Apps and Desktops, the method you choose to provide applications or desktops to users depends on the types of applications and desktops you are hosting and available system resources, as well as the types of users and user experience you want to provide.

The following summarizes each desktop type and the user experience it is designed to deliver:

Server OS machines

You want: Inexpensive server-based delivery to minimize the cost of delivering applications to a large number of users, while providing a secure, high-definition user experience.

Your users: Perform well-defined tasks and do not require personalization or offline access to applications. Users may include task workers such as call center operators and retail workers, or users that share workstations.

Application types: Any application.

Desktop OS machines

You want: A client-based application delivery solution that is secure, provides centralized management, and supports a large number of users per host server (or hypervisor), while providing users with applications that display seamlessly in high-definition.

Your users: Are internal, external contractors, third-party collaborators, and other provisional team members. Users do not require off-line access to hosted applications.

Application types: Applications that might not work well with other applications or might interact with the operating system, such as .NET framework. These types of applications are ideal for hosting on virtual machines.

Applications running on older operating systems such as Windows XP or Windows Vista, and older architectures, such as 32-bit or 16-bit. By isolating each application on its own virtual machine, if one machine fails, it does not impact other users.

Remote PC Access

You want: Employees to have secure remote access to a physical computer without using a VPN. For example, the user may be accessing their physical desktop PC from home or through a public Wi-Fi hotspot. Depending upon the location, you may want to restrict the ability to print or copy and paste outside of the desktop. This method enables BYO device support without migrating desktop images into the data center.

Your users: Employees or contractors that have the option to work from home but need access to specific software or data on their corporate desktops to perform their jobs remotely.

Host: The same as Desktop OS machines.

Application types: Applications that are delivered from an office computer and display seamlessly in high definition on the remote user's device.

For this Cisco Validated Design, the following designs are included:

   Single-session OS Solution:

   MCS: 2000 Windows 11 random pooled virtual desktops will be configured and tested

   PVS: 2000 Windows 11 random pooled virtual desktops will be configured and tested

   Multi-session OS Solution:

   RDS: 2500 Windows Server 2022 random pooled desktops will be configured and tested

VMware vSphere 8.0 Update 2

VMware vSphere is an enterprise workload platform for holistically managing large collections of infrastructures (resources including CPUs, storage, and networking) as a seamless, versatile, and dynamic operating environment. Unlike traditional operating systems that manage an individual machine, VMware vSphere aggregates the infrastructure of an entire data center to create a single powerhouse with resources that can be allocated quickly and dynamically to any application in need.

The VMware vSphere 8 Update 2 release delivered enhanced value in operational efficiency for admins, supercharged performance for higher-end AI/ML workloads, and elevated security across the environment. VMware vSphere 8 Update 2 has now achieved general availability.

For more information about the VMware vSphere 8 Update 2 three key areas of enhancements, go to: VMware blog.

VMware vSphere vCenter

VMware vCenter Server provides unified management of all hosts and VMs from a single console and aggregates performance monitoring of clusters, hosts, and VMs. VMware vCenter Server gives administrators a deep insight into the status and configuration of compute clusters, hosts, VMs, storage, the guest OS, and other critical components of a virtual infrastructure. VMware vCenter manages the rich set of features available in a VMware vSphere environment.
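To illustrate the programmatic access that VMware vCenter provides (and on which integrations such as Cisco Intersight Assist build), the following minimal pyVmomi sketch connects to vCenter and lists the ESXi hosts it manages; the hostname and credentials are placeholders.

# Minimal pyVmomi sketch: list ESXi hosts managed by a vCenter Server.
# Hostname and credentials are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab sketch only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        hw = host.summary.hardware
        print(host.name, hw.model, hw.numCpuCores, "cores",
              round(hw.memorySize / 2**30), "GiB RAM")
    view.DestroyView()
finally:
    Disconnect(si)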

Red Hat Ansible

Ansible is simple and powerful, allowing you to easily manage various physical devices within FlashStack, including the provisioning of Cisco UCS servers, Cisco Nexus switches, Pure Storage FlashArray storage, and VMware vSphere. Ansible's playbook-based automation is easy to use and integrates into your current provisioning infrastructure.
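As an example of how this automation can be driven from Python-based tooling, the sketch below invokes a playbook with the ansible-runner library; the project directory, playbook name, and extra variable are placeholders, and the playbook itself would use the Ansible collections appropriate for the FlashStack components (Cisco UCS/Intersight, Cisco Nexus, Pure Storage FlashArray, and VMware vSphere).

# Minimal sketch: run an Ansible playbook from Python with ansible-runner.
# The project directory, playbook name, and extra variable are placeholders.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="./flashstack-iac",    # expects the ansible-runner directory layout (project/, inventory/)
    playbook="site.yml",
    extravars={"configure_storage": True},  # hypothetical variable consumed by the playbook
)

print("status:", result.status)   # for example, "successful" or "failed"
print("rc:", result.rc)
for event in result.events:       # per-task event stream emitted by the run
    if event.get("event") == "runner_on_failed":
        print("failed task:", event.get("event_data", {}).get("task"))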

Cisco Intersight Assist Device Connector for VMware vCenter and Pure Storage FlashArray

Cisco Intersight integrates with VMware vCenter and Pure Storage FlashArray as follows:

   Cisco Intersight uses the device connector running within Cisco Intersight Assist virtual appliance to communicate with the VMware vCenter.

   Cisco Intersight uses the device connector running within a Cisco Intersight Assist virtual appliance to integrate with all Pure Storage FlashArray models. Version 1.1 of the Pure Storage integration with Cisco Intersight introduces support for REST API 2.x for FlashArray products (running Purity//FA 6.0.3 or later), along with User Agent support (for telemetry). Intersight Cloud Orchestrator now has new storage tasks for adding or removing a Pure Storage snapshot and for copying a Pure Storage volume from a snapshot.

Figure 14.       Cisco Intersight and vCenter and Pure Storage Integration


The device connector provides a safe way for connected targets to send information and receive control instructions from the Cisco Intersight portal using a secure Internet connection. The integration brings the full value and simplicity of Cisco Intersight infrastructure management service to VMware hypervisor and FlashArray storage environments. The integration architecture enables FlashStack customers to use new management capabilities with no compromise in their existing VMware or Pure Storage FlashArray operations. IT users will be able to manage heterogeneous infrastructure from a centralized Cisco Intersight portal. At the same time, the IT staff can continue to use VMware vCenter and the Pure Storage dashboard for comprehensive analysis, diagnostics, and reporting of virtual and storage environments. The next section addresses the functions that this integration provides.

Pure Storage for VDI

Pure Storage helps organizations—of all sizes and across multiple industries—overcome the most common reasons for disappointing results from a VDI. All-flash storage delivers:

   Always-on, always fast and always secure VDI, ensuring a consistently superior end-user experience.

   Efficiency with up to 2x better data-reduction rates, lowering capital and operating costs.

   Effortless storage management, sharply reducing the demands on IT staff.

   Evergreen growth and scalability, incorporating non-disruptive upgrades and clearly defined costs known well in advance.

Whether you're planning a VDI rollout or have already implemented VDI that's delivering sub-par results, this guide provides valuable guidance—citing actual end-user deployments—that clearly illustrates how deploying flash storage can optimize end-user productivity and experience with VDI.

Purity for FlashArray

The essential element of every FlashArray is the Purity Operating Environment software. Purity implements advanced data reduction, storage management, and flash management features, enabling organizations to enjoy Tier 1 data services for all workloads, proven 99.9999% availability over multiple years (inclusive of maintenance and generational upgrades), completely non-disruptive operations, 2X better data reduction versus alternative all-flash solutions, and – with FlashArray//X – the power and efficiency of DirectFlash.


Moreover, Purity includes enterprise-grade data security, modern data protection options, and complete business continuity and global disaster recovery through ActiveCluster multi-site stretch cluster and ActiveDR* for continuous replication with near zero RPO. All these features are included with every array.
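As a small illustration of how these data services are consumed programmatically, the sketch below uses the purestorage Python REST client (REST 1.x) to provision a volume and connect it to a host; the array address, API token, WWPNs, and object names are placeholders.

# Minimal sketch: provision a FlashArray volume and connect it to a host,
# using the purestorage Python REST 1.x client. All names and tokens are placeholders.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="xxxx-xxxx-xxxx")

# Create a 2 TB volume (thin-provisioned; physical capacity is consumed as data reduces).
array.create_volume("vdi-esxi-boot-01", "2T")

# Register an ESXi host by its Fibre Channel WWPNs and connect the volume to it.
array.create_host("vdi-esxi-01", wwnlist=["21000024FF58AA01", "21000024FF58AA02"])
array.connect_host("vdi-esxi-01", "vdi-esxi-boot-01")

# List volumes on the array.
for vol in array.list_volumes():
    print(vol["name"], vol["size"])

array.invalidate_cookie()   # end the REST session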

FlashArray File Services

Pure Storage acquired Compuverde in 2019 and has been integrating this technology into the Purity//FA operating environment. The emphasis is on "integrating": Pure did not simply take the existing product, drop it onto a FlashArray system, and run it on top of Purity. Instead, key parts of it were incorporated into Purity to give you the advantages of native files alongside blocks.

The SMB and NFS protocols bring consolidated storage to the Purity//FA operating system, complementing its block capabilities, while the file system offers features like directory snapshots and directory-level performance and space monitoring.  For the purposes of this reference architecture, we will be focusing on using File Services for User Profile management.

Figure 15.       FlashArray//X Specifications


** Effective capacity assumes HA, RAID, and metadata overhead, GB-to-GiB conversion, and includes the benefit of data reduction with always-on inline deduplication, compression, and pattern removal. Average data reduction is calculated at 5-to-1 and does not include thin provisioning or snapshots.

† Array accepts Pure Storage DirectFlash Shelf and/or Pure Storage SAS-based expansion shelf.

Evergreen Storage

Customers can deploy storage once and enjoy a subscription to continuous innovation through Pure’s Evergreen Storage ownership model: expand and improve performance, capacity, density, and/or features for 10 years or more – all without downtime, performance impact, or data migrations. Pure Storage has disrupted the industry’s 3-5-year rip-and-replace cycle by engineering compatibility for future technologies right into its products, notably nondisruptive capability to upgrade from //M to //X with NVMe, DirectMemory, and NVMe-oF capability.

Pure1

Pure1, our cloud-based management, analytics, and support platform, expands the self-managing, plug-n-play design of Pure Storage all-flash arrays with the machine learning predictive analytics and continuous scanning of Pure1 Meta to enable an effortless, worry-free data platform.


Pure1 Manage

In the cloud IT operating model, installing and deploying management software is an oxymoron: you simply log in. Pure1 Manage is SaaS-based, allowing you to manage your array from any browser or from the Pure1 Mobile App, with nothing extra to purchase, deploy, or maintain. From a single dashboard you can manage all your arrays, with full visibility into the health and performance of your storage.

Pure1 Analyze

Pure1 Analyze delivers true performance forecasting – giving customers complete visibility into the performance and capacity needs of their arrays – now and in the future. Performance forecasting enables intelligent consolidation and unprecedented workload optimization.

Pure1 Support

Pure Storage combines an ultra-proactive support team with the predictive intelligence of Pure1 Meta to deliver unrivaled support that’s a key component in our proven FlashArray 99.9999% availability. Customers are often surprised and delighted when we fix issues they did not even know existed.

Pure1 META

The foundation of Pure1 services, Pure1 Meta is global intelligence built from a massive collection of storage array health and performance data. By continuously scanning call-home telemetry from Pure’s installed base, Pure1 Meta uses machine learning predictive analytics to help resolve potential issues and optimize workloads. The result is both a white glove customer support experience and breakthrough capabilities like accurate performance forecasting.

Meta is always expanding and refining what it knows about array performance and health, moving the Data Platform toward a future of self-driving storage.

Pure1 VM Analytics

Pure1 helps you narrow down the troubleshooting steps in your virtualized environment. VM Analytics provides you with a visual representation of the IO path from the VM all the way through to the FlashArray. Other tools and features guide you through identifying where an issue might be occurring in order to help eliminate potential candidates for a problem.

VM Analytics doesn’t only help when there’s a problem. The visualization allows you to identify which volumes and arrays particular applications are running on. This brings the whole environment into a more manageable domain.


CloudSnap

Pure portable snapshots provide simple, built-in, local and cloud protection for Pure Storage FlashArrays. Purity Snapshots enable free movement of space-efficient copies between FlashArrays, to FlashBlade, to 3rd party NFS servers, and to the cloud. Pure’s portable snapshot technology encapsulates metadata along with data into the snapshot, making the snapshot portable, so it can be offloaded from a FlashArray to the cloud in a format that is recoverable to any FlashArray.

Benefits

CloudSnap is a self-backup technology built into FlashArray. It does not require the purchase of additional backup software or hardware, nor is there a need to learn and use an additional management interface. CloudSnap is natively managed via Pure Storage FlashArray’s GUI, CLI, and REST interfaces and is integrated with the Pure1 Snapshot Catalog. Since FlashArray connects to AWS via https, data is encrypted in transit and stored in an encrypted format in the S3 bucket using server-side encryption. Since CloudSnap was built from scratch for FlashArray, it is deeply integrated with the Purity Operating Environment, resulting in highly efficient operation. A few examples of the efficiency of CloudSnap:

   CloudSnap preserves data compression on the wire, and in the S3 bucket, saving network bandwidth and increasing storage space efficiency.

   CloudSnap preserves data reduction across snapshots of a volume. After offloading the initial baseline snapshot of a volume, it only sends delta changes for subsequent snaps of the same volume. The snapshot differencing engine runs within the Purity Operating Environment in FlashArray and uses a local copy of the previous snapshot to compute the delta changes. Therefore, there is no back and forth network traffic between FlashArray and the cloud to compute deltas between snapshots, further reducing network congestion and data access costs in the cloud.

   CloudSnap knows which data blocks already exist on FlashArray, so during restores it only pulls back missing data blocks to rebuild the complete snapshot on FlashArray. In addition, CloudSnap uses dedupe preserving restores, so when data is restored from the offload target to FlashArray, it is deduped to save space on FlashArray.

The highly efficient operation of CloudSnap provides the following benefits:

   Less space is consumed in the S3 bucket

   Network utilization is minimized

   Backup windows are much smaller

   Data retrieval costs from the S3 bucket are lower

Solution Design

This chapter contains the following:

   Design Considerations for Desktop Virtualization

   Understanding Applications and Data

   Project Planning and Solution Sizing Sample Questions

   Hypervisor Selection

   Storage Considerations

Design Considerations for Desktop Virtualization

There are many reasons to consider a virtual desktop solution such as an ever growing and diverse base of user devices, complexity in management of traditional desktops, security, and even Bring Your Own Device (BYOD) to work programs. The first step in designing a virtual desktop solution is to understand the user community and the type of tasks that are required to successfully execute their role. The following user classifications are provided:

   Knowledge Workers today do not just work in their offices all day – they attend meetings, visit branch offices, work from home, and even coffee shops. These anywhere workers expect access to all of their same applications and data wherever they are.

   External Contractors are increasingly part of your everyday business. They need access to certain portions of your applications and data, yet administrators still have little control over the devices they use and the locations they work from. Consequently, IT is stuck making trade-offs on the cost of providing these workers a device vs. the security risk of allowing them access from their own devices.

   Task Workers perform a set of well-defined tasks. These workers access a small set of applications and have limited requirements from their PCs. However, since these workers are interacting with your customers, partners, and employees, they have access to your most critical data.

   Mobile Workers need access to their virtual desktop from everywhere, regardless of their ability to connect to a network. In addition, these workers expect the ability to personalize their PCs, by installing their own applications and storing their own data, such as photos and music, on these devices.

   Shared Workstation users are often found in state-of-the-art university and business computer labs, conference rooms, or training centers. Shared workstation environments have a constant requirement to re-provision desktops with the latest operating systems and applications as the needs of the organization change.

After the user classifications have been identified and the business requirements for each user classification have been defined, it becomes essential to evaluate the types of virtual desktops that are needed based on user requirements. There are essentially five potential desktop environments for each user:

   Traditional PC: A traditional PC is what typically constitutes a desktop environment: physical device with a locally installed operating system.

   Remote Desktop Server Hosted Sessions: A hosted, server-based desktop is a desktop where the user interacts through a delivery protocol. With hosted, server-based desktops, a single installed instance of a server operating system, such as Microsoft Windows Server 2019, is shared by multiple users simultaneously. Each user receives a desktop "session" and works in an isolated memory space. A hosted virtual desktop, by contrast, is a virtual desktop running on a virtualization layer (ESXi); the user does not sit in front of the physical machine but interacts with the desktop through a delivery protocol.

   Published Applications: Published applications run entirely on the RDS server virtual machines and the user interacts through a delivery protocol. With published applications, a single installed instance of an application, such as Microsoft Office, is shared by multiple users simultaneously. Each user receives an application "session" and works in an isolated memory space.

   Streamed Applications: Streamed desktops and applications run entirely on the user's local client device and are sent from a server on demand. The user interacts with the application or desktop directly, but the resources may only be available while the device is connected to the network.

   Local Virtual Desktop: A local virtual desktop is a desktop running entirely on the user's local device that continues to operate when disconnected from the network. In this case, the user's local device acts as a type 1 hypervisor and is synced with the data center when the device is connected to the network.

Note:      For the purposes of the validation represented in this document, both Single-session OS and Multi-session OS VDAs were validated.

Understanding Applications and Data

When the desktop user groups and sub-groups have been identified, the next task is to catalog group application and data requirements. This can be one of the most time-consuming processes in the VDI planning exercise but is essential for the VDI project’s success. If the applications and data are not identified and co-located, performance will be negatively affected.

The process of analyzing the variety of application and data pairs for an organization will likely be complicated by the inclusion of cloud applications, for example, SalesForce.com. This application and data analysis is beyond the scope of this Cisco Validated Design but should not be omitted from the planning process. There are a variety of third-party tools available to assist organizations with this crucial exercise.

Project Planning and Solution Sizing Sample Questions

The following key project and solution sizing questions should be considered:

   Has a VDI pilot plan been created based on the business analysis of the desktop groups, applications, and data?

   Is there infrastructure and budget in place to run the pilot program?

   Are the required skill sets to execute the VDI project available? Can we hire or contract for them?

   Do we have end user experience performance metrics identified for each desktop sub-group?

   How will we measure success or failure?

   What is the future implication of success or failure?

Below is a short, non-exhaustive list of sizing questions that should be addressed for each user sub-group:

   What is the Single-session OS version?

   32-bit or 64-bit desktop OS?

   How many virtual desktops will be deployed in the pilot? In production?

   How much memory is required per desktop in each target desktop group?

   Are there any rich media, Flash, or graphics-intensive workloads?

   Are there any applications installed? What application delivery methods will be used, Installed, Streamed, Layered, Hosted, or Local?

   What is the Multi-session OS version?

   What method will be used for virtual desktop deployment?

   What is the hypervisor for the solution?

   What is the storage configuration in the existing environment?

   Are there sufficient IOPS available for the write-intensive VDI workload?

   Will there be storage dedicated and tuned for VDI service?

   Is there a voice component to the desktop?

   Is there a 3rd party graphics component?

   Is anti-virus a part of the image?

   What is the SQL server version for database?

   Is user profile management (for example, non-roaming profile based) part of the solution?

   What is the fault tolerance, failover, disaster recovery plan?

   Are there additional desktop sub-group specific questions?

Hypervisor Selection

VMware vSphere 8.0 has been selected as the hypervisor for this Citrix Virtual Apps and Desktops and Remote Desktop Server Hosted (RDSH) Sessions deployment.

VMware vSphere: VMware vSphere comprises the management infrastructure or virtual center server software and the hypervisor software that virtualizes the hardware resources on the servers. It offers features like Distributed Resource Scheduler, vMotion, high availability, Storage vMotion, VMFS, and a multi-pathing storage layer. More information on VMware vSphere can be obtained at the VMware web site.

Storage Considerations

Boot from SAN

When utilizing Cisco UCS server technology, it is recommended to configure boot from SAN and store the boot partitions on remote storage. This enables architects and administrators to take full advantage of the stateless nature of service profiles for hardware flexibility across lifecycle management of server hardware generational changes, operating systems/hypervisors, and overall portability of server identity. Boot from SAN also removes the need to populate local server storage, which would otherwise create additional administrative overhead.

Pure Storage FlashArray Considerations

Make sure each FlashArray controller is connected to both storage fabrics (A and B).

Within Purity, it is a best practice to map hosts to Host Groups and then Host Groups to Volumes. This ensures the volume is presented on the same LUN ID to all hosts and allows for simplified management of ESXi clusters across multiple nodes.

How big should a volume be? With the Purity Operating Environment, the complexities of aggregates, RAID groups, and so on are removed. When managing storage, you simply create a volume based on the size required; availability and performance are taken care of through RAID-HD and DirectFlash software. As an administrator you can create one 10 TB volume or ten 1 TB volumes and their performance and availability will be the same, so instead of creating volumes for availability or performance you can think about recoverability, manageability, and administrative considerations. For example, what data do I want to present to this application, or what data do I want to store together so that I can replicate it to another site, system, or cloud, and so on.
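The following Purity//FA CLI sketch illustrates that host-to-Host-Group-to-Volume mapping for a hypothetical two-host ESXi cluster. The host names, WWPNs, volume name, and size are placeholders, the same steps can be completed in the FlashArray GUI or REST API, and exact CLI flags can vary by Purity release:

purehost create --wwnlist 20:00:00:25:B5:AA:17:00,20:00:00:25:B5:BB:17:00 ESXi-Host-01

purehost create --wwnlist 20:00:00:25:B5:AA:17:01,20:00:00:25:B5:BB:17:01 ESXi-Host-02

purehgroup create --hostlist ESXi-Host-01,ESXi-Host-02 VDI-Cluster

purevol create --size 10T VDI-Datastore-01

purehgroup connect --vol VDI-Datastore-01 VDI-Cluster

Because the volume is connected at the Host Group level, every host in the group sees it on the same LUN ID, which keeps the ESXi cluster view consistent.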

Port Connectivity

10/25/40/100 GbE connectivity support: while both 10 and 25 GbE are provided through two onboard NICs on each FlashArray controller, if more interfaces are required, or if 40 GbE connectivity is also required, make sure additional NICs have been included in the original FlashArray BOM.

16/32 Gb Fibre Channel support (N-2 support): Pure Storage offers up to 32 Gb FC support on the latest FlashArray//X arrays. Always make sure the correct number of HBAs and the speed of SFPs are included in the original FlashArray BOM.

Oversubscription

To reduce the impact of an outage or scheduled maintenance downtime, it is good practice when designing fabrics to provide an oversubscription of bandwidth. This enables a similar performance profile during component failure and protects workloads from being impacted by a reduced number of paths during a component failure or maintenance event. Oversubscription can be achieved by increasing the number of physically cabled connections between storage and compute. These connections can then be utilized to deliver performance and reduced latency to the underlying workloads running on the solution.

Topology

When configuring your SAN, it’s important to remember that the more hops you have, the more latency you will see. For best performance, the ideal topology is a “Flat Fabric” where the FlashArray is only one hop away from any applications being hosted on it.

VMware Virtual Volumes Considerations

Note:      Citrix Virtual Apps and Desktops 2203 does not support VVols so they will not be used in the deployment guide and performance testing.

vCenters that are in Enhanced Linked Mode will each be able to communicate with the same FlashArray; however, vCenters that are not in Enhanced Linked Mode must use CA-signed certificates when using the same FlashArray. If multiple vCenters need to use the same FlashArray for vVols, they should be configured in Enhanced Linked Mode.

Ensure that the Config vVol is either part of an existing FlashArray Protection Group, Storage Policy that includes snapshots, or manual snapshots of the Config vVol are taken. This will help with the VM recovery process if the VM is deleted.

There are some FlashArray limits on Volume Connections per Host, Volume Count, and Snapshot Count. For more information about FlashArray limits, review the following: https://support.purestorage.com/FlashArray/PurityFA/General_Troubleshooting/Pure_Storage_FlashArray_Limits

When a Storage Policy is applied to a vVol VM, the volumes associated with that VM are added to the designated protection group when applying the policy to the VM. If replication is part of the policy, be mindful of the number of VMs using that storage policy and replication group. A large number of VMs with a high change rate could cause replication to miss its schedule due to increased replication bandwidth and the time needed to complete the scheduled snapshot. Pure Storage recommends that vVol VMs with Storage Policies applied be balanced between protection groups.

Pure Storage FlashArray Best Practices for VMware vSphere 8.0

The following Pure Storage best practices for VMware vSphere should be followed as part of a design:

   FlashArray volumes are automatically presented to VMware vSphere using the Round Robin Path Selection Policy (PSP) and the appropriate vendor Storage Array Type Plugin (SATP) for vSphere 7.0 and later.

   vSphere 8.0 also uses the Latency SATP that was introduced in vSphere 6.7U1 (This replaces the I/O Operations Limit of 1 SATP, which was the default from vSphere 6.5U1).

   When using iSCSI connected FlashArray volumes, it is recommended to set DelayedAck to false (disabled) and LoginTimeout to 30 seconds. Jumbo Frames are optional when using iSCSI.

   For VMFS-6, keep automatic UNMAP enabled.

   DataMover.HardwareAcceleratedMove, DataMover.HardwareAcceleratedInit, and VMFS3.HardwareAcceleratedLocking should all be enabled (a sketch of setting these with esxcli follows this list).

   Ensure all ESXi hosts are connected to both FlashArray controllers, with a minimum of two paths to each controller to achieve full redundancy.

   Install VMware tools or Open VM tools whenever possible.

   Queue depths should be left at the default. Changing queue depths on the ESXi host is a tweak and should only be examined if a performance problem (high latency) is observed.

   When mounting snapshots, use the ESXi resignature option and avoid force-mounting.

   Configure Host Groups on the FlashArray identically to clusters in vSphere. For example, if a cluster has four hosts in it, create a corresponding Host Group on the relevant FlashArray with exactly those four hosts—no more, no less.

   When possible, use Paravirtual SCSI adapters for virtual machines.

   Atomic Test and Set (ATS) is required on all Pure Storage volumes. This is a default configuration, and no changes should normally be needed.
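As an illustration of a few of these host-side settings, the following esxcli commands can be run on each ESXi host. This is a sketch only: the iSCSI adapter name vmhba64 is a placeholder, and the two iSCSI lines apply only to iSCSI-attached FlashArray volumes, which are not used in this Fibre Channel design (DelayedAck can also be changed from the vSphere Client):

esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1

esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 1

esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 1

esxcli iscsi adapter param set -A vmhba64 -k DelayedAck -v false

esxcli iscsi adapter param set -A vmhba64 -k LoginTimeout -v 30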

For more information about the VMware vSphere Pure Storage FlashArray Best Practices, go to: https://support.purestorage.com/Solutions/VMware_Platform_Guide/001VMwareBestPractices/hhhWeb_Guide%3A_FlashArray_VMware_Best_Practices

 

Deployment Hardware and Software

This chapter contains the following:

   Architecture

   Products Deployed

   Physical Topology

   Logical Architecture

   Configuration Guidelines

Architecture

This FlashStack architecture delivers a Virtual Desktop Infrastructure that is redundant and uses the best practices of Cisco and Pure Storage.

The architecture includes:

   VMware vSphere 8.0 hypervisor installed on the Cisco UCS X210c M7 compute nodes configured for stateless compute design using boot from SAN.

   Pure Storage FlashArray//X70  provides the storage infrastructure required for VMware vSphere hypervisors and the VDI workload delivered by Citrix Virtual Apps and Desktops 2203.

   Cisco Intersight provides UCS infrastructure management with lifecycle management capabilities.

The architecture deployed is highly modular. While each customer's environment might vary in its exact configuration, the reference architecture contained in this document, once built, can easily be scaled as requirements and demands change. This includes scaling both up (adding additional resources within a Cisco UCS Domain) and out (adding additional Cisco UCS Domains and Pure Storage).

Products Deployed

   VMware vSphere ESXi 8.0 U2 hypervisor.

   VMware vCenter 8.0 to set up and manage the virtual infrastructure as well as integration of the virtual environment with Cisco Intersight software.

   Microsoft SQL Server 2022.

   Microsoft Windows Server 2022 and Windows 11 64-bit virtual machine Operating Systems.

   Citrix Virtual Apps and Desktops 2203 Remote Desktop Server Hosted (RDSH) Sessions provisioned as Citrix Provisioning (PVS) streamed RDS server virtual machines and stored on the Pure Storage FlashArray//X70.

   Citrix Virtual Apps and Desktops 2203 non-persistent Windows 11 virtual desktops (VDI) provisioned as Citrix PVS virtual machines and stored on the Pure Storage FlashArray//X70.

   Citrix Virtual Apps and Desktops 2203 persistent Windows 11 virtual desktops (VDI) provisioned as Citrix MCS full clone virtual machines and stored on the Pure Storage FlashArray//X70.

   Microsoft Office 2021 for Login VSI End User Measurement Knowledge worker workload test.

   FSLogix for User Profile Management.

   Cisco Intersight platform to deploy, maintain, and support the FlashStack components.

   Cisco Intersight Assist virtual appliance to help connect the Pure Storage FlashArray and VMware vCenter with the Cisco Intersight platform.

Physical Topology

FlashStack VDI with the Cisco UCS X210c M7 Modular System is a Fibre Channel (FC) based storage access design. Pure Storage FlashArray and Cisco UCS are connected through Cisco MDS 9132T switches, and storage access utilizes the FC network. For VDI IP-based file share storage access, Pure Storage FlashArray and Cisco UCS are connected through Cisco Nexus 93180YC-FX switches. The physical connectivity details are explained below.

Figure 16.       FlashStack VDI – Physical Topology for FC


Figure 16 details the physical hardware and cabling deployed to enable this solution:

   Two Cisco Nexus 93180YC-FX Switches in NX-OS Mode.

   Two Cisco MDS 9132T 32-Gb Fibre Channel Switches.

   One Cisco UCS X9508 Chassis with two Cisco UCSX-I-9108-100G Intelligent Fabric Modules (IFMs).

   Eight Cisco UCS X210C M7 Blade Servers with Intel(R) Xeon(R) Gold 6448 CPU 2.60GHz 32-core processors, 1TB 4400MHz RAM, and one Cisco VIC15231 mezzanine card, providing N+1 server fault tolerance.

   Pure Storage FlashArray//X70 with dual redundant controllers, with Twenty 1.92TB DirectFlash NVMe drives.

Note:      The management components and LoginVSI Test infrastructure are hosted on a separate vSphere cluster and are not a part of the physical topology of this solution.

Table 1 lists the software versions of the primary products installed in the environment.

Table 1.      Software and Firmware Versions

Vendor | Product/Component | Version/Build/Code
Cisco | UCS Component Firmware | 4.2(3e)
Cisco | UCS X210C M7 Compute Node | 5.3(1.230052)
Cisco | VIC 15231 (Virtual Interface Card) | 5.3(1.230046)
Cisco | Cisco Nexus 93180YC-FX Switches | 9.3(7a)
Cisco | Cisco MDS 9132T | 8.5(1a)
Pure Storage | FlashArray//X70 | Purity//FA 6.3.14
VMware | vCenter Server Appliance | 8.0.2
VMware | vSphere 8.0.2 | 8.0.2
Citrix | Citrix Virtual Apps and Desktops 2203 | 2203 LTSR FR3
Cisco | Intersight Assist | 1.0.11-759
Microsoft | FSLogix 2105 HF_01 (User Profile Mgmt.) | 2.9.7979.62170
VMware | Tools | 12.2.0.21223074

Logical Architecture

Figure 17 illustrates the logical architecture of the validated solution, which is designed to run desktop and RDSH server VMs supporting up to 2300 users on a single chassis containing eight blades, with physical redundancy for the blade servers of each workload type and a separate vSphere cluster to host the management services.

Note:      Separating management components and desktops is a best practice for large environments.

Figure 17.       Logical Architecture Overview


VMware Clusters

Two VMware clusters in one vCenter datacenter were utilized to support the solution and testing environment:

   VDI Cluster (FlashStack Datacenter with Cisco UCS):

   Infrastructure: Infra VMs (vCenter, Active Directory, DNS, DHCP, SQL Server, Citrix Virtual Apps and Desktops Studio servers, Citrix Virtual Apps and Desktops StoreFront servers, and other required infrastructure and plug-in VMs).

   VDI Workload VMs: Windows Server 2022 streamed RDS server VMs with Citrix Virtual Apps and Desktops for Remote Desktop Server Hosted (RDSH) sessions, plus Windows 11 desktops streamed with Citrix PVS (non-persistent) and provisioned as MCS full clones (persistent).

   VSI Launchers and Launcher Cluster

For example, the cluster configured for running the Login VSI workload to measure VDI end-user experience is LVS-Launcher-CLSTR. The Login VSI infrastructure (Login VSI data shares, web servers, and management control VMs) was connected using the same set of switches and vCenter instance but was hosted on separate local storage. LVS-Launcher-CLSTR was configured and used to test the Login VSI end-user experience measurement for RDSH multi-session and Windows 11 VDI users.

Configuration Guidelines

The Citrix Virtual Apps and Desktops solution described in this document provides details for configuring a fully redundant, highly-available configuration. Configuration guidelines are provided that refer to which redundant component is being configured with each step, whether that be A or B. For example, Cisco Nexus A and Cisco Nexus B identify the pair of Cisco Nexus switches that are configured. The Cisco UCS Fabric Interconnects are configured similarly. 

Note:      This document is intended to allow the reader to configure the Citrix Virtual Apps and Desktops customer environment as a stand-alone solution. 

VLANs

The VLAN configuration recommended for the environment includes a total of six VLANs as outlined in Table 2.

Table 2.      VLANs Configured in this study

VLAN Name | VLAN ID | VLAN Purpose
Default | 1 | Native VLAN
FS-InBand-Mgmt_70 | 70 | In-band management interfaces
FS-Infra-Mgmt_71 | 71 | Infrastructure virtual machines
FS-VDI_72 | 72 | RDSH, VDI persistent and non-persistent desktops
FS-vMotion_73 | 73 | VMware vMotion
OOB-Mgmt_164 | 164 | Out-of-band management interfaces
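As a minimal sketch, the VLANs in Table 2 translate into the following NX-OS configuration, applied identically on both Cisco Nexus 93180YC-FX switches (VLAN 1 already exists as the default/native VLAN):

config terminal

vlan 70

name FS-InBand-Mgmt_70

vlan 71

name FS-Infra-Mgmt_71

vlan 72

name FS-VDI_72

vlan 73

name FS-vMotion_73

vlan 164

name OOB-Mgmt_164

exit

copy running-config startup-config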

VSANs

Table 3 lists the two virtual SANs that were configured for communications and fault tolerance in this design.

Table 3.      VSANs Configured in this study

VSAN Name | VSAN ID | VSAN Purpose
VSAN 500 | 500 | VSAN for primary SAN communication
VSAN 501 | 501 | VSAN for secondary SAN communication

Solution Configuration

This chapter contains the following:

   Solution Cabling

Solution Cabling

The following sections detail the physical connectivity configuration of the FlashStack Citrix Virtual Apps and Desktops VDI environment.

The information provided in this section is a reference for cabling the physical equipment in this Cisco Validated Design environment. To simplify cabling requirements, the tables include both local and remote device and port locations.

The tables in this section list the details for the prescribed and supported configuration of the Pure Storage FlashArray//X70  storage array to the Cisco 6536 Fabric Interconnects through Cisco MDS 9132T 32-Gb FC switches.

Note:      This document assumes that out-of-band management ports are plugged into an existing management infrastructure at the deployment site. These interfaces will be used in various configuration steps.

Note:      Be sure to follow the cabling directions in this section. Failure to do so will result in problems with your deployment.

Figure 18 details the cable connections used in the validation lab for the FlashStack topology based on the Cisco UCS 6536 fabric interconnect. Four 32Gb uplinks connect as port-channels to each Cisco UCS Fabric Interconnect from the MDS switches, and a total of eight 32Gb links connect the MDS switches to the Pure Storage FlashArray//X controllers; four of these are used for scsi-fc and the other four support nvme-fc. 100Gb links connect the Cisco UCS Fabric Interconnects to the Cisco Nexus switches, and 25Gb links connect the Pure Storage FlashArray//X controllers to the Cisco Nexus switches. Additional 1Gb management connections are needed for an out-of-band network switch that sits apart from the FlashStack infrastructure. Each Cisco UCS fabric interconnect and Cisco Nexus switch is connected to the out-of-band network switch, and each FlashArray controller has a connection to the out-of-band network switch. Layer 3 network connectivity is required between the Out-of-Band (OOB) and In-Band (IB) Management Subnets.

Figure 18.       FlashStack solution cabling diagram


Configuration and Installation

This chapter contains the following:

   FlashStack Automated Deployment with Ansible

   FlashStack Manual Deployment

   Cisco UCS X-Series M7 Configuration – Intersight Managed Mode (IMM)

   Cisco MDS 9132T 32-Gb FC Switch Configuration

   Pure Storage FlashArray//X70 to MDS SAN Fabric Connectivity

   Configure Pure Storage FlashArray//X70

FlashStack Automated Deployment with Ansible

This solution offers Ansible Playbooks that are made available from a GitHub repository that customers can access to automate the FlashStack deployment.

The GitHub repository is available here: https://github.com/ucs-compute-solutions/FlashStack_IMM_Ansible.

This repository contains Ansible playbooks to configure all the components of FlashStack including:

   Cisco UCS in Intersight Managed Mode (IMM)

   Cisco Nexus and MDS Switches

   Pure Storage FlashArray

   VMware ESXi and VMware vCenter
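At a high level, the workflow is to clone the repository, populate the inventory and variable files for your environment, and run the playbooks in the order described in the repository README. The following is a generic sketch only; the playbook name shown is a placeholder, not the actual file name in the repository:

git clone https://github.com/ucs-compute-solutions/FlashStack_IMM_Ansible.git

cd FlashStack_IMM_Ansible

ansible-playbook -i inventory <playbook_name>.yml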

Figure 19.       High-Level FlashStack Automation


FlashStack Manual Deployment

Cisco Intersight Managed Mode standardizes policy and operation management for Cisco UCS X-Series M7. The compute nodes in Cisco UCS X-Series M7 are configured using server profiles defined in Cisco Intersight. These server profiles derive all the server characteristics from various policies and templates. At a high level, configuring Cisco UCS using Cisco Intersight Managed Mode consists of the steps shown in Figure 20.   

Figure 20.       Configuration Steps for Cisco Intersight Managed Mode


Cisco UCS X-Series M7 Configuration – Intersight Managed Mode (IMM)

Procedure 1.       Configure Cisco UCS Fabric Interconnects for IMM

Step 1.      Verify the following physical connections on the fabric interconnect:

   The management Ethernet port (mgmt0) is connected to an external hub, switch, or router.

   The L1 ports on both fabric interconnects are directly connected to each other.

   The L2 ports on both fabric interconnects are directly connected to each other.

Step 2.      Connect to the console port on the first Fabric Interconnect.

Step 3.      Configure Fabric Interconnect A (FI-A). On the Basic System Configuration Dialog screen, set the management mode to Intersight. All the remaining settings are similar to those for the Cisco UCS Manager managed mode (UCSM-Managed).

Cisco UCS Fabric Interconnect A

Procedure 1.       Configure the Cisco UCS for use in Intersight Managed Mode

Step 1.      Connect to the console port on the first Cisco UCS fabric interconnect:

  Enter the configuration method. (console/gui) ? console

 

  Enter the management mode. (ucsm/intersight)? intersight

 

  You have chosen to setup a new Fabric interconnect in “intersight” managed mode. Continue? (y/n): y

 

  Enforce strong password? (y/n) [y]: Enter

 

  Enter the password for "admin": <password>

  Confirm the password for "admin": <password>

 

  Enter the switch fabric (A/B) []: A

 

  Enter the system name:  <ucs-cluster-name>

  Physical Switch Mgmt0 IP address : <ucsa-mgmt-ip>

 

  Physical Switch Mgmt0 IPv4 netmask : <ucsa-mgmt-mask>

 

  IPv4 address of the default gateway : <ucsa-mgmt-gateway>

 

  Configure the DNS Server IP address? (yes/no) [n]: y

 

    DNS IP address : <dns-server-1-ip>

 

  Configure the default domain name? (yes/no) [n]: y

 

    Default domain name : <ad-dns-domain-name>

<SNIP>

  Verify and save the configuration.

Step 2.      After applying the settings, make sure you can ping the fabric interconnect management IP address. When Fabric Interconnect A is correctly set up and is available, Fabric Interconnect B will automatically discover Fabric Interconnect A during its setup process as shown in the next step.

Step 3.      Configure Fabric Interconnect B (FI-B). For the configuration method, choose console. Fabric Interconnect B will detect the presence of Fabric Interconnect A and will prompt you to enter the admin password for Fabric Interconnect A. Provide the management IP address for Fabric Interconnect B and apply the configuration.

Cisco UCS Fabric Interconnect B

Enter the configuration method. (console/gui) ? console

 

  Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect will be added to the cluster. Continue (y/n) ? y

 

  Enter the admin password of the peer Fabric interconnect: <password>

    Connecting to peer Fabric interconnect... done

    Retrieving config from peer Fabric interconnect... done

    Peer Fabric interconnect Mgmt0 IPv4 Address: <ucsa-mgmt-ip>

    Peer Fabric interconnect Mgmt0 IPv4 Netmask: <ucsa-mgmt-mask>

 

    Peer FI is IPv4 Cluster enabled. Please Provide Local Fabric Interconnect Mgmt0 IPv4 Address

 

  Physical Switch Mgmt0 IP address : <ucsb-mgmt-ip>

 

  Local fabric interconnect model(UCS-FI-6536)

  Peer fabric interconnect is compatible with the local fabric interconnect. Continuing with the installer...

 

  Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes

Procedure 2.       Claim a Cisco UCS Fabric Interconnect in the Cisco Intersight Platform

If you do not already have a Cisco Intersight account, you need to set up a new account in which to claim your Cisco UCS deployment. Start by connecting to https://intersight.com.

All information about Cisco Intersight features and configurations can be accessed in the Cisco Intersight Help Center.

Virtual Port Channel (vPC) Summary for Data and Storage Network

In the Cisco Nexus 93180YC-FX switch topology, a single vPC feature is enabled to provide HA, faster convergence in the event of a failure, and greater throughput. The Cisco Nexus 93180YC-FX vPC configuration, with the vPC domain and the corresponding vPC names and IDs for this solution, is listed in Table 4.

Table 4.      vPC Summary

vPC Domain | vPC Name | vPC ID
50 | Peer-Link | 1
50 | vPC Port-Channel to FI-A | 11
50 | vPC Port-Channel to FI-B | 12

As listed in Table 4, a single vPC domain with Domain ID 50 is created across the two Cisco Nexus 93180YC-FX member switches to define the vPC members that carry specific VLAN network traffic. In this topology, a total of three vPCs were defined:

   vPC ID 1 is defined as Peer link communication between two Nexus switches in Fabric A and B.

   vPC IDs 11 and 12 are defined for traffic from Cisco UCS fabric interconnects.

Cisco Nexus 93180YC-FX Switch Cabling Details

The following tables list the cabling information.

Table 5.      Cisco Nexus 93180YC-FX-A Cabling Information

Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco Nexus 93180YC-FX Switch A | Eth1/51 | 100GbE | Cisco UCS fabric interconnect B | Eth1/49
Cisco Nexus 93180YC-FX Switch A | Eth1/52 | 100GbE | Cisco UCS fabric interconnect A | Eth1/49
Cisco Nexus 93180YC-FX Switch A | Eth1/1 | 25GbE | Cisco Nexus 93180YC-FX B | Eth1/1
Cisco Nexus 93180YC-FX Switch A | Eth1/2 | 25GbE | Cisco Nexus 93180YC-FX B | Eth1/2
Cisco Nexus 93180YC-FX Switch A | Eth1/3 | 25GbE | Cisco Nexus 93180YC-FX B | Eth1/3
Cisco Nexus 93180YC-FX Switch A | Eth1/4 | 25GbE | Cisco Nexus 93180YC-FX B | Eth1/4
Cisco Nexus 93180YC-FX Switch A | MGMT0 | 1GbE | GbE management switch | Any

Table 6.      Cisco Nexus 93180YC-FX-B Cabling Information

Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco Nexus 93180YC-FX Switch B | Eth1/51 | 100GbE | Cisco UCS fabric interconnect B | Eth1/50
Cisco Nexus 93180YC-FX Switch B | Eth1/52 | 100GbE | Cisco UCS fabric interconnect A | Eth1/50
Cisco Nexus 93180YC-FX Switch B | Eth1/1 | 25GbE | Cisco Nexus 93180YC-FX A | Eth1/1
Cisco Nexus 93180YC-FX Switch B | Eth1/2 | 25GbE | Cisco Nexus 93180YC-FX A | Eth1/2
Cisco Nexus 93180YC-FX Switch B | Eth1/3 | 25GbE | Cisco Nexus 93180YC-FX A | Eth1/3
Cisco Nexus 93180YC-FX Switch B | Eth1/4 | 25GbE | Cisco Nexus 93180YC-FX A | Eth1/4
Cisco Nexus 93180YC-FX Switch B | MGMT0 | 1GbE | GbE management switch | Any

Cisco UCS Fabric Interconnect 6536 Cabling

The following tables list the Cisco UCS FI 6536 cabling information.

Table 7.      Cisco UCS Fabric Interconnect (FI) A Cabling Information

Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco UCS FI-6536-A | FC 1/1 | 32G FC | Cisco MDS 9132T 32-Gb-A | FC 1/13
Cisco UCS FI-6536-A | FC 1/2 | 32G FC | Cisco MDS 9132T 32-Gb-A | FC 1/14
Cisco UCS FI-6536-A | Eth1/11-14 | 25GbE | Cisco UCS 9508 Chassis IFM-A, Chassis 1 | Intelligent Fabric Module 1, Ports 1-2
Cisco UCS FI-6536-A | Eth1/49 | 100GbE | Cisco Nexus 93180YC-FX Switch A | Eth1/52
Cisco UCS FI-6536-A | Eth1/50 | 100GbE | Cisco Nexus 93180YC-FX Switch B | Eth1/52
Cisco UCS FI-6536-A | Mgmt 0 | 1GbE | Management switch | Any
Cisco UCS FI-6536-A | L1 | 1GbE | Cisco UCS FI-B | L1
Cisco UCS FI-6536-A | L2 | 1GbE | Cisco UCS FI-B | L2

Table 8.      Cisco UCS Fabric Interconnect (FI) B Cabling Information

Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco UCS FI-6536-B | FC 1/1 | 32Gb FC | Cisco MDS 9132T 32-Gb-B | FC 1/13
Cisco UCS FI-6536-B | FC 1/2 | 32Gb FC | Cisco MDS 9132T 32-Gb-B | FC 1/14
Cisco UCS FI-6536-B | Eth1/11-14 | 25GbE | Cisco UCS 9508 Chassis IFM-B, Chassis 1 | Intelligent Fabric Module 1, Ports 1-2
Cisco UCS FI-6536-B | Eth1/49 | 100GbE | Cisco Nexus 93180YC-FX Switch A | Eth1/51
Cisco UCS FI-6536-B | Eth1/50 | 100GbE | Cisco Nexus 93180YC-FX Switch B | Eth1/51
Cisco UCS FI-6536-B | Mgmt 0 | 1GbE | Management switch | Any
Cisco UCS FI-6536-B | L1 | 1GbE | Cisco UCS FI-A | L1
Cisco UCS FI-6536-B | L2 | 1GbE | Cisco UCS FI-A | L2

Procedure 1.       Create vPC Peer-Link Between the Two Cisco Nexus Switches

Step 1.      Log in as “admin” user into the Cisco Nexus Switch A.

Note:      For vPC 1 as the peer-link, we used interfaces Ethernet1/1-4, as shown in the following configuration. You may choose the appropriate number of ports for your needs.

Step 2.      Create the necessary port channels between devices by running these commands on both Cisco Nexus switches:

config terminal

feature vpc

feature lacp

vpc domain 50

peer-keepalive destination 173.37.52.104 source 173.37.52.103

exit

interface port-channel 10

description VPC peer-link

switchport mode trunk

switchport trunk allowed VLAN 1,70-76

spanning-tree port type network

vpc peer-link

interface Ethernet1/1

description VPC to K23-N9K-A

switchport mode trunk

switchport trunk allowed vlan 1,70-76,132

channel-group 10 mode active

no shutdown

exit

 

interface Ethernet1/2

description VPC to K23-N9K-A

switchport mode trunk

switchport trunk allowed vlan 1,70-76,132

channel-group 10 mode active

no shutdown

exit

 

interface Ethernet1/3

description VPC to K23-N9K-A

switchport mode trunk

switchport trunk allowed vlan 1,70-76,132

channel-group 10 mode active

no shutdown

exit

 

interface Ethernet1/4

description VPC to K23-N9K-A

switchport mode trunk

switchport trunk allowed vlan 1,70-76,132

channel-group 10 mode active

no shutdown

exit

copy running-config startup-config

Step 3.      Log in as admin user into the Nexus Switch B and repeat the above steps to configure second Cisco Nexus switch.

Step 4.      Make sure to change the peer-keepalive destination and source IP address appropriately for Cisco Nexus Switch B.

Procedure 2.       Create vPC Configuration Between Cisco Nexus 93180YC-FX and Cisco Fabric Interconnects

Create and configure vPC 11 and 12 for the data network between the Cisco Nexus switches and fabric interconnects.

Note:      Create the necessary port channels between devices, by running the following commands on both Cisco Nexus switches.

Step 1.      Log in as admin user into Cisco Nexus Switch A and enter the following:

config terminal

interface port-channel11

description FI-A-Uplink

switchport mode trunk

switchport trunk allowed VLAN 1,70-76

spanning-tree port type edge trunk

vpc 11

no shutdown

exit

interface port-channel12

description FI-B-Uplink

switchport mode trunk

switchport trunk allowed VLAN 1,70-76

spanning-tree port type edge trunk

vpc 12

no shutdown

exit

interface Ethernet1/51

description FI-A-Uplink

switchport mode trunk

switchport trunk allowed vlan 1,70-76

spanning-tree port type edge trunk

mtu 9216

channel-group 11 mode active

no shutdown

exit

interface Ethernet1/52

description FI-B-Uplink

switchport mode trunk

switchport trunk allowed vlan 1,70-76

spanning-tree port type edge trunk

mtu 9216

channel-group 12 mode active

no shutdown

exit

copy running-config startup-config

Step 2.      Log in as admin user into the Nexus Switch B and complete the following for the second switch configuration:

config Terminal

interface port-channel11

description FI-A-Uplink

switchport mode trunk

switchport trunk allowed VLAN 1,70-76

spanning-tree port type edge trunk

vpc 11

no shutdown

exit

interface port-channel12

description FI-B-Uplink

switchport mode trunk

switchport trunk allowed VLAN 1,70-76

spanning-tree port type edge trunk

vpc 12

no shutdown

exit

interface Ethernet1/51

description FI-A-Uplink

switchport mode trunk

switchport trunk allowed vlan 1,70-76

spanning-tree port type edge trunk

mtu 9216

channel-group 11 mode active

no shutdown

exit

interface Ethernet1/52

description FI-B-Uplink

switchport mode trunk

switchport trunk allowed vlan 1,70-76

spanning-tree port type edge trunk

mtu 9216

channel-group 12 mode active

no shutdown

exit

copy running-config startup-config

Verify vPC Status is up on both Cisco Nexus Switches

Figure 21 shows the verification of the vPC status on both Cisco Nexus Switches.
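Typical NX-OS commands for this verification, run on each Cisco Nexus switch, include:

show vpc brief

show vpc peer-keepalive

show port-channel summary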

Figure 21.       vPC Description for Cisco Nexus Switch A and B


Cisco MDS 9132T 32-Gb FC Switch Configuration

The following tables detail the cable connectivity between the Cisco MDS 9132T 32-Gb switches, the Cisco UCS 6536 Fabric Interconnects, and the Pure Storage FlashArray//X70 storage.

Note:      We used two 32Gb FC connections from each fabric interconnect to each MDS switch and two 32Gb FC connections from each Pure Storage FlashArray//X70 array controller to each MDS switch.

Table 9.      Cisco MDS 9132T-A Cabling Information

Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco MDS 9132T-A | FC1/9 | 32Gb FC | Pure Storage FlashArray//X70 Controller 0 | CT0.FC0
Cisco MDS 9132T-A | FC1/10 | 32Gb FC | Pure Storage FlashArray//X70 Controller 1 | CT1.FC0
Cisco MDS 9132T-A | FC1/13 | 32Gb FC | Cisco 6536 Fabric Interconnect-A | FC1/1
Cisco MDS 9132T-A | FC1/14 | 32Gb FC | Cisco 6536 Fabric Interconnect-A | FC1/2

Table 10.   Cisco MDS 9132T-B Cabling Information

Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco MDS 9132T-B | FC1/9 | 32Gb FC | Pure Storage FlashArray//X70 Controller 0 | CT0.FC2
Cisco MDS 9132T-B | FC1/10 | 32Gb FC | Pure Storage FlashArray//X70 Controller 1 | CT1.FC2
Cisco MDS 9132T-B | FC1/13 | 32Gb FC | Cisco 6536 Fabric Interconnect-B | FC1/1
Cisco MDS 9132T-B | FC1/14 | 32Gb FC | Cisco 6536 Fabric Interconnect-B | FC1/2

Pure Storage FlashArray//X70 to MDS SAN Fabric Connectivity

Pure Storage FlashArray//X70  to MDS A and B Switches using VSAN 500 for Fabric A and VSAN 501 Configured for Fabric B

In this solution, two ports (ports FC1/9 and FC1/10) of MDS Switch A and two ports (ports FC1/9 and FC1/10) of MDS Switch B are connected to Pure Storage System as listed in Table 11. All ports connected to the Pure Storage Array carry 32 Gb/s FC Traffic.

Table 11.   MDS 9132T 32-Gb switch Port Connection to Pure Storage System

Local Device | Local Port | Connection | Remote Device | Remote Port
Cisco MDS 9132T-A | FC1/9 | 32Gb FC | Pure Storage FlashArray//X70 Controller 0 | CT0.FC0
Cisco MDS 9132T-A | FC1/10 | 32Gb FC | Pure Storage FlashArray//X70 Controller 1 | CT1.FC0
Cisco MDS 9132T-B | FC1/9 | 32Gb FC | Pure Storage FlashArray//X70 Controller 0 | CT0.FC2
Cisco MDS 9132T-B | FC1/10 | 32Gb FC | Pure Storage FlashArray//X70 Controller 1 | CT1.FC2

Procedure 1.       Configure features and name for MDS Switch A and MDS Switch B

Follow these steps on both MDS switches.

Step 1.      Log in as admin user into MDS Switch A:

config terminal

feature npiv

feature telnet

switchname FlashStack-MDS-A

copy running-config startup-config

Step 2.      Log in as admin user into MDS Switch B. Repeat step 1 on MDS Switch B, substituting the appropriate switch name (for example, switchname FlashStack-MDS-B).

Procedure 2.       Configure VSANs for MDS Switch A and MDS Switch B

Step 1.      Log in as admin user into MDS Switch A. Create VSAN 500 for Storage Traffic:

config terminal

VSAN database

vsan 500

exit

zone smart-zoning enable vsan 500

vsan database

vsan 500 interface fc 1/9-16

exit

interface fc 1/9-16

switchport trunk allowed vsan 500

switchport trunk mode off

port-license acquire

no shutdown

exit

copy running-config startup-config

Step 2.      Log in as admin user into MDS Switch B. Create VSAN 501 for Storage Traffic:

config terminal

VSAN database

vsan 501

exit

zone smart-zoning enable vsan 501

vsan database

vsan 501 interface fc 1/9-16

exit

interface fc 1/9-16

switchport trunk allowed vsan 501

switchport trunk mode off

port-license acquire

no shutdown

exit

copy running-config startup-config

Configure Pure Storage FlashArray//X70

The design goal of the reference architecture is to best represent a real-world environment as closely as possible. The approach included the features of Cisco UCS to rapidly deploy stateless servers and use Pure Storage FlashArray’s boot LUNs to provision the ESXi on top of Cisco UCS. Zoning was performed on the Cisco MDS 9132T 32-Gb switches to enable the initiators to discover the targets during the boot process.
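The zoning follows the smart-zoning model enabled earlier on VSAN 500 (Fabric A) and VSAN 501 (Fabric B). The following is a minimal sketch for a single host on Fabric A; the device-alias names and WWPNs are placeholders and must be replaced with the actual host initiator and FlashArray target port WWPNs:

config terminal

device-alias database

device-alias name VDI-Host01-HBA0 pwwn 20:00:00:25:b5:aa:17:00

device-alias name X70-CT0-FC0 pwwn 52:4a:93:71:56:84:09:00

device-alias name X70-CT1-FC0 pwwn 52:4a:93:71:56:84:09:10

device-alias commit

zone name VDI-Host01-A vsan 500

member device-alias VDI-Host01-HBA0 init

member device-alias X70-CT0-FC0 target

member device-alias X70-CT1-FC0 target

exit

zoneset name FlashStack-Fabric-A vsan 500

member VDI-Host01-A

exit

zoneset activate name FlashStack-Fabric-A vsan 500

copy running-config startup-config

The same pattern is repeated on MDS Switch B with VSAN 501 and the Fabric B initiator and target WWPNs.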

Server profiles were created within Cisco Intersight to deploy the servers quickly with a standard configuration. SAN boot volumes for these servers were hosted on the same Pure Storage FlashArray//X70. Once the stateless servers were provisioned, the following process was performed to enable rapid deployment of the blade servers.

Each blade server has a dedicated single LUN on which to install the operating system, and all blade servers are configured to boot from SAN. For this solution, we installed the VMware vSphere ESXi 8.0 U2 Cisco Custom ISO on these LUNs.
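A minimal Purity//FA CLI sketch of that per-blade boot LUN layout follows; the volume name, size, and host name are placeholders, and boot volumes are connected to individual hosts rather than to the host group so that each server sees only its own boot LUN:

purevol create --size 32G ESXi-Host-01-Boot

purehost connect --vol ESXi-Host-01-Boot ESXi-Host-01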

Using logical servers that are disassociated from the physical hardware removes many limiting constraints around how servers are provisioned. Cisco UCS Service Profiles contain values for a server's property settings, including virtual network interface cards (vNICs), MAC addresses, boot policies, firmware policies, fabric connectivity, external management, and HA information. The service profiles represent all the attributes of a logical server in Cisco UCS model. By abstracting these settings from the physical server into a Cisco Service Profile, the Service Profile can then be deployed to any physical compute hardware within the Cisco UCS domain. Furthermore, Service Profiles can, at any time, be migrated from one physical server to another. Cisco is the only hardware provider to offer a truly unified management platform, with Cisco UCS Service Profiles and hardware abstraction capabilities extending to both blade and rack servers.

In addition to the service profiles, using Pure Storage FlashArray with SAN boot policy provides the following benefits:

   Scalability - Rapid deployment of new servers to the environment in a few steps.

   Manageability - Enables seamless hardware maintenance and upgrades without any restrictions. This is a huge benefit in comparison to another appliance model like Exadata.

   Flexibility - Easy to repurpose physical servers for different applications and services as needed.

   Availability - Hardware failures are less disruptive. In the rare case of a server failure, it is easy to associate the logical service profile with another healthy physical server, reducing the impact.

Summary

FlashStack is a powerful and reliable platform that has been specifically developed for enterprise end-user computing deployments and cloud data centers. It utilizes a range of innovative technologies, including Cisco UCS Blade and Rack Servers, Cisco Fabric Interconnects, Cisco Nexus 9000 switches, Cisco MDS 9100 Fibre Channel switches, and Pure Storage FlashArray//X70  Storage Array, to provide customers with a comprehensive solution that is designed and validated using best practices for compute, network, and storage.

With the introduction of Cisco UCS X210c M7 Series modular platform and Cisco Intersight, FlashStack now offers even more benefits to its users. These technologies enhance the ability to provide complete visibility and orchestration across all elements of the FlashStack datacenter, enabling users to modernize their infrastructure and operations. This means that users can achieve higher levels of efficiency, scalability, and flexibility while also reducing deployment time, project risk, and IT costs.

FlashStack has been validated using industry-standard benchmarks to ensure that it meets the highest standards of performance, management, scalability, and resilience. This makes it the ideal choice for customers who are looking to deploy enterprise-class VDI and other IT initiatives. With its powerful combination of hardware and software, FlashStack is capable of meeting the demands of the most complex and demanding IT environments, ensuring that users can focus on their core business objectives without having to worry about the underlying infrastructure.

Get More Business Value with Services

Whether you are planning your next-generation environment, need specialized know-how for a major deployment, or want to get the most from your current storage, Cisco Advanced Services, Pure Storage FlashArray//X70  storage and our certified partners can help. We collaborate with you to enhance your IT capabilities through a full portfolio of services for your IT lifecycle with:

   Strategy services to align IT with your business goals.

   Design services to architect your best storage environment.

   Deploy and transition services to implement validated architectures and prepare your storage environment.

   Operations services to deliver continuous operations while driving operational excellence and efficiency.

Additionally, Cisco Advanced Services and Pure Storage Support provide in-depth knowledge transfer and education services that give you access to our global technical resources and intellectual property.

Feedback

For comments and suggestions about this guide and related guides, join the discussion on Cisco Community at https://cs.co/en-cvds.

CVD Program

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unified Computing System (Cisco UCS), Cisco UCS B-Series Blade Servers, Cisco UCS C-Series Rack Servers, Cisco UCS S-Series Storage Servers, Cisco UCS X-Series, Cisco UCS Manager, Cisco UCS Management Software, Cisco Unified Fabric, Cisco Application Centric Infrastructure, Cisco Nexus 9000 Series, Cisco Nexus 7000 Series. Cisco Prime Data Center Network Manager, Cisco NX-OS Software, Cisco MDS Series, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study,  LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trade-marks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries. (LDW_P1)

All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)


 

Learn more