Cisco UCS X-Series Servers with Intel Optane Persistent Memory for Virtual Desktop Infrastructure White Paper

Updated: May 10, 2022


Executive summary

Organizations are increasingly adopting hybrid-cloud computing environments, and Cisco architectures are at the center of on-premises infrastructure. With new memory technologies such as Intel Optane persistent memory (PMEM) now available, the question for IT management is often "Is PMEM right for the Virtual Desktop Infrastructure (VDI) deployment?" This document describes the design and testing of a solution using the Cisco Intersight™ management platform, Release 5.0(1b); Cisco UCS® X210c M6 servers with Intel Optane persistent memory; VMware vSphere 7.0; VMware Horizon 8; Citrix Virtual Apps and Desktops 7 LTSR; and a "white box" storage array. Cisco Intersight Release 5.0(1b) provides consolidated support for all current Cisco Unified Computing System™ (Cisco UCS) fabric interconnect models: the Cisco UCS 6200, 6300, and 6400 Series Fabric Interconnects and the Cisco UCS 6324 Fabric Interconnect (Cisco UCS Mini). It also supports Cisco UCS 2200, 2300, and 2400 Series Fabric Extenders, Cisco UCS B-Series Blade Servers, and Cisco UCS C-Series Rack Servers.

The results of the study reported in this document show that a traditional Cisco UCS server configuration with dynamic random-access memory (DRAM) is comparable to a configuration with Intel Optane persistent memory. With both Citrix and VMware Horizon, VDI workloads performed well with Intel PMEM. Your organization may therefore achieve a lower-cost solution by using PMEM instead of DRAM in VDI environments. Key findings include the following:

      For VDI configurations requiring more than 1 TB of memory per server, you can achieve a higher return on investment (ROI) by using PMEM, without sacrificing performance.

      Incorporating PMEM will not by itself negatively affect performance.

      Performance results were the same across Citrix and VMware Horizon–based solutions.

Overview

This section describes the Cisco components used in the architecture.

Cisco Intersight platform

The Cisco Intersight platform is a software-as-a-service (SaaS) infrastructure lifecycle management platform that delivers simplified configuration, deployment, maintenance, and support. With the Cisco Intersight platform, you get all the benefits of SaaS delivery and full lifecycle management of Intersight-connected distributed servers and third-party storage systems across data centers, remote sites, branch offices, and edge environments (Figure 1).

The Cisco Intersight platform is designed to be modular, so you can adopt services based on your individual requirements. The platform significantly simplifies IT operations by bridging applications with infrastructure and providing visibility and management from bare-metal servers and hypervisors to serverless applications, thereby reducing cost and mitigating risk. This unified SaaS platform exposes an OpenAPI interface that natively integrates with third-party platforms and tools.


Figure 1.   

Cisco Intersight overview
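As a simple illustration of that OpenAPI surface, the following sketch queries the Intersight REST API for server inventory. The compute/PhysicalSummaries resource path and the Results response envelope are part of the published API; the get_auth_headers() helper is a placeholder for Intersight's API-key HTTP-signature authentication, which the official Intersight SDKs implement for you.

import requests

BASE_URL = "https://intersight.com/api/v1"

def get_auth_headers(method: str, path: str) -> dict:
    """Placeholder: Intersight authenticates each request with an API key ID
    and an HTTP message signature computed from your secret key. The official
    Intersight SDKs implement this signing; substitute your own here."""
    raise NotImplementedError("supply Intersight API-key signing")

def list_physical_servers(limit: int = 10) -> list:
    # compute/PhysicalSummaries returns one summary object per managed server.
    path = "/compute/PhysicalSummaries"
    resp = requests.get(BASE_URL + path,
                        headers=get_auth_headers("GET", path),
                        params={"$top": limit})  # OData-style paging parameter
    resp.raise_for_status()
    return resp.json().get("Results", [])

if __name__ == "__main__":
    for server in list_physical_servers():
        print(server.get("Name"), server.get("Model"), server.get("Serial"))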

The main benefits of Cisco Intersight infrastructure services are summarized here:

      Simplify daily operations by automating many daily manual tasks.

      Combine the convenience of a SaaS platform with the capability to connect from anywhere and manage infrastructure through a browser or mobile app.

      Stay ahead of problems and accelerate trouble resolution through advanced support capabilities.

      Gain global visibility of infrastructure health and status along with advanced management and support capabilities.

      Upgrade to add workload optimization and Kubernetes services when needed.

Cisco UCS is a next-generation data center platform that unites computing, networking, and storage access. Optimized for virtual environments, the platform is designed using open industry-standard technologies and aims to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 25-Gigabit Ethernet (GE) unified network fabric with enterprise-class x86-architecture servers. It is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain.


 

Cisco Unified Computing System components

The main components of Cisco UCS follow:

      Compute: The system is based on an entirely new class of computing system that incorporates blade servers based on Intel Xeon Scalable Family processors.

      Network: The system is integrated on a low-latency, lossless, 25-GE unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing (HPC) networks, which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables needed, and by decreasing the power and cooling requirements.

      Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.

      Storage access: The system provides consolidated access to local storage, SAN storage, and network-attached storage (NAS) over the unified fabric. With storage access unified, Cisco UCS can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Small Computer System Interface over IP (iSCSI) protocols. This capability offers you choice for storage access and investment protection. In addition, server administrators can pre-assign storage-access policies for system connectivity to storage resources, simplifying storage connectivity and management and helping increase productivity.

 

Cisco UCS is designed to deliver:

      Reduced TCO and increased business agility

      Increased IT staff productivity through just-in-time provisioning and mobility support

      A cohesive, integrated system that unifies the technology in the data center; the system is managed, serviced, and tested as a whole

      Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand

      Industry standards supported by a partner ecosystem of industry leaders

Cisco UCS Manager provides unified, embedded management of all software and hardware components of Cisco UCS across multiple chassis, rack servers, and thousands of virtual machines. It manages Cisco UCS as a single entity through an intuitive graphical user interface (GUI), a command-line interface (CLI), or an Extensible Markup Language application programming interface (XML API) for comprehensive access to all Cisco UCS Manager functions.
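As a minimal illustration of the XML API, the following Python sketch authenticates to UCS Manager, resolves the computeBlade class to enumerate blade servers, and releases the session. The aaaLogin, configResolveClass, and aaaLogout methods are standard UCS Manager XML API calls; the hostname and credentials here are placeholders.

import requests
import xml.etree.ElementTree as ET

UCSM_URL = "https://ucsm.example.com/nuova"  # placeholder UCS Manager endpoint

def xml_call(body: str) -> ET.Element:
    # UCS Manager exposes a single XML API endpoint; each method is POSTed as XML.
    # verify=False is for lab self-signed certificates only.
    resp = requests.post(UCSM_URL, data=body, verify=False)
    resp.raise_for_status()
    return ET.fromstring(resp.text)

# 1. Authenticate and obtain a session cookie.
login = xml_call('<aaaLogin inName="admin" inPassword="password" />')
cookie = login.get("outCookie")

try:
    # 2. Resolve all objects of class computeBlade (one per blade server).
    result = xml_call(
        f'<configResolveClass cookie="{cookie}" classId="computeBlade" '
        'inHierarchical="false" />'
    )
    for blade in result.iter("computeBlade"):
        print(blade.get("dn"), blade.get("model"), blade.get("serial"))
finally:
    # 3. Always release the session.
    xml_call(f'<aaaLogout inCookie="{cookie}" />')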

Cisco UCS 6400 Series Fabric Interconnect

The Cisco UCS 6400 Series Fabric Interconnects are a core part of Cisco UCS, providing both network connectivity and management capabilities for the system. The Cisco UCS 6400 Series offer line-rate, low-latency, lossless 10-/25-/40-/100-GE, FCoE, and Fibre Channel functions.

The Cisco UCS 6400 Series provide the management and communication backbone for the Cisco UCS B-Series Blade Servers, Cisco UCS 5108 Blade Server chassis, Cisco UCS managed C-Series Rack Servers, and Cisco UCS S-Series Storage Servers. All servers attached to a Cisco UCS 6400 Series Fabric Interconnect become part of a single, highly available management domain. In addition, by supporting a unified fabric, the Cisco UCS 6400 Series Fabric Interconnects provide both LAN and SAN connectivity for all servers within their domain.

From a networking perspective, the Cisco UCS 6400 Series use a cut-through architecture, supporting deterministic, low-latency, line-rate 10-/25-/40-/100-GE ports; a switching capacity of 3.82 Tbps for the 6454 and 7.42 Tbps for the 64108; and 200 Gbps of bandwidth between the 6400 Series Fabric Interconnect and the Cisco UCS 2408 I/O Module (IOM) per Cisco UCS 5108 Blade Server chassis, independent of packet size and enabled services. The product family supports Cisco low-latency, lossless 10-/25-/40-/100-GE unified network fabric capabilities, which increase the reliability, efficiency, and scalability of Ethernet networks. The fabric interconnect supports multiple traffic classes over a lossless Ethernet fabric from the server through the fabric interconnect. Significant TCO savings come from an FCoE-optimized server design in which you can consolidate network interface cards (NICs), host bus adapters (HBAs), cables, and switches. Figures 2 and 3 show the Cisco UCS 6454 front and rear views, respectively.


Figure 2.   

Cisco UCS 6400 Series Fabric Interconnect - 6454 Front View


Figure 3.   

Cisco UCS 6400 Series Fabric Interconnect - 6454 Rear View

Cisco UCS X9508 chassis

The Cisco UCS X-Series Modular System simplifies your data center, adapting to the unpredictable needs of modern applications while also providing for traditional scale-out and enterprise workloads. It reduces the number of server types to maintain, helping to improve operational efficiency and agility as it helps reduce complexity. Powered by the Cisco Intersight cloud-operations platform, it shifts users' IT focus from administrative details to business outcomes, with a hybrid-cloud infrastructure that is assembled from the cloud, shaped to their workloads, and continuously optimized.

The Cisco UCS X-Series Modular System begins with the Cisco UCS X9508 chassis engineered to be adaptable and future-ready. It is a standard, open system designed to deploy and automate faster in concert with a hybrid-cloud environment.

With a midplane-free design, I/O connectivity for the X9508 chassis is accomplished with front-loading, vertically oriented compute nodes intersecting with horizontally oriented I/O connectivity modules in the rear of the chassis. A unified Ethernet fabric is supplied with the Cisco UCS 9108 Intelligent Fabric Modules. In the future, Cisco UCS X-Fabric Technology interconnects will supply other industry-standard protocols as standards emerge. You can easily update interconnections with new modules.

Features

      The 7-rack-unit (7RU) chassis has 8 front-facing flexible slots. These slots can house a combination of compute nodes and a pool of future I/O resources that may include GPU accelerators, disk storage, and nonvolatile memory.

       Cisco UCS 9108 Intelligent Fabric Modules (IFMs) at the top of the chassis connect the chassis to upstream Cisco UCS 6400 Series Fabric Interconnects. Each IFM features:

    Up to 100 Gbps of unified fabric connectivity per compute node

    Eight 25-Gbps Small Form-Factor Pluggable 28 (SFP28) uplink ports; the unified fabric carries management traffic to the Cisco Intersight cloud-operations platform and FCoE and production Ethernet traffic to the fabric interconnects.

      At the bottom are slots ready to house future I/O modules that can flexibly connect the compute modules with I/O devices. We call this connectivity Cisco UCS X-Fabric technology because "X" is a variable that can evolve with new technology developments.

      Six 2800W power supply units (PSUs) provide 54V power to the chassis with N, N+1, and N+N redundancy. A higher voltage allows efficient power delivery with less copper and reduced power loss.

      Four efficient, 100-mm, dual counter-rotating fans deliver industry-leading airflow and power efficiency. Optimized thermal algorithms enable different cooling modes to best support the network environment. Cooling is modular so that future enhancements can potentially handle open- or closed-loop liquid cooling to support even higher-power processors.

Figure 4 shows the Cisco UCS X9508 chassis front and back views.


Figure 4.   

Cisco UCS X9508 chassis front (left) and back (right) views

Benefits

Since we first delivered Cisco UCS in 2009, our goal has been to simplify the data center. We pulled management out of servers and into the network. We simplified multiple networks into a single, unified fabric. And we eliminated network layers in favor of a flat topology wrapped into a single, unified system. With the Cisco UCS X-Series Modular System, we take that simplicity to the next level:

      Simplify with cloud-operated infrastructure: We move management from the network into the cloud so that you can respond at the speed and scale of your business and manage all of your infrastructure. You can shape Cisco UCS X-Series Modular System resources to workload requirements with the Cisco Intersight cloud-operations platform. You can integrate third-party devices including storage from NetApp, Pure Storage, and Hitachi. And you gain intelligent visualization, optimization, and orchestration for all of your applications and infrastructure.

      Simplify with an adaptable system designed for modern applications: Today's cloud-native, hybrid applications are inherently unpredictable. They get deployed and redeployed as part of an iterative DevOps practice. Requirements change often, and you need a system that doesn't lock you into one set of resources when you find that you need another. For hybrid applications and a range of traditional data center applications, you can consolidate onto a single platform that combines the density and efficiency of blade servers with the expandability of rack servers. The result: better performance, automation, and efficiency.

      Simplify with a system engineered for the future: Embrace emerging technology and reduce risk with a modular system designed to support future generations of processors, storage, nonvolatile memory, accelerators, and interconnects. Gone is the need to purchase, configure, maintain, power, and cool discrete management modules and servers. Cloud-based management is kept up-to-date automatically with a constant stream of new capabilities delivered by the Cisco Intersight SaaS model.

      Support a broader range of workloads: A single server type supporting a broader range of workloads means fewer different products to support, reduced training costs, and increased flexibility.

Cisco UCS X210c M6 Compute Node

The Cisco UCS X210c M6 Compute Node is the first computing device to integrate into the Cisco UCS X-Series Modular System. Up to eight compute nodes can reside in the 7RU Cisco UCS X9508 chassis, offering one of the highest densities of compute, I/O, and storage per rack unit in the industry. The Cisco UCS X210c Compute Node harnesses the power of the latest third-generation Intel Xeon Scalable processors (Ice Lake) and offers the following:

      CPU: The node offers up to two third-generation Intel Xeon Scalable Processors with up to 40 cores per processor and a 1.5-MB Level 3 cache per core.

      Memory: It offers up to thirty-two 256-GB, 3200-MHz double-data-rate 4 (DDR4) dual in-line memory modules (DIMMs) for up to 8 TB of main memory. Configuring sixteen of the slots with 512-GB Intel Optane persistent-memory modules can yield up to 12 TB of memory (the short sketch following this list works through the arithmetic).

      Storage: The node offers up to six hot-pluggable solid-state drives (SSDs) or nonvolatile memory express (NVMe) 2.5-inch drives, with a choice of enterprise-class Redundant Array of Independent Disks (RAID) or pass-through controllers with four lanes each of fourth-generation PCIe connectivity, and up to two M.2 SATA drives for flexible boot and local storage capabilities.

      Modular LAN-on-Motherboard (mLOM) virtual interface card: Cisco UCS Virtual Interface Card (VIC) 14425 occupies the mLOM slot of the server, enabling up to 50 Gbps of unified fabric connectivity to each of the chassis IFMs for 100-Gbps connectivity per server.

      Optional mezzanine VIC: The Cisco UCS VIC 14825 can occupy the mezzanine slot at the bottom rear of the server. This card's I/O connectors link to Cisco UCS X-Fabric technology, which is planned for future I/O expansion. An included bridge card extends this VIC's two 50-Gbps network connections through the IFM connectors, bringing the total bandwidth to 100 Gbps per fabric (for a total of 200 Gbps per server).

      Security: The server supports an optional trusted platform module (TPM). Additional features include a secure boot FPGA and ACT2 anti-counterfeit provisions.
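The memory capacities quoted in the list above follow directly from the module counts. Here is a small sketch of the arithmetic, assuming the 12-TB figure pairs sixteen 512-GB PMEM modules with sixteen 256-GB DRAM DIMMs in the remaining slots:

# Capacity arithmetic for the Cisco UCS X210c M6 memory configurations above.
GB = 1
TB = 1024 * GB

# All 32 DIMM slots populated with 256-GB DDR4 DIMMs:
dram_only = 32 * 256 * GB
print(dram_only / TB)  # 8.0 -> 8 TB of main memory

# Mixed configuration: sixteen 512-GB Intel Optane PMEM modules plus
# sixteen 256-GB DDR4 DIMMs in the remaining slots:
pmem = 16 * 512 * GB   # 8 TB of persistent memory
dram = 16 * 256 * GB   # 4 TB of DRAM
print((pmem + dram) / TB)  # 12.0 -> 12 TB of total memory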

Cisco Nexus 93180YC-FX Switches

The Cisco Nexus 93180YC-FX Switch provides a flexible line-rate Layer 2 and Layer 3 feature set in a compact form factor. Designed with Cisco Cloud Scale technology (refer to Figure 5), it supports highly scalable cloud architectures. With the option to operate in Cisco NX-OS or Application Centric Infrastructure (ACI) mode, you can deploy it across enterprise, service-provider, and Web 2.0 data centers. Features of this switch include:

      Architectural flexibility:

    It includes top-of-rack or middle-of-row fiber-based server access connectivity for traditional and leaf-spine architectures.

    It offers leaf-node support for the Cisco ACI architecture, which is on the product roadmap.

    It increases scale and simplifies management through Cisco Nexus 2000 Fabric Extender support.

      Feature richness:

    Enhanced Cisco NX-OS Software is designed for performance, resiliency, scalability, manageability, and programmability.

    ACI-ready infrastructure helps you take advantage of automated policy-based systems management.

    Virtual Extensible LAN (VXLAN) routing provides network services.

    The switch offers rich traffic flow telemetry with line-rate data collection.

    Real-time buffer use per port and per queue allows you to monitor traffic micro-bursts and application traffic patterns.

      Highly available and efficient design:

    High-density, nonblocking architecture

    Easily deployed into either a hot-aisle or a cold-aisle configuration

    Redundant, hot-swappable power supplies and fan trays

      Simplified operations:

    Power-On Auto Provisioning (POAP) support allows for simplified software upgrades and configuration file installation.

    An intelligent API offers switch management through remote procedure calls (RPCs), JavaScript Object Notation (JSON), or Extensible Markup Language (XML) over an HTTP/Secure HTTP (HTTPS) infrastructure (a short NX-API sketch at the end of this section illustrates this interface).

    You can use Python Scripting for programmatic access to the switch CLI.

    The switch features hot and cold patching and online diagnostics.

      Investment protection

A Cisco 25-GE bidirectional transceiver allows reuse of an existing 10-GE multimode cabling plant for 25-GE connectivity, supporting 1- and 10-GE access connectivity for data centers that are migrating their access switching infrastructure to faster speeds. The following are supported:

      1.8 Tbps of bandwidth in a 1RU form factor

      48 fixed 1-/10-/25-GE SFP+ ports

      6 fixed 40-/100-GE Quad SFP+ (QSFP+) ports for uplink connectivity

      Latency of less than 2 microseconds

      Front-to-back or back-to-front airflow configurations

      1+1 redundant hot-swappable 80 Plus Platinum-certified power supplies

      Hot-swappable 3+1 redundant fan trays


Figure 5.   

Cisco Nexus 93180YC-FX Switch
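The intelligent API called out under "Simplified operations" is NX-API. As a brief sketch, the following posts a CLI command to the switch's JSON-RPC endpoint, which is available once the nxapi feature is enabled on the switch; the hostname and credentials are placeholders.

import requests

NXAPI_URL = "https://nexus.example.com/ins"  # placeholder NX-API endpoint

def run_cli(command: str, username: str, password: str) -> dict:
    # NX-API accepts CLI commands wrapped in a JSON-RPC 2.0 envelope.
    payload = [{
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {"cmd": command, "version": 1},
        "id": 1,
    }]
    resp = requests.post(NXAPI_URL,
                         json=payload,
                         headers={"content-type": "application/json-rpc"},
                         auth=(username, password),
                         verify=False)  # lab self-signed certificates only
    resp.raise_for_status()
    return resp.json()["result"]["body"]

body = run_cli("show version", "admin", "password")
print(body.get("nxos_ver_str"), body.get("chassis_id"))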

Solution design

This section provides an overview of the infrastructure setup, software and hardware requirements, and some of the design details. It does not discuss the design details and configuration of components such as Cisco Nexus and Cisco MDS 9000 Family Switches and storage array systems because their design and configuration conform to various Cisco Validated Designs for converged infrastructure and are covered widely elsewhere. This document focuses on the design elements and performance of the Intel platform for application in VDI environments.

Physical architecture

The architecture deployed is highly modular and follows Cisco Validated Designs implementation principles for the converged infrastructure (Figure 6). Although your environment may vary in its exact configuration, you can easily scale the architecture described in this document to meet different requirements and demands. You can scale the design both up (by adding resources within a Cisco UCS domain) and out (by adding Cisco UCS domains).


Figure 6.   

Physical architecture

Components deployed include the following:

      Two Cisco Nexus 93180YC-FX Switches

      Two Cisco UCS 6454 Fabric Interconnects

      One Cisco UCS X210c M6 compute node with 1 TB of DRAM

      One Cisco UCS X210c M6 compute node with 1 TB of Intel Optane persistent memory

Logical architecture

The logical architecture is configured identically in both clusters to directly compare the performance of traditional DRAM with that of Intel Optane persistent memory (Figure 7). For desktop virtualization, the deployment includes Citrix Virtual Apps and Desktops 1912 LTSR CU2 running on VMware vSphere ESXi 7.0.3 and VMware Horizon 8 running on vSphere ESXi 7.0.3.

The purpose of this design is to compare and contrast DRAM to PMEM in a VDI environment on Cisco UCS servers.


Figure 7.   

Logical architecture

Table 1 lists the software and firmware versions used in the solution described in this document.

Table 1.        Software and firmware versions

Cisco UCS component firmware: Bundle Release 5.0(1b)

Cisco Intersight: Bundle Release 5.0(1b)

Cisco UCS B200 M5 blades (for infrastructure): Bundle Release 5.0(1b)

Cisco UCS VIC 1440: Bundle Release 5.0(1b)

Cisco UCS X210c M6: Bundle Release 5.0(1b)

Cisco UCS VIC 14425: Bundle Release 5.0(1b)

VMware vCenter Server Appliance: Release 7.0.3.187784558

VMware vSphere 7.0.3: Release 19193900

Citrix Virtual Apps and Desktops 1912 LTSR CU2: Release 1912.3000

Citrix Provisioning Services (PVS): Release 1912.3000

Citrix Virtual Desktop Agent (VDA): Release 1912.3000

Microsoft FSLogix for profile management: FSLogix_Apps_2.9.7654.46150

VMware Horizon: Release 8.1

 

Creating a Cisco UCS persistent-memory policy

To create a server persistent-memory policy for VMware ESXi hosts that have Intel Optane persistent memory installed, follow these steps:

1.     In the Cisco Intersight platform, choose Policies.

2.     Click Create Policy.

3.     Select Persistent Memory.

4.     Click Next.

5.     Name the policy appropriately.

6.     Leave the Enable Security Passphrase option disabled.

7.     Under Goals, click Add.

8.     Set Memory Mode (%) to 100 and set Persistent Memory Type to App Direct.


9.     Click Create to complete creating the policy.
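If you prefer automation over the GUI steps above, the same policy can be created through the Intersight REST API. The sketch below is a hedged example: memory/PersistentMemoryPolicies is the published resource path for this policy type, but the goal and security field names shown are assumptions that should be verified against the Intersight API reference, and get_auth_headers() stands in for Intersight's API-key request signing.

import requests

BASE_URL = "https://intersight.com/api/v1"

def get_auth_headers(method: str, path: str) -> dict:
    """Placeholder for Intersight API-key HTTP-signature authentication."""
    raise NotImplementedError("supply Intersight API-key signing")

# Policy mirroring the GUI steps: security passphrase disabled, one goal
# with Memory Mode at 100 percent and persistent-memory type App Direct.
# Field names are assumptions based on the memory.PersistentMemoryPolicy
# model; verify them against the Intersight API reference before use.
policy = {
    "Name": "PMEM-AppDirect-100",
    "Description": "Persistent-memory policy for ESXi hosts with Optane PMEM",
    "ManagementMode": "configured-from-intersight",
    "LocalSecurity": {"Enabled": False},
    "Goals": [{
        "SocketId": "All Sockets",
        "MemoryModePercentage": 100,
        "PersistentMemoryType": "app-direct",
    }],
}

path = "/memory/PersistentMemoryPolicies"
resp = requests.post(BASE_URL + path,
                     json=policy,
                     headers=get_auth_headers("POST", path))
resp.raise_for_status()
print("Created policy Moid:", resp.json()["Moid"])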

VDI configuration: Configuring the master target

You must first build the master target virtual machines with the software components needed to create the golden images. Additionally, you should install all available security patches for the Microsoft operating system and Microsoft Office.

To prepare the master virtual machines, perform these four major steps:

1.     Install the operating system and VMware tools.

2.     Install the application software.

3.     Install the Citrix VDA.

4.     Optimize the image with the Citrix OS Optimization Tool.

Note:      For VMware Horizon images, VMware OSOT, the OS Optimization Tool, includes customizable templates to enable or disable Windows system services and features according to VMware recommendations and best practices across multiple systems. Because most Windows system services are enabled by default, you can use an optimization tool to easily disable unnecessary services and features to improve the performance of your virtual desktops.

Note:      The images contain the basic functions needed to run the Login Virtual Session Indexer (Login VSI) workload.

The master target virtual machine was configured as listed in Table 2.

Table 2.        VDI virtual machine configuration

Operating system: Microsoft Windows 10 64-bit

Virtual CPUs (vCPUs): 2

Memory: 4 GB

Network: VMXNET3

Virtual disk (vDisk) size: 32 GB

Additional software used for testing: Microsoft Office 2016; Login VSI 4.1.40 (Knowledge Worker Workload)

Testing

Testing focused on host memory performance and a comparison of traditional DRAM to Intel Optane persistent memory. Testing assessed processing of the virtual desktop lifecycle during desktop boot-up, user logon and virtual desktop acquisition (also referred to as ramp-up), user workload execution (also referred to as steady state), and user logoff for the VDI session under test. This testing methodology is used in all Cisco Validated Designs for VDI. For more information, please visit: https://ucsnav.cisco.com/design-navigator/#/VCC?_k=yur5o2.

Test metrics were gathered from the Cisco UCS host and load-generation software to assess the overall success of an individual test cycle.

You can obtain additional information and a free test license at: http://www.loginvsi.com.
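The host-side counters reported in the following sections can be captured with esxtop in batch mode (esxtop -b), which writes perfmon-style CSV. Below is a minimal sketch for summarizing such a capture; the counter-name pattern is an assumption, so check it against the header row of your own capture.

import csv

def summarize_cpu(csv_path: str) -> None:
    # esxtop -b output: the first column is a timestamp; remaining columns are
    # counters named like "\\host\Physical Cpu(_Total)\% Core Util Time".
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        # Locate the total core-utilization counter (name pattern is an
        # assumption; verify against your capture's header).
        idx = next(i for i, name in enumerate(header)
                   if "Physical Cpu(_Total)" in name
                   and "% Core Util Time" in name)
        samples = [float(row[idx]) for row in reader
                   if len(row) > idx and row[idx]]
    print(f"samples={len(samples)} "
          f"avg={sum(samples) / len(samples):.1f}% "
          f"peak={max(samples):.1f}%")

summarize_cpu("esxtop_capture.csv")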

DRAM cluster test results with Citrix Virtual Apps and Desktops

This section shows the important performance metrics that were captured on the Cisco UCS X210c M6 servers with dual Intel Xeon Gold 6348 processors and 1 TB of 3200-MHz DRAM during cluster testing in the N+1 environment. The cluster testing was performed using Windows 10 64-bit VDI nonpersistent virtual machines (instant clones for VMware Horizon and Citrix PVS for Citrix) with two vCPUs and 4 GB of RAM.

Figure 8 shows the Login VSI data.


Figure 8.   

DRAM cluster scale testing for Citrix 1912 LTSR VDI: VSI score

Figures 9 and 10 show performance data for the server running the workload.


Figure 9.   

DRAM cluster scale testing for Citrix 1912 LTSR VDI: Host CPU usage


Figure 10.   

DRAM node testing: Host memory usage

Intel Optane persistent-memory cluster test results with Citrix Virtual Apps and Desktops

This section shows the important performance metrics that were captured on the Cisco UCS X210c M6 servers with dual Intel Xeon Gold 6348 processors and 1 TB of 3200-MHz PMEM and 256 GB of 3200-MHz DRAM during cluster testing in the N+1 environment. The cluster testing was performed using Windows 10 64-bit VDI nonpersistent virtual machines (instant clones for VMware Horizon and Citrix PVS for Citrix) with two vCPUs and 4 GB of RAM.

Figure 11 shows the Login VSI data.


Figure 11.   

Intel Optane persistent-memory cluster scale testing for Citrix 1912 LTSR VDI: VSI score

Figures 12 and 13 show performance data for the server running the workload.


Figure 12.   

Intel Optane persistent-memory cluster scale testing for Citrix 1912 LTSR VDI: Host CPU usage


Figure 13.   

Intel Optane persistent-memory node testing for Citrix 1912 LTSR VDI: Host memory usage (host nonkernel memory, in MB, for the Intel Optane persistent-memory node; test for 250 users with Citrix VDI)


DRAM cluster test results with VMware Horizon

This section shows the important performance metrics that were captured on the Cisco UCS X210c M6 servers with dual Intel Xeon Gold 6348 processors and 1 TB of 3200-MHz DRAM during cluster testing in the N+1 environment. The cluster testing was performed using Windows 10 64-bit VDI nonpersistent virtual machines (instant clones for VMware Horizon) with two vCPUs and 4 GB of RAM.

Figure 14 shows the Login VSI data.


Figure 14.   

DRAM testing for VMware Horizon 8.1 running 250 VDI knowledge worker workload users: Login VSI end-user experience score (in milliseconds [ms])

Figures 15 and 16 show performance data for the server running the workload.


Figure 15.   

DRAM cluster scale testing for VMware Horizon VDI: Host CPU usage percentage (host CPU usage percentage for the DRAM node)

 


Figure 16.   

DRAM testing for VMware Horizon VDI: Host memory usage (nonkernel memory for the DRAM node)

Intel Optane persistent-memory cluster test results with VMware Horizon

This section shows the important performance metrics that were captured on the Cisco UCS X210c M6 servers with dual Intel Xeon Gold 6348 processors and 1 TB of 3200-MHz PMEM and 256 GB of 3200-MHz DRAM during cluster testing in the N+1 environment. The testing was performed using Windows 10 64-bit VDI nonpersistent virtual machines (instant clones for VMware Horizon) with two vCPUs and 4 GB of RAM.

Figure 17 shows the Login VSI data.


Figure 17.   

Intel Optane persistent-memory cluster scale testing for VMware Horizon 8.1 running 250 VDI knowledge worker workload users: Login VSI end-user experience score (in ms) (hosts running PMEM)

Figures 18 and 19 show performance data for the server running the workload.


Figure 18.   

Intel Optane persistent-memory cluster scale testing for VMware Horizon VDI: Host CPU usage (Host CPU use percentage for the PMEM node)


Figure 19.   

Intel Optane persistent-memory cluster scale testing for VMware Horizon VDI: Host memory usage (Nonkernel memory for the PMEM node)

Test results comparison between DRAM and PMEM node testing with Citrix Virtual Apps and Desktops

Figure 20 shows a comparison of CPU usage using DRAM and PMEM.


Figure 20.   

Memory cluster scale testing using Citrix 1912 LTSR VDI: Host CPU usage

Test results comparison between DRAM and PMEM node testing with VMware Horizon

Figure 21 shows a comparison of VMware Horizon CPU usage percentage using DRAM and PMEM.


Figure 21.   

CPU usage scale testing using VMware Horizon 8.1 VDI: Host CPU usage

Conclusion

Cisco delivers a highly capable platform for enterprise end-user computing deployments using Cisco UCS X210c M6 servers with Intel Xeon CPUs, and now also with Intel Optane persistent memory.

The introduction of Intel Optane persistent memory in memory mode yields strong performance results similar to those for traditional DRAM-based solutions.

Integrating the Cisco Intersight platform into your environment gives you global visibility into infrastructure health and status along with a constantly growing list of advanced management and support capabilities.

For more information

Consult the following references for additional information about the topics discussed in this document:

Products and solutions

      Cisco Intersight platform:
https://www.intersight.com/

      Cisco UCS 6454 Fabric Interconnect:
https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/datasheet-c78-741116.html

      Cisco UCS X9508 Series Server chassis:
https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-x-series-modular-system/datasheet-c78-2472574.html

      Cisco UCS X-Series compute nodes:
https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-x-series-modular-system/datasheet-c78-2465523.html

      Cisco UCS adapters: http://www.cisco.com/en/US/products/ps10277/prod_module_series_home.html

      Cisco Nexus 9000 Series Switches:
http://www.cisco.com/c/en/us/products/switches/nexus-9000-series-switches/index.html

Interoperability matrixes

      Cisco UCS Hardware Compatibility Matrix:
https://ucshcltool.cloudapps.cisco.com/public/

Cisco Validated Designs for VDI

      Deployment guide for VDI FlexPod Datacenter with VMware vSphere 7.0 and NetApp ONTAP 9.9:
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flexpod_citrix_vmware_esxi7_hypervisor.html
