Elastic Cloud on Kubernetes with Cisco HyperFlex and Cisco Container Platform

White Paper

Available Languages

Download Options

  • PDF
    (3.0 MB)
    View with Adobe Reader on a variety of devices
Updated: January 18, 2021

Bias-Free Language

The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.




Solution overview

This section provides an overview of the Elastic Cloud on Cisco HyperFlex solution and the challenges it is designed to address.

Business challenge

As organizations continue to evolve and embrace digitization, the adoption of cloud-based services is continuing to grow. Organizations are increasingly taking the hybrid cloud approach in their digitization journeys to capitalize on the benefits of both private and public cloud models.

With hybrid cloud environments comes the need to track, store, analyze, and manage the large volumes of business data generated daily. This rapid growth in data from IT infrastructure and applications challenges IT operations. To capture, analyze, and act on this data, IT organizations require performance and scale from the underlying hardware infrastructure so that they can integrate and manage petabytes of data on demand. In addition, unpredictable data volumes and data surges demand infrastructure readiness: infrastructure must be highly adaptable and reliable and must deliver the performance needed for real-time data analysis so that the organization can derive meaningful business insights. To remain competitive, on-premises infrastructure must deliver these functions while providing cloud-like scalability and ease of deployment.

The solution

Elasticsearch is a highly scalable open-source full-text search and analytics engine. It allows you to store, search, and analyze large volumes of data quickly and in near real time. Elasticsearch provides a distributed system on top of Lucene Standard Analyzer for indexing and automatic type guessing and uses a JavaScript Object Notation (JSON)–based representational state transfer (REST) API to refer to Lucene features. Additionally, Elastic Cloud on Kubernetes (ECK) extends the basic Kubernetes orchestration capabilities to support the setup, management, and monitoring of Elasticsearch and Kibana on Kubernetes. ECK also provides a full-featured Elastic Stack experience on Kubernetes.

The Cisco HyperFlex™ hyperconverged infrastructure (HCI)–based offering is designed for simplicity. It brings increased operational efficiency and adaptability to modern workloads in your data center and can be deployed in less than an hour. Cisco HyperFlex systems provide comprehensive end-to-end automation across computing, storage, and networking resources using a simple and intuitive wizard. The Cisco Unified Computing System™ (Cisco UCS®) helps you increase your competitive advantage with cloud infrastructure that delivers programmability, unified infrastructure and management, and choice. Whether you need to deploy a private cloud or extend your capabilities with a secure hybrid cloud approach, Cisco UCS makes it easy to build and consistently manage cloud environments. With the Cisco UCS API, you can plug in to cloud management solutions such as the Cisco Intersight™ platform, enabling your administrators to use familiar management models and take advantage of built-in automation and intelligence to gain outstanding visibility into, and control over, your private, public, and hybrid cloud environments. The Cisco Intersight cloud-based management offering makes infrastructure management more intuitive. Cisco Intersight software offers intelligent management that enables IT organizations to analyze, simplify, and automate their environments in more advanced ways than with previous generations of tools.

ECK allows you to configure, manage, and scale your cluster quickly and easily. Cisco’s container management solution, Cisco® Container Platform (CCP), is well suited to this work. It is a ready-to-use, open, production-class software solution based on native Kubernetes (100 percent upstream) that is aimed at simplifying the deployment and management of container clusters. Cisco Container Platform provides a comprehensive stack for creating and managing Kubernetes clusters on a subscription-based platform, and it includes capabilities for networking, load balancing, persistent storage, security, monitoring, analytics, and optimization. Cisco Container Platform is based on industry standards, with an open architecture and open-source components providing flexible deployment options that avoid lock-in. It is infrastructure independent and can work across any infrastructure.

The Cisco HyperFlex system, with its ability to scale computing and storage resources independently, coupled with the cloud-like management experience provided by the Cisco Intersight platform, is well suited for enterprise applications such as ECK and achieves an excellent cost-to-performance ratio. In addition, with Cisco Container Platform, deployment and management of Elasticsearch clusters are simple, whether they are on-premises or in the cloud.

Solution benefits

Deploying ECK on the Cisco HyperFlex platform using Cisco Container Platform provides the following benefits:

  • Single data center architecture based on Cisco UCS

  • Independent resource scaling of computing and capacity tiers in Cisco HyperFlex systems

  • Continuous data optimization with inline data deduplication and compression for data scaling in Cisco HyperFlex systems

  • Greater virtual machine density with lower latencies in Cisco HyperFlex systems

  • Multiple Kubernetes cluster management with Cisco Container Platform

  • Easy upgrading to new stack versions on Cisco Container Platform

  • On-demand resource allocation and dynamic load-balancing support on Cisco Container Platform

  • End-to-end high availability on the hardware and software stack

  • Real-time, distributed, scalable search engine for full-text and structured searches

  • Real-time analytics to identify and resolve problems faster

In addition:

  • ECK helps streamline Elastic Stack on Kubernetes and provides a software-as-a-service (SaaS)–like experience.

  • Cisco HyperFlex systems coupled with Cisco UCS provide cloud-like scalability and reliability.


The intended audience for this document includes sales engineers, field consultants, professional services providers, IT managers, partners, Elasticsearch administrators, and customers who want to deploy ECK on Cisco HyperFlex systems using Cisco Container Platform. External references are provided wherever applicable, but readers of this document are expected to have working knowledge of the Cisco HyperFlex platform, VMware vSphere, Cisco Container Platform, and Elastic Stack.


Platform technology

The Elastic Cloud on Cisco HyperFlex solution is based on Cisco HyperFlex HX Data Platform and Cisco Container Platform technology.

Cisco HyperFlex HX Data Platform

Cisco HyperFlex HX Data Platform, a foundation for Cisco HyperFlex systems, is a purpose-built, high-performing, scale-out file system. The data platform’s innovations redefine scale-out and distributed storage technology, going beyond the boundaries of first-generation hyperconverged infrastructure, and offer a wide range of enterprise-class data management services (Figure 1).


Figure 1.            

Cisco HyperFlex HX Data Platform

The data platform is designed with a modular architecture so that it can easily adapt to support a broadening range of application environments and hardware platforms.

The core file system, cluster service, data service, and system service are designed to adapt to a rapidly evolving hardware ecosystem. This enables Cisco to be among the first to bring new server, storage, and networking capabilities to Cisco HyperFlex systems as they are developed. Each hypervisor or container environment is supported by a gateway and manager module that supports the higher layers of software with storage access suited to its needs. The data platform provides a REST API so that a wide range of management tools can interface with the data platform (Figure 2).


Figure 2.            

Cisco HyperFlex HX Data Platform is designed to adapt to a rapidly evolving hardware ecosystem

Cisco HyperFlex systems combine these features:

  • Software-defined computing in the form of nodes based on Cisco UCS servers

  • Software-defined storage with the powerful Cisco HyperFlex HX Data Platform software

  • Software-defined networking with Cisco Application Centric Infrastructure (Cisco ACI®)

  • Cloud-based management with multicloud container support from Cisco Container Platform for Cisco HyperFlex systems

For more information about Cisco HyperFlex HX Data Platform and how it works, refer to https://www.cisco.com/c/dam/en/us/products/collateral/hyperconverged-infrastructure/hyperflex-hx-series/white-paper-c11-736814.pdf.

Cisco Container Platform

Cisco Container Platform is a fully curated, lightweight container management platform for production-class environments, powered by Kubernetes and delivered with Cisco enterprise-class support. It reduces the complexity of configuring, deploying, securing, scaling, and managing containers through automation coupled with Cisco’s best practices for security and networking (Figure 3).


Figure 3.            

Cisco Container Platform

Cisco Container Platform is built with an open architecture using open-source components. It works across both on-premises and public cloud environments. And because it is optimized with Cisco HyperFlex systems, this preconfigured, integrated solution can be set up in minutes.

Figure 4 shows the architecture of Cisco Container Platform.


Figure 4.            

Cisco Container Platform architecture


At the bottom of the stack is level 1, the networking layer, which can consist of Cisco Nexus® switches, Cisco Application Policy Infrastructure Controllers (APICs), and fabric interconnects.

Note: Cisco Container Platform can run on top of a Cisco ACI networking fabric as well as on a networking fabric other than Cisco ACI that performs standard L3 switching.

Level 2 is the computing layer, which consists of Cisco HyperFlex, Cisco UCS, or third-party servers that provide virtualized computing resources through VMware and distributed storage resources.

Level 3 is the hypervisor layer, which is implemented using VMware ESXi.

Level 4 consists of the Cisco Container Platform control plane and data plane (or tenant clusters). In Figure 4, you can see that Cisco Container Platform has two types of clusters: control-plane clusters and tenant clusters. A tenant cluster can be a single-master or multiple-master Kubernetes cluster for your container applications. These tenant clusters are preconfigured to support persistent volumes using the VMware vSphere Cloud Provider and Container Storage Interface (CSI) plug-in. Tenant clusters can be configured with multiple masters (master high availability) to maintain a steady state across all the worker nodes in the event of a single master failure.

In addition to being easy to deploy, Cisco Container Platform provides the following benefits:

  • Reduced risk: Cisco Container Platform is a full-stack solution built and tested on Cisco HyperFlex systems. It provides automated updates and enterprise-class support for the entire stack. It is built to handle production workloads.

  • Greater efficiency: Cisco Container Platform provides your IT operations team a ready-to-use, preconfigured solution that automates repetitive tasks and removes pressure on staff to update people, processes, and skill sets in house. It provides developers with the flexibility and speed they need to be innovative and to respond to market requirements quickly.

  • Remarkable flexibility: Cisco Container Platform gives you a choice of deployment on virtual environments: from hyperconverged infrastructure to vSphere ESXi clusters. And because it is based on open-source components, you are free from vendor lock-in.


Elastic Cloud on Kubernetes

Elastic Cloud on Kubernetes (ECK) is the official operator by Elastic for automating the deployment, provisioning, management, and orchestration of Elasticsearch, Kibana, APM Server, Beats, and Enterprise Search on Kubernetes.

Elastic Stack

Elastic Stack is a collection of three open-source projects: Elasticsearch, Logstash, and Kibana. These three technologies work well with each other even though they are separate projects (Figure 5).


Figure 5.            

Elastic Stack

Elastic Stack is a comprehensive log-analysis solution that helps organizations perform deep searches on, analyze, and visualize the logs generated from different sources. For more information about Elastic Stack, refer to https://www.elastic.co/elastic-stack.

ECK operator model

ECK uses the basic Kubernetes orchestration capabilities and supports easy management of Elasticsearch and Kibana instances on Kubernetes at scale (Figure 6).


Figure 6.            

ECK operator model

ECK, built on the Kubernetes operator pattern, is designed to automate daily operational tasks: managing multiple clusters, handling upgrades, scaling cluster capacity, adjusting cluster configuration, and dynamically scaling the storage behind running Elasticsearch and Kibana instances. ECK is designed to orchestrate Elasticsearch on Kubernetes and provide a SaaS-like experience for Elastic products and solutions on Kubernetes.
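The operator pattern can be illustrated with a minimal Elasticsearch custom resource. The manifest below is a sketch, assuming the ECK 1.x custom resource schema; the resource name and node count are illustrative, not taken from this solution:

```yaml
# Minimal Elasticsearch custom resource managed by the ECK operator.
# Applying this manifest causes the operator to create and continuously
# reconcile a three-node Elasticsearch 7.5.0 cluster.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart        # illustrative name
spec:
  version: 7.5.0
  nodeSets:
  - name: default
    count: 3              # the operator reconciles toward this node count
    config:
      node.store.allow_mmap: false
```

Editing `count` or `version` and reapplying the manifest is enough for the operator to scale or upgrade the cluster, which is what makes daily operations largely declarative.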

Figure 7 shows the simple steps involved in successfully deploying ECK in a Kubernetes cluster.


Figure 7.            

Steps for deploying ECK in a Kubernetes cluster

Some of the advantages of ECK include:

  • Multiple-cluster deployment and management: Scale Elasticsearch and Kibana instances with ECK Operator.

  • Automatic security configuration: Automatically configure features such as native authentication, Transport Layer Security (TLS) encryption, and role-based access control (RBAC). Add Security Assertion Markup Language (SAML), OpenID, Kerberos, and custom certificates to meet your business needs.

  • Snapshot scheduling and keystore support: Schedule backups to the cloud with secure keystore configurations.

  • Local storage, global search: Deploy clusters in a single Kubernetes environment and connect them to clusters running in another Kubernetes environment. Use cross-cluster search or cross-cluster replication to optimize data migration and search operations.

  • Hot-warm-cold patterns: Natively deploy common Elasticsearch architectures for logging, metrics, and other time-series use cases.

  • Simple and fast updates: Update terabyte-scale clusters in minutes with continuous updates and built-in StatefulSet objects.

For more information about ECK, refer to https://www.elastic.co/es/elastic-cloud-kubernetes

Solution design

This section provides an overview of the design of the Elastic Cloud on Cisco HyperFlex solution with Kubernetes and Cisco Container Platform.

ECK on Cisco HyperFlex systems with Cisco Container Platform provides a cloud-like experience for deploying Elasticsearch. Cisco Container Platform deploys Kubernetes tenant clusters from an OVF template on the Cisco HyperFlex platform, abstracting the complexities of Kubernetes cluster deployment from the user.

The solution design described in this document uses a four-node all-flash Cisco HyperFlex cluster with 2nd Gen Intel® Xeon® Scalable CPUs to support a wide range of enterprise workloads. Additionally, Cisco HyperFlex features such as deduplication and inline compression help ensure hyperefficient resource utilization. With data deduplication, data is not only deduplicated in the persistence tier to save space but is also deduplicated when read into the caching tier. This approach allows a larger working set to be stored in the caching tier, accelerating read performance. And inline compression further reduces storage requirements, lowering costs.

To facilitate easy Kubernetes installation and maintenance, this solution uses Cisco Container Platform. With Cisco Container Platform, enterprise applications running container workloads can be deployed seamlessly both on-premises and in the public cloud. The Cisco-supported container platform makes this solution even more attractive when combined with Cisco HyperFlex HX Data Platform, which is designed to support container and cloud-native workloads.

This solution deployed a single master node and four worker nodes as a Cisco Container Platform tenant cluster for the Elasticsearch application. The worker nodes were further categorized as one Elastic master node and three Elastic worker nodes using the zone-awareness attribute. By labeling nodes appropriately for zone awareness, Elasticsearch distributes a primary shard and its replica shards across different nodes, reducing the risk of losing all copies of a shard if one node (virtual machine) or host fails. Note that Cisco HyperFlex systems by design provide data availability by maintaining replicated copies across multiple physical hosts. However, customers may prefer to keep additional shard replicas at the Elasticsearch level to improve scalability for search operations.

Data high availability from the infrastructure to the application layer makes the solution more compelling for deploying enterprise applications that run critical workloads.

The solution described here deploys Elastic Stack using ECK Operator on a Cisco Container Platform tenant cluster. For more information about ECK Operator and Elastic Stack deployment, refer to https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html.

Physical infrastructure

The solution described here was validated with a four-node Cisco HyperFlex cluster, with Cisco HyperFlex HX240C M5 all-flash servers. This Cisco HyperFlex cluster connects to redundant network fabrics provided by a pair of Cisco UCS fabric interconnects. In a standard Cisco HyperFlex cluster, the servers are in a single Cisco UCS domain, and each server is dual-homed to the two fabric interconnects that make up that domain. Cisco HyperFlex servers are equipped with Cisco virtual interface cards (VICs) to connect to the Cisco UCS fabric. Each server uses two ports to connect to the fabric interconnects (FI-A and FI-B) in the Cisco UCS domain (Figure 8).


Figure 8.            

Four-node Cisco HyperFlex cluster

Logical solution design

Figure 9 shows the logical solution design:

  • VMware vCenter: Enables the virtualization administrator to manage and monitor the Cisco HyperFlex physical infrastructure. With the Cisco HyperFlex HTML plug-in for VMware vCenter, you can cross-launch Cisco HyperFlex Connect from the vSphere Client user interface, and you can perform management actions in the Cisco HyperFlex Connect user interface.

  • Rally server: Rally is Elastic’s own benchmarking tool for Elasticsearch. Rally is discussed further in the Testing and validation section.

Note:      In the setup here, vCenter and the Rally server are configured outside the Cisco HyperFlex cluster; however, they are in the same Cisco UCS domain (connected to the same fabric interconnects).


Figure 9.            

Logical solution design

Virtual machines (represented as boxes outlined in blue) are within the Cisco HyperFlex cluster (box outlined in green).

  • Proxy and DNS (optional): You can configure the Domain Name System (DNS) and proxy server separately for this setup, or you can use the already configured servers for this purpose.

  • Linux virtual machine (for Cisco Container Platform node access): Cisco Container Platform uses passwordless access to node login. A Linux virtual machine must be configured to provide passwordless Secure Shell (SSH) key access to the Cisco Container Platform tenant cluster nodes.

  • Cisco HyperFlex data store: The Cisco HyperFlex data store provides the required persistence storage to the application pods through the vSphere volume provider plug-in.

  • Cisco Container Platform nodes: The Cisco Container Platform tenant cluster with the Cisco HyperFlex system in Figure 9 is shown with a blue dotted outline. The virtual machines within Cisco Container Platform are the tenant cluster nodes. Cisco Container Platform deploys control-plane clusters and tenant clusters separately. Applications are deployed on the tenant cluster. In Figure 9 you can see that ECK is deployed on the tenant cluster nodes. Each node runs ECK pods and other Cisco Container Platform related pods as well (Figure 10).


Figure 10.         

Cisco Container Platform nodes

In this solution, ECK virtual machines in the tenant cluster are set to be zone aware as a best practice.

For more information about availability zone awareness, refer to https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-advanced-node-scheduling.html.


Hardware and software versions

Tables 1 and 2 list the hardware and software configurations used for this solution. The actual configuration for a specific use case depends on sizing results, taking into account data growth and response-time requirements. The configuration listed here should not be considered a standard sizing template for all workloads; take care to evaluate and validate the configuration against the real workload.

Table 1.        Hardware configuration

Servers: 4 x Cisco HyperFlex HX240c M5SX

CPU (per server): 2 x Intel Xeon Gold 6240 CPUs with 18 cores each

Memory (per server): 24 x 32 GB = 768 GB

Disk controller: Cisco 12-Gbps modular SAS controller

Solid-state disks (SSDs): 1 x 240-GB 6-Gbps SATA SSD for housekeeping tasks; 6 x 960-GB 6-Gbps SATA SSDs for capacity tier

Non-Volatile Memory Express (NVMe) disk: 1 x 375-GB NVMe for caching tier

Network adapter: 1 x Cisco UCS VIC 1387 modular LAN on motherboard (mLOM)

Boot device: 1 x 240-GB M.2 form-factor 6-Gbps SATA SSD

Fabric interconnects: 2 x Cisco UCS 6332-16UP Fabric Interconnects

Switches: 2 x Cisco Nexus 9300 platform switches

Table 2.        Software versions

Hypervisor: VMware ESXi 6.7.0 U3-15160138 (Cisco custom image for ESXi 6.7, downloaded from the Cisco.com Downloads portal)

Management server: VMware vCenter Server for Windows or vCenter Server Appliance Release 6.7

Cisco HyperFlex HX Data Platform: Software Release 4.0.2b

Cisco UCS firmware: Cisco UCS Release 4.0(4i)

Cisco Container Platform: Release 6.1.1

ECK operator: Release 1.2.1

Elasticsearch: Release 7.5.0

Kibana: Release 7.5.0

Deploying ECK on Cisco HyperFlex using Cisco Container Platform

This section presents the high-level steps for deploying ECK on Cisco HyperFlex using Cisco Container Platform for Kubernetes. Figure 11 summarizes the steps.


Figure 11.         

High-level deployment steps

For this solution, follow these steps:

1.     Deploy a four-node Cisco HyperFlex cluster with Cisco HyperFlex HX-Series M5 All Flash servers. After the Cisco HyperFlex cluster is up and running, it appears in your vCenter inventory.


2.     Deploy Cisco Container Platform on the Cisco HyperFlex cluster. Cisco Container Platform installs a control-plane cluster and provides the user interface used to create the tenant cluster on which the application pods are deployed. You can allocate resources to your tenant cluster nodes based on the application you choose to deploy. For running ECK, a single-master cluster was created, shown under master-group (eck-master), with four worker nodes. One of the worker nodes becomes the Elastic master (eck-es-mstr), and the other worker nodes take on the Elasticsearch workload (eck-node-group). After the tenant cluster is deployed, the Node Pools page shows the nodes.


Note:      For more information about Cisco Container Platform installation, refer to the Cisco Container Platform installation guide at https://www.cisco.com/c/en/us/td/docs/net_mgmt/cisco_container_platform/6-1/Installation_Guide/ccp-installation-guide-6-1-0.html.

3.     Download the Kubeconfig YAML file to your Linux server on which you want to run kubectl commands.


4.     Deploy the ECK operator using the link https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html. The operator is deployed in the elastic-system namespace.


5.     Label the ECK nodes for zone awareness as follows:

kubectl label node eck-es-mstr-e9a6df6075 failure-domain.beta.kubernetes.io/zone=zone-0

kubectl label node eck-node-group-02d61f1b5b failure-domain.beta.kubernetes.io/zone=zone-1

kubectl label node eck-node-group-532cc20821 failure-domain.beta.kubernetes.io/zone=zone-2

kubectl label node eck-node-group-8bdaf7ebfd failure-domain.beta.kubernetes.io/zone=zone-3


Note:      By combining Elasticsearch shard allocation awareness with Kubernetes node affinity, you can set up an availability zone-aware Elasticsearch cluster. With the cluster.routing.allocation.awareness.attributes setting, shards are allocated only to nodes that have values set for the specified awareness attributes.

For more information about zone settings, refer to https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-advanced-node-scheduling.html#k8s-availability-zone-awareness.

6.     Create the Elasticsearch YAML file with zone awareness and specify the required CPU, memory, and heap sizes.

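As a sketch of what such a file can look like, the manifest below combines one zone-aware nodeSet with the worker sizing from Table 4. It is an illustration assuming the ECK 1.x schema, not the exact manifest used in this solution; the cluster name is hypothetical, and the zone label matches the labels applied in step 5. In practice you would define one nodeSet per zone (zone-0 through zone-3).

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: eck                  # illustrative cluster name
spec:
  version: 7.5.0
  nodeSets:
  - name: zone-1             # one nodeSet per availability zone
    count: 1
    config:
      node.attr.zone: zone-1
      cluster.routing.allocation.awareness.attributes: zone
    podTemplate:
      spec:
        affinity:
          nodeAffinity:      # pin this nodeSet to the labeled Kubernetes node
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: failure-domain.beta.kubernetes.io/zone
                  operator: In
                  values: ["zone-1"]   # label applied in step 5
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms16g -Xmx16g"   # worker heap size from Table 4
          resources:
            requests:
              cpu: 10                  # worker CPU from Table 4
              memory: 64Gi             # worker memory from Table 4
```

Setting `node.attr.zone` together with `cluster.routing.allocation.awareness.attributes: zone` is what tells Elasticsearch to place a primary shard and its replicas in different zones.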

7.     Provide the storage-class name in the YAML file. The file here uses the vSphere volume provider Container Storage Interface (CSI) driver, which comes as a standard (default) storage class in Cisco Container Platform.


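A hedged sketch of the storage section of a nodeSet (this fragment sits under `spec:` of the Elasticsearch resource): the storage-class name `standard` is an assumption here; use the class name reported by `kubectl get storageclass` in your tenant cluster.

```yaml
  nodeSets:
  - name: zone-1
    count: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data   # volume claim name ECK expects for data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 300Gi         # worker storage from Table 4
        storageClassName: standard # assumed default class; verify in your cluster
```

The claim is provisioned by the vSphere volume provider, so each Elasticsearch pod receives a persistent volume backed by the Cisco HyperFlex data store.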

8.      Create the Kibana YAML file with the required CPU, memory, and heap sizes.

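A minimal sketch of such a Kibana manifest, assuming the ECK 1.x schema; the names are illustrative, and `elasticsearchRef` must match the name of the Elasticsearch resource created in step 6:

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: eck                 # illustrative name
spec:
  version: 7.5.0            # must match the Elasticsearch version
  count: 1
  elasticsearchRef:
    name: eck               # name of the Elasticsearch custom resource
  podTemplate:
    spec:
      containers:
      - name: kibana
        resources:
          requests:
            cpu: 1
            memory: 2Gi     # illustrative sizing; adjust per requirements
```

With `elasticsearchRef` set, ECK wires Kibana to the cluster automatically, including the TLS certificates and credentials.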

9.     Apply the Elasticsearch and Kibana YAML files. After the settings have been applied, you will see the Elastic pods and Kibana pod.


10.   On any Linux machine, install esrally. Note that esrally installation requires specific versions of Python, Git, and Java. For information about installing esrally, refer to https://esrally.readthedocs.io/en/stable/install.html.

Testing and validation

This section provides details about the test tool and methodology used to validate the solution described in this document. It also discusses the test results. Tables 3 and 4 list the resource allocation at the tenant virtual machine level and the pod level.

Table 3.        Cisco Container Platform tenant virtual machine resource allocation

Cisco Container Platform workers:

CPU for worker nodes (Elastic worker): 3 x 12 virtual CPUs (vCPUs)

Memory for worker nodes (Elastic worker): 3 x 96 GB

CPU for worker node (Elastic master): 1 x 8 vCPUs

Memory for worker node (Elastic master): 1 x 16 GB

Cisco Container Platform master:

CPU for master node: 1 x 8 vCPUs

Memory for master node: 1 x 16 GB

Table 4.        Pod resource allocation

CPU (Elastic master pod): 4 CPUs

Memory (Elastic master pod): 8 GB

Heap size (Elastic master pod): 4 GB

Storage (Elastic master pod): 100 GB

CPU (Elastic worker pods): 10 CPUs

Memory (Elastic worker pods): 64 GB

Heap size (Elastic worker pods): 16 GB

Storage (Elastic worker pods): 300 GB

Rally tool for testing Elasticsearch on Cisco HyperFlex systems

The testing performed for this document used Rally, Elastic’s official benchmarking tool, to stress the Elastic nodes. Rally is a macro-benchmarking framework for Elasticsearch. Rally can also act as a load generator and can spin up and tear down Elasticsearch clusters.

For more information about how to install and configure Rally, refer to the Rally documentation at https://esrally.readthedocs.io/en/stable/install.html.

Rally provides several default tracks. You can view the default track list by running the esrally list tracks command (Figure 12).


Figure 12.         

Default Rally tracks

For this document, the eventdata track and http_logs track were used. For the eventdata track, a variety of challenges are available. To view the available challenges, refer to https://github.com/elastic/rally-eventdata-track.

Tracks and challenges

The validation process used the eventdata and http_logs tracks.

Eventdata track

The eventdata track is a repository containing a Rally track for simulating event-based data use cases. The track supports bulk indexing of autogenerated events as well as simulated Kibana queries and a range of management operations that make the track self-contained.

This track can be used as is, or it can be used to create more complex and realistic simulations by using custom runners and tweaking custom parameters in this track.

The elasticlogs-1bn-load challenge is used here. This challenge indexes 1 billion events into a number of indices of two primary shards each, and it results in the generation of about 200 GB of indices on disk.

For more information about the eventdata track, refer to https://github.com/elastic/rally-eventdata-track.

To run the eventdata track with the elasticlogs-1bn-load challenge on the Rally server, the following command was run:

esrally --track=eventdata --track-repository=/home/esrally/.rally/rally-eventdata-track --target-hosts=<elastic master:port> --pipeline=benchmark-only --challenge=elasticlogs-1bn-load --track-params="bulk_indexing_clients:40"


In the test setup, the target host is as marked in Figure 13.


Figure 13.         

Target host of eventdata track with the elasticlogs-1bn-load challenge

Http_logs track

The http_logs track contains real HTTP server log data, based on Web server logs from the 1998 Football World Cup, and demonstrates how Elasticsearch indexes such data.

For more information about http_logs, refer to https://github.com/elastic/rally-tracks/tree/master/http_logs.

To run the http_logs track on the Rally server, the following command was run:

esrally --track=http_logs --target-hosts=<elastic master:port>  --pipeline=benchmark-only --track-params="bulk_indexing_clients:16"



Figure 14 shows the performance data on Rally for esrally tracks.


Figure 14.         

Performance data on Rally for esrally tracks

The graph in Figure 14 clearly shows that the eventdata track with the elasticlogs-1bn-load challenge achieved close to 100,000 documents per second. This track ran for about 3 hours, and the latency recorded at the 90th percentile was less than 600 milliseconds (ms). The http_logs track achieved close to 450,000 documents per second. This track ran for about 1.5 hours, and the latency recorded at the 90th percentile was less than 200 ms.
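These throughput and duration figures can be cross-checked against the size of the challenge. A quick sanity check, using only the numbers quoted above:

```python
# Cross-check the eventdata run: ~100,000 docs/s sustained for ~3 hours
# should account for roughly the challenge's 1 billion events.
docs_per_sec = 100_000
run_hours = 3

total_docs = docs_per_sec * run_hours * 3600
print(f"{total_docs:,} documents")  # 1,080,000,000
```

The result of about 1.08 billion documents is consistent with the 1-billion-event target of the elasticlogs-1bn-load challenge, which supports the sustained-throughput figure reported for the run.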

Figure 15 shows the bandwidth achieved on the Cisco HyperFlex system for esrally tracks.


Figure 15.         

Bandwidth achieved on the Cisco HyperFlex system for esrally tracks

The graph in Figure 15 shows that the eventdata track with the elasticlogs-1bn-load challenge achieved close to 430 MBps, and the http_logs track achieved close to 200 MBps on the Cisco HyperFlex system. Eventdata with the elasticlogs-1bn-load challenge is more write intensive; http_logs is less write intensive because it performs less indexing and spends part of the run merging segments in the document corpus.
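Dividing the observed storage bandwidth by the indexing rate gives the average write cost per document, which helps explain why eventdata is the more write-intensive track. A sketch using the figures quoted above (roughly 430 MBps at roughly 100,000 docs/s); the result is an estimate, not a measured value:

```python
# Average bytes written to the HyperFlex datastore per indexed document
# during the eventdata run (~430 MBps at ~100,000 docs/s, per the graphs above).
bandwidth_bytes_per_sec = 430 * 1024**2   # ~430 MBps
docs_per_sec = 100_000

write_cost = bandwidth_bytes_per_sec / docs_per_sec
print(f"~{write_cost / 1024:.1f} KB written per document")  # ~4.4 KB
```

Note that roughly 4.4 KB written per document is far more than the eventual on-disk footprint per event implied by the challenge (about 200 GB of indices for 1 billion events); the gap plausibly reflects replica writes, the translog, and segment-merge rewrites.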

Performance monitoring on Kibana and Grafana for both tracks

For the elasticlogs-1bn-load challenge, Kibana can be used to monitor Elasticsearch resource use. For the eventdata track, the CPU utilization shown in the Grafana graph indicates that enough CPU is available to handle spikes during CPU-intensive phases of the workload. Utilization is consistent throughout the run and does not exceed 65 percent (Figure 16).


Figure 16.         

Grafana CPU utilization monitoring: eventdata track

From Kibana you can see resource utilization at the pod level. Figure 17 shows that the solution reached up to 75 percent of CPU utilization, and more room is available to handle workload spikes.


Figure 17.         

Kibana CPU utilization monitoring: eventdata track

The indexing-thread graphs show that there was no write-queue buildup and there were no rejections during the run (Figure 18). The Java heap graph shows that heap usage never reached its maximum at any point during the run (Figure 19).


Figure 18.         

Indexing threads monitoring: eventdata track


Figure 19.         

Java heap monitoring: eventdata track

For the http_logs track, the CPU utilization shown in the Grafana graph indicates that enough CPU is available to handle spikes while processing the workload. CPU activity is intense at the beginning of the run, where document indexing occurs, and falls during the merging of segments in the document corpus (Figure 20).


Figure 20.         

Grafana CPU utilization monitoring: http_logs track

From Kibana you can see resource utilization at the pod level. Figure 21 shows the CPU utilization graph for one of the Elastic worker pods: the solution reached close to 80 percent CPU utilization during bulk document indexing.


Figure 21.         

Kibana CPU utilization monitoring: http_logs track

The indexing-thread graphs show that there was no write-queue buildup and there were no rejections during the run (Figure 22). The Java heap graph shows that heap usage never reached its maximum at any point during the run (Figure 23).


Figure 22.         

Indexing threads monitoring: http_logs track


Figure 23.         

Java heap monitoring: http_logs track


Elastic Stack needs infrastructure that supports performance, resiliency, and scale, and Cisco HyperFlex systems are well suited for such requirements. This solution design with Cisco HyperFlex systems using Cisco Container Platform for Kubernetes provides the flexibility and scalability needed to run enterprise applications such as Elasticsearch. With capabilities to access hybrid cloud resources on demand and use Elasticsearch to search and extract real-time insights from structured and unstructured data, you can accelerate IT and business innovation.

The balanced, distributed data access architecture of Cisco HyperFlex systems supports models that are easy to scale out and scale up, reducing the hardware footprint and increasing data center efficiency. Cisco Container Platform incorporates ubiquitous monitoring and policy-based security and provides essential services, including load balancing. The platform can provide applications with extensions into network management, application performance monitoring (APM), analytics, and logging.

The scalable computing and storage capabilities of Cisco HyperFlex systems provide cloud-like flexibility to cloud-scale enterprise applications such as Elasticsearch, and Cisco Container Platform provides a comprehensive stack for creating and managing Kubernetes clusters. In addition, the Cisco Intersight platform provides visibility into all your Cisco UCS and Cisco HyperFlex deployments from one platform—anywhere and anytime— to help you proactively manage and troubleshoot your environment and manage firmware and server configurations.

Learn more