Cisco seed unit for SAP sizing and configuration guidelines
Installing Cisco HyperFlex servers
Installing the SLES for SAP Applications 12 SP4 operating system
Installing the RHEL for SAP Applications 7.6 operating system
Intel Optane DC PMM configuration for SAP HANA
SAP introduced HANA 1.0 in June 2011 to help customers accelerate their analytic and transactional database activities. Moving data to an in-memory database significantly improves the database application performance and provides a platform on which customers can build their next-generation data centers. SAP’s investments in the transactional database and application tiers were further accelerated by the introduction of SAP S/4 HANA in May 2016 and SAP HANA 2.0 in 2017.
For many organizations, migrating to SAP S/4 HANA and SAP HANA 2.0 is an expensive proposition because large data sets require terabytes of memory. Housing large quantities of memory in dedicated multiprocessor servers creates expensive systems with new operational requirements.
Intel introduced the Intel® Optane™ DC persistent memory in 2019 to help customers reduce the cost of acquisition while increasing the capacity of their in-memory database servers and reducing the time needed to start SAP HANA with large data sets. This new technology effectively reduces a significant cost barrier for customers as they are planning their migration efforts to SAP S/4 HANA and SAP HANA 2.0.
Cisco has launched a program with Intel to help customers achieve the benefits of Intel Optane DC Persistent Memory Modules (PMMs). Customers can purchase an initial SAP HANA 2.0 server at a significantly reduced price and use the cost savings for data migration services. Cisco is also offering an easy-to-order SAP S/4 HANA migration hardware bundle to simplify the acquisition effort for SAP S/4 HANA landscape migration.
This document describes how this initial SAP HANA 2.0 server and the SAP S/4 HANA migration bundle can be acquired and installed in the next-generation data center.
This section introduces the solution discussed in this document.
Introduction
SAP HANA is SAP’s implementation of in-memory database (IMDB) technology. The SAP HANA database takes advantage of the low-cost main memory (RAM), faster access, and data-processing capabilities of multicore processors to provide better performance for analytical and transactional applications. SAP HANA offers a multiple-engine, query-processing environment that supports relational data (with both row- and column-oriented physical representations in a hybrid engine) as well as graph and text processing for semistructured and unstructured data management within the same system. SAP HANA combines software components from SAP optimized for certified hardware.
SAP HANA Tailored Datacenter Integration (TDI) offers a more open and flexible way to integrate SAP HANA into the data center by reusing existing enterprise storage hardware, thereby reducing hardware costs. With the introduction of SAP HANA TDI for shared infrastructure, the Cisco UCS® Integrated Infrastructure solution provides the advantages of an integrated computing, storage, and network stack and the programmability of Cisco UCS. SAP HANA TDI enables organizations to run multiple SAP HANA production systems on a shared infrastructure. It also enables customers to run SAP application servers and an SAP HANA database hosted on the same infrastructure.
The Cisco UCS C480 M5 Rack Server supports a scale-up solution with prevalidated, ready-to-deploy infrastructure. Solution configuration and validation requires less time and is less complex than with a traditional data center deployment. The reference architecture discussed in this document demonstrates the resiliency and ease of deployment of an SAP HANA solution.
SAP landscapes frequently are deployed in virtualization environments. In recent years, SAP has been encouraging its customers to migrate to SAP’s own database platform of the future: SAP HANA. In the past, SAP HANA databases could be deployed on virtual servers or on physical machines; they are now also certified to run on hyperconverged infrastructure.
With the launch of the Cisco HyperFlex™ system, Cisco offers a low-cost, easy-to-deploy, high-performance hyperconverged virtual server platform that is an excellent solution for both SAP HANA databases and SAP landscapes. You can use this Cisco HyperFlex solution to deploy SAP application servers, fully virtualized SAP HANA servers, and other non-HANA virtual servers on the same hyperconverged infrastructure.
The Intel Optane DC Persistent Memory Module, or PMM, is a new Intel technology that helps SAP HANA customers accelerate their migration to HANA 2.0 and S/4 HANA. This accelerated adoption rate is propelled by increased SAP HANA server memory capacity, reduced server acquisition costs, and improved restart times.
For more information about SAP HANA, see the SAP help portal: http://help.sap.com/hana/.
Solution benefits
The Cisco HyperFlex for SAP HANA solution offers the following benefits:
● Single hardware platform: The Cisco Unified Computing System™ (Cisco UCS) is the base platform for Cisco HyperFlex systems, which provide a fully contained hyperconverged environment, combining networking, storage, and virtualization resources in a single system. You can deploy additional Cisco UCS servers alongside the Cisco HyperFlex solution in the same Cisco UCS domain to service other workloads.
● Simplified management: A single administrator can manage all aspects of Cisco UCS and the Cisco HyperFlex system through Cisco UCS Manager and the VMware vCenter Web Client, making tasks much easier and faster to complete.
● Rapid deployment: The programmability and ease of use of Cisco UCS Manager allow you to deploy Cisco HyperFlex systems quickly. These features also allow you to rapidly provision additional Cisco UCS servers for other workload requirements.
Customers who have already invested in Cisco® products and technologies have the opportunity to mitigate their risk further by deploying familiar and tested Cisco UCS technology.
Audience
The intended audience for this document includes sales engineers, field consultants, professional services staff, IT managers, partner engineers, and customers deploying the Cisco solution for SAP Applications and SAP HANA. External references are provided wherever applicable, but readers are expected to be familiar with the technology, infrastructure, and database security policies of the customer installation.
Purpose of this document
This document describes the steps required to deploy and configure a Cisco data center solution for SAP applications and SAP HANA. This document showcases one of the variants of Cisco’s solution for SAP applications and SAP HANA. Although readers of this document are expected to have sufficient knowledge to install and configure the products used, configuration details that are important to the deployment of this solution are provided in this document.
Cisco seed unit for SAP sizing and configuration guidelines
To get started, follow the sizing and migration guidelines in this section to configure the seed unit.
Sizing preparation
Prior to undertaking any migration exercise for SAP HANA 2.0, complete a full database sizing process for the SAP HANA server migration. This process will help ensure that the system selected can support the current and future workloads associated with the server database. Refer to SAP Note 1872170 for sizing directions for S/4 and Suite on HANA workloads. Use SAP Note 2296290 for Business Warehouse (BW) on HANA or BW/4 HANA sizing. Refer to SAP Note 2786237 for Intel Optane DC PMM sizing guidance. Note that these sizing exercises review up to 90 days of SAP workload activities and may not include peak utilization activities.
All SAP HANA 2.0 implementations with Intel Optane DC PMM are TDI deployments as defined by SAP.
This document does not provide guidance about how to perform the sizing exercise or interpret the sizing reports. These tasks must be completed by trained services personnel.
SAP HANA behaves slightly differently with Intel Optane DC PMM than with standard DRAM. Standard DRAM configurations reserve half the memory for data tables and half for work space. As the data tables grow, the amount of RAM available for work space necessarily decreases. For this reason, SAP recommends keeping the SAP HANA data tables at about 50 percent of the dynamic memory space over the life of the server; the data tables can grow over time, but they should not exceed 67 percent.
Intel Optane DC PMM is different. Half the server DIMM slots are populated with dynamic memory, and half with Intel Optane DC PMM. The entire capacity of the DC PMM can be filled with data tables because no space needs to be reserved for work space memory. This difference has two important consequences.
First, a 6-TB Intel Optane DC PMM configuration manages the same amount of data as a standard 6-TB SAP HANA server that uses only dynamic memory. However, the Intel Optane DC PMM configuration is significantly less expensive, reducing the total procurement cost of the server.
Second, a 9-TB Intel Optane DC PMM configuration can manage twice the data volume because the entire DC PMM capacity can be populated with data. Further, a 9-TB configuration in a 4-socket server is significantly less expensive than a 9-TB configuration in an 8-socket server with only dynamic memory.
Ultimately, the decision about which server to choose is governed by the sizing exercise results. You can use this information to decide which server size best meets the needs of the business.
S/4 HANA migration offering
Cisco is offering four preconfigured seed units to start the S/4 HANA migration: two Cisco UCS C480 M5 Rack Servers and two Cisco UCS B480 M5 Blade Servers. Each server format can be configured as a 6-TB system or a 9-TB system. The 6-TB system includes 3 TB of dynamic RAM (128-GB DIMMs) and 3 TB of Intel Optane DC persistent memory (128-GB modules). The 9-TB system includes 3 TB of dynamic RAM (128-GB DIMMs) and 6 TB of Intel Optane DC persistent memory (256-GB modules). Table 1 lists server options and orderable parts.
Table 1. Cisco seed unit ordering table
Cisco orderable part number | Product description
UCS-SAP-C480M5-6TS | 6T C480M5 for SAP HANA 2.0
UCS-SAP-C480M5-9TS | 9T C480M5 for SAP HANA 2.0
UCS-SAP-B480M5-6TS | 6T B480M5 for SAP HANA 2.0
UCS-SAP-B480M5-9TS | 9T B480M5 for SAP HANA 2.0
Solid-state disks (SSDs) can be added to the Cisco UCS C480 server if local storage is required for persistent storage. If no internal storage is configured with the server, the customer must provide SAP HANA certified enterprise storage and network connectivity. The C480 M5 server can be managed as a standalone server, or it can be brought into a Cisco UCS managed server domain by connecting it to an existing redundant pair of fabric interconnects.
This section briefly describes the components of the Cisco UCS C480 M5 Rack Server with DC PMM memory and the Cisco HyperFlex system.
Cisco UCS C480 M5 Rack Server with Intel Optane DC PMM
Cisco UCS M5 servers with second-generation Intel® Xeon® Scalable processors and Intel Optane DC persistent memory, combined with DRAM, revolutionize the SAP HANA landscape by helping organizations achieve lower overall total cost of ownership (TCO), ensure business continuity, and increase the memory capacities of their SAP HANA deployments. Intel Optane DC persistent memory can transform the traditional SAP HANA data-tier infrastructure and revolutionize data processing and storage. Together, these technologies give organizations faster access to more data than ever before and provide better performance for advanced data-processing technologies.
The Cisco UCS Solution for SAP HANA uses the Cisco UCS C480 M5 Rack Server. Tables 2, 3, and 4 summarize the server specifications and show proposed disk configurations for the SAP HANA use case.
Table 2. Overview of Cisco UCS C480 M5 Rack Server configuration with Intel Optane DC PMM
CPU specifications | Intel Xeon Platinum 8276L/8280L processor: Quantity 4
Possible memory configurations | 32-GB DDR4: Quantity 24 (768 GB); 64-GB DDR4: Quantity 24 (1.5 TB); 128-GB DDR4: Quantity 24 (3 TB)
Possible DC PMM memory configurations | 128-GB DC PMM: Quantity 24 (3 TB); 256-GB DC PMM: Quantity 24 (6 TB); 512-GB DC PMM: Quantity 24 (12 TB)
Internal drives | 3.8-TB SSD: Quantity 8
BIOS | Release C480M5.4.0.4b.0.0407190307
Cisco Integrated Management Controller (IMC) firmware | Release 4.0(4d) or later
LSI MegaRAID controller | Cisco 12-Gbps SAS modular RAID controller
Network card | Cisco UCS Virtual Interface Card (VIC) 1385: Quantity 1; for 10-Gbps connectivity: onboard Intel 1 Gigabit Ethernet controller: Quantity 2 and onboard Intel 10BASE-T Ethernet controller: Quantity 2
Power supplies | Redundant power supplies: Quantity 4
Table 3. Cisco UCS C480 M5 proposed disk layout
Disk | Disk type | Drive group | RAID level | Virtual drive
Slot (1 through 8) | SSD | DG0 | 5 | VD0
Table 4. Cisco UCS C480 M5 proposed disk configuration
Drives used | RAID type | Used for | File system
8 x 3.8-TB SSD | RAID 5 | Operating system | Ext3
8 x 3.8-TB SSD | RAID 5 | Data file system | XFS
8 x 3.8-TB SSD | RAID 5 | Log file system | XFS
8 x 3.8-TB SSD | RAID 5 | SAP HANA shared file system | XFS
Cisco HyperFlex HX-Series system
A Cisco HyperFlex HX-Series system provides a fully contained virtual server platform that combines all three layers of computing, storage, and networking resources with the powerful Cisco HyperFlex HX Data Platform software tool, resulting in a single point of connectivity for simplified management. The Cisco HyperFlex HX-Series system is a modular system designed to scale out through the addition of HX-Series nodes under a single Cisco UCS management domain. The hyperconverged system provides a unified pool of resources based on your workload needs (Figure 1).
Figure 1. Cisco HyperFlex distributed file system
Following are the components of a Cisco HyperFlex system for SAP on hyperconverged infrastructure (HCI):
● One pair of Cisco UCS fabric interconnects:
◦ Cisco UCS 6332 Fabric Interconnect
● Three to 32 Cisco HyperFlex HX-Series rack servers (minimum of four nodes recommended):
◦ Cisco HyperFlex HX240c M5SX All Flash rack servers
● Cisco HyperFlex HX Data Platform software
● VMware vSphere ESXi Hypervisor
● VMware vCenter Server (end-user supplied)
● VMware vCenter Plug-in
● Cisco HyperFlex Connect
● Cisco Intersight™ platform
Physical components
Table 5 lists the physical components for the solution.
Table 5. Cisco HyperFlex system components
Component | Hardware required
Fabric interconnects | 2 Cisco UCS 6332-16UP Fabric Interconnects
Servers | 4 Cisco HyperFlex HX240c M5SX All Flash rack servers
For complete server specifications and more information, please refer to the Cisco HyperFlex HX240c M5SX specification sheet:
Table 6 lists sample hardware component options for one Cisco HyperFlex HX240c M5SX All Flash Node model, shown as an example.
Table 6. Cisco HyperFlex HX240c M5SX All Flash Node sample server configuration
Cisco HyperFlex HX240c M5SX All Flash option | Sample configuration
Processor | Intel Xeon CPU (all models certified for SAP HANA TDI with 8 or more cores and listed in the Cisco HyperFlex compatibility list are supported)
Memory | 12 x 64-GB (768 GB) double-data-rate 4 (DDR4) 2933-MHz 1.2V modules
Disk controller | Cisco 12-Gbps modular SAS host bus adapter (HBA)
Hard drives | System log drive: 1 x 240-GB 2.5-inch Cisco UCS Enterprise Value 6-Gbps SATA SSD; cache drive: 1 x 375-GB 2.5-inch Intel Optane Extreme Performance SSD; capacity (storage) drives: 18 x 960-GB 2.5-inch Enterprise Value 6-Gbps SATA SSDs
Network | Cisco UCS VIC 1387 modular LAN on motherboard (mLOM)
Boot device | 1 x 240-GB M.2 form-factor SATA SSD
Optional | Cisco QSFP to SFP or SFP+ Adapter (QSA) module to convert 40 Gigabit Ethernet Quad Enhanced Small Form-Factor Pluggable (QSFP+) to 10 Gigabit Ethernet SFP+
Software components
Table 7 lists the software components and the versions required for the Cisco HyperFlex system.
Table 7. Software components used for certification
Component | Software required
Hypervisor | VMware ESXi 6.5.0 U3-13932383 (Cisco custom image for ESXi 6.5, available from the Cisco.com Downloads portal)
Management server | VMware vCenter Server for Windows or vCenter Server Appliance 6.5 or later
Cisco HyperFlex HX Data Platform | Cisco HyperFlex HX Data Platform Software 4.0.1b or later
Cisco UCS firmware | Cisco UCS infrastructure software, Cisco UCS B-Series and C-Series bundles, revision 4.0(4d) or later
SAP HANA | SAP HANA 2.0 revision 37 or later
Licensing
Cisco HyperFlex systems must be properly licensed using Cisco Smart Licensing, which is a cloud-based software licensing management solution used to automate many manual, time-consuming, and error-prone licensing tasks.
Beginning with Cisco HyperFlex 3.0, licensing of the system requires one license per node from one of three licensing types: Edge licenses, Standard licenses, or Enterprise licenses. You need to purchase licenses from the appropriate licensing tier according to the type of cluster you install and the features you want to activate and use in the system.
For more information about the Cisco Smart Software Manager satellite server, visit this website: https://www.cisco.com/c/en/us/buy/smart-accounts/software-manager-satellite.html.
Cabling
The fabric interconnects and HX-Series rack-mount servers need to be cabled properly before you begin the installation activities.
Table 8 provides a sample cabling map for installation of a Cisco HyperFlex system with four Cisco HyperFlex converged servers.
Table 8. Sample fabric interconnect cabling map
Device | Port | Connected to | Remote port | Cable type | Length | Note
UCS6332-A | L1 | UCS6332-B | L1 | CAT5 | 1 ft |
UCS6332-A | L2 | UCS6332-B | L2 | CAT5 | 1 ft |
UCS6332-A | mgmt0 | Customer LAN | | | |
UCS6332-A | 1/1 | HX server 1 | mLOM port 1 | Twinax | 3 m | Server 1
UCS6332-A | 1/2 | HX server 2 | mLOM port 1 | Twinax | 3 m | Server 2
UCS6332-A | 1/3 | HX server 3 | mLOM port 1 | Twinax | 3 m | Server 3
UCS6332-A | 1/4 | HX server 4 | mLOM port 1 | Twinax | 3 m | Server 4
UCS6332-A | 1/5 to 1/24 | Not connected | | | |
UCS6332-A | 1/25 | C480 M5 | VIC 1385 port 1 | Twinax | 3 m | Server 5
UCS6332-A | 1/26 | C480 M5 | VIC 1385 port 1 | Twinax | 3 m | Server 6
UCS6332-A | 1/27 to 1/30 | Not connected | | | |
UCS6332-A | 1/31 | Customer LAN | | | | Uplink
UCS6332-A | 1/32 | Customer LAN | | | | Uplink
UCS6332-B | L1 | UCS6332-A | L1 | CAT5 | 1 ft |
UCS6332-B | L2 | UCS6332-A | L2 | CAT5 | 1 ft |
UCS6332-B | mgmt0 | Customer LAN | | | |
UCS6332-B | 1/1 | HX server 1 | mLOM port 2 | Twinax | 3 m | Server 1
UCS6332-B | 1/2 | HX server 2 | mLOM port 2 | Twinax | 3 m | Server 2
UCS6332-B | 1/3 | HX server 3 | mLOM port 2 | Twinax | 3 m | Server 3
UCS6332-B | 1/4 | HX server 4 | mLOM port 2 | Twinax | 3 m | Server 4
UCS6332-B | 1/5 to 1/24 | Not connected | | | |
UCS6332-B | 1/25 | C480 M5 | VIC 1385 port 2 | Twinax | 3 m | Server 5
UCS6332-B | 1/26 | C480 M5 | VIC 1385 port 2 | Twinax | 3 m | Server 6
UCS6332-B | 1/27 to 1/30 | Not connected | | | |
UCS6332-B | 1/31 | Customer LAN | | | | Uplink
UCS6332-B | 1/32 | Customer LAN | | | | Uplink
The Cisco Scale-Up Solution for SAP HANA uses the Cisco UCS M5 generation of Cisco UCS C-Series Rack Servers.
Cisco UCS C480 M5 Rack Server
The Cisco UCS C480 M5 Rack Server (Figure 2) can be deployed as a standalone server or in a Cisco UCS managed environment. When used in combination with Cisco UCS Manager, the C480 M5 brings the power and automation of unified computing to enterprise applications, including Cisco SingleConnect technology, drastically reducing switching and cabling requirements. Cisco UCS Manager uses service profiles, templates, and policy-based management to enable rapid deployment and help ensure deployment consistency. It also enables end-to-end server visibility, management, and control in both virtualized and bare-metal environments.
The C480 M5 is a storage- and I/O-optimized enterprise-class rack server that delivers industry-leading performance for:
● IMDBs
● Big data analytics
● Virtualization and virtual desktop infrastructure (VDI) workloads
● Bare-metal applications
It delivers outstanding levels of expandability and performance for standalone or Cisco UCS managed environments in a 4-rack-unit (4RU) form factor. And because of its modular design, you pay for only what you need.
The C480 M5 offers these capabilities:
● Latest Intel Xeon Scalable processors with up to 28 cores per socket and support for two- or four-processor configurations
● 2933-MHz DDR4 memory and 48 DIMM slots for up to 6 TB of total memory
● 12 PCI Express (PCIe) 3.0 slots
● Six x8 full-height, full-length slots
● Six x16 full-height, full-length slots
● Flexible storage options with support up to 32 small-form-factor (SFF) 2.5-inch, SAS, SATA, and PCIe Non-Volatile Memory Express (NVMe) disk drives
● Cisco 12-Gbps SAS modular RAID controller in a dedicated slot
● Internal Secure Digital (SD) and M.2 boot options
● Dual embedded 10 Gigabit Ethernet LAN-on-motherboard (LOM) ports
Figure 2. Cisco UCS C480 M5 Rack Server
Overview of Intel Optane DC PMM technology
The Cisco Integrated Management Controller (IMC) and Cisco UCS Manager Release 4.0(4) introduce support for Intel Optane DC PMM on Cisco UCS M5 servers based on second-generation Intel Xeon Scalable processors.
Persistent memory modules can be configured on servers managed by Cisco UCS using the IMC or Cisco UCS Manager. For more information, see Cisco UCS: Configuring and Managing Intel Optane Data Center Persistent Memory Modules.
PMMs can be managed using the software utilities installed on the operating system. This approach is known as host-managed mode.
Note: The solution discussed in this document uses PMMs in host-managed mode only.
Goal
A goal specifies the way that PMMs connected to a CPU socket are used. You can configure a PMM to be used in Memory mode, App Direct mode, or Mixed mode. If a PMM is configured as 100 percent Memory mode, it can be used entirely as volatile memory. If it is configured as 0 percent Memory mode, it becomes App Direct mode and can be used entirely as persistent memory. If you configure a PMM as x percent Memory mode, x percent is used as volatile memory, and the remaining percentage is used as persistent memory. For example, if you configure 20 percent Memory mode, 20 percent of the PMM is used as volatile memory, and the remaining 80 percent is used as persistent memory. This mode is called Mixed mode.
App Direct mode is the only mode currently supported by SAP HANA 2.0 SPS 03+. The App Direct mode configures all memory modules connected to a socket as one interleaved set and creates one region for the set.
You can create a goal only at the server level for all sockets together, not for each socket separately. After a goal is created and applied on a server, the regions that are created are visible in the server inventory. A region is a grouping of one or more PMMs that can be divided into one or more namespaces. When a host application uses namespaces, it stores application data in them.
Goal modification is a destructive operation. When a goal is modified, new regions are created based on the modified goal configuration. This modification results in the deletion of all existing regions and namespaces on the associated servers, which leads to the loss of data currently stored in the namespaces.
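In host-managed mode, a goal can also be created from the operating system with the ipmctl utility described later in this document. The following is a minimal sketch, assuming that all PMM capacity is to be provisioned as App Direct for SAP HANA; the exact output varies by ipmctl release and platform.
# Provision 100 percent of the PMM capacity as App Direct (0 percent Memory mode).
# The new goal takes effect only after the next server reboot.
ipmctl create -goal PersistentMemoryType=AppDirect
# After the reboot, confirm that one interleaved region was created per CPU socket.
ipmctl show -region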
Region
A region is a grouping of one or more PMMs that can be divided into one or more namespaces. A region is created based on the persistent memory type selected during goal creation. When you create a goal with the App Direct persistent memory type, one region is created for all the memory modules connected to a socket.
Namespace
A namespace is a partition of a region. When you use the App Direct persistent memory type, you can create namespaces on the region mapped to the socket. A namespace can be created in Raw or Block mode. A namespace created in Raw mode is seen as a raw namespace in the host OS. A namespace created in Block mode is seen as a sector namespace in the host OS.
Deleting a namespace is a destructive operation and results in the loss of data stored in the namespace.
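For illustration only, the following ndctl commands sketch how namespaces could be created from the host in host-managed mode; the region name region0 and the fsdax mode (which exposes a DAX-capable /dev/pmem block device) are examples, and the names will differ on your system.
# List the regions that resulted from the App Direct goal.
ndctl list --regions
# Create a namespace on the first region; repeat for each region (one per socket).
ndctl create-namespace --mode=fsdax --region=region0
# Verify the namespaces and their /dev/pmemX devices.
ndctl list --namespaces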
Direct access
Direct access (DAX) is a mechanism that allows applications to directly access the persistent media from the CPU (through loads and stores), bypassing the traditional I/O stack (page cache and block layer).
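As a hedged example of how DAX is typically consumed, an XFS file system can be created on each resulting /dev/pmem device and mounted with the dax option; the device name and mount point below are placeholders and must match your namespaces and the paths you later configure in SAP HANA.
# Create an XFS file system on the persistent memory device.
# (On newer xfsprogs releases, -m reflink=0 may be required for DAX.)
mkfs.xfs /dev/pmem0
# Mount it with DAX so that loads and stores bypass the page cache and block layer.
mkdir -p /hana/pmem/nvmem0
mount -o dax /dev/pmem0 /hana/pmem/nvmem0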
Managing Intel Optane DC PMM from the host
The software utilities ipmctl and ndctl manage DC PMM from the Linux command line. Use ipmctl for all tasks except namespace management.
ipmctl utility
Use the ipmctl utility to configure and manage Intel Optane DC PMM. It supports functions to:
● Discover PMMs on the platform
● Provision the platform memory configuration
● View and update PMM firmware
● Configure data-at-rest security on PMMs
● Monitor PMM health
● Track performance of PMMs
● Debug and troubleshoot PMMs
For detailed information, see https://github.com/intel/ipmctl.
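For orientation, the functions listed above map to ipmctl commands such as the following examples; the output format depends on the ipmctl release.
# Discover the PMMs installed in the server, their capacities, and firmware levels.
ipmctl show -dimm
ipmctl show -firmware
# Show how capacity is currently provisioned (volatile, App Direct, and reserved).
ipmctl show -memoryresources
# Check module health and sensor readings such as temperature and remaining life.
ipmctl show -sensor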
ndctl utility
Use the ndctl utility library to manage the libnvdimm (nonvolatile memory device) subsystem in the Linux kernel.
For detailed information, see https://github.com/pmem/ndctl.
For detailed information about configuring Intel Optane DC PMM, see https://software.intel.com/en-us/articles/quick-start-guide-configure-intel-optane-dc-persistent-memory-on-linux.
Design of Cisco UCS C480 M5 solution with Intel Optane DC PMM
Use the guidance here to design the Cisco UCS C480 M5 solution with Intel Optane DC PMM for a SAP HANA deployment.
Platform support and operating modes for persistent memory
Intel Optane DC PMM is supported on servers equipped with second-generation Intel Xeon Gold and Platinum processors. Two primary modes are supported: App Direct mode, including Block over App Direct mode, and Memory mode. App Direct mode is the only mode that is currently supported by SAP HANA 2.0 SPS 03+. In App Direct mode, the PMMs appear as byte-addressable memory resources that are controlled by SAP HANA 2.0 SPS 03+. In this mode, the persistent memory space is controlled directly by SAP HANA.
Hardware sizing for SAP HANA 2.0 SPS 03+
The sizing for an SAP HANA deployment can be accomplished using a fixed core-to-memory ratio based on workload type, or by performing a self-assessment using SAP HANA TDI and tools such as SAP Quick Sizer. The web-based SAP Quick Sizer tool can be used for sizing new (greenfield) systems as well as current production systems. The SAP Quick Sizer tool makes sizing recommendations based on the types of workloads that will be running on SAP HANA. Memory, CPU, disk I/O, network loads, and business requirements each play a part in determining the optimal configuration for SAP HANA. Because DRAM is used in addition to Intel Optane DC persistent memory, the SAP Quick Sizer tool takes into consideration the data that should be stored in DRAM and the data that should be stored in Intel Optane DC persistent memory when making recommendations. Note that SAP HANA uses persistent memory for all data that resides in the column data store.
For more information about the SAP Quick Sizer tool, see https://www.sap.com/about/benchmark/sizing.quick-sizer.html#quick-sizer.
Ratio of DRAM to persistent memory
Intel Optane DC PMMs must be installed together with DRAM DIMMs in the same system; the PMMs will not function without DRAM DIMMs installed. In two-, four-, and eight-socket configurations, each processor socket has two integrated memory controllers (IMCs). Each memory controller is connected to three double-data-rate (DDR) memory channels, and each channel connects to two physical DIMM slots. In this configuration, a maximum of 12 memory slots per CPU socket can be populated with a combination of Intel Optane DC PMMs and DRAM DIMMs.
SAP HANA 2.0 SPS 03 currently supports various capacity ratios of Intel Optane DC PMMs to DIMMs. Ratio examples include the following:
● 1:1 ratio: A single 128-GB Intel Optane DC PMM is matched with a single 128-GB DDR4 DIMM, or a 256-GB Intel Optane DC PMM is matched with a single 256-GB DRAM DIMM.
● 2:1 ratio: A 256-GB Intel Optane DC PMM is matched with a 128-GB DRAM DIMM, or a 128-GB Intel Optane DC PMM is matched with a 64-GB DDR4 DIMM.
● 4:1 ratio: A 512-GB Intel Optane DC PMM is matched with a 128-GB DDR4 DIMM, or a 256-GB Intel Optane DC PMM is matched with a 64-GB DRAM DIMM.
Different-sized Intel Optane DC PMMs and DIMMs can be used together as long as supported ratios are maintained (Table 9).
Table 9. Supported ratios of Intel Optane DC PMMs to DIMMs
Memory configuration (PMM + DRAM) | CPU type | Capacity (GB) with 2 CPUs | Capacity (GB) with 4 CPUs | Capacity (GB) with 8 CPUs | Ratio of Intel Optane DC PMMs to DIMMs
128-GB Intel Optane DC PMM + 32-GB DRAM | Base | 1920 | 3840 | 7680 | 4:1
128-GB Intel Optane DC PMM + 64-GB DRAM | M | 2304 | 4608 | 9216 | 2:1
128-GB Intel Optane DC PMM + 128-GB DRAM | M | 3072 | 6144 | 12,288 | 1:1
256-GB Intel Optane DC PMM + 64-GB DRAM | M | 3840 | 7680 | 15,360 | 4:1
256-GB Intel Optane DC PMM + 128-GB DRAM | L | 4608 | 9216 | 18,432 | 2:1
256-GB Intel Optane DC PMM + 256-GB DRAM | L | 6144 | 12,288 | 24,576 | 1:1
512-GB Intel Optane DC PMM + 128-GB DRAM | L | 7680 | 15,360 | | 4:1
512-GB Intel Optane DC PMM + 256-GB DRAM | L | 9216 | 18,432 | | 2:1
Sizing persistent storage
The storage size for the file system is based on the amount of memory (DRAM + Intel Optane DC PMM) on the SAP HANA host. For a single-node system with 9 TB of memory (3-TB DRAM + 6-TB Intel Optane DC PMM), the recommended file system sizes are as follows (a small calculation sketch follows the list):
● /hana/data = 1.2 x memory (DRAM + Intel Optane DC PMM) = 1.2 x 9 TB = 10.8 TB
● /hana/log = 512 GB
● /hana/shared = 1 TB
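As a simple illustration of this rule of thumb (not a substitute for the SAP sizing tools), the /hana/data size can be derived from the total memory; the 9-TB value below is just the example used in this section.
# Example only: derive /hana/data from total memory (DRAM + PMM) in TB.
TOTAL_MEM_TB=9
awk -v m="$TOTAL_MEM_TB" 'BEGIN { printf "/hana/data >= %.1f TB\n", 1.2 * m }'
# /hana/log = 512 GB and /hana/shared = 1 TB remain fixed for this configuration.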
Supported operating systems
SAP HANA with Intel Optane DC PMM is supported by the following operating systems:
● SUSE Linux Enterprise Server (SLES) for SAP Applications
◦ SLES for SAP Applications 12 SP4
◦ SLES for SAP Applications 15
● Red Hat Enterprise Linux (RHEL)
◦ RHEL 7.6 for SAP HANA
◦ RHEL 7.6 for SAP Solutions
Cisco HyperFlex HX-Series system components
The components for the Cisco HyperFlex system are as follows:
● Cisco HyperFlex HX-Series server; you can use any of the following servers to configure the Cisco HyperFlex system:
◦ All-flash converged nodes: Cisco HyperFlex HX240c M5, HX220c M5, HX240c M4, and HX220c M4 All Flash Nodes
◦ Hybrid converged nodes: Cisco HyperFlex HX240c M5, HX220c M5, HX240c M4, and HX220c M4 Nodes
◦ Computing-only nodes: Cisco UCS B200 M3 and M4, B260 M4, B420 M4, B460 M4, B480 M5, C240 M3 and M4, C220 M3 and M4, C480 M5, C460 M4, B200 M5, C220 M5, and C240 M5 servers
● Cisco HyperFlex HX Data Platform components:
◦ Cisco HyperFlex HX Data Platform Installer: Download this installer to a server connected to the storage cluster. The HX Data Platform Installer configures the service profiles and policies within Cisco UCS Manager, deploys the controller virtual machines, installs the software, creates the storage cluster, and updates the VMware vCenter plug-in.
◦ Storage controller virtual machine: The HX Data Platform Installer installs a storage controller virtual machine on each converged node in the managed storage cluster.
◦ Cisco HX Data Platform plug-in: This integrated VMware vSphere interface monitors and manages the storage in your storage cluster.
● Cisco UCS fabric interconnects:
◦ Fabric interconnects provide both network connectivity and management capabilities to any attached HX-Series server.
Cisco HyperFlex HX-Series system management components
The Cisco HyperFlex HX-Series system is managed using the following Cisco software components:
● Cisco UCS Manager
◦ Cisco UCS Manager is embedded software that resides on a pair of fabric interconnects, providing complete configuration and management capabilities for HX-Series servers. The most common way to access Cisco UCS Manager is to use a web browser to open the GUI. Cisco UCS Manager supports role-based access control (RBAC).
◦ The configuration information is replicated between two Cisco UCS fabric interconnects, providing a high-availability solution. If one fabric interconnect becomes unavailable, the other takes over.
◦ An important benefit of Cisco UCS Manager is its implementation of stateless computing. Each node in an HX-Series cluster has no set configuration. MAC addresses, universally unique IDs (UUIDs), firmware, and BIOS settings, for example, are all configured on Cisco UCS Manager in a service profile and applied uniformly to all the HX-Series servers. This approach enables consistent configuration and ease of reuse. A new service profile can be applied in minutes.
● Cisco HyperFlex HX Data Platform
◦ HX Data Platform is a hyperconverged software appliance that transforms Cisco servers into a single pool of computing and storage resources. It eliminates the need for network storage and tightly integrates with VMware vSphere and its existing management application to provide a seamless data management experience. In addition, native compression and deduplication reduce the amount of storage space occupied by the virtual machines.
◦ HX Data Platform is installed on a virtualized platform, such as vSphere. It manages the storage for your virtual machines, applications, and data. During installation, you specify the Cisco HyperFlex cluster name, and HX Data Platform creates a hyperconverged storage cluster on each of the nodes. As your storage needs increase and you add nodes to the HX-Series cluster, HX Data Platform balances the storage across the additional resources.
● VMware vCenter management
◦ The Cisco HyperFlex System uses VMware vCenter-based management. The vCenter Server is a data center management server application developed to monitor virtualized environments. HX Data Platform is also accessed from the preconfigured vCenter Server to perform all storage tasks. vCenter supports important shared storage features such as VMware vMotion, Dynamic Resource Scheduler (DRS), High Availability (HA), and vSphere replication. More scalable, native HX Data Platform snapshots and clones replace VMware snapshots and cloning capabilities.
◦ You must have vCenter installed on a separate server to access HX Data Platform. vCenter is accessed through the vSphere Client, which is installed on the administrator's laptop or PC.
Cisco HyperFlex installation prerequisites
This section presents the prerequisites for a Cisco HyperFlex installation.
Software requirements for VMware ESXi
Verify that you are using compatible versions of Cisco HyperFlex systems components and VMware vSphere components.
● The Cisco HyperFlex software components—Cisco HyperFlex HX Data Platform Installer, Cisco HyperFlex HX Data Platform, and Cisco UCS firmware—are installed on different servers. Verify that all components on each server used with and within an HX-Series storage cluster are compatible.
● Verify that the preconfigured HX-Series servers have the same version of Cisco UCS server firmware installed.
● For new hybrid or all-flash M5 (Cisco HyperFlex HX240c M5 or HX220c M5) deployments, verify that Cisco UCS Manager 4.0(2d) is installed.
Cable requirements
Cable requirements are as follows:
● Use at least two 40 Gigabit Ethernet QSFP cables per server when using Cisco UCS 6300 Series Fabric Interconnects.
● Verify that the fabric interconnect console cable (CAB-CONSOLE-RJ45) has an RJ-45 connector on one end and a DB9 connector on the other end. This cable is used to connect a laptop to the RS-232 console connection.
● The keyboard, video, and mouse (KVM) cable provides a connection for the HX-Series servers to the system. It has a DB9 serial connector, a VGA connector for a monitor, and dual USB 2.0 ports for a keyboard and mouse. With this cable, you can create a direct connection to the operating system and the BIOS running on the system.
Host requirements
A Cisco HyperFlex cluster contains a minimum of three converged Cisco HyperFlex nodes. Optionally, you can add computing-only nodes to provide more computing power if you do not need additional storage. Each server in a Cisco HyperFlex cluster is also referred to as a Cisco HyperFlex node. Make sure that each node has the settings listed here configured before you deploy the storage cluster.
For more information, refer to the Cisco HyperFlex HX240c and HX220c Node installation guides.
Verify that the following host requirements are met:
● Use the same VLAN IDs for all the servers (nodes or hosts) in the cluster.
● Use the same administrator login credentials for all the ESXi servers across the storage cluster.
● Keep Secure Shell (SSH) enabled on all ESXi hosts.
● Configure Domain Name Service (DNS) and Network Time Protocol (NTP) on all servers.
● Install and configure VMware vSphere.
● Use only a single VIC for converged nodes or computing-only nodes. Additional VICs or PCIe NICs are not supported.
Disk requirements
The disk requirements vary between converged nodes and computing-only nodes. To increase the available CPU and memory capacity, you can expand the existing cluster with computing-only nodes as needed. These computing-only nodes do not increase storage performance or storage capacity.
Alternatively, to increase both storage performance and storage capacity, you can add converged nodes, which contribute CPU and memory resources along with storage.
Servers with only SSDs are all-flash servers. Servers with both SSDs and HDDs are hybrid servers.
The following requirements apply to all the disks in a Cisco HyperFlex cluster:
● All the disks in the storage cluster must have the same amount of storage capacity. All the nodes in the storage cluster must have the same number of disks.
● All SSDs must support the TRIM command and have TRIM enabled.
● All HDDs can be either the SATA or SAS type. All SAS disks in the storage cluster must be in pass-through mode.
● Disk partitions must be removed from SSDs and HDDs. Disks with partitions are ignored and not added to your Cisco HyperFlex storage cluster.
● Optionally, you can remove or back up existing data on disks. All existing data on a provided disk is overwritten.
● Only disks ordered directly from Cisco are supported.
Note: New servers are shipped from the factory with appropriate disk partition settings. Do not remove factory-installed disk partitions from new servers.
Converged node requirements
All M5 converged nodes have an M.2 SATA SSD with ESXi installed.
● Do not mix storage disk types or storage sizes on a server or across the storage cluster. Mixing storage disk types is not supported.
● When replacing cache or persistent disks, always use the same type and size as the original disk.
● Do not mix persistent drive types or sizes: use either all HDDs or all SSDs, and use the same size of drive within a server.
● Do not mix hybrid and all-flash cache drive types. Use the hybrid cache device on hybrid servers and use all-flash cache devices on all-flash servers.
● Do not mix encrypted and nonencrypted drive types. Use self-encrypting drive (SED) hybrid or SED all-flash drives. On SED servers, both the cache and persistent drives must be the SED type.
● All nodes must use the same size and quantity of SSDs. Do not mix SSD types.
Port requirements
If your network is behind a firewall, then in addition to the standard port requirements, open the ports that VMware recommends for VMware ESXi and vCenter.
● CIP-M is the cluster management IP.
● Storage Controller Virtual Machine (SCVM) is the management IP for the controller virtual machine.
● VMware ESXi is the management IP for the hypervisor.
Verify that the firewall ports listed in Tables 10 through 18 are open.
Table 10. Time server
Port number | Service and protocol | Source | Port destination | Essential information
123 | NTP and User Datagram Protocol (UDP) | Each ESXi node, each SCVM node, and Cisco UCS Manager | Time server | Bidirectional
Table 11. Cisco HyperFlex HX Data Platform Installer
Port number | Service and protocol | Source | Port destination | Essential information
22 | SSH and TCP | HX Data Platform Installer | Each ESXi node | Management addresses
22 | SSH and TCP | HX Data Platform Installer | Each SCVM node | Management addresses
22 | SSH and TCP | HX Data Platform Installer | CIP-M | Cluster management
22 | SSH and TCP | HX Data Platform Installer | Cisco UCS Manager | Cisco UCS Manager management addresses
80 | HTTP and TCP | HX Data Platform Installer | Each ESXi node | Management addresses
80 | HTTP and TCP | HX Data Platform Installer | Each SCVM node | Management addresses
80 | HTTP and TCP | HX Data Platform Installer | CIP-M | Cluster management
80 | HTTP and TCP | HX Data Platform Installer | Cisco UCS Manager | Cisco UCS Manager management addresses
443 | HTTPS and TCP | HX Data Platform Installer | Each ESXi node | Management addresses
443 | HTTPS and TCP | HX Data Platform Installer | Each SCVM node | Management addresses
443 | HTTPS and TCP | HX Data Platform Installer | CIP-M | Cluster management
443 | HTTPS and TCP | HX Data Platform Installer | Cisco UCS Manager | Cisco UCS Manager management addresses
8089 | vSphere Software Development Kit (SDK) and TCP | HX Data Platform Installer | Each ESXi node | Management addresses
902 | Heartbeat, UDP, and TCP | HX Data Platform Installer | vCenter |
902 | Heartbeat, UDP, and TCP | HX Data Platform Installer | Each ESXi node |
None | Ping and Internet Control Message Protocol (ICMP) | HX Data Platform Installer | ESXi IP addresses and CVM IP addresses | Management addresses
9333 | UDP and TCP | HX Data Platform Installer | CIP-M | Cluster management
Table 12. Mail server (optional for email subscription to cluster events)
Port number | Service and protocol | Source | Port destination | Essential information
25 | Simple Mail Transfer Protocol (SMTP) and TCP | Each SCVM node, CIP-M, and Cisco UCS Manager | Mail server | Optional
Table 13. Name server
Port number | Service and protocol | Source | Port destination | Essential information
53 (external lookups) | DNS, TCP, and UDP | Each ESXi node | Name server | Management addresses
53 (external lookups) | DNS, TCP, and UDP | Each SCVM node | Name server | Management addresses
53 (external lookups) | DNS, TCP, and UDP | CIP-M | Name server | Cluster management
53 (external lookups) | DNS, TCP, and UDP | Cisco UCS Manager | Name server |
Table 14. VMware vCenter
Port number | Service and protocol | Source | Port destination | Essential information
80 | HTTP and TCP | vCenter | Each SCVM node and CIP-M | Bidirectional
443 | HTTPS (plug-in) and TCP | vCenter | Each ESXi node, each SCVM node, and CIP-M | Bidirectional
7444 | HTTPS (vCenter single sign-on [SSO]) and TCP | vCenter | Each ESXi node, each SCVM node, and CIP-M | Bidirectional
9443 | HTTPS (plug-in) and TCP | vCenter | Each ESXi node, each SCVM node, and CIP-M | Bidirectional
5989 | Common Information Model (CIM) server and TCP | vCenter | Each ESXi node |
9080 | CIM server and TCP | vCenter | Each ESXi node | Introduced in ESXi Release 6.5
902 | Heartbeat, TCP, and UDP | vCenter | Each ESXi node | This port must be accessible from each host. Installation results in errors if the port is not open from the HX Data Platform Installer to the ESXi hosts.
Table 15. User
Port number | Service and protocol | Source | Port destination | Essential information
22 | SSH and TCP | User | Each ESXi node | Management addresses
22 | SSH and TCP | User | Each SCVM node | Management addresses
22 | SSH and TCP | User | CIP-M | Cluster management
22 | SSH and TCP | User | HX Data Platform Installer |
22 | SSH and TCP | User | Cisco UCS Manager | Cisco UCS Manager management addresses
22 | SSH and TCP | User | vCenter |
22 | SSH and TCP | User | SSO server |
80 | HTTP and TCP | User | Each SCVM node | Management addresses
80 | HTTP and TCP | User | CIP-M | Cluster management
80 | HTTP and TCP | User | Cisco UCS Manager |
80 | HTTP and TCP | User | HX Data Platform Installer |
80 | HTTP and TCP | User | vCenter |
443 | HTTPS and TCP | User | Each SCVM node |
443 | HTTPS and TCP | User | CIP-M |
443 | HTTPS and TCP | User | Cisco UCS Manager | Cisco UCS Manager management addresses
443 | HTTPS and TCP | User | HX Data Platform Installer |
443 | HTTPS and TCP | User | vCenter |
7444 | HTTPS (SSO) and TCP | User | vCenter SSO server |
8443 | HTTPS (plug-in) and TCP | User | vCenter |
Table 16. Single-sign-on server
Port number | Service and protocol | Source | Port destination | Essential information
7444 | HTTPS (SSO) and TCP | SSO server | Each ESXi node, each SCVM node, and CIP-M | Bidirectional
Table 17. Cisco UCS Manager
Port number | Service and protocol | Source | Port destination | Essential information
443 | Encryption and TCP | Each CVM node | IMC out of band (OOB) | Bidirectional for each Cisco UCS node
81 | KVM and HTTP | User | Cisco UCS Manager | OOB KVM
743 | KVM and HTTP | User | Cisco UCS Manager | OOB KVM encrypted
Table 18. Miscellaneous ports
Port number | Service and protocol | Source | Port destination | Essential information
9350 | Hypervisor service and TCP | Each CVM node | Each CVM node | Bidirectional; includes cluster management IP addresses
9097 | CIP-M failover and TCP | Each CVM node | Each CVM node | Bidirectional for each CVM to other CVMs
111 | Remote procedure call (RPC) bind and TCP | Each SCVM node | Each SCVM node | CVM outbound to installer
8002 | Installer and TCP | Each SCVM node | Installer | Service Location Protocol
8080 | Apache Tomcat and TCP | Each SCVM node | Each SCVM node | stDeploy makes the connection; any request with URI /stdeploy
8082 | Authentication service and TCP | Each SCVM node | Each SCVM node | Any request with URI /auth/
9335 | hxRoboControl and TCP | Each SCVM node | Each SCVM node | Remote-office and branch-office (ROBO) deployments
443 | HTTPS and TCP | Each CVM management IP, including CIP-M | Cisco UCS Manager A/B and VIP | Policy configuration
5696 | Transport Layer Security (TLS) and TCP | IMC from each node | KMS server | Key exchange
8125 | UDP | Each SCVM node | Each SCVM node | Graphite
427 | UDP | Each SCVM node | Each SCVM node | Service Location Protocol
32768 to 65535 | UDP | Each SCVM node | Each SCVM node | SCVM outbound communication
Fabric interconnect uplink provisioning requirements
Prior to setting up the Cisco HyperFlex cluster, plan the upstream bandwidth capacity for optimal network traffic management. This planning helps ensure that the flow is in a steady state, even if a component failure or partial network outage occurs.
Figure 3 shows Cisco HyperFlex HX Data Platform connectivity for a single host.
By default, the hx-vm-network virtual switch (vSwitch) is configured as active-active. All other vSwitches are configured as active-standby.
Note: For clusters running Cisco Catalyst® switches upstream from the fabric interconnects, set the best-effort quality-of-service (QoS) maximum transmission unit (MTU) to 9216 (go to LAN > LAN Cloud > QoS System Class). Otherwise, the failover process will fail.
All VLANs by default are tagged on the fabric interconnect, so frames are passed untagged to each vSwitch.
The vm-network port groups are created automatically in Release 1.8 of the installer with the VLAN suffix.
Figure 3. Cisco HyperFlex HX Data Platform connectivity for a single host
Set the default vSwitch NIC teaming policy and failover policy to Yes to help ensure that all management, vMotion, and storage traffic is locally forwarded to the fabric interconnects to keep the flow in a steady state.
If vNIC-a fails, ESXi computes the load balancing, and all the virtual ports are repinned to vNIC-b. When vNIC-a comes back online, repinning applies, and virtual ports are rebalanced across vNIC-a and vNIC-b. This process reduces the latency and bandwidth utilization upstream from the Cisco UCS fabric interconnects (Figure 4).
Figure 4. Traffic flow in steady state
Network settings requirements
Use these best practices for network settings:
● Use different subnets and VLANs for each network.
● Directly attach each host to a Cisco UCS fabric interconnect using a 10-Gbps cable.
● Do not use VLAN 1, which is the default VLAN, because doing so can cause networking issues, especially if a Disjoint Layer 2 configuration is used.
● The installer sets the VLANs as nonnative by default. Be sure to configure the upstream switches to accommodate the nonnative VLANs.
Each ESXi host needs the following networks:
● Management traffic network: From vCenter, manages hypervisor (ESXi server) management and storage cluster management
● Data traffic network: Manages hypervisor and storage data traffic
● vMotion network
● Virtual machine network
Four vSwitches are used, each carrying a different network:
● vswitch-hx-inband-mgmt: Used for ESXi management and storage controller management
● vswitch-hx-storage-data: Used for ESXi storage data and HX Data Platform replication
◦ These two vSwitches are further divided into two port groups with assigned static IP addresses to handle traffic between the storage cluster and the ESXi host.
● vswitch-hx-vmotion: Used for virtual machine and storage vMotion traffic
◦ This vSwitch has one port group for management, defined through vSphere, that connects to all the hosts in the vCenter cluster.
● vswitch-hx-vm-network: Used for virtual machine data traffic
◦ You can add or remove VLANs on the corresponding vNIC templates in Cisco UCS Manager. See Managing VLANs in Cisco UCS Manager and Managing vNIC templates in Cisco UCS Manager for detailed steps. To create port groups on the vSwitch, refer to Adding Virtual Port Groups to VMware Standard vSwitch; a command-line sketch follows the note below.
Note: The HX Data Platform Installer automatically creates the vSwitches.
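For reference, the following is a minimal sketch of adding a virtual machine port group with a VLAN to the vm-network vSwitch from the ESXi shell; the port-group name and VLAN ID are placeholders, and the same VLAN must also be allowed on the corresponding vNIC templates in Cisco UCS Manager as described above.
# Add a port group for a new virtual machine VLAN (name and ID are examples).
esxcli network vswitch standard portgroup add --portgroup-name=vm-network-200 --vswitch-name=vswitch-hx-vm-network
# Tag the port group with the VLAN ID.
esxcli network vswitch standard portgroup set --portgroup-name=vm-network-200 --vlan-id=200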
VLAN, vSwitch, Cisco UCS, and hypervisor requirements
Provide at least three VLAN IDs. All VLANs must be configured on the fabric interconnects during the installation. Table 19 lists the requirements.
Table 19. Required variable for the installation
Variable | Description
Note: You must use different subnets and VLANs for each of the networks listed here.
VLAN for ESXi and Cisco HyperFlex management traffic | VLAN name: hx-inband-mgmt; VLAN ID: <VLAN ID>
VLAN for Cisco HyperFlex storage traffic | VLAN name: hx-storage-data; VLAN ID: <VLAN ID>
VLAN for virtual machine vMotion traffic | VLAN name: hx-vmotion; VLAN ID: <VLAN ID>
VLAN for virtual machine access | VLAN name: hx-vmnetwork; VLAN ID: <VLAN ID>
Uplink switch model | Provide the switch type and connection type (SFP+ Twinax or optic).
Fabric interconnect cluster IP address | <IP address>
FI-A IP address | <IP address>
FI-B IP address | <IP address>
MAC address pool | 00:00:00:<MAC address pool>
IP address blocks | KVM IP address pool
Subnet mask | <subnet mask>
Default gateway | <gateway>
Cisco UCS Manager host name | Host name or IP address
User name | <admin user name>
Password | <admin password>
External switch VLAN tagging (EST) and vSwitch settings are applied using Cisco UCS Manager profiles. The HX Data Platform Installer simplifies this process.
Note the following:
● Do not use VLAN 1, which is the default VLAN, because doing so can cause networking issues, especially if a Disjoint Layer 2 configuration is used. Use a VLAN other than VLAN 1.
● The installer sets the VLANs as nonnative by default. Configure the upstream switches to accommodate the nonnative VLANs.
● Inband management is not supported on VLAN 2 or VLAN 3.
Hypervisor requirements
Enter the IP address from the range of addresses that are available to the ESXi servers on the storage management network or storage data network through vCenter (see Table 21). Provide static IP addresses for all network addresses.
● The data and management networks must be on different subnets.
● IP addresses cannot be changed after the storage cluster is created. Contact the Cisco Technical Assistance Center (TAC) for assistance.
● Although not required, if you specify DNS names, enable forward and reverse DNS lookup for the IP addresses.
● The installer IP address must be reachable from the management subnet used by the hypervisor and the storage controller virtual machines.
● The installer appliance must run on an ESXi host or VMware Workstation that is not part of the cluster being installed.
Storage cluster requirements
The storage cluster component of the HX Data Platform reduces storage complexity by providing a single datastore that is easily provisioned in the vSphere Web Client. Data is fully distributed across the disks in all servers that are in the storage cluster to use controller resources efficiently and provide high availability.
A storage cluster is independent of the associated vCenter cluster. You can create a storage cluster using ESXi hosts that are in the vCenter cluster.
To define the storage cluster, provide the parameters listed in Table 20.
Table 20. Storage cluster parameters
Name: Enter a name for the storage cluster.
Management IP address:
● This address provides storage management network access on each ESXi host.
● The IP address must be in the same subnet as the management IP addresses for the nodes.
● Do not allow cluster management IP addresses to share the last octet with another cluster on the same subnet.
● These IP addresses are in addition to the four IP addresses you assign to each node for the hypervisor.
Storage cluster data IP address:
● This address provides storage data network and storage controller virtual machine network access on each ESXi host.
● The same IP address must be applied to all ESXi nodes in the cluster.
Data replication factor: The data replication factor defines the number of redundant replicas of your data across the storage cluster. This factor is set during HX Data Platform installation and cannot be changed. Choose one of the following:
● Data replication factor 3 (recommended): A replication factor of three is highly recommended for all environments except Cisco HyperFlex Edge. A replication factor of two has a lower level of availability and resiliency. The risk of outage due to component or node failures should be mitigated through active and regular backups.
● Data replication factor 2: Keep two redundant replicas of the data. This approach consumes fewer storage resources, but it reduces your data protection in the event of simultaneous node or disk failures. If nodes or disks in the storage cluster fail, the cluster's ability to function is affected. If more than one node fails, or if one node and a disk on different nodes fail, this is called a simultaneous failure.
VMware vCenter configuration requirements
Provide an administrator-level account and password for vCenter (see Table 21). Verify that you have an existing vCenter Server. Verify that the vSphere services listed here are operational:
● Enable DRS (optional; enable if licensed).
● Enable vMotion.
● Enable HA (required for defining failover capacity and for expanding the datastore heartbeat).
● User virtual machines must be at virtual hardware version 9 or later (required to use Cisco HyperFlex HX Data Platform, native snapshots, and ReadyClones).
Table 21. VMware vCenter configuration requirements
Field | Description
vCenter Server | Enter your current vCenter server web address (for example, http://<IP address>).
User name | <admin user name>
Password | <admin password>
Data center name | Enter the name of the vCenter data center. You can use an existing data center object; if the data center doesn't exist in vCenter, it will be created.
Cluster name | Enter the name of the vCenter cluster. The cluster must contain a minimum of three ESXi servers.
System services requirements
Before installing HX Data Platform, verify that the following network connections and services are operational:
● DNS server
● NTP server
● Time zone
Note these guidelines:
● DNS and NTP servers should reside outside the Cisco HyperFlex storage cluster.
● Nested DNS and NTP servers can cause a cluster to not start after the entire cluster is shut down, such as after loss of power to the data center.
● Before configuring the storage cluster, manually verify that the NTP server is working and providing a reliable source for the time (a quick command-line check is sketched after this list).
● Use the same NTP server for all nodes (both converged and computing) and all storage controller virtual machines.
● The NTP server must be stable, continuous (for the lifetime of the cluster), and reachable through a static IP address.
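The NTP check can be run from any Linux host that has the NTP client tools installed. The following is a minimal sketch; <<NTP-SERVER IP>> is a placeholder for your NTP server address:
# ntpdate -q <<NTP-SERVER IP>>
If the query returns the server's stratum and offset without errors, the NTP server is reachable and serving time.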
See Table 22.
Table 22. System services configuration requirements
Field |
Essential information |
DNS servers |
<IP address> The DNS server address is required if you use host names while installing the HX Data Platform. Note:
● If you do not have a DNS server, do not enter a host name under System Services on the Cluster Configuration page of the HX Data Platform Installer. Use only IP addresses.
● To provide more than one DNS server address, separate the addresses with a comma. Check carefully to verify that the DNS server addresses are entered correctly.
|
NTP servers (A reliable NTP server is required.) |
<IP address> The NTP server is used for clock synchronization between:
● Storage controller virtual machine
● ESXi hosts
● vCenter Server
Note: A static IP address for an NTP server is required to help ensure clock synchronization for the storage controller virtual machine, ESXi hosts, and vCenter Server. During installation, this information is propagated to all the storage controller virtual machines and corresponding hosts. The servers are automatically synchronized at storage cluster startup. |
Time zone |
<your time zone> Select a time zone for the storage controller virtual machines. The time zone is used to determine when to take scheduled snapshots. Note: All virtual machines must be in the same time zone. |
CPU resource reservation for controller virtual machines
Because the storage controller virtual machines provide critical functions for the Cisco HyperFlex HX Data Platform, the HX Data Platform Installer configures CPU resource reservations for the controller virtual machines. This reservation helps guarantee that the controller virtual machines have the minimum required CPU resources. This resource reservation is especially useful in situations in which the physical CPU resources of the ESXi hypervisor host are heavily consumed by guest virtual machines. Tables 23 and 24 show the CPU and memory resource reservation for storage controller virtual machines.
Table 23. CPU reservation for storage controller virtual machines
Number of virtual CPUs |
Shares |
Reservation |
Limit |
8 |
Low |
10,800 MHz |
Unlimited |
Table 24. Memory reservation for storage controller virtual machines
Server model |
Amount of guest memory |
Reserve all guest memory |
● HX240c-M5SX
● HXAF240C-M5SX
|
72 GB |
Yes |
Single-sign-on requirements
The SSO URL is provided by vCenter. If it is not directly reachable from the controller virtual machine, then configure the location explicitly using the installer’s Advanced Settings page. See Table 25.
Table 25. SSO requirements
Single sign-on |
|
SSO server URL |
The SSO URL can be found in vCenter at vCenter Server > Manage > Advanced Settings, under the key config.vpxd.sso.sts.uri. |
This section describes how to install the Cisco UCS seed server.
Installing a standalone Cisco UCS C480 M5 Rack Server with Intel Optane DC PMM
This section explains how to install a Cisco UCS C480 M5 Rack Server in standalone mode. This server can also be integrated with the existing seed Cisco HyperFlex infrastructure, which is discussed later in this document.
Table 26 shows the configuration variables for the installation.
Table 26. Configuration variables
Variable |
Description |
Customer implementation value |
<<var_cimc_ip_address>> |
Cisco UCS C480 M5 server’s IMC IP address |
|
<<var_cimc_ip_netmask>> |
Cisco UCS C480 M5 server’s IMC network netmask |
|
<<var_cimc_gateway_ip>> |
Cisco UCS C480 M5 server’s IMC network gateway IP address |
|
<<var_raid5_vd_name>> |
Name for virtual drive VD0 during RAID configuration |
|
<<var_hostname.domain>> |
SAP HANA node’s fully qualified domain name (FQDN) |
|
<<var_sys_root-pw>> |
SAP HANA node’s root password |
|
<<var_lvm_vg_name>> |
SAP HANA node’s OS logical volume management (LVM) volume group name |
|
<<var_mgmt_ip_address>> |
SAP HANA node’s management and administration IP address |
|
<<var_mgmt_nw_netmask>> |
SAP HANA node’s management network netmask |
|
<<var_mgmt_gateway_ip>> |
Cisco UCS C480 M5 server’s management and administrative network gateway IP address |
|
<<var_mgmt_netmask_prefix>> |
Netmask prefix in Classless Inter-Domain Routing (CIDR) notation |
|
Configuring the Cisco Integrated Management Controller
To configure the on-board IMC, you should connect a KVM switch to the server.
1. After the power cables and network cables are connected, turn on the power (Figures 5 and 6).
BIOS POST screen
BIOS POST screen (continued)
2. Press F8 to display the IMC configuration (Figure 7).
Cisco UCS C480 IMC configuration view (local display)
3. Use the console network IP address <<var_cimc_ip_address>>, netmask <<var_cimc_ip_netmask>>, and gateway <<var_cimc_gateway_ip>> for the IPv4 settings of the IMC. Select None for network interface card (NIC) redundancy.
4. Press F10 to save the configuration and exit the utility.
5. Open a web browser on a computer on the same network with Java and Adobe Flash installed.
6. Enter the IMC IP address of the Cisco UCS C480 M5 server: http://<<var_cimc_ip_address>>.
7. Enter the login credentials as updated in the IMC configuration. The default user name and password are admin and password (Figure 8).
Cisco IMC login screen
Figure 9 shows the IMC Summary page of the C480 M5 server.
Cisco IMC summary screen
Configuring BIOS settings
You need to power on the server and configure some BIOS settings before proceeding with the RAID configuration.
1. From the menu bar at the top of the KVM window, choose Power > Power on System (Figure 10).
Power on the system
2. After the server has booted, press F2 to enter the BIOS menu (Figure 11).
Press F2
3. For a better keyboard experience, from the View menu select the on-screen keyboard (Figure 12).
On-screen keyboard
4. From the BIOS menu, choose Boot Options > Boot Mode > UEFI Mode (Figure 13). This setting selects the Unified Extensible Firmware Interface (UEFI).
Choose UEFI Mode
5. Disable the C-states of the CPU as recommended by SAP for SAP HANA. From the BIOS menu, choose Advanced > Socket Configuration (Figure 14).
Choose Socket Configuration
6. Choose Advanced Power Management Configuration (Figure 15).
Choose Advanced Power Management Configuration
7. Choose CPU C State control and then disable the C-states as shown in Figure 16.
Disabling C-states
8. After disabling the C-states, press F10 and save the BIOS settings.
Rebooting the server to implement BIOS changes
To make the boot options and CPU C-states take effect, reboot the server.
Configuring RAID
You are now ready to configure RAID. To install SAP HANA on the C480 M5 server and meet the SAP performance KPIs, you must configure the SSDs with RAID 5.
Table 27 lists the settings that you need to configure when you create the virtual drive.
Table 27. RAID settings
RAID settings |
RAID 5 |
Stripe size |
256 KB |
Read policy |
Read ahead |
Write policy |
Write back |
I/O policy |
Default |
The following procedure applies to the creation of a RAID 5 virtual drive with eight SSDs.
1. Boot the server and press F2 to enter the BIOS menu.
2. Navigate to Advanced and select the Avago MegaRAID utility to proceed with the RAID configuration (Figure 17).
Select Avago MegaRAID
3. Choose Main Menu (Figure 18).
Choose Main Menu
4. Choose Configuration Management (Figure 19).
Choose Configuration Management
5. Choose Create Virtual Drive (Figure 20).
Choose Create Virtual Drive
6. Choose the following options to create a RAID 5 virtual drive.
a. For RAID Level, choose RAID 5.
b. Choose Select Drives (Figure 21).
Choose RAID options
c. Choose Select Drives and then select the eight SSDs by choosing Enabled as shown in Figure 22.
Choose Enabled
d. Scroll up or down on the Select Drives screen and choose Apply Changes (Figure 23).
Apply the changes
e. Choose OK in the confirmation window.
7. Configure the virtual drive parameters as shown in Figure 24.
a. Name the virtual drive <<var_raid5_vd_name>>.
b. For Strip Size, choose 256 KB.
c. For Read Policy, choose Read Ahead.
d. For Write Policy, choose Write Back.
When you are done, choose Save Configuration and press Enter.
Virtual drive parameters
8. In the next window, the utility will ask for confirmation. Choose OK to proceed.
9. Wait for the initialization process for VD0 to complete, which may take several minutes.
10. Press Esc and choose OK to exit the RAID configuration utility.
11. Press Ctrl+Alt+Del to reboot the server so that the RAID changes take effect.
12. After the server reboots, you can skip to the “Installing the SLES for SAP Applications 12 SP4 operating system” or “Installing the RHEL for SAP Applications 7.6 operating system” section.
Installing Cisco HyperFlex servers
This section discusses how to install the Cisco HyperFlex servers.
Racking the Cisco HyperFlex servers
To rack the Cisco HyperFlex HX240c M5SX All Flash Node, follow the instructions here: Cisco UCS C240 M5 Server Installation and Service Guide.
Setting up the fabric interconnects
Configure a redundant pair of fabric interconnects for high availability as follows:
● Connect the two fabric interconnects directly using Ethernet cables between the Layer 1 and Layer 2 high-availability ports.
● Connect port L1 on fabric interconnect A to port L1 on fabric interconnect B, and port L2 on fabric interconnect A to port L2 on fabric interconnect B.
This approach allows both fabric interconnects to continuously monitor the status of each other.
Both fabric interconnects must go through the same setup process. Set up the primary fabric interconnect and enable it for cluster configuration. When you use the same process to set up the secondary fabric interconnect, it detects the first fabric interconnect as a peer.
Configure the primary fabric interconnect using the command-line interface
Configure the primary fabric interconnect from the command-line interface (CLI):
1. Connect to the console port.
2. Power on the fabric interconnect. You will see the power-on self-test (POST) messages as the fabric interconnect boots.
3. When the unconfigured system boots, it prompts you for the setup method to use. Enter console to continue the initial setup using the console CLI.
4. Enter setup to continue with the initial system setup.
5. Enter y to confirm that you want to continue the initial setup.
6. Enter the password for the admin account.
7. To confirm, reenter the password for the admin account.
8. Enter yes to continue the initial setup for a cluster configuration.
9. Enter the fabric interconnect fabric (either A or B).
10. Enter the system name.
11. Enter the IPv4 or IPv6 address for the management port of the fabric interconnect.
12. Enter the appropriate IPv4 subnet mask. Then press Enter.
13. Enter the IPv4 address of the default gateway.
14. Enter yes if you want to specify the IP address for the DNS server, or no if you do not.
15. (Optional) Enter the IPv4 address for the DNS server.
16. Enter yes if you want to specify the default domain name, or no if you do not.
17. (Optional) Enter the default domain name.
18. Review the setup summary and enter yes to save and apply the settings.
The following example sets up the first fabric interconnect for a cluster configuration using the console and IPv4 management addresses.
Enter the installation method (console/gui)? console
Enter the setup mode (restore from backup or initial setup) [restore/setup]? setup
You have chosen to setup a new switch. Continue? (y/n): y
Enter the password for "admin": adminpassword%958
Confirm the password for "admin": adminpassword%958
Do you want to create a new cluster on this switch (select 'no' for standalone setup or if you want this switch to be added to an existing cluster)? (yes/no) [n]: yes
Enter the switch fabric (A/B): A
Enter the system name: SEED-FI
Mgmt0 IPv4 address: 192.168.78.10
Mgmt0 IPv4 netmask: 255.255.255.0
IPv4 address of the default gateway: 192.168.78.1
Virtual IPv4 address: 192.168.78.12
Configure the DNS Server IPv4 address? (yes/no) [n]: yes
DNS IPv4 address: 192.168.78.5
Configure the default domain name? (yes/no) [n]: yes
Default domain name: seed.ciscolab.local
Join centralized management environment (UCS Central)? (yes/no) [n]: no
Following configurations will be applied:
Switch Fabric=A
System Name=SEED-FI
Management IP Address=192.168.78.10
Management IP Netmask=255.255.255.0
Default Gateway=192.168.78.1
Cluster Enabled=yes
Virtual Ip Address=192.168.78.12
DNS Server=192.168.78.5
Domain Name=seed.ciscolab.local
Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
Configure the subordinate fabric interconnect using the CLI
This procedure describes how to set up the second fabric interconnect using IPv4 or IPv6 addresses for the management port.
1. Connect to the console port.
2. Power on the fabric interconnect. You will see the POST messages as the fabric interconnect boots.
3. When the unconfigured system boots, it prompts you for the setup method to use. Enter console to continue the initial setup using the console CLI.
Note: The fabric interconnect should detect the peer fabric interconnect in the cluster. If it does not, check the physical connections between the Layer 1 and Layer 2 ports, and verify that the peer fabric interconnect has been enabled for a cluster configuration.
4. Enter y to add the subordinate fabric interconnect to the cluster.
5. Enter the admin password of the peer fabric interconnect.
6. Enter the IP address for the management port on the subordinate fabric interconnect.
7. Review the setup summary and enter yes to save and apply the settings.
The following example sets up the second fabric interconnect for a cluster configuration using the console and IPv4 management addresses.
Enter the installation method (console/gui)? console
Installer has detected the presence of a peer Fabric interconnect. This Fabric interconnect will be added to the cluster. Continue (y/n) ? y
Enter the admin password of the peer Fabric Interconnect: adminpassword%958
Peer Fabric interconnect Mgmt0 IPv4 Address: 192.168.78.11
Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
Verify fabric interconnect setup
You can verify that both fabric interconnect configurations are complete by logging in to the fabric interconnect through SSH.
Use the following commands to verify the cluster status using the Cisco UCS Manager CLI.
SEED-FI-A# show cluster state
Cluster Id: 0x4432f72a371511de-0xb97c000de1b1ada4
A: UP, PRIMARY
B: UP, SUBORDINATE
HA READY
SEED-FI-A# show cluster extended-state
Cluster Id: 0x1036bea8a73e11e8-0x8acf707db94a5083
Start time: Wed Aug 21 08:56:29 2019
Last election time: Thu Aug 22 23:51:37 2019
B: UP, PRIMARY
A: UP, SUBORDINATE
B: memb state UP, lead state PRIMARY, mgmt services state: UP
A: memb state UP, lead state SUBORDINATE, mgmt services state: UP
heartbeat state PRIMARY_OK
INTERNAL NETWORK INTERFACES:
eth1, UP
eth2, UP
HA READY
Detailed state of the device selected for HA storage:
Chassis 1, serial: FOX2107PBMN, state: active
Chassis 2, serial: FOX2102Q2NL, state: active
Server 2, serial: WZP22090HCV, state: active
Connect the Cisco HyperFlex servers to the fabric interconnect
Follow these steps to connect the Cisco HyperFlex servers to the fabric interconnect:
1. Set the Cisco IMC server to the factory default settings before integrating it with Cisco UCS Manager.
2. Do not connect dedicated IMC ports to the network for integrated nodes. Doing so prevents the server from being discovered in Cisco UCS Manager. If the server is not discovered, reset the IMC to the factory settings for each server.
3. Before you connect the IMC server, make sure that a Cisco UCS VIC 1227 is installed in PCIe slot 2 of the Cisco HyperFlex HX240c server. If the card is not installed in the correct slot, you cannot enable direct-connect management for the server.
4. Complete the physical cabling of servers to the fabric interconnects and configure the ports as server ports.
Discover Cisco HyperFlex nodes in the fabric interconnect
Follow these steps to let the fabric interconnects discover the Cisco HyperFlex nodes.
1. After the Cisco HyperFlex servers are connected to the fabric interconnects, power on the servers to start the Cisco UCS Manager discovery process.
2. Log in to Cisco UCS Manager and monitor for the successful discovery of all the Cisco HyperFlex nodes (Figure 25).
Example of screen showing Cisco HyperFlex nodes discovered in Cisco UCS Manager
3. Verify that the servers are discovered without any errors and that Overall Status displays Unassociated (Figure 26).
Verifying server discovery
Deploying Cisco HyperFlex systems
Figure 27 shows the installation workflow involved in creating a standard cluster using the HX Data Platform Installer.
Workflow for creating a standard cluster
Deploy Cisco HyperFlex HX Data Platform Installer
Deploy the HX Data Platform Installer OVA file using the vSphere Web Client. If your hypervisor wizard defaults to DHCP for assigning IP addresses to new virtual machines, deploy the HX Data Platform Installer OVA with a static IP address.
In addition to installing the HX Data Platform on an ESXi host, you can deploy the HX Data Platform Installer on VMware Workstation, VMware Fusion, or Oracle VM VirtualBox.
Note these guidelines:
● Connect to vCenter to deploy the OVA file and provide the IP address properties. Deploying directly from an ESXi host will not allow you to set the values correctly.
● Do not deploy the HX Data Platform Installer on an ESXi server that is going to be a node in the Cisco HyperFlex storage cluster.
1. Locate and download the HX Data Platform Installer OVA file from Download Software. For example, choose Cisco-HX-Data-Platform-Installer-v4.0.1b-xxx-esx.ova.
2. Deploy the HX Data Platform Installer using the VMware hypervisor to create an HX Data Platform Installer virtual machine.
Note: Use a version of the virtualization platform that supports virtual hardware Release 10.0 or later.
3. vSphere is a system requirement. You can use the vSphere thick client, thin client, or web client. To deploy the HX Data Platform Installer, you can also use VMware Workstation, VMware Fusion, or Oracle VM VirtualBox.
4. Open the ESXi host web client or vSphere client to import the HX Data Platform Installer from the OVA file. In this example, an ESXi host is used (Figure 28).
Opening VMware ESXi to import the Cisco HyperFlex HX Data Platform Installer
5. Click Create/Register VM in the top menu (Figure 29).
Click Create/Register VM
6. In the next window, choose “Deploy a virtual machine from an OVF or OVA file” and click Next (Figure 30).
Choosing to deploy a virtual machine from a file
7. Enter a name for the HX Data Platform Installer virtual machine and browse for the downloaded OVA file; then click Next (Figure 31).
Finding the downloaded file
8. Choose the datastore from the ESXi host on which you want to place the HX Data Platform Installer virtual machine and click Next (Figure 32).
Choosing the datastore
9. Expand the properties section for the virtual machine to customize the values for the installation; then click Next (Figure 33). Remember to choose a root password. If you do not choose a root password, you can use the default password Cisco123 while logging in.
Entering additional settings
10. Review the settings and click Finish to start deployment of the HX Data Platform Installer virtual machine (Figure 34).
Start deployment
After the HX Data Platform Installer virtual machine has been deployed, it will be listed in the ESXi host’s list of virtual machines (Figure 35).
The installer listed in the ESXi list of virtual machines
11. After the HX Data Platform Installer virtual machine has been deployed, you can access the https://<ip address> assigned to the virtual machine and log in as root using the password you chose while deploying the installer, or you can use the default password (Figure 36).
Logging in to the installer
Configure and deploy the Cisco HyperFlex cluster
Now configure and deploy the Cisco HyperFlex cluster.
1. Log in to the HX Data Platform Installer with root user credentials.
Note: If you used the factory-set default password for your first-time login, you will be prompted to change the default password to one of your choice.
2. In a browser, enter the URL for the virtual machine on which HX Data Platform Installer was installed.
3. Enter the following login credentials:
◦ Username: root
◦ Password: <<HX Data Platform Installer virtual machine password>> or default password Cisco123
4. Read the end-user license agreement (EULA), select the “I accept the terms and conditions” checkbox, and click Login (Figure 37).
Logging in to the installer
5. Choose Standard Cluster from the Create Cluster menu and click Continue (Figure 38).
Choosing Standard Cluster
6. Enter the Cisco UCS Manager and vCenter credentials and click Continue (Figure 39).
Entering Cisco UCS Manager and VMware vCenter credentials
7. After verifying the Cisco UCS Manager and vCenter credentials, the installer lists the available unassociated Cisco HyperFlex servers. Choose the servers you want to use to create the cluster and click Continue (Figure 40). The installer then validates the Cisco UCS Manager prerequisites (Figure 41).
List of available servers
Validating the configuration
8. Enter the VLAN information, MAC address pool, management IP address blocks, and cluster name for your setup and click Continue (Figure 42).
Entering configuration information
9. The next screen allows you to configure the hypervisor settings. You have the option to enter a new root password for the ESXi hosts. After filling in the required information, click Continue (Figure 43).
Configuring hypervisor settings
10. Enter the IP address details for the management and data networks. These will be assigned to the storage controller virtual machines of the Cisco HyperFlex cluster. The cluster IP addresses that you enter will also be used to connect to the HTML5-based management interface, Cisco HyperFlex Connect. Click Continue (Figure 44).
Entering IP address details
11. Enter the desired cluster name.
12. Choose the replication factor.
13. Enter new passwords for the controller virtual machines.
14. Enter the vCenter information for the new Cisco HyperFlex cluster that is to be created.
15. Enter the DNS and NTP server information.
16. Choose the time zone of the cluster.
17. Verify that Jumbo Frames is enabled in the Advanced Configuration menu.
18. Review the settings in the side bar and click Start to start the deployment (Figure 45).
Finalizing the settings and starting the deployment
19. The installation starts with validation checks. After the checks are successful, the installer proceeds with the Cisco UCS Manager and hypervisor configurations. If any warnings or errors are reported, address them before proceeding (Figure 46).
Installation in progress
20. After the Cisco UCS Manager and hypervisor configurations are complete, the installer validates the deployment (Figure 47).
Validating the deployment
21. Upon successful completion of the validation, the actual deployment of the Cisco HyperFlex cluster starts. Wait until the deployment of the cluster completes successfully.
22. After the deployment is completed, the installer displays the status of the Cisco HyperFlex cluster (Figure 48).
Cluster status information
23. You can access the Cisco HyperFlex cluster through Cisco HyperFlex Connect using the cluster IP address that was assigned during the cluster configuration. Log in with the admin credentials (Figure 49).
Logging in to Cisco HyperFlex Connect
24. After the login, Cisco HyperFlex Connect displays a dashboard that provides a complete snapshot of the Cisco HyperFlex cluster on a single page (Figure 50).
Complete snapshot of the cluster
25. Log in to vCenter and verify that all four Cisco HyperFlex nodes are available and that the storage controller virtual machines are all powered on (Figure 51).
Checking the nodes
26. Check for any warnings or error messages and fix any problems as needed.
27. Create a datastore using the Cisco HyperFlex Connect management page; the datastore will be mounted automatically on the ESXi hosts.
28. Log in to Cisco HyperFlex Connect and click Datastores in the left navigation pane.
29. Choose Create Datastore in the top menu bar (Figure 52).
Choosing to create a datastore
30. Enter the desired size of the datastore and the block size and click Create Datastore (Figure 53).
Click Create Datastore
The datastore will be created and mounted on the ESXi hosts of the Cisco HyperFlex cluster (Figure 54).
Mounting the datastore
31. Verify in vCenter that the datastore that you created has been mounted successfully on the ESXi hosts of the Cisco HyperFlex cluster (Figure 55).
Verifying that the datastore has been mounted
With this final check, you have successfully deployed the Cisco HyperFlex cluster, and the cluster is ready for service. New virtual machines can now be created and used.
Integrating and managing a Cisco UCS C480 M5 server with Cisco UCS Manager from the Cisco HyperFlex system
This section explains how to integrate and manage the Cisco UCS C480 M5 Rack Server with Cisco UCS Manager. This configuration uses the same fabric interconnects that connect the Cisco HyperFlex system. With this configuration, you can use a network setup in which the virtual machines running in the Cisco HyperFlex cluster can use the HANA database that is running in the C480 M5 server.
Connecting and discovering the Cisco UCS C480 M5 server in Cisco UCS Manager
Here are the steps for connecting and discovering the Cisco UCS C480 M5 server in Cisco UCS Manager.
1. After the C480 M5 has been racked, connect port 1 of the VIC 1385 to fabric interconnect A, and connect port 2 of the VIC 1385 to fabric interconnect B.
2. Power on the server.
3. Open a web browser and navigate to the Cisco UCS 6332 Fabric Interconnect cluster address.
4. When prompted, enter admin as the user name and enter the administrative password.
5. Click Login to log in to Cisco UCS Manager. Cisco UCS Manager starts the discovery of the C480 M5 and shows the overall status as Unassociated (Figure 56).
Discovery process for Cisco UCS C480 M5 server
6. Verify that the discovery was successful, without any errors.
7. Check the server inventory to see the available Intel Optane DC PMMs and their health (Figure 57).
Checking the PMMs
Configuration prerequisites for service profile template
This section explains the procedure for creating a service profile template that you can use to create a service profile for the C480 M5 server and associate it with the server. You can use the same template to create as many C480 M5 service profiles as you need, without repeating these steps.
Create a new organization in Cisco UCS Manager
For secure multitenancy within the Cisco UCS domain, you can create a logical entity called an organization. To create an organization unit, complete the following steps:
1. In Cisco UCS Manager, on the tool bar, click New.
2. In the drop-down menu, choose Create Organization.
3. Enter the desired name: for example, enter SEED-DEMO.
4. Click OK to create the organization.
Create an IP address block for KVM access
To create a block of IP addresses for server KVM access in the Cisco UCS environment, follow the procedure presented here.
Note: Verify that this IP address block is in the same range as the seed Cisco HyperFlex system fabric interconnects.
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Choose Pools > root > Sub-Organizations > <<Org Name>> > IP Pools.
3. Right-click and choose Create IP Pool.
4. Enter a name for the IP address pool (Figure 58).
Entering a name for the IP address pool
5. Enter the starting IP address of the block, the number of IP addresses required, and the subnet and gateway information (Figure 59).
Entering the IP address block information
6. Click OK to create the IP address block.
7. Click Finish (Figure 60).
Create the IP address pool
Create MAC address pools
To configure the necessary MAC address pools for the Cisco UCS environment, complete the following steps.
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Choose Pools > root > Sub-Organizations > <<Org Name>>. In this procedure, two MAC address pools are created, one for each switching fabric.
3. Right-click MAC Pools.
4. Choose Create MAC Pool to create the MAC address pool (Figure 61).
Choosing to create a MAC address pool
5. Enter SEED-FI-A as the name of the MAC address pool.
6. (Optional) Enter a description for the MAC address pool.
7. For Assignment Order, select Sequential (Figure 62).
Selecting sequential assignment order for MAC addresses
8. Click Next.
9. Click Add at the bottom of the screen.
10. Specify a starting MAC address. The recommended approach is to place 0A in the next-to-last octet of the starting MAC address to identify all of the MAC addresses as fabric interconnect A addresses.
11. Specify a size for the MAC address pool that is sufficient to support the available blade and rack server resources (Figure 63).
Specifying the starting address and size
12. Repeat steps 1-11 to create the MAC address pool for fabric interconnect B. Be sure to include the “B” identifier in the MAC address pool name: for example, SEED-FI-B.
The recommended approach is to place 0B in the next-to-last octet of the starting MAC address to identify all the MAC addresses as fabric interconnect B addresses.
Create UUID suffix pools
To configure the necessary UUID suffix pool for the Cisco UCS environment, complete the following steps.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Choose Pools > root > Sub-Organizations > <<Org Name>>.
3. Right-click UUID Suffix Pools.
4. Choose Create UUID Suffix Pool (Figure 64).
Choosing to create a UUID suffix pool
5. Enter UUID_Pool as the name of the UUID suffix pool (Figure 65).
Naming the UUID suffix pool
6. (Optional) Enter a description for the UUID suffix pool.
7. Keep the Prefix as the Derived option.
8. Select Sequential for Assignment Order.
9. Click Next.
10. Click Add to add a block of UUIDs.
11. Keep the From field at the default setting.
12. Specify a size for the UUID block that is sufficient to support the available rack and blade resources (Figure 66).
Configuring the block range and size
13. Click OK.
Create local disk configuration policy
A local disk configuration policy configures SSD local drives that have been installed on a server through the Cisco 12-Gbps RAID controller card.
To create a local disk configuration policy for the C480 M5 servers, follow the steps presented here.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Choose Policies > Sub-Organizations > <<Org Name>>.
3. Right-click Local Disk Config Policies.
4. Choose Create Local Disk Configuration Policy (Figure 67).
Choose to create local disk configuration policy
5. Enter the desired name for the policy.
6. Change the mode to Any Configuration.
7. Click OK to create the local disk configuration policy (Figure 68).
Creating local disk configuration policy
Create server BIOS policy
The purpose of the seed C480 M5 server is to host an SAP HANA database, so to get the best performance for SAP HANA, you must configure the server BIOS settings appropriately. To create a server BIOS policy for the Cisco UCS environment, complete the steps here.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Choose Policies > root > Sub-Organizations > <<Org Name>>.
3. Right-click BIOS Policies.
4. Choose Create BIOS Policy (Figure 69).
Choosing to create BIOS policy
5. Enter the desired BIOS policy name.
6. Change the Quiet Boot setting to Disabled (Figure 70).
Disable Quiet Boot
7. Click the Advanced tab to disable the C-states as recommended by SAP.
8. Set Power Technology and Energy Performance to Performance.
9. Click Save Changes at the bottom of the screen (Figure 71).
Configuring advanced settings
Create server maintenance policy
You should update the default maintenance policy with the reboot policy User Ack for the SAP HANA server. This policy will wait for the administrator to acknowledge the server reboot before the configuration changes take effect.
To update the default maintenance policy, complete the steps here.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Choose Policies > root > Sub-Organizations > <<Org Name>>.
3. Right-click Maintenance Policies and choose Create Maintenance Policy (Figure 72).
Choosing to create maintenance policy
4. Change Reboot Policy to User Ack.
5. Click Save Changes.
6. Click OK to accept the change (Figure 73).
Applying the change
Creating the service profile template for the Cisco UCS C480 M5
The LAN configurations and relevant SAP HANA policies must be defined before you create a service profile template.
To create the service profile template, complete the steps shown here.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Choose Service Profile Templates > root > Sub-Organizations > <<org name>>.
3. Right-click <<org name>>.
4. Choose Create Service Profile Template to open the Create Service Profile Template wizard (Figure 74).
Choosing to create a service profile template
5. Identify the service profile template:
a. Enter the desired name for the service profile template.
b. Choose the Updating Template option.
c. Under UUID, select the appropriate UUID as the UUID pool.
d. Click Next (Figure 75).
Identifying the template
6. On the Storage Provisioning page, do the following:
a. Select Local Disk Configuration Policy.
b. Choose the local storage with the policy name that was created in the previous steps.
c. Click Next (Figure 76).
Provisioning the storage
The seed unit will be integrated with the existing Cisco HyperFlex system’s Cisco UCS Manager, so the VLANs created by the HX Data Platform Installer will be used here. In this example, only the access and management VLANs are used. If you need more VLANs for different purposes, refer to the sections about creating VLANs in Cisco UCS Integrated Infrastructure for SAP HANA.
7. On the Networking page, do the following:
a. Select the Expert option.
b. Click Add to add the vNICs. You will be creating two vNICs in this example (Figure 77).
Selecting networking options
8. On the Create vNIC page, do the following:
a. Enter the desired name for the vNIC.
b. Choose the appropriate MAC address pool, which was created in the previous steps.
c. Choose the fabric ID “A.”
d. Select Enable Failover.
e. Choose the VLAN that corresponds to your access network.
f. Click OK to create the vNIC (Figure 78).
Creating the vNIC
9. Repeat step 8 to create the second vNIC for the management network (Figure 79).
Creating the second vNIC
10. Click OK to create the second vNIC (Figure 80).
Two vNICs created
11. Click Next.
12. This next step is for optional SAN connectivity configuration. Accept the default values and click Next (Figure 81).
Optional SAN configuration
13. If there is no SAN storage configuration, click Next (Figure 82).
Proceeding without SAN configuration
14. Review the vNIC and vHBA placements and click Next (Figure 83).
Reviewing the vNIC and vHBA placements
15. Click Next on the vMedia Policy page (Figure 84).
vMedia Policy page
16. Choose the appropriate boot policy, created in the previous steps, and click Next (Figure 85).
Setting boot policy
17. Choose the appropriate maintenance policy, created in the previous steps, and click Next (Figure 86).
Setting maintenance policy
18. Select the appropriate BIOS policy, created in the previous steps, and click Finish to create the service profile template (Figure 87).
Completing the service profile
Creating the service profile for the Cisco UCS C480 M5 from a template
To create service profiles from the service profile template SEED-C480M5, complete the following steps.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Choose Service Profile Templates > root > Sub-Organization > <<org name>> > Service Template SEED-C480M5.
3. Right-click Service Template SEED-C480M5 and choose Create Service Profiles from Template.
4. Enter HANA-Server0 as the service profile prefix.
5. Enter 1 as the name suffix starting number.
6. Enter 1 as the number of instances.
7. Click OK to create the service profile (Figure 88).
Creating a service profile from a template
Associating the service profile with a Cisco UCS C480 M5 server
To associate the service profile created for a specific slot, complete the following steps.
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Choose Service Profile > root > Sub-Organization > HANA > HANA-Server01.
3. Right-click HANA-Server01 and choose Change Service Profile Association.
4. For Server Assignment, choose “Select existing Server” from the drop-down list.
5. Select the server (Figure 89).
Selecting the server for the profile
6. Click OK to associate the service profile.
7. Check for any error messages before confirming the association. Then click Yes (Figure 90).
Confirming the association
8. Monitor the service profile association from the Finite State Machine (FSM) tab of the server or service profile (Figure 91).
Monitoring the service profile association
After the successful association of the service profile, the C480 M5 is ready for OS installation (Figure 92).
Service profile successfully associated
Configuring RAID for OS installation
After successful association of the service profile, you need to prepare the C480 M5 server with a RAID 5 disk configuration. This task was explained earlier in the section “Configuring RAID” under “Installing a standalone Cisco UCS C480 M5 Rack Server with Intel Optane DC PMM.”
Installing the SLES for SAP Applications 12 SP4 operating system
This section shows the installation procedure for SLES for SAP Applications 12 SP4 on local drives.
1. Launch the KVM console and mount the OS ISO image.
2. Boot the server with the OS ISO image (Figure 93).
Booting to the ISO image
3. On the Language, Keyboard, and License Agreement page, select the English language and your preferred keyboard layout, agree to the license terms, and click Next.
4. On the Network Settings page, click Next. You will return to the network configuration as part of the post-installation tasks.
5. On the Registration page, click Skip Registration. You will register later as part of the post-installation tasks.
6. On the Choose Operating System Edition page, select the SUSE Linux Enterprise Server for SAP Applications option (Figure 94).
Select the product installation mode
7. On the Add On Product page, click Next. In this configuration example, there are no additional products to install.
8. On the Suggested Partitioning page, click Expert Partitioner (Figure 95).
Suggested partitioning initial proposal: Select Expert Partitioner
9. At the left, choose System View > Linux > Hard Disks > sda.
10. Clear the suggested partitions. The example here shows two suggested partitions: sda1 and sda2. Use the following steps to delete sda1 and sda2.
a. Delete partition sda2 (Figures 96 and 97).
Expert Partitioner: Delete partition sda2
Expert Partitioner: Confirm deletion of partition sda2
b. Delete partition sda1 (Figures 98 and 99).
Expert Partitioner: Delete partition sda1
Expert Partitioner: Confirm deletion of partition sda1
Now, from the unpartitioned device sda, you will use the steps here to do the following:
● Create a 200-MB /boot/efi partition (/dev/sda1) from the disk device available (/dev/sda).
● Create another partition (/dev/sda2), assigning the rest of the available space in the device (/dev/sda). Assign this partition to Linux LVM, thus making it a physical volume.
● Create a volume group (hanavg) and assign the available physical volume (/dev/sda2) to it.
● Create a logical volume for the root (/) file system with a size of 100 GB, using the Ext3 file system.
● Create a swap volume with a size of 2 GB.
11. In the Expert Partitioner, choose the device /dev/sda and click Add (Figure 100).
Add new partition
12. Create a partition with a size of 200 MB for /boot/efi (Figure 101).
Adding a partition: Specify the new partition size
13. Click Next. For Role, select EFI Boot Partition (Figure 102).
Adding a partition: Specify the role
14. Click Next. By default, the FAT file system is selected, and /boot/efi is selected as the mount point (Figure 103).
Adding a partition: Select formatting and mounting options
15. Click Finish. Then click Add to add another partition (Figure 104).
Expert Partitioner: Add another partition
16. Allocate the rest of available space to the partition (Figure 105).
Adding another partition: Specify the partition size
17. Click Next. For Role, choose Data and ISV Applications (Figure 106).
Adding another partition: Specify the role
18. Assign the partition with the file system ID 0x8E Linux LVM (Figure 107).
Adding another partition: Specify formatting and mounting options
19. Click Finish. You will see an overview of your partitions (Figure 108).
Expert Partitioner: Hard disk /dev/sda partitions overview
20. In the System View pane on the left, select Volume Management. Choose Add > Volume Group (Figure 109).
Expert Partitioner volume management: Add a volume group
21. Provide a name for the volume group, select /dev/sda2 from the list of available physical volumes, and click Add (Figures 110 and 111).
Add Volume Group: Select an available physical volume
Add Volume Group (continued)
22. Click Finish.
23. Under Volume Management, click Add and select Logical Volume (Figure 112).
Expert Partitioner: Add a logical volume
24. Add a logical volume with the name rootlv in the volume group (Figure 113).
Adding a logical volume: Specify the name and type
25. Click Next. Specify a size of 100 GB and 1 stripe (Figure 114).
Adding a logical volume: Specify the size and stripe
26. Click Next. For Role, specify Operating System (Figure 115).
Adding a logical volume: Specify the role
27. Click Next. Specify the formatting and mounting options. Format the 100-GB logical volume rootlv with the Ext3 file system and assign the / mount point (Figure 116).
Adding a logical volume: Specify formatting and mounting options
28. Click Finish.
29. Create a swap volume with a size of 2 GB. Under Volume Management, click Add and select Logical Volume (Figure 117).
Expert Partitioner volume management: Add another logical volume
30. Add a logical volume for swapping with the name swapvol. Then click Next (Figure 118).
Adding another logical volume: Specify the name and type
31. Assign a size of 2 GB and one stripe. Then click Next (Figure 119).
Adding another logical volume: Specify size and stripe information
32. For Role, select Swap. Then click Next (Figure 120).
Adding another logical volume: Specify the role
33. Specify the formatting and mounting options (Figure 121).
Adding another logical volume: Specify formatting and mounting options
34. Click Finish. A summary page appears (Figure 122).
Expert Partitioner: Volume management summary page
35. Click Accept to return to the Installation Settings page.
36. Review the updated partition information. Then click Next (Figure 123).
Updated partition information
37. For Clock and Time Zone, choose the appropriate time zone and select the hardware clock set to UTC.
38. For the system administrator root password, enter <<var_sys_root-pw>>.
39. On the Installation Settings screen, review the default information (Figure 124).
Installation Settings: Default proposal
40. Customize the software selection. Click the Software headline to make changes as shown here (Figure 125):
a. Deselect Gnome Desktop Environment.
b. Deselect Web-Based Enterprise Management.
c. Select C/C++ Compiler and Tools.
d. Select SAP HANA Server Base.
Software Selection and System Tasks: Customized settings
41. Click OK.
42. Under the Firewall and SSH headline, disable the firewall. This selection automatically enables the SSH service (Figure 126).
Firewall and SSH service customized
43. Click the “Default systemd target” headline and choose “Text mode” (Figure 127).
Setting the default system target to Text mode
44. Click OK.
45. Leave the Booting and System default selections unchanged (Figure 128).
Installation Settings: Final selections
46. Click Install. Also select Install at subsequent Confirm Installation prompts. The installation starts, and you can monitor the status (Figure 129).
Performing the installation
After the installation is complete, a reboot alert appears. The system will reboot and boot from disk upon startup (Figure 130).
Booting from hard disk
The system then displays the login prompt (Figure 131).
Login prompt
47. Use the KVM console to log in to the installed system as the user root with the password <<var_sys_root-pw>> (Figure 132).
Log in using root
48. Configure the host name and disable IPv6 (Figure 133).
#yast2
YaST Control Center: Network Settings
49. Choose System > Network Settings and press Alt+S to select the Hostname/DNS tab (Figure 134).
YaST Control Center: Hostname/DNS
50. Enter <<var_hostname.domain>>. Also enter the DNS server address of your network for resolution, if necessary. Then press Alt+O.
51. On the Global Options tab, using Alt+G, disable IPv6 by deselecting the Enable IPv6 option as shown in Figure 135. Note that changing the IPv6 setting requires a reboot to make the change take effect.
YaST: IPv6 setting
52. Press Alt+O to save the network configuration. Press Alt+Q to quit the YaST Control Center.
53. Reboot the server to make the IPv6 selection and the host-name settings take effect:
#reboot
54. Identify the Ethernet interface port that is connected to the management VLAN.
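If you are unsure which interface is cabled to the management VLAN, one way to identify it (a sketch; the interface name eth1 is only an example) is to list the interfaces and blink the port LED of a candidate interface for a few seconds:
# ip link show
# ethtool -p eth1 10
Not every adapter supports the LED identification option; in that case, compare the MAC addresses reported by ip link with the vNIC MAC addresses shown in Cisco UCS Manager or the IMC.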
55. Assign <<var_mgmt_ip_address>> as the IP address and enter <<var_mgmt_nw_netmask>> as the subnet mask for the available interface (for example, eth1).
56. Go to the network configuration directory and create a configuration for eth0:
#cd /etc/sysconfig/network
#vi ifcfg-eth0
BOOTPROTO='static'
IPADDR='<<var_mgmt_ip_address>>'
NETMASK='<<var_mgmt_nw_netmask>>'
NETWORK=''
MTU=''
REMOTE_IPADDR=''
STARTMODE='auto'
USERCONTROL='no'
57. Add the default gateway:
#cd /etc/sysconfig/network
#vi routes
default <<var_mgmt_gateway_ip>> - -
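After creating the interface and route files, you can apply and verify the settings with a quick check such as the following (a sketch; adjust the interface name to match your system):
# systemctl restart network
# ip addr show eth0
# ip route show
# ping -c 3 <<var_mgmt_gateway_ip>>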
Note: Be sure that the system has access to the Internet or a SUSE update server to install the patches.
58. Verify /etc/hosts as shown in the example in Figure 136.
Verify /etc/hosts
59. If needed, set up a proxy service so that the appliance can reach the Internet (Figure 137).
#yast2
YaST: Proxy configuration
60. Enter the proxy server and port as shown in the sample configuration presented here. Select OK and then quit YaST to save the configuration (Figure 138).
YaST: Proxy configuration (continued)
61. Register the system with SUSE to receive the latest patches. For more information, refer to the SUSE knowledgebase article at https://www.suse.com/de-de/support/kb/doc?id=7016626.
The system must have access to the Internet to proceed with this step.
#SUSEConnect -r <<registration_code>> -e <<email_address>>
62. Update the system with the following command. Again, the system must have access to the Internet to proceed with this step.
#zypper update
63. Follow the on-screen instructions to complete the update process. Reboot the server and log in to the system again.
Post-installation OS configuration
To optimize the use of the SAP HANA database with SLES 12 or SLES for SAP 12 SP4, apply the settings by referring to SAP Note 2205917 - SAP HANA DB: Recommended OS settings for SLES 12 / SLES for SAP Applications 12.
Note: The following information is from SAP Note 2205917, mentioned previously, and is current as of the time this document was published. For the latest updates, please see the SAP Notes.
To customize the SLES 12 SP4 system for SAP HANA servers, follow these steps:
Turn off autoNUMA balancing, disable transparent hugepages, and configure C-states for lower latency
Edit /etc/default/grub, search for the line starting with “GRUB_CMDLINE_LINUX_DEFAULT” and append the following:
numa_balancing=disable transparent_hugepage=never intel_idle.max_cstate=1 processor.max_cstate=1
Save your changes and run:
grub2-mkconfig -o /boot/grub2/grub.cfg
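After the next reboot, you can confirm that the kernel parameters were picked up (a quick check):
# cat /proc/cmdline
# cat /sys/kernel/mm/transparent_hugepage/enabled
The kernel command line should show the appended parameters, and the transparent hugepage setting should report [never].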
Configure the energy performance bias, CPU frequency and voltage scaling, and kernel samepage merging (KSM).
Add the following commands to a script executed on system boot, such as /etc/init.d/boot.local:
cpupower set -b 0
cpupower frequency-set -g performance
echo 0 > /sys/kernel/mm/ksm/run
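You can verify these settings after a reboot with commands such as the following (a sketch):
# cpupower frequency-info
# cat /sys/kernel/mm/ksm/run
The governor should be reported as performance, and the KSM run value should be 0.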
Activate tuned and enable the tuned profile for SAP HANA
saptune daemon start
saptune solution apply HANA
Reboot the OS by issuing the reboot command.
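After the reboot, you can check that the saptune daemon is running and that the HANA solution is applied; the exact output differs between saptune versions (a quick check):
# saptune daemon status
# saptune solution list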
Preparing SAP HANA file systems
To prepare the file systems, you start by carving out logical volumes for the data, log, and HANA shared file systems. You then create the file systems, update /etc/fstab, and mount the volumes.
1. Use the following command to check for the available physical volume (PV), as shown in Figure 139.
#pvdisplay
Checking for the physical volume
2. Use the following command to check for the available volume group (VG) hanavg (Figure 140).
#vgdisplay
Checking for the volume group
3. Create logical volumes (LVs) for the data, log, and HANA shared file systems (Figure 141).
lvcreate --name <<lvname>> -I<<stripesize>> -L<<volume-size>> <<parent-vg-name>>
# lvcreate --name datalv -I256 -L9T hanavg
Note: The lvcreate command doesn’t require you to specify the stripe size when creating volumes on SSDs.
# lvcreate --name loglv -I256 -L512G hanavg
# lvcreate --name sharedlv -I256 -L3T hanavg
Creating logical volumes
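You can confirm the new logical volumes and their sizes before creating the file systems (a quick check):
# lvs hanavg
# lvdisplay /dev/hanavg/datalv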
4. Create file systems in the data, log, and HANA shared volumes (Figure 142).
#mkfs.xfs -f /dev/hanavg/datalv
#mkfs.xfs -f /dev/hanavg/loglv
#mkfs.xfs -f /dev/hanavg/sharedlv
Creating file systems
5. Create mount directories for the data, log, and HANA shared file systems:
#mkdir -p /hana/data
#mkdir -p /hana/log
#mkdir -p /hana/shared
6. Mount options vary from the default Linux settings for XFS for SAP HANA data and log volumes. The following is a sample /etc/fstab entry. Make sure that you use the same mount options for the data and log file systems as shown in the example.
/dev/mapper/hanavg-rootlv / ext3 defaults 0 0
UUID=fc2e52c4-e6f6-4e9a-9ad1-86aeb3369942 /boot ext3 defaults 1 2
/dev/mapper/hanavg-swapvol swap swap defaults 0 0
/dev/hanavg/datalv /hana/data xfs nobarrier,noatime,nodiratime,logbufs=8,logbsize=256k,async,swalloc,allocsize=131072k 1 2
/dev/hanavg/loglv /hana/log xfs nobarrier,noatime,nodiratime,logbufs=8,logbsize=256k,async,swalloc,allocsize=131072k 1 2
/dev/hanavg/sharedlv /hana/shared xfs defaults 1 2
This example illustrates the use of default settings for mount options when configuring SSDs.
/dev/hanavg/swapvol swap swap defaults 0 0
/dev/hanavg/rootlv / ext3 acl,user_xattr 1 1
UUID=912D-A3CB /boot/efi vfat umask=0002,utf8=true 0 0
/dev/hanavg/datalv /hana/data xfs defaults 1 2
/dev/hanavg/loglv /hana/log xfs defaults 1 2
/dev/hanavg/sharedlv /hana/shared xfs defaults 1 2
7. Use the following command to mount the file systems:
#mount -a
8. Use the df –h command to check the status of all mounted volumes (Figure 143).
Checking the status of mounted volumes
9. Change the directory permissions before you install SAP HANA. Use the chmod command on each SAP HANA node after the file systems are mounted:
#chmod -R 777 /hana/data
#chmod -R 777 /hana/log
#chmod -R 777 /hana/shared
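A quick check that the mount points exist and are writable before starting the SAP HANA installation:
# ls -ld /hana/data /hana/log /hana/shared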
10. The system is now ready for the SAP HANA installation.
11. You can now skip to the “Installing SAP HANA” section in this document.
Installing the RHEL for SAP Applications 7.6 operating system
This section shows the installation procedure for RHEL for SAP Applications 7.6 on local drives.
1. Mount and boot the ISO image from KVM console (Figure 144).
Booting to the ISO image
2. Select the language and keyboard layout you want to use (Figure 145).
Select your preferred language and keyboard layout
3. Click Continue. The central installation summary page appears. Here you need to configure various features.
4. Choose Localization > Date & Time. Choose the appropriate region and city. You will configure NTP later. Click Done (Figure 146).
Setting the date and time
5. Choose Security > Security Policy. Turn off the security policy (Figure 147).
Setting security policy
6. Select Software Selection. Retain the default selection: Minimal Install (Figure 148).
Software Selection page
7. Select KDUMP. Deselect the Enable Kdump option to disable it (Figure 149).
Disabling Kdump
8. Choose System > Installation Destination. Under the other storage options, select the option to manually configure the disk partition layout: “I will configure partitioning” (Figure 150).
Installation Destination page
9. Click Done. The Manual Partitioning page appears (Figure 151).
Manual Partitioning page
10. You will first create the /boot partition with the standard partition scheme. Change the default partition scheme from Logical Volume Manager (LVM) to Standard Partition (Figure 152).
Choosing the Standard Partition type
11. Click the + button and create a /boot partition with a size of 200 MiB. Then click “Add mount point” (Figure 153).
Entering mount-point and capacity information
12. Change the file system from the default XFS to ext3 (Figure 154).
Changing the file system type to ext3
13. Create a /boot/efi partition of 200 MiB. Click the + button, choose /boot/efi as the mount point, enter 200 MiB as the desired capacity, and click “Add mount point” (Figure 155).
Creating the EFI boot partition
After you define the /boot and /boot/efi partitions, you will assign the remaining disk space to the LVM as a volume group (VG) and then carve out a root volume, swap volume, and SAP HANA system-related volumes.
14. Click the + button, select “/” as the mount point, enter 100 GiB as the desired capacity, and click “Add mount point” (Figure 156).
Creating the root file system with 100 GiB
15. Click Modify to change the device type (Figure 157).
Preparing to change the device type to LVM
16. Change the device type from Standard Partition to LVM.
17. Change the name of the volume group from the default rhel to hanavg. Then click Save (Figure 158).
Configuring the volume group
18. Change the file system type to ext3 and change the name to rootvol. Click Update Settings (Figure 159).
Updating the file system type and volume group name
19. You will now create a 2-GiB swap volume. Click the + button, choose swap as the mount point, enter 2 GiB as the desired capacity, and click “Add mount point” (Figure 160).
Creating a swap volume
20. Change the device type to LVM, verify that hanavg is selected as the volume group, and change the name to swapvol (Figure 161).
Updating swap volume properties
21. Next you will create the SAP HANA system’s data, log, and shared volumes.
a. Click the + button, choose /hana/data as the mount point and 4.5 TiB as the desired capacity, and click “Add mount point” (Figure 162).
Creating the /hana/data logical volume
b. Change the device type to LVM, verify that hanavg is selected as the volume group, and change the name to datavol (Figure 163).
Updating /hana/data logical volume properties
c. Click the + button, choose /hana/log as the mount point and 512 GiB as the desired capacity, and click “Add mount point” (Figure 164).
Creating the /hana/log logical volume
d. Change the device type to LVM, verify that hanavg is selected as the volume group, and change the name to logvol (Figure 165).
Updating /hana/log logical volume properties
e. Click the + button, choose /hana/shared as the mount point and 1.5 TiB as the desired capacity, and click “Add mount point” (Figure 166).
Creating the /hana/shared logical volume
f. Change the device type to LVM, verify that hanavg is selected as the volume group, and change the name to sharedlv. Click Update Settings (Figure 167).
Updating /hana/shared logical volume properties
22. Click Done. A summary of changes appears. Click Accept Changes (Figure 168).
Summary of changes for manual partition configuration
23. On the Installation Summary page that appears, click Begin Installation (Figure 169).
Beginning the installation
24. As the installation progresses, set the root password (Figure 170).
Setting the root password
25. Enter and confirm the root password (Figure 171).
Entering and confirming the root user password
26. After the installation is complete, click Reboot (Figure 172).
Finishing the installation
Post-installation OS configuration
Follow the steps presented here to customize the server in preparation for SAP HANA installation.
Customize the host name
You can customize the host name.
1. Use the KVM console to log in to the installed system as the user root with the password <<var_sys_root_pw>>.
2. Update the /etc/hosts file with an entry matching the host name and IP address of the system (Figure 173).
Sample hosts file
3. Verify that the host name is set correctly.
The operating system must be configured so that the short name of the server is displayed with the command hostname -s, and the fully qualified host name is displayed with the command hostname -f. Figure 174 shows sample output.
Sample hostname command output
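For reference, the entry appended to /etc/hosts and the verification commands follow this general pattern. This is a sketch only: the host name cishana01.example.com and the IP address 192.168.76.10 are placeholders and must be replaced with your values, and hostnamectl is needed only if the host name was not already set during installation.
192.168.76.10   cishana01.example.com   cishana01
# hostnamectl set-hostname cishana01.example.com
# hostname -s
cishana01
# hostname -f
cishana01.example.com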
Configure the network
The Cisco UCS C480 M5 server comes with a pair of Cisco UCS VIC 1455 adapters. In addition to the administration and management networks, you can optionally have networks for backup, client access, etc. You can configure additional networks based on customer-specific requirements and use cases.
In RHEL 7, systemd and udev support a number of different network interface naming schemes. By default, fixed names are assigned based on firmware, topology, and location information: for instance, eno5.
With this naming convention, names stay fixed even if hardware is added or removed. However, the names are often more difficult to read than traditional kernel-native ethX names: for instance, eth0.
Another method for naming network interfaces, biosdevname, is also available with the installation.
1. Configure the boot parameters net.ifnames=0 and biosdevname=0 to disable both naming approaches and revert to the traditional kernel-native ethX network names.
2. You can disable IPv6 support at this time because this solution uses IPv4. You accomplish this by appending ipv6.disable=1 to GRUB_CMDLINE_LINUX as shown in Figure 175.
Sample grub file with CMDLINE parameter additions
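As a sketch only (the parameters already present on the line vary by installation, and the LVM entries shown here simply assume the hanavg volume group created earlier), the resulting GRUB_CMDLINE_LINUX entry in /etc/default/grub looks similar to the following:
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=hanavg/rootvol rd.lvm.lv=hanavg/swapvol rhgb quiet net.ifnames=0 biosdevname=0 ipv6.disable=1"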
3. Run the grub2-mkconfig command to regenerate the grub.cfg file (Figure 176):
# grub2-mkconfig -o /boot/grub2/grub.cfg
Updating the grub configuration
4. Reboot the system to make the changes take effect:
# reboot
5. After the reboot, use the KVM console to log in to the installed system as the user root with password <<var_sys_root_pw>>.
6. Assign <<var_mgmt_ip_address>> as the IP address and <<var_mgmt_ip_mask>> as the subnet mask for the available management interface (eth0 in this example).
7. Go to the network configuration directory and create a configuration for eth0 as shown in this example:
# cd /etc/sysconfig/network-scripts
# vi ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPV6INIT=no
USERCTL=no
NM_CONTROLLED=no
IPADDR=<<var_mgmt_ip_address>>
NETMASK=<<var_mgmt_ip_mask>>
8. Add the default gateway:
# vi /etc/sysconfig/network
NETWORKING=yes
GATEWAY=<<var_mgmt_gateway_ip>>
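To apply the new interface and gateway settings without another reboot, you can restart the legacy network service and verify the result. This is a sketch using the eth0 example above:
# systemctl restart network
# ip addr show eth0
# ip route show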
Configure the network time
Be sure that the time on all components used for SAP HANA is synchronized. Use the same NTP configuration on all systems.
# vi /etc/ntp.conf
server <<NTP-SERVER1 IP>>
server <<NTP-Server2 IP>>
# service ntpd stop
# ntpdate ntp.example.com
# service ntpd start
# chkconfig ntpd on
# chkconfig ntpdate on
Configure DNS
Configure DNS based on the local requirements. A sample configuration is shown here. Add the DNS IP address if it is required to access the Internet.
# vi /etc/resolv.conf
nameserver <<IP of DNS server 1>>
nameserver <<IP of DNS server 2>>
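A quick way to verify that name resolution works through the configured servers is shown below; the host name is a placeholder.
# getent hosts cishana01.example.com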
Red Hat system update and customization for SAP HANA
Before you can customize the OS for SAP HANA, you need to update the Red Hat system.
1. Update the Red Hat repository.
Before you can patch the Red Hat system, it must be registered and attached to a valid subscription; a freshly installed system does not include any repository update information. The following command registers the system and attaches it to an available subscription:
#subscription-manager register --auto-attach
Username: <<user name>>
Password: <<password>>
2. To list the repositories to which the subscription is attached, use the following command:
#yum repolist
Update the OS kernel and firmware packages only to the latest releases that appeared in RHEL 7.6 by setting the release version to 7.6:
#subscription-manager release --set=7.6
3. Apply the latest update for RHEL 7.6. Typically, the kernel is updated as well.
#yum -y update
4. Reboot the system to use the new kernel.
5. Install the base package group.
#yum -y groupinstall base
6. Install the additional required packages. Install the numactl package if the SAP HANA hardware configuration check tool (HWCCT) will be used.
#yum install gtk2 libicu xulrunner sudo tcsh libssh2 expect cairo graphviz iptraf-ng krb5-workstation krb5-libs libpng12 nfs-utils lm_sensors rsyslog openssl PackageKit-gtk3-module libcanberra-gtk2 libtool-ltdl xorg-x11-xauth numactl xfsprogs net-tools bind-utils screen compat-sap-c++-6 compat-sap-c++-5
7. Disable SELinux.
To help ensure that SELinux is fully disabled, modify the file /etc/selinux/config:
# sed -i 's/\(SELINUX=enforcing\|SELINUX=permissive\)/SELINUX=disabled/g' /etc/selinux/config
For compatibility reasons, four symbolic links are required:
#ln -s /usr/lib64/libssl.so.0.9.8e /usr/lib64/libssl.so.0.9.8
#ln -s /usr/lib64/libssl.so.1.0.1e /usr/lib64/libssl.so.1.0.1
#ln -s /usr/lib64/libcrypto.so.0.9.8e /usr/lib64/libcrypto.so.0.9.8
#ln -s /usr/lib64/libcrypto.so.1.0.1e /usr/lib64/libcrypto.so.1.0.1
8. Configure tuned to use the profile sap-hana. Run the following commands to install tuned profiles for SAP HANA:
#subscription-manager repos --enable="rhel-sap-hana-for-rhel-7-server-rpms" --enable="rhel-7-server-rpms"
# yum install tuned-profiles-sap-hana tuned
# systemctl start tuned
# systemctl enable tuned
# tuned-adm profile sap-hana
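To confirm that the profile was applied, query the active profile; the output should be similar to the following:
# tuned-adm active
Current active profile: sap-hana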
9. Disable the ABRT crash reporting and crash dump services:
# systemctl disable abrtd
# systemctl disable abrt-ccpp
# systemctl stop abrtd
# systemctl stop abrt-ccpp
a. Disable core file creation. To disable core dumps for all users, open /etc/security/limits.conf and add the following lines:
* soft core 0
* hard core 0
b. Enable the sapsys group to create an unlimited number of processes:
echo "@sapsys soft nproc unlimited" > /etc/security/limits.d/99-sapsys.conf
10. To avoid problems with the firewall during SAP HANA installation, you can disable the firewall completely with the following commands:
#systemctl stop firewalld
#systemctl disable firewalld
11. Configure the network time and date. Make sure that NTP and its utilities are installed and that chrony is disabled:
# yum -y install ntp ntpdate
# systemctl stop ntpd.service
# systemctl stop chronyd.service
# systemctl disable chronyd.service
12. Edit the /etc/ntp.conf file and make sure that the server lines reflect your NTP servers:
# grep ^server /etc/ntp.conf
server ntp.example.com
server ntp1.example.com
server ntp2.example.com
13. Force an update to the current time:
# ntpdate ntp.example.com
14. Enable and start the NTP daemon (NTPD) service:
# systemctl enable ntpd.service
# systemctl start ntpd.service
# systemctl restart systemd-timedated.service
15. Verify that the NTP service is enabled:
# systemctl list-unit-files | grep ntp
ntpd.service enabled
ntpdate.service disabled
16. The ntpdate script adjusts the time according to the NTP server every time the system comes up. This process occurs before the regular NTP service is started and helps ensure an exact system time even if the time deviation is too large to be compensated for by the NTP service. Add your NTP server to the step-tickers file and enable the ntpdate service:
# echo ntp.example.com >> /etc/ntp/step-tickers
# systemctl enable ntpdate.service
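After ntpd is running, you can verify peer reachability and synchronization status with the following commands (a sketch; the servers listed are those configured above):
# ntpq -p
# timedatectl status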
Tuning the OS for SAP HANA: Adapting SAP Notes
Use the following process to optimize the use of HANA database (HDB) with RHEL 7.6 for SAP.
1. Apply the SAP Notes settings as instructed. See SAP Note 2292690: SAP HANA DB: Recommended OS settings for RHEL 7.
2. Optionally, remove old kernels after the OS update:
# package-cleanup --oldkernels --count=1
3. Reboot the server after applying the SAP Notes settings:
#reboot
The relevant information from SAP Note 2292690 is reproduced here and was current at the time this document was published. For the latest updates, see the SAP Note itself.
To customize the RHEL 7.6 system for SAP HANA servers, follow these steps:
Turn off autoNUMA balancing
Add "kernel.numa_balancing = 0" to /etc/sysctl.d/sap_hana.conf (please create this file if it does not already exist) and reconfigure the kernel by running
# sysctl -p /etc/sysctl.d/sap_hana.conf
Additionally the "numad" daemon must be disabled:
# systemctl stop numad
# systemctl disable numad
Disable transparent hugepages and configure C-States for lower latency
Edit /etc/default/grub, search for the line starting with GRUB_CMDLINE_LINUX, and append the following parameters; then regenerate the grub configuration (# grub2-mkconfig -o /boot/grub2/grub.cfg) so that the change takes effect at the next reboot:
transparent_hugepage=never processor.max_cstate=1 intel_idle.max_cstate=1
Configure the energy performance bias, CPU frequency and voltage scaling, and kernel samepage merging (KSM)
Add the following commands to a script executed on system boot, such as /etc/rc.d/boot.local:
cpupower frequency-set -g performance
cpupower set -b 0
echo 0 > /sys/kernel/mm/ksm/run
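If your system does not already provide such a boot script, it might be created as follows. This is a sketch only, assuming /etc/rc.d/boot.local is executed at boot on your system; on many RHEL 7 systems /etc/rc.d/rc.local is used instead, and in either case the script must be executable.
#!/bin/bash
# /etc/rc.d/boot.local - boot-time performance settings for SAP HANA (sketch)
cpupower frequency-set -g performance      # set the CPU frequency governor to performance
cpupower set -b 0                          # set the energy performance bias to maximum performance
echo 0 > /sys/kernel/mm/ksm/run            # disable kernel samepage merging (KSM)
Make the script executable, for example with chmod 755 /etc/rc.d/boot.local.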
Activate tuned and enable the sap-hana tuned profile:
systemctl enable tuned
tuned-adm profile sap-hana
Reboot the OS by issuing the reboot command.
To optimize the network configuration, apply the settings by referring to SAP Note 2382421: Optimizing the network configuration on HANA and OS level.
For the SAP HANA installation, use the official SAP documentation, which describes the installation process with and without the SAP unified installer. See the SAP HANA Server Installation Guide for installation documentation; all other SAP HANA administration documentation is available in the SAP HANA Administration Guide.
Important SAP Notes
Read the following SAP Notes before you start the installation. These SAP Notes contain the latest information about the installation, as well as corrections to the installation documentation.
The latest SAP Notes can be found at SAP Notes and Knowledgebase.
SAP HANA IMDB notes
● SAP Note 1514967: SAP HANA: Central note
● SAP Note 2298750: SAP HANA Platform SPS 12 Release Note
● SAP Note 1523337: SAP HANA database: Central note
● SAP Note 2000003: FAQ: SAP HANA
● SAP Note 2380257: SAP HANA 2.0 Release Notes
● SAP Note 1780950: Connection problems due to host name resolution
● SAP Note 1755396: Released disaster tolerant (DT) solutions for SAP HANA with disk replication
● SAP Note 2519630: Check whether power save mode is active
● SAP Note 1681092: Support for multiple SAP HANA databases on a single SAP HANA appliance
● SAP Note 1514966: SAP HANA: Sizing the SAP HANA database
● SAP Note 1637145: SAP BW on HANA: Sizing the SAP HANA database
● SAP Note 1793345: Sizing for Suite on HANA
● SAP Note 2399079: Elimination of hdbparam in HANA 2
● SAP Note 2186744: FAQ: SAP HANA parameters
Linux notes
● SAP Note 2292690: SAP HANA DB: Recommended OS settings for RHEL 7
● SAP Note 2009879: SAP HANA guidelines for the RHEL operating system
● SAP Note 2205917: SAP HANA DB: Recommended OS settings for SLES 12 and SLES for SAP Applications 12
● SAP Note 1944799: SAP HANA guidelines for the SLES operating system
● SAP Note 2235581: SAP HANA: Supported operating systems
● SAP Note 1731000: Non-recommended configuration changes
● SAP Note 1557506: Linux paging improvements
● SAP Note 1740136: SAP HANA: Wrong mount option may lead to corrupt persistency
● SAP Note 2382421: Optimizing the network configuration on HANA and OS level
Third-party software notes
● SAP Note 1730928: Using external software in an SAP HANA appliance
● SAP Note 1730929: Using external tools in an SAP HANA appliance
● SAP Note 1730930: Using antivirus software in an SAP HANA appliance
● SAP Note 1730932: Using backup tools with Backint for SAP HANA
SAP HANA virtualization notes
● SAP Note 1788665: SAP HANA running on VMware vSphere virtual machines
● SAP Note 2652670: SAP HANA VM on VMware vSphere
● SAP Note 2161991: VMware vSphere configuration guidelines
● SAP Note 2393917: SAP HANA on VMware vSphere 6.5 and 6.7 in production
● SAP Note 2015392: VMware recommendations for latency-sensitive SAP applications
Performing SAP HANA post-installation check
For an SAP HANA system installed with <SID> set to CLX and the system number <nr> set to 00, log in as the <sid>adm user (clxadm) and run the commands presented here.
Checking SAP HANA services
clxadm@cishana01:/usr/sap/CLX/HDB00> /usr/sap/hostctrl/exe/sapcontrol -nr 00 -function GetProcessList
19.02.2019 11:29:27
GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GREEN, Running, 2019 02 13 08:51:49, 866:37:38, 41691
hdbcompileserver, HDB Compileserver, GREEN, Running, 2019 02 13 08:51:56, 866:37:31, 41837
hdbindexserver, HDB Indexserver, GREEN, Running, 2019 02 13 08:52:00, 866:37:27, 41863
hdbnameserver, HDB Nameserver, GREEN, Running, 2019 02 13 08:51:50, 866:37:37, 41711
hdbpreprocessor, HDB Preprocessor, GREEN, Running, 2019 02 13 08:51:56, 866:37:31, 41839
hdbwebdispatcher, HDB Web Dispatcher, GREEN, Running, 2019 02 13 08:53:11, 866:36:16, 42431
hdbxsengine, HDB XSEngine, GREEN, Running, 2019 02 13 08:52:00, 866:37:27, 41865
clxadm@cishana01-CLX:/usr/sap/CLX/HDB00>
Checking SAP HANA information
clxadm@cishana01:/usr/sap/CLX/HDB00> HDB info
USER PID PPID %CPU VSZ RSS COMMAND
clxadm 59578 59577 0.0 108472 1944 -sh
clxadm 59663 59578 0.0 114080 2020 \_ /bin/sh /usr/sap/CLX/HDB00/HDB info
clxadm 59692 59663 0.0 118048 1596 \_ ps fx -U clxadm -o user,pid,ppid,pcpu,vsz,rss,args
clxadm 41683 1 0.0 22188 1640 sapstart pf=/hana/shared/CLX/profile/CLX_HDB00_cishana01-CLX
clxadm 41691 41683 0.0 582888 290988 \_ /usr/sap/CLX/HDB00/cishana01-CLX/trace/hdb.sapCLX_HDB00 -d -nw -f /usr/sap/CLX/HDB00/cishana01-CLX/daemon.ini
clxadm 41711 41691 0.3 54292416 2058900 \_hdbnameserver
clxadm 41837 41691 0.1 4278472 1243356 \_hdbcompileserver
clxadm 41839 41691 0.2 11773976 8262724 \_hdbpreprocessor
clxadm 41863 41691 6.2 22143172 18184604 \_hdbindexserver
clxadm 41865 41691 0.5 8802064 2446612 \_hdbxsengine
clxadm 42431 41691 0.1 4352988 823220 \_hdbwebdispatcher
clxadm 41607 1 0.0 497576 23232 /usr/sap/CLX/HDB00/exe/sapstartsrv pf=/hana/shared/CLX/profile/CLX_HDB00_cishana01-CLX -D -u clxadm
clxadm@cishana01-CLX:/usr/sap/CLX/HDB00>
Tuning the SAP HANA performance parameters
After SAP HANA is installed, tune the parameters as shown in Table 28 and explained in the following SAP Notes.
Table 28. Tuning parameters
Parameters | Data file system | Log file system
max_parallel_io_requests | 256 | Default
async_read_submit | on | on
async_write_submit_blocks | all | all
async_write_submit_active | auto | on
● SAP Note 2399079: Elimination of hdbparam in HANA 2
● SAP Note 2186744: FAQ: SAP HANA parameters
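Because the hdbparam tool was eliminated in SAP HANA 2.0 (SAP Note 2399079), these parameters are maintained in the fileio section of the SAP HANA global.ini file. The following snippet is only a sketch of how the values in Table 28 might be expressed; verify the exact parameter names and syntax against SAP Notes 2399079 and 2186744 before applying them.
[fileio]
max_parallel_io_requests[data] = 256
async_read_submit[data] = on
async_write_submit_blocks[data] = all
async_write_submit_active[data] = auto
async_read_submit[log] = on
async_write_submit_blocks[log] = all
async_write_submit_active[log] = on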
Monitoring SAP HANA
Three simple command-line methods are available to check the status of a running SAP HANA database.
saphostagent
1. Start a shell and connect to the SAP HANA system as the root user.
cishana01:~ # /usr/sap/hostctrl/exe/saphostctrl -function ListDatabases
Instance name: HDB00, Hostname: cishana01, Vendor: HDB, Type: hdb, Release: 1.00.60.0379371
Database name: HAN, Status: Error
cishana01:~ #
2. Get a list of installed HANA instances or databases.
cishana01:~ # /usr/sap/hostctrl/exe/saphostctrl -function ListInstances
Inst Info : HAN - 00 - cishana01 - 740, patch 17, changelist 1413428
cishana01:~ #
3. Using this information (system ID [SID] and system number), you can use sapcontrol to gather more information about the running HANA database.
sapcontrol
1. In a shell, use the sapcontrol function GetProcessList to display a list of running HANA OS processes.
cishana01:~ # /usr/sap/hostctrl/exe/sapcontrol -nr 00 -function GetProcessList
19.02.2019 14:54:45
GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GREEN, Running, 2019 02 15 11:57:45, 98:57:00, 8545
hdbnameserver, HDB Nameserver, GREEN, Running, 2019 02 15 12:05:27, 98:49:18, 11579
hdbpreprocessor, HDB Preprocessor, GREEN, Running, 2019 02 15 12:05:27, 98:49:18, 11580
hdbindexserver, HDB Indexserver, GREEN, Running, 2019 02 15 12:05:27, 98:49:18, 11581
hdbstatisticsserver, HDB Statisticsserver, GREEN, Running, 2019 02 15 12:05:27, 98:49:18, 11582
hdbxsengine, HDB XSEngine, GREEN, Running, 2019 02 15 12:05:27, 98:49:18, 11583
sapwebdisp_hdb, SAP WebDispatcher, GREEN, Running, 2019 02 15 12:05:27, 98:49:18, 11584
hdbcompileserver, HDB Compileserver, GREEN, Running, 2019 02 15 12:05:27, 98:49:18, 11585
You see processes such as hdbdaemon, hdbnameserver, and hdbindexserver that belong to a running HANA database.
2. You can also get a system instance list, which is more useful for a scale-out appliance.
cishana01:~ # /usr/sap/hostctrl/exe/sapcontrol -nr 00 -function GetSystemInstanceList
19.07.2019 15:03:12
GetSystemInstanceList
OK
hostname, instanceNr, httpPort, httpsPort, startPriority, features, dispstatus
cishana01, 0, 50013, 0, 0.3, HDB, GREEN
HDB info
Another important tool is the HDB command, which must be issued by the <SID>adm user, the OS user who owns the SAP HANA database.
As the root user on the HANA appliance, enter the following command:
cishana01:~ # su - hanadm
cishana01:/usr/sap/HAN/HDB00> HDB info
USER PID PPID %CPU VSZ RSS COMMAND
hanadm 61208 61207 1.6 13840 2696 -sh
hanadm 61293 61208 0.0 11484 1632 \_ /bin/sh /usr/sap/HAN/HDB00/HDB info
hanadm 61316 61293 0.0 4904 872 \_ ps fx -U hanadm -o user,pid,ppid,pcpu,vsz,rss,args
hanadm 8532 1 0.0 20048 1468 sapstart pf=/hana/shared/HAN/profile/HAN_HDB00_cishana01
hanadm 8545 8532 1.5 811036 290140 \_ /usr/sap/HAN/HDB00/cishana01/trace/hdb.sapHAN_HDB00 -d -nw -f /usr/sap/HAN/HDB00/cis
hanadm 11579 8545 6.6 16616748 1789920 \_ hdbnameserver
hanadm 11580 8545 1.5 5675392 371984 \_ hdbpreprocessor
hanadm 11581 8545 10.9 18908436 6632128 \_ hdbindexserver
hanadm 11582 8545 8.7 17928872 3833184 \_ hdbstatisticsserver
hanadm 11583 8545 7.4 17946280 1872380 \_ hdbxsengine
hanadm 11584 8545 0.0 203396 16000 \_ sapwebdisp_hdb pf=/usr/sap/HAN/HDB00/cishana01/wdisp/sapwebdisp.pfl -f /usr/sap/H
hanadm 11585 8545 1.5 15941688 475708 \_ hdbcompileserver
hanadm 8368 1 0.0 216268 75072 /usr/sap/HAN/HDB00/exe/sapstartsrv pf=/hana/shared/HAN/profile/HAN_HDB00_cishana01 -D -u
This command produces output similar to that from the sapcontrol GetProcessList function, with a bit more information about the process hierarchy.
Downloading the latest revisions
To download revisions, you need to connect to the service marketplace and select the software download area to search for available patches.
Refer to SAP HANA Master Guide for update procedures for SAP HANA.
Intel Optane DC PMM configuration for SAP HANA
This section discusses how to configure the Intel Optane DC PMM for SAP HANA solution.
Use the following steps to configure Intel Optane DC PMM for SAP HANA.
1. Install tools to manage Intel Optane DC PMM.
2. Create a goal to configure Intel Optane DC PMM for the App Direct mode. App Direct mode will create a persistent memory region for each CPU.
3. Create a namespace for each persistent memory region, which creates a block device in file system direct access (fsdax) mode.
4. Create an XFS file system on each persistent memory block device and mount the file system on the SAP HANA server.
5. Set the SAP HANA base path to use persistent memory.
Installing tools on the SAP HANA server
Follow the GitHub links here to install the latest versions of the ipmctl and ndctl utilities on the SAP HANA Linux server:
● For ipmctl utility: https://github.com/intel/ipmctl
● For ndctl utility library: https://github.com/pmem/ndctl
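On RHEL 7.6, the ndctl utility is typically available from the standard repositories, while ipmctl may need to be installed from an additional repository (for example, EPEL) or built from the GitHub sources linked above. The following is a sketch only:
# yum -y install ndctl
# yum -y install ipmctl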
Configuring Intel Optane DC PMM
The show -dimm command displays the PMMs discovered in the system and verifies that software can communicate with them. Among other information, this command outputs each DIMM ID, capacity, health state, and firmware version.
Here is an example of output from the ipmctl show -dimm command:
ipmctl show -dimm
DimmID | Capacity | HealthState | ActionRequired | LockState | FWVersion
==============================================================================
0x0021 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x0001 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x0011 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x0121 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x0101 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x0111 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x1021 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x1001 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x1011 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x1121 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x1101 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x1111 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x2021 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x2001 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x2011 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x2121 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x2101 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x2111 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x3021 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x3001 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x3011 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x3121 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x3101 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
0x3111 | 502.5 GiB | Healthy | 0 | Disabled | 01.02.00.5367
Intel Optane DC PMM goal creation
The default create -goal command creates an interleaved region configured for App Direct mode. Here is an example of output from the ipmctl create -goal command:
ipmctl create -goal
The following configuration will be applied:
SocketID | DimmID | MemorySize | AppDirect1Size | AppDirect2Size
==================================================================
0x0000 | 0x0021 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0000 | 0x0001 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0000 | 0x0011 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0000 | 0x0121 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0000 | 0x0101 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0000 | 0x0111 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0001 | 0x1021 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0001 | 0x1001 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0001 | 0x1011 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0001 | 0x1121 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0001 | 0x1101 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0001 | 0x1111 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0002 | 0x2021 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0002 | 0x2001 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0002 | 0x2011 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0002 | 0x2121 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0002 | 0x2101 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0002 | 0x2111 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0003 | 0x3021 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0003 | 0x3001 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0003 | 0x3011 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0003 | 0x3121 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0003 | 0x3101 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0003 | 0x3111 | 0.0 GiB | 502.0 GiB | 0.0 GiB
Do you want to continue? [y/n] y
Created following region configuration goal
SocketID | DimmID | MemorySize | AppDirect1Size | AppDirect2Size
==================================================================
0x0000 | 0x0021 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0000 | 0x0001 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0000 | 0x0011 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0000 | 0x0121 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0000 | 0x0101 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0000 | 0x0111 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0001 | 0x1021 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0001 | 0x1001 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0001 | 0x1011 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0001 | 0x1121 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0001 | 0x1101 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0001 | 0x1111 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0002 | 0x2021 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0002 | 0x2001 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0002 | 0x2011 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0002 | 0x2121 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0002 | 0x2101 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0002 | 0x2111 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0003 | 0x3021 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0003 | 0x3001 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0003 | 0x3011 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0003 | 0x3121 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0003 | 0x3101 | 0.0 GiB | 502.0 GiB | 0.0 GiB
0x0003 | 0x3111 | 0.0 GiB | 502.0 GiB | 0.0 GiB
A reboot is required to process new memory allocation goals.
Reboot the server to apply the new memory allocation goals.
Intel Optane DC PMM show -region command
Use the command ipmctl show -region to see the regions that were created. Here is an example of output from the ipmctl show -region command.
ipmctl show -region
SocketID | ISetID | PersistentMemoryType | Capacity | FreeCapacity | HealthState
================================================================================================
0x0000 | 0xf3c67f48e25d2ccc | AppDirect | 3012.0 GiB | 3012.0 GiB | Healthy
0x0001 | 0x03447f48e45c2ccc | AppDirect | 3012.0 GiB | 3012.0 GiB | Healthy
0x0002 | 0x4fa67f48cf692ccc | AppDirect | 3012.0 GiB | 3012.0 GiB | Healthy
0x0003 | 0xe0327f48d25d2ccc | AppDirect | 3012.0 GiB | 3012.0 GiB | Healthy
Intel Optane DC PMM default namespace mode
The fsdax mode is the default namespace mode. If you specify ndctl create-namespace with no options, a block device (/dev/pmemX[.Y]) is created that supports the direct-access (DAX) capabilities of Linux file systems. DAX removes the page cache from the I/O path and allows mmap(2) to establish direct mappings to persistent memory media.
In this mode, applications can either directly load and access storage using a persistent memory region or continue to use a storage API, thus requiring no changes to the application.
Intel Optane DC PMM create-namespace command for region
Use the ndctl create-namespace command to create a namespace for each region; run the command once per region (one per CPU in this server). Here is an example of the output from running the ndctl create-namespace command four times on a server with four CPUs.
ndctl create-namespace
{
"dev":"namespace3.0",
"mode":"fsdax",
"map":"dev",
"size":"2964.94 GiB (3183.58 GB)",
"uuid":"43002f2c-b37c-4cec-9474-d3d8b1223e65",
"raw_uuid":"7df74ccf-1032-4c12-905f-cd9e5e1ac1be",
"sector_size":512,
"blockdev":"pmem3",
"numa_node":3
}
ndctl create-namespace
{
"dev":"namespace2.0",
"mode":"fsdax",
"map":"dev",
"size":"2964.94 GiB (3183.58 GB)",
"uuid":"45e0fc9e-149c-4616-b308-eb10eecd5e19",
"raw_uuid":"6242e069-6637-4d75-a364-e2049fdf9bd7",
"sector_size":512,
"blockdev":"pmem2",
"numa_node":2
}
ndctl create-namespace
{
"dev":"namespace1.0",
"mode":"fsdax",
"map":"dev",
"size":"2964.94 GiB (3183.58 GB)",
"uuid":"9375a814-ac10-498a-9e73-3e28e7242519",
"raw_uuid":"4f6f69ce-6aaa-4076-be81-ab7504f43b58",
"sector_size":512,
"blockdev":"pmem1",
"numa_node":1
}
ndctl create-namespace
{
"dev":"namespace0.0",
"mode":"fsdax",
"map":"dev",
"size":"2964.94 GiB (3183.58 GB)",
"uuid":"83425d72-c451-4eb7-b450-8dc3f4b1978a",
"raw_uuid":"d8633063-012f-4b0b-be95-29ed455abcf8",
"sector_size":512,
"blockdev":"pmem0",
"numa_node":0
}
Intel Optane DC PMM namespace
Use the ndctl list command to list all the active namespaces. Here is an example of output from the ndctl list command.
ndctl list
[
{
"dev":"namespace3.0",
"mode":"fsdax",
"map":"dev",
"size":3183575302144,
"uuid":"43002f2c-b37c-4cec-9474-d3d8b1223e65",
"blockdev":"pmem3"
},
{
"dev":"namespace2.0",
"mode":"fsdax",
"map":"dev",
"size":3183575302144,
"uuid":"45e0fc9e-149c-4616-b308-eb10eecd5e19",
"blockdev":"pmem2"
},
{
"dev":"namespace1.0",
"mode":"fsdax",
"map":"dev",
"size":3183575302144,
"uuid":"9375a814-ac10-498a-9e73-3e28e7242519",
"blockdev":"pmem1"
},
{
"dev":"namespace0.0",
"mode":"fsdax",
"map":"dev",
"size":3183575302144,
"uuid":"83425d72-c451-4eb7-b450-8dc3f4b1978a",
"blockdev":"pmem0"
}
]
Creating the file system and mounting the PMMs
Use the following commands to create an XFS file system on each persistent memory block device and mount it. This example uses a server with four CPUs and therefore has four regions and four pmem devices.
mkfs -t xfs -f /dev/pmem0
mkfs -t xfs -f /dev/pmem1
mkfs -t xfs -f /dev/pmem2
mkfs -t xfs -f /dev/pmem3
mkdir -p /hana/pmem/nvmem0
mkdir -p /hana/pmem/nvmem1
mkdir -p /hana/pmem/nvmem2
mkdir -p /hana/pmem/nvmem3
mount -t xfs -o dax /dev/pmem0 /hana/pmem/nvmem0
mount -t xfs -o dax /dev/pmem1 /hana/pmem/nvmem1
mount -t xfs -o dax /dev/pmem2 /hana/pmem/nvmem2
mount -t xfs -o dax /dev/pmem3 /hana/pmem/nvmem3
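These mounts are not persistent across reboots. One possible approach, assuming the pmem device names remain stable, is to add entries such as the following to /etc/fstab; using UUID= identifiers instead of device names is more robust.
/dev/pmem0  /hana/pmem/nvmem0  xfs  dax  0 0
/dev/pmem1  /hana/pmem/nvmem1  xfs  dax  0 0
/dev/pmem2  /hana/pmem/nvmem2  xfs  dax  0 0
/dev/pmem3  /hana/pmem/nvmem3  xfs  dax  0 0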
Configuring SAP HANA base path to use persistent memory
The directory that SAP HANA uses as its base path must point to the XFS file system. Define the base path location with the configuration parameter basepath_persistent_memory_volumes in the persistence section of the SAP HANA global.ini file. This section can contain multiple locations separated by semicolons. Changes to this parameter require a restart of SAP HANA services.
[persistence]
basepath_datavolumes = /hana/data/AEP
basepath_logvolumes = /hana/log/AEP
basepath_persistent_memory_volumes = /hana/pmem/nvmem0;/hana/pmem/nvmem1;/hana/pmem/nvmem2;/hana/pmem/nvmem3
At startup, SAP HANA checks for a DAX-enabled file system at the location defined in the base path. After SAP HANA verifies that the file system is DAX enabled, all tables use persistent memory by default. Savepoints help ensure that the data in persistent memory remains consistent with the data and log volumes of the persistence layer.
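Before restarting the SAP HANA services, you can confirm that the dax mount option is active on each persistent memory file system. The output will look similar to the following (shown for the first device only):
# mount | grep /hana/pmem
/dev/pmem0 on /hana/pmem/nvmem0 type xfs (rw,relatime,attr2,dax,inode64,noquota)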
Cisco UCS M5 servers with second-generation Intel Xeon Scalable processors and Intel Optane DC persistent memory combined with DRAM revolutionize the SAP HANA landscape by helping organizations achieve lower overall TCO, ensure business continuity, and increase the memory capacities of their SAP HANA deployments. Intel Optane DC persistent memory can transform the traditional SAP HANA data-tier infrastructure and revolutionize data processing and storage. Together, these technologies give organizations faster access to more data than ever before and provide better performance for advanced data processing technologies.
● For information about SAP HANA, visit: https://hana.sap.com/abouthana.html.
● For information about certified and supported SAP HANA hardware, refer to: https://global.sap.com/community/ebook/2014-09-02-hana-hardware/enEN/index.html.