Overview

Cisco Intersight provides an installation wizard to install, configure, and deploy Cisco HyperFlex clusters: HyperFlex Edge, FI-attached, and HyperFlex Datacenter without Fabric Interconnect (DC-No-FI). The wizard constructs a pre-configuration definition of your cluster called a HyperFlex Cluster Profile. This definition is a logical representation of the HyperFlex nodes in your HyperFlex cluster and includes:

  • Security—credentials for HyperFlex cluster such as controller VM password, Hypervisor username, and password.

  • Configuration—server requirements, firmware, etc.

  • Connectivity—upstream network, virtual network, etc.

HyperFlex Cluster Profiles are built on policies, which are administrator-defined sets of rules and operating characteristics such as node identity, interfaces, and network connectivity. Every active node in your HyperFlex cluster must be associated with a HyperFlex Cluster Profile.

After gathering the node configuration settings to build the HyperFlex Cluster Profile, the installation wizard validates and deploys the HyperFlex Cluster Profile on your HyperFlex cluster. You can clone a successfully deployed HyperFlex Cluster Profile, and then use that copy as the basis to create a new cluster. For instructions on cloning HyperFlex cluster profiles, see Cloning HyperFlex Cluster Profiles.
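Cluster profiles can also be inspected and managed programmatically through the Intersight REST API. The following Python sketch lists existing HyperFlex Cluster Profiles. It assumes the hyperflex/ClusterProfiles resource path of the Intersight API schema, and the authentication helper is a placeholder: Intersight requires HTTP-signature (API key) authentication, which is elided here and is handled automatically by the Intersight SDKs.

    import requests

    BASE_URL = "https://intersight.com/api/v1"  # Intersight SaaS endpoint

    def get_auth_headers(method: str, path: str) -> dict:
        # Placeholder: Intersight requires HTTP-signature (API key)
        # authentication; generating the Authorization/Digest headers is
        # elided here. The Intersight Python SDK handles this for you.
        raise NotImplementedError("supply real HTTP-signature headers")

    def list_hx_cluster_profiles() -> list:
        # Return the HyperFlex Cluster Profiles visible to this account.
        path = "/hyperflex/ClusterProfiles"
        resp = requests.get(
            BASE_URL + path,
            headers=get_auth_headers("GET", path),
            params={"$select": "Name,ConfigContext"},  # name + config state
        )
        resp.raise_for_status()
        return resp.json().get("Results", [])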

Non Pre-configured Cisco HyperFlex Systems


Note


Beginning in April 2024, HyperFlex servers ship from the factory without VMware ESXi preinstalled. The ESXi hypervisor must be installed before starting the HyperFlex installation. For instructions on manually preparing factory-shipped servers for the Cisco HyperFlex install, see Cisco HyperFlex Systems Installation Guide for VMware ESXi.

HyperFlex Systems Supported Releases

Intersight supports the following HyperFlex Data Platform versions for HyperFlex installation:

  • 5.5(1a), 5.5(2a)

  • 5.0(2e), 5.0(2g)

  • 6.0(1b)


Note


  • HXDP versions 5.0(2a), 5.0(2b), 5.0(2c), 5.0(2d), 4.5(2a), 4.5(2b), 4.5(2c), 4.5(2d), and 4.5(2e) are still supported for cluster expansion only.

  • Upgrades from HXDP 4.0.2x are supported provided the ESXi version is compatible with 4.5(2x).
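
As an illustrative aid only (the authoritative lists are the ones in this section and in HyperFlex Systems Unsupported Releases below), a small helper can encode which HXDP versions Intersight accepts for a fresh installation versus cluster expansion only. The version strings below are copied verbatim from this document.

    # Illustrative summary of the HXDP support matrices in this section.
    # Versions are matched as literal strings; no version parsing is done.

    INSTALL_SUPPORTED = {"5.5(1a)", "5.5(2a)", "5.0(2e)", "5.0(2g)", "6.0(1b)"}

    EXPANSION_ONLY = {
        "5.0(2a)", "5.0(2b)", "5.0(2c)", "5.0(2d)",
        "4.5(2a)", "4.5(2b)", "4.5(2c)", "4.5(2d)", "4.5(2e)",
    }

    def classify_hxdp_version(version: str) -> str:
        # Classify an HXDP release per the support lists in this section.
        if version in INSTALL_SUPPORTED:
            return "supported for Intersight install and upgrade"
        if version in EXPANSION_ONLY:
            return "supported for cluster expansion only"
        return "not supported by Intersight (see Unsupported Releases)"

    print(classify_hxdp_version("5.0(2b)"))  # supported for cluster expansion only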


Release: 6.0(1b)

Description: Feature release

Features added:

  • HXDP and Intersight license validation for HX upgrades
  • Intersight-based kernel migration through the HXDP upgrade workflow
  • Credential validation check for upgrade
  • Server firmware upgrade
  • Support for ESXi 8.0 U2
  • Intersight HX install for HXDP 6.0(1b) with the new kernel-based CVM
  • Intersight HX cluster expansion for HXDP 6.0(1b) with the new kernel-based CVM
  • Health Checks support for 6.0(1b)
  • Health Checks support on 6.0(1b) stretch clusters
  • Support for software encryption clusters in 6.0(1b)

References:

  • Supported Models/Versions for HyperFlex Fabric Interconnect Cluster Deployment
  • Supported Models/Versions for HyperFlex Edge Cluster Deployment
  • HyperFlex Datacenter Without Fabric Interconnect Cluster Deployment

Release: 5.5(2a)

Description: Feature release with ESXi 8.0 U2 and ESXi 8.0 U3 support

Features added:

  • Support for ESXi 8.0 U2 and ESXi 8.0 U3
  • Support for server firmware 4.2(3l) and 4.2(3m)

References:

  • Supported Models/Versions for HyperFlex Fabric Interconnect Cluster Deployment
  • Supported Models/Versions for HyperFlex Edge Cluster Deployment
  • HyperFlex Datacenter Without Fabric Interconnect Cluster Deployment

Release: 5.5(1a)

Description: Feature release with ESXi 8.0 U1 support

Features added:

  • VMware ESXi 8.0 U1 support
  • UCS FI 6536 support

References:

  • Supported Models/Versions for HyperFlex Fabric Interconnect Cluster Deployment
  • Supported Models/Versions for HyperFlex Edge Cluster Deployment
  • HyperFlex Datacenter Without Fabric Interconnect Cluster Deployment

Release: 5.0(2g)

Description: Maintenance release with bug fixes.

Features added: N/A

References:

  • Supported Models/Versions for HyperFlex Fabric Interconnect Cluster Deployment
  • Supported Models/Versions for HyperFlex Edge Cluster Deployment
  • HyperFlex Datacenter Without Fabric Interconnect Cluster Deployment

Release: 5.0(2e)

Description: Maintenance release with bug fixes.

Features added: N/A

References:

  • Supported Models/Versions for HyperFlex Fabric Interconnect Cluster Deployment
  • Supported Models/Versions for HyperFlex Edge Cluster Deployment
  • HyperFlex Datacenter Without Fabric Interconnect Cluster Deployment

HyperFlex Systems Unsupported Releases

HyperFlex cluster deployments and upgrades are no longer supported in Intersight for the following HyperFlex Data Platform releases:

  • 5.0(2a), 5.0(2b), 5.0(2c), 5.0(2d)

  • 4.5(2a), 4.5(2b), 4.5(2c), 4.5(2d), 4.5(2e)

  • 4.0(2a), 4.0(2b), 4.0(2c), 4.0(2d), 4.0(2e), 4.0(2f)

HyperFlex Data Platform releases that have reached End of Support:

  • 5.0(1a), 5.0(1b), 5.0(1c)

  • 4.5(1a)

  • 4.0(1a), 4.0(1b)


Note


  • Upgrades from HXDP 4.0.2x are supported provided the ESXi version is compatible with 4.5(2x). You can upgrade both at the same time; for example, upgrade HXDP 4.0.2x + ESXi 6.0 to HXDP 4.5 + ESXi 6.5.

  • HXDP versions 5.0(2a), 5.0(2b), 5.0(2c), 5.0(2d), 4.5(2a), 4.5(2b), 4.5(2c), 4.5(2d), and 4.5(2e) are still supported for cluster expansion only.


HyperFlex Data Platform Release Feature Matrix [Unsupported Releases]

Release: 5.0(2d)

Description: Maintenance release with bug fixes.

Features added: N/A

References:

  • Supported Models/Versions for HyperFlex Fabric Interconnect Cluster Deployment
  • Supported Models/Versions for HyperFlex Edge Cluster Deployment
  • HyperFlex Datacenter Without Fabric Interconnect Cluster Deployment

Release: 5.0(2c)

Description: Maintenance release with bug fixes.

Features added: N/A

References:

  • Supported Models/Versions for HyperFlex Fabric Interconnect Cluster Deployment
  • Supported Models/Versions for HyperFlex Edge Cluster Deployment
  • HyperFlex Datacenter Without Fabric Interconnect Cluster Deployment

Release: 5.0(2b)

Features added:

  • B200 M6 compute node support: Support for cluster expansion with B200 M6 compute nodes for Datacenter and DC-No-FI clusters.

References:

  • Supported Models/Versions for HyperFlex Fabric Interconnect Cluster Deployment
  • Supported Models/Versions for HyperFlex Edge Cluster Deployment
  • HyperFlex Datacenter Without Fabric Interconnect Cluster Deployment

Release: 5.0(2a)

Features added:

  • HyperFlex HX245C/225C M6 All Flash/Hybrid Server nodes: Added support for HyperFlex HX245C/225C M6 All Flash/Hybrid Server nodes.

Reference: Supported Models/Versions for HyperFlex Fabric Interconnect Cluster Deployment

Release: 5.0(1c)

Description: Maintenance release with bug fixes.

Features added: N/A

Release: 5.0(1b)

Features added:

  • HyperFlex Software Encryption: Offers file-level, end-to-end data encryption to provide confidentiality of data at rest against theft of storage media. Intersight manages the keys natively with Intersight Key Manager, which increases both security and simplicity by eliminating the overhead of key management. (Reference: HyperFlex Software Encryption)

Release: 5.0(1a)

Features added:

  • Cisco UCS/HyperFlex M6 Node Support: Supported models are Cisco UCS 220c M6 and Cisco UCS 240c M6 HyperFlex nodes. You can use Intersight to install and upgrade a HyperFlex converged node on HyperFlex-Series M6 servers and BOM-compliant UCS C-Series M6 servers.

  • HyperFlex Cluster Deployment: You can use HyperFlex-Series M6 servers, as well as BOM-compliant UCS C-Series M6 servers, to deploy HyperFlex clusters. HyperFlex installation is no longer restricted to Cisco UCS C-Series nodes manufactured as HyperFlex nodes. (Reference: Cisco HyperFlex Cluster Deployment)

  • HyperFlex Server Personality: With HyperFlex server personalities, you can use a BOM-compliant C-Series M6 server to deploy a HyperFlex cluster. HyperFlex-Series M6 servers ship with the HyperFlex Server Personality configured; a BOM-compliant UCS C-Series M6 server in the deployed HyperFlex cluster will also have a HyperFlex Server Personality configured. (Reference: HyperFlex Server Personality)

  • Backup and Restore for HyperFlex Clusters using Intersight: Adds the ability to configure different retention counts on the source and target clusters, the ability to use HyperFlex Fabric Interconnect clusters as source clusters, and backup dashboard enhancements including error reporting for failed snapshots and replication, drill-down options, and a consolidated view of failed backups and restores in the last 24 hours. (Reference: N:1 Replication for Cisco HyperFlex Clusters)

  • Secure Boot: Secure boot is enabled by default.

Release: 4.5(2e)

Features added:

  • HyperFlex Data Platform 4.5(2e): Added HyperFlex Data Platform 4.5(2e) support for HyperFlex cluster deployment and upgrade.

References:

  • Supported Models/Versions for HyperFlex Fabric Interconnect Cluster Deployment
  • Supported Models/Versions for HyperFlex Edge Cluster Deployment
  • HyperFlex Datacenter Without Fabric Interconnect Cluster Deployment

Release: 4.5(2d)

Features added:

  • HyperFlex Data Platform 4.5(2d): Added HyperFlex Data Platform 4.5(2d) support for HyperFlex cluster deployment and upgrade. Added support for NVMe cache in All Flash models, All NVMe hardware models, and 40/100 GE networking with DC-No-FI HyperFlex clusters.

References:

  • Supported Models/Versions for HyperFlex Fabric Interconnect Cluster Deployment
  • HyperFlex Datacenter Without Fabric Interconnect Cluster Deployment

Release: 4.5(2c)

Features added:

  • Support for VMware ESXi 7.0 U3: Added support for the VMware ESXi 7.0 U3 version. (Reference: Supported Systems)

Release: 4.5(2b)

Features added:

  • End of Life Advisories for HyperFlex Data Platform Software Releases: Cisco Intersight alerts users about End of Life and End of Support dates for Cisco HyperFlex Data Platform software releases that are no longer supported, with a list of affected devices. (Reference: Cisco End of Life Advisories)

Release: 4.5(2a)

Features added:

  • Health Check for HyperFlex Clusters: Provides the ability to run predefined health checks on HyperFlex clusters and view granular details about their health. (Reference: Health Check for HyperFlex Clusters)

  • Backup and Restore for HyperFlex Edge Clusters using Intersight: Provides the ability for HyperFlex Edge clusters to take snapshots of virtual machines and restore them using Intersight. (Reference: N:1 Replication for Cisco HyperFlex Clusters)

Release: 4.5(1a)

Features added:

  • Backup and Restore for HyperFlex Edge Clusters using Intersight: Provides the ability for HyperFlex Edge clusters to take snapshots of virtual machines and restore them using Intersight. (Reference: N:1 Replication for Cisco HyperFlex Clusters)

  • Health Check for HyperFlex Clusters: Provides the ability to run predefined health checks on HyperFlex clusters and view granular details about their health. (Reference: Health Check for HyperFlex Clusters)

  • Upgrade of HyperFlex Clusters: Provides the ability to upgrade HyperFlex clusters. (Reference: Upgrade Cisco HyperFlex Systems in Cisco Intersight)

Release: 4.0(2a), 4.0(2b), 4.0(2c), 4.0(2d), 4.0(2e), 4.0(2f)

Features added:

  • Upgrade of HyperFlex Edge Clusters: Combined upgrade of VMware ESXi and HyperFlex Data Platform. (Reference: Upgrade Cisco HyperFlex Systems in Cisco Intersight)

  • HyperFlex Cluster Deployment: Support for the 25GE networking topology for HyperFlex Edge. (Reference: Cisco HyperFlex Cluster Deployment)

Release: 4.0(1b)

Features added:

  • HyperFlex Cluster Deployment: View the progress and history of the HyperFlex Cluster Profile deployment from both the Requests page and the HyperFlex Cluster Profile Results page. (Reference: Cisco HyperFlex Cluster Deployment)

    Note: The ability to view progress on the Requests page is not available in HyperFlex Data Platform 4.0(2a) and 3.5(x).

  • Upgrade of HyperFlex Edge Clusters: Multi-site remote HyperFlex Data Platform upgrade of HyperFlex Edge clusters installed using Intersight. (Reference: Upgrade Cisco HyperFlex Systems in Cisco Intersight)

    Note: Use HyperFlex Connect to upgrade Edge clusters running versions earlier than HyperFlex Data Platform 4.0(1a).

Release: 4.0(1a)

Features added:

  • Alarms and Health Status: Supports alarms and health status for HyperFlex with Hyper-V.

  • HyperFlex Cluster Deployment: Comprehensive lifecycle management, including remote cloud-based installation and invisible witnessing of HyperFlex Edge clusters in the 1GE and 10GE networking topology options. This release extends support to 2-node Edge clusters, to run HyperFlex in environments requiring a small footprint, and to 4-node Edge clusters, to enable scaling up HyperFlex Edge clusters. (Reference: Cisco HyperFlex Cluster Deployment)

  • Invisible Cloud Witness for Cisco HyperFlex Edge: The Invisible Cloud Witness architecture uses a deployment methodology that eliminates the need for witness VMs, ongoing patching and maintenance, or additional infrastructure at a third site to ensure data consistency during a node loss in a clustered file system. For a 2-node Edge cluster, Intersight acts as the arbitrator that forms a quorum to maintain data consistency in case of a node failure. This feature requires only the basic Intersight port and firewall rules. (Reference: Invisible Cloud Witness for Cisco HyperFlex Edge)

Limitations

The following limitations apply when performing cluster installation through Intersight:

  • Intersight cluster install is not supported on Stretched Clusters.

  • Intersight cluster install is not supported on Hyper-V Clusters.

  • 10G+ NIC-based cluster deployment is supported on HXDP version 5.0(2a) and later for HX Edge and DC-No-FI clusters.

  • HXDP release 5.5(1a) and later does not support VMware ESXi 6.5, 6.7, and 7.0 U1.

  • HXDP release 5.5(1a) and later does not support HyperFlex M4 platforms. If your cluster has M4 nodes, plan to replace them with a supported platform before upgrading to HXDP 5.5(x).

  • NIC-based cluster deployment is supported on VMware ESXi version 7.0 U3 and later.

  • NIC-based cluster deployment is supported on M6 platform only for HX Edge and DC-No-FI clusters.

  • Mixing of NIC-based and VIC-based nodes within the same cluster is not supported.

  • In a UCS M6 server setup, a "Converge expansion failure" can occur during the "Configure HyperFlex Controller VM" stage. To recover (a scripted alternative is sketched after this list):

    1. Log in to the Cisco Integrated Management Controller (CIMC).

    2. Go to Compute > BIOS > Memory.

    3. Set IOMMU to AUTO.

    4. Reboot the CIMC.

    5. Reboot the hypervisor host.

    6. Retry the failed workflow in Intersight.

      Note: You can also set IOMMU to AUTO before attempting the expansion.
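
The IOMMU change in the workaround above can also be applied out of band through the CIMC Redfish interface, as in the Python sketch below. The BIOS attribute name ("IOMMU") and value ("Auto") are assumptions that can vary by platform and BIOS release; confirm them against the BIOS attribute registry on your server before use.

    import requests

    CIMC_URL = "https://cimc.example.net"   # hypothetical CIMC address
    AUTH = ("admin", "password")            # replace with real credentials

    def set_iommu_auto(system_id: str = "1") -> None:
        # PATCH the pending BIOS settings resource (standard Redfish
        # location); the change takes effect on the next host reboot.
        url = f"{CIMC_URL}/redfish/v1/Systems/{system_id}/Bios/Settings"
        resp = requests.patch(
            url,
            json={"Attributes": {"IOMMU": "Auto"}},  # attribute name is an assumption
            auth=AUTH,
            verify=False,  # CIMCs often present self-signed certificates
        )
        resp.raise_for_status()

    # set_iommu_auto()  # then reboot the CIMC and host, and retry in Intersight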

HyperFlex Cluster Policies in Intersight

Navigate to CONFIGURE > Policies > Create Policy > HyperFlex Cluster. HyperFlex policies in Cisco Intersight provide different configurations, including Auto Support, external storage (such as FC and iSCSI), security, network configuration, and more. Once configured, a policy can be assigned to any number of servers to provide a configuration baseline.

HyperFlex policies can also be cloned by using the Policy Clone wizard; a clone carries the same properties as the existing policy. The clone action is available in both the policies list and detail views.

All HyperFlex policies can be shared by HyperFlex Edge and HyperFlex with Fabric Interconnect, unless otherwise noted below (a programmatic sketch for creating policies follows this list):

  • Auto Support Policy—Auto Support is the alert notification service provided through HyperFlex Data Platform in Intersight. If enabled, notifications are sent to the designated email addresses or email aliases. Typically, Auto Support is configured during HyperFlex storage cluster creation by configuring the SMTP mail server and adding email recipients.

  • Backup Configuration Policy—The backup policy consists of the Edge cluster datastores being protected, the backup interval, snapshot retention value, and the backup target cluster. All virtual machines residing in the protected Edge cluster datastore will be automatically protected by the backup policy. This includes VMs created in the protected datastore, as well as VMs migrated into the protected datastore.

  • DNS, NTP, and Timezone Policy—Configures DNS, NTP, and Timezone on all servers. DNS and NTP servers should reside outside of the HyperFlex storage cluster. Use an internally hosted NTP server to provide a reliable time source.

  • External FC Storage Policy—Enables the use of external FC Storage and configures the WWxN name and the associated VSAN details. This policy is not applicable to HyperFlex Edge clusters.

  • External iSCSI Storage Policy—Enables the use of external iSCSI Storage and configures the associated VLAN details. This policy is not applicable to HyperFlex Edge clusters.

  • HTTP Proxy—Specifies the HTTP proxy settings to be used by the HyperFlex installation process and the HyperFlex Storage Controller VMs. This policy is required when the internet access of your servers, including CIMC and HyperFlex Storage Controller VMs, is secured by an HTTP proxy.

  • Network Configuration Policy—Configures the VLAN and KVM for the management network on Fabric Interconnects, and the uplink speed, VLAN, and jumbo frames for the management network on Edge clusters. The VLAN must have access to Intersight. This policy cannot be shared by HyperFlex Edge and HyperFlex with Fabric Interconnect clusters.

  • Node IP Ranges Policy—Configures the management IP ranges for hypervisors and controller VMs. The data IPs are automatically assigned in a /24 subnet in the range 169.254.x.2 to 169.254.239.254.

  • Replication Network Configuration Policy—The replication network policy consists of replication VLAN, gateway, subnet mask, bandwidth, MTU, and replication IP address range parameters. The replication network policy is unique for each Edge cluster configured to use N:1 Replication.

  • Security Policy—Configures the ESXi and Controller VM passwords for the HyperFlex cluster. This policy provides an option to update the hypervisor password in Intersight if you have not already changed it on the hypervisor.

  • Storage Configuration Policy—Configures the options for VDI Optimization (for hybrid HyperFlex systems). For HyperFlex with Fabric Interconnect and DC-No-FI, this policy provides the option to enable Logical Availability Zones as part of the Cluster Deployment workflow. Logical Availability Zones are not supported on HyperFlex Edge clusters.


    Note


    Logical Availability Zones are automatic partitions of the physical cluster into multiple logical zones. They are created to avoid multiple node and component failures on large clusters, and to increase cluster resiliency. HyperFlex Data Platform intelligently places a copy of data in every zone. When there is a node failure in a single zone, it does not cause the entire cluster to fail because the other zones contain data replicas. Logical Availability Zones can only be enabled with clusters that are 8 converged nodes or larger.

    The LAZ option in the Storage Configuration policy is recommended for clusters greater than 8 nodes.


  • vCenter Policy—An optional policy during installation of the HyperFlex cluster. However, post-installation you must register the cluster with vCenter to ensure that the cluster functions smoothly.

  • DC-No-FI Policy—Cisco HyperFlex Datacenter without Fabric Interconnect (DC-No-FI) brings the simplicity of hyperconvergence to data center deployments without the requirement of connecting the converged nodes to Cisco Fabric Interconnect.
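
For automation, these policies can also be created through the Intersight REST API and then attached to a HyperFlex Cluster Profile. The sketch below creates a DNS, NTP, and Timezone policy; the hyperflex/SysConfigPolicies path and field names follow the Intersight API schema naming for this policy type but should be verified against the API reference, and get_auth_headers is the same placeholder used in the earlier sketch.

    import requests

    BASE_URL = "https://intersight.com/api/v1"

    def get_auth_headers(method: str, path: str) -> dict:
        # Placeholder for Intersight HTTP-signature authentication.
        raise NotImplementedError

    def create_dns_ntp_tz_policy(org_moid: str) -> dict:
        # Create a DNS/NTP/Timezone policy; path and fields per the
        # Intersight API schema (verify against the API reference).
        path = "/hyperflex/SysConfigPolicies"
        body = {
            "Name": "hx-dns-ntp-tz",            # hypothetical policy name
            "Organization": {"Moid": org_moid},
            "DnsServers": ["10.0.0.53"],        # example values only
            "NtpServers": ["ntp.example.net"],  # use an internal NTP source
            "Timezone": "America/Los_Angeles",
        }
        resp = requests.post(BASE_URL + path,
                             headers=get_auth_headers("POST", path),
                             json=body)
        resp.raise_for_status()
        return resp.json()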

HyperFlex Server Personality

HyperFlex-Series M6 servers ship with the HyperFlex Server Personality configured. A BOM-compliant UCS C-Series M6 server in a deployed HyperFlex cluster will also have a HyperFlex Server Personality configured. Navigate to OPERATE > Servers and select the server(s) in the servers table view. You can view the server personality details under the General tab. The server personality is set during cluster deployment and includes information such as the disk type, node role, and SED capability. The Server Personality for a HyperFlex server can have the following values:

  • Compute Node

  • Converged Node

During cluster deployment, you can view the node role, based on Server Personality, in the Node Type column. If you choose a node that has a HyperFlex Compute Server personality or no personality, you must ensure that the required hardware is available in the server for successful cluster deployment. For information about the Product Identification Standards (PIDs) that are supported by Cisco Intersight, see the Cisco HyperFlex HX-Series Data Sheet.

Select a node and click the Ellipsis (…) icon to change the personality of a node from Compute to Converged or from Converged to Compute.

A server configured as a HyperFlex node with a personality of either Compute or Converged cannot be reset to the factory-default no-personality state from Intersight. Use Cisco IMC or the UCS Manager API to reset the personality to the factory-default No Personality state.
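
Server personality can also be read programmatically. The sketch below is hedged: the compute/PhysicalSummaries path follows the Intersight API schema, but the Personality field name on that object is an assumption to verify against the API reference, and the auth helper is again a placeholder.

    import requests

    BASE_URL = "https://intersight.com/api/v1"

    def get_auth_headers(method: str, path: str) -> dict:
        # Placeholder for Intersight HTTP-signature authentication.
        raise NotImplementedError

    def list_server_personalities() -> list:
        # List servers with their configured personality, if any.
        # The "Personality" field name is an assumption to verify.
        path = "/compute/PhysicalSummaries"
        resp = requests.get(
            BASE_URL + path,
            headers=get_auth_headers("GET", path),
            params={"$select": "Name,Model,Personality"},
        )
        resp.raise_for_status()
        return [(s.get("Name"), s.get("Personality"))
                for s in resp.json().get("Results", [])]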