Deploy HyperFlex Edge Clusters

Installation Overview

The following table summarizes the installation workflow for Cisco HyperFlex Edge:

Step 1: Complete the preinstallation checklist. (Reference: Preinstallation Checklist for Cisco HyperFlex Edge)

Step 2: Ensure that the network is set up.

Step 3: Log in to Cisco Intersight. (Reference: Log In to Cisco Intersight)

Step 4: Claim Targets. Note: Skip this step if you have already claimed HyperFlex Nodes. (Reference: Claim Edge Targets)

Step 5: Verify Cisco UCS Firmware versions. (Reference: Verify Firmware Version for HyperFlex Edge)

Step 6: Run the HyperFlex Cluster Profile Wizard. (Reference: Configure HyperFlex Edge Clusters)

Step 7: Run the post-installation script on the controller VM. (Reference: Post Installation Tasks)

Preinstallation Checklist for HyperFlex Edge

Ensure that your system meets the following installation and configuration requirements before you begin to install a Cisco HyperFlex Edge system. Refer to the Preinstallation Checklist for Cisco HyperFlex Edge for detailed preinstallation requirements.


Note


Beginning in April 2024, HyperFlex servers are shipped from the factory without VMware ESXi preinstalled. You must install the ESXi hypervisor before starting the HyperFlex installation. For instructions to manually prepare factory-shipped servers for the Cisco HyperFlex install, see Cisco HyperFlex Systems Installation Guide for VMware ESXi.

Supported Models/Versions for HyperFlex Edge Cluster Deployments

The following table lists the supported hardware platforms and software versions for HyperFlex Edge cluster deployments. For information about the Product Identification Standards (PIDs) that are supported by Cisco Intersight, see Cisco HyperFlex HX-Series Data Sheet.

Component

Models/Versions

M6 Servers

  • HXAF-E-225M6S

  • HX-E-225M6S

  • HXAF-E-245-M6SX

  • HX-E-245-M6SX

  • HX-E-240-M6SX

  • HXAF-E-240-M6SX

  • HX-E-220-M6S

  • HXAF-E-220-M6S

M5 Servers

  • HX240C-M5SD

  • HXAF240C-M5SD

  • HX220C-M5SX

  • HXAF220C-M5SX

Cisco HX Data Platform (HXDP)

  • 6.0(1b)

  • 5.5(1a), 5.5(2a)

  • 5.0(2e), 5.0(2g)

Note

 
  • HXDP versions 5.0(2a), 5.0(2b), 5.0(2c), 5.0(2d), 4.5(2a), 4.5(2b), 4.5(2c), 4.5(2d), and 4.5(2e) are still supported for cluster expansion only.

  • Upgrades from HXDP 4.0.2x are supported provided the ESXi version is compatible with 4.5(2x).

  • M6 servers require HXDP 5.0(1a) or later.

NIC Mode

This can be one of the following:
  • Dedicated Management Port

  • Shared LOM

Device Connector

Auto-upgraded by Cisco Intersight

Network Topologies

1GE and 10G+

Connectivity Type

Types:

  • VIC based

  • NIC-based (10G+ NIC-based clusters require HXDP version 5.0(2a) or later)

Installation

Log In to Cisco Intersight

Log In using Cisco ID

To log in to Cisco Intersight, you must have a valid Cisco ID to create a Cisco Intersight account. If you do not have a Cisco ID, create one here.


Important


The device connector does not mandate the format of the login credentials; they are passed as is to the configured HTTP proxy server. Whether the username must be qualified with a domain name depends on the configuration of the HTTP proxy server.


Log In using Single Sign-On

Single Sign-On (SSO) authentication enables you to use a single set of credentials to log in to multiple applications. With SSO authentication, you can log in to Intersight with your corporate credentials instead of your Cisco ID. Intersight supports SSO through SAML 2.0 and acts as a service provider (SP), enabling integration with Identity Providers (IdPs) for SSO authentication. You can configure your account to sign in to Intersight with your Cisco ID and SSO. Learn more about SSO with Intersight here.

Claim Edge Targets

Complete the following steps to claim one or more Targets to be managed by Cisco Intersight:

Before you begin

This procedure assumes that you are an existing user with a Cisco account. If not, see Log In to Cisco Intersight.

Procedure


Step 1

In the Cisco Intersight left navigation pane, select ADMIN > Targets.

Step 2

In the Targets details page, click Claim a New Target.

Step 3

In the Claim a New Target wizard, select Hyperconverged > Cisco HyperFlex Cluster and complete the following fields:

Note

 

You can locate the Device ID and the Claim Code information in:

  1. Cisco IMC by navigating to Admin > Device Connector.

  2. Cisco HyperFlex by navigating to HyperFlex Connect UI > Settings > Device Connector.

UI Element

Essential Information

Device ID

Enter the applicable Device ID.

  • For a Cisco UCS C-Series Standalone server, use serial number.

    Example: NGTR12345

  • For HyperFlex, use Cluster UUID.

    Example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Claim Code

Enter target claim code. You can find this code in the Device Connector for the target type.

Note

 

Before you gather the Claim Code, ensure that the Device Connector has outbound network access to Cisco Intersight, and is in the “Not Claimed” state.
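As the table above notes, the Device ID format differs by target type: a serial number for a standalone C-Series server, a cluster UUID for HyperFlex. The following sketch is illustrative only (the helper name is hypothetical; Intersight itself determines the target type) and shows how the two formats can be told apart:

```python
import re
import uuid

def classify_device_id(device_id: str) -> str:
    """Distinguish a HyperFlex cluster UUID from a UCS C-Series serial number."""
    try:
        uuid.UUID(device_id)  # e.g. xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
        return "hyperflex-cluster-uuid"
    except ValueError:
        pass
    if re.fullmatch(r"[A-Z0-9]{8,}", device_id):  # e.g. NGTR12345
        return "ucs-serial"
    raise ValueError(f"unrecognized Device ID format: {device_id!r}")

print(classify_device_id("NGTR12345"))  # ucs-serial
print(classify_device_id("3f2504e0-4f89-11d3-9a0c-0305e82c3301"))  # hyperflex-cluster-uuid
```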

Step 4

Click Claim.

Note

 

Refresh the Targets page to view the newly claimed target.


Verify Firmware Version for HyperFlex Edge

View current BIOS, CIMC, SAS HBA, and drive firmware versions, and verify that those versions match the Cisco HyperFlex Edge and Firmware Compatibility Matrix in the Common Network Requirements. Refer to the Preinstallation Checklist for Cisco HyperFlex Edge for 2-Node, 3-Node, and 4-Node Edge clusters for more details.

Procedure


Step 1

In your browser, log in to the CIMC web UI by navigating to https://<CIMC IP>. You can also cross-launch CIMC from Cisco Intersight in the Servers table view.

Step 2

In the Navigation pane, click Server.

Step 3

On the Server page, click Summary.

Step 4

In the Cisco Integrated Management Controller (CIMC) Information section of the Server Summary page, locate and make a note of the BIOS Version and CIMC Firmware Version.

Step 5

In CIMC, navigate to Inventory > Storage. Double-click on Cisco 12G Modular SAS HBA (max 16 drives) (MRAID) and navigate to Details > Physical Drive Info.

Step 6

Compare the current BIOS, CIMC, SAS HBA, and drive firmware versions with the versions listed in the Cisco HyperFlex Edge and Firmware Compatibility Matrix in the Common Network Requirements. Refer to the Preinstallation Checklist for Cisco HyperFlex Edge for 2-Node, 3-Node, and 4-Node Edge clusters for more details.

Step 7

If the minimum versions are not met, use the Host Update Utility (HUU) Download Links in the compatibility matrix to upgrade the firmware versions running on the system, including Cisco Virtual Interface Cards (VIC), PCI Adapter, RAID controllers, and drive (HDD/SSD) firmware. You can find current and previous releases of the Cisco HUU User Guide at this location: http://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-c-series-rack-servers/products-user-guide-list.html.
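When comparing versions against the compatibility matrix, note that Cisco firmware versions follow a major.minor(build-letter) pattern, such as 4.1(3d). The sketch below is a hypothetical helper (the version values shown are examples, not matrix minimums; always consult the compatibility matrix itself) that parses this pattern into a sortable tuple for comparison:

```python
import re

def parse_cisco_version(ver: str):
    """Parse a Cisco firmware version like '4.1(3d)' into a sortable tuple."""
    m = re.fullmatch(r"(\d+)\.(\d+)\((\d+)([a-z]?)\)", ver.strip())
    if not m:
        raise ValueError(f"unrecognized version format: {ver!r}")
    major, minor, build, letter = m.groups()
    return (int(major), int(minor), int(build), letter)

def meets_minimum(current: str, minimum: str) -> bool:
    """True when the running firmware is at or above the required minimum."""
    return parse_cisco_version(current) >= parse_cisco_version(minimum)

# Example comparison against a hypothetical minimum of 4.1(3b)
print(meets_minimum("4.1(3d)", "4.1(3b)"))  # True
print(meets_minimum("4.0(4h)", "4.1(3b)"))  # False
```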


Configure HyperFlex Edge Clusters

To configure a HyperFlex Edge Cluster in Intersight, do the following:

Procedure


Step 1

Log in to Intersight with HyperFlex Cluster administrator or Account Administrator privileges.

Step 2

Navigate to CONFIGURE > Profiles.

Step 3

In the Profiles page, make sure that the HyperFlex Cluster Profiles tab is selected, and click Create HyperFlex Cluster Profile to launch the Create HyperFlex Cluster Profile installation wizard.

Step 4

Select Edge as the deployment type. Click Start.

Step 5

In the General page, complete the following fields:

Field

Description

Organization drop-down list

You can make the HyperFlex Cluster Profile belong to the default organization or a specific organization. Choose:

  • default—To make the Cluster Profile belong to the default organization. All the policies that belong to the default organization will be listed on the Create HyperFlex Cluster Profile wizard.

  • Specific Organization—To make the HyperFlex Cluster Profile belong to the specified organization only. Only the policies that belong to the selected organization will be listed on the Create HyperFlex Cluster Profile wizard.

    For example, if HyperFlex nodes are shared across two organizations and are associated to a Cluster Profile in one organization, you cannot associate the same node to a Cluster Profile in another organization. The Cluster Profile will be available only to users who belong to the specified Organization.

Name field

Enter a name for the HyperFlex cluster.

The cluster name will be used as the vCenter cluster name, HyperFlex storage controller name, and HyperFlex storage cluster name.

Note

 

The name of the HyperFlex Cluster Profile belonging to an organization must be unique. You may create a HyperFlex Cluster Profile with the same name in a different organization.

HyperFlex Data Platform Version drop-down list

Select the version of the Cisco HyperFlex Data Platform to be installed. This can be one of the following:

  • 6.0(1b)

  • 5.5(1a), 5.5(2a)

  • 5.0(2e), 5.0(2g)

Note

 

The version that you select impacts the types of HyperFlex policies that you can choose later in the configuration wizard.

(Optional) Description field

Add a description for the HyperFlex cluster profile.

(Optional) Set Tags field

Enter a tag key.

Click Next.

Step 6

In the Nodes Assignment page, you can assign nodes now or optionally, you can choose to assign the nodes later. To assign nodes, click the Assign nodes check box and select the node you want to assign.

You can view the node role based on Server Personality in the Node Type column. If you choose a node that has a HyperFlex Compute Server personality or no personality, you must ensure that the required hardware is available in the server for successful cluster deployment. For information about the Product Identification Standards (PIDs) that are supported by Cisco Intersight, see Cisco HyperFlex HX-Series Data Sheet.

Important

 

A Cisco HyperFlex Edge cluster allows a minimum of 2 and a maximum of 4 nodes.

Note

 

Expanding an Edge cluster beyond 4 nodes changes the deployment type from Edge to DC-No-FI.

Click Next.
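The node-count rules above can be summarized in a small sketch (the function name is hypothetical; Intersight enforces these limits itself during validation):

```python
def edge_deployment_type(node_count: int) -> str:
    """HyperFlex Edge supports 2-4 nodes; larger clusters use the DC-No-FI type."""
    if node_count < 2:
        raise ValueError("a HyperFlex Edge cluster needs at least 2 nodes")
    return "Edge" if node_count <= 4 else "DC-No-FI"

print(edge_deployment_type(3))  # Edge
print(edge_deployment_type(5))  # DC-No-FI
```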

Step 7

In the Cluster Configuration page, complete the following fields:

Note

 

For the various cluster configuration tasks, you can enter the configuration details or import the required configuration data from policies. To use pre-configured policies, click Select Policy, next to the configuration task and choose the appropriate policy from the list.

Field

Description

Security

Hypervisor Admin field

Enter the Hypervisor administrator username.

Note

 

Use root account for ESXi deployments.

Hypervisor Password field

Enter the Hypervisor password. The required value depends on whether ESXi still uses the factory default password:

Remember

 

The default ESXi password of Cisco123 must be changed as part of installation. For a fresh ESXi installation, ensure that the checkbox The Hypervisor on this node uses the factory default password is checked, and provide a new ESXi root password that will be set on all nodes during installation.

If the ESXi installation has a non-default root password, ensure that the checkbox The Hypervisor on this node uses the factory default password is unchecked, and provide the ESXi root password that you configured. This password will not be changed during installation.

Hypervisor Password Confirmation field

Retype the Hypervisor password.

Controller VM Admin Password field

Enter a user-supplied HyperFlex storage controller VM password.

Important

 

Make a note of this password as it will be used for the administrator account.

Controller VM Admin Password Confirmation field

Retype Controller VM administrator password.

DNS, NTP, and Timezone

Timezone field

Select the local timezone.

DNS Suffix field

Enter the suffix for the DNS. This is applicable only for HX Data Platform 3.0 and later.

DNS Servers field

Enter one or more DNS servers. A DNS server that can resolve public domains is required for Intersight.

NTP Servers field

Enter one or more NTP servers (IP address or FQDN). A local NTP server is highly recommended.

vCenter (Optional Policy)

vCenter Server FQDN or IP field

Enter the vCenter server FQDN or IP address.

vCenter Username field

Enter the vCenter username. For example, administrator@vsphere.local

vCenter Password field

Enter the vCenter password.

vCenter Datacenter Name field

Enter the vCenter datacenter name.

Storage Configuration (Optional Policy)

VDI Optimization check box

Check this check box to enable VDI optimization (hybrid HyperFlex systems only).

Auto Support (Optional Policy)

Auto Support check box

Check this check box to enable Auto Support.

Send Service Ticket Notifications To field

Enter the email address recipient for support tickets.

Node IP Ranges

Note

 

This section configures the management IP pool. You must complete the management network fields to define a range of IPs for deployment. On the node configuration screen, these IPs will be automatically assigned to the selected nodes. If you wish to assign a secondary range of IPs for the controller VM Management network, you may optionally fill out the additional fields below. Both IP ranges must be part of the same subnet.

Management Network Starting IP field

The starting IP address for the management IP pool.

Management Network Ending IP field

The ending IP address for the management IP pool.

Management Network Subnet Mask field

The subnet mask for the management VLAN.

Management Network Gateway field

The default gateway for the management VLAN.

Controller VM Management Network Starting IP field (Optional)

The starting IP address for the controller VM management network.

Controller VM Management Network Ending IP field (Optional)

The ending IP address for the controller VM management network.

Controller VM Management Network Subnet Mask field (Optional)

The subnet mask for the controller VM management network.

Controller VM Management Network Gateway field (Optional)

The default gateway for the controller VM management network.

Cluster Network

Uplink Speed drop-down list

Select the link speed of the server adapter port to the upstream switch. The Uplink speed can be:

  • 1G (HyperFlex Edge)

  • 10G+ (HyperFlex Edge)

When the policy is attached to a cluster profile with the Edge management platform, the uplink speed can be '1G' or '10G+'. When the policy is attached to a cluster profile with the Fabric Interconnect management platform, the uplink speed can be 'default' only.

Refer to the Preinstallation Checklist for Cisco HyperFlex Edge for more details of the supported Network Topologies.

Attention

 

Using 10G+ mode typically requires the use of forward error correction (FEC), depending on the transceiver or the type and length of cabling selected. The VIC 1400 series is configured in CL91 FEC mode by default (FEC mode "auto", if available in the Cisco IMC UI, is the same as CL91) and does not support auto FEC negotiation.

Certain switches need to be manually set to match this FEC mode to bring the link state up. The FEC mode must match on both the switch and the VIC port for the link to come up. If the switch in use does not support CL91, you may configure the VIC ports to use CL74 to match the FEC mode available on the switch. This requires a manual FEC mode change in the CIMC UI under the VIC configuration tab. CL74 is also known as FC-FEC (Firecode), and CL91 is also known as RS-FEC (Reed-Solomon).

Do not start a HyperFlex Edge deployment until the link state is up as reported by both the switch and the VIC ports. See the Cisco UCS C-Series Integrated Management Controller GUI Configuration Guide for details on how to change the FEC mode configured on the VIC using the Cisco IMC GUI.

Management Network VLAN ID field

Enter the VLAN ID for the management network. VLAN must have access to Intersight.

An ID of 0 means the traffic is untagged. The VLAN ID can be any number between 0 and 4095, inclusive.

Jumbo Frames check box

Check this check box to enable Jumbo Frames.

Jumbo Frames are optional and can remain disabled for HyperFlex Edge deployments.

Proxy Setting (Optional Policy)

Hostname field

Enter the HTTP proxy server FQDN or IP address.

Port field

Enter the proxy port number.

Username field

Enter the HTTP Proxy username.

Password field

Enter the HTTP Proxy password.

HyperFlex Storage Network

Storage Network VLAN ID field

Enter the VLAN ID for the storage VLAN traffic. The VLAN must be unique per HyperFlex cluster.

Note

 

The storage VLAN must be unique per HyperFlex cluster. This VLAN does not need to be routable and can remain layer 2 only. IP addresses from the link local range 169.254.0.0/16 are automatically assigned to storage interfaces. A storage VLAN is not required for two node HyperFlex Edge 1GE configurations, and you should enter 1 for this field.

Click Next.
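The subnet and VLAN rules described above can be sanity-checked before running the wizard. The following is an illustrative sketch only (the helper names and example addresses are hypothetical; Intersight performs the authoritative validation) using Python's ipaddress module to confirm that a management IP pool fits one subnet, that a VLAN ID is in the valid 0-4095 range, and that an address falls in the 169.254.0.0/16 link-local block that is auto-assigned to storage interfaces:

```python
import ipaddress

# Link-local block auto-assigned to HyperFlex storage interfaces
LINK_LOCAL = ipaddress.IPv4Network("169.254.0.0/16")

def validate_vlan_id(vlan: int) -> int:
    """VLAN 0 means untagged traffic; valid IDs run from 0 to 4095 inclusive."""
    if not 0 <= vlan <= 4095:
        raise ValueError(f"VLAN ID {vlan} out of range 0-4095")
    return vlan

def validate_mgmt_pool(start: str, end: str, netmask: str, gateway: str,
                       node_count: int) -> ipaddress.IPv4Network:
    """Check that a management IP range fits one subnet and covers all nodes."""
    start_ip = ipaddress.IPv4Address(start)
    end_ip = ipaddress.IPv4Address(end)
    if end_ip < start_ip:
        raise ValueError("ending IP precedes starting IP")
    if int(end_ip) - int(start_ip) + 1 < node_count:
        raise ValueError("range too small for the number of nodes")
    net = ipaddress.IPv4Network(f"{gateway}/{netmask}", strict=False)
    if start_ip not in net or end_ip not in net:
        raise ValueError(f"pool does not fit inside subnet {net}")
    return net

# Hypothetical 3-node pool on a /24 management VLAN
print(validate_mgmt_pool("10.1.1.20", "10.1.1.29", "255.255.255.0",
                         "10.1.1.1", node_count=3))         # 10.1.1.0/24
print(validate_vlan_id(0))                                  # 0
print(ipaddress.IPv4Address("169.254.10.5") in LINK_LOCAL)  # True
```

Remember that any secondary controller VM management range must belong to the same subnet as the primary range, so the same check applies to both.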

Step 8

In the Nodes Configuration page, you can view the IP and Hostname settings that were automatically assigned. Intersight attempts to auto-allocate IP addresses. Complete the following fields:

Field

Description

Cluster Management IP Address field

The cluster management IP should belong to the same subnet as the Management IPs.

MAC Prefix Address field

The MAC Prefix Address is auto-allocated for NIC-based and 1G HyperFlex Edge clusters. For 10G+ HyperFlex Edge clusters you can overwrite the MAC Prefix address, using a MAC Prefix address from the range 00:25:B5:00 to 00:25:B5:EF.

Attention

 

Ensure that the MAC prefix is unique across all clusters for successful HyperFlex cluster deployment. Intersight validates against duplicate MAC prefixes and shows a warning if a duplicate is found.

Replication Factor radio button

The number of copies of each data block written. The options are 2 or 3 redundant replicas of your data across the storage cluster.

Important

 

Replication factor 3 is the recommended option.

Hostname Prefix field

The specified Hostname Prefix will be applied to all nodes.
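The MAC prefix range and uniqueness rules above can be expressed as a small sketch (the helper names are hypothetical; Intersight performs the duplicate-prefix validation itself):

```python
def mac_prefix_in_range(prefix: str) -> bool:
    """Check a 4-octet MAC prefix against the allowed 00:25:B5:00-00:25:B5:EF range."""
    octets = [int(part, 16) for part in prefix.split(":")]
    if len(octets) != 4 or any(not 0 <= o <= 0xFF for o in octets):
        raise ValueError(f"expected four hex octets, got {prefix!r}")
    value = int.from_bytes(bytes(octets), "big")
    return 0x0025B500 <= value <= 0x0025B5EF

def find_duplicate_prefixes(cluster_prefixes: dict) -> set:
    """Return MAC prefixes assigned to more than one cluster (each must be unique)."""
    seen, dupes = set(), set()
    for prefix in cluster_prefixes.values():
        if prefix in seen:
            dupes.add(prefix)
        seen.add(prefix)
    return dupes

print(mac_prefix_in_range("00:25:B5:A7"))  # True
print(mac_prefix_in_range("00:25:B6:00"))  # False
print(find_duplicate_prefixes({"edge-1": "00:25:B5:0A",
                               "edge-2": "00:25:B5:0A"}))  # {'00:25:B5:0A'}
```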

Step 9

In the Summary page, you can view the cluster configuration and node configuration details. Review and confirm that all information entered is correct. Ensure that there are no errors triggered under the Errors/Warnings tab.

Note

 

When deploying 2-Node Edge clusters, a warning reminds you of the importance of connectivity to Intersight. Ensure that your cluster remains connected to Intersight at all times. A second warning reminds you to implement a backup strategy to ensure that all user data is protected.

Step 10

Click Validate and Deploy to begin the deployment. Optionally, click Validate, and click Save & Close to complete deployment later. The Results page displays the progress of the various configuration tasks. You can also view the progress of the HyperFlex Cluster Profile deployment from the Requests page.


What to do next

Monitoring cluster deployment

Check your cluster deployment progress in the following ways:

  • You can remain on the Results page to watch the cluster deployment progress in real time.

  • You can also close the current view and allow the installation to continue in the background. To return to the results screen, navigate to CONFIGURE > Profiles > HyperFlex Cluster Profiles, and click on the name of your cluster.

  • You can see the current state of your deployment in the status column in the HyperFlex Cluster Profile Table view.

Post Installation

Post Installation Tasks

Procedure


Step 1

Confirm that the HyperFlex Cluster is claimed in Intersight.

Step 2

Confirm that the cluster is registered to vCenter.

Step 3

Navigate to HyperFlex Clusters, select your cluster and click ... to launch HyperFlex Connect.

Step 4

SSH to the cluster management IP address and log in using the admin username and the controller VM password provided during installation. Verify that the cluster is online and healthy.

Step 5

Paste the following command in the shell and press Enter:

hx_post_install

Step 6

Follow the on-screen prompts to complete the installation. The hx_post_install script completes the following tasks:

  • License the vCenter host.

  • Enable HA/DRS on the cluster per best practices.

  • Suppress SSH/Shell warnings in vCenter.

  • Configure vMotion per best practices.

  • Add additional guest VLANs/portgroups.

  • Perform HyperFlex configuration check.