Preinstallation Checklist for Cisco HX Data Platform

HyperFlex Edge Deployments

Cisco HyperFlex Edge brings the simplicity of hyperconvergence to remote and branch office (ROBO) and edge environments.

Starting with Cisco HX Data Platform Release 4.0, HyperFlex Edge deployments can be based on 2-Node, 3-Node, or 4-Node Edge clusters. For the key requirements and supported topologies that must be understood and configured before starting a Cisco HyperFlex Edge deployment, refer to the Preinstallation Checklist for Cisco HyperFlex Edge.

Checklist Instructions

This is a preengagement checklist for Cisco HyperFlex Systems sales, services, and partners to send to customers. Cisco uses this form to create a configuration file for the initial setup of your system, enabling a timely and accurate installation.


Important

You CANNOT fill in the checklist using the HTML page.


Checklist Download Location

Download the editable checklist PDF from the following location:

Cisco_HX_Data_Platform_Preinstallation_Checklist_form.pdf

After you completely fill in the form, return it to your Cisco account team.

Contact Information

Customer Account Team and Contact Information

Name

Title

E-mail

Phone

Equipment Shipping Address

Company Name

Attention Name/Dept

Street Address #1

Street Address #2

City, State, and Zip

Data Center Floor and Room #

Office Address (if different than shipping address)

Company Name

Attention Name/Dept

Street Address #1

Street Address #2

City, State, and Zip

HyperFlex Software Versions

The HX components—Cisco HX Data Platform Installer, Cisco HX Data Platform, and Cisco UCS firmware—are installed on different servers. Verify that each component on each server used with and within an HX Storage Cluster is compatible.

  • HyperFlex does not support UCS Manager and UCS Server Firmware versions 4.0(4a), 4.0(4b), and 4.0(4c).


    Important

    Do not upgrade to these versions of firmware.

    Do not upgrade to these versions of UCS Manager.


  • Verify that the preconfigured HX servers have the same version of Cisco UCS server firmware installed. If the Cisco UCS Fabric Interconnects (FI) firmware versions are different, see the Cisco HyperFlex Systems Upgrade Guide for steps to align the firmware versions.

    • M4: For NEW hybrid or All Flash (Cisco HyperFlex HX240c M4 or HX220c M4) deployments, verify that Cisco UCS Manager 3.1(3k), 3.2(3i), or 4.0(2b) is installed.

    • M5: For NEW hybrid or All Flash (Cisco HyperFlex HX240c M5 or HX220c M5) deployments, verify that Cisco UCS Manager 4.0(2b) is installed.


      Important

      For SED-based HyperFlex systems, ensure that the A (Infrastructure), B (Blade server) and C (Rack server) bundles are at Cisco UCS Manager version 4.0(2b) or later for all SED M4/M5 systems. For more details, see CSCvh04307.

      For SED-based HyperFlex systems, also ensure that all clusters are at HyperFlex Release 3.5(2b) or later. For more information, see Field Notice (70234) and CSCvk17250.


    • To reinstall an HX server, download supported and compatible versions of the software. See the Cisco HyperFlex Systems Installation Guide for VMware ESXi for the requirements and steps.

  • Verify that you have installed the minimum version of HyperFlex (2.5.1) to support Cisco Intersight for edge clusters.


    Important

    For Intersight edge servers running a CIMC version older than 4.0(1a), the Host Upgrade Utility (HUU) is the suggested mechanism to update the firmware.

  • Review the Release Notes for the recommended FI/Server Firmware.

Table 1. HyperFlex Software Versions for M4 Servers (Non-SED)

HyperFlex Release     M4 Recommended FI/Server Firmware
                      *(be sure to review important notes above)
4.0(1b)               4.0(4e)
4.0(1a)               4.0(2e)
3.5(2e)               4.0(4e)
3.5(2d)               4.0(4e)
3.5(2c)               4.0(2d)
3.5(2b)               4.0(2d), 3.2(3i), 3.1(3k)
3.5(2a)               4.0(1c), 3.2(3i), 3.1(3k)
3.5(1a)               4.0(1b), 3.2(3h), 3.1(3j)
3.0(1i)               3.2(3h), 3.1(3j)
3.0(1h)               3.2(3h), 3.1(3j)
3.0(1e)               3.2(3h), 3.1(3j)
3.0(1d)               3.2(3g), 3.1(3j)
3.0(1c)               3.2(3g), 3.1(3h)
3.0(1b)               3.2(3d), 3.1(3h)
3.0(1a)               3.2(3d), 3.1(3f)
2.6(1e)               3.2(3d), 3.1(3f)
2.6(1d)               3.2(3d), 3.1(3c)
2.6(1b)               3.2(2d), 3.1(3c)
2.6(1a)               3.2(2d), 3.1(3c)

Table 2. HyperFlex Software Version for M5 Servers (Non-SED)

HyperFlex Release     M5 Recommended FI/Server Firmware
                      *(be sure to review important notes above)
4.0(1b)               4.0(4e)
4.0(1a)               4.0(2e)
3.5(2e)               4.0(4e)
3.5(2d)               4.0(4e)
3.5(2c)               4.0(2d)
3.5(2b)               4.0(2d)
3.5(2a)               4.0(1c)
3.5(1a)               4.0(1a)
3.0(1i)               3.2(3h)
3.0(1h)               3.2(3h)
3.0(1e)               3.2(3h)
3.0(1d)               3.2(3h)
3.0(1c)               3.2(3h)
3.0(1b)               3.2(3d)
3.0(1a)               3.2(3d)
2.6(1c)               3.2(3d)
2.6(1d)               3.2(3d)
2.6(1b)               3.2(2d)
2.6(1a)               3.2(2d)

Physical Requirements

Physical Server Requirements

  • For a HX220c/HXAF220c Cluster:

    • Two rack units (RU) for the UCS 6248UP, 6332UP, 6332-16UP Fabric Interconnects (FI) or four RU for the UCS 6296UP FI

    • HX220c Nodes are one RU each; for example, for a three-node cluster, three RU are required; for a four-node cluster, four RU are required

    • If a Top-of-Rack switch is included in the install, add at least two additional RU of space for the switch.

  • For a HX240c/HXAF240c Cluster:

    • Two rack units (RU) for the UCS 6248UP, 6332UP, 6332-16UP Fabric Interconnects (FI) or four RU for the UCS 6296UP FI

    • HX240c Nodes are two RU each; for example, for a three-node cluster, six RU are required; for a four-node cluster, eight RU are required

    • If a Top-of-Rack switch is included in the install, add at least two additional RU of space for the switch.

    Although there is no requirement for contiguous rack space, it makes installation easier.

  • The system requires two C13/C14 power cords connected to a 15-amp circuit per device in the cluster. A minimum cluster has three HX nodes and two FIs, and it can scale to eight HX nodes, two FIs, and blade chassis.

  • Two to four uplink connections per UCS Fabric Interconnect.

  • Per best practice, each FI requires either 2x10 Gb optical connections in an existing network, or 2x10 Gb Twinax cables. Each HX node requires two Twinax cables for connectivity (10 Gb optics can be used). For deployment with 6300 series FI, use 2x40GbE uplinks per FI and connect each HX node with dual native 40GbE.

  • Use a single VIC only for converged nodes or compute-only nodes. Additional VICs or PCIe NICs are not supported.


Note

Single FI HX deployment is not supported.


Network Requirements

Verify that your environment adheres to the following best practices:

  • Use a different subnet and VLAN for each network.

  • Verify that each host directly attaches to a UCS Fabric Interconnect using a 10-Gbps cable.

  • Do not use VLAN 1, the default VLAN, because it can cause networking issues, especially if Disjoint Layer 2 configuration is used. Use a different VLAN.

  • Configure the upstream switches to accommodate non-native VLANs. Cisco HX Data Platform Installer sets the VLANs as non-native by default.

Each VMware ESXi host needs the following separate networks:

  • Management traffic network—From the VMware vCenter, handles hypervisor (ESXi server) management and storage cluster management.

  • Data traffic network—Handles the hypervisor and storage data traffic.

  • vMotion network

  • VM network

There are four vSwitches, each one carrying a different network:

  • vswitch-hx-inband-mgmt—Used for ESXi management and storage controller management.

  • vswitch-hx-storage-data—Used for ESXi storage data and HX Data Platform replication.

    The vswitch-hx-inband-mgmt and vswitch-hx-storage-data vSwitches further divide into two port groups with assigned static IP addresses to handle traffic between the storage cluster and ESXi host.

  • vswitch-hx-vmotion—Used for VM and storage VMware vMotion.

    This vSwitch has one port group for management, defined through VMware vSphere, which connects to all of the hosts in the vCenter cluster.

  • vswitch-hx-vm-network—Used for VM data traffic.

    You can add or remove VLANs on the corresponding vNIC templates in Cisco UCS Manager, and create port groups on the vSwitch.
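
After the installer has created these vSwitches, you can confirm that the expected vSwitches and port groups exist on every ESXi host. The following is a minimal sketch, assuming the pyVmomi library is installed; the vCenter address and credentials are placeholders to replace with your own values.

    # Sketch: list the standard vSwitches and port groups on each ESXi host.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    VCENTER = "vcenter.example.com"          # placeholder
    USER = "administrator@vsphere.local"     # placeholder
    PASSWORD = "********"                    # placeholder

    ctx = ssl._create_unverified_context()   # lab use only: skips certificate validation
    si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        for host in hosts.view:
            print("Host:", host.name)
            for vswitch in host.config.network.vswitch:
                print("  vSwitch:", vswitch.name)
            for pg in host.config.network.portgroup:
                print("  Port group: {} (VLAN {}) on {}".format(pg.spec.name, pg.spec.vlanId, pg.spec.vswitchName))
    finally:
        Disconnect(si)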


Note

  • HX Data Platform Installer creates the vSwitches automatically.

  • Ensure that you enable the following services in vSphere after you create the HX Storage Cluster:

    • DRS (vSphere Enterprise Plus only)

    • vMotion

    • High Availability


Port Requirements

If your network is behind a firewall, in addition to the standard port requirements, VMware recommends opening the ports required for VMware ESXi and VMware vCenter.

  • CIP-M is for the cluster management IP.

  • SCVM is the management IP for the controller VM.

  • ESXi is the management IP for the hypervisor.

Verify that the following firewall ports are open:

Time Server

  • Port 123, NTP/UDP. Sources: Each ESXi Node, Each SCVM Node, UCSM. Destination: Time Server. Bidirectional.

HX Data Platform Installer

  • Port 22, SSH/TCP. Source: HX Data Platform Installer. Destinations: Each ESXi Node (management addresses), Each SCVM Node (management addresses), CIP-M (cluster management), UCSM (UCSM management addresses).

  • Port 80, HTTP/TCP. Source: HX Data Platform Installer. Destinations: Each ESXi Node (management addresses), Each SCVM Node (management addresses), CIP-M (cluster management), UCSM (UCSM management addresses).

  • Port 443, HTTPS/TCP. Source: HX Data Platform Installer. Destinations: Each ESXi Node (management addresses), Each SCVM Node (management addresses), CIP-M (cluster management), UCSM (UCSM management addresses).

  • Port 8089, vSphere SDK/TCP. Source: HX Data Platform Installer. Destination: Each ESXi Node (management addresses).

  • Port 902, Heartbeat/UDP/TCP. Sources: HX Data Platform Installer, vCenter. Destination: Each ESXi Node.

  • Ping/ICMP (no port). Source: HX Data Platform Installer. Destinations: ESXi IPs, CVM IPs (management addresses).

  • Port 9333, UDP/TCP. Source: HX Data Platform Installer. Destination: CIP-M (cluster management).

Mail Server

Optional for email subscription to cluster events.

  • Port 25, SMTP/TCP. Sources: Each SCVM Node, CIP-M, UCSM. Destination: Mail Server. Optional.

Monitoring

Optional for monitoring UCS infrastructure.

  • Port 161, SNMP Poll/UDP. Source: Monitoring Server. Destination: UCSM. Optional.

  • Port 162, SNMP Trap/UDP. Source: UCSM. Destination: Monitoring Server. Optional.

Name Server

  • Port 53 (external lookups), DNS/TCP/UDP. Sources: Each ESXi Node (management addresses), Each SCVM Node (management addresses), CIP-M (cluster management), UCSM. Destination: Name Server.

vCenter

  • Port 80, HTTP/TCP. Source: vCenter. Destinations: Each SCVM Node, CIP-M. Bidirectional.

  • Port 443, HTTPS (Plug-in)/TCP. Source: vCenter. Destinations: Each ESXi Node, Each SCVM Node, CIP-M. Bidirectional.

  • Port 7444, HTTPS (VC SSO)/TCP. Source: vCenter. Destinations: Each ESXi Node, Each SCVM Node, CIP-M. Bidirectional.

  • Port 9443, HTTPS (Plug-in)/TCP. Source: vCenter. Destinations: Each ESXi Node, Each SCVM Node, CIP-M. Bidirectional.

  • Port 5989, CIM Server/TCP. Source: vCenter. Destination: Each ESXi Node.

  • Port 9080, CIM Server/TCP. Source: vCenter. Destination: Each ESXi Node. Introduced in ESXi Release 6.5.

  • Port 902, Heartbeat/TCP/UDP. Source: vCenter. Destination: Each ESXi Node. This port must be accessible from each host; installation results in errors if the port is not open from the HX Installer to the ESXi hosts.

User

  • Port 22, SSH/TCP. Source: User. Destinations: Each ESXi Node (management addresses), Each SCVM Node (management addresses), CIP-M (cluster management), HX Data Platform Installer, UCSM (UCSM management addresses), vCenter, SSO Server.

  • Port 80, HTTP/TCP. Source: User. Destinations: Each SCVM Node (management addresses), CIP-M (cluster management), UCSM, HX Data Platform Installer, vCenter.

  • Port 443, HTTPS/TCP. Source: User. Destinations: Each SCVM Node, CIP-M, UCSM (UCSM management addresses), HX Data Platform Installer, vCenter.

  • Port 7444, HTTPS (SSO)/TCP. Source: User. Destinations: vCenter, SSO Server.

  • Port 9443, HTTPS (Plug-in)/TCP. Source: User. Destinations: vCenter, SSO Server.

SSO Server

  • Port 7444, HTTPS (SSO)/TCP. Source: SSO Server. Destinations: Each ESXi Node, Each SCVM Node, CIP-M. Bidirectional.

Stretch Witness

Required only when deploying HyperFlex Stretched Cluster.

  • Ports 2181, 2888, 3888, Zookeeper/TCP. Source: Witness. Destination: Each CVM Node. Bidirectional, management addresses.

  • Port 8180, Exhibitor (Zookeeper lifecycle)/TCP. Source: Witness. Destination: Each CVM Node. Bidirectional, management addresses.

  • Port 80, HTTP/TCP. Source: Witness. Destination: Each CVM Node. Potential future requirement.

  • Port 443, HTTPS/TCP. Source: Witness. Destination: Each CVM Node. Potential future requirement.

Replication

Required only when configuring native HX asynchronous cluster to cluster replication.

  • Port 9338, Data Services Manager Peer/TCP. Source: Each CVM Node. Destination: Each CVM Node. Bidirectional, include cluster management IP addresses.

  • Port 3049, Replication for CVM/TCP. Source: Each CVM Node. Destination: Each CVM Node. Bidirectional, include cluster management IP addresses.

  • Port 4049, Cluster Map/TCP. Source: Each CVM Node. Destination: Each CVM Node. Bidirectional, include cluster management IP addresses.

  • Port 4059, NR NFS/TCP. Source: Each CVM Node. Destination: Each CVM Node. Bidirectional, include cluster management IP addresses.

  • Port 9098, Replication Service. Source: Each CVM Node. Destination: Each CVM Node. Bidirectional, include cluster management IP addresses.

  • Port 8889, NR Master for Coordination/TCP. Source: Each CVM Node. Destination: Each CVM Node. Bidirectional, include cluster management IP addresses.

  • Port 9350, Hypervisor Service/TCP. Source: Each CVM Node. Destination: Each CVM Node. Bidirectional, include cluster management IP addresses.

SED Cluster

  • Port 443, HTTPS. Source: Each SCVM Management IP (including cluster management IP). Destination: UCSM (Fabric A, Fabric B, VIP). Policy Configuration.

  • Port 5696, TLS. Source: CIMC from each node. Destination: KMS Server. Key Exchange.

UCSM

  • Port 443, Encryption etc./TCP. Source: Each CVM Node. Destination: CIMC OOB. Bidirectional for each UCS node.

  • Port 81, KVM/HTTP. Source: User. Destination: UCSM. OOB KVM.

  • Port 743, KVM/HTTP. Source: User. Destination: UCSM. OOB KVM encrypted.

Miscellaneous

  • Port 9350, Hypervisor Service/TCP. Source: Each CVM Node. Destination: Each CVM Node. Bidirectional, include cluster management IP addresses.

  • Port 9097, CIP-M Failover/TCP. Source: Each CVM Node. Destination: Each CVM Node. Bidirectional for each CVM to other CVMs.

  • Port 111, RPC Bind/TCP. Source: Each SCVM node. Destination: Each SCVM node. CVM outbound to Installer.

  • Port 8002, Installer/TCP. Source: Each SCVM node. Destination: Installer. Service Location Protocol.

  • Port 8080, Apache Tomcat/TCP. Source: Each SCVM node. Destination: Each SCVM node. stDeploy makes connection, any request with uri /stdeploy.

  • Port 8082, Auth Service/TCP. Source: Each SCVM node. Destination: Each SCVM node. Any request with uri /auth/.

  • Port 9335, hxRoboControl/TCP. Source: Each SCVM node. Destination: Each SCVM node. Robo deployments.

  • Port 443, HTTPS/TCP. Source: Each CVM Mgmt IP including CIP-M. Destination: UCSM A/B and VIP. Policy Configuration.

  • Port 5696, TLS/TCP. Source: CIMC from each node. Destination: KMS Server. Key Exchange.

  • Port 8125, UDP. Source: Each SCVM node. Destination: Each SCVM node. Graphite.

  • Port 427, UDP. Source: Each SCVM node. Destination: Each SCVM node. Service Location Protocol.

  • Ports 32768 to 65535, UDP. Source: Each SCVM node. Destination: Each SCVM node. SCVM outbound communication.


Tip

If you do not have standard configurations and need different port settings, refer to Table C-5 Port Literal Values for customizing your environment.
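
Before installation, the reachability of the TCP ports above can be spot-checked from the HX Data Platform Installer VM. The following is a minimal sketch using only the Python standard library; the hostnames are placeholders for the management addresses recorded in this checklist, and UDP-only services (for example, NTP on port 123) require a protocol-specific probe instead.

    # Sketch: verify that required TCP ports are reachable from the HX Installer VM.
    import socket

    # Placeholder addresses; replace with the management IPs recorded in this checklist.
    CHECKS = [
        ("esxi-01.example.com", 22),    # SSH to each ESXi node
        ("esxi-01.example.com", 443),   # HTTPS to each ESXi node
        ("ucsm.example.com", 443),      # HTTPS to the UCS Manager VIP
        ("vcenter.example.com", 443),   # HTTPS to vCenter
        ("vcenter.example.com", 9443),  # HTTPS plug-in port on vCenter
    ]

    def tcp_open(host, port, timeout=3.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host, port in CHECKS:
        state = "open" if tcp_open(host, port) else "BLOCKED or unreachable"
        print("{}:{} -> {}".format(host, port, state))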


HyperFlex External Connections

  • Intersight Device Connector

    Description: Supported HX systems are connected to Cisco Intersight through a device connector that is embedded in the management controller of each system.

    IP Address/FQDN/Ports/Version: HTTPS, port 443; device connector version 1.0.5-2084 or later (Auto-upgraded by Cisco Intersight).

    Essential Information: All device connectors must properly resolve svc.ucs-connect.com and allow outbound-initiated HTTPS connections on port 443. The current HX Installer supports the use of an HTTP proxy. The IP addresses of ESXi management must be reachable from Cisco UCS Manager over all the ports that are listed as being needed from installer to ESXi management, to ensure deployment of ESXi management from Cisco Intersight.

  • Auto Support

    Description: Auto Support (ASUP) is the alert notification service provided through HX Data Platform.

    IP Address/FQDN/Ports/Version: SMTP, port 25.

    Essential Information: Enabling Auto Support is strongly recommended because it provides historical hardware counters that are valuable in diagnosing future hardware issues, such as a drive failure for a node.

  • Post Installation Script

    Description: To complete the post installation tasks, you can run a post installation script on the Installer VM. The script pings across all network interfaces (management, vMotion, and storage network) to ensure full fabric availability. The script also validates the correct tagging of VLANs and jumbo frame configurations on the northbound switch.

    IP Address/FQDN/Ports/Version: HTTP, port 80.

    Essential Information: The post install script requires name resolution to http://cs.co/hx-scripts via port 80 (HTTP).
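
Name resolution and outbound reachability for these external connections can be checked ahead of time. The following is a minimal sketch using only the Python standard library; it probes the two endpoints named above and does not account for an HTTP proxy, so adjust it if your environment requires one.

    # Sketch: confirm DNS resolution and outbound reachability for HyperFlex external connections.
    import socket

    TARGETS = [
        ("svc.ucs-connect.com", 443),  # Intersight device connector (HTTPS)
        ("cs.co", 80),                 # post-install script name resolution (HTTP)
    ]

    for name, port in TARGETS:
        try:
            ip = socket.gethostbyname(name)          # DNS resolution
        except socket.gaierror as err:
            print("{}: DNS resolution FAILED ({})".format(name, err))
            continue
        try:
            with socket.create_connection((ip, port), timeout=5):
                print("{} ({}) port {}: reachable".format(name, ip, port))
        except OSError as err:
            print("{} ({}) port {}: NOT reachable ({})".format(name, ip, port, err))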

Deployment Information

Before deploying Cisco HX Data Platform and creating a cluster, collect the following information about your system.

Cisco UCS Fabric Interconnects (FI) Information

UCS cluster name

FI cluster IP address

UCS FI-A IP address

UCS FI-B IP address

Pool for KVM IP addresses

(one per HX node is required)

Subnet mask IP address

Default gateway IP address

MAC pool prefix

(provide two hex characters)

00:25:B5:

UCS Manager username

admin

Password

VLAN Information

Tag the VLAN IDs to the Fabric Interconnects.


Note

HX Data Platform, release 1.7—Document and configure all customer VLANs as native on the FI prior to installation.

HX Data Platform, release 1.8 and later—Default configuration of all customer VLANs is non-native.


Use separate subnet and VLANs for each of the following networks:

  • Network: VLAN for VMware ESXi and Cisco HyperFlex (HX) management

    VLAN ID:

    VLAN Name: Hypervisor Management Network, Storage controller management network

    Description: Used for management traffic among ESXi, HX, and VMware vCenter; must be routable.

  • Network: VLAN for HX storage traffic

    VLAN ID:

    VLAN Name: Hypervisor Data Network, Storage controller data network

    Description: Used for storage traffic and requires L2.

  • Network: VLAN for VM VMware vMotion

    VLAN ID:

    VLAN Name: vswitch-hx-vmotion

    Description: Used for vMotion VLAN, if applicable.

  • Network: VLAN for VM network

    VLAN ID:

    VLAN Name: vswitch-hx-vm-network

    Description: Used for VM/application network.

Customer Deployment Information

Deploy the HX Data Platform using an OVF installer appliance. A separate ESXi server, which is not a member of the vCenter HX Cluster, is required to host the installer appliance. The installer requires one IP address on the management network.

The installer appliance IP address must be reachable from the management subnet used by the hypervisor and the storage controller VMs. The installer appliance must run on an ESXi host or on VMware Workstation/VMware Player that is not part of the cluster being installed. In addition, the HX Data Platform Installer VM IP address must be reachable by the Cisco UCS Manager, ESXi, and vCenter IP addresses where HyperFlex hosts are added.

Installer appliance IP address
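
As a quick pre-check, you can confirm from the installer appliance that the management endpoints it must reach respond to ping. The following is a minimal sketch; the hostnames are placeholders for the UCS Manager, ESXi, and vCenter addresses recorded in this checklist, and ICMP must be permitted for the test to be meaningful.

    # Sketch: ping the management endpoints the installer appliance must reach.
    import platform
    import subprocess

    # Placeholder addresses; replace with the values recorded in this checklist.
    ENDPOINTS = ["ucsm.example.com", "esxi-01.example.com", "vcenter.example.com"]

    count_flag = "-n" if platform.system().lower() == "windows" else "-c"

    for host in ENDPOINTS:
        result = subprocess.run(
            ["ping", count_flag, "2", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        print("{}: {}".format(host, "reachable" if result.returncode == 0 else "NOT reachable"))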

Network IP Addresses


Note

  • Data network IPs in the range of 169.254.X.X in a network larger than /24 are not supported and should not be used.

  • Data network IPs in the range of 169.254.254.0/24 must not be used.
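
Both restrictions in this note can be checked programmatically before you commit to a data network range. The following is a minimal sketch using Python's ipaddress module; the sample subnets are placeholders.

    # Sketch: validate a proposed data-network subnet against the restrictions above.
    import ipaddress

    RESERVED = ipaddress.ip_network("169.254.254.0/24")    # must never be used
    LINK_LOCAL = ipaddress.ip_network("169.254.0.0/16")

    def check_data_subnet(cidr):
        """Return a list of problems found with the proposed data-network subnet."""
        net = ipaddress.ip_network(cidr, strict=False)
        problems = []
        if net.overlaps(RESERVED):
            problems.append("overlaps 169.254.254.0/24, which must not be used")
        if net.overlaps(LINK_LOCAL) and net.prefixlen < 24:
            problems.append("a 169.254.x.x data network larger than /24 is not supported")
        return problems

    print(check_data_subnet("169.254.0.0/16"))   # placeholder example; reports both problems
    print(check_data_subnet("192.168.50.0/24"))  # reports no problems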


Management Network IP Addresses

(must be routable)

Data Network IP Addresses

(does not have to be routable)

Important 

Ensure that the Data and Management Networks are on different subnets for a successful installation.

ESXi Hostname*

Hypervisor Management Network

Storage Controller Management Network

Hypervisor Data Network (Not Required for Cisco Intersight)1

Storage Controller Data Network (Not Required for Cisco Intersight)2

Server 1:

Server 2:

Server 3:

Server 4:

Server 5:

Storage Cluster Management IP address

Storage Cluster Data IP address

Subnet mask IP address

Subnet mask IP address

Default gateway IP address

Default gateway IP address

1 Data network IPs are automatically assigned to the 169.254.X.0/24 subnet based on MAC address prefix.
2 Data network IPs are automatically assigned to the 169.254.X.0/24 subnet based on MAC address prefix.

* Verify DNS forward and reverse records are created for each host. If no DNS records exist, hosts are added to vCenter by IP address instead of FQDN.
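
The DNS requirement in the footnote above can be verified before installation. The following is a minimal sketch using only the Python standard library; the hostnames are placeholders for the ESXi hostnames recorded in this checklist. It confirms that each name resolves and that the returned address resolves back to the same name.

    # Sketch: check DNS forward and reverse records for each ESXi host.
    import socket

    # Placeholder hostnames; replace with the ESXi hostnames recorded in this checklist.
    HOSTS = ["hx-esxi-01.example.com", "hx-esxi-02.example.com", "hx-esxi-03.example.com"]

    for name in HOSTS:
        try:
            ip = socket.gethostbyname(name)                  # forward (A) lookup
        except socket.gaierror:
            print("{}: no forward DNS record".format(name))
            continue
        try:
            reverse_name, _, _ = socket.gethostbyaddr(ip)    # reverse (PTR) lookup
        except socket.herror:
            print("{} ({}): no reverse DNS record".format(name, ip))
            continue
        match = "matches" if reverse_name.lower().rstrip(".") == name.lower() else "does NOT match"
        print("{} -> {} -> {} ({} forward name)".format(name, ip, reverse_name, match))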

VMware vMotion Network IP Addresses

vMotion Network IP Addresses (not configured by software)

Hypervisor Credentials

root username

root

root password

VMware vCenter Configuration


Note

HyperFlex communicates with vCenter through standard ports. Port 80 is used for reverse HTTP proxy. Port 443 is used for secure communication to the vCenter SDK and may not be changed.


vCenter FQDN or IP address

vCenter admin username

username@domain

vCenter admin password

vCenter data center name

VMware vSphere compute cluster and storage cluster name

Single Sign-On (SSO)

SSO Server URL*

  • This information is required only if the SSO URL is not reachable.

  • This is automatic for ESXi version 6.0 and later.

* SSO Server URL can be found in vCenter at vCenter Server > Manage > Advanced Settings, key config.vpxd.sso.sts.uri.
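
If the vSphere Web Client is not available, the same key can be queried over the vCenter SDK. The following is a minimal sketch, assuming the pyVmomi library is installed; the vCenter address and credentials are placeholders.

    # Sketch: read the config.vpxd.sso.sts.uri advanced setting over the vCenter SDK.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl._create_unverified_context()   # lab use only: skips certificate validation
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)   # placeholders
    try:
        option_manager = si.RetrieveContent().setting
        for option in option_manager.QueryOptions(name="config.vpxd.sso.sts.uri"):
            print(option.key, "=", option.value)
    finally:
        Disconnect(si)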

Network Services


Note

  • At least one DNS and NTP server must reside outside of the HX storage cluster.

  • Use an internally hosted NTP server to provide a reliable source for time.


DNS Servers

<Primary DNS Server IP address, Secondary DNS Server IP address, …>

NTP servers

<Primary NTP Server IP address, Secondary NTP Server IP address, …>

Time zone

Example: US/Eastern, US/Pacific
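
It is also worth confirming that each NTP server listed here answers queries before installation begins. The following is a minimal SNTP probe using only the Python standard library; the server name is a placeholder.

    # Sketch: send a single SNTP query to confirm an NTP server responds on UDP port 123.
    import socket
    import struct
    from datetime import datetime, timezone

    NTP_TO_UNIX_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

    def query_ntp(server, timeout=3.0):
        """Return the server's transmit timestamp as a UTC datetime, or None on failure."""
        packet = b"\x1b" + 47 * b"\0"          # SNTP client request, version 3
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                sock.settimeout(timeout)
                sock.sendto(packet, (server, 123))
                data, _ = sock.recvfrom(48)
        except OSError:
            return None
        seconds = struct.unpack("!I", data[40:44])[0] - NTP_TO_UNIX_OFFSET
        return datetime.fromtimestamp(seconds, tz=timezone.utc)

    print(query_ntp("ntp.example.com"))  # placeholder; use the NTP servers recorded above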

Connected Services

Enable Connected Services (Recommended)

Yes or No required

Email for service request notifications

Example: name@company.com

Contacting Cisco TAC

You can open a Cisco Technical Assistance Center (TAC) support case to reduce the time spent addressing issues and to get efficient support directly from Cisco.

For all customers, partners, resellers, and distributors with valid Cisco service contracts, Cisco Technical Support provides around-the-clock, award-winning technical support services. The Cisco Technical Support website provides online documents and tools for troubleshooting and resolving technical issues with Cisco products and technologies:

http://www.cisco.com/techsupport

Using the TAC Support Case Manager online tool is the fastest way to open S3 and S4 support cases. (S3 and S4 support cases consist of minimal network impairment issues and product information requests.) After you describe your situation, the TAC Support Case Manager automatically provides recommended solutions. If your issue is not resolved by using the recommended resources, TAC Support Case Manager assigns your support case to a Cisco TAC engineer. You can access the TAC Support Case Manager from this location:

https://mycase.cloudapps.cisco.com/case

For S1 or S2 support cases or if you do not have Internet access, contact the Cisco TAC by telephone. (S1 or S2 support cases consist of production network issues, such as a severe degradation or outage.) S1 and S2 support cases have Cisco TAC engineers assigned immediately to ensure your business operations continue to run smoothly.

To open a support case by telephone, use one of the following numbers:

  • Asia-Pacific: +61 2 8446 7411

  • Australia: 1 800 805 227

  • EMEA: +32 2 704 5555

  • USA: 1 800 553 2447

For a complete list of Cisco TAC contacts for Enterprise and Service Provider products, see http://www.cisco.com/c/en/us/support/web/tsd-cisco-worldwide-contacts.html.

For a complete list of Cisco Small Business Support Center (SBSC) contacts, see http://www.cisco.com/c/en/us/support/web/tsd-cisco-small-business-support-center-contacts.html.