Preinstallation Checklist for Cisco HX Data Platform
HyperFlex Edge Deployments
Cisco HyperFlex Edge brings the simplicity of hyperconvergence to remote and branch office (ROBO) and edge environments.
Starting with Cisco HX Data Platform Release 4.0, HyperFlex Edge deployments can be based on 2-Node, 3-Node, or 4-Node Edge clusters. For the key requirements and supported topologies that must be understood and configured before starting a Cisco HyperFlex Edge deployment, refer to the Preinstallation Checklist for Cisco HyperFlex Edge.
Cisco HyperFlex Pre-Install Interactive Tool
Cisco recommends using the HyperFlex Pre-Install Tool (https://hxpreinstall.cloudapps.cisco.com/) for pre-deployment planning. The tool collects HyperFlex cluster configuration parameters and transfers the resulting configuration to either Intersight SaaS or the HyperFlex installer VM. The features and benefits of the HyperFlex Pre-Install tool include:
- Create and validate a cluster configuration before starting installation.
- Import multiple clusters using a Microsoft Excel template.
- Push the cluster configuration directly to Intersight SaaS, resulting in automatic cluster profile creation.
- Download JSON configuration files for use with the HyperFlex OVA installer VM.
- Create PDF reports of the configuration for record keeping.
- Clone a cluster profile for easy scaling.
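For record keeping, some teams also keep a machine-readable copy of the planned parameters alongside the tool's own exports. The following is a purely illustrative Python sketch; the field names are hypothetical placeholders, not the Pre-Install tool's actual JSON schema, so always use the JSON file exported by the tool itself when feeding the OVA installer VM.

```python
import json

# Hypothetical cluster parameters gathered during pre-install planning.
# These field names are illustrative only; the authoritative JSON schema
# comes from the HyperFlex Pre-Install tool's own export.
cluster_plan = {
    "cluster_name": "hx-edge-01",
    "mgmt_vlan_id": 110,
    "data_vlan_id": 120,
    "vcenter_fqdn": "vcenter.example.com",
    "ntp_servers": ["10.0.0.5", "10.0.0.6"],
    "dns_servers": ["10.0.0.2"],
}

# Write the parameters to a JSON file for record keeping alongside the
# PDF report produced by the tool.
with open("hx-cluster-plan.json", "w") as f:
    json.dump(cluster_plan, f, indent=2)
```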
Cisco HyperFlex Systems Documentation Roadmap
For a complete list of all Cisco HyperFlex Systems documentation, see Cisco HyperFlex Systems Documentation Roadmap.
Checklist Instructions
This is a pre-engagement checklist for Cisco HyperFlex Systems sales, services, and partners to send to customers. Cisco uses this form to create a configuration file for the initial setup of your system, enabling a timely and accurate installation.
Important: You cannot fill in the checklist using the HTML page.
Download a Local Copy of the Editable Form
1. Download the Cisco HX Data Platform Checklist Form.
2. Open the local file and fill in the form.
3. Save the form.
4. Return the form to your Cisco account team and keep a copy of the form for your records.
Note: The root user is created with the same password as the admin user when the cluster is created. Keep track of the root user password, because future changes to the admin password do not automatically update the root password.
Supported Versions and System Requirements for Cisco HXDP
Cisco HX Data Platform requires specific software and hardware versions, and networking settings for successful installation.
For a complete list of requirements, see the Cisco HyperFlex Systems Installation Guide for your installation.
| Requirement | Link to Details |
|---|---|
| Complete list of hardware and software interdependencies | Hardware and Software Interoperability for Cisco HyperFlex HX-Series |
| Details on cluster limits and scalability | Cisco HX Data Platform Compatibility and Scalability Details section for your installed release |
| Verify that each component, on each server used with and within an HX Storage Cluster, is compatible | Recommended FI/Server Firmware section for your installed release |
| Confirm that the component firmware on the server meets the minimum supported versions | HyperFlex Edge and Firmware Compatibility Matrix for your installed release |
| HX Data Platform software versions for the HyperFlex Witness Node for Stretched Cluster | Software Requirements for HyperFlex Witness Node for Stretched Cluster section for your installed release |
| Verify that you are using compatible versions of Cisco HyperFlex Systems (HX) components and VMware vSphere, VMware vCenter, and VMware ESXi | Software Requirements for VMware ESXi section for your installed release |
| Verify that you are using compatible versions of Cisco HyperFlex Systems (HX) components and Microsoft Hyper-V components | Software Requirements for Microsoft Hyper-V section for your installed release |
| Verify that you are using compatible versions of Microsoft software | Supported Microsoft Software table for your installed release |
| List of recommended browsers | Browser Recommendations section for your installed release |
Contact Information
Customer Account Team and Contact Information
| Name | Title | Email | Phone |
|---|---|---|---|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
Equipment Shipping Address
- Company Name:
- Attention Name/Dept:
- Street Address #1:
- Street Address #2:
- City, State, and Zip:
- Data Center Floor and Room #:
Office Address (if different than shipping address)
- Company Name:
- Attention Name/Dept:
- Street Address #1:
- Street Address #2:
- City, State, and Zip:
Physical Requirements
Physical Server Requirements
- For an HX220c/HXAF220c cluster:
  - Two rack units (RU) for the UCS 6248UP, 6332UP, or 6332-16UP Fabric Interconnects (FI), or four RU for the UCS 6296UP FI.
  - HX220c nodes are one RU each; for example, a three-node cluster requires three RU and a four-node cluster requires four RU.
  - If a top-of-rack switch is included in the installation, add at least two additional RU of space for the switch.
- For an HX240c/HXAF240c cluster:
  - Two rack units (RU) for the UCS 6248UP, 6332UP, or 6332-16UP Fabric Interconnects (FI), or four RU for the UCS 6296UP FI.
  - HX240c nodes are two RU each; for example, a three-node cluster requires six RU and a four-node cluster requires eight RU.
  - If a top-of-rack switch is included in the installation, add at least two additional RU of space for the switch.

  Although contiguous rack space is not required, it makes installation easier.
- The system requires two C13/C14 power cords connected to a 15-amp circuit per device in the cluster. At a minimum, there are three HX nodes and two FIs; the cluster can scale to eight HX nodes, two FIs, and blade chassis.
- Two to four uplink connections per UCS Fabric Interconnect.
- Per best practice, each FI requires either 2x10 Gb optical connections into an existing network or 2x10 Gb Twinax cables. Each HX node requires two Twinax cables for connectivity (10 Gb optics can also be used). For deployments with 6300 series FIs, use 2x40 GbE uplinks per FI and connect each HX node with dual native 40 GbE.
- VIC and NIC support: for details, see the Cisco HyperFlex Systems—Networking Topologies document.

Note: Single FI HX deployments are not supported.
Network Requirements
Verify that your environment adheres to the following best practices:
- Use a different subnet and VLAN for each network.
- Verify that each host directly attaches to a UCS Fabric Interconnect using an appropriate cable.
- Do not use VLAN 1, the default VLAN, because it can cause networking issues, especially if a Disjoint Layer 2 configuration is used. Use a different VLAN.

  Important: Reserved VLAN IDs: VLANs with IDs from 4030 to 4047 and from 4094 to 4095 are reserved in UCS; you cannot use VLANs with IDs from these ranges. Until Cisco UCS Manager Release 4.0(1d), VLAN ID 4093 was in the list of reserved VLANs; it has since been removed from the list and is available for configuration.

  The VLAN IDs you specify must also be supported on the uplink switch that you are using. For example, VLAN IDs 3968 to 4095 are reserved by Nexus switches and VLAN IDs 1002 to 1005 are reserved by Catalyst switches. Before you decide the VLAN IDs for HyperFlex use, make sure that the same VLAN IDs are available on your uplink switch. (A quick validation sketch for these reserved ranges follows this list.)
- Configure the upstream switches to accommodate non-native VLANs. The Cisco HX Data Platform Installer sets the VLANs as non-native by default.
- Uplinks from the UCS Fabric Interconnects to all top-of-rack switch ports must have spanning tree configured in edge trunk or portfast edge mode, depending on the vendor and model of the switch. This extra configuration ensures that when links flap or change state, they do not transition through unnecessary spanning tree states and incur an extra delay before traffic forwarding begins. Failure to properly configure FI uplinks in portfast edge mode may result in network and cluster outages during failure scenarios and during infrastructure upgrades that leverage the highly available network design native to HyperFlex.
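If you want to screen your planned VLAN IDs against the reserved ranges called out above, a small script can flag conflicts early. This is a minimal sketch that uses only the ranges stated in this checklist (UCS 4030-4047 and 4094-4095, Nexus 3968-4095, Catalyst 1002-1005); the planned VLAN names and IDs are placeholders, and you should confirm the reserved ranges against your UCS Manager release and uplink switch documentation.

```python
# Screen planned VLAN IDs against the reserved ranges listed in this checklist.
# Verify these ranges against your UCS Manager release and uplink switch docs.
RESERVED = {
    "UCS":      [(4030, 4047), (4094, 4095)],
    "Nexus":    [(3968, 4095)],
    "Catalyst": [(1002, 1005)],
}

def conflicts(vlan_id: int) -> list[str]:
    """Return the platforms whose reserved ranges include vlan_id."""
    hits = [
        platform
        for platform, ranges in RESERVED.items()
        if any(low <= vlan_id <= high for low, high in ranges)
    ]
    if vlan_id == 1:
        hits.append("default VLAN (do not use)")
    return hits

# Placeholder VLAN plan; replace with your own names and IDs.
planned = {"hx-inband-mgmt": 110, "hx-storage-data": 3968, "vm-network": 1003}
for name, vid in planned.items():
    hits = conflicts(vid)
    status = f"reserved on {', '.join(hits)}" if hits else "OK"
    print(f"VLAN {vid:>4} ({name}): {status}")
```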
Each VMware ESXi host needs the following separate networks:
- Management traffic network—From VMware vCenter, handles hypervisor (ESXi server) management and storage cluster management.
- Data traffic network—Handles the hypervisor and storage data traffic.
- vMotion network
- VM network

There are four vSwitches, each carrying a different network:
- vswitch-hx-inband-mgmt—Used for ESXi management and storage controller management.
- vswitch-hx-storage-data—Used for ESXi storage data and HX Data Platform replication. The vswitch-hx-inband-mgmt and vswitch-hx-storage-data vSwitches are further divided into two port groups with assigned static IP addresses to handle traffic between the storage cluster and the ESXi host.
- vswitch-hx-vmotion—Used for VM and storage VMware vMotion. This vSwitch has one port group for management, defined through VMware vSphere, which connects to all hosts in the vCenter cluster.
- vswitch-hx-vm-network—Used for VM data traffic. You can add or remove VLANs on the corresponding vNIC templates in Cisco UCS Manager and create port groups on the vSwitch.
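As a post-configuration sanity check, you can confirm that all four vSwitches exist on a host. The following is a minimal sketch using pyVmomi against a single standalone ESXi host; the hostname and credentials are placeholders, and it assumes the default HyperFlex vSwitch names listed above.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Default HyperFlex vSwitch names from this checklist.
EXPECTED = {
    "vswitch-hx-inband-mgmt",
    "vswitch-hx-storage-data",
    "vswitch-hx-vmotion",
    "vswitch-hx-vm-network",
}

# Placeholder connection details for a single ESXi host.
si = SmartConnect(host="esxi-01.example.com", user="root", pwd="password",
                  sslContext=ssl._create_unverified_context())
try:
    # For a standalone ESXi connection, the first host in the inventory is the host itself.
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    found = {vs.name for vs in host.config.network.vswitch}
    missing = EXPECTED - found
    print("vSwitches found:", sorted(found))
    print("Missing:", sorted(missing) if missing else "none")
finally:
    Disconnect(si)
```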
Port Requirements
If your network is behind a firewall, open the ports that VMware recommends for VMware ESXi and VMware vCenter in addition to the standard port requirements.
- CIP-M is the cluster management IP.
- SCVM is the management IP for the controller VM.
- ESXi is the management IP for the hypervisor.

The comprehensive list of ports required for component communication in the HyperFlex solution is located in Appendix A of the HX Data Platform Security Hardening Guide.
Tip: If you do not have standard configurations and need different port settings, refer to Appendix A of the HX Data Platform Security Hardening Guide to customize your environment.
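Before installation, it can be useful to confirm that the firewall actually passes the ports you expect. The sketch below performs a simple TCP connect test; the host and port pairs are illustrative examples drawn from this checklist, and the authoritative port list is Appendix A of the HX Data Platform Security Hardening Guide. A successful TCP connection only proves reachability, not that the service behind the port is configured correctly.

```python
import socket

# Example (host, port) pairs drawn from this checklist; replace with your
# own addresses and the full list from the Security Hardening Guide.
CHECKS = [
    ("svc.intersight.com", 443),   # Intersight device connector
    ("vcenter.example.com", 443),  # vCenter SDK
    ("vcenter.example.com", 80),   # vCenter reverse HTTP proxy
    ("smtp.example.com", 25),      # Auto Support (ASUP) mail relay
]

def tcp_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CHECKS:
    state = "open" if tcp_open(host, port) else "blocked/unreachable"
    print(f"{host}:{port} -> {state}")
```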
HyperFlex External Connections
| External Connection | Description | IP Address/FQDN/Ports/Version | Essential Information |
|---|---|---|---|
| Intersight Device Connector | Supported HX systems are connected to Cisco Intersight through a device connector that is embedded in the management controller of each system. | HTTPS Port Number: 443; connector version 1.0.5-2084 or later (auto-upgraded by Cisco Intersight) | All device connectors must properly resolve svc.intersight.com and allow outbound-initiated HTTPS connections on port 443. The ESXi management IP addresses must be reachable from Cisco UCS Manager over all the ports listed as required from the installer to ESXi management, to ensure deployment of ESXi management from Cisco Intersight. For more information, see the Network Connectivity Requirements section of the Intersight Help Center. |
| Auto Support | Auto Support (ASUP) is the alert notification service provided through HX Data Platform. | SMTP Port Number: 25 | Enabling Auto Support is strongly recommended because it provides historical hardware counters that are valuable in diagnosing future hardware issues, such as a drive failure for a node. |
Intersight Connectivity
Consider the following prerequisites pertaining to Intersight connectivity:
- Before installing the HX cluster on a set of HX servers, make sure that the device connector on the corresponding Cisco IMC instance is properly configured to connect to Cisco Intersight and is claimed.
- Communication between CIMC and vCenter over ports 80, 443, and 8089 is required during the installation phase.
- All device connectors must properly resolve svc.intersight.com and allow outbound-initiated HTTPS connections on port 443. The current version of the HX Installer supports the use of an HTTP proxy. (See the connectivity check sketch after this list.)
- All controller VM management interfaces must properly resolve svc.intersight.com and allow outbound-initiated HTTPS connections on port 443. The current version of the HX Installer supports the use of an HTTP proxy if direct Internet connectivity is unavailable.
- IP connectivity (L2 or L3) is required from the CIMC management IP on each server to all of the following: ESXi management interfaces, HyperFlex controller VM management interfaces, and the vCenter server. Any firewalls in this path should be configured to allow the necessary ports, as outlined in the HyperFlex Hardening Guide.
- When redeploying HyperFlex on the same servers, new controller VMs must be downloaded from Intersight to all ESXi hosts. This requires each ESXi host to be able to resolve svc.intersight.com and allow outbound-initiated HTTPS connections on port 443. Use of a proxy server for controller VM downloads is supported and can be configured in the HyperFlex Cluster Profile if desired.
- After cluster deployment, the new HX cluster is automatically claimed in Intersight for ongoing management.
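A quick way to validate the DNS and outbound HTTPS prerequisites above from a management host is sketched below; the proxy URL is a placeholder and is only needed when direct Internet connectivity is unavailable.

```python
import socket
import urllib.error
import urllib.request

TARGET = "svc.intersight.com"
PROXY = None  # e.g. "http://proxy.example.com:3128" when there is no direct Internet access

# 1. Confirm the name resolves from this network.
print(f"{TARGET} resolves to {socket.gethostbyname(TARGET)}")

# 2. Confirm an outbound-initiated HTTPS connection on port 443 succeeds,
#    optionally through an HTTP proxy.
handlers = [urllib.request.ProxyHandler({"https": PROXY})] if PROXY else []
opener = urllib.request.build_opener(*handlers)
try:
    with opener.open(f"https://{TARGET}/", timeout=5) as resp:
        print(f"HTTPS reachable (HTTP {resp.status})")
except urllib.error.HTTPError as err:
    # An HTTP error status still proves the TLS connection was established.
    print(f"HTTPS reachable (HTTP {err.code})")
except OSError as err:
    print(f"HTTPS connection failed: {err}")
```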
Deployment Information
Before deploying HX Data Platform and creating a cluster, collect the following information about your system.
Cisco UCS Fabric Interconnects (FI) Information
- UCS cluster name:
- FI cluster IP address:
- UCS FI-A IP address:
- UCS FI-B IP address:
- Pool for KVM IP addresses (one per HX node is required):
- Subnet mask IP address:
- Default gateway IP address:
- MAC pool prefix (provide two hex characters): 00:25:B5:
- UCS Manager username: admin
- Password:
VLAN Information
Tag the VLAN IDs to the Fabric Interconnects.
Important: Reserved VLAN IDs: VLANs with IDs from 4030 to 4047 and from 4094 to 4095 are reserved in UCS; you cannot use VLANs with IDs from these ranges. Until Cisco UCS Manager Release 4.0(1d), VLAN ID 4093 was in the list of reserved VLANs; it has since been removed from the list and is available for configuration. The VLAN IDs you specify must also be supported on the uplink switch that you are using. For example, VLAN IDs 3968 to 4095 are reserved by Nexus switches and VLAN IDs 1002 to 1005 are reserved by Catalyst switches. Before you decide the VLAN IDs for HyperFlex use, make sure that the same VLAN IDs are available on your uplink switch.
Use a separate subnet and VLAN for each of the following networks.

| Network | VLAN ID | VLAN Name | Description |
|---|---|---|---|
| VLAN for VMware ESXi and Cisco HyperFlex (HX) management (hypervisor management network, storage controller management network) | | | Used for management traffic among ESXi, HX, and VMware vCenter; must be routable. |
| VLAN for HX storage traffic (hypervisor data network, storage controller data network) | | | Used for storage traffic; requires L2. |
| VLAN for VMware vMotion (vswitch-hx-vmotion) | | | Used for the vMotion VLAN, if applicable. |
| VLAN for VM network (vswitch-hx-vm-network) | | | Used for the VM/application network. |
Customer Deployment Information
Deploy the HX Data Platform using an OVF installer appliance. A separate ESXi server, which is not a member of the vCenter HX cluster, is required to host the installer appliance. The installer requires one IP address on the management network.

The installer appliance IP address must be reachable from the management subnet used by the hypervisor and the storage controller VMs. The installer appliance must run on an ESXi host, or in VMware Workstation or VMware Player, outside the cluster being installed. In addition, the HX Data Platform Installer VM IP address must be reachable from the Cisco UCS Manager, ESXi, and vCenter IP addresses where HyperFlex hosts are added.

- Installer appliance IP address:
Network IP Addresses
Management network IP addresses must be routable; data network IP addresses do not have to be routable.

| ESXi Hostname* | Hypervisor Management Network | Storage Controller Management Network | Hypervisor Data Network (not required for Cisco Intersight) | Storage Controller Data Network (not required for Cisco Intersight) |
|---|---|---|---|---|
| Server 1: | | | | |
| Server 2: | | | | |
| Server 3: | | | | |
| Server 4: | | | | |
| Server 5: | | | | |

| | Management Network | Data Network |
|---|---|---|
| Storage Cluster IP address | | |
| Subnet mask IP address | | |
| Default gateway IP address | | |
* Verify DNS forward and reverse records are created for each host. If no DNS records exist, hosts are added to vCenter by IP address instead of FQDN.
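The forward and reverse lookups can be spot-checked with a short script such as the following sketch; the ESXi hostnames are placeholders for the names you enter in the table above.

```python
import socket

# Placeholder ESXi hostnames; replace with the names planned in the table above.
HOSTS = ["hx-esxi-1.example.com", "hx-esxi-2.example.com", "hx-esxi-3.example.com"]

for name in HOSTS:
    try:
        ip = socket.gethostbyname(name)           # forward (A) record
        reverse, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR) record
        match = "OK" if reverse.rstrip(".").lower() == name.lower() else "MISMATCH"
        print(f"{name}: forward={ip}, reverse={reverse} [{match}]")
    except socket.gaierror:
        print(f"{name}: no forward DNS record")
    except socket.herror:
        print(f"{name}: forward record found, but no reverse DNS record")
```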
VMware vMotion Network IP Addresses
| VMware vMotion Network IP Addresses (not configured by software) |
|---|
| |
| |
| |
| |
| |
Hypervisor Credentials
- root username: root
- root password:
VMware vCenter Configuration
Note: HyperFlex communicates with vCenter through standard ports. Port 80 is used for the reverse HTTP proxy. Port 443 is used for secure communication to the vCenter SDK and may not be changed.
- vCenter FQDN or IP address:
- vCenter admin username (username@domain):
- vCenter admin password:
- vCenter data center name:
- VMware vSphere compute cluster and storage cluster name:
Single Sign-On (SSO)
- SSO Server URL*:

* The SSO Server URL can be found in vCenter under the advanced setting key config.vpxd.sso.sts.uri.
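If you prefer to retrieve the SSO Server URL programmatically instead of browsing the vCenter settings, a sketch along the following lines reads the advanced setting through pyVmomi; the vCenter address and credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder vCenter connection details.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    # Look up the vCenter advanced setting that holds the SSO server URL.
    for opt in si.content.setting.setting:
        if opt.key == "config.vpxd.sso.sts.uri":
            print("SSO Server URL:", opt.value)
            break
finally:
    Disconnect(si)
```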
Network Services
- DNS servers (primary DNS server IP address, secondary DNS server IP address, ...):
- NTP servers (primary NTP server IP address, secondary NTP server IP address, ...):
- Time zone (for example, US/Eastern or US/Pacific):
Connected Services
- Enable Connected Services (recommended); Yes or No required:
- Email for service request notifications (for example, name@company.com):
Contacting Cisco TAC
You can open a Cisco Technical Assistance Center (TAC) support case to reduce the time spent addressing issues and to get efficient support.
For all customers, partners, resellers, and distributors with valid Cisco service contracts, Cisco Technical Support provides around-the-clock, award-winning technical support services. The Cisco Technical Support website provides online documents and tools for troubleshooting and resolving technical issues with Cisco products and technologies:
http://www.cisco.com/techsupport
Using the TAC Support Case Manager online tool is the fastest way to open S3 and S4 support cases. (S3 and S4 support cases consist of minimal network impairment issues and product information requests.) After you describe your situation, the TAC Support Case Manager automatically provides recommended solutions. If your issue is not resolved by using the recommended resources, TAC Support Case Manager assigns your support case to a Cisco TAC engineer. You can access the TAC Support Case Manager from this location:
https://mycase.cloudapps.cisco.com/case
For S1 or S2 support cases or if you do not have Internet access, contact the Cisco TAC by telephone. (S1 or S2 support cases consist of production network issues, such as a severe degradation or outage.) S1 and S2 support cases have Cisco TAC engineers assigned immediately to ensure your business operations continue to run smoothly.
To open a support case by telephone, use one of the following numbers:
- Asia-Pacific: +61 2 8446 7411
- Australia: 1 800 805 227
- EMEA: +32 2 704 5555
- USA: 1 800 553 2447
For a complete list of Cisco TAC contacts for Enterprise and Service Provider products, see http://www.cisco.com/c/en/us/support/web/tsd-cisco-worldwide-contacts.html.
For a complete list of Cisco Small Business Support Center (SBSC) contacts, see http://www.cisco.com/c/en/us/support/web/tsd-cisco-small-business-support-center-contacts.html.
Communications, Services, Bias-free Language, and Additional Information
- To receive timely, relevant information from Cisco, sign up at Cisco Profile Manager.
- To get the business impact you're looking for with the technologies that matter, visit Cisco Services.
- To submit a service request, visit Cisco Support.
- To discover and browse secure, validated enterprise-class apps, products, solutions, and services, visit Cisco Marketplace.
- To obtain general networking, training, and certification titles, visit Cisco Press.
- To find warranty information for a specific product or product family, access Cisco Warranty Finder.
Documentation Feedback
To provide feedback about Cisco technical documentation, use the feedback form available in the right pane of every online document.
Cisco Bug Search Tool
Cisco Bug Search Tool (BST) is a web-based tool that acts as a gateway to the Cisco bug tracking system that maintains a comprehensive list of defects and vulnerabilities in Cisco products and software. BST provides you with detailed defect information about your products and software.
Bias-Free Language
The documentation set for this product strives to use bias-free language. For purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on standards documentation, or language that is used by a referenced third-party product.