Cisco HyperFlex Systems Installation Guide for VMware ESXi, Release 3.0
The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.
Keep SSH enabled on all ESXi hosts. This is required for the following Cisco HyperFlex post-cluster configuration operations.
Do not change these pre-configured values without approval from Cisco.
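If SSH has been disabled on a host, it can be re-enabled from the ESXi console or shell; the following is a minimal sketch using the standard vim-cmd utility (an illustration, not an HX-specific command):
# vim-cmd hostsvc/enable_ssh
# vim-cmd hostsvc/start_ssh
The first command sets the SSH service policy so it starts with the host; the second starts the service immediately.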
Enabling PCI Passthrough for a Network Device on a Host
Passthrough devices provide the means to more efficiently use resources and improve performance in your environment. Enabling
PCI passthrough allows a VM to use a host device as if the device were directly attached to the VM.
The following procedure describes how to configure a network device (such as NVIDIA GPUs) for PCI passthrough on an ESXi host.
In vSphere Client, browse to the ESXi host in the Navigator panel.
Enter HX maintenance mode on the node that has the GPUs installed. To enter maintenance mode, right-click the node > Cisco HX Maintenance Mode > Enter HX Maintenance Mode.
In a new browser window, log in directly to the ESXi node.
Under the Hardware tab, click PCI Devices. A list of available passthrough devices appears.
Select the PCI device you want to enable for passthrough, and click Toggle passthrough.
Reboot the host to make the PCI device available for use.
When the reboot completes, ensure that the node is not in maintenance mode.
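As a quick check from the ESXi shell, the standard esxcli command below reports the hypervisor's maintenance-mode state, which should show Disabled once the node has exited HX maintenance mode (an illustrative check, not an HX-specific step):
# esxcli system maintenanceMode get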
Log in to vCenter Server.
Locate the VM, right-click it, and select Edit Settings.
From the New device drop-down, select PCI Device, and click Add.
Click the passthrough device to use (example: NVIDIA GPU) and click OK.
Log in to the ESXi host and open the virtual machine configuration file (.vmx) in a text editor.
Add the following lines, save, and exit the text editor.
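The exact entries depend on the device; for large-BAR GPUs such as NVIDIA cards, the standard VMware 64-bit MMIO passthrough options are commonly used. The lines below are an illustrative sketch only, and the size value is an assumption that should be adjusted to match the installed GPU memory:
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"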
To complete the post-installation tasks, you can run a post-installation script on the Installer VM.
Ensure that you run post_install and confirm network operation immediately following the deployment of the HyperFlex System.
Use an SSH client to connect to the shell on the installer VM.
Log in with installer VM root credentials.
Type post_install and press Enter.
Set the post-install script parameters as specified in the following table:
If you run into any post-install script issues, set the post-install script parameters manually.
Enable HA/DRS on cluster?
Enables vSphere High Availability (HA) feature per best practice.
Disable SSH warning?
Suppresses the SSH and shell warnings in the vCenter.
Add vMotion interfaces
Configure vMotion interfaces per best practice. Requires IP address and VLAN ID input.
Add VM network VLANs
Add additional guest VLANs to Cisco UCS Manager and within ESXi on all cluster hosts.
Correct network errors reported, if any.
Sample Post-Install Script
Select post_install workflow-
1. New/Existing Cluster
2. Expanded Cluster
3. Generate Certificate
Note: Workflow No.3 is mandatory to have unique SSL certificate in the cluster.
By Generating this certificate, it will replace your current certificate.
If you're performing cluster expansion, then this option is not required.
Certificate generation workflow selected
Logging in to controller 10.20.1.64
HX CVM admin password:
Getting ESX hosts from HX cluster...
Select Certificate Generation Workflow-
1. With vCenter
2. Without vCenter
vCenter URL: 10.33.16.40
Enter vCenter username (user@domain): email@example.com
Starting certificate generation and re-registration.
Trying to retrieve vCenterDatacenter information ....
Trying to retrieve vCenterCluster information ....
Certificate generated successfully.
Cluster re-registration in progress ....
Cluster re-registered successfully.
Sample Network Errors
No errors found
No errors found
No errors found
No errors found
controller VM clocks:
stctlVM-FCH1946V34Y - 2016-09-16 22:34:04
stCtlVM-FCH1946V23M - 2016-09-16 22:34:04
stctlVM-FCH1951V2TT - 2016-09-16 22:34:04
stctlVM-FCH2004VINS - 2016-09-16 22:34:04
Version - 1.8.1a-19499
Model - HX220C-M4S
Health - HEALTHY
Access policy - LENIENT
ASUP enabled - False
SMTP server - smtp.cisco.com
Changing ESXi Host Root Password
You can change the default ESXi password for the following scenarios:
During creation of a standard and stretch cluster (supports only converged nodes)
During expansion of a standard cluster (supports both converged or compute node expansion)
During Edge cluster creation
In the above cases, the ESXi root password is secured as soon as installation is complete. In the event a subsequent password
change is required, the procedure outlined below may be used after installation to manually change the root password.
As the ESXi comes up with the factory default password, you should change the password for security reasons. To change the
default ESXi root password post-installation, do the following.
If you have forgotten the ESXi root password, for password recovery please contact Cisco TAC.
Log in to the ESXi host service control using SSH.
Acquire root privileges.
Enter the current root password.
Change the root password.
Enter the new password, and press Enter. Enter the password a second time for confirmation.
If the password entered the second time does not match, you must start over.
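For illustration, after connecting to the host over SSH and acquiring root privileges, the standard ESXi passwd utility performs the change; a minimal sketch (you are prompted to enter the new password twice):
# passwd root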
Changing Storage Controller Password
To reset the HyperFlex storage controller password post-installation, do the following.
Log in to a storage controller VM.
Change the Cisco HyperFlex storage controller password.
# stcli security password set
This command applies the change to all the controller VMs in the storage cluster.
If you add new compute nodes and try to reset the cluster password using the stcli security password set command, the converged nodes get updated, but the compute nodes may still have the default password. To change the compute
node password, use the following procedure.
To change the password on compute nodes:
vMotion all the user VMs off the ESXi hosts.
Launch the storage controller VM console from vCenter and log in as the root user.
Run the passwd command to change the password.
Log out and log back in to confirm that the password changed successfully.
Run the stcli node add -f command to add the node back into the cluster.
Type in the new password.
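Putting the steps together, the commands named above are run in this order on the controller VM of each compute node (a summary sketch; the interactive password prompts are omitted):
# passwd
# stcli node add -f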
Access the HX Data Platform Plugin through vSphere
To manage your storage cluster through a GUI, launch the vSphere Web Client. You access your storage cluster through the vSphere Web Client and HX Data Platform plug-in.
From the HX Data Platform installer, after installation is completed, on the Summary page, click Launch vSphere Web Client.
From the login page, click Login to vSphere Web Client and enter your vSphere credentials.
View the HX Data Platform plug-in. From the vSphere Web Client Navigator, select vCenter Inventory Lists > Cisco HyperFlex Systems > Cisco HX Data Platform.
Add Datastores in the Storage Cluster
A new HyperFlex cluster has no default datastores configured for virtual machine storage, so the datastores must be created
using VMware vSphere Web Client.
A minimum of two datastores is recommended for high availability.
From the vSphere Web Client Navigator, expand Global Inventory Lists > Cisco HyperFlex Systems > Cisco HX Data Platform > cluster > Manage > Datastores.
Click the Create Datastore icon.
Enter a Name for the datastore. The vSphere Web Client enforces a 42-character limit for the datastore name. Assign each datastore a unique name.
Specify the Size for the datastore. Choose GB or TB from the drop-down list. Click OK.
Click the Refresh button to display your new datastore.
Click the Hosts tab to see the Mount Status of the new datastore.
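If you prefer the controller VM command line to the Web Client, recent HX Data Platform releases also expose datastore operations through stcli; the following is a hedged sketch with a placeholder name and size (flags may vary by release, so verify with stcli datastore create -h):
# stcli datastore create --name ds1 --size 1 --unit tb
# stcli datastore list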
Under the vSphere HA settings, ensure that you set the Datastore for Heartbeating option to allow selecting any datastore from the list of available datastores.
Verify that DRS is enabled.
From vSphere Home > Hosts and Clusters > cluster > Configure > Services, click vSphere DRS.
Click the Edit button. Then click vSphere HA and click Edit.
Select Turn on vSphere HA if it is not selected.
Expand Admission Control > Define Failover capacity by > Cluster resource percentage from the drop-down menu. You may use the default value or enable Override calculated failover capacity and enter a percentage.
Expand Heartbeat Datastores and select Use datastore only from the specified list. Select which datastores to include.
Auto Support and Smart Call Home for HyperFlex
You can configure the HX storage cluster to send automated email notifications regarding
documented events. You can use the data collected in the notifications to help
troubleshoot issues in your HX storage cluster.
Auto Support (ASUP) and Smart Call Home (SCH) support the use of a proxy server. You can
enable the use of a proxy server and configure proxy settings for both using HX Connect.
Auto Support (ASUP)
Auto Support is the alert notification service provided through HX Data Platform. If you enable Auto Support, notifications are sent from HX Data Platform to designated email addresses or email aliases that you want to receive the
notifications. Typically, Auto Support is configured during HX storage cluster creation by configuring the SMTP mail server and adding email recipients.
Only unauthenticated SMTP is supported for ASUP.
If the Enable Auto Support check box was not selected during configuration, Auto Support can be enabled post-cluster creation using the following methods:
Auto Support can also be used to connect your HX storage cluster to monitoring tools.
Smart Call Home (SCH)
Smart Call Home is an automated support capability that monitors your HX storage clusters and then flags issues and initiates resolution
before your business operations are affected. This results in higher network availability and increased operational efficiency.
Call Home is a product feature embedded in the operating system of Cisco devices that detects and notifies the user of a variety
of fault conditions and critical system events. Smart Call Home adds automation and convenience features to enhance basic Call Home functionality. After Smart Call Home is enabled, Call Home messages/alerts are sent to Smart Call Home.
Smart Call Home is included with many Cisco service contracts and includes:
Automated, around-the-clock device monitoring, proactive diagnostics, real-time email alerts, service ticket notifications,
and remediation recommendations.
Proactive messaging sent to your designated contacts by capturing and processing Call Home diagnostics and inventory alarms.
These email messages contain links to the Smart Call Home portal and the TAC case if one was automatically created.
Expedited support from the Cisco Technical Assistance Center (TAC). With Smart Call Home, if an alert is critical enough, a TAC case is automatically generated and routed to the appropriate support team through
https, with debug and other CLI output attached.
Customized status reports and performance analysis.
Web-based access to: all Call Home messages, diagnostics, and recommendations for remediation in one place; TAC case status;
and up-to-date inventory and configuration information for all Call Home devices.
Typically, Auto Support (ASUP) is configured during HX storage cluster creation. If it was not, you can enable it post cluster
creation using the HX Connect user interface.
Log in to HX Connect.
In the banner, click Edit settings (gear icon) > Auto Support Settings and fill in the following fields.
Enable Auto Support (Recommended) check box
Configures Call home for this HX storage cluster by enabling:
Data delivery to Cisco TAC for analysis.
Notifications from Support as part of proactive support.
Send service ticket notifications to field
Enter the email address that you want to receive the notifications.
Terms and Conditions check box
End user usage agreement. The check box must be checked to use the Auto-Support feature.
Use Proxy Server check box
Web Proxy Server url
In the banner, click Edit settings (gear icon) > Notifications Settings and fill in the following fields.
Send email notifications for alarms check box
If checked, fill in the following fields:
Mail Server Address
From Address—Enter the email address used to identify your HX storage cluster in Support service tickets, and as the sender for Auto Support
notifications. Support information is currently not sent to this email address.
Recipient List (Comma separated)
Configuring Notification Settings Using CLI
Use the following procedure to configure and verify that you are set up to receive alarm notifications from your HX storage cluster.
Only unauthenticated SMTP is supported for ASUP.
Log in to a storage controller VM in your HX storage cluster using ssh.
Configure the SMTP mail server, then verify the configuration.
Email address used by the SMTP mail server to send email notifications to designated recipients.
Syntax: stcli services smtp set [-h] --smtp SMTPSERVER --fromaddress FROMADDRESS
# stcli services smtp set --smtp mailhost.eng.mycompany.com --fromaddress firstname.lastname@example.org
# stcli services smtp show
Enable ASUP notifications.
# stcli services asup enable
Add recipient email addresses, then verify the configuration.
List of email addresses or email aliases to receive email notifications. Separate multiple emails with a space.
Syntax: stcli services asup recipients add --recipients RECIPIENTS
# stcli services asup recipients add --recipients email@example.com firstname.lastname@example.org
# stcli services asup show
From the controller VM that owns the eth1:0 IP address for the HX storage cluster, send a test ASUP notification to your email.
# sendasup -t
To determine the node that owns the eth1:0 IP address, log in to each storage controller VM in your HX storage cluster using
ssh and run the ifconfig command. Running the sendasup command from any other node does not return any output and tests are not received by recipients.
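For example, this standard Linux check, run on each controller VM, reveals which node currently holds the floating interface (illustrative only; on the owning node the inet field shows the cluster IP, and on the other nodes the alias is absent):
# ifconfig eth1:0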
Configure your email server to allow email to be sent from the IP address of all the storage controller VMs.
Configuring Smart Call Home for Data Collection
Data collection is enabled by default, but you can opt out (disable it) during installation. You can also enable data collection
post cluster creation. During an upgrade, Smart Call Home is set up based on your legacy configuration. For example, if stcli services asup show reports Auto Support as enabled, Smart Call Home is enabled on upgrade.
Data collection about your HX storage cluster is forwarded to Cisco TAC through https. If you have a firewall installed, configuring a proxy server for Smart Call Home is completed post cluster creation.
In HyperFlex Data Platform release 2.5(1.a), Smart Call Home Service Request (SR) generation does not use a proxy server.
Using Smart Call Home requires the following:
A Cisco.com ID associated with a corresponding Cisco Unified Computing Support Service or Cisco Unified Computing Mission
Critical Support Service contract for your company.
Cisco Unified Computing Support Service or Cisco Unified Computing Mission Critical Support Service for the device to be registered.
Log in to a storage controller VM in your HX storage cluster.
Register your HX storage cluster with Support.
Registering your HX storage cluster adds identification to the collected data and automatically enables Smart Call Home. To register your HX storage cluster, you need to specify an email address. After registration, this email address receives
support notifications whenever there is an issue and a TAC service request is generated.
Upon configuring Smart Call Home in HyperFlex, an email will be sent to the configured address containing a link to complete
registration. If this step is not completed, the device will remain in an inactive state and an automatic Service Request
will not be opened.
stcli services sch set [-h] --email EMAILADDRESS
# stcli services sch set --email email@example.com
Verify data flow from your HX storage cluster to Support is operational.
Operational data flow ensures that pertinent information is readily available to help Support troubleshoot any issues that arise.
The --all option runs the command on all the nodes in the HX cluster.
# asupcli [--all] ping
If you upgraded your HX storage cluster from HyperFlex 1.7.1 to 2.1.1b, also run the following command:
# asupcli [--all] post --type alert
Contact Support if you receive the following error:
root@ucs-stctlvm-554-1:/tmp# asupcli post --type alert
/bin/sh: 1: ansible: not found
Failed to post - not enough arguments for format string
(Optional) Configure a proxy server to enable Smart Call Home access through port 443.
If your HX storage cluster is behind a firewall, you must configure the Smart Call Home proxy server after cluster creation. Support collects data at the https://diag.hyperflex.io:443 endpoint.
Clear any existing registration email and proxy settings.
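On current HX Data Platform releases, clearing the existing Smart Call Home registration is typically done from the controller VM with the command below (shown as an assumption; verify it against your release), after which you re-register with stcli services sch set --email as shown earlier, supplying your proxy details:
# stcli services sch clear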
Ping to verify the proxy server is working and data can flow from your HX storage cluster to the Support location.
# asupcli [--all] ping
The --all option runs the command on all the nodes in the HX cluster.
Verify Smart Call Home is enabled.
When Smart Call Home configuration is set, it is automatically enabled.
# stcli services sch show
If Smart Call Home is disabled, enable it manually.
# stcli services sch enable
Enable Auto Support (ASUP) notifications.
Typically, Auto Support (ASUP) is configured during HX storage cluster
creation. If it was not, you can enable it post cluster creation using HX
Connect or CLI.
Creating a replication cluster pair is a pre-requisite for setting up VMs for replication. The replication network and at
least one datastore must be configured prior to creating the replication pair.
By pairing cluster 1 with cluster 2, you are specifying that all VMs on cluster 1 that are explicitly set up for replication
can replicate to cluster 2, and that all VMs on cluster 2 that are explicitly set up for replication can replicate to cluster 1.
By pairing a datastore A on cluster 1 with a datastore B on cluster 2, you are specifying that for any VM on cluster 1 that
is set up for replication, if it has files in datastore A, those files will be replicated to datastore B on cluster 2. Similarly,
for any VM on cluster 2 that is set up for replication, if it has files in datastore B, those files will be replicated to
datastore A on cluster 1.
Pairing is strictly 1-to-1. A cluster can be paired with no more than one other cluster. A datastore on a paired cluster can be paired with no more than one datastore on the other cluster.
A private VLAN
partitions the Layer 2 broadcast domain of a VLAN into subdomains, allowing you
to isolate the ports on the switch from each other. A subdomain consists of a
primary VLAN and one or more secondary VLANs. A private VLAN domain has only
one primary VLAN. Each port in a private VLAN domain is a member of the primary
VLAN, and the primary VLAN is the entire private VLAN domain.
Private VLAN Ports
Table 1. Types of Private VLAN Ports
Promiscuous Primary VLAN
Belongs to the primary VLAN. Can communicate with all interfaces that belong to those secondary VLANs that are associated
to the promiscuous port and associated with the primary VLAN. Those interfaces include the community and isolated host ports.
All packets from the secondary VLANs go through this VLAN.
Isolated Secondary VLAN
Host port that belongs to an isolated secondary VLAN. This port has complete isolation from other ports within the same private
VLAN domain, except that it can communicate with associated promiscuous ports.
Community Secondary VLAN
Host port that belongs to a community secondary VLAN. Community ports communicate with other ports in the same community VLAN
and with associated promiscuous ports.
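To make the terminology concrete, the following is a hedged sketch of how a primary/secondary private VLAN set might be defined on an upstream Cisco Nexus switch (illustrative VLAN IDs; this upstream configuration is an assumption for context and is not part of the HX installer workflow):
feature private-vlan
vlan 100
  private-vlan primary
  private-vlan association 101-102
vlan 101
  private-vlan isolated
vlan 102
  private-vlan community
Here VLAN 100 is the primary, VLAN 101 is an isolated secondary, and VLAN 102 is a community secondary, matching the port types described in the table above.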
Following HX deployment, a VM network uses a regular VLAN by default. To use a Private VLAN for the VM network, use the following procedure to migrate the VMs
from the vSphere standard switch to the newly created vSphere distributed switch.
Right-click the vCenter Virtual Machine and click Migrate Virtual Machine Networking.
Select the source network and destination network from the drop-down list.
Select the Virtual Machines that you want to migrate.
Change the connection of the network adapter on the VMs to the private VLAN.
Right-click the vCenter Virtual Machine and click Edit Settings.
On the Hardware tab, select the network adapter you want to change. Select the Network Connection you want to use from the Network Label drop-down list.
Deleting VMNICs on the vSphere Standard Switch
Log on to VMware vSphere Client.
Select the ESX host from which you want to delete the VMNIC.
Select the switch you wish to remove a VMNIC from.
Click the Manage the physical adapters connected to the selected switch button.
Select the vmnic you want to delete and click Remove. Confirm your selection by clicking Yes.
Log on to the VMware vSphere Client.
Select Home > Networking.
Right-click on the cluster and select Distributed Switch > New Distributed Switch.
In the Name and Location dialog box, enter a name for the distributed switch.
In the Select Version dialog box, select the distributed switch version that correlates to your version and configuration requirements.
In the Edit Settings dialog box, specify the following:
Number of uplink ports
Enable Network I/O Control.
Ensure that Create a default port group is checked.
Enter the default port group name in the Port Group Name box.
Review the settings in the Ready to complete dialog box.
Creating Private VLANs on vSphere Distributed Switch
From the VMware vSphere Client, select Networking.
Right-click on the dvSwitch.
Click Edit Settings.
On the Primary private VLAN ID tab, enter a private VLAN ID.
On the Secondary private VLAN ID tab, enter a private VLAN ID.
Select the type of VLAN from the Type drop-down list. Valid values include Isolated and Community.
Set Private VLAN in Distributed Port Group
Before you begin
Create Private VLANs on the vSphere Distributed Switch.
Right-click the distributed port group under the dvSwitch, and click Edit Settings.
Select Private VLAN from the VLAN type drop-down list.
From the Private VLAN Entry drop-down list, select the type of private VLAN. It can be one of the following:
Community private VLAN is recommended.
Promiscuous ports are not supported.
Distributed Virtual Switches and Cisco Nexus 1000v
Considerations when Deploying Distributed Switches
Using Distributed Virtual Switches (DVS) or Cisco Nexus 1000v (N1Kv) is an optional step, not a required one.
DVS for your vMotion network is available only if your environment has an Enterprise Plus License for vSphere.
You can use only one of the two switches at a given time.
There may be a potential conflict between the Quality of Service (QoS) policy for HyperFlex and Nexus 1000v. Make sure that
the QoS classes for N1Kv are set as per the HyperFlex policy. See Creating a QoS Policy, in the Network and Storage Management Guide.
If you choose to deploy the N1Kv switch, apply the settings as described, so that the traffic between the HyperFlex hosts flows
locally on the FIs in a steady state. If not configured accurately, it could lead to a situation where most traffic goes
through the upstream switches, leading to latency. To avoid that scenario, ensure that the Storage Controller, Management
Network, and vMotion port groups are configured with active/standby and failover enabled.
Distributed switches ensure that each node is using the same configuration. They help prioritize traffic and allow other network
streams to utilize available bandwidth when no vMotion traffic is active.
The HyperFlex (HX) Data Platform can use Distributed Virtual Switch (DVS) networks for non-HyperFlex dependent networks.
These non-HX dependent networks include the VMware vMotion network and VMware application (guest VM) networks.
The HX Data Platform has a dependency that the following networks use standard vSwitches:
vswitch-hx-storage-data: Storage Hypervisor Data Network
vswitch-hx-storage-data: Storage Controller Data Network
During HX Data Platform installation, all the networks are configured with standard vSwitch networks. After the storage cluster is configured, the non-HX dependent networks can be migrated to DVS networks. For example:
vswitch-hx-vm-network: VM Network
For further details on how to migrate the vMotion network to Distributed Virtual Switches, see the Migrating vMotion Networks to Distributed Virtual Switches (DVS) or Cisco Nexus 1000v (N1Kv) section in the Network and Storage Management Guide.
AMD FirePro S7150 series GPUs are supported in HX240c M5 nodes. These graphic accelerators enable highly secure, high performance,
and cost-effective VDI deployments. Follow the steps below to deploy AMD GPUs in HyperFlex.
For the service profiles attached to the servers, modify the BIOS policy.