Introduction

Cisco HyperFlex™ Systems unlock the full potential of hyperconvergence. The systems are based on an end-to-end software-defined infrastructure, combining software-defined computing in the form of Cisco Unified Computing System (Cisco UCS) servers, software-defined storage with the powerful Cisco HX Data Platform, and software-defined networking with the Cisco UCS fabric that integrates smoothly with Cisco Application Centric Infrastructure (Cisco ACI). Together with a single point of connectivity and hardware management, these technologies deliver a pre-integrated and adaptable cluster that is ready to provide a unified pool of resources to power applications as your business needs dictate.

These release notes pertain to Cisco HX Data Platform, Release 2.x, and describe the features, limitations, and caveats for the Cisco HX Data Platform.

Revision History

Release | Date | Description
2.1(1c) | August 15, 2017 | Updated release notes for Cisco HX Data Platform Software, Release 2.1(1c).
2.1(1b) | July 24, 2017 | Added CSCve76825 - Behavior Change.
2.1(1b) | June 21, 2017 | Added UCS B260 M4, B420 M4, B460 M4, and C460 M4 servers in the "Cisco HX Data Platform supported compute-only nodes" section.
2.1(1b) | May 17, 2017 | Updated the "Upgrade Guidelines" section for Release 2.1(1b).
2.1(1b) | May 15, 2017 | Added CSCve34343 to Open Caveats for Cisco HX Data Platform Software, Release 2.1(1b).
2.1(1b) | May 4, 2017 | Added support for 240GB Boot SSD for Cisco HX Data Platform Software, Release 2.1(1b).
2.1(1b) | April 28, 2017 | Updated release notes for Cisco HX Data Platform Software, Release 2.1(1b).
2.0 | March 6, 2017 | Created release notes for Cisco HX Data Platform Software, Release 2.0.

New Features in Release 2.X

New Features in Release 2.1(1b)

  • HyperFlex Systems support for Self-Encrypting Drives (SEDs):

    • Installation, cluster creation, and use of HyperFlex Systems with SEDs. For more details, refer to the Cisco HX240c M4 HyperFlex Node Installation Guide and the Cisco HX220c M4 HyperFlex Node Installation Guide.

    • HX Plug-in reporting of SED-capable systems.

  • Enhanced Call Home support—Added an option to collect support logs through HTTPS. For more details, refer to the Cisco HyperFlex Systems Installation Guide for VMware ESXi.

  • Cisco HyperFlex Edge—HyperFlex Systems solution for remote and branch office (ROBO) and edge environments. This feature is supported in release 2.0(1a) and later. For more details, refer to the Cisco HyperFlex Edge Deployment Guide.

  • Enhanced cluster scaling:

    • 32-node cluster support for HXAF220 and HXAF240 nodes.

    • Up to 16 All Flash HX nodes + 16 compute-only nodes.

    • Linear performance and capacity scaling.

  • Support for adding HX nodes into a new or existing UCS-FI domain with Cisco UCS 6300 Series Fabric Interconnects (UCS 6332 FI, UCS 6332-16UP FI), the Cisco VIC 1387 network adapter, and direct connectivity to 40 Gb ports. The ASIC revision (VID) of the VIC 1387 must be B2 or later when used with HyperFlex.


    Note

    • For 6332, a maximum of 31 converged nodes are supported.

    • For 6332-16UP, a maximum of 23 converged nodes are supported.


  • Support for 240GB Boot SSD—To add a node that uses a 240GB boot drive, existing HX clusters running releases earlier than 1.8(1f) must be upgraded to 1.8(1f) or later. This guidance does not apply to new clusters or to clusters already running 1.8(1f) or later releases.

  • Optimizations in Capacity Tier—Backend access is optimized to significantly reduce the magnitude and frequency of high latency spikes.

    Important Upgrade Guidelines

    • This upgrade is recommended only for those customers who have been identified as having this problem.

    • For hybrid clusters—The default upgrade process will not enable this optimization. Contact Cisco TAC to enable this performance enhancement during the upgrade process. Enabling this optimization will require a longer maintenance window.

    • For All Flash clusters—The upgrade times will not be significantly affected and the default upgrade path will enable this performance enhancement.

Behavior Change in 2.1(1b)

Configuring Auto Support Using CLI—From the controller VM that owns the eth1:0 IP address for the HX storage cluster, send a test ASUP notification to your email.

# sendasup -t

To determine the node that owns the eth1:0 IP address, log in to each storage controller VM in your HX storage cluster using ssh and run the ifconfig command. Running the sendasup command from any other node does not return any output and tests are not received by recipients.


Note

Up to HyperFlex 2.0, the sendasup command could be run from any node in the HX storage cluster. Starting in HyperFlex 2.1, the functionality changed and the command must be run on the controller VM that owns the eth1:0 IP address for the HX storage cluster.
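
The following is a minimal shell sketch that automates the check described in this section. It assumes passwordless SSH from an administration host to the storage controller VMs, and the 192.0.2.x addresses are placeholders for your controller VM management IPs; the ifconfig and sendasup commands are the same ones shown above.

# Find the storage controller VM that owns the eth1:0 interface
# (only that VM returns an "inet addr" line for eth1:0).
for scvm in 192.0.2.11 192.0.2.12 192.0.2.13; do
    ssh root@"$scvm" 'ifconfig eth1:0 2>/dev/null | grep -q "inet addr" && echo "$(hostname) owns eth1:0"'
done
# On the controller VM identified above, send the test ASUP notification:
# sendasup -t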

New Features in Release 2.0(1a)

  • Cisco HyperFlex Data Platform Software Enhancements:

    • Support for all flash storage using SSDs for persistent storage.

      SSDs are used for housekeeping and persistent storage on all flash servers. Hybrid servers continue to use SSDs for housekeeping and HDDs for persistent storage. HDDs and SSDs for persistent storage cannot be combined on HX servers. The HX Data Platform Installer identifies the type of storage during installation. For more details, refer to the Cisco HyperFlex Systems Installation Guide for VMware ESXi or Cisco HyperFlex Data Platform Administration Guide.

    • Support for adding external storage (iSCSI, FC, and FCoE storage) during HX Data Platform software installation. For more details, refer to the Cisco HyperFlex Systems Installation Guide for VMware ESXi.

  • Upgrade enhancements—Support for UI-based offline upgrade. For more details, refer to the Cisco HyperFlex Systems Upgrade Guide.

  • Support for adding HX nodes in an existing UCS-FI domain. Contact Cisco TAC for performing this procedure.

  • Operational improvements—Improved cluster start up time.

  • Cisco HyperFlex Sizer—End-to-end sizing tool for compute, capacity, and performance. This sizing tool is a cloud-based application that can be conveniently accessed from anywhere at https://hyperflexsizer.cloudapps.cisco.com/ (CCO login required).

Supported Versions and System Requirements

Cisco HX Data Platform requires specific software and hardware versions, and networking settings for successful installation.

For a complete list of requirements, see the following sections:

Hardware and Software Interoperability

For a complete list of hardware and software inter-dependencies, refer to the Hardware and Software Interoperability for Cisco HyperFlex HX-Series document for your Cisco UCS Manager release.

Software Requirements

The software requirements include verification that you are using compatible versions of Cisco HyperFlex Systems (HX) components and VMware vSphere components.

HyperFlex Software Versions

The HX components—Cisco HX Data Platform Installer, Cisco HX Data Platform, and Cisco UCS firmware—are installed on different servers. Verify that each component on each server used with and within an HX Storage Cluster is compatible.

  • Verify that the preconfigured HX servers have the same version of Cisco UCS Server Firmware installed. If the Cisco UCS Fabric Interconnects (FI) firmware versions are different, see the Cisco HyperFlex Systems Upgrade Guide for steps to align the firmware versions.

  • For new hybrid or All Flash (Cisco HyperFlex HX240c M4 or HX220c M4) deployments, verify that Cisco UCS Manager 3.1(2g) or later is installed. Contact Cisco TAC for guidance.

  • To reinstall an HX server, download supported and compatible versions of the software. See the Cisco HyperFlex Systems Installation Guide for VMware ESXi for the requirements and steps.
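
Before comparing against the table below, it can help to confirm which HX Data Platform version a cluster is actually running. A minimal sketch, run from any storage controller VM; the grep filter is only a convenience, and the exact field names in the output vary by release:

# Print cluster details and filter for version-related fields.
stcli cluster info | grep -i version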

HyperFlex Release | HX Data Platform Installer | HX Data Platform | Recommended UCS FI Firmware
2.1(1c) | 2.1(1c) | 2.1(1c) | 3.1(2g)
2.1(1b) | 2.1(1b) | 2.1(1b) | 3.1(2g)
2.0(1a) | 2.0(1a) | 2.0(1a) | 3.1(2f)
1.8(1f) | 1.8(1f) | 1.8(1f) | 3.1(2b)
1.8(1e) | 1.8(1e) | 1.8(1e) | 3.1(2b)
1.8(1c) | 1.8(1c) | 1.8(1c) | 3.1(2b)
1.8(1b) | 1.8(1b) | 1.8(1b) | 3.1(2b)
1.8(1a) | 1.8(1a) | 1.8(1a) | 3.1(2b)
1.7.3 | 1.7.3 | 1.7.3 | 2.2(7c)
1.7.1 | 1.7.1-14835 | 1.7 | 2.2(7c)
1.7.1 | 1.7.1 | 1.7 | 2.2(6f)

Supported VMware vSphere Versions and Editions

Each HyperFlex release is compatible with specific versions of vSphere, VMware vCenter, and VMware ESXi.

  • Verify that all HX servers and FIs have a compatible version of vSphere preinstalled.

  • Verify that the vCenter version is the same or later than the ESXi version.

  • Verify that you have a vCenter administrator account with root-level privileges and the associated password.


    Note

    • You cannot preinstall the vSphere Standard, Essentials Plus, and ROBO editions on HX servers.

    • vSphere version 5.5 U3 supports only HX240c or HXAF240c HX servers.

    • If you have the ESXi 5.5 U3b version, we recommend an ESXi upgrade. Contact Cisco TAC for additional information.

    • If you have the ESXi 6.0 U1 version, we recommend an ESXi upgrade. There is a known VMware issue where the node becomes unresponsive due to a PSOD and OS crash. See VMware KB article, VMware ESXi 6.0, Patch ESXi600-201608401-BG: Updates esx-base, vsanhealth, vsan VIBs (2145664).
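
To check the ESXi build running on an HX node against the table below, you can query the host directly. A minimal sketch, assuming SSH (or the ESXi Shell) is enabled on the host; compare the result with the vCenter version shown in the vSphere Web Client under Help > About.

# On the ESXi host: print the hypervisor version and build number.
vmware -v
# Structured equivalent:
esxcli system version get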


HyperFlex Version | vSphere Versions | vSphere Editions
2.1(1c) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3, 6.0 U2 Patch 4, 6.0 U3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO
2.1(1c) | 5.5 U3 | Enterprise, Enterprise Plus
2.1(1b) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3, 6.0 U2 Patch 4, 6.0 U3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO
2.1(1b) | 5.5 U3 | Enterprise, Enterprise Plus
2.0(1a) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3, 6.0 U2 Patch 4 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO
2.0(1a) | 5.5 U3 | Enterprise, Enterprise Plus
1.8(1f) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO
1.8(1f) | 5.5 U3 | Enterprise, Enterprise Plus
1.8(1e) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO
1.8(1e) | 5.5 U3 | Enterprise, Enterprise Plus
1.8(1c) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO
1.8(1c) | 5.5 U3 | Enterprise, Enterprise Plus
1.8(1b) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO
1.8(1a) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO
1.7.3 | 6.0 U1b, 6.0 U2 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO
1.7.3 | 5.5 U3 | Enterprise, Enterprise Plus
1.7.1 | 6.0 U1b | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO
1.7.1 | 5.5 U3 | Enterprise, Enterprise Plus

VMware vSphere Licensing Requirements

How you purchase your vSphere license determines how your license applies to your HyperFlex system.

  • If you purchased your vSphere license with HyperFlex

    Each HyperFlex server has either the Enterprise or Enterprise Plus edition preinstalled at the factory.


    Note

    • The SD cards have the OEM licenses preinstalled. If you delete or overwrite the content of the SD cards after receiving the HX servers, you also delete the factory-installed licenses from the SD cards.

    • OEM license keys are a new VMware vCenter 6.0 U1b feature. Earlier versions do not support OEM licenses.

    • All factory-installed HX nodes share the same OEM license key. With vSphere OEM keys, the "Usage" count can exceed the "Capacity" value.

    • When you add an HX host to vCenter through the "Add Host" wizard, in the "Assign license" section, select the OEM license.

      We obfuscate the actual vSphere OEM license key; for example, 0N085-XXXXX-XXXXX-XXXXX-10LHH

    • Standard, Essentials Plus, and ROBO editions are not available preinstalled on HX servers.


  • If you did NOT purchase your vSphere license with HyperFlex

    The HX nodes have a vSphere Foundation license preinstalled. After initial setup, the license can be applied to a supported version of vSphere.

    vSphere 5.5 U3 can only license the Enterprise or Enterprise Plus editions.

  • If you purchased your vSphere license from Cisco without a HyperFlex system

    Contact Cisco TAC to obtain a spare vSphere license at no additional cost.

Browser Recommendations

HX Data Platform Installer

The following browsers support Cisco HX Data Platform Installer, Cisco HX Data Platform Plug-in, and Cisco UCS Manager.

  • Microsoft Internet Explorer 11.0 and later

  • Mozilla Firefox 45 or higher

  • Google Chrome 52.0 or higher

HX Connect

The following browsers support HX Connect:

  • Microsoft Internet Explorer 11 or higher

  • Mozilla Firefox 52 or higher

  • Google Chrome 54 or higher


Note

The minimum recommended resolution is 1024 x 768.


Cisco HX Data Platform Storage Cluster Specifications

Cluster Limits:

  • Cisco HX Data Platform supports up to 100 clusters managed per vCenter as per VMware sizing guidelines.

  • Cisco HX Data Platform supports any number of clusters on a single FI domain. Each HX converged node must be directly connected to a dedicated FI port on fabric A and fabric B without the use of a FEX. C-series compute-only nodes must also connect directly to both FIs. B-series compute-only nodes connect through a chassis I/O module to both fabrics. Ultimately, the number of physical ports on the FI dictates the maximum cluster size and the maximum number of individual clusters supported in a UCS domain.

Node Limits for All Flash:

  • Minimum converged nodes (per cluster): 3

  • Maximum converged nodes (per cluster): 16

  • Maximum compute-only nodes (per cluster): 16


    Note

    The number of compute-only nodes cannot exceed the number of converged nodes.


Node Limits for Hybrid:

  • Minimum converged nodes (per cluster): 3

  • Maximum converged nodes (per cluster): 8

  • Maximum compute-only nodes (per cluster): 8


    Note

    The number of compute-only nodes cannot exceed the number of converged nodes.


Node Limits for HX Edge:

Cisco HX Data Platform storage clusters supported nodes:

  • Converged nodes—All Flash: Cisco HyperFlex HXAF240c M5, HXAF220c M5, HXAF240c M4, and HXAF220c M4.

  • Converged nodes—Hybrid: Cisco HyperFlex HX240c M5, HX220c M5, HX240c M4, and HX220c M4.

  • Compute-only—Cisco B200 M3/M4, B260 M4, B420 M4, B460 M4, B480 M5, C240 M3/M4, C220 M3/M4, C480 M5, C460 M4, B200 M5, C220 M5, and C240 M5.

Upgrade Guidelines

The 2.5(1d) release supports upgrading from older HX versions.

The list below highlights the critical criteria for performing an upgrade of your HyperFlex system.

  • Cluster Expansion—To expand a cluster running on version 2.5(1d), upgrade the cluster to 2.6(1e) and then continue to expand the cluster with 2.6(1e).

  • Hybrid Clusters—Do not upgrade to release 2.5(1a), 2.5(1b), 2.5(1c), or 2.6(1a) if you are running a hybrid cluster. Ensure you use the latest 2.5(1d) release for all hybrid upgrades.

  • vDS Support—Upgrades for VMware vDS environments are supported in release 2.5(1c) and later.

  • Initiating Upgrade―Use either the CLI stcli commands or the HX Data Platform Plug-in in the vSphere Web Client. Do not use HX Connect or the Tech Preview UI to upgrade from a pre-2.5 version to a 2.5 version.

  • Cluster Readiness—Ensure that the cluster is properly bootstrapped and that the updated plug-in is loaded before proceeding.

  • HX Data Platform 1.7.x clusters—Users upgrading from 1.7.x must step through an intermediate version before upgrading to 2.5.

  • HX Data Platform 2.1(1b) with SED—Upgrading SED-ready systems running 2.1 requires UCS infrastructure and server firmware upgrades.

  • vSphere 5.5 Upgrades—Users on vSphere 5.5 must upgrade to 6.0/6.5 before starting the HX Data Platform upgrade. vSphere 5.5 support is deprecated with HX Data Platform 2.5(1a), and the upgrade fails if attempted.

    • For HX220 users running 5.5, contact TAC for upgrade assistance.

    • For HX240 users running 5.5, upgrade components in the following order.

      1. Upgrade vCenter to 6.0 or 6.5. If upgrading to 6.5, you must upgrade your vCenter in place. Using a new vCenter 6.5 is not supported for users migrating from 5.5.

      2. Upgrade ESXi to 6.0/6.5 using the offline zip bundle.


        Note

        During the upgrade, it might be necessary to reconnect the ESXi host manually in vCenter after the ESXi upgrade and host reboot.


      3. Upgrade HX Data Platform to 2.5 (and optionally the UCS firmware).

    • If Upgrading to vSphere 6.5:

      • Certain cluster functions such as native and scheduled snapshots, ReadyClones, and Enter/Exit HX Maintenance Mode will not operate from the time the upgrade is started until the HX Data Platform upgrade to 2.5 is complete.

      • After upgrading ESXi using the offline zip bundle, use the ESX Exit Maintenance Mode option. The Exit HX Maintenance Mode option does not operate in the vSphere Web Client until the HX Data Platform upgrade to 2.5 is complete.

  • vSphere 6.0 Upgrades—Users on vSphere 6.0 migrating to 6.5, upgrade components in the following order:

    1. HX Data Platform upgrade to 2.5 (and optionally the UCS firmware).

    2. Upgrade vCenter Server following VMware documentation and best practices. Optionally, deploy a new vCenter server and perform stcli cluster re-register (a hedged command sketch follows this list).

    3. Upgrade ESXi to 6.5 using the offline zip bundle.

  • Server Firmware Upgrades—Server firmware should be upgraded to ensure smooth operation and to correct known issues. Specifically, newer SAS HBA firmware is available in this release and is recommended for long-term stability.


    Note

    • Users are encouraged to upgrade to 3.1(3c) C-bundle or later whenever possible.

    • Users running C-bundle versions prior to 3.1(2f) must upgrade server firmware by performing a combined upgrade of UCS server firmware (C-bundle) to 3.1(3c) or later and HX Data Platform to 2.5. Do not split the upgrade into two separate operations.

    • If the cluster is already on 3.1(2f) C-bundle or later, you may perform an HX Data Platform only or combined upgrade, as required.


  • Maintenance Window—If upgrading both HX Data Platform and UCS firmware, either a combined or split upgrade can be selected through the vSphere HX Data Platform Plug-in depending on the length of the maintenance window. Direct firmware upgrade using server firmware auto install through Cisco UCS Manager should not be attempted. Instead, use the UCS server upgrade orchestration framework provided by the HX Data Platform Plug-in.
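
For the optional vCenter re-registration mentioned in the vSphere 6.0 upgrade sequence above, the following is a minimal sketch run from a storage controller VM. The datacenter, cluster, URL, and user values are placeholders; confirm the exact parameter names against the stcli help for your release before use.

# Run from any storage controller VM after the new vCenter is available.
# All values below are placeholders for your environment.
stcli cluster reregister \
    --vcenter-datacenter <datacenter_name> \
    --vcenter-cluster <cluster_name> \
    --vcenter-url <vcenter_ip_or_fqdn> \
    --vcenter-user administrator@vsphere.local
# The command prompts for the vCenter password and re-registers the HX cluster with the new vCenter.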

Resolved Caveats in Release 2.1(1c)

Defect ID | Symptom | First Release Affected | Resolved in Release
CSCvf42894 | Host can become unresponsive due to connectivity loss to a datastore. This is due to an issue with ESXi vmsyslogd that can cause a deadlock. | 2.1(1b) | 2.1(1c)
CSCve00042 | HX upgrade hangs when entering maintenance mode. | 2.1(1b) | 2.1(1c)
CSCve01880 | During upgrade, if the upgrade procedure fails or is interrupted for any reason, the cleaner process might not restart. | 2.1(1b) | 2.1(1c)
CSCve00257 | Controller VM shows as powered ON even though the HX Plug-in shows it as offline. | 2.1(1b) | 2.1(1c)
CSCve06665 | SAS Expander firmware did not upgrade from an earlier release to 3.1(2b), 3.1(2c), or 3.1(2e). Note: This issue applies only to hybrid and All Flash HX240 nodes. | 2.1(1b) | 2.1(1c)
CSCve08605 | Cluster creation fails with an NTP sync error. | 2.1(1b) | 2.1(1c)
CSCve34343 | Deploy validation fails during cluster node expansion with the following error: "Validation framework execution failed: <type 'exceptions.TypeError'> --> Incorrect padding". | 2.1(1b) | 2.1(1c)
CSCvc39250 | Performance charts could display skewed data if the CPU utilization on nodes in the cluster is very high. | 2.0(1a) | 2.1(1c)
CSCvd37521 | A datastore is not mounted on all the nodes in a storage cluster (partially mounted datastore). | 2.0(1a) | 2.1(1c)
CSCvb81393 | Firmware upgrade fails from Server Storage Controller with an unsupported board. | 1.8(1c) | 2.1(1c)
CSCvd00100 | Performance charts show zero, a white-space gap, or performance IO counters dropping to zero while a converged node goes down or enters/exits maintenance mode. | 1.8(1e) | 2.1(1c)
CSCvb91799 | Controller VM may fail to power on during UCS upgrade. | 1.8(1c) | 2.1(1c)
CSCvb30039 | When SeSparse disks are used in the virtual machine, native snapshots and ReadyClones fail with the error "unable to rename VMDK_FILE_NAME.flat.vmdk". | 1.8(1a) | 2.1(1c)
CSCuy87828 | Performance chart display is not formatted at 100% zoom. Selecting an optional metric and a smaller resolution at the same time shows a chart that is not formatted correctly. | 1.7.1 | 2.1(1c)
CSCuy87775 | Datastore failed to mount on all ESX hosts. This is a VMware issue. | 1.7.1 | 2.1(1c)
CSCvd18626 | Upon upgrade or power-up, a full index replay in 2.1(1b) could take much longer to complete, lengthening the per-node upgrade time. In 2.1(1c), the index-replay time and the related upgrade time are significantly improved. | 1.7.1 | 2.1(1c)

Resolved Caveats in Release 2.1(1b)

Defect ID | Symptom | First Release Affected | Resolved in Release
CSCvd18626 | In rare cases, latency spikes can be seen for workloads with large working sets. These workloads require accessing data from the capacity tier. | 1.7.1 | 2.1(1b)
CSCva46884 | ESX host may get disconnected and enter a non-responsive state. | 1.8(1c) | 2.1(1b)
CSCvb78874 | In rare cases, the SAS HBA crashes occasionally with firmware 11.65.01.00. | 1.8(1a) | 2.1(1b)
CSCvd37469 | Online upgrade may time out under certain conditions, leaving the cleaner offline. | 2.0(1a) | 2.1(1b)
CSCvd18060 | Configuration failure seen when the MTU size is changed. | 2.0(1a) | 2.1(1b)
CSCvd25130 | The number of controllers reported in the plug-in UI is incorrect if the hostnames on servers are the same. | 2.0(1a) | 2.1(1b)
CSCvd25532 | During the HX cluster upgrade, the performance chart shows a temporary IOPS drop. Note: This is a display issue only. | 2.0(1a) | 2.1(1b)

Resolved Caveats in Release 2.0(1a)

Defect ID | Symptom | First Release Affected | Resolved in Release
CSCvc98591 | HX Cluster expansion fails at the Deploy stage when multiple converged nodes are expanded simultaneously. | 1.8(1e) | 2.0(1a)
CSCvc30465 | HX installer raises an incorrect error for the DRIVER-Check mpt3sas firmware version. | 1.8(1c) | 2.0(1a)

Open Caveats in Release 2.1(1c)

Defect ID

Symptom

Workaround

Defect Found in Release

CSCvf54865

When using HyperFlex nodes shipped with release 2.5 to create a cluster, or to expand an existing cluster, with a release build earlier than 2.5(1x), the cluster creation or expansion workflow might fail with the error:

'[DependencyError] File path of ‘/opt/springpath/support/ README.VscsiStats’ is claimed by multiple non-overlay VIBs: set([‘_bootbank_scvmclient_2.1.1c-21048’, …

If installation or expansion fails with the listed error, then:

  1. Uninstall the VIBs. Run the following commands in the vSphere Command-Line Interface.

    esxcli software vib remove --vibname vmware-esx-STFSNasPlugin

    esxcli software vib remove --vibname stHypervisorSvc

  2. Reboot the ESXi host.

  3. After the ESXi host comes back up, proceed with the installer workflow to create or expand the cluster.

2.0(1c)

CSCvc32497

Cluster creation or expansion fails when UCSM configuration is not chosen as an advanced option in the Cisco HX Data Platform Installer. This happens because the ESX host is not reachable.

On the Cisco HX Data Platform Installer configuration page, you will see that the default VLAN for hx-inband-mgmt (3091) is tagged on the ESX host instead of the user-specified VLAN.

To resolve this issue, follow the steps below:

  1. Tag the correct VLAN on the ESX by launching KVM console from Cisco UCS Manager.

  2. In Cisco HX Data Platform Installer, retry deploy validation.

Note 

Before you retry deploy validation, you may have to place the nodes that were previously added to the vCenter, in maintenance mode.

2.0(1a)

CSCvc62266

After an offline upgrade, due to a VMware EAM issue, sometimes all the controller VMs do not restart. The stcli cluster start command returns an error: "Node not available".

If this issue occurs, do the following:

Manually power on the controller VMs.

  • Login to the vSphere Web Client.

  • Locate the controller VMs that are not powered on. From the Navigator, select vCenter Inventory Lists > Virtual Machines. Storage controller VMs have the prefix stCtlVM.

  • From the Actions menu, select Power > Power On.

Restart the storage cluster.

  • Login to the command line of any controller VM.

Run the command: # stcli cluster start.

2.0(1a)

CSCvb90391, CSCvb94564

"No cluster found message" seen during cluster expansion.

If the cluster was not discovered, enter the Cluster Management IP address manually in the field provided. To find the cluster IP address:

  • In vSphere Web Client, select vCenter Inventory Lists > Cisco HyperFlex Systems > Cisco HX Data Platform.

  • Double click cluster name. Select the Action Menu at the top and select Summary.

  • Note the Cluster Management IP address.

1.8(1c)

CSCvb91838

Cluster expansion failed with no operational DNS server from the list.

If the DNS server becomes non-operational after deployment or cluster creation, add a new operational DNS to the controller. Use the following commands:

stcli services dns remove --dns <non_operational_dns_ip>

stcli services dns add --dns <operational_dns_ip>

1.8(1c)

CSCvb94112

HX Installer may be stuck at Cluster Expansion Validation screen during the cluster expansion process.

In your browser, type http://ip_of_installer/api/reset to restart the workflow (an equivalent curl sketch follows this table).

Note 

First check logs to ensure expansion workflow is hung.

1.8(1c)

CSCvb28697

Removing a compute node from the storage cluster did not remove the associated datastores.

Manually remove the datastores.

1.8(1a)

CSCvb30029

Compute node is not included in list of storage cluster nodes, and cannot be accessed, when the compute node is powered off.

Ensure all nodes are up and running and cluster is healthy before starting an upgrade or other maintenance activities.

1.8(1a)

CSCvb29790

Cluster creation fails due to failure to locate vCenter server.

In the vSphere Web Client, change the vCenter host name to an IP address in the config.vpxd.sso.sts.uri variable.

1.8(1a)
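
For the CSCvb94112 workaround above, the browser step simply issues an HTTP GET to the installer's reset endpoint, so the same request can be made from a shell when a browser is not convenient. A hedged sketch; the installer address is a placeholder, and the /api/reset path is taken verbatim from the workaround.

# Issue the same GET request the browser would; replace the address with your installer VM IP.
curl -v http://<ip_of_installer>/api/reset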

Open Caveats in Release 2.1(1b)

Defect ID

Symptom

Workaround

Defect Found in Release

CSCve00042

HX upgrade hangs when entering maintenance mode.

This is due to a known VMware issue that is fixed in vSphere 6.0 U3. Upgrade to vSphere 6.0 U3 and rerun the HX upgrade.

See VMware KB article: Networking settings in ESXi host is blank in the vSphere Client (2147497).

2.1(1b)

CSCve01880

During upgrade, if the upgrade procedure fails or is interrupted for any reason, the cleaner process might not restart.

Manually re-start the cleaner using stcli command:

# stcli cleaner start

2.1(1b)

CSCve00257

Controller VM shows as powered ON even though HX Plugin shows as offline.

None.

2.1(1b)

CSCve06665

SAS Expander Firmware did not upgrade from an earlier release to 3.1(2b), 3.1(2c) and 3.1(2e).

Note 

This issue is only applicable for hybrid and All Flash HX240 nodes.

Cisco strongly recommends that you upgrade server firmware C-bundle to 3.1(2f). Or, try the following:

  • If you upgrade to releases 3.1(2b), 3.1(2c), or 3.1(2e), exclude the SAS Expander from your host firmware pack.

  • You will not encounter any issues if you upgrade to releases 3.1(2f) or 3.1(2g).

2.1(1b)

CSCve08605

Cluster Creation fails with NTP sync.

Check that the controller VM can reach the NTP server and retry cluster creation.

2.1(1b)

CSCve34343

Deploy validation fails during cluster node expansion with the following error: "Validation framework execution failed: <type 'exceptions.TypeError'> --> Incorrect padding".

If this issue occurs, do one of the following:

  1. Change the ESXi password of the new node being added to one that is a multiple of 8 characters.

  2. Contact Cisco TAC for further assistance.

2.1(1b)

CSCvc32497

Cluster creation or expansion fails when UCSM configuration is not chosen as an advanced option in the Cisco HX Data Platform Installer. This happens because the ESX host is not reachable.

On the Cisco HX Data Platform Installer configuration page, you will see that the default VLAN for hx-inband-mgmt (3091) is tagged on the ESX host instead of the user-specified VLAN.

To resolve this issue, follow the steps below:

  1. Tag the correct VLAN on the ESX by launching KVM console from Cisco UCS Manager.

  2. In Cisco HX Data Platform Installer, retry deploy validation.

Note 

Before you retry deploy validation, you may have to place the nodes that were previously added to the vCenter, in maintenance mode.

2.0(1a)

CSCvc39250

Performance charts could display skewed data if the CPU utilization on nodes in the cluster is very high.

The display will correct itself if the CPU utilization goes down.

2.0(1a)

CSCvc62266

After an offline upgrade, due to a VMware EAM issue, sometimes all the controller VMs do not restart. The stcli cluster start command returns an error: "Node not available".

If this issue occurs, do the following:

Manually power on the controller VMs.

  • Login to the vSphere Web Client.

  • Locate the controller VMs that are not powered on. From the Navigator, select vCenter Inventory Lists > Virtual Machines. Storage controller VMs have the prefix stCtlVM.

  • From the Actions menu, select Power > Power On.

Restart the storage cluster.

  • Login to the command line of any controller VM.

Run the command: # stcli cluster start.

2.0(1a)

CSCvd37521

A datastore is not mounted on all the nodes in a storage cluster (partially mounted datastore).

If this issue occurs, do the following (a consolidated command sketch appears after this table):

  1. Verify conditions.

    • The storage cluster is healthy.

    • Remounting the datastore fails.

    • # stcli datastore mount --name <datastore_name>

      Look for:

      Failed to mount the datastore on some or all hosts ...

      EntityRef (<host_id>) ...mounted: False

  2. Log in to the ESX host with the unmounted datastore.

  3. Delete the datastore mount point on the ESXi host.

    # esxcfg-nas -d <datastore_name>

  4. Restart the vpxa and hostd services on the ESX host.

    # /etc/init.d/vpxa restart

    # /etc/init.d/hostd restart

  5. Verify that the deleted datastore is not listed for the ESXi host. From vCenter, refresh and rescan for the datastore.

  6. Retry mounting the datastore.

2.0(1a)

CSCvb81393

Firmware upgrade fails from Server Storage Controller with unsupported board.

If this issue occurs, do the following:
  • Decommission and then recommission the referenced board.

  • Verify that the server is healthy.

  • Retry the firmware upgrade.

1.8(1c)

CSCvb90391, CSCvb94564

"No cluster found message" seen during cluster expansion.

If the cluster was not discovered, enter the Cluster Management IP address manually in the field provided. To find the cluster IP address:

  • In vSphere Web Client, select vCenter Inventory Lists > Cisco HyperFlex Systems > Cisco HX Data Platform.

  • Double click cluster name. Select the Action Menu at the top and select Summary.

  • Note the Cluster Management IP address.

1.8(1c)

CSCvd00100

Performance charts show zero or a white space gap or performance IO counters drop to zero during converged node going down or entering/exiting maintenance mode.

This is a display issue and has no impact on actual cluster performance. The display will correct itself after a few minutes.

1.8(1e)

CSCvb91838

Cluster expansion failed with no operational DNS server from the list.

If the DNS server becomes non-operational after deployment or cluster creation, add a new operational DNS to the controller. Use the following commands:

stcli services dns remove --dns <non_operational_dns_ip>

stcli services dns add --dns <operational_dns_ip>

1.8(1c)

CSCvb91799

Controller VM may fail to power on during UCS upgrade.

This is a VMware issue. Refer to the VMware KB article (2214) for more details.

Please remember to reboot afterwards.

1.8(1c)

CSCvb94112

HX Installer may be stuck at Cluster Expansion Validation screen during the cluster expansion process.

In your browser, type http://ip_of_installer/api/reset to restart the workflow.

Note 

First check logs to ensure expansion workflow is hung.

1.8(1c)

CSCvb28697

Removing a compute node from the storage cluster did not remove the associated datastores.

Manually remove the datastores.

1.8(1a)

CSCvb30039

When SeSparse disks are used in the virtual machine, native snapshots and ReadyClones fail with the error: "unable to rename VMDK_FILE_NAME.flat.vmdk".

None. Use the VMware default FlatVer2 format virtual disks to ensure HX Snapshots and Clones work for the VMs.

1.8(1a)

CSCvb30029

Compute node is not included in list of storage cluster nodes, and cannot be accessed, when the compute node is powered off.

Ensure all nodes are up and running and cluster is healthy before starting an upgrade or other maintenance activities.

1.8(1a)

CSCvb29790

Cluster creation fails due to failure to locate vCenter server.

In the vSphere Web Client, change the vCenter host name to an IP address in the config.vpxd.sso.sts.uri variable.

1.8(1a)

CSCuy87828

Performance chart display is not formatted at 100% zoom.

Selecting an optional metric and a smaller resolution at the same time shows a chart that is not formatted correctly.

Change the zoom on the chart.

1.7.1

CSCuy87775

Datastore failed to mount on all ESX hosts. This is a VMware issue.

Try mounting the datastore one at a time.

1.7.1
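
The CSCvd37521 workaround above alternates between a storage controller VM and the affected ESXi host; the consolidated sketch below strings the same commands together for reference. It reuses only the commands shown in the workaround; <datastore_name> is a placeholder, and each group of commands must be run on the machine indicated in the comments.

# --- On any storage controller VM ---
stcli datastore mount --name <datastore_name>   # initial attempt; note any host reported as "mounted: False"

# --- On the ESXi host that still shows the datastore unmounted ---
esxcfg-nas -l                                   # confirm the datastore shows as unmounted/unavailable
esxcfg-nas -d <datastore_name>                  # delete the stale NAS mount point
/etc/init.d/vpxa restart                        # restart the management agents
/etc/init.d/hostd restart

# --- Back on the controller VM, after a refresh and rescan from vCenter ---
stcli datastore mount --name <datastore_name>   # retry the mount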

Open Caveats in Release 2.0(1a)

Defect ID

Symptom

Workaround

Defect Found in Release

CSCvc32497

Cluster creation or expansion fails when UCSM configuration is not chosen as an advanced option in the Cisco HX Data Platform Installer. This happens because the ESX host is not reachable.

On the Cisco HX Data Platform Installer configuration page, you will see that the default VLAN for hx-inband-mgmt (3091) is tagged on the ESX host instead of the user-specified VLAN.

To resolve this issue, follow the steps below:

  1. Tag the correct VLAN on the ESX by launching KVM console from Cisco UCS Manager.

  2. In Cisco HX Data Platform Installer, retry deploy validation.

Note 

Before you retry deploy validation, you may have to place the nodes that were previously added to the vCenter, in maintenance mode.

2.0(1a)

CSCvc39250

Performance charts could display skewed data if the CPU utilization on nodes in the cluster is very high.

The display will correct itself if the CPU utilization goes down.

2.0(1a)

CSCvc62266

After an offline upgrade, due to a VMware EAM issue, sometimes all the controller VMs do not restart. The stcli cluster start command returns an error: "Node not available".

If this issue occurs, do the following:

  1. Manually power on the controller VMs.

    • Login to the vSphere Web Client.

    • Locate the controller VMs that are not powered on. From the Navigator, select vCenter Inventory Lists > Virtual Machines. Storage controller VMs have the prefix stCtlVM.

    • From the Actions menu, select Power > Power On.

  2. Restart the storage cluster.

    1. Login to the command line of any controller VM.

  3. Run the command: # stcli cluster start

2.0(1a)

CSCvd37469

Online upgrade times out and fails, with the cleaner offline on all the nodes.

Ensure there is enough space in the /tmp/scratch partition on ESXi, and then retry the upgrade.

2.0(1a)

CSCvd37534

In some rare cases, restarting an online upgrade on a HyperFlex cluster after a previous failed upgrade may fail again, even though the cluster has recovered from the failure and is in a healthy state.

When retrying the upgrade using the CLI, use the "-f" or "--force" option with the "stcli cluster upgrade" command, or use the Springpath WebUI/plug-in to retry the upgrade.

2.0(1a)

CSCvd37521

A datastore is not mounted on all the nodes in a storage cluster (partially mounted datastore).

If this issue occurs, do the following:

  1. Verify conditions.

    • The storage cluster is healthy.

    • Remounting the datastore fails.

    • From a controller VM, run:

      # stcli datastore mount --name <datastore_name>

      Look for:

      Failed to mount the datastore on some or all hosts ...

      EntityRef (<host_id>) ...

      mounted: False

    • ESXi host confirms datastore unmounted.

      From ESXi host with the unmounted datastore, run:

      # esxcfg-nas -l

      Look for: <datastore_name> ... unmounted unavailable

  2. Log in to the ESX host with the unmounted datastore.

  3. Delete the datastore mount point on the ESXi host.

    # esxcfg-nas -d <datastore_name>

  4. Restart the vpxa and hostd services on the ESX host.

    # /etc/init.d/vpxa restart

    # /etc/init.d/hostd restart

  5. Verify that the deleted datastore is not listed for the ESXi host. From vCenter, refresh and rescan for the datastore.

  6. Retry mounting the datastore.

2.0(1a)

CSCvd10551

Cisco HX Data Platform upgrade from release 1.7.1 to 2.0(1a) fails due to vMotion timing out.

Retry the upgrade, and then, if required, perform a manual vMotion.

2.0(1a)

CSCvd25130

The number of controllers reported in plugin UI is incorrect if hostnames on servers are same.

Ensure that hostnames are unique.

2.0(1a)

CSCvb81393

Firmware upgrade fails from Server Storage Controller with unsupported board.

If this issue occurs, do the following:

  • Decommission and then recommission the referenced board.

  • Verify that the server is healthy.

  • Retry the firmware upgrade.

If this does not resolve the issue, contact Cisco TAC for more assistance.

1.8(1c)

CSCvb90391, CSCvb94564

"No cluster found message" seen during cluster expansion.

If the cluster was not discovered, enter the Cluster Management IP address manually in the field provided. To find the cluster IP address:

  1. In vSphere Web Client, select vCenter Inventory Lists > Cisco HyperFlex Systems > Cisco HX Data Platform.

  2. Double click cluster name. Select the Action Menu at the top and select Summary.

  3. Note the Cluster Management IP address.

1.8(1c)

CSCvb91799

Controller VM may fail to power on during UCS upgrade.

This is a VMware issue. Refer to the VMware KB article (2214) for more details.

Please remember to reboot afterwards.

1.8(1c)

CSCvb91838

Cluster expansion failed with no operational DNS server from the list.

If the DNS server becomes non-operational after deployment or cluster creation, add a new operational DNS to the controller. Use the following commands:

  1. stcli services dns remove --dns <non_operational_dns_ip>

  2. stcli services dns add --dns <operational_dns_ip>

1.8(1c)

CSCvb94112

HX Installer may be stuck at Cluster Expansion Validation screen during the cluster expansion process.

In your browser, type http://ip_of_installer/api/reset to restart the workflow.

Note 

First check logs to ensure expansion workflow is hung.

1.8(1c)

CSCvb28697

Removing a compute node from the storage cluster did not remove the associated datastores.

Manually remove the datastores.

1.8(1a)

CSCvb29790

Cluster creation fails due to failure to locate vCenter server.

In the vSphere Web Client, change the vCenter host name to an IP address in the config.vpxd.sso.sts.uri variable.

1.8(1a)

CSCvb30029

Compute node is not included in list of storage cluster nodes, and cannot be accessed, when the compute node is powered off.

Ensure all nodes are up and running and cluster is healthy before starting an upgrade or other maintenance activities.

1.8(1a)

CSCvb30039

When SeSparse disks are used in the virtual machine, native snapshots and ReadyClones fail with the error "unable to rename VMDK_FILE_NAME.flat.vmdk".

None. Use the VMware default FlatVer2 format virtual disks to ensure HX Snapshots and Clones work for the VMs.

1.8(1a)

CSCva46884

ESX host becomes disconnected and enters a non-responsive state (observed during 8-node longevity testing).

This is a known VMware defect. Call Cisco TAC for assistance.

1.8(1a)

CSCuy87775

Datastore failed to mount on all ESX hosts. This is a VMware issue.

Try mounting the datastore one at a time.

1.7.1

CSCuy87821

Native snapshot creation can fail due to timeout.

Power off the VM, then take the snapshot or use the non-quiesce default option.

1.7.1

CSCuy87793

Backup software sometimes fails when SSLv3 is disabled, due to VMware bugs on vSphere 5.5 and 6.0 U1.

See VMware KB articles.

  • vSphere 6.0u1, See Enabling support for SSLv3 in ESXi (2121021).

  • vSphere 5.5, See Enabling support for SSLv3 on vSphere 5.5 (2139396).

1.7.1

Related Documentation

Document | Description
Preinstallation Checklist | Provides an editable file for gathering required configuration information prior to starting an installation. This checklist must be filled out and returned to a Cisco account team.
Ordering and Licensing Guide | Provides information about licensing and ordering Cisco HyperFlex Systems, including contract creation, activation, renewals, and cotermination.
Installation Guide for VMware ESXi | Provides detailed information about Day 0 configuration of HyperFlex Systems and related post-cluster configuration tasks. It also describes how to set up multiple HX clusters, expand an HX cluster, set up a mixed HX cluster, and attach external storage.
Stretched Cluster Guide | Provides installation and configuration procedures for HyperFlex Stretched Cluster, enabling you to deploy an Active-Active disaster avoidance solution for mission-critical workloads.
Installation Guide on Microsoft Hyper-V | Provides installation and configuration procedures for Cisco HyperFlex Systems on Microsoft Hyper-V.
Edge Deployment Guide | Provides deployment procedures for HyperFlex Edge, designed to bring hyperconvergence to remote and branch office (ROBO) and edge environments.
Administration Guide | Provides information about how to manage and monitor the cluster, encryption, data protection (replication and recovery), ReadyClones, native snapshots, and user management. Interfaces include HX Connect, the HX Data Platform Plug-in, and the stcli commands.
HyperFlex Intersight Installation Guide | Provides installation, configuration, and deployment procedures for HyperFlex Intersight, designed to deliver secure infrastructure management anywhere from the cloud.
Upgrade Guide | Provides information on how to upgrade an existing installation of Cisco HX Data Platform, upgrade guidelines, and information about various upgrade tasks.
Network and External Storage Management Guide | Provides information about HyperFlex Systems-specific network and external storage management tasks.
Command Line Interface (CLI) Guide | Provides CLI reference information for HX Data Platform stcli commands.
REST API Getting Started Guide; REST API Reference | Provide information related to REST APIs that enable external applications to interface directly with the Cisco HyperFlex management plane.
Troubleshooting Guide | Provides troubleshooting for installation, configuration, Cisco UCS Manager to Cisco HyperFlex configuration, and VMware vSphere to HyperFlex configuration. In addition, this guide provides information about understanding system events, errors, Smart Call Home, and Cisco support.
TechNotes | Provides independent knowledge base articles.