Introduction

Cisco HyperFlex™ Systems unlock the full potential of hyperconvergence. The systems are based on an end-to-end software-defined infrastructure, combining software-defined computing in the form of Cisco Unified Computing System (Cisco UCS) servers, software-defined storage with the powerful Cisco HX Data Platform, and software-defined networking with the Cisco UCS fabric that integrates smoothly with Cisco Application Centric Infrastructure (Cisco ACI). Together with a single point of connectivity and hardware management, these technologies deliver a pre-integrated and adaptable cluster that is ready to provide a unified pool of resources to power applications as your business needs dictate.

These release notes pertain to Cisco HX Data Platform, Release 2.5, and describe its features, limitations, and caveats.

Revision History

| Release | Date | Description |
|---------|------|-------------|
| 2.5(1d) | May 8, 2018 | Added "Cluster Expansion Guidelines" section. |
| | February 9, 2018 | Updated support for VMware ESXi 6.5 U1 in the "Supported VMware vSphere Versions and Editions" table. |
| | February 7, 2018 | Revised workaround description for CSCvh54563 to add support for VMware ESXi 6.5 U1. |
| | January 30, 2018 | Added CSCvh54563 to the list of open caveats for 2.5(1d). |
| | October 28, 2017 | Created release notes for Cisco HX Data Platform Software, Release 2.5(1d). |
| 2.5(1c) | October 13, 2017 | Added section on "Upgrade Advisory for Hybrid Clusters with Replication Factor of 2". |
| | September 8, 2017 | Created release notes for Cisco HX Data Platform Software, Release 2.5(1c). |
| 2.5(1b) | August 22, 2017 | Corrected note regarding Cisco TAC support in the "Upgrade Guidelines" section. |
| | August 16, 2017 | Updated "Upgrade Guidelines" section to specify 2.5(1b) upgrade requirements. |
| | July 31, 2017 | Created release notes for Cisco HX Data Platform Software, Release 2.5(1b). |
| 2.5(1a) | July 24, 2017 | Created release notes for Cisco HX Data Platform Software, Release 2.5(1a). |

New Features in Release 2.5

  • Qualified a new NVMe write log drive option (HX-NVMEM4-H1600) for HX All-Flash Systems.

  • Qualified new cache drive options (HX-SD16TSASS3-EP) for HX240 nodes and (HX-SD800GSAS3-EP) for HX220 nodes.

  • Native Data Protection and Security:

    • Native Replication for Disaster Recovery.

    • Data-at-rest Encryption support with Self-Encrypting Drives (SEDs) and enterprise key management support (KMIP).

    • Security hardening and Role-Based Access Control (RBAC).

    For more details, refer to the Cisco HyperFlex Data Platform Administration Guide, Release 2.5, and Cisco HyperFlex Systems Upgrade Guide, Release 2.5.

  • HyperFlex Management Lifecycle Enhancements:

    • HX Connect—New HTML5 UI for native management and monitoring with intuitive dashboard for cluster health, capacity, and performance. Localized for Simplified Chinese, Japanese, and Korean.

      For more details, refer to the Cisco HyperFlex Data Platform Administration Guide, Release 2.5 and the built-in online help.

    • REST APIs—RESTful APIs accessible through REST API Explorer to enable automation and integration with third-party management and monitoring tools.

    • Smart Call Home―Enhanced auto-support with Smart Call Home integration to enable automated support service request generation for important events. Added option in the HX Data Platform Installer to enable Smart Call Home (SCH)/Auto-Support. For more details, refer to the Cisco HyperFlex Systems Installation Guide for VMware ESXi, Release 2.5.

    • Smart Licensing integration.

    • Enabled Tech Preview of a cloud-based management service (based on Project Starship).

  • Enhanced performance and density:

    • Increased capacity for All-Flash up to 23 capacity drives on Cisco HyperFlex HXAF240 nodes.

    • Support for NVMe caching for All-Flash.

    • Performance optimizations.

  • Enhanced compatibility and qualifications:

    • Support for VMware vSphere 6.5—Deprecated support for vSphere 5.5.

    • HyperFlex Edge installer automation and enhanced configurations.

    • Support for M10 GPU and dual GPU configurations.

    • Expanded compute-only support for B-Series and C-Series servers.

    • Expanded storage qualifications.

Supported Versions and System Requirements

Cisco HX Data Platform requires specific software and hardware versions, and networking settings for successful installation.

For a complete list of requirements, see the following sections.

Hardware and Software Interoperability

For a complete list of hardware and software inter-dependencies, refer to the Hardware and Software Interoperability for Cisco HyperFlex HX-Series document for your Cisco UCS Manager release.

Software Requirements

The software requirements include verification that you are using compatible versions of Cisco HyperFlex Systems (HX) components and VMware vSphere components.

HyperFlex Software Versions

The HX components—Cisco HX Data Platform Installer, Cisco HX Data Platform, and Cisco UCS firmware—are installed on different servers. Verify that each component on each server used with and within an HX Storage Cluster is compatible.

  • Verify that the preconfigured HX servers have the same version of Cisco UCS server firmware installed. If the Cisco UCS Fabric Interconnects (FI) firmware versions are different, see the Cisco HyperFlex Systems Upgrade Guide for steps to align the firmware versions.

  • For new hybrid or All Flash (Cisco HyperFlex HX240c M4 or HX220c M4) deployments, verify that Cisco UCS Manager 3.1(2g) or later is installed. Contact Cisco TAC for guidance.

  • For SED-based HyperFlex systems, ensure that the A (Infrastructure) and C (Rack server) bundles are at Cisco UCS Manager version 3.1(3c).

  • To reinstall an HX server, download supported and compatible versions of the software. See the Cisco HyperFlex Systems Installation Guide for VMware ESXi for the requirements and steps.

| HyperFlex Release | HX Data Platform Installer | HX Data Platform | Recommended UCS FI Firmware |
|-------------------|----------------------------|------------------|-----------------------------|
| 2.5(1d) | 2.5(1d) | 2.5(1d) | 3.1(3c) required for SED systems and recommended for non-SED systems |
| 2.5(1c) | 2.5(1c) | 2.5(1c) | 3.1(3c) required for SED systems and recommended for non-SED systems |
| 2.5(1b) | 2.5(1b) | 2.5(1b) | 3.1(3c) required for SED systems and recommended for non-SED systems |
| 2.5(1a) | 2.5(1a) | 2.5(1a) | 3.1(3c) required for SED systems and recommended for non-SED systems |
| 2.1(1c) | 2.1(1c) | 2.1(1c) | 3.1(2g) |
| 2.1(1b) | 2.1(1b) | 2.1(1b) | 3.1(2g) |
| 2.0(1a) | 2.0(1a) | 2.0(1a) | 3.1(2f) |
| 1.8(1f) | 1.8(1f) | 1.8(1f) | 3.1(2b) |
| 1.8(1e) | 1.8(1e) | 1.8(1e) | 3.1(2b) |
| 1.8(1c) | 1.8(1c) | 1.8(1c) | 3.1(2b) |
| 1.8(1b) | 1.8(1b) | 1.8(1b) | 3.1(2b) |
| 1.8(1a) | 1.8(1a) | 1.8(1a) | 3.1(2b) |
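
The release-to-firmware alignment above can be expressed as a small lookup; the following is an illustrative sketch (the constant and function names are assumptions, the values come from the table in this document):

```python
# Recommended UCS FI firmware per HX Data Platform release, from the table above.
RECOMMENDED_FI_FIRMWARE = {
    "2.5(1d)": "3.1(3c)", "2.5(1c)": "3.1(3c)", "2.5(1b)": "3.1(3c)", "2.5(1a)": "3.1(3c)",
    "2.1(1c)": "3.1(2g)", "2.1(1b)": "3.1(2g)",
    "2.0(1a)": "3.1(2f)",
    "1.8(1f)": "3.1(2b)", "1.8(1e)": "3.1(2b)", "1.8(1c)": "3.1(2b)",
    "1.8(1b)": "3.1(2b)", "1.8(1a)": "3.1(2b)",
}

def recommended_fi_firmware(hx_release: str) -> str:
    """Return the recommended UCS FI firmware for a given HX Data Platform release."""
    try:
        return RECOMMENDED_FI_FIRMWARE[hx_release]
    except KeyError:
        raise ValueError(f"Unknown HyperFlex release: {hx_release}") from None
```

For the 2.5 releases, the returned value is the firmware that is required for SED systems and recommended for non-SED systems.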

HyperFlex Licensing

As of version 2.5(1a), HyperFlex uses a Smart Licensing mechanism to apply your licenses. See the Cisco HyperFlex Systems Installation Guide for VMware ESXi for details and steps.

Supported VMware vSphere Versions and Editions

Each HyperFlex release is compatible with specific versions of vSphere, VMware vCenter, and VMware ESXi.

  • Verify that all HX servers have a compatible version of vSphere preinstalled.

  • Verify that the vCenter version is the same or later than the ESXi version.

  • Verify that you have a vCenter administrator account with root-level privileges and the associated password.
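
The second check above (vCenter at the same version as, or later than, ESXi) can be sketched as a simple comparison. This is a simplified illustration: it handles only the short "major.minor Un" version forms used in the table below, not full vSphere build strings, and the function names are assumptions:

```python
def parse_vsphere_version(version: str) -> tuple:
    """Parse a short vSphere version like '6.0 U2' or '6.5 U1' into a sortable tuple.

    Simplified sketch: only 'major.minor' plus an optional 'Un' update token is
    recognized; patch levels and build numbers are ignored.
    """
    parts = version.split()
    major, minor = (int(x) for x in parts[0].split("."))
    update = int(parts[1].lstrip("U")) if len(parts) > 1 and parts[1].startswith("U") else 0
    return (major, minor, update)

def vcenter_is_compatible(vcenter: str, esxi: str) -> bool:
    """vCenter must be the same version as, or later than, ESXi."""
    return parse_vsphere_version(vcenter) >= parse_vsphere_version(esxi)
```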

| HyperFlex Version | vSphere Versions | vSphere Editions |
|-------------------|------------------|------------------|
| 2.5(1d) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3, 6.0 U2 Patch 4, 6.0 U3, 6.5 U1 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO |
| 2.5(1c) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3, 6.0 U2 Patch 4, 6.0 U3, 6.5 U1 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO |
| 2.5(1b) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3, 6.0 U2 Patch 4, 6.0 U3, 6.5 U1 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO |
| 2.5(1a) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3, 6.0 U2 Patch 4, 6.0 U3, 6.5 U1 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO |
| 2.1(1c) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3, 6.0 U2 Patch 4, 6.0 U3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO |
| | 5.5 U3 | Enterprise, Enterprise Plus |
| 2.1(1b) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3, 6.0 U2 Patch 4, 6.0 U3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO |
| | 5.5 U3 | Enterprise, Enterprise Plus |
| 2.0(1a) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3, 6.0 U2 Patch 4 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO |
| | 5.5 U3 | Enterprise, Enterprise Plus |
| 1.8(1f) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO |
| | 5.5 U3 | Enterprise, Enterprise Plus |
| 1.8(1e) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO |
| | 5.5 U3 | Enterprise, Enterprise Plus |
| 1.8(1c) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO |
| | 5.5 U3 | Enterprise, Enterprise Plus |
| 1.8(1b) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO |
| 1.8(1a) | 6.0 U1b, 6.0 U2, 6.0 U2 Patch 3 | Enterprise, Enterprise Plus, Standard, Essentials Plus, ROBO |

VMware vSphere Licensing Requirements

How you purchase your vSphere license determines how your license applies to your HyperFlex system.

  • If you purchased your vSphere license with HyperFlex

    Each HyperFlex server has either the Enterprise or Enterprise Plus edition preinstalled at the factory.

    Note

    • The SD cards have the OEM licenses preinstalled. If you delete or overwrite the content of the SD cards after receiving the HX servers, you also delete the factory-installed licenses from the SD cards.

    • OEM license keys are a new VMware vCenter 6.0 U1b feature. Earlier versions do not support OEM licenses.

    • All factory-installed HX nodes share the same OEM license key. With vSphere OEM keys, the Usage count can exceed the Capacity value.

    • When you add an HX host to vCenter through the Add Host wizard, in the Assign license section, select the OEM license.

      The actual vSphere OEM license key is obfuscated here; for example, 0N085-XXXXX-XXXXX-XXXXX-10LHH.

    • Standard, Essentials Plus, and ROBO editions are not available preinstalled on HX servers.


  • If you did NOT purchase your vSphere license with HyperFlex

    The HX nodes have a vSphere Foundation license preinstalled. After initial setup, the license can be applied to a supported version of vSphere.

  • If you purchased your vSphere license from Cisco without a HyperFlex system

    Contact Cisco TAC to obtain a spare vSphere license at no additional cost.

Browser Recommendations

Use one of the following browsers to run the listed HyperFlex components. These browsers have been tested and approved. Other browsers might work, but full functionality has not been tested and confirmed.

Table 1. Supported Browsers

| Browser | Cisco UCS Manager | HX Data Platform Installer | HX Connect |
|---------|-------------------|----------------------------|------------|
| Microsoft Internet Explorer | 9 or higher | Unsupported | 11 or higher |
| Google Chrome | 14 or higher | 52 or higher | 54 or higher |
| Mozilla Firefox | 7 or higher | 54 or higher | 52 or higher |

Notes

  • Cisco HyperFlex Connect

    The minimum recommended resolution is 1024 x 768.

  • Cisco HX Data Platform Plug-in

    The Cisco HX Data Platform Plug-in runs in vSphere. For VMware Host Client System browser requirements, see the VMware documentation, at https://www.vmware.com/support/pubs/.

  • Cisco UCS Manager

    The browser must support the following:

    • Java Runtime Environment 1.6 or later.

    • Adobe Flash Player 10 or higher is required for some features.

    For the latest browser information about Cisco UCS Manager, refer to the most recent Cisco UCS Manager Getting Started Guide.

Cisco HX Data Platform Storage Cluster Specifications

Cluster Limits:

  • Cisco HX Data Platform supports up to 100 clusters managed per vCenter, per VMware sizing guidelines.

  • Cisco HX Data Platform supports any number of clusters on a single FI domain. Each HX converged node must be directly connected to a dedicated FI port on fabric A and fabric B without the use of a FEX. C-Series compute-only nodes must also connect directly to both FIs. B-Series compute-only nodes connect through a chassis I/O module to both fabrics. Ultimately, the number of physical ports on the FI dictates the maximum cluster size and the maximum number of individual clusters supported in a UCS domain.
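
As a rough illustration of the port-count constraint described above (a sketch under simplified assumptions; real sizing must also account for B-Series chassis uplinks and any other devices attached to the FI):

```python
def max_directly_connected_nodes(fi_ports_per_fabric: int, reserved_ports: int) -> int:
    """Estimate how many directly connected HX nodes a fabric interconnect can host.

    Each converged or C-Series compute-only node consumes one dedicated server port
    on each fabric (no FEX), so the ports remaining after network uplinks and other
    reservations bound the node count. Illustrative only, not a Cisco sizing tool.
    """
    return max(0, fi_ports_per_fabric - reserved_ports)
```

For example, a 48-port FI with 4 ports reserved for uplinks could directly attach at most 44 nodes per fabric under these assumptions.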

Node Limits for All Flash:

  • Minimum converged nodes (per cluster): 3

  • Maximum converged nodes (per cluster): 16

  • Maximum compute-only nodes (per cluster): 16

    Note

    The number of compute-only nodes cannot exceed the number of converged nodes.


Node Limits for Hybrid:

  • Minimum converged nodes (per cluster): 3

  • Maximum converged nodes (per cluster): 8

  • Maximum compute-only nodes (per cluster): 8

    Note

    The number of compute-only nodes cannot exceed the number of converged nodes.
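
Taken together, the All Flash and Hybrid limits above amount to a simple membership check; the following is an illustrative sketch (the dictionary and function names are assumptions, the limits are the ones stated in this section):

```python
# Per-cluster node limits from this section:
# (minimum converged, maximum converged, maximum compute-only).
NODE_LIMITS = {"all_flash": (3, 16, 16), "hybrid": (3, 8, 8)}

def cluster_size_ok(cluster_type: str, converged: int, compute_only: int) -> bool:
    """Check converged and compute-only node counts against the per-cluster limits.

    The number of compute-only nodes must also not exceed the number of
    converged nodes.
    """
    low, high, max_compute = NODE_LIMITS[cluster_type]
    return (low <= converged <= high
            and 0 <= compute_only <= max_compute
            and compute_only <= converged)
```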


Node Limits for HX Edge:

Cisco HX Data Platform storage clusters supported nodes:

  • Converged nodes—All Flash: Cisco HyperFlex HXAF240c M5, HXAF220c M5, HXAF240c M4, and HXAF220c M4.

  • Converged nodes—Hybrid: Cisco HyperFlex HX240c M5, HX220c M5, HX240c M4, and HX220c M4.

  • Compute-only—Cisco B200 M3/M4, B260 M4, B420 M4, B460 M4, B480 M5, C240 M3/M4, C220 M3/M4, C480 M5, C460 M4, B200 M5, C220 M5, and C240 M5.

Upgrade Guidelines

The 2.5(1d) release supports upgrading from older HX versions.

The list below highlights critical criteria for performing an upgrade of your HyperFlex system.

  • Cluster Expansion—To expand a cluster running version 2.5(1d), upgrade the cluster to 2.6(1e), and then continue to expand the cluster with 2.6(1e).
  • Hybrid Clusters—Do not upgrade to release 2.5(1a), 2.5(1b), 2.5(1c), or 2.6(1a) if you are running a hybrid cluster. Ensure you use the latest 2.5(1d) release for all hybrid upgrades.

  • vDS Support—Upgrades for VMware vDS environments are supported in release 2.5(1c) and later.

  • Initiating Upgrade―Use either the stcli CLI commands or the HX Data Platform Plug-in for the vSphere Web Client. Do not use HX Connect or the Tech Preview UI to upgrade from a pre-2.5 version to a 2.5 version.

  • Cluster Readiness—Ensure that the cluster is properly bootstrapped and that the updated plug-in is loaded before proceeding.

  • HX Data Platform 1.7.x clusters—Users upgrading from 1.7.x must step through an intermediate version before upgrading to 2.5.

  • HX Data Platform 2.1(1b) with SED—Upgrading SED-ready systems running 2.1 requires UCS infrastructure and server firmware upgrades.

  • vSphere 5.5 Upgrades—Users on vSphere 5.5 must upgrade to 6.0/6.5 before starting the HX Data Platform upgrade. vSphere 5.5 support is deprecated as of HX Data Platform 2.5(1a), and the upgrade fails if attempted.

    • For HX220 users running 5.5, contact TAC for upgrade assistance.

    • For HX240 users running 5.5, upgrade components in the following order.

      1. Upgrade vCenter to 6.0 or 6.5. If upgrading to 6.5, you must upgrade your vCenter in place. Using a new vCenter 6.5 is not supported for users migrating from 5.5.

      2. Upgrade ESXi to 6.0/6.5 using the offline zip bundle.

        Note

        During the upgrade, it might be necessary to manually reconnect the ESXi host in vCenter after the ESXi upgrade and host reboot.


      3. Upgrade HX Data Platform to 2.5 (and optionally the UCS firmware).

    • If upgrading to vSphere 6.5:

      • Certain cluster functions such as native and scheduled snapshots, ReadyClones, and Enter/Exit HX Maintenance Mode will not operate from the time the upgrade is started until the HX Data Platform upgrade to 2.5 is complete.

      • After upgrading ESXi using the offline zip bundle, use the ESX Exit Maintenance Mode option. The Exit HX Maintenance Mode option does not operate in the vSphere Web Client until the HX Data Platform upgrade to 2.5 is complete.

  • vSphere 6.0 Upgrades—Users on vSphere 6.0 migrating to 6.5 should upgrade components in the following order:

    1. HX Data Platform upgrade to 2.5 (and optionally the UCS firmware).

    2. Upgrade vCenter Server following VMware documentation and best practices. Optionally, deploy a new vCenter server and perform stcli cluster re-register.

    3. Upgrade ESXi to 6.5 using the offline zip bundle.

  • Server Firmware Upgrades—Server firmware should be upgraded to ensure smooth operation and to correct known issues. Specifically, newer SAS HBA firmware is available in this release and is recommended for long-term stability.

    Note

    • Users are encouraged to upgrade to 3.1(3c) C-bundle or later whenever possible.

    • Users running C-bundle versions prior to 3.1(2f) must upgrade server firmware by performing a combined upgrade of UCS server firmware (C-bundle) to 3.1(3c) or later and HX Data Platform to 2.5. Do not split the upgrade into two separate operations.

    • If the cluster is already on 3.1(2f) C-bundle or later, you may perform an HX Data Platform only or combined upgrade, as required.


  • Maintenance Window—If upgrading both HX Data Platform and UCS firmware, either a combined or split upgrade can be selected through the vSphere HX Data Platform Plug-in depending on the length of the maintenance window. Direct firmware upgrade using server firmware auto install through Cisco UCS Manager should not be attempted. Instead, use the UCS server upgrade orchestration framework provided by the HX Data Platform Plug-in.

Cluster Expansion Guidelines

Expanding existing HyperFlex clusters with converged or compute-only nodes running ESXi 6.5 U1 requires HXDP release 2.6(1a) or later. If the cluster is running HXDP release 2.5, upgrade to 2.6 or later, and then perform the expansion with the matching installer build. Contact Cisco TAC for further details.
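
The rule above is effectively a version gate; a minimal sketch, assuming release strings of the form "major.minor(build)" and ignoring the build suffix for this coarse check:

```python
def expansion_allowed(hxdp_release: str, esxi_is_65u1: bool) -> bool:
    """Expanding with ESXi 6.5 U1 nodes requires HXDP 2.6(1a) or later.

    Release strings like '2.5(1d)' are compared by their numeric major.minor
    components only; the parenthesized build is ignored (a simplification).
    """
    major, rest = hxdp_release.split(".", 1)
    minor = rest.split("(")[0]
    version = (int(major), int(minor))
    return version >= (2, 6) if esxi_is_65u1 else True
```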

Security Fixes

The following security issues are resolved:

| Release | Defect ID | CVE | Description |
|---------|-----------|-----|-------------|
| 2.5(1d) | CSCvg31472 | CVE-2017-12315 | Lack of proper masking of sensitive information in system log files. |

Resolved Caveats in Release 2.5(1c)

| Defect ID | Symptom | First Release Affected | Resolved in Release |
|-----------|---------|------------------------|---------------------|
| CSCvf61822 | Retrying the upgrade after some components have been upgraded may not start storfs successfully. | 2.5(1a) | 2.5(1c) |
| CSCvf58104 | One or more Storage Controller VMs may be deleted due to insufficient resources. | 2.5(1a) | 2.5(1c) |
| CSCvf44793 | Single node kernel upgrade fails due to panic in the VM. | 2.5(1a) | 2.5(1c) |
| CSCvf5944 | Upgrade from release 2.1(1b) to 2.5(1b) failed. | 2.5(1a) | 2.5(1c) |
| CSCvf23255 | ASUP email notification does not work when multiple Notification Settings are open. | 2.5(1a) | 2.5(1c) |
| CSCuy87775 | Datastore failed to mount on all ESX hosts. This is a VMware issue. | 1.7.1 | 2.5(1c) |

Resolved Caveats in Release 2.5(1b)

| Defect ID | Symptom | First Release Affected | Resolved in Release |
|-----------|---------|------------------------|---------------------|
| CSCvf40007 | When any of the out-of-box HX virtual switches (vMotion or vm-network) are changed to use the vSphere distributed switch (vDS), upgrades to 2.5(1a) fail. | 2.5(1a) | 2.5(1b) |

Resolved Caveats in Release 2.5(1a)

| Defect ID | Symptom | First Release Affected | Resolved in Release |
|-----------|---------|------------------------|---------------------|
| CSCuy87828 | Performance chart display is not formatted at 100% zoom. Selecting an optional metric and a smaller resolution at the same time shows a chart that is not formatted correctly. | 1.7.1 | 2.5(1a) |
| CSCvb30039 | When SeSparse disks are used in the virtual machine, native snapshots and ReadyClones fail with the error: "Unable to rename VMDK_FILE_NAME.flat.vmdk". | 1.8(1a) | 2.5(1a) |
| CSCvb81393 | Firmware upgrade fails from Server Storage Controller with unsupported board. | 1.8(1c) | 2.5(1a) |
| CSCvb91799 | Controller VM may fail to power on during UCS upgrade. | 1.8(1c) | 2.5(1a) |
| CSCvc39250 | Performance charts could display skewed data if the CPU utilization on nodes in the cluster is very high. | 2.0(1a) | 2.5(1a) |
| CSCvd00100 | Performance charts show zero, a white space gap, or performance I/O counters dropping to zero while a converged node goes down or enters/exits maintenance mode. | 1.8(1e) | 2.5(1a) |
| CSCvd37521 | A datastore is not mounted on all the nodes in a storage cluster (partially mounted datastore). | 2.0(1a) | 2.5(1a) |
| CSCvd37534 | In some rare cases, restarting an online upgrade after a previous upgrade failed may fail again, even though the cluster has recovered and is in a healthy state. | 2.0(1a) | 2.5(1a) |
| CSCve01880 | During upgrade, if the upgrade procedure fails or is interrupted for any reason, the cleaner process might not restart. | 2.1(1b) | 2.5(1a) |

Open Caveats in Release 2.5(1d)

Each caveat below lists the defect ID, the symptom, the workaround, and the release in which the defect was found.

Install, Upgrade, Expand

CSCvh54563 (found in release 2.5(1d))

Symptom: Generating HyperFlex logs on the HX240 platform causes an all-paths-down state for the cluster. This issue is caused by the vmw_ahci driver in the ESXi 6.5 GA release.

Workaround: If this issue occurs, do one of the following:

  • (Recommended) Upgrade to VMware ESXi 6.5 Update 1.

  • Disable the vmw_ahci driver on each ESXi host, one node at a time, making sure that the cluster is healthy before moving to the next node:

    1. Run the following command on the ESXi node:

      # esxcli system module set --enabled=false --module=vmw_ahci

    2. Reboot the node.

    Note: If you are using a release prior to 2.5(1d), upgrade to release 2.5(1d) before you try this workaround.

CSCvf84968 (found in release 2.5(1c))

Symptom: Combined upgrade using the stcli command fails without Cisco UCS Manager and vCenter credentials.

Workaround: Provide the Cisco UCS Manager and vCenter credentials as part of the CLI command. They are not optional.

CSCvf82238 (found in release 2.5(1c))

Symptom: After upgrading a cluster with compute-only nodes, some VMs are not managed by EAM. This occurs when a cluster with compute nodes has been re-registered on a vCenter cluster.

Workaround: Add the compute node to the HX cluster:

# stcli node add --node-ips 10.104.2.25 --controller-root-password Ca$hc0w5t --esx-username root --esx-password Cisco123

CSCvf62204 (found in release 2.5(1c))

Symptom: A customer created the cluster without any vCenter option, intending to:

  1. Create the cluster.

  2. Create/mount a datastore.

  3. Install vCenter on the HyperFlex cluster (nested vCenter).

However, the create/mount datastore step failed with "Mount status failure".

Workaround: If you see this issue, do the following:

  1. Using SSH, connect to all controller VMs and identify the node that contains the stcli_create_cluster.sh file by running the following command:

    # ls /bin/stcli_create_cluster.sh

  2. On that node, secure copy (SCP) the /etc/springpath/secure/springpath_keystore.jceks file to every other node in the cluster, replacing the original springpath_keystore.jceks file. We recommend that you back up the existing keystore file prior to the secure copy operation:

    # mv /etc/springpath/secure/springpath_keystore.jceks /etc/springpath/secure/springpath_keystore.jceks.bak

  3. Restart the stMgr service on all nodes:

    # restart stMgr

For further information, see the Troubleshooting TechNote, "How to Deploy vCenter on the HX Data Platform".

CSCvf12501 (found in release 2.5(1a))

Symptom: Sometimes, after fresh cluster creation, controller VM memory usage is high and in a critical state.

Workaround: This is a known VMware issue. See https://kb.vmware.com/s/article/2149787.

CSCve73004 (found in release 2.5(1a))

Symptom: UCS Manager does not update the disk firmware status if a firmware upgrade from 2.1(1b) to 2.5 was initiated by the HX Data Platform.

Workaround: Perform a soft reset:

# CIMC-soft-rest

CSCvc62266 (found in release 2.0(1a))

Symptom: After an offline upgrade, due to a VMware EAM issue, sometimes all the controller VMs do not restart. The stcli start cluster command returns the error "Node not available".

Workaround: Manually power on the controller VMs, then start the cluster.

  1. Manually power on the controller VMs:

    • Log in to the vSphere Web Client.

    • Locate the controller VMs that are not powered on: from the vCenter Navigator, select Inventory Lists > Virtual Machines. Storage controller VMs have the prefix stCtlVM.

    • From the Actions menu, select Power > Power On.

  2. Restart the storage cluster:

    • Log in to the command line of any controller VM.

    • Run the command:

      # stcli cluster start

CSCvc32497 (found in release 2.0(1a))

Symptom: Cluster creation or expansion fails when UCSM configuration is not chosen as an advanced option in the Cisco HX Data Platform Installer. This happens because the ESX host is not reachable: on the installer's configuration page, the default VLAN for hx-inband-mgmt (3091) is tagged on the ESX host instead of the user-specified VLAN.

Workaround: Correct the VLAN and retry the deployment.

Note: Before you retry deploy validation, you may have to place the nodes that were previously added to vCenter into maintenance mode.

  1. Tag the correct VLAN on the ESX host by launching the KVM console from Cisco UCS Manager.

  2. In the Cisco HX Data Platform Installer, retry deploy validation.

CSCvb94112 (found in release 1.8(1c))

Symptom: The HX Installer may be stuck at the Cluster Expansion Validation screen during the cluster expansion process.

Workaround:

  1. Check the logs to verify that the expansion workflow is hung.

  2. In your browser, go to http://ip_of_installer/api/reset to restart the workflow.

CSCvb91838 (found in release 1.8(1c))

Symptom: Cluster expansion failed with no operational DNS server in the list.

Workaround: If the DNS server becomes non-operational after deployment or cluster creation, add a new operational DNS server to the controller. Use the following commands:

# stcli services dns remove --dns <non_operational_dns_ip>

# stcli services dns add --dns <operational_dns_ip>

CSCvb29790 (found in release 1.8(1a))

Symptom: Cluster creation fails due to a failure to locate the vCenter server.

Workaround: In the vSphere Web Client, change the vCenter host name to an IP address in the config.vpxd.sso.sts.uri variable.

Management

CSCvf25130 (found in release 2.5(1a))

Symptom: HX Connect times out after 30 minutes. When left idle for more than 30 minutes, the HX Connect Virtual Machine page times out. When you return to a page and click anywhere, refreshed data might be incomplete or you might receive the following error: VI SDK invoke exception:; nested exception is: com.vmware.vim25.NotAuthenticated.

Workaround: Refresh HX Connect through the browser or the HX Connect buttons. Alternatively, log out of HX Connect and log back in. This is a known VMware issue. See the VMware KB article, "vCenter Server logs report the error: SOAP session count limit reached (2004663)".

CSCve17284 (found in release 2.5(1a))

Symptom: Performance charts show a gap for several minutes during an All Flash cluster upgrade.

Workaround: None required. This is expected behavior because the reporting services are taken down during the upgrade. Only the reporting chart is affected, not actual performance.

CSCvd88557 (found in release 2.0(1a))

Symptom: When creating many datastores through the stcli command line, a temporary error displays indicating that some datastores failed to mount.

Workaround: This is temporary. As the number of datastores being created increases, it takes longer to complete the task and clear the mount error.

Replication

CSCvb54848 (found in release 1.7.1)

Symptom: The vSphere Replication Plug-in fails after the HX Plug-in is deployed.

Workaround: To prevent the issue, first install the vSphere Replication plug-in, and then install the HX Data Platform plug-in. For complete steps for uninstalling the required elements and reinstalling them in the supported order, see the 2.5 Release Troubleshooting guide.

CSCvf29202 (found in release 2.5(1a))

Symptom: Recovery might not include disks that are not in the same folder on a datastore as the virtual machine being protected.

Workaround: If any virtual machine disk resides outside the folder and datastore of a protected virtual machine:

  1. Move the disk to the same folder on the datastore.

  2. Add (re-add) the disk to the virtual machine.

This ensures protection and recovery work successfully.

CSCvf27609 (found in release 2.5(1a))

Symptom: A query for a recovery job returns both summary_step_state and state fields.

Workaround: Refer to the state field only. Ignore the information in the summary_step_state field.

Encryption

CSCvf17183 (found in release 2.5(1a))

Symptom: If the CIMC reboots while a modify-security command is in progress, and the server is secured with local key management, a subsequent disable-security command may fail because the server does not know the correct key to use.

Workaround: Log in to the controller VM and use the sed-client to update the physical drive keys to match the server's key.

CSCvf06510 (found in release 2.5(1a))

Symptom: UCSM might indicate partially disabled encryption security.

Workaround: No action required. This is a sync issue between reporting interfaces. To verify from HX Connect, select System Information > Disks > Security. All disks and the controller VM should indicate Security Disabled.

CSCvf04240 (found in release 2.5(1a))

Symptom: Encryption may not be enabled on a new node after it is added to the cluster. One potential cause is that the serial number was not reported correctly by the ESX host.

Workaround: Restart the hostd service on the ESX host, and enable encryption on the cluster from the HX Connect UI. Nodes that already have encryption enabled are not impacted.

CSCve91866 (found in release 2.5(1a))

Symptom: Cannot modify the encryption KMIP policy on UCSM to clear an IP address.

Workaround: UCSM does not allow this behavior. On UCSM, delete the KMIP policy, adjusting the IP addresses as needed, and retry the task.

Open Caveats in Release 2.5(1c)

Each caveat below lists the defect ID, the symptom, the workaround, and the release in which the defect was found.

Install, Upgrade, Expand

CSCvf84968 (found in release 2.5(1c))

Symptom: Combined upgrade using the stcli command fails without Cisco UCS Manager and vCenter credentials.

Workaround: Provide the Cisco UCS Manager and vCenter credentials as part of the CLI command. They are not optional.

CSCvf82238 (found in release 2.5(1c))

Symptom: After upgrading a cluster with compute-only nodes, some VMs are not managed by EAM. This occurs when a cluster with compute nodes has been re-registered on a vCenter cluster.

Workaround: Add the compute node to the HX cluster:

# stcli node add --node-ips 10.104.2.25 --controller-root-password Ca$hc0w5t --esx-username root --esx-password Cisco123

CSCvf62204 (found in release 2.5(1c))

Symptom: A customer created the cluster without any vCenter option, intending to:

  1. Create the cluster.

  2. Create/mount a datastore.

  3. Install vCenter on the HyperFlex cluster (nested vCenter).

However, the create/mount datastore step failed with "Mount status failure".

Workaround: If you see this issue, do the following:

  1. Using SSH, connect to all controller VMs and identify the node that contains the stcli_create_cluster.sh file by running the following command:

    # ls /bin/stcli_create_cluster.sh

  2. On that node, secure copy (SCP) the /etc/springpath/secure/springpath_keystore.jceks file to every other node in the cluster, replacing the original springpath_keystore.jceks file. We recommend that you back up the existing keystore file prior to the secure copy operation:

    # mv /etc/springpath/secure/springpath_keystore.jceks /etc/springpath/secure/springpath_keystore.jceks.bak

  3. Restart the stMgr service on all nodes:

    # restart stMgr

For further information, see the Troubleshooting TechNote, "How to Deploy vCenter on the HX Data Platform".

CSCvf12501 (found in release 2.5(1a))

Symptom: Sometimes, after fresh cluster creation, controller VM memory usage is high and in a critical state.

Workaround: This is a known VMware issue. See https://kb.vmware.com/s/article/2149787.

CSCve73004 (found in release 2.5(1a))

Symptom: UCS Manager does not update the disk firmware status if a firmware upgrade from 2.1(1b) to 2.5 was initiated by the HX Data Platform.

Workaround: Perform a soft reset:

# CIMC-soft-rest

CSCvc62266 (found in release 2.0(1a))

Symptom: After an offline upgrade, due to a VMware EAM issue, sometimes all the controller VMs do not restart. The stcli start cluster command returns the error "Node not available".

Workaround: Manually power on the controller VMs, then start the cluster.

  1. Manually power on the controller VMs:

    • Log in to the vSphere Web Client.

    • Locate the controller VMs that are not powered on: from the vCenter Navigator, select Inventory Lists > Virtual Machines. Storage controller VMs have the prefix stCtlVM.

    • From the Actions menu, select Power > Power On.

  2. Restart the storage cluster:

    • Log in to the command line of any controller VM.

    • Run the command:

      # stcli cluster start

CSCvc32497

Cluster creation or expansion fails when UCSM configuration is not chosen as an advanced option in the Cisco HX Data Platform Installer. This happens because the ESX host is not reachable.

On the Cisco HX Data Platform Installer's configuration page, you will see that the default VLAN for hx-inband-mgmt (3091) is tagged on the ESX host instead of the user-specified VLAN.

Correct the VLAN and retry the deployment.

Note 

Before you retry deploy validation, you may have to place the nodes that were previously added to vCenter into maintenance mode.

  1. Tag the correct VLAN on the ESX host by launching the KVM console from Cisco UCS Manager.

  2. In Cisco HX Data Platform Installer, retry deploy validation.

2.0(1a)

CSCvb94112

The HX Installer may become stuck at the Cluster Expansion Validation screen during the cluster expansion process.

  1. Check logs to verify that the expansion workflow is hung.

  2. In your browser, go to http://ip_of_installer/api/reset to restart the workflow.
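
Step 2 can also be performed from a command line with curl. This is a minimal sketch; ip_of_installer is the same placeholder as in the step above, to be replaced with your installer VM's address.

```shell
# Call the installer's reset endpoint directly. INSTALLER defaults to the
# placeholder "ip_of_installer"; substitute your installer VM IP.
INSTALLER=${INSTALLER:-ip_of_installer}
curl -s --max-time 10 "http://$INSTALLER/api/reset" \
  || echo "installer at $INSTALLER not reachable"
```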

1.8(1c)

CSCvb91838

Cluster expansion fails when none of the configured DNS servers is operational.

If the DNS server becomes non-operational after deployment or cluster creation, add a new operational DNS server to the controller. Use the following commands:

# stcli services dns remove --dns <non_operational_dns_ip>

# stcli services dns add --dns <operational_dns_ip>
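
The swap can be wrapped in a small script run on a controller VM. This is a sketch: the addresses are example placeholders, and DRY_RUN defaults to 1 so the stcli commands are only printed until you set DRY_RUN=0.

```shell
#!/bin/sh
# Replace a non-operational DNS server with a working one via stcli.
# OLD_DNS/NEW_DNS are example addresses; adjust for your environment.
OLD_DNS=${OLD_DNS:-192.0.2.53}      # the non-operational server
NEW_DNS=${NEW_DNS:-198.51.100.53}   # the replacement server

run() {   # in dry-run mode (the default) print the command, otherwise execute it
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run stcli services dns remove --dns "$OLD_DNS"
run stcli services dns add --dns "$NEW_DNS"
run stcli services dns show   # confirm the resulting DNS server list
```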

1.8(1c)

CSCvb29790

Cluster creation fails due to failure to locate vCenter server.

In the vSphere Web Client, change the vCenter host name to an IP address in the config.vpxd.sso.sts.uri variable.

1.8(1a)

Management

CSCvf25130

HX Connect times out after 30 minutes.

When left idle for more than 30 minutes, the HX Connect Virtual Machine page times out. When you return to a page and click anywhere, refreshed data might be incomplete or you might receive the following error: VI SDK invoke exception:; nested exception is: com.vmware.vim25.NotAuthenticated.

Refresh the page by using the browser refresh or the HX Connect refresh buttons. Alternatively, log out of HX Connect and log back in.

This is a known VMware issue. See VMware KB, vCenter Server logs report the error: SOAP session count limit reached (2004663).

2.5(1a)

CSCve17284

Performance charts show a gap for several minutes during an All Flash cluster upgrade.

This is expected behavior because the reporting services are taken down during the upgrade. Only the Reporting Chart is affected, not the actual performance.

2.5(1a)

CSCvd88557

When creating many datastores through the stcli command line, a temporary error appears indicating that some datastores failed to mount.

This is temporary. However, as the number of datastores being created increases, it takes longer to complete the task and clear the mount error.

2.0(1a)

Replication

CSCvb54848

vSphere Replication Plug-in fails after HX Plug-in is deployed.

To prevent the issue, first install the vSphere Replication plug-in, and then install the HX Data Platform plug-in.

For complete steps for uninstalling required elements and reinstalling them in the supported order, see the 2.5 Release Troubleshooting guide.

1.7.1

CSCvf29202

Recovery might not include disks that are not in the same folder on a datastore as the virtual machine being protected.

If any virtual machine disk resides outside the folder and datastore of the protected virtual machine:

  1. Move the disk to the same folder on the datastore.

  2. Then add (re-add) the disk to the virtual machine.

This ensures protection and recovery work successfully.

2.5(1a)

CSCvf27609

Query for recovery job returns summary_step_state and state fields.

Refer to the state field only. Ignore the information in the summary_step_state field.

2.5(1a)

Encryption

CSCvf17183

If the CIMC reboots while a modify-security command is in progress and the server is secured with local key management, a subsequent disable-security command may fail because the server does not know the correct key to use.

Log in to the controller VM and use the sed-client to update the physical drive keys to match the server's key.

2.5(1a)

CSCvf06510

UCSM might indicate partially disabled encryption security.

No action required. This is a sync issue between reporting interfaces.

To verify from HX Connect, select System Information > Disks > Security. All disks and the controller VM should indicate Security Disabled.

2.5(1a)

CSCvf04240

Encryption may not be enabled on a new node after it is added to the cluster.

One potential cause is that the serial number was not reported correctly by the ESX host.

Restart the hostd service on the ESX host, and then enable encryption on the cluster from the HX Connect UI. Nodes that already have encryption enabled are not affected.

2.5(1a)

CSCve91866

Cannot modify encryption KMIP policy on UCSM to clear an IP address.

UCSM does not allow this behavior. On UCSM, delete the KMIP policy, adjusting for the IP addresses as needed, and retry the task.

2.5(1a)

Open Caveats in Release 2.5(1b)

Defect ID

Symptom

Workaround

Defect Found in Release

Install, Upgrade, Expand

CSCvf12501

Sometimes, after fresh cluster creation, controller VM memory usage is high and in a critical state.

This is a known VMware issue. See the article, VM Memory Usage heuristic over-reporting on ESXi 6.5.

2.5(1a)

CSCve73004

UCS Manager does not update the disk firmware status if a firmware upgrade from 2.1(1b) to 2.5 was initiated by the HX Data Platform.

Perform a soft reset:

# CIMC-soft-reset

2.5(1a)

CSCvc62266

After an offline upgrade, due to a VMware EAM issue, sometimes not all of the controller VMs restart. The stcli cluster start command returns the error: "Node not available".

Manually power on the controller VM and start the cluster.

  1. Manually power on the controller VMs.

    • Log in to the vSphere Web Client.

    • Locate the controller VMs that are not powered on. From the vCenter Navigator, select Inventory Lists > Virtual Machines > vm. Storage controller VMs have the prefix stCtlVM.

    • From the Actions menu, select Power > Power On.

  2. Restart the storage cluster.

    • Log in to the command line of any controller VM.

    • Run the command:

      # stcli cluster start

2.0(1a)

CSCvc32497

Cluster creation or expansion fails when UCSM configuration is not chosen as an advanced option in the Cisco HX Data Platform Installer. This happens because the ESX host is not reachable.

On the Cisco HX Data Platform Installer's configuration page, you will see that the default VLAN for hx-inband-mgmt (3091) is tagged on the ESX host instead of the user-specified VLAN.

Correct the VLAN and retry the deployment.

Note 

Before you retry deploy validation, you may have to place the nodes that were previously added to vCenter into maintenance mode.

  1. Tag the correct VLAN on the ESX host by launching the KVM console from Cisco UCS Manager.

  2. In Cisco HX Data Platform Installer, retry deploy validation.

2.0(1a)

CSCvb94112

The HX Installer may become stuck at the Cluster Expansion Validation screen during the cluster expansion process.

  1. Check logs to verify that the expansion workflow is hung.

  2. In your browser, go to http://ip_of_installer/api/reset to restart the workflow.
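
Step 2 can also be performed from a command line with curl. This is a minimal sketch; ip_of_installer is the same placeholder as in the step above, to be replaced with your installer VM's address.

```shell
# Call the installer's reset endpoint directly. INSTALLER defaults to the
# placeholder "ip_of_installer"; substitute your installer VM IP.
INSTALLER=${INSTALLER:-ip_of_installer}
curl -s --max-time 10 "http://$INSTALLER/api/reset" \
  || echo "installer at $INSTALLER not reachable"
```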

1.8(1c)

CSCvb91838

Cluster expansion fails when none of the configured DNS servers is operational.

If the DNS server becomes non-operational after deployment or cluster creation, add a new operational DNS server to the controller. Use the following commands:

# stcli services dns remove --dns <non_operational_dns_ip>

# stcli services dns add --dns <operational_dns_ip>
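
The swap can be wrapped in a small script run on a controller VM. This is a sketch: the addresses are example placeholders, and DRY_RUN defaults to 1 so the stcli commands are only printed until you set DRY_RUN=0.

```shell
#!/bin/sh
# Replace a non-operational DNS server with a working one via stcli.
# OLD_DNS/NEW_DNS are example addresses; adjust for your environment.
OLD_DNS=${OLD_DNS:-192.0.2.53}      # the non-operational server
NEW_DNS=${NEW_DNS:-198.51.100.53}   # the replacement server

run() {   # in dry-run mode (the default) print the command, otherwise execute it
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run stcli services dns remove --dns "$OLD_DNS"
run stcli services dns add --dns "$NEW_DNS"
run stcli services dns show   # confirm the resulting DNS server list
```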

1.8(1c)

CSCvb29790

Cluster creation fails due to failure to locate vCenter server.

In the vSphere Web Client, change the vCenter host name to an IP address in the config.vpxd.sso.sts.uri variable.

1.8(1a)

Management

CSCvf25130

HX Connect times out after 30 minutes.

When left idle for more than 30 minutes, the HX Connect Virtual Machine page times out. When you return to a page and click anywhere, refreshed data might be incomplete or you might receive the following error: VI SDK invoke exception:; nested exception is: com.vmware.vim25.NotAuthenticated.

Refresh the page by using the browser refresh or the HX Connect refresh buttons. Alternatively, log out of HX Connect and log back in.

This is a known VMware issue. See VMware KB, vCenter Server logs report the error: SOAP session count limit reached (2004663).

2.5(1a)

CSCvf23255

ASUP email notification does not work when multiple Notification Settings are open.

Disable, then re-enable notification settings.

2.5(1a)

CSCve17284

Performance charts show a gap for several minutes during an All Flash cluster upgrade.

This is expected behavior because the reporting services are taken down during the upgrade. Only the Reporting Chart is affected, not the actual performance.

2.5(1a)

CSCvd88557

When creating many datastores through the stcli command line, a temporary error appears indicating that some datastores failed to mount.

This is temporary. However, as the number of datastores being created increases, it takes longer to complete the task and clear the mount error.

2.0(1a)

CSCuy87775

Datastores failed to mount on all ESX hosts. This is a VMware issue.

Try mounting the datastores one at a time.

1.7.1

Replication

CSCvf29202

Recovery might not include disks that are not in the same folder on a datastore as the virtual machine being protected.

If any virtual machine disk resides outside the folder and datastore of the protected virtual machine:

  1. Move the disk to the same folder on the datastore.

  2. Then add (re-add) the disk to the virtual machine.

This ensures protection and recovery work successfully.

2.5(1a)

CSCvf27609

Query for recovery job returns summary_step_state and state fields.

Refer to the state field only. Ignore the information in the summary_step_state field.

2.5(1a)

Encryption

CSCvf17183

If the CIMC reboots while a modify-security command is in progress and the server is secured with local key management, a subsequent disable-security command may fail because the server does not know the correct key to use.

Log in to the controller VM and use the sed-client to update the physical drive keys to match the server's key.

2.5(1a)

CSCvf06510

UCSM might indicate partially disabled encryption security.

No action required. This is a sync issue between reporting interfaces.

To verify from HX Connect, select System Information > Disks > Security. All disks and the controller VM should indicate Security Disabled.

2.5(1a)

CSCvf04240

Encryption may not be enabled on a new node after it is added to the cluster.

One potential cause is that the serial number was not reported correctly by the ESX host.

Restart the hostd service on the ESX host, and then enable encryption on the cluster from the HX Connect UI. Nodes that already have encryption enabled are not affected.

2.5(1a)

CSCve91866

Cannot modify encryption KMIP policy on UCSM to clear an IP address.

UCSM does not allow this behavior. On UCSM, delete the KMIP policy, adjusting for the IP addresses as needed, and retry the task.

2.5(1a)

Related Documentation

Document

Description

Preinstallation Checklist

Provides an editable file for gathering required configuration information prior to starting an installation. This checklist must be filled out and returned to your Cisco account team.

Ordering and Licensing Guide

Provides information about licensing and ordering Cisco HyperFlex Systems, covering contract creation, activation, renewal, and cotermination.

Installation Guide for VMware ESXi

Provides detailed information about Day 0 configuration of HyperFlex Systems and related post cluster configuration tasks. It also describes how to set up multiple HX clusters, expand an HX cluster, set up a mixed HX cluster, and attach external storage.

Stretched Cluster Guide

Provides installation and configuration procedures for a HyperFlex Stretched Cluster, enabling you to deploy an active-active disaster avoidance solution for mission-critical workloads.

Installation Guide on Microsoft Hyper-V

Provides procedures for installing and configuring Cisco HyperFlex Systems on Microsoft Hyper-V.

Edge Deployment Guide

Provides deployment procedures for HyperFlex Edge, designed to bring hyperconvergence to remote and branch office (ROBO) and edge environments.

Administration Guide

Provides information about how to manage and monitor the cluster, encryption, data protection (replication and recovery), ReadyClones, Native snapshots, and user management. Interfaces include HX Connect, HX Data Platform Plug-in, and the stcli commands.

Administration Guide for Kubernetes, Release 3.5

Provides information about HyperFlex storage integration for Kubernetes, Kubernetes support in HyperFlex Connect, and instructions on how to configure HyperFlex FlexVolume storage integration for both the Cisco Container Platform and the Red Hat OpenShift Container Platform.

HyperFlex Intersight Installation Guide

Provides installation, configuration, and deployment procedures for HyperFlex Intersight, designed to deliver secure infrastructure management anywhere from the cloud.

Upgrade Guide

Provides information on how to upgrade an existing installation of Cisco HX Data Platform, upgrade guidelines, and information about various upgrade tasks.

Network and External Storage Management Guide

Provides information about HyperFlex Systems specific network and external storage management tasks.

Command Line Interface (CLI) Guide

Provides CLI reference information for HX Data Platform stcli commands.

REST API Getting Started Guide

REST API Reference

Provides information related to REST APIs that enable external applications to interface directly with the Cisco HyperFlex management plane.

Troubleshooting Guide

Provides troubleshooting for installation, configuration, Cisco UCS Manager to Cisco HyperFlex configuration, and VMware vSphere to HyperFlex configuration. In addition, this guide provides information about understanding system events, errors, Smart Call Home, and Cisco support.

TechNotes

Provides independent knowledge base articles.