Cisco HX Release 5.0(x) - Software Requirements

Cisco HX Data Platform Compatibility and Scalability Details - 5.0(x) Releases

Cluster Limits

  • Cisco HX Data Platform supports up to 100 clusters managed per vCenter, per VMware configuration maximums.

  • Cisco HX Data Platform supports any number of clusters on a single FI domain. Each HX converged node must be directly connected to a dedicated FI port on fabric A and fabric B without the use of a FEX. C-Series compute-only nodes must also connect directly to both FIs. B-Series compute-only nodes connect through a chassis I/O module to both fabrics. Ultimately, the number of physical ports on the FI dictates the maximum cluster size and the maximum number of individual clusters supported in a UCS domain (see the sketch after this list).

  • Using a FEX on the uplink ports that connect the Fabric Interconnects to the top-of-rack (ToR) switches is not supported, because network oversubscription could leave the fabric unable to handle HyperFlex storage traffic during failure scenarios.
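
The port-count constraint in the second bullet is simple arithmetic. The following sketch is illustrative only; the port counts are hypothetical values you must replace with your own FI model and uplink design:

```python
# Illustrative sketch only: estimate direct-connect capacity of a UCS FI domain.
# Assumptions (not from this document): ports_per_fi and uplink_ports are
# hypothetical numbers; substitute the values for your FI model and design.

def max_direct_connect_nodes(ports_per_fi: int, uplink_ports: int) -> int:
    """Each HX node needs one dedicated port on fabric A and one on fabric B,
    so the usable port count of a single fabric is the binding limit."""
    return max(ports_per_fi - uplink_ports, 0)

# Example: a hypothetical 54-port FI with 6 ports reserved for uplinks
# leaves 48 ports per fabric for converged and compute node connections.
print(max_direct_connect_nodes(ports_per_fi=54, uplink_ports=6))  # 48
```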

The following tables provide Cisco HX Data Platform Compatibility and Scalability Details.

Table 1. Cisco HX Data Platform Storage Cluster Specifications for VMware ESXi

Specifications are listed per deployment type. The FI-Connected, Edge, and DC-No-FI deployment types run VMware ESXi; Stretched Cluster (available on ESX only) is FI-Connected. LFF (large form factor) servers are listed separately because their limits differ.

FI-Connected (VMware ESXi)

  • HX Servers (Intel and AMD Servers): HX220C-M6S, HXAF220C-M6S, HXAF220C-M6SN, HX240C-M6SX, HXAF240C-M6SN, HXAF240C-M6SX, HXAF240C-M5SX, HXAF220C-M5SN, HXAF240C-M5SN, HX220C-M5, HXAF220C-M5, HX240C-M5, HXAF240C-M5, HX220C-M4, HXAF220C-M4, HX240C-M4, HXAF240C-M4, HXAF225C-M6S, HXAF245C-M6SX, HX225C-M6S, HX245C-M6SX
  • Compute-Only UCS B-Series/C-Series Servers: B200 M6¹, B200 M5/M4, B260 M4, B420 M4, B460 M4, B480 M5, C220 M6/M5/M4, C240 M6/M5/M4, C460 M4, C480 M5, C225 M6, C245 M6
  • Supported Nodes: Converged and compute-only nodes
  • HXDP-DC-AD Licensed Node Limits (1:1 ratio of HXDP-DC-AD to compute-only nodes, min-max): Converged nodes: 3-32; compute-only nodes: 0-32. All NVMe PIDs require the HXDP-DC-PR license.
  • HXDP-DC-PR Licensed Node Limits (1:2 ratio of HXDP-DC-PR to compute-only nodes, min-max): Converged nodes: 3-32; compute-only nodes: 0-64 (up to max cluster size). All NVMe PIDs require the HXDP-DC-PR license.
  • Max Cluster Size: 96⁵
  • Max Compute to Converged Ratio: 2:1
  • Expansion: ✔

FI-Connected, LFF (VMware ESXi)

  • HX Servers: HX240C-M6L, HX240C-M5L
  • Compute-Only UCS B-Series/C-Series Servers: B200 M6², B200 M5/M4, B260 M4, B420 M4, B460 M4, B480 M5, C220 M6/M5/M4, C240 M6/M5/M4, C460 M4, C480 M5
  • Supported Nodes: Converged and compute-only nodes
  • HXDP-DC-AD Licensed Node Limits: Converged nodes: 3-16; compute-only nodes: 0-16
  • HXDP-DC-PR Licensed Node Limits: Converged nodes: 3-16; compute-only nodes: 0-32
  • Max Cluster Size: 48
  • Max Compute to Converged Ratio: 2:1
  • Expansion: ✔

Edge (VMware ESXi)

  • HX Servers: HX240 M6SX Edge, HXAF240 M6SX Edge, HX220 M6S Edge, HXAF220 M6S Edge, HX240C M5 Edge Full Depth, HXAF240C M5 Edge Full Depth, HX240C M5 Edge Short Depth, HXAF240C M5 Edge Short Depth, HX220C M5 Edge, HXAF220C M5 Edge, HX220C M4 Edge, HXAF220C M4 Edge, HXAF225-M6SX Edge, HX225-M6SX Edge, HXAF245-M6SX Edge, HX245-M6SX Edge
  • Compute-Only UCS B-Series/C-Series Servers: Not supported
  • Supported Nodes: Converged nodes only
  • HXDP-DC-AD Licensed Node Limits: M4 converged nodes: 3; M5 converged nodes: 2, 3, or 4
  • HXDP-DC-PR Licensed Node Limits: M4 converged nodes: 3; M5 converged nodes: 2, 3, or 4
  • Max Cluster Size: 4
  • Expansion: ✔⁶

DC-No-FI (VMware ESXi)

  • HX Servers: HX220C-M6, HX240C-M6, HX220C-M6S, HXAF220C-M6S, HXAF220C-M6SN, HX240C-M6SX, HXAF240C-M6SN, HXAF240C-M6SX, HX220C-M5, HXAF220C-M5, HX240C-M5, HXAF240C-M5, HXAF240C-M5SX, HXAF225C-M6S, HXAF245C-M6SX, HX225C-M6S, HX245C-M6SX
  • Compute-Only UCS C-Series Servers: C220 M6, C220 M5, C225 M6, C245 M6, C240 M6, C240 M5
  • Supported Nodes: Converged and compute-only nodes
  • HXDP-DC-AD Licensed Node Limits: Converged nodes: 3-12; compute-only nodes: 0-12. All NVMe PIDs require the HXDP-DC-PR license.
  • HXDP-DC-PR Licensed Node Limits: Converged nodes: 3-12; compute-only nodes: 0-24. Required for HXAF220c M6SN, HXAF240c M6SN, and HXAF220c M5SN; all NVMe PIDs require the HXDP-DC-PR license.
  • Max Cluster Size: 36
  • Max Compute to Converged Ratio: 2:1
  • Expansion: ✔*

Stretched Cluster, FI-Connected (available on ESX only)

  • HX Servers: HX220C-M6S, HXAF220C-M6S, HXAF220C-M6SN, HX240C-M6SX, HXAF240C-M6SX, HXAF240C-M6SN, HXAF220C-M5SN, HX220C-M5, HXAF220C-M5, HX240C-M5, HXAF240C-M5, HXAF225C-M6S, HXAF245C-M6SX, HX225C-M6S, HX245C-M6SX
  • Compute-Only UCS B-Series/C-Series Servers: B200 M6³, B200 M5/M4, B260 M4, B420 M4, B460 M4, B480 M5, C220 M6/M5/M4, C240 M6/M5/M4, C460 M4, C480 M5, C225 M6, C245 M6
  • Supported Nodes: Converged and compute-only nodes
  • HXDP-DC-AD Licensed Node Limits: N/A
  • HXDP-DC-PR Licensed Node Limits: Converged nodes: 2-16 per site; compute-only nodes: 0-21 per site, 0-64 per cluster (up to max cluster size)
  • Max Cluster Size: 32 per site / 64 per cluster
  • Max Compute to Converged Ratio: 2:1
  • Expansion: ✔*

Stretched Cluster, FI-Connected, LFF (available on ESX only)

  • HX Servers: HX240C-M6L, HX240C-M5L
  • Compute-Only UCS B-Series/C-Series Servers: B200 M6⁴, B200 M5/M4, B260 M4, B420 M4, B460 M4, B480 M5, C220 M6/M5/M4, C240 M6/M5/M4, C460 M4, C480 M5
  • Supported Nodes: Converged and compute-only nodes
  • HXDP-DC-AD Licensed Node Limits: N/A
  • HXDP-DC-PR Licensed Node Limits: Converged nodes: 2-8 per site; compute-only nodes: 0-16 per site (up to max cluster size)
  • Max Cluster Size: 24 per site / 48 per cluster
  • Max Compute to Converged Ratio: 2:1
  • Expansion: ✔*

1 B200 M6 support is limited to HXDP Release 5.0(2b) and later.
2 B200 M6 support is limited to HXDP Release 5.0(2b) and later.
3 B200 M6 support is limited to HXDP Release 5.0(2b) and later.
4 B200 M6 support is limited to HXDP Release 5.0(2b) and later.
5 Cluster sizes greater than 64 nodes require ESXi 7.0 U1 or later.
6 Edge cluster expansion with 1G network topology is not supported.
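
The node limits and license ratios in Table 1 lend themselves to a quick sanity check. A minimal sketch follows, assuming the FI-connected ESXi column for standard (non-LFF) servers; the dictionary layout and function names are illustrative, not a Cisco tool:

```python
# Hedged sketch of the Table 1 rules for an FI-connected ESXi cluster
# (standard, non-LFF servers): AD allows 1:1 compute to converged, PR allows
# 2:1, both capped by the 96-node maximum cluster size.

LIMITS = {
    "HXDP-DC-AD": {"converged": (3, 32), "compute_ratio": 1, "max_cluster": 96},
    "HXDP-DC-PR": {"converged": (3, 32), "compute_ratio": 2, "max_cluster": 96},
}

def validate(license_tier: str, converged: int, compute: int) -> list[str]:
    """Return a list of rule violations (an empty list means the counts fit)."""
    rule = LIMITS[license_tier]
    lo, hi = rule["converged"]
    errors = []
    if not lo <= converged <= hi:
        errors.append(f"converged node count {converged} outside {lo}-{hi}")
    if compute > rule["compute_ratio"] * converged:
        errors.append(f"compute:converged exceeds {rule['compute_ratio']}:1")
    if converged + compute > rule["max_cluster"]:
        errors.append(f"total {converged + compute} exceeds {rule['max_cluster']} nodes")
    return errors

print(validate("HXDP-DC-AD", 16, 20))  # ratio violation: 20 > 1 x 16
print(validate("HXDP-DC-PR", 32, 64))  # [] -> a full 96-node cluster
```
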
Table 2. Cisco HX Data Platform Storage Cluster Specifications for Microsoft Hyper-V

Both columns are FI-Connected deployments; the LFF server (HX240c M5L) is listed separately because its limits differ.

FI-Connected (Microsoft Hyper-V)

  • HX Servers: HX220c M5, HX220c AF M5, HX240c M5, HX240c AF M5
  • Compute-Only UCS B-Series/C-Series Servers: C240 M5, C220 M5, B200 M4, B200 M5
  • Supported Nodes: Converged and compute-only nodes
  • HXDP-DC-AD Licensed Node Limits (1:1 ratio of HXDP-DC-AD to compute-only nodes, min-max): Converged nodes: 3-16; compute-only nodes: 0-16
  • HXDP-DC-PR Licensed Node Limits (1:2 ratio of HXDP-DC-PR to compute-only nodes, min-max): Converged nodes: 3-16; compute-only nodes: 0-16
  • Max Cluster Size: 32
  • Max Compute to Converged Ratio: 1:1

FI-Connected, LFF (Microsoft Hyper-V)

  • HX Servers: HX240c M5L
  • Compute-Only UCS B-Series/C-Series Servers: C220 M5, C240 M5, B200 M4, B200 M5
  • Supported Nodes: Converged and compute-only nodes
  • HXDP-DC-AD Licensed Node Limits: Converged nodes: 3-16 (the 12 TB HDD option is not supported for Hyper-V); compute-only nodes: 0-16
  • HXDP-DC-PR Licensed Node Limits: Converged nodes: 3-12 (the 12 TB HDD option is not supported for Hyper-V); compute-only nodes: 0-16
  • Max Cluster Size: 32
  • Max Compute to Converged Ratio: 1:1

FI/Server Firmware - 5.0(x) Releases

If you are installing new cluster(s), or upgrading existing clusters and require guidance on UCS FI/Server firmware versions, see Choosing UCS Server Firmware Versions.

If you are installing or upgrading HyperFlex clusters with All NVMe nodes, see the notes below.

Table 3. FI/Server Firmware Versions for M4/M5/M6 Servers

5.0(2g)⁷
  M4/M5 Qualified FI/Server Firmware: 4.0(4k), 4.0(4m), 4.1(1e), 4.1(2a), 4.1(2c), 4.1(3j), 4.1(3k), 4.2(1n), 4.2(3d), 4.2(3g), 4.2(3h), 4.2(3i)⁸, 4.2(3j)
  M6 Qualified FI/Server Firmware: 4.2(1i), 4.2(1m), 4.2(1n), 4.2(3d), 4.2(3g), 4.2(3h), 4.2(3i)⁹, 4.2(3j)

5.0(2e)
  M4/M5 Qualified FI/Server Firmware: 4.0(4k), 4.0(4m), 4.1(1e), 4.1(2a), 4.1(2c), 4.1(3j), 4.1(3k), 4.2(1n), 4.2(3d), 4.2(3e), 4.2(3g), 4.2(3h)
  M6 Qualified FI/Server Firmware: 4.2(1i), 4.2(1m), 4.2(1n), 4.2(3d), 4.2(3e), 4.2(3g), 4.2(3h)

5.0(2d)
  M4/M5 Qualified FI/Server Firmware: 4.0(4k), 4.0(4m), 4.1(1e), 4.1(2a), 4.1(2c), 4.1(3j), 4.1(3k), 4.2(1n), 4.2(3d), 4.2(3e), 4.2(3g)
  M6 Qualified FI/Server Firmware: 4.2(1i), 4.2(1m), 4.2(1n), 4.2(3d), 4.2(3e), 4.2(3g)

5.0(2c)¹⁰
  M4/M5 Qualified FI/Server Firmware: 4.0(4k), 4.0(4m), 4.1(1e), 4.1(2a), 4.1(2c), 4.1(3j), 4.1(3k), 4.2(1n), 4.2(3d)
  M6 Qualified FI/Server Firmware: 4.2(1i), 4.2(1m), 4.2(1n), 4.2(3d)

5.0(2b)
  M4/M5 Qualified FI/Server Firmware: 4.0(4k), 4.0(4m), 4.1(1e), 4.1(2a), 4.1(2c), 4.1(3j), 4.2(1n)
  M6 Qualified FI/Server Firmware: 4.2(1i), 4.2(1m), 4.2(1n)¹¹

5.0(2a)
  M4/M5 Qualified FI/Server Firmware: 4.0(4k), 4.0(4m), 4.1(1e), 4.1(2a), 4.1(2c), 4.1(3j), 4.2(1n)
  M6 Qualified FI/Server Firmware: 4.2(1i), 4.2(1m), 4.2(1n)

5.0(1x)
  M4/M5 Qualified FI/Server Firmware: 4.0(4k), 4.0(4m), 4.1(1e), 4.1(2a), 4.1(2c), 4.2(1n)
  M6 Qualified FI/Server Firmware: 4.2(1i), 4.2(1m), 4.2(1n)

7 HX240 M6 SED clusters in HXDP Release 5.0(2g) are supported with Server Firmware version 4.2(3j).
8 Server firmware 4.2(3i) can be run with UCS Infrastructure (A bundle) 4.2(3i) or 4.3(2e). The cross compatibility with Infrastructure firmware 4.3(2e) is only supported with server firmware 4.2(3i).
9 Server firmware 4.2(3i) can be run with UCS Infrastructure (A bundle) 4.2(3i) or 4.3(2e). The cross compatibility with Infrastructure firmware 4.3(2e) is only supported with server firmware 4.2(3i).
10 HX240 M6 SED clusters are not supported.
11 HX240 M6 SED clusters are not supported.
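
Because the qualified-firmware matrix above is easy to misread, it can help to encode it as data. A minimal sketch (only one release is populated as an example; extend the dictionary from Table 3 as needed):

```python
# Illustrative lookup distilled from Table 3. Keys pair an HXDP 5.0(x)
# release with a server generation; values are the qualified FI/server
# firmware bundles. Only 5.0(2a) is populated here as an example.

QUALIFIED_FW = {
    ("5.0(2a)", "M4/M5"): {"4.0(4k)", "4.0(4m)", "4.1(1e)", "4.1(2a)",
                           "4.1(2c)", "4.1(3j)", "4.2(1n)"},
    ("5.0(2a)", "M6"): {"4.2(1i)", "4.2(1m)", "4.2(1n)"},
}

def is_qualified(hxdp: str, generation: str, ucs_fw: str) -> bool:
    """True if the UCS firmware bundle is qualified for this combination."""
    return ucs_fw in QUALIFIED_FW.get((hxdp, generation), set())

print(is_qualified("5.0(2a)", "M6", "4.2(1m)"))  # True
print(is_qualified("5.0(2a)", "M6", "4.2(3d)"))  # False (not qualified)
```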

Important FI/Server Firmware Notes:


Restriction


HXAF240-M5 clusters using Samsung SSDs with 3.8 TB and 7.6 TB capacities:

Do not install or upgrade to UCS version 4.2(3) on HXAF240-M5 clusters that use Samsung SSD drives with PID HX-SD76T61X-EV or HX-SD38T61X-EV (UCS-SD76T61X-EV or UCS-SD38T61X-EV). The highest UCS A/B/C versions used should be 4.2(1n). For more information, see CSCwf93621.


  • Legacy BIOS Mode: For All NVMe HyperFlex clusters using legacy BIOS mode, do not upgrade the server firmware to 4.1(3h), 4.1(3i), 4.1(3j), 4.2(1m), or 4.2(1n). For more information, see CSCwd04797.

    To review the BIOS version, see Verifying Firmware Versions.

  • Fabric Interconnect 6400: If your environment or deployment uses a Fabric Interconnect 6400 connected to VIC 1455/1457 adapters with SFP-H25G-CU3M or SFP-H25G-CU5M cables, use only UCS Release 4.0(4k), or 4.1(2a) and later. Do not use any other UCS version listed in the table of qualified releases; doing so may cause cluster outages.

    Refer to Release Notes for UCS Manager, Firmware/Drivers, and Blade BIOS for any UCS issues that may affect your environment.

    Use the following upgrade sequence ONLY for Fabric Interconnect 6400 connected to VIC 1455/1457 using SFP-H25G-CU3M or SFP-H25G-CU5M cables:

    1. Upgrade the UCS server firmware from HX Connect.

    2. Upgrade the UCS infrastructure firmware.

    3. Upgrade HXDP.

    4. Upgrade ESXi.

    If you have this hardware and software combination, a combined upgrade that includes UCS server firmware is not supported. However, a combined upgrade of HXDP and ESXi is supported after the UCS server firmware and UCS infrastructure firmware upgrades are complete.

    If the current UCS firmware version is already 4.0(4k), or 4.1(2a) or later, then a combined upgrade of UCS server firmware, HXDP, and ESXi is supported (see the sketch below).
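
A minimal sketch of that version gate, assuming the usual UCS "a.b(Nc)" version string format; the parser and threshold handling are illustrative:

```python
import re

# Hedged sketch: decide whether a combined upgrade (UCS server firmware +
# HXDP + ESXi) is allowed for the FI 6400 + VIC 1455/1457 + SFP-H25G-CU3M/CU5M
# combination described above. Parsing assumes versions like "4.1(2a)".

REQUIRED_SEQUENCE = ["UCS server firmware (from HX Connect)",
                     "UCS infrastructure firmware", "HXDP", "ESXi"]

def parse_ucs(version: str) -> tuple:
    m = re.fullmatch(r"(\d+)\.(\d+)\((\d+)([a-z]?)\)", version)
    if not m:
        raise ValueError(f"unrecognized UCS version: {version}")
    major, minor, build, letter = m.groups()
    return (int(major), int(minor), int(build), letter)

def combined_upgrade_supported(current_fw: str) -> bool:
    """The 4.0(x) train needs at least 4.0(4k); anything newer needs 4.1(2a)+."""
    v = parse_ucs(current_fw)
    if v[:2] == (4, 0):
        return v >= parse_ucs("4.0(4k)")
    return v >= parse_ucs("4.1(2a)")

print(combined_upgrade_supported("4.0(4h)"))  # False -> follow REQUIRED_SEQUENCE
print(combined_upgrade_supported("4.2(1n)"))  # True
```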

  • Intersight Edge Servers: For Intersight edge servers running a CIMC version earlier than 4.0(1a), the Host Upgrade Utility (HUU) is the suggested mechanism for updating the firmware.

SED Notes:

  • For clusters with self-encrypting drives (SED):

    • M6 nodes with HXDP version 5.0(1x) use server firmware version 4.2(1i) only.

    • M6 nodes with HXDP version 5.0(2a) use server firmware version 4.2(1m) or later.

    • M5/M6 nodes with HXDP version 5.0(2b) use server firmware version 4.2(3d) or later.

  • The following UCS Server Firmware versions are not supported on clusters with self-encrypting drives (SED):

    • M4/M5: 4.1(2a), 4.1(2c), 4.1(3e), 4.1(3f), 4.1(3h), 4.1(3i), 4.2(1f), 4.1(3j), 4.2(1i), 4.2(1m), 4.2(1n). For more information, see CSCvv69704.

    • HX240 M6: 4.2(3b). For more information, see CSCwe56797 and CSCwe33804.
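
The SED rules above reduce to a small blocklist plus per-release minimums. A hedged sketch (the data is copied from the notes; the structure and names are illustrative):

```python
# Minimum qualified server firmware for SED clusters, by HXDP release and
# node generation, per the notes above ("only" vs. "or later" as commented).
SED_MIN_SERVER_FW = {
    ("5.0(1x)", "M6"): "4.2(1i)",  # this exact version only
    ("5.0(2a)", "M6"): "4.2(1m)",  # or later
    ("5.0(2b)", "M5"): "4.2(3d)",  # or later
    ("5.0(2b)", "M6"): "4.2(3d)",  # or later
}

# UCS server firmware that must NOT be used with SED drives (CSCvv69704).
SED_BLOCKED_M4_M5 = {"4.1(2a)", "4.1(2c)", "4.1(3e)", "4.1(3f)", "4.1(3h)",
                     "4.1(3i)", "4.2(1f)", "4.1(3j)", "4.2(1i)", "4.2(1m)",
                     "4.2(1n)"}

def sed_firmware_ok(generation: str, server_fw: str) -> bool:
    """Reject firmware the SED notes call out as unsupported."""
    if generation in ("M4", "M5") and server_fw in SED_BLOCKED_M4_M5:
        return False
    if generation == "M6" and server_fw == "4.2(3b)":  # HX240 M6, CSCwe56797
        return False
    return True

print(sed_firmware_ok("M5", "4.2(1n)"))  # False
print(sed_firmware_ok("M6", "4.2(3d)"))  # True
```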

M6 Specific Notes:

  • If you are using PCIE-Offload cards with M6 servers, do not use server firmware 4.2(1n) or later.

  • HX225 and HX245 M6 AMD nodes require server firmware version 4.2(1n) or later.

General Notes:

The HX components (Cisco HX Data Platform Installer, Cisco HX Data Platform, and Cisco UCS firmware) are installed on different servers. Verify that each component on each server used with and within an HX Storage Cluster is compatible.

  • Verify that the preconfigured HX servers have the same version of Cisco UCS server firmware installed. If the Cisco UCS Fabric Interconnects (FI) firmware versions are different, see the Cisco HyperFlex Systems Upgrade Guide for steps to align the firmware versions.

    • M5: For new hybrid or All Flash (Cisco HyperFlex HX220C-M5SX, HX240C-M5SX, HXAF220C-M5SX, HXAF240C-M5SX) deployments, verify that the qualified UCS firmware version is installed.

    • To reinstall an HX server, download supported and compatible versions of the software. See the Cisco HyperFlex Systems Installation Guide for VMware ESXi, Release 5.0 for the requirements and steps.

HyperFlex Edge/DC-No-FI and Firmware Compatibility Matrix for 5.0(x) Deployments

Cisco HX Data Platform, Release 5.0(x) based Deployments

Confirm the component firmware on the server meets the minimum versions listed in the following tables.


Important


HyperFlex Edge does not support Cisco IMC versions 4.0(4a), 4.0(4b), 4.0(4c), 4.0(4d), and 4.0(4e).
Table 4. HX220c M4 / HXAF220c M4 Cluster

Qualified Firmware Version - HXDP 5.0(x)* (*review all notes at the beginning of this section):

  • Host Upgrade Utility (HUU) Version: 4.0(2h)¹², 4.1(2g), 4.1(2h)

    To download the desired HUU version, go to Download Software and click the UCS Server Firmware link.

Note: M4 is not supported with DC-No-FI.

12 DC-No-FI is not supported on 4.0(x).
Table 5. HX220C-M5SX / HXAF220C-M5SX, HX240C-M5SX / HXAF240C-M5SX (Full Depth), HX220C M5 / HXAF220C M5, HXAF220C-M5SN, and HXAF240C-M5SN Clusters

Qualified Firmware Version - HXDP 5.0(x)* (*review all notes at the beginning of this section):

  • Host Upgrade Utility (HUU) Version: 4.1(3f), 4.1(3h), 4.1(3i), 4.2(2g), 4.2(3d), 4.2(3e), 4.2(3g), 4.2(3h), and 4.2(3j)

    To download the desired HUU version, go to Download Software for 220 or Download Software for 240 and click the UCS Server Firmware link.

Note: All NVMe nodes are not supported for HX Edge deployments.

Table 6. HX220C-M6S / HX240C-M6SX / HXAF240C-M6SX / HXAF220C-M6S / HXAF220C-M6SN / HXAF240C-M6SN / HXAF245C-M6SX / HX245C-M6SX / HXAF225C-M6SX / HX225C-M6SX / HXAF245C-M6SN / HXAF225C-M6SN Cluster

Qualified Firmware Version - HXDP 5.0(x)* (*review all notes at the beginning of this section):

  • Host Upgrade Utility (HUU) Version: 4.1(3f), 4.1(3h), 4.1(3i), 4.2(1i), 4.2(2g), 4.2(3d), 4.2(3e), 4.2(3g), 4.2(3h), and 4.2(3j)

    To download the desired HUU version, go to Download Software for 220 or Download Software for 240 and click the UCS Server Firmware link.

Note: All NVMe nodes are not supported for HX Edge deployments.

Note: HX Edge HX225 and HX245 M6 AMD nodes require server firmware version 4.2(1i) or later.

HX Data Platform Software Versions for HyperFlex Witness Node for Stretched Cluster - 5.0(x) Releases

Table 7. HX Data Platform Software Versions for HyperFlex Witness Node for Stretched Cluster

  • HyperFlex Release 5.0(x): Witness Node Version 1.1.3

Software Requirements for VMware ESXi - 5.0(x) Releases

The software requirements include verification that you are using compatible versions of Cisco HyperFlex Systems (HX) components and VMware vSphere, VMware vCenter, and VMware ESXi.

  • Verify that all HX servers have a compatible version of vSphere preinstalled.

  • Verify that the vCenter version is the same as or later than the ESXi version.

  • Verify that the vCenter and ESXi versions are compatible by consulting the VMware Product Interoperability Matrix. Newer vCenter versions may be used with older ESXi versions, as long as both the ESXi and vCenter versions are supported in the tables below (see the sketch after this list).

  • Verify that you have a vCenter administrator account with root-level privileges and the associated password.

  • M6 nodes with HX-PCIE-OFFLOAD-1 installation or expansion require ESXi 7.0 Update 3d or earlier.
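
A minimal sketch of these checks, assuming the simple "major.minor Ux" version strings used in Tables 8 and 9; the VMware Product Interoperability Matrix remains the authoritative source:

```python
# Illustrative check: both versions must be qualified for the HXDP release
# and server generation, and vCenter must be the same as or newer than ESXi.
# Only HXDP 5.0(2g) is populated here; extend from Tables 8 and 9.

QUALIFIED_VSPHERE = {
    ("5.0(2g)", "M4/M5"): ["6.5 U3", "6.7 U3", "7.0 U2", "7.0 U3"],
    ("5.0(2g)", "M6"): ["6.7 U3", "7.0 U2", "7.0 U3"],
}

def as_tuple(version: str) -> tuple:
    release, update = version.split()          # "6.7 U3" -> "6.7", "U3"
    major, minor = release.split(".")
    return (int(major), int(minor), int(update.lstrip("U")))

def vsphere_ok(hxdp: str, generation: str, esxi: str, vcenter: str) -> bool:
    qualified = QUALIFIED_VSPHERE.get((hxdp, generation), [])
    return (esxi in qualified and vcenter in qualified
            and as_tuple(vcenter) >= as_tuple(esxi))

print(vsphere_ok("5.0(2g)", "M6", "6.7 U3", "7.0 U3"))  # True
print(vsphere_ok("5.0(2g)", "M6", "7.0 U3", "6.7 U3"))  # False (vCenter older)
```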

The following tables apply to these VMware vSphere editions: Enterprise, Enterprise Plus, Standard, Essentials Plus, and ROBO. All other licensed editions of VMware vSphere, including the Essentials edition, are not supported.

Table 8. Software Requirements for VMware ESXi

5.0(2a), 5.0(2b), 5.0(2c), 5.0(2d), 5.0(2e), and 5.0(2g)
  VMware ESXi versions for M4/M5 servers: 6.5 U3, 6.7 U3, 7.0 U2, 7.0 U3
  VMware ESXi versions for M6 servers: 6.7 U3, 7.0 U2, 7.0 U3

5.0(1x)
  VMware ESXi versions for M4/M5 servers: 6.5 U3, 6.7 U3, 7.0 U2
  VMware ESXi versions for M6 servers: 6.7 U3, 7.0 U2

Table 9. Software Requirements for VMware vCenter

All 5.0(x) releases (5.0(1x) through 5.0(2g))
  VMware vCenter versions for M4/M5 servers: 6.5 U3, 6.7 U3, 7.0 U2, 7.0 U3
  VMware vCenter versions for M6 servers: 6.7 U3, 7.0 U2, 7.0 U3


Note


For vSphere 6.x users: VMware ended general support for vSphere 6.5 and 6.7 on October 15, 2022. Cisco strongly recommends upgrading as soon as possible to a supported VMware vSphere 7.x release, following Cisco's recommendations as outlined in General Recommendation for New and Existing Deployments.


Software Requirements for Microsoft Hyper-V - 5.0(x) Releases

The software requirements include verification that you are using compatible versions of Cisco HyperFlex Systems (HX) components and Microsoft Hyper-V (Hyper-V) components.

HyperFlex Software versions

The HX components (Cisco HX Data Platform Installer, Cisco HX Data Platform, and Cisco UCS firmware) are installed on different servers. Verify that each component on each server used with and within the HX Storage Cluster is compatible. For detailed information on installation requirements and steps, see the Cisco HyperFlex Systems Installation Guide for Microsoft Hyper-V.

Table 10. Qualified Server Firmware for M5 Servers on Hyper-V

  • HyperFlex Release 5.0(x): M5 Qualified Server Firmware 4.2(3i)

Table 11. Supported Microsoft Software Versions

Windows Operating System (Windows OS):

  • Windows Server 2016 Datacenter, Core and Desktop Experience.

    Note: For Windows Server 2016 Datacenter Core and Desktop Experience, the Windows Server 2016 ISO image should be Update Build Revision (UBR) 1884 at a minimum.

  • Windows Server 2019 Datacenter, Desktop Experience, supported starting with HXDP 4.0(1a).

    Note: For Windows Server 2019 Desktop Experience, the Windows Server 2019 ISO image should be Update Build Revision (UBR) 107 at a minimum.

  • Windows Server 2019 Datacenter Core is not currently supported.

  • Also currently not supported: OEM-activated ISOs and retail ISOs, earlier versions of Windows Server such as Windows Server 2012 R2, and non-English versions of the ISO.

Active Directory: A Windows 2012 or later domain and forest functional level.
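
The UBR minimums above can be verified on an installed host by reading standard Windows registry values. A hedged sketch (Windows-only; the build-to-UBR mapping is copied from the notes, and registry access uses the Python standard library):

```python
# Read the Windows build and Update Build Revision (UBR) from the registry
# and compare against the minimums above: UBR 1884 for Server 2016 (build
# 14393) and UBR 107 for Server 2019 (build 17763). Run on the host itself.

import winreg

MIN_UBR = {"14393": 1884, "17763": 107}  # build number -> minimum UBR

def ubr_meets_minimum() -> bool:
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"SOFTWARE\Microsoft\Windows NT\CurrentVersion")
    build, _ = winreg.QueryValueEx(key, "CurrentBuild")
    ubr, _ = winreg.QueryValueEx(key, "UBR")
    return ubr >= MIN_UBR.get(build, 0)

if __name__ == "__main__":
    print(ubr_meets_minimum())
```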

Supported Microsoft License Editions

The Microsoft Windows Server version installed on one or more HyperFlex hosts must be licensed per the Microsoft licensing requirements listed on Microsoft Licensing.

Browser Recommendations - 5.0(x) Releases

Use one of the following browsers to run the listed HyperFlex components. These browsers have been tested and approved. Other browsers might work, but full functionality has not been tested and confirmed.

Table 12. Supported Browsers

Microsoft Internet Explorer
  Cisco Intersight: NA; Cisco UCS Manager: 11 or later; HX Data Platform Installer: 11 or later; HX Connect: 11 or later

Google Chrome
  Cisco Intersight: 62 or later; Cisco UCS Manager: 57 or later; HX Data Platform Installer: 70 or later; HX Connect: 70 or later

Mozilla Firefox
  Cisco Intersight: 57 or later; Cisco UCS Manager: 45 or later; HX Data Platform Installer: 60 or later; HX Connect: 60 or later

Apple Safari
  Cisco Intersight: 10 or later; Cisco UCS Manager: 9 or later; HX Data Platform Installer: NA; HX Connect: NA

Opera
  Cisco Intersight: NA; Cisco UCS Manager: 35 or later; HX Data Platform Installer: NA; HX Connect: NA
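
Table 12 is a simple minimum-version matrix, so it can also be expressed as a lookup. A hedged sketch (structure and names illustrative):

```python
# Minimum tested browser major version per HyperFlex component, from Table 12.
# None means the browser is not tested/supported ("NA") for that component.

MIN_BROWSER = {
    "Google Chrome": {"Cisco Intersight": 62, "Cisco UCS Manager": 57,
                      "HX Data Platform Installer": 70, "HX Connect": 70},
    "Mozilla Firefox": {"Cisco Intersight": 57, "Cisco UCS Manager": 45,
                        "HX Data Platform Installer": 60, "HX Connect": 60},
    "Apple Safari": {"Cisco Intersight": 10, "Cisco UCS Manager": 9,
                     "HX Data Platform Installer": None, "HX Connect": None},
    "Opera": {"Cisco Intersight": None, "Cisco UCS Manager": 35,
              "HX Data Platform Installer": None, "HX Connect": None},
}

def browser_ok(browser: str, component: str, major_version: int) -> bool:
    floor = MIN_BROWSER.get(browser, {}).get(component)
    return floor is not None and major_version >= floor

print(browser_ok("Google Chrome", "HX Connect", 96))          # True
print(browser_ok("Mozilla Firefox", "Cisco Intersight", 50))  # False
```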

Notes

  • Cisco HyperFlex Connect:

    The minimum recommended resolution is 1024 x 768.

  • Cisco HX Data Platform Plug-In:

    The Cisco HX Data Platform Plug-In runs in vSphere. For VMware Host Client System browser requirements, see the VMware documentation.

    The Cisco HX Data Platform Plug-In is not displayed in the vCenter HTML client. You must use the vCenter flash client.

  • Cisco UCS Manager:

    The browser must support the following:

    • Java Runtime Environment 1.6 or later.

    • Adobe Flash Player 10 or later is required for some features.

    For the latest browser information, see the Cisco UCS Manager Getting Started Guide for your deployment.