System Requirements and Pre-install Worksheet

About this chapter

The Cisco HyperFlex Edge product supports three networking topologies: single 1GE switch, dual 1GE switch, and 10/25GE switch (single or dual), depending on requirements and the available switching hardware. This chapter describes the specific requirements for each topology, as well as the common network requirements that apply to all three topologies.

Single Switch Configuration

The single switch configuration provides a simple topology that requires only a single switch and two 1GE ports per server. Link and switch redundancy are not provided. Access ports and trunk ports are the two supported network port configurations.

Network Topology

Upstream Network Requirements

  • A managed switch with VLAN capability

  • Six physical 1GE ports for three HyperFlex nodes

  • Jumbo frames do not need to be configured

  • Portfast or portfast trunk should be configured on all ports to ensure uninterrupted access to Cisco Integrated Management Controller (CIMC)

Virtual Network Requirements

The recommended configuration for each ESXi host calls for the following networks to be separated:

  • Management traffic network

  • Data traffic network

  • vMotion network

  • VM network

The minimum network configuration requires at least two separate networks:

  • Management network (includes vMotion and VM network)

  • Data network (for storage traffic)

Two vSwitches each carrying different networks are required:

  • vswitch-hx-inband-mgmt—ESXi management (vmk0), storage controller management, vMotion (vmk2), VM guest portgroups

  • vswitch-hx-storage-data—HyperFlex storage data network, Hypervisor storage interface (vmk1)


Note

In some HyperFlex Edge deployments that use the single switch configuration, it is normal for the storage data vSwitch and its associated portgroup to show a failover order with only a standby adapter populated. The missing active adapter does not cause any functional issue with the cluster, and we recommend leaving the failover order as configured by the installation process.
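
The vSwitch and VMkernel layout described above can be spot-checked after installation. The following is a minimal sketch, assuming Python 3 and the esxcli command are available in the ESXi shell (or that the commands are run over SSH); it is an illustrative verification aid, not part of the installer.

    # Minimal sketch: confirm the HX vSwitches and VMkernel ports created by the installer.
    # Assumes it runs in the ESXi shell (or over SSH) where the esxcli command is available.
    import subprocess

    def esxcli(*args):
        """Run an esxcli command and return its text output."""
        return subprocess.run(["esxcli", *args], capture_output=True, text=True).stdout

    # Expected vSwitches for the single switch topology (see the list above).
    for name in ("vswitch-hx-inband-mgmt", "vswitch-hx-storage-data"):
        present = name in esxcli("network", "vswitch", "standard", "list")
        print(f"{name}: {'found' if present else 'MISSING'}")

    # vmk0 = ESXi management, vmk1 = storage data, vmk2 = vMotion in this topology.
    print(esxcli("network", "ip", "interface", "list"))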

Port Requirements

Two 1GE ports are required per server:

  • Port 1—management (ESXi and CIMC), vMotion traffic, and VM guest traffic

  • Port 2—HyperFlex storage traffic

  • There are two supported network port configurations: access ports or trunk ports.

  • Spanning tree portfast (access ports) or portfast trunk (trunk ports) must be enabled for all network ports connected to HyperFlex servers.

    • Failure to configure portfast causes intermittent CIMC disconnects during ESXi bootup and longer than necessary network re-convergence during physical link failure.

  • To decide if your deployment will use access ports or trunk ports, see the following section "About Access and Trunk Ports".

Physical network topology guidance:

  • Cable both integrated LOM ports to the same ToR switch.

  • If desired, cable the dedicated CIMC port to the same switch or to an out-of-band management switch.

  • Do not use the 10GE ports on the VIC.

About Access and Trunk Ports

Ethernet interfaces can be configured either as access ports or trunk ports, as follows:

  • An access port can have only one VLAN configured on the interface; it can carry traffic for only one VLAN.

  • A trunk port can have one or more VLANs configured on the interface; it can carry traffic for several VLANs simultaneously.

The following comparison summarizes the differences between access and trunk ports. You can use these details to determine which port type to use for your deployment.


Important

Trunk ports are assumed in this guide and are highly recommended for your deployment.

Trunk Ports

  • Require more setup and definition of VLAN tags within CIMC, ESXi, and the HX Data Platform Installer.

  • Provide the ability to logically separate management, vMotion, and VM guest traffic on separate subnets.

  • Provide flexibility to bring in additional L2 networks to ESXi.

Access Ports

  • Provide a simpler deployment process than trunk ports.

  • Require that management, vMotion, and VM guest traffic share a single subnet.

  • Require a managed switch to configure ports 1 and 2 on discrete VLANs; storage traffic must use a dedicated VLAN, with no exceptions.


Note

Both trunk and access ports require a managed switch to configure ports 1 and 2 on discrete VLANs.

See Sample Network Configurations for more details.

Dual Switch Configuration

The dual switch configuration provides a slightly more complex topology with full redundancy that protects against switch failure, link and port failure, and LOM/PCIe NIC hardware failure. It requires two switches, which may be standalone or stacked, plus four 1GE ports and one additional PCIe NIC per server. Trunk ports are the only supported network port configuration.

Network Topology

Upstream Network Requirements

  • Two managed switches with VLAN capability (standalone or stacked)

  • 12 physical 1GE ports for three HyperFlex nodes

    All 12 ports must be configured as trunk ports and allow all applicable VLANs.

  • Jumbo frames do not need to be configured

  • Portfast trunk should be configured on all ports to ensure uninterrupted access to Cisco Integrated Management Controller (CIMC)

Virtual Network Requirements

The recommended configuration for each ESXi host calls for the following networks to be separated:

  • Management traffic network

  • Data traffic network

  • vMotion network

  • VM network

The minimum network configuration requires at least two separate networks:

  • Management network (includes vMotion and VM network)

  • Data network (for storage traffic)

Two vSwitches each carrying different networks are required:

  • vswitch-hx-inband-mgmt—ESXi management (vmk0), storage controller management, VM guest portgroups

  • vswitch-hx-storage-data—HyperFlex storage data network, Hypervisor storage interface (vmk1), vMotion (vmk2)

Failover order:

  • vswitch-hx-inband-mgmt—entire vSwitch is set for active/standby. All services by default consume a single uplink port and fail over when needed. Failover order for VM portgroups may be overridden as needed.

  • vswitch-hx-storage-data—HyperFlex storage data network and vmk1 are set to the same active/standby order. The vMotion VMkernel port is set to use the opposite order when configured using the post_install script.
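
The failover order above can be confirmed after installation. Below is a minimal sketch, assuming Python 3 and esxcli are available on the ESXi host; the exact vmnic numbering of the LOM and PCIe NIC uplinks depends on how the adapters enumerate on your hardware.

    # Minimal sketch: print the active/standby uplink order for the two HX vSwitches.
    # Run on the ESXi host (or over SSH); assumes the esxcli command is available.
    import subprocess

    for vswitch in ("vswitch-hx-inband-mgmt", "vswitch-hx-storage-data"):
        result = subprocess.run(
            ["esxcli", "network", "vswitch", "standard", "policy", "failover", "get",
             "--vswitch-name", vswitch],
            capture_output=True, text=True,
        )
        print(f"--- {vswitch} ---")
        print(result.stdout)  # Active Adapters / Standby Adapters for the vSwitch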

Port Requirements

Four 1GE ports are required per server:

  • Port 1—management (ESXi, HyperFlex controller, and CIMC) and VM guest traffic

  • Port 2—HyperFlex storage traffic (and vMotion standby)

  • Port 3—VM guest traffic (and management standby)

  • Port 4—vMotion traffic (and storage standby)

  • Two ports using LOM and two ports from a PCIe add-in NIC:

    • 1 LOM and 1 PCIe port serve management and VM guest traffic in a redundant configuration

    • 1 LOM and 1 PCIe port serve storage data and vMotion traffic in a redundant and load balanced configuration

  • The Intel i350 quad port NIC (UCSC-PCIE-IRJ45) must be installed for this topology:

    • The NIC may be selected at ordering time and shipped preinstalled from the factory.

    • The NIC may also be field-installed if ordered separately. Either riser #1 or #2 may be used, although riser #1 is recommended.

  • Only trunk ports are supported in the dual switch configuration.

  • Spanning tree portfast trunk must be enabled for all network ports connected to HyperFlex servers.

    • Failure to configure portfast causes intermittent CIMC disconnects during ESXi bootup and longer than necessary network re-convergence during physical link failure.

Physical network topology guidance:


Warning

Proper cabling is important to ensure full network redundancy.


  • Cable both integrated LOM ports to the same ToR switch.

  • Cable any two out of four PCIe NIC ports to the same ToR switch. Do not connect more than two PCIe NIC ports prior to installation. Post cluster installation, you may freely use the remaining ports.

  • Redundancy occurs at the vSwitch level and includes one uplink port from the onboard LOM and one uplink port from the PCIe NIC for each vSwitch.

  • If desired, cable the dedicated CIMC port to the same switch or to an out-of-band management switch.

  • Do not use the 10GE ports on the VIC.

10/25GE Switch Configuration

The 10/25GE switch configuration provides a fully redundant topology that protects against switch failure (when using dual or stacked switches), as well as link and port failures. The 10/25GE switch may be standalone or stacked. In addition, this configuration requires the following:

  • Two 10/25GE ports and a VIC 1387 with 2x QSAs per server

  • Use of trunk mode

  • Deployment using On-premises OVA Installer and not through Intersight

Network Topology

Upstream Network Requirements

  • Two 10/25Gb ports are required per server using a VIC 1387

    • Each physical VIC port is logically divided into 4 vNICs as seen by the hypervisor

      Only 10/25Gb speeds are supported (40Gb is not supported)

    • M5 servers require a VIC 1387 and two QSAs to reach 10/25Gb speeds

    • M4 servers require a VIC 1227 to reach 10/25Gb speeds

  • Additional NIC cards

    • Additional third-party NIC cards may be installed in the HX Edge nodes as needed

    • All non-VIC interfaces must be shut down until installation is complete

    • Only a single VIC is supported per HX Edge node

  • Only trunk ports are supported in 10/25GE switch configurations

  • Spanning tree portfast trunk should be enabled for all network ports connected to HX servers.


    Note

    Failure to configure portfast causes longer than necessary network re-convergence during physical link failure.

Virtual Network Requirements

Four vSwitches are required:

  • vswitch-hx-inband-mgmt—ESXi management (vmk0), storage controller management

  • vswitch-hx-storage-data—HyperFlex storage data network, Hypervisor storage interface (vmk1)

  • vmotion—vMotion (vmk2)

  • vswitch-hx-vm-network—VM guest portgroups


    Note

    Because the Cisco VIC carves multiple vNICs from the same physical port, guest VM traffic configured on vswitch-hx-vm-network cannot communicate at L2 with interfaces or services running on the same host. It is recommended to either (a) use a separate VLAN and perform L3 routing, or (b) place any guest VMs that need access to management interfaces on the vswitch-hx-inband-mgmt vSwitch. In general, guest VMs should not be placed on any of the HyperFlex configured vSwitches except the vm-network vSwitch. An example use case is running vCenter on one of the nodes when it requires connectivity to manage the ESXi host on which it runs. In this case, use one of the recommendations above to ensure uninterrupted connectivity.

Failover order:

  • vswitch-hx-inband-mgmt—entire vSwitch is set for active/standby. All services by default consume a single uplink port and fail over when needed. Failover order for VM portgroups may be overridden as needed.

  • vswitch-hx-storage-data—HyperFlex storage data network and vmk1 are set to the same active/standby order.

  • vmotion—The vMotion VMkernel port (vmk2) is configured when using the post_install script. Failover order is set for active/standby.

  • vswitch-hx-vm-network—vSwitch is set for active/active. Individual portgroups can be overridden as needed.

Port Requirements

Physical network topology guidance:

  • For M5 servers, ensure a Cisco 40G to 10G QSA is installed in both VIC ports.

  • If using a single 10GE switch, cable both 10GE ports to the same switch.

  • If using dual 10GE switches or stacked switches, cable one 10GE port to each switch, ensuring that port 1 from all nodes connects to the same switch and port 2 from all nodes connects to the other switch.

  • Cable the dedicated CIMC port to the same switch or to an out-of-band management switch.

Common Network Requirements

Before you begin installation, confirm that your environment meets the following specific software and hardware requirements.


Attention

On HyperFlex M5 nodes that use a 1GE topology, manually configure the port speed to 1000/full on all switch ports.


VLAN Requirements

The following VLAN requirements apply to the single switch and dual switch network topologies. Use a separate subnet and VLAN for each of the following networks, and record each VLAN ID in your worksheet:

  • VLAN for VMware ESXi and Cisco HyperFlex management: used for management traffic among ESXi, HyperFlex, and VMware vCenter; must be routable. Note: this VLAN must have access to Intersight.

  • CIMC VLAN: can be the same as or different from the management VLAN. Note: this VLAN must have access to Intersight.

  • VLAN for HX storage traffic: used for storage traffic; requires only L2 connectivity.

  • VLAN for VMware vMotion: used for vMotion traffic, if applicable. Note: can be the same as the management VLAN, but this is not recommended.

  • VLAN(s) for VM network(s): used for VM/application networks. Note: can be multiple VLANs, separated by VM portgroups in ESXi.

Inband versus Out-of-Band CIMC

This guide assumes the use of inband CIMC using Shared LOM Ext mode. As a result, CIMC management traffic is multiplexed with vSphere traffic onto the LOM ports, reducing cabling, switch ports, and additional configuration.

Customers may opt to use the dedicated CIMC management port for out-of-band use. Users should account for this third 1GE port when planning their upstream switch configuration. Additionally, the user should set the CIMC to dedicated mode during CIMC configuration. Follow Cisco UCS C-series documentation to configure the CIMC in dedicated NIC mode. Under NIC properties, set the NIC mode to dedicated before saving the configuration.

In either case, CIMC must have network access to Intersight.

Supported vCenter Topologies

Use the following information to determine which vCenter topologies are supported.

  • Single vCenter

    Description: A virtual or physical vCenter that runs on an external server and is local to the site. A management rack mount server can be used for this purpose.

    Recommendation: Highly recommended.

  • Centralized vCenter

    Description: A vCenter that manages multiple sites across a WAN.

    Recommendation: Highly recommended.

  • Nested vCenter

    Description: A vCenter that runs within the cluster you plan to deploy.

    Recommendation: Installation of a HyperFlex Edge cluster may be performed without a vCenter. Alternatively, you may deploy with an external vCenter and migrate it into the cluster.

For the latest information, see the How to Deploy vCenter on the HX Data Platform tech note.

Customer Deployment Information

A typical three-node HyperFlex Edge deployment requires 13 IP addresses – 10 IP addresses for the management network and 3 IP addresses for the vMotion network.
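
The count breaks down as three addresses per node on the management network (CIMC, ESXi hypervisor management, and storage controller management) plus one storage cluster management IP, and one vMotion address per node. A minimal worked example of that arithmetic, for planning purposes only:

    # Worked example: IP addresses required for an N-node HyperFlex Edge deployment.
    nodes = 3

    cimc_ips = nodes          # one CIMC management IP per server
    esxi_mgmt_ips = nodes     # one hypervisor (ESXi) management IP per server
    scvm_mgmt_ips = nodes     # one storage controller VM management IP per server
    cluster_mgmt_ips = 1      # one storage cluster management IP

    management_ips = cimc_ips + esxi_mgmt_ips + scvm_mgmt_ips + cluster_mgmt_ips
    vmotion_ips = nodes

    print(management_ips, vmotion_ips, management_ips + vmotion_ips)  # 10 3 13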

CIMC Management IP Addresses

  • Server 1:

  • Server 2:

  • Server 3:

  • Subnet mask:

  • Gateway:

  • DNS Server:

  • NTP Server:

Note: NTP configuration on CIMC is required for proper Intersight connectivity.

Network IP Addresses


Note

By default, the HX Installer automatically assigns IP addresses in the 169.254.1.X range to the Hypervisor Data Network and the Storage Controller Data Network.



Note

Spanning Tree portfast trunk (trunk ports) should be enabled for all network ports.

Failure to configure portfast may cause intermittent disconnects during ESXi bootup and longer than necessary network re-convergence during physical link failure.


Management Network IP Addresses (must be routable)

Hypervisor Management Network:

  • Server 1:

  • Server 2:

  • Server 3:

Storage Controller Management Network:

  • Server 1:

  • Server 2:

  • Server 3:

Cluster management settings:

  • Storage Cluster Management IP address:

  • Subnet mask:

  • Default gateway:

VMware vMotion Network IP Addresses

For vMotion services, you may configure a unique VMkernel port or, if necessary, reuse vmk0 if you are using the management VLAN for vMotion (not recommended).

vMotion Network IP Addresses (configured using the post_install script):

  • Server 1:

  • Server 2:

  • Server 3:

  • Subnet mask:

  • Gateway:

Port Requirements


Important

Ensure that the following port requirements are met in addition to the prerequisites listed for Intersight Connectivity.

If your network is behind a firewall, then in addition to the standard port requirements, open the ports that VMware recommends for VMware ESXi and VMware vCenter.

  • CIP-M is for the cluster management IP.

  • SCVM is the management IP for the controller VM.

  • ESXi is the management IP for the hypervisor.

The comprehensive list of ports required for component communication for the HyperFlex solution is located in Appendix A of the HX Data Platform Security Hardening Guide.
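
Before deployment, basic TCP reachability to the management endpoints on the documented ports can be tested from a host on the management network. This is a minimal sketch, assuming Python 3; the host names are illustrative placeholders to be replaced with values from your worksheet, and the port list should be expanded from the Security Hardening Guide.

    # Minimal sketch: test TCP reachability to management endpoints on required ports.
    # Host names and the port list below are illustrative placeholders; substitute your
    # worksheet values and the full list from the HX Data Platform Security Hardening Guide.
    import socket

    checks = [
        ("vcenter.example.com", 443),   # vCenter SDK (secure communication)
        ("vcenter.example.com", 80),    # vCenter reverse HTTP proxy
        ("esxi-1.example.com", 443),    # ESXi management
        ("svc.intersight.com", 443),    # Intersight device connector target
    ]

    for host, port in checks:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as err:
            print(f"{host}:{port} NOT reachable ({err})")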


Tip

If you do not have standard configurations and need different port settings, refer to Table C-5 Port Literal Values for customizing your environment.


Hypervisor Credentials

  • root username: root

  • root password: Cisco123

Important: Deployments based on Cisco HX Data Platform Release 3.0 and higher require a new custom password if you have not changed the default factory password prior to starting installation.

VMware vCenter Configuration


Note

HyperFlex communicates with vCenter through standard ports. Port 80 is used for reverse HTTP proxy and may be changed with TAC assistance. Port 443 is used for secure communication to the vCenter SDK and may not be changed.


  • vCenter admin username (for example, username@domain):

  • vCenter admin password:

  • vCenter data center name:

  • VMware vSphere compute cluster and storage cluster name:

Network Services


Note

  • DNS and NTP servers should reside outside of the HX storage cluster.

  • Use an internally hosted NTP server to provide a reliable time source.

  • All DNS servers should be pre-configured with forward (A) and reverse (PTR) DNS records for each ESXi host before starting deployment. When DNS is configured correctly in advance, the ESXi hosts are added to vCenter by FQDN rather than by IP address. (A pre-check sketch follows this note.)

    Skipping this step results in the hosts being added to the vCenter inventory by IP address and requires users to change to FQDN later using the following procedure: Changing Node Identification Form in vCenter Cluster from IP to FQDN.
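
The forward and reverse record requirement in the note above can be verified before launching the installer. The following is a minimal sketch, assuming Python 3 on a machine that uses the same DNS servers planned for the deployment; the ESXi host names are illustrative placeholders.

    # Minimal sketch: confirm forward (A) and reverse (PTR) DNS records for each ESXi host.
    # Host names below are illustrative placeholders; use your actual ESXi FQDNs.
    import socket

    esxi_hosts = [
        "hx-edge-esxi-1.example.com",
        "hx-edge-esxi-2.example.com",
        "hx-edge-esxi-3.example.com",
    ]

    for fqdn in esxi_hosts:
        try:
            ip = socket.gethostbyname(fqdn)                # forward (A) lookup
            reverse_name = socket.gethostbyaddr(ip)[0]     # reverse (PTR) lookup
            match = "OK" if reverse_name.lower() == fqdn.lower() else "MISMATCH"
            print(f"{fqdn} -> {ip} -> {reverse_name} [{match}]")
        except OSError as err:
            print(f"{fqdn}: lookup failed ({err})")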


  • DNS Servers: <Primary DNS Server IP address, Secondary DNS Server IP address, …>

  • NTP servers: <Primary NTP Server IP address, Secondary NTP Server IP address, …>

  • Time zone (for example, US/Eastern or US/Pacific):

Connected Services

  • Enable Connected Services (recommended): Yes or No (required)

  • Email for service request notifications (for example, name@company.com):

Supported VMware vSphere Versions and Editions

Confirm that a compatible version of vSphere is preinstalled on all HyperFlex servers. For the current list, see the Software Requirements for VMware ESXi chapter in the Cisco HyperFlex Recommended Software Release and Requirements Guide.

Physical Requirements

HX220c nodes are 1 RU each. For a three-node cluster, 3 RU are required.

Reinstallation

To perform reinstallation of a HyperFlex Edge System, contact Cisco TAC.

HyperFlex Edge Compatibility and Software Requirements: HyperFlex Release 3.5(x)

For details about compatibility and software requirements for Cisco HX Release 3.5(x), review the Cisco HX Release 3.5(x) - Software Requirements chapter of the Cisco HyperFlex Recommended Software Release and Requirements Guide.

Intersight Connectivity

Consider the following prerequisites pertaining to Intersight connectivity:

  • Before installing the HX cluster on a set of HX servers, make sure that the device connector on the corresponding Cisco IMC instance is properly configured to connect to Cisco Intersight and claimed.

  • Communication between CIMC and vCenter via ports 80, 443, and 8089 is required during the installation phase.

  • All device connectors must properly resolve svc.intersight.com and allow outbound initiated HTTPS connections on port 443. The current version of the HX Installer supports the use of an HTTP proxy. (A reachability check sketch follows this list.)

  • All controller VM management interfaces must properly resolve svc.intersight.com and allow outbound initiated HTTPS connections on port 443. The current version of HX Installer supports the use of an HTTP proxy if direct Internet connectivity is unavailable.

  • IP connectivity (L2 or L3) is required from the CIMC management IP on each server to all of the following: ESXi management interfaces, HyperFlex controller VM management interfaces, and the vCenter server. Any firewalls in this path should be configured to allow the necessary ports as outlined in the HyperFlex Hardening Guide.

  • Starting with HXDP release 3.5(2a), the Intersight installer does not require a factory installed controller VM to be present on the HyperFlex servers.

    When redeploying HyperFlex on the same servers, new controller VMs must be downloaded from Intersight into all ESXi hosts. This requires each ESXi host to be able to resolve svc.intersight.com and allow outbound initiated HTTPS connections on port 443. Use of a proxy server for controller VM downloads is supported and can be configured in the HyperFlex Cluster Profile if desired.

  • Post-cluster deployment, the new HX cluster is automatically claimed in Intersight for ongoing management.
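
The Intersight name resolution and outbound HTTPS requirements above can be checked before claiming devices. A minimal sketch, assuming Python 3 on a host in the CIMC or controller VM management network; it verifies only DNS resolution and an HTTPS connection to svc.intersight.com on port 443 (urllib automatically uses an HTTP proxy configured via the https_proxy/HTTPS_PROXY environment variables, mirroring the proxy support noted above).

    # Minimal sketch: confirm DNS resolution and outbound HTTPS (port 443) to svc.intersight.com.
    import socket
    import urllib.error
    import urllib.request

    target = "svc.intersight.com"

    try:
        ip = socket.gethostbyname(target)                  # DNS resolution check
        print(f"{target} resolves to {ip}")
    except OSError as err:
        raise SystemExit(f"DNS resolution failed: {err}")

    try:
        urllib.request.urlopen(f"https://{target}/", timeout=10)
        print("HTTPS connection on port 443 succeeded")
    except urllib.error.HTTPError as err:
        # Any HTTP status code means the TLS connection itself worked.
        print(f"HTTPS connection succeeded (server returned HTTP {err.code})")
    except urllib.error.URLError as err:
        print(f"HTTPS connection failed: {err.reason}")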