New and Changed Information for this Release

The following table provides an overview of the significant changes to this guide for this current release. The table does not provide an exhaustive list of all changes made to this guide or of all new features in this release.

Table 1. New Features and Changed Behavior in Cisco HX Data Platform, Release 3.5(2a)

Feature | Description | Where Documented
6400 Series Fabric Interconnects Support | The 6400 Series FIs now support connectivity with Cisco UCS VIC 1400 Series, Cisco UCS VIC 1300 Series, and Cisco UCS VIC 1200 Series Adapters. | Cisco UCS VIC 1400 Series and 6400 Series Fabric Interconnects, and Cisco 6454 Series Fabric Interconnects

Cisco VIC Series Adapters and UCS 6400 Series Fabric Interconnects

Cisco UCS VIC 1400 Series and 6400 Series Fabric Interconnects

The Cisco UCS VIC 1400 series is based on 4th generation Cisco ASIC technology and is well-suited for next-generation networks requiring 10/25 Gigabit Ethernet for C-Series and S-Series servers and 10/40 Gigabit Ethernet connectivity for B-Series servers. Additional features such as usNIC, RDMA/RoCEv2, DPDK, NetQueue, and VMQ/VMMQ support low-latency kernel bypass for performance optimization.

The following list summarizes a set of general support guidelines for the VIC 1400 series and the 6400 Series FIs.

  • For Cisco HX Data Platform releases 3.5(2a) and higher, the Cisco UCS VIC 1400 series is supported for both compute and converged nodes.

  • All converged nodes must have the same connectivity speed. For example:

    • Mixed M4/M5 clusters are supported with the VIC 1227 on M4 nodes and the VIC 1457 (in 1x10G mode only) on M5 nodes.

    • A uniform M5 cluster using the VIC 1457 must have it on all nodes; do not mix the VIC 1387 with the VIC 1457.

    • All VIC 1457 ports in a uniform M5 cluster must run at the same speed and use the same number of uplinks; for example, do not combine 10G and 25G, and use the same number of uplinks on every node (a validation sketch follows this list).
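
The uniformity rules above lend themselves to a quick pre-install check. The following is a minimal Python sketch, not a Cisco tool, that flags mixed link speeds, mixed uplink counts, or a mix of VIC 1387 and VIC 1457 across converged nodes; the node fields and example values are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class ConvergedNode:
    name: str
    vic_model: str        # for example "VIC 1457" or "VIC 1227"
    link_speed_gbps: int  # 10 or 25
    uplinks_per_fi: int   # 1 or 2

def check_uniform_connectivity(nodes):
    """Return human-readable violations of the uniformity guidelines above."""
    problems = []
    speeds = {n.link_speed_gbps for n in nodes}
    uplinks = {n.uplinks_per_fi for n in nodes}
    models = {n.vic_model for n in nodes}
    if len(speeds) > 1:
        problems.append(f"Mixed link speeds {sorted(speeds)}; all converged nodes must match")
    if len(uplinks) > 1:
        problems.append(f"Uplink counts differ across nodes: {sorted(uplinks)}")
    if {"VIC 1387", "VIC 1457"} <= models:
        problems.append("VIC 1387 and VIC 1457 are mixed in the same cluster")
    return problems

# Example: the second node violates the same-speed rule.
nodes = [
    ConvergedNode("hx-node-1", "VIC 1457", 25, 2),
    ConvergedNode("hx-node-2", "VIC 1457", 10, 2),
]
for issue in check_uniform_connectivity(nodes):
    print("WARNING:", issue)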

The following table describes the topologies supported with Cisco UCS VIC 1400 Series and UCS Fabric Interconnects 6200, 6300 and 6400 Series:

Table 2. VIC 1400 Series Support for UCS Fabric Interconnects (the Fabric Interconnect columns apply to Cisco UCS VIC 1400 Series Adapters for both B-Series and C-Series)

M5 Cisco UCS VIC 1400 Series Connectivity | 6400 Series | 6300 Series | 6200 Series
1 x 10G | Supported starting Release 3.5(2a) | Not Supported | Supported starting Release 3.5(1a)
2 x 10G | Supported starting Release 3.5(2a) | Not Supported | Supported starting Release 3.5(1a)
1 x 25G | Supported starting Release 3.5(2a) | Not Supported | Not Supported
2 x 25G | Supported starting Release 3.5(2a) | Not Supported | Not Supported

Cisco 6454 Series Fabric Interconnects

The Cisco UCS 6454 Fabric Interconnect is a core part of the Cisco Unified Computing System, providing both network connectivity and management capabilities for the system. The Cisco UCS 6454 offers line-rate, low-latency, lossless 10/25/40/100 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions.

The following table presents an HX modular LAN-on-Motherboard (mLOM) and UCS Fabric Interconnects matrix:

Table 3. UCS Fabric Interconnects Matrix

MLOM VIC | Interfaces | HX-FI-6454 or UCS-FI-6454
HX-MLOM-25Q-04 (VIC 1457) (Note: Requires vSphere 6.5 Update 2 or later.) | 2 or 4 ports, 10-Gbps Ethernet | 10Gbps support, starting with Release 3.5(2a)
HX-MLOM-25Q-04 (VIC 1457) (Note: Requires vSphere 6.5 Update 2 or later.) | 2 or 4 ports, 25-Gbps Ethernet | 25Gbps support, starting with Release 3.5(2a)
HX-MLOM-C40Q-03 (VIC 1387) | 2 ports, 10-Gbps Ethernet (with QSA Adapter) | 10Gbps support, starting with Release 3.5(2a)
HX-MLOM-C40Q-03 (VIC 1387) | 2 ports, 40-Gbps Ethernet | Not Supported
HX-MLOM-CSC-02 (VIC 1227) | 2 ports, 10-Gbps Ethernet | 10Gbps support, starting with Release 3.5(2a)

Physical Connectivity Illustrations for Direct Connect Mode Cluster Setup

The following images show a sample of direct connect mode physical connectivity for a C-Series Rack-Mount Server with the Cisco UCS VIC 1455. The port connections remain the same for the Cisco UCS VIC 1457.


Warning

Use of 25GE passive copper cables is not recommended. For more information, see CSCvp49398.


Figure 1. Direct Connect Cabling Configuration with Cisco VIC 1455 (4-Port Linking)
Figure 2. Direct Connect Cabling Configuration with Cisco VIC 1455 (2-Port Linking)

Note

The following restrictions apply:
  • Ports 1 and 2 must connect to the same Fabric Interconnect, that is, Fabric A.

  • Ports 3 and 4 must connect to the same Fabric Interconnect, that is, Fabric B.

This is due to the internal port-channeling architecture inside the card. Ports 1 and 3 are used because the connections between ports 1 and 2 (and likewise ports 3 and 4) form an internal port-channel.



Caution

Do not connect port 1 to Fabric Interconnect A, and port 2 to Fabric Interconnect B. Use ports 1 and 3 only. Using ports 1 and 2 results in discovery and configuration failures.
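
The port-to-fabric rules in the note and caution above can be expressed as a simple check. This Python sketch is illustrative only and is not a Cisco utility; the cabling dictionary (VIC port number mapped to the fabric it is cabled to) is an assumed input format.

def validate_vic1455_cabling(cabling):
    """cabling maps VIC port number -> 'A', 'B', or None (not cabled)."""
    errors = []
    for port in (1, 2):
        fabric = cabling.get(port)
        if fabric not in (None, "A"):
            errors.append(f"Port {port} is cabled to Fabric {fabric}; ports 1 and 2 may only go to Fabric A")
    for port in (3, 4):
        fabric = cabling.get(port)
        if fabric not in (None, "B"):
            errors.append(f"Port {port} is cabled to Fabric {fabric}; ports 3 and 4 may only go to Fabric B")
    # For 2-port linking, the expected cabling is port 1 -> Fabric A and port 3 -> Fabric B.
    if cabling.get(1) != "A" or cabling.get(3) != "B":
        errors.append("Expected port 1 -> Fabric A and port 3 -> Fabric B")
    return errors

# The misconfiguration called out in the Caution above (ports 1 and 2 split across fabrics):
print(validate_vic1455_cabling({1: "A", 2: "B"}))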


Multi-VIC Support

Multi-VIC Support in HyperFlex Clusters

About Multi-VIC Support

Multiple VIC adapters may be added to HyperFlex clusters, as shown in the following illustration, and offer the following benefits:

  • Adaptive network infrastructure.

  • Maximum network design flexibility.

  • Physical VIC redundancy with auto-failover (see the following section, Multi-VIC Support with Auto Failover).

Guidelines and Special Requirements

Use the following set of guidelines and special requirements to add more than one VIC adapter in your HyperFlex clusters.

  • Important—Only supported on new deployments with Cisco HX Data Platform, Release 3.5(1a) and later. In other words, existing clusters deployed prior to release 3.5(1a) CANNOT install multiple VICs.

  • Supported for VMware ESXi clusters only. Multi-VIC is not supported on Hyper-V clusters.

  • Supported for HyperFlex M5 Converged nodes or Compute-only nodes.

  • Use with FI-attached systems only (not supported for HX Edge Systems).

  • Mandatory: A VIC 1387 (MLOM) or VIC 1457 (MLOM) is required.

  • (Optional) You can add either QTY 1 or QTY 2 VIC 1385 or VIC 1455 PCIe VICs.

  • You may not combine VIC 1300 series and VIC 1400 series in the same node or within the same cluster.

  • Interface speeds must be the same: either all 10GbE (with enough QSAs if using the VIC 1300 series) or all 40GbE. All VIC ports MUST be connected and discovered before starting installation. Installation checks to ensure that all VIC ports are properly connected to the FIs (a sanity-check sketch follows this list).

  • All nodes should use the same number of uplinks. This is especially important for the VIC 1400 series, which can use either 1 or 2 uplinks per Fabric Interconnect.

  • During discovery, the PCIe VIC links will be down because the discovery process occurs only through the mLOM slot. After discovery is complete, it is expected behavior for the PCIe links to the FI pair to remain down, because only the mLOM slot receives standby power while the server is powered off. Once the association process begins and the server powers on, the PCIe VIC links come online as well.
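
The guidelines above amount to a pre-install sanity check. The sketch below is a hypothetical Python model of those rules, not part of the HX installer; the field names and per-node input format are assumptions.

from dataclasses import dataclass, field

@dataclass
class HxNode:
    name: str
    hypervisor: str                   # "esxi" or "hyperv"
    generation: str                   # for example "M5"
    fi_attached: bool
    mlom_vic: str                     # "VIC 1387" or "VIC 1457" (mandatory MLOM)
    pcie_vics: list = field(default_factory=list)  # optional, e.g. ["VIC 1455"]
    all_vic_ports_connected: bool = True           # discovered on the FIs before install

def vic_series(model):
    # Simplified mapping used only for this sketch.
    return "1300" if model in ("VIC 1385", "VIC 1387") else "1400"

def multi_vic_violations(node):
    v = []
    if node.hypervisor != "esxi":
        v.append("Multi-VIC is supported on VMware ESXi clusters only")
    if node.generation != "M5":
        v.append("Multi-VIC requires HyperFlex M5 converged or compute-only nodes")
    if not node.fi_attached:
        v.append("Multi-VIC requires an FI-attached system (not HX Edge)")
    if len(node.pcie_vics) > 2:
        v.append("Only QTY 1 or QTY 2 optional PCIe VICs are supported")
    if any(vic_series(p) != vic_series(node.mlom_vic) for p in node.pcie_vics):
        v.append("Do not combine VIC 1300 series and VIC 1400 series in the same node")
    if not node.all_vic_ports_connected:
        v.append("All VIC ports must be connected and discovered before installation")
    return v

# Example: a Hyper-V node with mixed VIC series fails two checks.
print(multi_vic_violations(HxNode("hx-1", "hyperv", "M5", True, "VIC 1457", ["VIC 1385"])))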

Multi-VIC Support with Auto Failover

Multi-VIC support in HyperFlex is designed to tolerate any single link failure, fabric failure, or hardware VIC failure. As shown in the following illustration, all HX services are pinned to port 1 on VIC #1 and port 2 on VIC #2. Spreading the vNICs across the two VIC cards enables seamless failover during a hardware failure. The aggregate bandwidth available to HX services (management traffic, VM traffic, storage traffic, and vMotion traffic) is therefore the same whether one or two physical VICs are installed. However, the additional unused ports may be used as needed for other use cases: user vNICs can be created on port 2 of VIC #1 and port 1 of VIC #2. You may design your own failover strategies and virtual networking topologies with this additional bandwidth available to each HX server.

Important

Place new customer-defined vNICs only on these unused ports to avoid contention with production HX traffic. Do not place them on the ports used for HX services.
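
As a simple illustration of the port layout described above, the following Python snippet models which VIC ports carry HX service vNICs and which remain free for customer-defined vNICs. The port map is an assumption drawn from the description above, not output from the installer.

# Ports reserved for HX services (management, VM, storage, and vMotion traffic).
HX_SERVICE_PORTS = {("VIC-1", 1), ("VIC-2", 2)}
# Ports left unused by HX, available for customer-defined vNICs.
USER_VNIC_PORTS = {("VIC-1", 2), ("VIC-2", 1)}

def placement_ok(vic, port, is_hx_service):
    """Return True if placing a vNIC on (vic, port) follows the guideline above."""
    target = (vic, port)
    return target in (HX_SERVICE_PORTS if is_hx_service else USER_VNIC_PORTS)

# A customer vNIC proposed on VIC-1 port 1 would contend with HX traffic:
print(placement_ok("VIC-1", 1, is_hx_service=False))  # False
print(placement_ok("VIC-2", 1, is_hx_service=False))  # True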


Third Party NIC Support

Introduction

Third-party NIC support in HyperFlex is designed to give customers maximum flexibility in configuring their servers as needed for an expanding set of applications and use cases. Refer to the following section for important considerations when adding additional networking hardware to HyperFlex servers.

Prerequisites

  • Installing third-party NIC cards: Install third-party NIC cards before cluster installation, either uncabled or cabled with the links shut down. After deployment is complete, you may enable the links and create additional vSwitches and port groups for any application or VM requirements (see the sketch that follows).
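
One possible way to script the post-deployment step above is with pyVmomi (the VMware vSphere Python SDK). The following is a hedged sketch, not part of HyperFlex: it creates a new standard vSwitch and port group backed only by a third-party NIC's vmnic and leaves the HyperFlex-defined vSwitches untouched. The vCenter address, host name, vmnic, VLAN, and credentials are placeholders; verify the calls against your vSphere version before use.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only; use valid certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "hx-esxi-01.example.com")
    net_sys = host.configManager.networkSystem

    # New standard vSwitch backed only by the third-party NIC's uplink.
    # Never modify the vSwitches created by the HyperFlex installer.
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic6"]))
    net_sys.AddVirtualSwitch(vswitchName="vswitch-3rdparty", spec=vss_spec)

    # Port group for application/VM traffic on the new vSwitch.
    pg_spec = vim.host.PortGroup.Specification(
        name="app-traffic", vlanId=100,
        vswitchName="vswitch-3rdparty", policy=vim.host.NetworkPolicy())
    net_sys.AddPortGroup(portgrp=pg_spec)
finally:
    Disconnect(si)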

General guidelines for support

  • Supported for VMware ESXi clusters only. Third party NICs are not supported on Hyper-V clusters.

  • Additional vSwitches may be created that use leftover vmnics from third-party network adapters. Take care to ensure that no changes are made to the vSwitches defined by HyperFlex. Third-party vmnics should never be connected to the pre-defined vSwitches created by the HyperFlex installer. Additional user-created vSwitches are the sole responsibility of the administrator and are not managed by HyperFlex.

  • Third party NICs are supported on all HX converged and compute-only systems, including HX running under FIs and HX Edge.

  • Refer to the UCS HCL Tool for the complete list of network adapters supported. HX servers follow the same certification process as C-series servers and will support the same adapters.

  • The most popular network adapters will be available to order preinstalled from Cisco when configuring HX converged or compute-only nodes. However, any supported adapter may be ordered as a spare, and installed in the server before beginning HyperFlex installation.

  • Support for third party adapters begins with HX Data Platform release 3.5(1a) and later.

  • Adding new networking hardware after a cluster is deployed, regardless of HXDP version, is not recommended. Physical hardware changes can disrupt existing virtual networking configurations and require manual intervention to restore HyperFlex and other applications running on that host.

  • The maximum quantity of NICs supported is based on physical PCIe space in the server. Mixing of various port speeds, adapter vendors, and models is allowed, as these interfaces are not used to run the HyperFlex infrastructure.

  • Third party NICs may be directly connected to any external switch. Connectivity through the Fabric Interconnects is not required for third party NICs.

  • Important—Special care must be taken when using third-party adapters on HX systems running under FIs. Manual policy modification is required during installation. Follow these steps:

    1. Launch Cisco UCS Manager and login as an administrator.

    2. In the Cisco UCS Manager UI, go to the Servers tab.

    3. Navigate to the HX Cluster org that you wish to change. Each Service Profile reports a configuration failure message, such as "There is not enough resources all for connection-placement"; the following steps fix this issue.

    4. On each Service Profile template listed (hx-nodes, hx-nodes-m5, compute-nodes, compute-nodes-m5), change the vNIC / vHBA placement policy to Let System Perform Placement:

      1. Click Modify vNIC / vHBA Placement.

      2. Change the Placement to Let System Perform Placement, as shown in the following illustration.

      3. This action causes all Service Profiles associated with any of the 4 Service Profile Templates to enter a pending-acknowledgement state. You must reboot each affected server to complete the change. This should ONLY be done on a fresh cluster, not on an existing cluster during upgrade or expansion.

    5. If the config failure fault on the HX service profiles does not clear in UCS Manager, perform the following additional steps:

      1. Click Modify vNIC / vHBA Placement.

      2. Change the Placement back to HyperFlex and click OK.

      3. Click Modify vNIC / vHBA Placement.

      4. Change the Placement back to Let System Perform Placement and click OK.

      5. Confirm the faults are cleared from the service profiles on all HX servers.
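
To confirm that the configuration-failure faults have cleared, you can also query UCS Manager programmatically instead of checking the GUI. The following sketch uses the Cisco ucsmsdk Python SDK (pip install ucsmsdk); the UCS Manager address, credentials, and the substring match on the fault description are assumptions for illustration.

from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()
try:
    # Pull all active faults and keep only those that mention connection placement.
    faults = handle.query_classid("faultInst")
    placement_faults = [f for f in faults
                        if "connection-placement" in (f.descr or "").lower()]
    if placement_faults:
        for f in placement_faults:
            print(f"Still present: {f.dn}: {f.descr}")
    else:
        print("No connection-placement configuration faults remain on the HX service profiles.")
finally:
    handle.logout()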