Maintaining the Server

Status LEDs and Buttons

This section contains information for interpreting front, rear, and internal LED states.

Front-Panel LEDs

Figure 1. Front Panel LEDs
Table 1. Front Panel LEDs, Definition of States

1. SAS/SATA drive fault

Note

 
NVMe solid state drive (SSD) drive tray LEDs have different behavior than SAS/SATA drive trays.
  • Off—The hard drive is operating properly.

  • Amber—Drive fault detected.

  • Amber, blinking—The device is rebuilding.

  • Amber, blinking with one-second interval—Drive locate function activated in the software.

2. SAS/SATA drive activity LED

  • Off—There is no hard drive in the hard drive tray (no access, no fault).

  • Green—The hard drive is ready.

  • Green, blinking—The hard drive is reading or writing data.

1. NVMe SSD drive fault

Note

 
NVMe solid state drive (SSD) drive tray LEDs have different behavior than SAS/SATA drive trays.
  • Off—The drive is not in use and can be safely removed.

  • Green—The drive is in use and functioning properly.

  • Green, blinking—The driver is initializing following insertion, or the driver is unloading following an eject command.

  • Amber—The drive has failed.

  • Amber, blinking—A drive Locate command has been issued in the software.

2. NVMe SSD activity

  • Off—No drive activity.

  • Green, blinking—There is drive activity.

3. Power button/LED

  • Off—There is no AC power to the server.

  • Amber—The server is in standby power mode. Power is supplied only to the Cisco IMC and some motherboard functions.

  • Green—The server is in main power mode. Power is supplied to all server components.

4. Unit identification

  • Off—The unit identification function is not in use.

  • Blue, blinking—The unit identification function is activated.

5. System health

  • Green—The server is running in normal operating condition.

  • Green, blinking—The server is performing system initialization and memory check.

  • Amber, steady—The server is in a degraded operational state (minor fault). For example:

    • Power supply redundancy is lost.

    • CPUs are mismatched.

    • At least one CPU is faulty.

    • At least one DIMM is faulty.

    • At least one drive in a RAID configuration failed.

  • Amber, 2 blinks—There is a major fault with the system board.

  • Amber, 3 blinks—There is a major fault with the memory DIMMs.

  • Amber, 4 blinks—There is a major fault with the CPUs.

6. Power supply status

  • Green—All power supplies are operating normally.

  • Amber, steady—One or more power supplies are in a degraded operational state.

  • Amber, blinking—One or more power supplies are in a critical fault state.

7. Fan status

  • Green—All fan modules are operating properly.

  • Amber, blinking—One or more fan modules breached the non-recoverable threshold.

8. Network link activity

  • Off—The Ethernet LOM port link is idle.

  • Green—One or more Ethernet LOM ports are link-active, but there is no activity.

  • Green, blinking—One or more Ethernet LOM ports are link-active, with activity.

9. Temperature status

  • Green—The server is operating at normal temperature.

  • Amber, steady—One or more temperature sensors breached the critical threshold.

  • Amber, blinking—One or more temperature sensors breached the non-recoverable threshold.

Rear-Panel LEDs

Figure 2. Rear Panel LEDs
Table 2. Rear Panel LEDs, Definition of States

1. 1-Gb/10-Gb Ethernet link speed (on both LAN1 and LAN2)

  • Off—Link speed is 100 Mbps.

  • Amber—Link speed is 1 Gbps.

  • Green—Link speed is 10 Gbps.

2. 1-Gb/10-Gb Ethernet link status (on both LAN1 and LAN2)

  • Off—No link is present.

  • Green—Link is active.

  • Green, blinking—Traffic is present on the active link.

3. 1-Gb Ethernet dedicated management link speed

  • Off—Link speed is 10 Mbps.

  • Amber—Link speed is 100 Mbps.

  • Green—Link speed is 1 Gbps.

4. 1-Gb Ethernet dedicated management link status

  • Off—No link is present.

  • Green—Link is active.

  • Green, blinking—Traffic is present on the active link.

5. Rear unit identification

  • Off—The unit identification function is not in use.

  • Blue, blinking—The unit identification function is activated.

6. Power supply status (one LED for each power supply unit)

AC power supplies:

  • Off—No AC input (12 V main power off, 12 V standby power off).

  • Green, blinking—12 V main power off; 12 V standby power on.

  • Green, solid—12 V main power on; 12 V standby power on.

  • Amber, blinking—Warning threshold detected but 12 V main power on.

  • Amber, solid—Critical error detected; 12 V main power off (for example, over-current, over-voltage, or over-temperature failure).

Internal Diagnostic LEDs

The server has internal fault LEDs for CPUs, DIMMs, and fan modules.

Figure 3. Internal Diagnostic LED Locations

1. Fan module fault LEDs (one behind each fan connector on the motherboard)

  • Amber—Fan has a fault or is not fully seated.

  • Green—Fan is OK.

2. CPU fault LEDs (one behind each CPU socket on the motherboard)

These LEDs operate only when the server is in standby power mode.

  • Amber—CPU has a fault.

  • Off—CPU is OK.

3. DIMM fault LEDs (one behind each DIMM socket on the motherboard)

These LEDs operate only when the server is in standby power mode.

  • Amber—DIMM has a fault.

  • Off—DIMM is OK.

Preparing For Component Installation

This section includes information and tasks that help prepare the node for component installation.

Required Equipment For Service Procedures

The following tools and equipment are used to perform the procedures in this chapter:

  • T-30 Torx driver (supplied with replacement CPUs for heatsink removal)

  • #1 flat-head screwdriver (supplied with replacement CPUs for heatsink removal)

  • #1 Phillips-head screwdriver (for M.2 SSD replacement)

  • Electrostatic discharge (ESD) strap or other grounding equipment such as a grounded mat

Decommissioning the Node Using Cisco UCS Manager

Before replacing an internal component of a node, you must decommission the node to remove it from the Cisco UCS configuration. When you use this procedure to shut down an HX node, Cisco UCS Manager triggers the OS into a graceful shutdown sequence.

Procedure


Step 1

In the Navigation pane, click Equipment.

Step 2

Expand Equipment > Rack Mounts > Servers.

Step 3

Choose the node that you want to decommission.

Step 4

In the Work pane, click the General tab.

Step 5

In the Actions area, click Server Maintenance.

Step 6

In the Maintenance dialog box, click Decommission, then click OK.

The node is removed from the Cisco UCS configuration.


Shutting Down and Removing Power From the Node

The node can run in either of two power modes:

  • Main power mode—Power is supplied to all node components and any operating system on your drives can run.

  • Standby power mode—Power is supplied only to the service processor and certain components. In this mode, you can safely remove power cords from the node without risk to the operating system or data.


Caution


After a node is shut down to standby power, electric current is still present in the node. To completely remove power as directed in some service procedures, you must disconnect all power cords from all power supplies in the node.


You can shut down the node by using the front-panel power button or the software management interfaces.

Shutting Down Using The Cisco UCS Manager Equipment Tab

When you use this procedure to shut down an HX node, Cisco UCS Manager triggers the OS into a graceful shutdown sequence.


Note


If the Shutdown Server link is dimmed in the Actions area, the node is not running.


Procedure

Step 1

In the Navigation pane, click Equipment.

Step 2

Expand Equipment > Rack Mounts > Servers.

Step 3

Choose the node that you want to shut down.

Step 4

In the Work pane, click the General tab.

Step 5

In the Actions area, click Shutdown Server.

Step 6

If a confirmation dialog displays, click Yes.

After the node has been successfully shut down, the Overall Status field on the General tab displays a power-off status.

Step 7

If a service procedure instructs you to completely remove power from the node, disconnect all power cords from the power supplies in the node.


Shutting Down Using The Cisco UCS Manager Service Profile

When you use this procedure to shut down an HX node, Cisco UCS Manager triggers the OS into a graceful shutdown sequence.


Note


If the Shutdown Server link is dimmed in the Actions area, the node is not running.


Procedure

Step 1

In the Navigation pane, click Servers.

Step 2

Expand Servers > Service Profiles.

Step 3

Expand the node for the organization that contains the service profile of the node that you are shutting down.

Step 4

Choose the service profile of the node that you are shutting down.

Step 5

In the Work pane, click the General tab.

Step 6

In the Actions area, click Shutdown Server.

Step 7

If a confirmation dialog displays, click Yes.

After the node has been successfully shut down, the Overall Status field on the General tab displays a power-off status.

Step 8

If a service procedure instructs you to completely remove power from the node, disconnect all power cords from the power supplies in the node.


Shutting Down Using vSphere With HX Maintenance Mode

Some procedures directly place the node into Cisco HX Maintenance mode. This procedure migrates all VMs to other nodes before the node is shut down and decommissioned from Cisco UCS Manager.

Procedure

Step 1

Put the node in Cisco HX Maintenance mode by using the vSphere interface:

  • Using the vSphere web client:

    1. Log in to the vSphere web client.

    2. Go to Home > Hosts and Clusters.

    3. Expand the Datacenter that contains the HX Cluster.

    4. Expand the HX Cluster and select the node.

    5. Right-click the node and select Cisco HX Maintenance Mode > Enter HX Maintenance Mode.

  • Using the command-line interface:

    1. Log in to the storage controller cluster command line as a user with root privileges.

    2. Identify the node ID and IP address:

      # stcli node list --summary 
    3. Enter the node into HX Maintenance Mode:

      # stcli node maintenanceMode (--id ID | --ip IP-address) --mode enter

      (See also stcli node maintenanceMode --help).

    4. Log into the ESXi command line of this node as a user with root privileges.

    5. Verify that the node has entered HX Maintenance Mode:

      # esxcli system maintenanceMode get 
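
The following consolidated example shows the same sequence with a hypothetical node IP address (192.168.10.4 is a placeholder; substitute the ID or IP address reported by stcli node list). Run the stcli commands from the storage controller VM and the esxcli check from the ESXi shell of the node; typical esxcli output is shown:

      # stcli node list --summary
      # stcli node maintenanceMode --ip 192.168.10.4 --mode enter
      # esxcli system maintenanceMode get
      Enabled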

Step 2

Shut down the node using UCS Manager as described in Shutting Down and Removing Power From the Node.


Shutting Down Using the Power Button


Note


This method is not recommended for a HyperFlex node, but the operation of the physical power button is explained here in case an emergency shutdown is required.


Procedure

Step 1

Check the color of the Power Status LED:

  • Green—The node is in main power mode and must be shut down before you can safely remove power.

  • Amber—The node is already in standby mode and you can safely remove power.

Step 2

Invoke either a graceful shutdown or a hard shutdown:

Caution

 
To avoid data loss or damage to your operating system, you should always invoke a graceful shutdown of the operating system.
  • Graceful shutdown—Press and release the Power button. The operating system performs a graceful shutdown and the node goes to standby mode, which is indicated by an amber Power button/LED.

  • Emergency shutdown—Press and hold the Power button for 4 seconds to force the main power off and immediately enter standby mode.

Step 3

If a service procedure instructs you to completely remove power from the node, disconnect all power cords from the power supplies in the node.


Post-Maintenance Procedures

This section contains procedures that are referenced at the end of some maintenance procedures.

Recommissioning the Node Using Cisco UCS Manager

After replacing an internal component of a node, you must recommission the node to add it back into the Cisco UCS configuration.

Procedure

Step 1

In the Navigation pane, click Equipment.

Step 2

Expand Equipment > Rack Mounts.

Step 3

In the Work pane, click the Decommissioned tab.

Step 4

On the row for each node that you want to recommission, do the following:

  1. In the Recommission column, check the check box.

  2. Click Save Changes.

Step 5

If a confirmation dialog box displays, click Yes.

Step 6

(Optional) Monitor the progress of the server recommission and discovery on the FSM tab for the node.


Associating a Service Profile With an HX Node

Use this procedure to associate an HX node to its service profile after recommissioning.

Procedure

Step 1

In the Navigation pane, click Servers.

Step 2

Expand Servers > Service Profiles.

Step 3

Expand the node for the organization that contains the service profile that you want to associate with the HX node.

Step 4

Right-click the service profile that you want to associate with the HX node and then select Associate Service Profile.

Step 5

In the Associate Service Profile dialog box, select the Server option.

Step 6

Navigate through the navigation tree and select the HX node to which you are assigning the service profile.

Step 7

Click OK.


Exiting HX Maintenance Mode

Use this procedure to exit HX Maintenance Mode after performing a service procedure.

Procedure

Exit the node from Cisco HX Maintenance mode by using the vSphere interface:

  • Using the vSphere web client:

    1. Log in to the vSphere web client.

    2. Go to Home > Hosts and Clusters.

    3. Expand the Datacenter that contains the HX Cluster.

    4. Expand the HX Cluster and select the node.

    5. Right-click the node and select Cisco HX Maintenance Mode > Exit HX Maintenance Mode.

  • Using the command-line interface:

    1. Log in to the storage controller cluster command line as a user with root privileges.

    2. Identify the node ID and IP address:

      # stcli node list --summary 
    3. Exit the node out of HX Maintenance Mode:

      # stcli node maintenanceMode (--id ID | --ip IP-address) --mode exit

      (See also stcli node maintenanceMode --help).

    4. Log into the ESXi command line of this node as a user with root privileges.

    5. Verify that the node has exited HX Maintenance Mode:

      # esxcli system maintenanceMode get 
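
For example, with a hypothetical node IP address of 192.168.10.4 (substitute your own value), the exit and verification sequence is as follows. Run the stcli command from the storage controller VM and the esxcli check from the ESXi shell of the node; typical esxcli output is shown:

      # stcli node maintenanceMode --ip 192.168.10.4 --mode exit
      # esxcli system maintenanceMode get
      Disabled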

Removing the Node Top Cover

Procedure


Step 1

Remove the top cover:

  1. If the cover latch is locked, use a screwdriver to turn the lock 90 degrees counterclockwise to unlock it.

  2. Lift on the end of the latch that has the green finger grip. The cover is pushed back to the open position as you lift the latch.

  3. Lift the top cover straight up from the node and set it aside.

Step 2

Replace the top cover:

  1. With the latch in the fully open position, place the cover on top of the node about one-half inch (1.27 cm) behind the lip of the front cover panel. The opening in the latch should fit over the peg that sticks up from the fan tray.

  2. Press the cover latch down to the closed position. The cover is pushed forward to the closed position as you push down the latch.

  3. If desired, lock the latch by using a screwdriver to turn the lock 90 degrees clockwise.

Figure 4. Removing the Top Cover

1. Top cover
2. Locking cover latch
3. Serial number label location


Removing and Replacing Components


Warning


Blank faceplates and cover panels serve three important functions: they prevent exposure to hazardous voltages and currents inside the chassis; they contain electromagnetic interference (EMI) that might disrupt other equipment; and they direct the flow of cooling air through the chassis. Do not operate the system unless all cards, faceplates, front covers, and rear covers are in place.

Statement 1029



Caution


When handling node components, handle them only by carrier edges and use an electrostatic discharge (ESD) wrist-strap or other grounding device to avoid damage.

Tip


You can press the unit identification button on the front panel or rear panel to turn on a flashing, blue unit identification LED on both the front and rear panels of the node. This button allows you to locate the specific node that you are servicing when you go to the opposite side of the rack. You can also activate these LEDs remotely.

This section describes how to install and replace node components.

Serviceable Component Locations

This topic shows the locations of the field-replaceable components and service-related items. The view in the following figure shows the node with the top cover removed.

Figure 5. Serviceable Component Locations

1. Drive bays 3 – 10:

  • HX220c Hybrid: persistent data HDDs

  • HX220c All-Flash: persistent data SSDs

  • HX220c All-NVMe: persistent data NVMe SSDs

2. Drive bay 2: caching SSD

3. Drive bay 1: system SSD for logs

4. Cooling fan modules (seven, hot-swappable)

5. DIMM sockets on motherboard (12 per CPU)

6. CPUs and heatsinks

7. Mini-storage module for SATA M.2 SSD boot drive

8. RTC battery, vertical socket

9. Power supplies (one or two, hot-swappable when redundant as 1+1)

10. PCIe riser 2/slot 2 (half-height, x16 lane); includes PCIe cable connectors for front-loading NVMe SSDs (x8 lane)

11. PCIe riser 1/slot 1 (full-height, x16 lane); includes socket for Micro-SD card

12. Modular LOM (mLOM) card bay on chassis floor (x16 PCIe lane), not visible in this view

13. Modular RAID (mRAID) riser, supports HBA storage controller

14. PCIe cable connectors for front-loading NVMe SSDs on PCIe riser 2

15. Micro-SD card socket on PCIe riser 1

Considerations For Upgrading Hardware in Multiple Nodes of a Cluster

This chapter contains removal and replacement procedures for components that are supported as field-replaceable. This topic describes additional considerations when multiple nodes in an existing cluster are upgraded with the addition or replacement of components.

The following procedure describes the general steps and considerations for upgrading hardware in the nodes of a cluster.


Note


Hot-swappable components can be replaced or added without shutting down the system as described below. Those include certain drives, the internal fan modules, and the power supplies. Check the procedure for the component in this chapter to verify whether the shutdown steps are required.


Procedure


Step 1

Verify that the existing cluster is healthy.
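
A hedged example of checking cluster health from the storage controller VM command line follows; the stcli cluster commands shown are assumed to be available in your HXDP release, and you can use HX Connect or vCenter instead:

      # stcli cluster info
      # stcli cluster storage-summary

Confirm that the reported cluster state and resiliency status are healthy before continuing.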

Step 2

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 3

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 4

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 5

Disconnect all power cables from all power supplies.

Step 6

Remove and replace the existing component or add a new component following the supported population rules. Use the specific procedure in this chapter for the component.

Step 7

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Ensure that ESXi boots. The node is auto-discovered by Cisco UCS Manager and the ESXi operating system recognizes the new components.

Step 8

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 9

Verify that ESXi is reconnected to HyperFlex vCenter.

Step 10

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Step 11

Verify within 30 minutes that the HX cluster is again in a healthy state.

Step 12

Move a test VM to the host. Ensure that it is working by performing tests.

Note

 

You must perform the hardware upgrade on the next HyperFlex node in the cluster within two hours after upgrading the prior node and verifying that the cluster is healthy. The HyperFlex Data Platform (HXDP) runs a data rebalance operation every two hours, and this timer starts after the cluster is again in a healthy state. Compute-only nodes are not part of the data rebalance procedure and can be upgraded outside of the two-hour window.

Step 13

Repeat the steps above to upgrade the hardware component in each node until all node hardware is updated.


Replacing Hard Drives or Solid State Drives

This section contains information for replacing front-loading drives.

Drive Population Rules

Drive slot numbering is shown in the following figure.

Figure 6. Drive Slot Numbering

Observe these drive population rules:

  • Slot 1: System SSD for SDS logs only

  • Slot 2: Caching SSD only

  • Slots 3 – 10:

    • HX220c Hybrid: persistent data HDDs

    • HX220c All-Flash: persistent data SSDs

    • HX220c All-NVMe: persistent data NVMe SSDs

  • When populating persistent data drives, add drives to the lowest-numbered bays first.

  • Keep an empty drive blanking tray in any unused bays to ensure proper airflow.

  • See HX220c M5 Drive Configuration Comparison for supported drive configurations.


Note


Regarding drive capacity stated on drive labels vs reported capacity in software:

The capacity stated on the drive physical label and the capacity reported by the HyperFlex software differ because of the following reasons:

  1. Drive label capacities are stated in decimal (base 10) notation, while the software-reported capacities are stated in binary (base 2) notation. For example, 1 TB in decimal notation would be reported as 931 GB in binary notation—these are actually the same capacity, similar to distance reported in miles vs kilometers being the same distance, but in different units. These examples show capacities stated as decimal notation vs binary notation (a worked conversion follows this list):

    • 500 GB (decimal) = 465.5 GB (binary)

    • 1 TB (decimal) = 931 GB (binary)

    • 2 TB (decimal) = 1.82 TB (binary)

    • 3 TB (decimal) = 2.72 TB (binary)

  2. Preinstalled software and partitions also reduce storage capacity.
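
As an illustrative check of the decimal-to-binary conversion (run from any workstation shell that has python3; this is not part of the HyperFlex tooling), divide the decimal byte count by 2^30 to obtain the binary gigabyte value:

      # python3 -c 'print(round(10**12 / 2**30, 2))'
      931.32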


HX220c M5 Drive Configuration Comparison

The following table compares the drives supported by each function in the node. Also note the considerations listed after the table.

System SSD (Slot 1)

  • HX220c M5 Hybrid: SSD HX-SD240G61X-EV

  • HX220c M5 All-Flash: SSD HX-SD240G61X-EV

  • HX220c M5 SED Hybrid: SSD HX-SD240G61X-EV

  • HX220c M5 SED All-Flash: SSD HX-SD240G61X-EV

  • HX220c M5 All-NVMe: NVMe SSD HX-NVMELW-I500

Caching SSD (Slot 2)

  • HX220c M5 Hybrid: SSD HX-SD480G63X-EP or HX-SD800G123X-EP

  • HX220c M5 All-Flash: SSD HX-NVMEHW-H1600, HX-SD400G12TX-EP, or HX-NVMEXP-I375

  • HX220c M5 SED Hybrid: SED SSD HX-SD800GBENK9

  • HX220c M5 SED All-Flash: SED SSD HX-SD800GBENK9

  • HX220c M5 All-NVMe: NVMe SSD HX-NVMEXP-I375 or HX-NVMEXP-I750

Persistent data drives (Slots 3 - 10)

  • HX220c M5 Hybrid: HDD HX-HD12TB10K12N

  • HX220c M5 All-Flash: SSD HX-SD960G61X-EV or HX-SD38T61X-EV

  • HX220c M5 SED Hybrid: SED HDD HX-HD12T10NK9

  • HX220c M5 SED All-Flash: SED SSD HX-SD800GBENK9, HX-SD960GBE1NK9, or HX-SD38TBE1NK9

  • HX220c M5 All-NVMe: NVMe SSD HX-NVMELW-I1000 or HX-NVMEHW-I4000

Note the following considerations and restrictions for All-Flash HyperFlex nodes:

  • The minimum Cisco HyperFlex software required for using Intel Optane NVMe SSD HX-NVMEXP-I375 is Release 3.0(1a) or later. If you use HX-NVMEXP-I375 as your caching drive in HX220c All-Flash nodes, all nodes in the cluster must use this same drive as the caching drive.

  • HX220c All-Flash HyperFlex nodes are ordered as specific All-Flash PIDs; All-Flash configurations are supported only on those PIDs.

  • Conversion from Hybrid HX220c configuration to HX220c All-Flash configuration is not supported.

  • Mixing Hybrid nodes with All-Flash nodes within the same HyperFlex cluster is not supported.

  • If you use an NVMe SSD, PCIe cable CBL-NVME-C220FF is required to carry the PCIe signal from the front drive backplane to PCIe riser 2.

Note the following considerations and restrictions for SED HyperFlex nodes:

  • The minimum Cisco HyperFlex software required for SED configurations is Release 3.5(1a) or later.

  • Mixing HX220c Hybrid SED HyperFlex nodes with HX220c All-Flash SED HyperFlex nodes within the same HyperFlex cluster is not supported.

Drive Replacement Overview

The three types of drives in the node require different replacement procedures.

System SSD

Slot 1

The node must be put into Cisco HX Maintenance Mode before replacing the System SSD. See Replacing the System SSD (Slot 1).

Note

 

After you replace the Housekeeping SSD, see Replacing Housekeeping SSDs in the Cisco HyperFlex Data Platform Administration Guide for additional software update steps.

Caching SSD

Slot 2

Hot-swap replacement is supported for SAS/SATA drives. See Replacing the Caching SSD (Slot 2).

Note

 

Hot-swap replacement for SAS/SATA drives includes hot-removal, so you can remove the drive while it is still operating.

Note

 

If an NVMe SSD is used as the Caching SSD, additional steps are required as described in the procedure.

Persistent data drives

Slots 3 - 10

Hot-swap replacement is supported for SAS/SATA drives. See Replacing Persistent Data Drives (Slots 3 - 10).

Note

 

Hot-swap replacement includes hot-removal, so you can remove the drive while it is still operating.

Note

 

If an NVMe SSD is used as the data drive, additional steps are required as described in the procedure.

Replacing the System SSD (Slot 1)

The Housekeeping SSD must be installed in slot 1.


Note


This procedure requires assistance from technical support for additional software update steps after the hardware is replaced. It cannot be completed without technical support assistance.



Note


Always replace the drive with the same type and size as the original drive.



Caution


Put the node in Cisco HX Maintenance mode before replacing the Housekeeping SSD, as described in the procedure. Hot-swapping the Housekeeping SSD while the node is running causes the node to fail.


Procedure

Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Remove the Housekeeping SSD:

  1. Press the release button on the face of the drive tray.

  2. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  3. Remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.

Step 6

Install a new drive:

  1. Place a new drive in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Figure 7. Replacing a Drive in a Drive Tray

1. Ejector lever
2. Release button
3. Drive tray screws (two on each side)
4. Drive removed from drive tray

Step 7

Replace power cables and then power on the node by pressing the Power button.

Step 8

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 9

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 10

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Note

 

After you replace the Housekeeping SSD, see Replacing Housekeeping SSDs in the Cisco HyperFlex Data Platform Administration Guide for additional software update steps.


Replacing the Caching SSD (Slot 2)

The Caching SSD must be installed in slot 2.

Note the following considerations for NVMe SSDs, when used as the Caching SSD:

  • NVMe SSDs are supported only in All-Flash and All-NVMe nodes. NVMe SSDs are not supported in Hybrid nodes.

  • In All-Flash nodes, NVMe SSDs are supported only in the Caching SSD position, in drive bay 2. NVMe SSDs are supported for persistent storage or as the Housekeeping drive only in an All-NVMe node.

  • The locator (beacon) LED cannot be turned on or off on NVMe SSDs.


Note


Always replace the drive with the same type and size as the original drive.



Note


Upgrading or downgrading the Caching drive in an existing HyperFlex cluster is not supported. If the Caching drive must be upgraded or downgraded, then a full redeployment of the HyperFlex cluster is required.



Note


When using a SAS/SATA drive, hot-swap replacement includes hot-removal, so you can remove the drive while it is still operating. NVMe drives cannot be hot-swapped.


Procedure

Step 1

Only if the caching drive is an NVMe SSD, place the ESXi host in HX Maintenance Mode (see Shutting Down Using vSphere With HX Maintenance Mode). Otherwise, skip to step 2.

Step 2

Remove the Caching SSD:

  1. Press the release button on the face of the drive tray.

  2. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  3. Remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.

Step 3

Install a new drive:

  1. Place a new drive in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Step 4

Only if the Caching SSD is an NVMe SSD:

  1. Reboot the ESXi host. This enables ESXi to discover the NVMe SSD.

  2. Exit the ESXi host from HX Maintenance Mode (see Exiting HX Maintenance Mode).

Figure 8. Replacing a Drive in a Drive Tray

1. Ejector lever
2. Release button
3. Drive tray screws (two on each side)
4. Drive removed from drive tray
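
After the reboot in step 4, you can optionally confirm from the ESXi shell of the node that the new drive is visible before exiting HX Maintenance Mode. This is a hedged sketch using a generic ESXi storage listing rather than a HyperFlex-specific command, and the output format varies by ESXi release:

      # esxcli storage core device list | grep -i nvme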


Replacing Persistent Data Drives (Slots 3 - 10)


Note


Hot-swap replacement includes hot-removal, so you can remove the drive while it is still operating.



Note


Always replace the drive with the same type and size as the original drive.


Note the following considerations for NVMe SSDs, when used as the data SSD:

  • NVMe SSDs are supported as persistent data drives only in All-NVMe nodes.

  • The locator (beacon) LED cannot be turned on or off on NVMe SSDs.

Procedure

Step 1

Only if the persistent data drive is an NVMe SSD, place the ESXi host in HX Maintenance Mode (see Shutting Down Using vSphere With HX Maintenance Mode). Otherwise, skip to step 2.

Step 2

Remove the drive that you are replacing or remove a blank drive tray from the bay:

  1. Press the release button on the face of the drive tray.

  2. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  3. If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.

Step 3

Install a new drive:

  1. Place a new drive in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Step 4

Only if the persistent data drive is an NVMe SSD:

  1. Reboot the ESXi host. This enables ESXi to discover the NVMe SSD.

  2. Exit the ESXi host from HX Maintenance Mode (see Exiting HX Maintenance Mode).

    Figure 9. Replacing a Drive in a Drive Tray

    1. Ejector lever
    2. Release button
    3. Drive tray screws (two on each side)
    4. Drive removed from drive tray


Replacing Fan Modules


Tip


Each fan module has a fault LED next to the fan connector on the motherboard. This LED lights green when the fan is correctly seated and is operating OK. The LED lights amber when the fan has a fault or is not correctly seated.

Caution


You do not have to shut down or remove power from the node to replace fan modules because they are hot-swappable. However, to maintain proper cooling, do not operate the node for more than one minute with any fan module removed.

Procedure


Step 1

Remove an existing fan module:

  1. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the node from the rack.
  2. Remove the top cover from the node as described in Removing the Node Top Cover.

  3. Grasp the fan module at its front and rear finger-grips. Lift straight up to disengage its connector from the motherboard.

Step 2

Install a new fan module:

  1. Set the new fan module in place. The arrow printed on the top of the fan module should point toward the rear of the node.

  2. Press down gently on the fan module to fully engage it with the connector on the motherboard.

  3. Replace the top cover to the node.

  4. Replace the node in the rack.


Replacing Memory DIMMs


Caution


DIMMs and their sockets are fragile and must be handled with care to avoid damage during installation.



Caution


Cisco does not support third-party DIMMs. Using non-Cisco DIMMs in the server might result in system problems or damage to the motherboard.



Note


To ensure the best node performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace DIMMs.


DIMM Population Rules and Memory Performance Guidelines

This topic describes the rules and guidelines for maximum memory performance.

DIMM Slot Numbering

The following figure shows the numbering of the DIMM slots on the motherboard.

Figure 10. DIMM Slot Numbering
DIMM Population Rules

Observe the following guidelines when installing or replacing DIMMs for maximum performance:

  • Each CPU supports six memory channels.

    • CPU 1 supports channels A, B, C, D, E, F.

    • CPU 2 supports channels G, H, J, K, L, M.

  • Each channel has two DIMM sockets (for example, channel A = slots A1, A2).

  • In a single-CPU configuration, populate the channels for CPU1 only (A, B, C, D, E, F).

  • For optimal performance, populate DIMMs in the order shown in the following table, depending on the number of CPUs and the number of DIMMs per CPU. If your server has two CPUs, balance DIMMs evenly across the two CPUs as shown in the table.


    Note


    The table below lists recommended configurations. Using 5, 7, 9, 10, or 11 DIMMs per CPU is not recommended.


    Table 3. DIMM Population Order

    Number of DIMMs per CPU (recommended configurations) and the slots to populate on each CPU:

    • 1 DIMM per CPU: CPU 1 blue #1 slots (A1); CPU 2 blue #1 slots (G1)

    • 2 DIMMs per CPU: CPU 1 blue #1 slots (A1, B1); CPU 2 blue #1 slots (G1, H1)

    • 3 DIMMs per CPU: CPU 1 blue #1 slots (A1, B1, C1); CPU 2 blue #1 slots (G1, H1, J1)

    • 4 DIMMs per CPU: CPU 1 blue #1 slots (A1, B1); (D1, E1); CPU 2 blue #1 slots (G1, H1); (K1, L1)

    • 6 DIMMs per CPU: CPU 1 blue #1 slots (A1, B1); (C1, D1); (E1, F1); CPU 2 blue #1 slots (G1, H1); (J1, K1); (L1, M1)

    • 8 DIMMs per CPU: CPU 1 blue #1 slots (A1, B1); (D1, E1), plus black #2 slots (A2, B2); (D2, E2); CPU 2 blue #1 slots (G1, H1); (K1, L1), plus black #2 slots (G2, H2); (K2, L2)

    • 12 DIMMs per CPU: CPU 1 blue #1 slots (A1, B1); (C1, D1); (E1, F1), plus black #2 slots (A2, B2); (C2, D2); (E2, F2); CPU 2 blue #1 slots (G1, H1); (J1, K1); (L1, M1), plus black #2 slots (G2, H2); (J2, K2); (L2, M2)

  • The maximum combined memory allowed in the 12 DIMM slots controlled by any one CPU is 768 GB. To populate the 12 DIMM slots with more than 768 GB of combined memory, you must use a high-memory CPU that has a PID that ends with an "M", for example, UCS-CPU-6134M.

  • Observe the DIMM mixing rules shown in the following table.

    Table 4. DIMM Mixing Rules

    • DIMM capacity (RDIMM = 16 GB or 32 GB; LRDIMM = 64 GB; TSV-RDIMM = 128 GB)

      • DIMMs in the same channel: You can mix different capacity DIMMs in the same channel (for example, A1, A2).

      • DIMMs in the same bank: You cannot mix DIMM capacities in a bank (for example, A1, B1). Pairs of DIMMs must be identical (same PID and revision).

    • DIMM speed (for example, 2666 MHz)

      • DIMMs in the same channel: You can mix speeds, but DIMMs will run at the speed of the slowest DIMMs/CPUs installed in the channel.

      • DIMMs in the same bank: You cannot mix DIMM speeds in a bank (for example, A1, B1). Pairs of DIMMs must be identical (same PID and revision).

    • DIMM type (RDIMMs or LRDIMMs)

      • DIMMs in the same channel: You cannot mix DIMM types in a channel.

      • DIMMs in the same bank: You cannot mix DIMM types in a bank.

Replacing DIMMs

Identifying a Faulty DIMM

Each DIMM socket has a corresponding DIMM fault LED, directly in front of the DIMM socket. When the node is in standby power mode, these LEDs light amber to indicate a faulty DIMM.

Procedure

Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Remove an existing DIMM:

  1. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the node from the rack.
  2. Remove the top cover from the node as described in Removing the Node Top Cover.

  3. Remove the air baffle that covers the front ends of the DIMM slots to provide clearance.

  4. Locate the DIMM that you are removing, and then open the ejector levers at each end of its DIMM slot.

Step 6

Install a new DIMM:

Note

 
Before installing DIMMs, see the memory population rules for this node: DIMM Population Rules and Memory Performance Guidelines.
  1. Align the new DIMM with the empty slot on the motherboard. Use the alignment feature in the DIMM slot to correctly orient the DIMM.

  2. Push down evenly on the top corners of the DIMM until it is fully seated and the ejector levers on both ends lock into place.

  3. Replace the top cover to the node.

  4. Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.

Step 7

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 8

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 9

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
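
As an optional sanity check after the node is back in service, you can confirm the total physical memory that ESXi reports from the host's shell. This is a hedged example and is not part of the official procedure:

      # esxcli hardware memory get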


Replacing CPUs and Heatsinks

This section contains CPU configuration rules and the procedure for replacing CPUs and heatsinks:

Special Information For Upgrades to Second Generation Intel Xeon Scalable Processors


Caution


You must upgrade your node firmware and software to the required minimum levels before you upgrade to the Second Generation Intel Xeon Scalable Processors that are supported in this node. Older firmware versions cannot recognize the new CPUs and this would result in a non-bootable node.



Note


You can use First Generation and Second Generation Intel Xeon Scalable processors in the same cluster. Do not mix First Generation and Second Generation processors within the same node.


The minimum software and firmware versions required for this node to support Second Generation Intel Xeon Scalable Processors are as follows:

Table 5. Minimum Requirements For Second Generation Intel Xeon Scalable Processors

  • Node Cisco IMC/BIOS: 4.0(4d)

  • Cisco UCS Manager: 4.0(4d)

  • Cisco HyperFlex Data Platform: 4.0(1b)

Do one of the following actions:

  • If your server's firmware and Cisco UCS Manager software are already at the required minimums shown above (or later), you can replace the CPU hardware by using the procedure in this section.

  • If your server's firmware and Cisco UCS Manager software are earlier than the required levels, upgrade your software. After you upgrade the software, return to this section as directed to replace the CPU hardware.

CPU Configuration Rules

This node has two CPU sockets on the motherboard. Each CPU supports six DIMM channels (12 DIMM slots).

  • The node can operate with one CPU or two identical CPUs installed.


    Note


    Single-CPU configuration is supported only for the HX Edge configuration for CPU SKUs HX-CPU-4114 and above. Single-CPU configuration is not supported for HX-CPU-3106, HX-CPU-4108 or HX-CPU-4110.


  • The minimum configuration is that the server must have at least CPU 1 installed. Install CPU 1 first, and then CPU 2.

  • The following restrictions apply when using a single-CPU configuration:

    • The maximum number of DIMMs is 12 (only CPU 1 channels A, B, C, D, E, F).

    • PCIe riser 2 (slot 2) is unavailable.

    • NVMe drives are unavailable (they require PCIe riser 2).

  • The maximum combined memory allowed in the 12 DIMM slots controlled by any one CPU is 768 GB. To populate the 12 DIMM slots with more than 768 GB of combined memory, you must use a high-memory CPU that has a PID that ends with an "M", for example, UCS-CPU-6134M.

Tools Required For CPU Replacement

You need the following tools and equipment for this procedure:

  • T-30 Torx driver—Supplied with replacement CPU.

  • #1 flat-head screwdriver—Supplied with replacement CPU.

  • CPU assembly tool—Supplied with replacement CPU. Orderable separately as Cisco PID UCS-CPUAT=.

  • Heatsink cleaning kit—Supplied with replacement CPU. Orderable separately as Cisco PID UCSX-HSCK=.

    One cleaning kit can clean up to four CPUs.

  • Thermal interface material (TIM)—Syringe supplied with replacement CPU. Use only if you are reusing your existing heatsink (new heatsinks have a pre-applied pad of TIM). Orderable separately as Cisco PID UCS-CPU-TIM=.

    One TIM kit covers one CPU.

Replacing a CPU and Heatsink


Caution


CPUs and their sockets are fragile and must be handled with extreme care to avoid damaging pins. The CPUs must be installed with heatsinks and thermal interface material to ensure cooling. Failure to install a CPU correctly might result in damage to the server.


Procedure

Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Remove the existing CPU/heatsink assembly from the node:

  1. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the node from the rack.
  2. Remove the top cover from the node as described in Removing the Node Top Cover.

  3. Use the T-30 Torx driver that is supplied with the replacement CPU to loosen the four captive nuts that secure the assembly to the motherboard standoffs.

    Note

     

    Alternate loosening the heatsink nuts evenly so that the heatsink remains level as it is raised. Loosen the heatsink nuts in the order shown on the heatsink label: 4, 3, 2, 1.

  4. Lift straight up on the CPU/heatsink assembly and set it heatsink-down on an antistatic surface.

    Figure 11. Removing the CPU/Heatsink Assembly

    1. Heatsink
    2. Heatsink captive nuts (two on each side)
    3. CPU carrier (below heatsink in this view)
    4. CPU socket on motherboard
    5. T-30 Torx driver

Step 6

Separate the heatsink from the CPU assembly (the CPU assembly includes the CPU and the CPU carrier):

  1. Place the heatsink with CPU assembly so that it is oriented upside-down as shown below.

    Note the thermal-interface material (TIM) breaker location. TIM BREAKER is stamped on the CPU carrier next to a small slot.

    Figure 12. Separating the CPU Assembly From the Heatsink

    1. CPU carrier
    2. CPU
    3. TIM BREAKER slot in CPU carrier
    4. CPU-carrier inner-latch nearest to the TIM breaker slot
    5. #1 flat-head screwdriver inserted into TIM breaker slot

  2. Pinch inward on the CPU-carrier inner-latch that is nearest the TIM breaker slot and then push up to disengage the clip from its slot in the heatsink corner.

  3. Insert the blade of a #1 flat-head screwdriver into the slot marked TIM BREAKER.

    Caution

     

    In the following step, do not pry on the CPU surface. Use gentle rotation to lift on the plastic surface of the CPU carrier at the TIM breaker slot. Use caution to avoid damaging the heatsink surface.

  4. Gently rotate the screwdriver to lift up on the CPU until the TIM on the heatsink separates from the CPU.

    Note

     

    Do not allow the screwdriver tip to touch or damage the green CPU substrate.

  5. Pinch the CPU-carrier inner-latch at the corner opposite the TIM breaker and push up to disengage the clip from its slot in the heatsink corner.

  6. On the remaining two corners of the CPU carrier, gently pry outward on the outer-latches and then lift the CPU-assembly from the heatsink.

    Note

     
    Handle the CPU-assembly by the plastic carrier only. Do not touch the CPU surface. Do not separate the CPU from the carrier.

Step 7

The new CPU assembly is shipped on a CPU assembly tool. Take the new CPU assembly and CPU assembly tool out of the carton.

If the CPU assembly and CPU assembly tool become separated, note the alignment features shown below for correct orientation. The pin 1 triangle on the CPU carrier must be aligned with the angled corner on the CPU assembly tool.

Caution

 

CPUs and their sockets are fragile and must be handled with extreme care to avoid damaging pins.

Figure 13. CPU Assembly Tool, CPU Assembly, and Heatsink Alignment Features

1. CPU assembly tool
2. CPU assembly (CPU in plastic carrier)
3. Heatsink
4. Angled corner on heatsink (pin 1 alignment feature)
5. Triangle cut into carrier (pin 1 alignment feature)
6. Angled corner on CPU assembly tool (pin 1 alignment feature)

Step 8

Apply new TIM to the heatsink:

Note

 
The heatsink must have new TIM on the heatsink-to-CPU surface to ensure proper cooling and performance.
  • If you are installing a new heatsink, it is shipped with a pre-applied pad of TIM. Skip to step 9.

  • If you are reusing a heatsink, you must remove the old TIM from the heatsink and then apply new TIM to the CPU surface from the supplied syringe. Continue with substep 1 below.

  1. Apply the cleaning solution that is included with the heatsink cleaning kit (UCSX-HSCK=) to the old TIM on the heatsink and let it soak for at least 15 seconds.

  2. Wipe all of the TIM off the heatsink using the soft cloth that is included with the heatsink cleaning kit. Be careful to avoid scratching the heatsink surface.

  3. Using the syringe of TIM provided with the new CPU (UCS-CPU-TIM=), apply 1.5 cubic centimeters (1.5 ml) of thermal interface material to the top of the CPU. Use the pattern shown below to ensure even coverage.

    Figure 14. Thermal Interface Material Application Pattern

    Caution

     

    Use only the correct heatsink for your CPUs to ensure proper cooling. There are two different heatsinks: UCSC-HS-C220M5= for standard-performance CPUs 150 W and less; UCSC-HS2-C220M5= for high-performance CPUs above 150 W. Note the wattage described on the heatsink label.

Step 9

With the CPU assembly on the CPU assembly tool, set the heatsink onto the CPU assembly. Note the pin 1 alignment features for correct orientation. Push down gently until you hear the corner clips of the CPU carrier click onto the heatsink corners.

Caution

 

In the following step, use extreme care to avoid touching or damaging the CPU contacts or the CPU socket pins.

Step 10

Install the CPU/heatsink assembly to the server:

  1. Lift the heatsink with attached CPU assembly from the CPU assembly tool.

  2. Align the CPU with heatsink over the CPU socket on the motherboard, as shown below.

    Note the alignment features. The pin 1 angled corner on the heatsink must align with the pin 1 angled corner on the CPU socket. The CPU-socket posts must align with the guide-holes in the assembly.

    Figure 15. Installing the Heatsink/CPU Assembly to the CPU Socket

    1. Guide hole in assembly (two)
    2. CPU socket alignment post (two)
    3. CPU socket leaf spring
    4. Angled corner on heatsink (pin 1 alignment feature)
    5. Angled corner on socket (pin 1 alignment feature)

  3. Set the heatsink with CPU assembly down onto the CPU socket.

  4. Use the T-30 Torx driver that is supplied with the replacement CPU to tighten the four captive nuts that secure the heatsink to the motherboard standoffs.

    Caution

     
    Alternate tightening the heatsink nuts evenly so that the heatsink remains level while it is lowered. Tighten the heatsink nuts in the order shown on the heatsink label: 1, 2, 3, 4. The captive nuts must be fully tightened so that the leaf springs on the CPU socket lie flat.
  5. Replace the top cover to the node.

  6. Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.

Step 11

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 12

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 13

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.
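
As an optional check after the node rejoins the cluster, you can confirm from the ESXi shell that both CPU packages and all cores are visible. This is a hedged example and is not part of the official procedure:

      # esxcli hardware cpu global get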


Additional CPU-Related Parts to Order with RMA Replacement CPUs

When a return material authorization (RMA) of the CPU is done on a Cisco UCS C-Series server, additional parts might not be included with the CPU spare. The TAC engineer might need to add the additional parts to the RMA to help ensure a successful replacement.


Note


The following items apply to CPU replacement scenarios. If you are replacing a system chassis and moving existing CPUs to the new chassis, you do not have to separate the heatsink from the CPU. See Additional CPU-Related Parts to Order with RMA Replacement System Chassis.


  • Scenario 1—You are reusing the existing heatsinks:

    • Heatsink cleaning kit (UCSX-HSCK=)

      One cleaning kit can clean up to four CPUs.

    • Thermal interface material (TIM) kit for M5 servers (UCS-CPU-TIM=)

      One TIM kit covers one CPU.

  • Scenario 2—You are replacing the existing heatsinks:


    Caution


    Use only the correct heatsink for your CPUs to ensure proper cooling. There are two different heatsinks: UCSC-HS-C220M5= for CPUs 150 W and less; UCSC-HS2-C220M5= for CPUs above 150 W.
    • Heatsink: UCSC-HS-C220M5= for CPUs 150 W and less; UCSC-HS2-C220M5= for CPUs above 150 W

      New heatsinks have a pre-applied pad of TIM.

    • Heatsink cleaning kit (UCSX-HSCK=)

      One cleaning kit can clean up to four CPUs.

  • Scenario 3—You have a damaged CPU carrier (the plastic frame around the CPU)

    • CPU carrier (UCS-M5-CPU-CAR=)

    • #1 flat-head screwdriver (for separating the CPU from the heatsink)

    • Heatsink cleaning kit (UCSX-HSCK=)

      One cleaning kit can clean up to four CPUs.

    • Thermal interface material (TIM) kit for M5 servers (UCS-CPU-TIM=)

      One TIM kit covers one CPU.

A CPU heatsink cleaning kit is good for up to four CPU and heatsink cleanings. The cleaning kit contains two bottles of solution, one to clean the CPU and heatsink of old TIM and the other to prepare the surface of the heatsink.

New heatsink spares come with a pre-applied pad of TIM. It is important to clean any old TIM off of the CPU surface prior to installing the heatsinks. Therefore, even when you are ordering new heatsinks, you must order the heatsink cleaning kit.

Additional CPU-Related Parts to Order with RMA Replacement System Chassis

When a return material authorization (RMA) of the system chassis is done on a Cisco UCS C-Series server, you move existing CPUs to the new chassis.


Note


Unlike previous generation CPUs, the M5 server CPUs do not require you to separate the heatsink from the CPU when you move the CPU-heatsink assembly. Therefore, no additional heatsink cleaning kit or thermal-interface material items are required.


  • The only tool required for moving a CPU/heatsink assembly is a T-30 Torx driver.

Moving an M5 Generation CPU

Tool required for this procedure: T-30 Torx driver


Caution


When you receive a replacement server for an RMA, it includes dust covers on all CPU sockets. These covers protect the socket pins from damage during shipping. You must transfer these covers to the system that you are returning, as described in this procedure.


Procedure

Step 1

When moving an M5 CPU to a new server, you do not have to separate the heatsink from the CPU. Perform the following steps:

  1. Use a T-30 Torx driver to loosen the four captive nuts that secure the assembly to the board standoffs.

    Note

     
    Alternate loosening the heatsink nuts evenly so that the heatsink remains level as it is raised. Loosen the heatsink nuts in the order shown on the heatsink label: 4, 3, 2, 1.
  2. Lift straight up on the CPU/heatsink assembly to remove it from the board.

  3. Set the CPUs with heatsinks aside on an anti-static surface.

    Figure 16. Removing the CPU/Heatsink Assembly

    1. Heatsink
    2. Heatsink captive nuts (two on each side)
    3. CPU carrier (below heatsink in this view)
    4. CPU socket on motherboard
    5. T-30 Torx driver

Step 2

Transfer the CPU socket covers from the new system to the system that you are returning:

  1. Remove the socket covers from the replacement system. Grasp the two recessed finger-grip areas marked "REMOVE" and lift straight up.

    Note

     

    Keep a firm grasp on the finger-grip areas at both ends of the cover. Do not make contact with the CPU socket pins.

    Figure 17. Removing a CPU Socket Dust Cover

    1. Finger-grip areas marked "REMOVE"

  2. With the wording on the dust cover facing up, set it in place over the CPU socket. Make sure that all alignment posts on the socket plate align with the cutouts on the cover.

    Caution

     

    In the next step, do not press down anywhere on the cover except the two points described. Pressing elsewhere might damage the socket pins.

  3. Press down on the two circular markings next to the word "INSTALL" that are closest to the two threaded posts (see the following figure). Press until you feel and hear a click.

    Note

     

    You must press until you feel and hear a click to ensure that the dust covers do not come loose during shipping.

    Figure 18. Installing a CPU Socket Dust Cover

    Press down on the two circular marks next to the word INSTALL.

Step 3

Install the CPUs to the new system:

  1. On the new board, align the assembly over the CPU socket, as shown below.

    Note the alignment features. The pin 1 angled corner on the heatsink must align with the pin 1 angled corner on the CPU socket. The CPU-socket posts must align with the guide-holes in the assembly.

    Figure 19. Installing the Heatsink/CPU Assembly to the CPU Socket

    1. Guide hole in assembly (two)
    2. CPU socket alignment post (two)
    3. CPU socket leaf spring
    4. Angled corner on heatsink (pin 1 alignment feature)
    5. Angled corner on socket (pin 1 alignment feature)

  2. On the new board, set the heatsink with CPU assembly down onto the CPU socket.

  3. Use a T-30 Torx driver to tighten the four captive nuts that secure the heatsink to the board standoffs.

    Note

     

    Alternate tightening the heatsink nuts evenly so that the heatsink remains level while it is lowered. Tighten the heatsink nuts in the order shown on the heatsink label: 1, 2, 3, 4. The captive nuts must be fully tightened so that the leaf springs on the CPU socket lie flat.


Replacing a Mini-Storage Module or M.2 Boot Drive

The mini-storage module plugs into a motherboard socket to provide additional M.2 SSD internal storage. This node includes a SATA M.2 SSD that can be used as a boot drive.

Replacing a Mini-Storage Module Carrier

This topic describes how to remove and replace a mini-storage module carrier.

Procedure

Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 6

Remove the top cover from the server as described in Removing the Node Top Cover.

Step 7

Remove a carrier from its socket:

  1. Locate the mini-storage module carrier in its socket just in front of power supply 1.

  2. At each end of the carrier, push outward on the clip that secures the carrier.

  3. Lift both ends of the carrier to disengage it from the socket on the motherboard.

  4. Set the carrier on an anti-static surface.

Step 8

Install a carrier to its socket:

  1. Position the carrier over the socket, with the carrier's connector facing down and at the same end as the motherboard socket. The two alignment pegs must match the two holes on the carrier.

  2. Gently push down the socket end of the carrier so that the two pegs go through the two holes on the carrier.

  3. Push down on the carrier so that the securing clips click over it at both ends.

Step 9

Replace the top cover to the server.

Step 10

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 20. Mini-Storage Module Carrier

1 - Location of socket on motherboard

2 - Alignment pegs

3 - Securing clips

Step 11

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 12

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 13

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


Replacing an M.2 SSD in a Mini-Storage Carrier For M.2

This topic describes how to remove and replace an M.2 SSD in a mini-storage carrier for M.2 (UCS-MSTOR-M2).

Procedure

Step 1

Put the node in HX Maintenance Mode, shut down the node, decommission it, and then remove the mini-storage module carrier from the server as described in Replacing a Mini-Storage Module Carrier.

Step 2

Remove an M.2 SSD:

  1. Use a #1 Phillips-head screwdriver to remove the single screw that secures the M.2 SSD to the carrier.

  2. Remove the M.2 SSD from its socket on the carrier.

Step 3

Install a new M.2 SSD:

  1. Angle the M.2 SSD downward and insert the connector-end into the socket on the carrier. The M.2 SSD's label must face up.

  2. Press the M.2 SSD flat against the carrier.

  3. Install the single screw that secures the end of the M.2 SSD to the carrier.

Step 4

Install the mini-storage module carrier back into the node and then recommission the node, reassociate its profile, and exit HX Maintenance Mode as described in Replacing a Mini-Storage Module Carrier.


Replacing the RTC Battery


Warning


There is danger of explosion if the battery is replaced incorrectly. Replace the battery only with the same or equivalent type recommended by the manufacturer. Dispose of used batteries according to the manufacturer’s instructions.

[Statement 1015]


The real-time clock (RTC) battery retains system settings when the node is disconnected from power. The battery type is CR2032. Cisco supports the industry-standard CR2032 battery, which can be ordered from Cisco (PID N20-MBLIBATT) or purchased from most electronic stores.

Procedure


Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Remove the RTC battery:

  1. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the node from the rack.
  2. Remove the top cover from the server as described in Removing the Node Top Cover.

  3. Locate the RTC battery. The vertical socket is directly in front of PCIe riser 2.

  4. Remove the battery from the socket on the motherboard. Gently pry the securing clip on one side open to provide clearance, then lift straight up on the battery.

Step 6

Install a new RTC battery:

  1. Insert the battery into its holder and press down until it clicks in place under the clip.

    Note

     
    The flat, positive side of the battery marked “3V+” should face left as you face the server front.
  2. Replace the top cover to the node.

  3. Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.

Figure 21. RTC Battery Location on Motherboard

1 - RTC battery in vertical socket

Step 7

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 8

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 9

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


Replacing Power Supplies

The node can use one or two power supplies. When two power supplies are installed, they are redundant as 1+1.

This section includes a procedure for replacing AC power supply units.

Replacing AC Power Supplies


Note


You do not have to power off the server to replace a power supply when two power supplies are installed because they are redundant as 1+1.

Note


Do not mix power supply types or wattages in the server. Both power supplies must be identical.
Procedure

Step 1

Remove the power supply that you are replacing:

  1. Perform one of the following actions:

    • If the node has two power supplies, you do not have to shut down the node. Continue with the next step.

    • If the node has only one power supply, shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

Step 2

Remove the power cord from the power supply that you are replacing.

Step 3

Grasp the power supply handle while pinching the release lever toward the handle.

Step 4

Pull the power supply out of the bay.

Step 5

Install a new power supply:

  1. Grasp the power supply handle and insert the new power supply into the empty bay.

  2. Push the power supply into the bay until the release lever locks.

  3. Connect the power cord to the new power supply.

Step 6

Only if you shut down the node, perform these steps:

  1. Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

  2. Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

  3. After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Figure 22. Replacing AC Power Supplies

1 - Power supply release lever

2 - Power supply handle


Replacing DC Power Supplies


Warning


A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.

Statement 1022



Warning


This product requires short-circuit (overcurrent) protection, to be provided as part of the building installation. Install only in accordance with national and local wiring regulations.

Statement 1045



Warning


Installation of the equipment must comply with local and national electrical codes.

Statement 1074



Note


If you are replacing DC power supplies in a server with power supply redundancy (two power supplies), you do not have to power off the server to replace a power supply because they are redundant as 1+1.

Note


Do not mix power supply types or wattages in the server. Both power supplies must be identical.
Procedure

Step 1

Remove the power supply that you are replacing:

  1. Perform one of the following actions:

    • If the node has two power supplies, you do not have to shut down the node. Continue with the next step.

    • If the node has only one power supply, shut down and remove power from the node as described in Shutting Down and Removing Power From the Node.

Step 2

Remove the power cord from the power supply that you are replacing. Lift the securing clip slightly and then pull the connector from the socket on the power supply.

Step 3

Grasp the power supply handle while pinching the release lever toward the handle.

Step 4

Pull the power supply out of the bay.

Step 5

Install a new power supply:

  1. Grasp the power supply handle and insert the new power supply into the empty bay.

  2. Push the power supply into the bay until the release lever locks.

  3. Connect the power cord to the new power supply. Push the connector into the socket until the securing clip clicks.

Step 6

Only if you shut down the node, perform these steps:

  1. Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

  2. Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

  3. After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Figure 23. Replacing DC Power Supplies

1 - Keyed cable connector (CAB-48DC-40A-8AWG)

2 - Keyed DC input socket

3 - PSU status LED


Grounding for DC Power Supplies

AC power supplies have internal grounding and so no additional grounding is required when the supported AC power cords are used.

When using a DC power supply, additional grounding of the server chassis to the earth ground of the rack is available. Two threaded holes for use with your dual-hole grounding lug and grounding wire are supplied on the chassis rear panel.


Note


The grounding points on the chassis are sized for M5 screws. You must provide your own screws, grounding lug, and grounding wire. The grounding lug must be a dual-hole lug that fits M5 screws. The grounding cable that you provide must be 14 AWG (2 mm), minimum 60° C wire, or as permitted by the local code.

Replacing a PCIe Card


Note


Cisco supports all PCIe cards qualified and sold by Cisco. PCIe cards not qualified or sold by Cisco are the responsibility of the customer. Although Cisco will always stand behind and support the nodes, customers using standard, off-the-shelf, third-party cards must go to the third-party card vendor for support if any issue with that particular card occurs.

PCIe Slot Specifications

The node contains two PCIe slots on one riser assembly for horizontal installation of PCIe cards. Both slots support the NCSI protocol and 12V standby power.

Figure 24. Rear Panel, Showing PCIe Slot Numbering

The following tables describe the specifications for the slots.

Table 6. PCIe Riser 1/Slot 1

Slot number: 1
Electrical lane width: Gen-3 x16
Connector length: x24 connector
Maximum card length: ¾ length
Card height (rear panel opening): Full-height
NCSI support: Yes
Micro SD card slot: One socket for Micro SD card

Table 7. PCIe Riser 2/Slot 2

Slot number: 2
Electrical lane width: Gen-3 x16
Connector length: x24 connector
Maximum card length: ½ length
Card height (rear panel opening): ½ height
NCSI support: Yes
PCIe cable connector for front-panel NVMe SSDs: Gen-3 x8. The other end of the cable connects to the front drive backplane to support front-panel NVMe SSDs.

Replacing a PCIe Card


Note


The HBA controller card installs to a separate mRAID riser. See Replacing a SAS Storage Controller Card (HBA).

Note


If the card you are replacing is a Cisco VIC 1455 (HX-PCIE-C25Q-04), note that this card requires Cisco HX 4.0(1a) or later.


Procedure

Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Remove an existing PCIe card (or a blank filler panel) from the PCIe riser:

  1. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  2. Remove the top cover from the node as described in Removing the Node Top Cover.

  3. Remove any cables from the ports of the PCIe card that you are replacing.

  4. Use two hands to grasp the external riser handle and the blue area at the front of the riser.

  5. Lift straight up to disengage the riser's connectors from the two sockets on the motherboard. Set the riser upside-down on an antistatic surface.

  6. Open the hinged plastic retainer that secures the rear-panel tab of the card.

  7. Pull evenly on both ends of the PCIe card to remove it from the socket on the PCIe riser.

    If the riser has no card, remove the blanking panel from the rear opening of the riser.

Step 6

Install a new PCIe card:

  1. With the hinged tab retainer open, align the new PCIe card with the empty socket on the PCIe riser.

    PCIe riser 1/slot 1 has a long-card guide at the front end of the riser. Use the slot in the long-card guide to help support a full-length card.

  2. Push down evenly on both ends of the card until it is fully seated in the socket.

  3. Ensure that the card’s rear panel tab sits flat against the riser rear-panel opening and then close the hinged tab retainer over the card’s rear-panel tab.

    Figure 25. PCIe Riser Assembly

    1 - PCIe slot 1 rear-panel opening

    2 - External riser handle

    3 - PCIe slot 2 rear-panel opening

    4 - Hinged card retainer (one per slot)

    5 - PCIe connector for cable that supports front-panel NVMe SSDs

  4. Position the PCIe riser over its two sockets on the motherboard and over the two chassis alignment channels.

    Figure 26. PCIe Riser Alignment Features

    1 - Blue riser handle

    2 - Riser alignment features in chassis

  5. Carefully push down on both ends of the PCIe riser to fully engage its two connectors with the two sockets on the motherboard.

  6. Replace the top cover to the node.

  7. Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.

Step 7

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 8

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 9

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


Replacing an mLOM Card

The node supports a modular LOM (mLOM) card to provide additional rear-panel connectivity, such as a Cisco VIC adapter. The horizontal mLOM socket is on the motherboard, under the mRAID riser.

The mLOM socket provides a Gen-3 x16 PCIe lane. The socket remains powered when the node is in 12 V standby power mode, and it supports the Network Controller Sideband Interface (NCSI) protocol.


Note


If the card you are replacing is a Cisco VIC 1457 (HX-MLOM-C25Q-04), note that this card requires Cisco HX 4.0(1a) or later.


Procedure


Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Remove any existing mLOM card (or a blanking panel):

  1. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the node from the rack.
  2. Remove the top cover from the server as described in Removing the Node Top Cover.

  3. Remove the mRAID riser to provide access to the mLOM socket below the riser.

    To remove the mRAID riser, use both hands to grasp the external blue handle on the rear and the blue finger-grip on the front. Lift straight up.

    You do not have to disconnect cables from any HBA card that is installed in the riser. Carefully move the riser aside only far enough to provide clearance.

  4. Loosen the single captive thumbscrew that secures the mLOM card to the threaded standoff on the chassis floor.

  5. Slide the mLOM card horizontally to free it from the socket, then lift it out of the node.

Step 6

Install a new mLOM card:

  1. Set the mLOM card on the chassis floor so that its connector is aligned with the motherboard socket.

  2. Push the card horizontally to fully engage the card's edge connector with the socket.

  3. Tighten the captive thumbscrew to secure the card to the standoff on the chassis floor.

  4. Return the mRAID riser to its socket.

    Carefully align the mRAID riser's edge connector with the motherboard socket at the same time you align the two channels on the riser with the two pegs on the inner chassis wall. Press down evenly on both ends of the riser to fully engage its connector with the motherboard socket.

  5. Replace the top cover to the node.

  6. Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.

Figure 27. Location of the mLOM Card Socket Below the mRAID Riser

1 - Position of horizontal mLOM card socket

2 - Position of mLOM card thumbscrew

Step 7

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 8

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 9

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


Replacing a SAS Storage Controller Card (HBA)

For hardware-based storage control, the node can use a SAS HBA that plugs into a horizontal socket on a dedicated mRAID riser (internal riser 3).

Storage Controller Card Firmware Compatibility

Firmware on the storage controller HBA must be verified for compatibility with the current Cisco IMC and BIOS versions that are installed on the node. If not compatible, upgrade or downgrade the storage controller firmware using the Host Upgrade Utility (HUU) for your firmware release to bring it to a compatible level.

See the HUU guide for your Cisco IMC release for instructions on downloading and using the utility to bring node components to compatible levels: HUU Guides.
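If the Cisco IMC on the node has the Redfish API enabled, you can list the firmware versions that are currently installed before deciding whether an HUU upgrade or downgrade is needed. The following Python sketch is illustrative only and is not part of the HUU procedure; the management address and credentials are placeholders, Redfish support and resource names can vary by Cisco IMC release, and the HUU remains the supported tool for changing firmware levels.

# Sketch: list installed firmware versions over the standard Redfish API.
# The address and credentials are placeholders; replace them with your own values.
import requests
from requests.auth import HTTPBasicAuth

CIMC = "https://cimc.example.com"          # hypothetical Cisco IMC address
AUTH = HTTPBasicAuth("admin", "password")  # replace with real credentials

session = requests.Session()
session.verify = False   # Cisco IMC commonly presents a self-signed certificate

inventory = session.get(CIMC + "/redfish/v1/UpdateService/FirmwareInventory",
                        auth=AUTH, timeout=30).json()
for member in inventory.get("Members", []):
    item = session.get(CIMC + member["@odata.id"], auth=AUTH, timeout=30).json()
    print(item.get("Name", "unknown"), item.get("Version", "unknown"))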

Replacing a SAS Storage Controller Card (HBA)

Procedure

Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Prepare the node for component installation:

  1. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the node from the rack.
  2. Remove the top cover from the server as described in Removing the Node Top Cover.

Step 6

Remove the mRAID riser (riser 3) from the server:

  1. Using both hands, grasp the external blue handle on the rear of the riser and the blue finger-grip on the front end of the riser.

  2. Lift the riser straight up to disengage it from the motherboard socket.

  3. Set the riser upside down on an antistatic surface.

Step 7

Remove any existing card from the riser:

  1. Disconnect cables from the existing card.

  2. Open the blue card-ejector lever on the back side of the card to eject it from the socket on the riser.

  3. Pull the card from the riser and set it aside.

Step 8

Install a new storage controller card to the riser:

  1. With the riser upside down, set the card on the riser.

  2. Push on both corners of the card to seat its connector in the riser socket.

  3. Close the card-ejector lever on the card to lock it into the riser.

  4. Connect cables to the installed card.

Step 9

Return the riser to the node:

  1. Align the connector on the riser with the socket on the motherboard. At the same time, align the two slots on the back side of the bracket with the two pegs on the inner chassis wall.

  2. Push down gently to engage the riser connector with the motherboard socket. The metal riser bracket must also engage the two pegs that secure it to the chassis wall.

Step 10

Replace the top cover to the node.

Step 11

Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.

Figure 28. mRAID Riser (Internal Riser 3) Location

1 - External blue handle

2 - Two pegs on inner chassis wall

3 - Card-ejector lever

Step 12

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 13

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 14

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


Replacing a Micro SD Card

There is one socket for a Micro SD card on the top of PCIe riser 1.


Caution


To avoid data loss, do not hot-swap the Micro SD card while it is in use. The activity LED turns amber while the Micro SD card is updating or deleting data.

Procedure


Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Remove an existing Micro SD card:

  1. Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the node from the rack.
  2. Remove the top cover from the node as described in Removing the Node Top Cover.

  3. Locate the Micro SD card. The socket is on the top of PCIe riser 1, under a flexible plastic cover.

  4. Use your fingertip to push open the retainer on the socket cover far enough to provide access to the Micro SD card, then push down and release the Micro SD card to make it spring up.

  5. Grasp the Micro SD card and lift it from the socket.

Step 6

Install a new Micro SD card:

  1. While holding the retainer on the plastic cover open with your fingertip, align the new Micro SD card with the socket.

  2. Gently push down on the card until it clicks and locks in place in the socket.

  3. Replace the top cover to the node.

  4. Replace the node in the rack, replace cables, and then fully power on the node by pressing the Power button.

Figure 29. Internal Micro SD Card Socket

1 - Location of Micro SD card socket on the top of PCIe riser 1

2 - Micro SD card socket under plastic retainer

3 - Plastic retainer (push aside to access socket)

4 - Micro SD activity LED

Step 7

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 8

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 9

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.

Step 10

For HX Edge or CIMC-managed standalone servers, initialize the Micro SD card as described in Resetting Flex Util Card Configuration.


Service Headers and Jumpers

This node includes two blocks of headers (J38, J39) that you can jumper for certain service and debug functions.

Figure 30. Location of Service Header Blocks J38 and J39

1 - Location of header block J38

2 - J38 pin 1 arrow printed on motherboard

3 - Clear CMOS: J38 pins 9 - 10

4 - Recover BIOS: J38 pins 11 - 12

5 - Clear password: J38 pins 13 - 14

6 - Location of header block J39

7 - J39 pin 1 arrow printed on motherboard

8 - Boot Cisco IMC from alternate image: J39 pins 1 - 2

9 - Reset Cisco IMC password to default: J39 pins 3 - 4

10 - Reset Cisco IMC to defaults: J39 pins 5 - 6

Using the Clear CMOS Header (J38, Pins 9 - 10)

You can use this header to clear the node’s CMOS settings in the case of a system hang. For example, if the node hangs because of incorrect settings and does not boot, use this jumper to invalidate the settings and reboot with defaults.


Caution


Clearing the CMOS removes any customized settings and might result in data loss. Make a note of any necessary customized settings in the BIOS before you use this clear CMOS procedure.
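Before you clear CMOS, one way to capture the current BIOS settings is to read them through the Redfish API, if it is enabled on the Cisco IMC. The following Python sketch assumes Redfish support and uses placeholder address and credentials; if Redfish is not available on your release, record the settings from the BIOS Setup screens instead.

# Sketch: save the current BIOS attributes to a JSON file before clearing CMOS.
# Assumes Redfish is enabled on the Cisco IMC; the address and credentials are placeholders.
import json
import requests

CIMC = "https://cimc.example.com"   # hypothetical Cisco IMC address
AUTH = ("admin", "password")        # replace with real credentials

systems = requests.get(CIMC + "/redfish/v1/Systems", auth=AUTH, verify=False, timeout=30).json()
system_path = systems["Members"][0]["@odata.id"]   # the node exposes a single system
bios = requests.get(CIMC + system_path + "/Bios", auth=AUTH, verify=False, timeout=30).json()

with open("bios-settings-backup.json", "w") as f:
    json.dump(bios.get("Attributes", {}), f, indent=2)
print("Saved", len(bios.get("Attributes", {})), "BIOS attributes")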

Procedure


Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the node from the rack.

Step 6

Remove the top cover from the node as described in Removing the Node Top Cover.

Step 7

Locate header block J38 and pins 9-10.

Step 8

Install a two-pin jumper across pins 9 and 10.

Step 9

Reinstall the top cover and reconnect AC power cords to the node. The node powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 10

Return the node to main power mode by pressing the Power button on the front panel. The node is in main power mode when the Power LED is green.

Note

 
You must allow the entire node to reboot to main power mode to complete the reset. The state of the jumper cannot be determined without the host CPU running.

Step 11

Press the Power button to shut down the node to standby power mode, and then remove AC power cords from the node to remove all power.

Step 12

Remove the top cover from the node.

Step 13

Remove the jumper that you installed.

Note

 
If you do not remove the jumper, the CMOS settings are reset to the defaults every time you power-cycle the node.

Step 14

Replace the top cover, replace the node in the rack, replace power cords and any other cables, and then power on the node by pressing the Power button.

Step 15

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 16

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 17

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


Using the BIOS Recovery Header (J38, Pins 11 - 12)

Depending on the stage at which the BIOS became corrupted, you might see different behavior.

  • If the BIOS BootBlock is corrupted, you might see the system get stuck on the following message:

    Initializing and configuring memory/hardware
  • If it is a non-BootBlock corruption, a message similar to the following is displayed:

    ****BIOS FLASH IMAGE CORRUPTED****
    Flash a valid BIOS capsule file using Cisco IMC WebGUI or CLI interface.
    IF Cisco IMC INTERFACE IS NOT AVAILABLE, FOLLOW THE STEPS MENTIONED BELOW.
    1. Connect the USB stick with bios.cap file in root folder.
    2. Reset the host.
    IF THESE STEPS DO NOT RECOVER THE BIOS
    1. Power off the system.
    2. Mount recovery jumper.
    3. Connect the USB stick with bios.cap file in root folder.
    4. Power on the system.
    Wait for a few seconds if already plugged in the USB stick.
    REFER TO SYSTEM MANUAL FOR ANY ISSUES.

Note


As indicated by the message shown above, there are two procedures for recovering the BIOS. Try procedure 1 first. If that procedure does not recover the BIOS, use procedure 2.

Procedure 1: Reboot With bios.cap File

Procedure

Step 1

Download the BIOS update package and extract it to a temporary location.

Step 2

Copy the contents of the extracted recovery folder to the root directory of a USB drive. The recovery folder contains the bios.cap file that is required in this procedure.

Note

 
The bios.cap file must be in the root directory of the USB drive. Do not rename this file. The USB drive must be formatted with either the FAT16 or FAT32 file system.
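Optionally, before you continue with the next step, you can confirm from your workstation that the file is named bios.cap and is in the root directory of the USB drive. The following Python sketch assumes the drive is already mounted; the mount path is a placeholder that you replace with the drive letter or mount point on your workstation.

# Sketch: confirm that bios.cap is present in the root directory of the USB drive.
# The mount path is a placeholder (for example, "E:\\" on Windows or "/media/usb" on Linux).
import sys
from pathlib import Path

usb_root = Path(sys.argv[1] if len(sys.argv) > 1 else "/media/usb")

cap = usb_root / "bios.cap"
if cap.is_file():
    print("OK:", cap, "found,", cap.stat().st_size, "bytes")
else:
    print("Missing: no bios.cap in the root of", usb_root, "- recopy the recovery folder contents")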

Step 3

Insert the USB drive into a USB port on the node.

Step 4

Reboot the node.

Step 5

Return the node to main power mode by pressing the Power button on the front panel.

The node boots with the updated BIOS boot block. When the BIOS detects a valid bios.cap file on the USB drive, it displays this message:

Found a valid recovery file...Transferring to Cisco IMC
System would flash the BIOS image now...
System would restart with recovered image after a few seconds...

Step 6

Wait for the node to complete the BIOS update, and then remove the USB drive from the node.

Note

 
During the BIOS update, Cisco IMC shuts down the node and the screen goes blank for about 10 minutes. Do not unplug the power cords during this update. Cisco IMC powers on the node after the update is complete.

Procedure 2: Use BIOS Recovery Header and bios.cap File

Procedure

Step 1

Download the BIOS update package and extract it to a temporary location.

Step 2

Copy the contents of the extracted recovery folder to the root directory of a USB drive. The recovery folder contains the bios.cap file that is required in this procedure.

Note

 
The bios.cap file must be in the root directory of the USB drive. Do not rename this file. The USB drive must be formatted with either the FAT16 or FAT32 file system.

Step 3

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 4

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 5

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 6

Disconnect all power cables from all power supplies.

Step 7

Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the node from the rack.

Step 8

Remove the top cover from the node as described in Removing the Node Top Cover.

Step 9

Locate header block J38 and pins 11-12.

Step 10

Install a two-pin jumper across pins 11 and 12.

Step 11

Reconnect AC power cords to the node. The node powers up to standby power mode.

Step 12

Insert the USB thumb drive that you prepared in Step 2 into a USB port on the node.

Step 13

Return the node to main power mode by pressing the Power button on the front panel.

The node boots with the updated BIOS boot block. When the BIOS detects a valid bios.cap file on the USB drive, it displays this message:

Found a valid recovery file...Transferring to Cisco IMC
System would flash the BIOS image now...
System would restart with recovered image after a few seconds...

Step 14

Wait for the node to complete the BIOS update, and then remove the USB drive from the node.

Note

 
During the BIOS update, Cisco IMC shuts down the node and the screen goes blank for about 10 minutes. Do not unplug the power cords during this update. Cisco IMC powers on the node after the update is complete.

Step 15

After the node has fully booted, power off the node again and disconnect all power cords.

Step 16

Remove the jumper that you installed.

Note

 
If you do not remove the jumper, you see the prompt “Please remove the recovery jumper” after the recovery completes.

Step 17

Replace the top cover, replace the node in the rack, replace power cords and any other cables, and then power on the node by pressing the Power button.

Step 18

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 19

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 20

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


Using the Clear Password Header (J38, Pins 13 - 14)

You can use this header to clear the administrator password.

Procedure


Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 6

Remove the top cover from the server as described in Removing the Node Top Cover.

Step 7

Locate header block J38 and pins 13-14.

Step 8

Install a two-pin jumper across pins 13 and 14.

Step 9

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 10

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note

 
You must allow the entire server to reboot to main power mode to complete the reset. The state of the jumper cannot be determined without the host CPU running.

Step 11

Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 12

Remove the top cover from the server.

Step 13

Remove the jumper that you installed.

Note

 
If you do not remove the jumper, the password is cleared every time you power-cycle the server.

Step 14

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.

Step 15

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 16

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 17

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


Using the Boot Alternate Cisco IMC Image Header (J39, Pins 1 - 2)

You can use this Cisco IMC debug header to force the system to boot from an alternate Cisco IMC image.

Procedure


Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Slide the node out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the node from the rack.

Step 6

Remove the top cover from the node as described in Removing the Node Top Cover.

Step 7

Locate header block J39, pins 1-2.

Step 8

Install a two-pin jumper across J39 pins 1 and 2.

Step 9

Reinstall the top cover and reconnect AC power cords to the node. The node powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 10

Return the node to main power mode by pressing the Power button on the front panel. The node is in main power mode when the Power LED is green.

Note

 

When you next log in to Cisco IMC, you see a message similar to the following:

'Boot from alternate image' debug functionality is enabled.  
CIMC will boot from alternate image on next reboot or input power cycle.

Step 11

Press the Power button to shut down the node to standby power mode, and then remove AC power cords from the node to remove all power.

Step 12

Remove the top cover from the node.

Step 13

Remove the jumper that you installed.

Note

 
If you do not remove the jumper, the node will boot from an alternate Cisco IMC image every time that you power cycle the node or reboot Cisco IMC.

Step 14

Replace the top cover, replace the node in the rack, replace power cords and any other cables, and then power on the node by pressing the Power button.

Step 15

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 16

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 17

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


Using the Reset Cisco IMC Password to Default Header (J39, Pins 3 - 4)

You can use this Cisco IMC debug header to force the Cisco IMC password back to the default.

Procedure


Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 6

Remove the top cover from the server as described in Removing the Node Top Cover.

Step 7

Locate header block J39, pins 3-4.

Step 8

Install a two-pin jumper across J39 pins 3 and 4.

Step 9

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 10

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note

 

When you next log in to Cisco IMC, you see a message similar to the following:

'Reset to default CIMC password' debug functionality is enabled.  
On input power cycle, CIMC password will be reset to defaults.

Step 11

Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 12

Remove the top cover from the server.

Step 13

Remove the jumper that you installed.

Note

 
If you do not remove the jumper, the server will reset the Cisco IMC password to the default every time that you power cycle the server. The jumper has no effect if you reboot Cisco IMC.

Step 14

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.

Step 15

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 16

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 17

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


Using the Reset Cisco IMC to Defaults Header (J39, Pins 5 - 6)

You can use this Cisco IMC debug header to force the Cisco IMC settings back to the defaults.

Procedure


Step 1

Put the node in Cisco HX Maintenance Mode as described in Shutting Down Using vSphere With HX Maintenance Mode.

Step 2

Shut down the node as described in Shutting Down and Removing Power From the Node.

Step 3

Decommission the node from UCS Manager as described in Decommissioning the Node Using Cisco UCS Manager.

Caution

 

After a node is shut down to standby power, electric current is still present in the node. To completely remove power, you must disconnect all power cords from the power supplies in the node.

Step 4

Disconnect all power cables from all power supplies.

Step 5

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 6

Remove the top cover from the server as described in Removing the Node Top Cover.

Step 7

Locate header block J39, pins 5-6.

Step 8

Install a two-pin jumper across J39 pins 5 and 6.

Step 9

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 10

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note

 

When you next log in to Cisco IMC, you see a message similar to the following:

'CIMC reset to factory defaults' debug functionality is enabled.  
On input power cycle, CIMC will be reset to factory defaults.

Step 11

To remove the jumper, press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 12

Remove the top cover from the server.

Step 13

Remove the jumper that you installed.

Note

 
If you do not remove the jumper, the server will reset the Cisco IMC to the default settings every time that you power cycle the server. The jumper has no effect if you reboot Cisco IMC.

Step 14

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.

Step 15

Recommission the node in UCS Manager as described in Recommissioning the Node Using Cisco UCS Manager.

Step 16

Associate the node with its UCS Manager service profile as described in Associating a Service Profile With an HX Node.

Step 17

After ESXi reboot, exit HX Maintenance mode as described in Exiting HX Maintenance Mode.


Setting Up the Node in Standalone Mode


Note


The HX Series node is always managed in UCS Manager-controlled mode. This section is included only for cases in which a node might need to be put into standalone mode for troubleshooting purposes. Do not use this setup for normal operation of the HX Series node.


Initial Node Setup (Standalone)


Note


This section describes how to power on the node, assign an IP address, and connect to node management when using the node in standalone mode.

Node Default Settings

The node is shipped with these default settings:

  • The NIC mode is Shared LOM EXT.

    Shared LOM EXT mode enables the 1-Gb/10-Gb Ethernet ports and the ports on any installed Cisco virtual interface card (VIC) to access the Cisco Integrated Management Interface (Cisco IMC). If you want to use the 10/100/1000 dedicated management ports to access Cisco IMC, you can connect to the node and change the NIC mode as described in Setting Up the Node With the Cisco IMC Configuration Utility.

  • The NIC redundancy is Active-Active. All Ethernet ports are utilized simultaneously.

  • DHCP is enabled.

  • IPv4 is enabled.

Connection Methods

There are two methods for connecting to the system for initial setup:

  • Local setup—Use this procedure if you want to connect a keyboard and monitor directly to the system for setup. This procedure can use a KVM cable (Cisco PID N20-BKVM) or the ports on the rear of the node.

  • Remote setup—Use this procedure if you want to perform setup through your dedicated management LAN.


    Note


    To configure the system remotely, you must have a DHCP server on the same network as the system. Your DHCP server must be preconfigured with the range of MAC addresses for this node. The MAC address is printed on a label that is on the pull-out asset tag on the front panel. This node has a range of six MAC addresses assigned to the Cisco IMC. The MAC address printed on the label is the beginning of the range of six contiguous MAC addresses.

Connecting to the Node Locally For Standalone Setup

This procedure requires the following equipment:

  • VGA monitor

  • USB keyboard

  • Either the supported Cisco KVM cable (Cisco PID N20-BKVM); or a USB cable and VGA DB-15 cable

Procedure

Step 1

Attach a power cord to each power supply in your node, and then attach each power cord to a grounded power outlet.

Wait for approximately two minutes to let the node boot to standby power during the first bootup. You can verify system power status by looking at the system Power Status LED on the front panel. The system is in standby power mode when the LED is amber.

Step 2

Connect a USB keyboard and VGA monitor to the node using one of the following methods:

  • Connect an optional KVM cable (Cisco PID N20-BKVM) to the KVM connector on the front panel. Connect your USB keyboard and VGA monitor to the KVM cable.

  • Connect a USB keyboard and VGA monitor to the corresponding connectors on the rear panel.

Step 3

Open the Cisco IMC Configuration Utility:

  1. Press and hold the front panel power button for four seconds to boot the node.

  2. During bootup, press F8 when prompted to open the Cisco IMC Configuration Utility.

Step 4

Continue with Setting Up the Node With the Cisco IMC Configuration Utility.


Connecting to the Node Remotely For Standalone Setup

This procedure requires the following equipment:

  • One RJ-45 Ethernet cable that is connected to your management LAN.

Before you begin

Note


To configure the system remotely, you must have a DHCP server on the same network as the system. Your DHCP server must be preconfigured with the range of MAC addresses for this node. The MAC address is printed on a label that is on the pull-out asset tag on the front panel. This node has a range of six MAC addresses assigned to the Cisco IMC. The MAC address printed on the label is the beginning of the range of six contiguous MAC addresses.
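Because the Cisco IMC is assigned six contiguous MAC addresses starting at the address on the label, you can expand the label MAC into the full range before you configure your DHCP server. The following Python sketch is illustrative; the label MAC, hostnames, and fixed addresses are placeholders, and the printed host blocks assume ISC dhcpd syntax (adapt them to whatever DHCP server you use).

# Sketch: expand the label MAC into the six contiguous Cisco IMC MAC addresses
# and print example ISC dhcpd reservations. All values shown are placeholders.
LABEL_MAC = "00:25:b5:00:00:10"   # MAC printed on the pull-out asset tag (placeholder)

base = int(LABEL_MAC.replace(":", ""), 16)
macs = []
for offset in range(6):
    value = base + offset
    macs.append(":".join(format((value >> shift) & 0xFF, "02x") for shift in range(40, -8, -8)))

for index, mac in enumerate(macs):
    print("host hx-node-cimc-%d {" % index)
    print("  hardware ethernet %s;" % mac)
    print("  fixed-address 192.0.2.%d;   # placeholder address" % (10 + index))
    print("}")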
Procedure

Step 1

Attach a power cord to each power supply in your node, and then attach each power cord to a grounded power outlet.

Wait for approximately two minutes to let the node boot to standby power during the first bootup. You can verify system power status by looking at the system Power Status LED on the front panel. The system is in standby power mode when the LED is amber.

Step 2

Plug your management Ethernet cable into the dedicated management port on the rear panel.

Step 3

Allow your preconfigured DHCP server to assign an IP address to the node.

Step 4

Use the assigned IP address to access and log in to the Cisco IMC for the node. Consult with your DHCP server administrator to determine the IP address.

Note

 
The default user name for the node is admin. The default password is password.

Step 5

From the Cisco IMC Summary page, click Launch KVM Console. A separate KVM console window opens.

Step 6

From the Cisco IMC Summary page, click Power Cycle Node. The system reboots.

Step 7

Select the KVM console window.

Note

 
The KVM console window must be the active window for the following keyboard actions to work.

Step 8

When prompted, press F8 to enter the Cisco IMC Configuration Utility. This utility opens in the KVM console window.

Step 9

Continue with Setting Up the Node With the Cisco IMC Configuration Utility.


Setting Up the Node With the Cisco IMC Configuration Utility

Before you begin

The following procedure is performed after you connect to the node and open the Cisco IMC Configuration Utility.

Procedure

Step 1

Set the NIC mode to choose which ports to use to access Cisco IMC for server management:

  • Shared LOM EXT (default)—This is the shared LOM extended mode, the factory-default setting. With this mode, the Shared LOM and Cisco Card interfaces are both enabled. You must select the default Active-Active NIC redundancy setting in the following step.

    In this NIC mode, DHCP replies are returned to both the shared LOM ports and the Cisco card ports. If the system determines that the Cisco card connection is not getting its IP address from a Cisco UCS Manager system because the server is in standalone mode, further DHCP requests from the Cisco card are disabled. Use the Cisco Card NIC mode if you want to connect to Cisco IMC through a Cisco card in standalone mode.

  • Shared LOM—The 1-Gb/10-Gb Ethernet ports are used to access Cisco IMC. You must select either the Active-Active or Active-standby NIC redundancy setting in the following step.

  • Dedicated—The dedicated management port is used to access Cisco IMC. You must select the None NIC redundancy setting in the following step.

  • Cisco Card—The ports on an installed Cisco UCS Virtual Interface Card (VIC) are used to access the Cisco IMC. You must select either the Active-Active or Active-standby NIC redundancy setting in the following step.

    See also the required VIC Slot setting below.

  • VIC Slot—Only if you use the Cisco Card NIC mode, you must select this setting to match where your VIC is installed. The choices are Riser1, Riser2, or Flex-LOM (the mLOM slot).

    • If you select Riser1, you must install the VIC in slot 1.

    • If you select Riser2, you must install the VIC in slot 2.

    • If you select Flex-LOM, you must install an mLOM-style VIC in the mLOM slot.

Step 2

Set the NIC redundancy to your preference. This server has three possible NIC redundancy settings:

  • None—The Ethernet ports operate independently and do not fail over if there is a problem. This setting can be used only with the Dedicated NIC mode.

  • Active-standby—If an active Ethernet port fails, traffic fails over to a standby port. Shared LOM and Cisco Card modes can each use either Active-standby or Active-active settings.

  • Active-active (default)—All Ethernet ports are utilized simultaneously. The Shared LOM EXT mode must use only this NIC redundancy setting. Shared LOM and Cisco Card modes can each use either Active-standby or Active-active settings.

Step 3

Choose whether to enable DHCP for dynamic network settings, or to enter static network settings.

Note

 

Before you enable DHCP, you must preconfigure your DHCP server with the range of MAC addresses for this server. The MAC address is printed on a label on the rear of the server. This server has a range of six MAC addresses assigned to Cisco IMC. The MAC address printed on the label is the beginning of the range of six contiguous MAC addresses.

The static IPv4 and IPv6 settings include the following (see the validation sketch after this list):

  • The Cisco IMC IP address.

For IPv6, the prefix length is also required; valid values are 1 - 127.

  • The gateway.

    For IPv6, if you do not know the gateway, you can set it as none by entering :: (two colons).

  • The preferred DNS server address.

    For IPv6, you can set this as none by entering :: (two colons).
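If you want to double-check the static values before entering them in the utility, the Python standard library can validate them. A minimal sketch follows; the sample values are placeholders, and the 1 - 127 prefix range and the use of :: for none follow the list above.

# Sketch: validate static Cisco IMC network values before entering them in the utility.
# The sample values are placeholders; "::" stands for none, as described above.
import ipaddress

ip = "2001:db8::10"     # Cisco IMC IPv6 address (placeholder)
prefix = 64             # IPv6 prefix length; valid values are 1 - 127
gateway = "::"          # "::" means no gateway
dns = "::"              # "::" means no preferred DNS server

ipaddress.ip_address(ip)                  # raises ValueError if the address is malformed
assert 1 <= prefix <= 127, "IPv6 prefix length must be 1 - 127"
for value in (gateway, dns):
    if value != "::":
        ipaddress.ip_address(value)
print("Static settings look well formed")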

Step 4

(Optional) Make VLAN settings.

Step 5

Press F1 to go to the second settings window, then continue with the next step.

From the second window, you can press F2 to switch back to the first window.

Step 6

(Optional) Set a hostname for the server.

Step 7

(Optional) Enable dynamic DNS and set a dynamic DNS (DDNS) domain.

Step 8

(Optional) If you check the Factory Default check box, the server reverts to the factory defaults.

Step 9

(Optional) Set a default user password.

Note

 
The factory default username for the server is admin. The default password is password.

Step 10

(Optional) Enable auto-negotiation of port settings or set the port speed and duplex mode manually.

Note

 
Auto-negotiation is applicable only when you use the Dedicated NIC mode. Auto-negotiation sets the port speed and duplex mode automatically based on the switch port to which the server is connected. If you disable auto-negotiation, you must set the port speed and duplex mode manually.

Step 11

(Optional) Reset port profiles and the port name.

Step 12

Press F5 to refresh the settings that you made. You might have to wait about 45 seconds until the new settings appear and the message, “Network settings configured” is displayed before you reboot the server in the next step.

Step 13

Press F10 to save your settings and reboot the server.

Note

 
If you chose to enable DHCP, the dynamically assigned IP and MAC addresses are displayed on the console screen during bootup.

What to do next
Use a browser and the IP address of the Cisco IMC to connect to the Cisco IMC management interface. The IP address is based upon the settings that you made (either a static address or the address assigned by your DHCP server).

Note


The factory default username for the server is admin. The default password is password.
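After the node reboots with your settings, you can confirm that the Cisco IMC management interface answers at the configured address before opening it in a browser. The following Python sketch is a minimal reachability check; the IP address is a placeholder for the static or DHCP-assigned address, and it assumes HTTPS on the default port with a self-signed certificate.

# Sketch: confirm that the Cisco IMC management interface answers on HTTPS.
# The address is a placeholder for the static or DHCP-assigned Cisco IMC IP.
import socket
import ssl

CIMC_IP = "192.0.2.10"   # placeholder

context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE   # Cisco IMC commonly presents a self-signed certificate

with socket.create_connection((CIMC_IP, 443), timeout=10) as raw:
    with context.wrap_socket(raw, server_hostname=CIMC_IP) as tls:
        print("Cisco IMC reachable at https://%s/ (%s)" % (CIMC_IP, tls.version()))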