Maintaining the Server

This chapter contains the following sections:

Status LEDs and Buttons

This section contains information for interpreting LED states.

Front-Panel LEDs

Figure 1. Front Panel LEDs
Table 1. Front Panel LEDs, Definition of States

1 (SAS trays): SAS/SATA drive fault LED

Note 
NVMe solid state drive (SSD) drive tray LEDs behave differently from SAS/SATA drive tray LEDs.
  • Off—The hard drive is operating properly.

  • Amber—Drive fault detected.

  • Amber, blinking—The device is rebuilding.

  • Amber, blinking with one-second interval—Drive locate function activated in the software.

2 (SAS trays): SAS/SATA drive activity LED

  • Off—There is no hard drive in the hard drive tray (no access, no fault).

  • Green—The hard drive is ready.

  • Green, blinking—The hard drive is reading or writing data.

1 (NVMe trays): NVMe SSD drive fault LED

Note 
NVMe solid state drive (SSD) drive tray LEDs behave differently from SAS/SATA drive tray LEDs.
  • Off—The drive is not in use and can be safely removed.

  • Green—The drive is in use and functioning properly.

  • Green, blinking—The driver is initializing following insertion, or the driver is unloading following an eject command.

  • Amber—The drive has failed.

  • Amber, blinking—A drive Locate command has been issued in the software.

2 (NVMe trays): NVMe SSD activity LED

  • Off—No drive activity.

  • Green, blinking—There is drive activity.

3: Power button/LED

  • Off—There is no AC power to the server.

  • Amber—The server is in standby power mode. Power is supplied only to the Cisco IMC and some motherboard functions.

  • Green—The server is in main power mode. Power is supplied to all server components.

4: Unit identification LED

  • Off—The unit identification function is not in use.

  • Blue, blinking—The unit identification function is activated.

5: System health LED

  • Green—The server is running in normal operating condition.

  • Green, blinking—The server is performing system initialization and memory check.

  • Amber, steady—The server is in a degraded operational state (minor fault). For example:

    • Power supply redundancy is lost.

    • CPUs are mismatched.

    • At least one CPU is faulty.

    • At least one DIMM is faulty.

    • At least one drive in a RAID configuration failed.

  • Amber, 2 blinks—There is a major fault with the system board.

  • Amber, 3 blinks—There is a major fault with the memory DIMMs.

  • Amber, 4 blinks—There is a major fault with the CPUs.

6: Fan status LED

  • Green—All fan modules are operating properly.

  • Amber, blinking—One or more fan modules breached the non-recoverable threshold.

7: Temperature status LED

  • Green—The server is operating at normal temperature.

  • Amber, steady—One or more temperature sensors breached the critical threshold.

  • Amber, blinking—One or more temperature sensors breached the non-recoverable threshold.

8: Power supply status LED

  • Green—All power supplies are operating normally.

  • Amber, steady—One or more power supplies are in a degraded operational state.

  • Amber, blinking—One or more power supplies are in a critical fault state.

9: Network link activity LED

  • Off—The Ethernet LOM port link is idle.

  • Green—One or more Ethernet LOM ports are link-active, but there is no activity.

  • Green, blinking—One or more Ethernet LOM ports are link-active, with activity.

10: DVD drive activity LED

  • Off—The drive is idle.

  • Green, steady—The drive is spinning up a disk.

  • Green, blinking—The drive is accessing data.

Rear-Panel LEDs

Figure 2. Rear Panel LEDs
Table 2. Rear Panel LEDs, Definition of States

1: 1-Gb/10-Gb Ethernet link speed (on both LAN1 and LAN2)

  • Off—Link speed is 100 Mbps.

  • Amber—Link speed is 1 Gbps.

  • Green—Link speed is 10 Gbps.

2: 1-Gb/10-Gb Ethernet link status (on both LAN1 and LAN2)

  • Off—No link is present.

  • Green—Link is active.

  • Green, blinking—Traffic is present on the active link.

3: 1-Gb Ethernet dedicated management link speed

  • Off—Link speed is 10 Mbps.

  • Amber—Link speed is 100 Mbps.

  • Green—Link speed is 1 Gbps.

4: 1-Gb Ethernet dedicated management link status

  • Off—No link is present.

  • Green—Link is active.

  • Green, blinking—Traffic is present on the active link.

5: Rear unit identification LED

  • Off—The unit identification function is not in use.

  • Blue, blinking—The unit identification function is activated.

6: Power supply status (one LED per power supply unit)

AC power supplies:

  • Off—No AC input (12 V main power off, 12 V standby power off).

  • Green, blinking—12 V main power off; 12 V standby power on.

  • Green, solid—12 V main power on; 12 V standby power on.

  • Amber, blinking—Warning threshold detected but 12 V main power on.

  • Amber, solid—Critical error detected; 12 V main power off (for example, over-current, over-voltage, or over-temperature failure).

DC power supply (UCSC-PSUV2-1050DC):

  • Off—No DC input (12 V main power off, 12 V standby power off).

  • Green, blinking—12 V main power off; 12 V standby power on.

  • Green, solid—12 V main power on; 12 V standby power on.

  • Amber, blinking—Warning threshold detected but 12 V main power on.

  • Amber, solid—Critical error detected; 12 V main power off (for example, over-current, over-voltage, or over-temperature failure).

7 (SAS trays): SAS/SATA drive fault LED

Note 
NVMe solid state drive (SSD) drive tray LEDs behave differently from SAS/SATA drive tray LEDs.
  • Off—The hard drive is operating properly.

  • Amber—Drive fault detected.

  • Amber, blinking—The device is rebuilding.

  • Amber, blinking with one-second interval—Drive locate function activated in the software.

8 (SAS trays): SAS/SATA drive activity LED

  • Off—There is no hard drive in the hard drive tray (no access, no fault).

  • Green—The hard drive is ready.

  • Green, blinking—The hard drive is reading or writing data.

7 (NVMe trays): NVMe SSD drive fault LED

Note 
NVMe solid state drive (SSD) drive tray LEDs behave differently from SAS/SATA drive tray LEDs.
  • Off—The drive is not in use and can be safely removed.

  • Green—The drive is in use and functioning properly.

  • Green, blinking—The driver is initializing following insertion, or the driver is unloading following an eject command.

  • Amber—The drive has failed.

  • Amber, blinking—A drive Locate command has been issued in the software.

8 (NVMe trays): NVMe SSD activity LED

  • Off—No drive activity.

  • Green, blinking—There is drive activity.

Internal Diagnostic LEDs

The server has internal fault LEDs for CPUs, DIMMs, and fan modules.

Figure 3. Internal Diagnostic LED Locations

1: Fan module fault LEDs (one on the top of each fan module)

  • Amber—Fan has a fault or is not fully seated.

  • Green—Fan is OK.

2: CPU fault LEDs (one behind each CPU socket on the motherboard)

These LEDs operate only when the server is in standby power mode.

  • Amber—CPU has a fault.

  • Off—CPU is OK.

3: DIMM fault LEDs (one behind each DIMM socket on the motherboard)

These LEDs operate only when the server is in standby power mode.

  • Amber—DIMM has a fault.

  • Off—DIMM is OK.

Preparing For Component Installation

This section includes information and tasks that help prepare the server for component installation.

Required Equipment For Service Procedures

The following tools and equipment are used to perform the procedures in this chapter:

  • T-30 Torx driver (supplied with replacement CPUs for heatsink removal)

  • #1 flat-head screwdriver (used during CPU or heatsink replacement)

  • #1 Phillips-head screwdriver (for M.2 SSD and intrusion switch replacement)

  • Electrostatic discharge (ESD) strap or other grounding equipment such as a grounded mat

Shutting Down and Removing Power From the Server

The server can run in either of two power modes:

  • Main power mode—Power is supplied to all server components and any operating system on your drives can run.

  • Standby power mode—Power is supplied only to the service processor and certain components. In this mode, removing the power cords from the server is safe for the operating system and data.


Caution

After a server is shut down to standby power, electric current is still present in the server. To completely remove power, you must disconnect all power cords from the power supplies in the server, as directed in the service procedures.

You can shut down the server by using the front-panel power button or the software management interfaces.


Shutting Down Using the Power Button

Procedure

Step 1

Check the color of the Power button/LED:

  • Amber—The server is already in standby mode and you can safely remove power.

  • Green—The server is in main power mode and must be shut down before you can safely remove power.

Step 2

Invoke either a graceful shutdown or a hard shutdown:

Caution 
To avoid data loss or damage to your operating system, you should always invoke a graceful shutdown of the operating system.
  • Graceful shutdown—Press and release the Power button. The operating system performs a graceful shutdown and the server goes to standby mode, which is indicated by an amber Power button/LED.

  • Hard (emergency) shutdown—Press and hold the Power button for 4 seconds to force the main power off and immediately enter standby mode.

Step 3

If a service procedure instructs you to completely remove power from the server, disconnect all power cords from the power supplies in the server.


Shutting Down Using The Cisco IMC GUI

You must log in with user or admin privileges to perform this task.

Procedure

Step 1

In the Navigation pane, click the Server tab.

Step 2

On the Server tab, click Summary.

Step 3

In the Actions area, click Power Off Server.

Step 4

Click OK.

The operating system performs a graceful shutdown and the server goes to standby mode, which is indicated by an amber Power button/LED.

Step 5

If a service procedure instructs you to completely remove power from the server, disconnect all power cords from the power supplies in the server.


Shutting Down Using The Cisco IMC CLI

You must log in with user or admin privileges to perform this task.

Procedure

Step 1

At the server prompt, enter:

Example:
server# scope chassis
Step 2

At the chassis prompt, enter:

Example:
server/chassis# power shutdown

The operating system performs a graceful shutdown and the server goes to standby mode, which is indicated by an amber Power button/LED.

Step 3

If a service procedure instructs you to completely remove power from the server, disconnect all power cords from the power supplies in the server.
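
For reference, the shutdown and a verification check can be combined into one console session. The following is a sketch, not an additional required step: the show detail command summarizes chassis state in most Cisco IMC releases, and its Power field should report off once the server reaches standby. The power shutdown command asks for confirmation, and exact prompts and output vary by release.

Example:
server# scope chassis
server/chassis# power shutdown
server/chassis# show detail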


Shutting Down Using The Cisco UCS Manager Equipment Tab

You must log in with user or admin privileges to perform this task.

Procedure

Step 1

In the Navigation pane, click Equipment.

Step 2

Expand Equipment > Rack Mounts > Servers.

Step 3

Choose the server that you want to shut down.

Step 4

In the Work pane, click the General tab.

Step 5

In the Actions area, click Shutdown Server.

Step 6

If a confirmation dialog displays, click Yes.

The operating system performs a graceful shutdown and the server goes to standby mode, which is indicated by an amber Power button/LED.

Step 7

If a service procedure instructs you to completely remove power from the server, disconnect all power cords from the power supplies in the server.


Shutting Down Using The Cisco UCS Manager Service Profile

You must log in with user or admin privileges to perform this task.

Procedure

Step 1

In the Navigation pane, click Servers.

Step 2

Expand Servers > Service Profiles.

Step 3

Expand the node for the organization that contains the service profile of the server that you are shutting down.

Step 4

Choose the service profile of the server that you are shutting down.

Step 5

In the Work pane, click the General tab.

Step 6

In the Actions area, click Shutdown Server.

Step 7

If a confirmation dialog displays, click Yes.

The operating system performs a graceful shutdown and the server goes to standby mode, which is indicated by an amber Power button/LED.

Step 8

If a service procedure instructs you to completely remove power from the server, disconnect all power cords from the power supplies in the server.


Removing the Server Top Cover

Procedure


Step 1

Remove the top cover:

  1. If the cover latch is locked, use a screwdriver to turn the lock 90 degrees counterclockwise to unlock it.

  2. Lift on the end of the latch that has the green finger grip. The cover is pushed back to the open position as you lift the latch.

  3. Lift the top cover straight up from the server and set it aside.

Step 2

Replace the top cover:

  1. With the latch in the fully open position, place the cover on top of the server about one-half inch (1.27 cm) behind the lip of the front cover panel. The opening in the latch should fit over the peg that sticks up from the fan tray.

  2. Press the cover latch down to the closed position. The cover is pushed forward to the closed position as you push down the latch.

  3. If desired, lock the latch by using a screwdriver to turn the lock 90 degrees clockwise.

Figure 4. Removing the Top Cover

1: Cover latch

2: Cover lock

3: Serial number label location


Serial Number Location

The serial number for the server is printed on a label on the top of the server, near the front. See Removing the Server Top Cover.

Hot Swap vs Hot Plug

Some components can be removed and replaced without shutting down and removing power from the server. This type of replacement has two varieties: hot-swap and hot-plug.

  • Hot-swap replacement—You do not have to shut down the component in the software or operating system. This applies to the following components:

    • SAS/SATA hard drives

    • SAS/SATA solid state drives

    • Cooling fan modules

    • Power supplies (when redundant as 1+1)

  • Hot-plug replacement—You must take the component offline before removing it. This applies to the following component:

    • NVMe PCIe solid state drives

Removing and Replacing Components


Warning

Blank faceplates and cover panels serve three important functions: they prevent exposure to hazardous voltages and currents inside the chassis; they contain electromagnetic interference (EMI) that might disrupt other equipment; and they direct the flow of cooling air through the chassis. Do not operate the system unless all cards, faceplates, front covers, and rear covers are in place.

Statement 1029



Caution

When handling server components, handle them only by carrier edges and use an electrostatic discharge (ESD) wrist-strap or other grounding device to avoid damage.

Tip

You can press the unit identification button on the front panel or rear panel to turn on a flashing, blue unit identification LED on both the front and rear panels of the server. This button allows you to locate the specific server that you are servicing when you go to the opposite side of the rack. You can also activate these LEDs remotely by using the Cisco IMC interface.
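
The unit identification LED can also be toggled from the command line. A minimal sketch, assuming the locator-led property in the chassis scope of the Cisco IMC CLI; set it to off and commit again when you finish servicing the node:

Example:
server# scope chassis
server/chassis# set locator-led on
server/chassis *# commit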

This section describes how to install and replace server components.

Serviceable Component Locations

This topic shows the locations of the field-replaceable components and service-related items. The view in the following figure shows the server with the top cover removed.

Figure 5. Cisco UCS C240 M5 Server, Serviceable Component Locations

1: Front-loading drive bays

2: Cooling fan modules (six, hot-swappable)

3: DIMM sockets on motherboard (up to 12 per CPU)

Not visible under air baffle in this view.

See DIMM Population Rules and Memory Performance Guidelines for DIMM slot numbering.

4: CPUs and heatsinks (up to two)

Not visible under air baffle in this view.

5: Supercap unit (RAID backup) mounting bracket

6: Internal, vertical USB 3.0 port on motherboard

7: Mini-storage module socket, with the following options:

  • SD card module with two SD card slots

  • M.2 module with slots for either two SATA M.2 drives or two NVMe M.2 drives

8: Chassis intrusion switch (optional)

9: PCIe cable connectors for NVMe SSDs, only on these PCIe riser 2 options:

  • 2B—One connector for rear NVMe SSDs.

  • 2C—One connector for rear NVMe SSDs plus one connector for front-loading NVMe SSDs.

  • 2D—One connector for rear NVMe SSDs. (This riser version is available only in the NVMe-optimized server UCSC-C240-M5SN.)

10: Rear-drive backplane assembly

11: Power supplies (hot-swappable when redundant as 1+1)

12: Rear 2.5-inch drive bays:

  • Server PID UCSC-C240-M5SN supports up to two rear NVMe PCIe SSDs only.

  • All other C240 M5 PIDs support up to two drives:

    • When using a hardware RAID controller card in the server, SAS/SATA drives or NVMe SSDs are supported in the rear bays.

    • When using software RAID in the server, only NVMe SSDs are supported in the rear bays.

13: Trusted platform module (TPM) socket on motherboard (not visible in this view)

14: PCIe riser 2 (PCIe slots 4, 5, 6), with the following options:

  • 2A—Slots 4 (x16), 5 (x16), and 6 (x8).

  • 2B—Slots 4 (x8), 5 (x16), and 6 (x8); includes cable connector for rear-loading NVMe SSDs.

  • 2C—Slots 4 (x8), 5 (x8), and 6 (x8); includes two cable connectors for rear-loading and front-loading NVMe SSDs.

15: Micro-SD card socket on PCIe riser 1

16: PCIe riser 1 (PCIe slots 1, 2, 3), with the following options:

  • 1A—Slots 1 (x8), 2 (x16), and 3 (x8); slot 2 requires CPU 2.

  • 1B—Slots 1 (x8), 2 (x8), and 3 (x8); all slots supported by CPU 1.

17: Modular LOM (mLOM) card bay on chassis floor (x16 PCIe lane), not visible in this view

18: Cisco modular RAID controller PCIe slot (dedicated slot)

19: RTC battery, vertical socket

20: Securing clips for GPU cards on air baffle

The Technical Specifications Sheets for all versions of this server, which include supported component part numbers, are at Cisco UCS Servers Technical Specifications Sheets (scroll down to Technical Specifications).

Replacing Front-Loading SAS/SATA Drives


Note

You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable.

To replace rear-loading SAS/SATA drives, see Replacing Rear-Loading SAS/SATA Drives.

Front-Loading SAS/SATA Drive Population Guidelines

The server is orderable in four different versions, each with a different front panel/drive-backplane configuration.

  • Cisco UCS C240 M5 (UCSC-C240-M5SX)—Small form-factor (SFF) drives, with 24-drive backplane.

    • Front-loading drive bays 1 – 24 support 2.5-inch SAS/SATA drives.

    • Optionally, front-loading drive bays 1 and 2 support 2.5-inch NVMe SSDs.

  • Cisco UCS C240 M5 (UCSC-C240-M5SN)—SFF drives, with 24-drive backplane.

    • Front-loading drive bays 1 – 8 support 2.5-inch NVMe PCIe SSDs only.

    • Front-loading drive bays 9 – 24 support 2.5-inch SAS/SATA drives.

  • Cisco UCS C240 M5 (UCSC-C240-M5S)—SFF drives, with 8-drive backplane and DVD drive option.

    • Front-loading drive bays 1 – 8 support 2.5-inch SAS/SATA drives.

    • Optionally, front-loading drive bays 1 and 2 support 2.5-inch NVMe SSDs.

  • Cisco UCS C240 M5 (UCSC-C240-M5L)—Large form-factor (LFF) drives, with 12-drive backplane.

    • Front-loading drive bays 1 – 12 support 3.5-inch SAS/SATA drives.

    • Optionally, front-loading drive bays 1 and 2 support 3.5-inch NVMe SSDs.

Drive bay numbering is shown in the following figures.

Figure 6. Small Form-Factor Drive (24-Drive) Versions, Drive Bay Numbering
Figure 7. Small Form-Factor Drive (8-Drive) Version, Drive Bay Numbering
Figure 8. Large Form-Factor Drive (12-Drive) Version, Drive Bay Numbering

Observe these drive population guidelines for optimum performance:

  • When populating drives, add drives to the lowest-numbered bays first.


    Note

    For diagrams of which drive bays are controlled by particular controller cables on the backplane, see Storage Controller Cable Connectors and Backplanes.
  • Keep an empty drive blanking tray in any unused bays to ensure proper airflow.

  • You can mix SAS/SATA hard drives and SAS/SATA SSDs in the same server. However, you cannot configure a logical volume (virtual drive) that contains a mix of hard drives and SSDs. That is, when you create a logical volume, it must contain all SAS/SATA hard drives or all SAS/SATA SSDs.

4K Sector Format SAS/SATA Drives Considerations

  • You must boot 4K sector format drives in UEFI mode, not legacy mode. See the procedures in this section.

  • Do not configure 4K sector format and 512-byte sector format drives as part of the same RAID volume.

  • For operating system support on 4K sector drives, see the interoperability matrix tool for your server: Hardware and Software Interoperability Matrix Tools
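
Before building a RAID volume, you can check a drive's sector format from the host operating system. The following is a sketch, assuming a Linux host with the lsblk utility; the device name /dev/sda is hypothetical. A 4K-native drive reports 4096 for both the physical and logical sector size, while a 512-byte emulation (512e) drive reports 4096 physical and 512 logical:

Example:
[root@host ~]# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda
NAME  PHY-SEC LOG-SEC
sda      4096    4096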


Setting Up UEFI Mode Booting in the BIOS Setup Utility
Procedure

Step 1

Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

Step 2

Go to the Boot Options tab.

Step 3

Set UEFI Boot Options to Enabled.

Step 4

Under Boot Option Priorities, set your OS installation media (such as a virtual DVD) as your Boot Option #1.

Step 5

Go to the Advanced tab.

Step 6

Select LOM and PCIe Slot Configuration.

Step 7

Set the PCIe Slot ID: HBA Option ROM to UEFI Only.

Step 8

Press F10 to save changes and exit the BIOS setup utility. Allow the server to reboot.

Step 9

After the OS installs, verify the installation:

  1. Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

  2. Go to the Boot Options tab.

  3. Under Boot Option Priorities, verify that the OS you installed is listed as your Boot Option #1.
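
After the server reboots into the installed OS, you can also confirm UEFI mode from the OS itself. A quick sketch for a Linux host; the check relies on the standard /sys/firmware/efi directory, which exists only when the kernel was booted through UEFI:

Example:
[root@host ~]# test -d /sys/firmware/efi && echo "UEFI boot" || echo "Legacy boot"
UEFI boot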


Setting Up UEFI Mode Booting in the Cisco IMC GUI
Procedure

Step 1

Use a web browser and the IP address of the server to log into the Cisco IMC GUI management interface.

Step 2

Navigate to Server > BIOS.

Step 3

Under Actions, click Configure BIOS.

Step 4

In the Configure BIOS Parameters dialog, select the Advanced tab.

Step 5

Go to the LOM and PCIe Slot Configuration section.

Step 6

Set the PCIe Slot: HBA Option ROM to UEFI Only.

Step 7

Click Save Changes. The dialog closes.

Step 8

Under BIOS Properties, set Configured Boot Order to UEFI.

Step 9

Under Actions, click Configure Boot Order.

Step 10

In the Configure Boot Order dialog, click Add Local HDD.

Step 11

In the Add Local HDD dialog, enter the information for the 4K sector format drive and make it first in the boot order.

Step 12

Save changes and reboot the server. The changes you made will be visible after the system reboots.


Replacing a Front-Loading SAS/SATA Drive


Note

You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable.
Procedure

Step 1

Remove the drive that you are replacing or remove a blank drive tray from the bay:

  1. Press the release button on the face of the drive tray.

  2. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  3. If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.

Step 2

Install a new drive:

  1. Place a new drive in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Figure 9. Replacing a Drive in a Drive Tray

1: Ejector lever

2: Release button

3: Drive tray screws (two on each side)

4: Drive removed from drive tray


Replacing Rear-Loading SAS/SATA Drives


Note

You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable.

Rear-Loading SAS/SATA Drive Population Guidelines

The rear drive bay support differs by server PID and which type of RAID controller is used in the server:

  • UCSC-C240-M5SX—Small form-factor (SFF) drives, with 24-drive backplane.

    • Hardware RAID—Rear drive bays support SAS or NVMe drives.

    • Embedded software RAID—Rear drive bays support NVMe drives only.

  • UCSC-C240-M5SN—SFF drives, with 24-drive backplane.

    • Rear drive bays support only NVMe SSDs.

  • UCSC-C240-M5S—SFF drives, with 8-drive backplane and DVD drive option.

    • Hardware RAID—Rear drive bays support SAS or NVMe drives.

    • Embedded software RAID—Rear drive bays support NVMe drives only.

  • UCSC-C240-M5L—Large form-factor (LFF) drives, with 12-drive backplane.

    • Hardware RAID—Rear drive bays support SAS or NVMe drives.

    • Embedded software RAID—Rear drive bays support NVMe drives only.

  • The rear drive bay numbering follows the front-drive bay numbering in each server version:

    • 8-drive server—rear bays are numbered bays 9 and 10.

    • 12-drive server—rear bays are numbered bays 13 and 14.

    • 24-drive server—rear bays are numbered bays 25 and 26.

  • When populating drives, add drives to the lowest-numbered bays first.

  • Keep an empty drive blanking tray in any unused bays to ensure proper airflow.

  • You can mix SAS/SATA hard drives and SAS/SATA SSDs in the same server. However, you cannot configure a logical volume (virtual drive) that contains a mix of hard drives and SSDs. That is, when you create a logical volume, it must contain all SAS/SATA hard drives or all SAS/SATA SSDs.

Replacing a Rear-Loading SAS/SATA Drive


Note

You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable.
Procedure

Step 1

Remove the drive that you are replacing or remove a blank drive tray from the bay:

  1. Press the release button on the face of the drive tray.

  2. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  3. If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.

Step 2

Install a new drive:

  1. Place a new drive in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Figure 10. Replacing a Drive in a Drive Tray

1: Ejector lever

2: Release button

3: Drive tray screws (two on each side)

4: Drive removed from drive tray


Replacing Front-Loading NVMe SSDs

This section is for replacing 2.5-inch or 3.5-inch form-factor NVMe solid-state drives (SSDs) in front-panel drive bays.

To replace HHHL form-factor NVMe SSDs in the PCIe slots, see Replacing HHHL Form-Factor NVMe Solid State Drives.

Front-Loading NVMe SSD Population Guidelines

The front drive bay support for 2.5- or 3.5-inch NVMe SSDs differs by server PID:

  • UCSC-C240-M5SX—Small form-factor (SFF) drives, with 24-drive backplane. Drive bays 1 and 2 support 2.5-inch NVMe SSDs.

  • UCSC-C240-M5SN—SFF drives, with 24-drive backplane. Drive bays 1 – 8 support only 2.5-inch NVMe SSDs.

  • UCSC-C240-M5S—SFF drives, with 8-drive backplane and DVD drive option. Drive bays 1 and 2 support 2.5-inch NVMe SSDs.

  • UCSC-C240-M5L—Large form-factor (LFF) drives, with 12-drive backplane. Drive bays 1 and 2 support 2.5-inch and 3.5-inch NVMe SSDs. If you use 2.5-inch NVMe SSDs, a size-converter drive tray (UCS-LFF-SFF-SLED2) is required for this version of the server.

Front-Loading NVMe SSD Requirements and Restrictions

Observe these requirements:

  • The server must have two CPUs. PCIe riser 2 is not available in a single-CPU system.

  • PCIe riser 2C (UCSC-PCI-2C-240M5). Only PCIe riser 2C has a connector for the cable that connects to the front-panel drive backplane.

  • PCIe cable. This is the cable that carries the PCIe signal from the front-panel drive backplane to PCIe riser 2C. The cable differs by server version:

    • For small form factor (SFF) drive versions of the server: CBL-NVME-C240SFF

    • For the large form factor (LFF) drive version of the server: CBL-NVME-C240LFF

  • Hot-plug support must be enabled in the system BIOS. If you ordered the system with NVMe drives, hot-plug support is enabled at the factory.

Observe these restrictions:

  • NVMe 2.5- and 3.5-inch SSDs support booting only in UEFI mode. Legacy boot is not supported. For instructions on setting up UEFI boot, see Setting Up UEFI Mode Booting in the BIOS Setup Utility or Setting Up UEFI Mode Booting in the Cisco IMC GUI.

  • You cannot control NVMe PCIe SSDs with a SAS RAID controller because NVMe SSDs interface with the server via the PCIe bus.

  • You can combine NVMe 2.5- or 3.5-inch SSDs and HHHL form-factor SSDs in the same system, but the same partner brand must be used. For example, two Intel NVMe SFF 2.5-inch SSDs and two HGST HHHL form-factor SSDs is an invalid configuration. A valid configuration is two HGST NVMe SFF 2.5-inch SSDs and two HGST HHHL form-factor SSDs.

  • UEFI boot is supported in all supported operating systems. Hot-insertion and hot-removal are supported in all supported operating systems except VMware ESXi.

Enabling Hot-Plug Support in the System BIOS

Hot-plug (OS-informed hot-insertion and hot-removal) is disabled in the system BIOS by default.

  • If the system was ordered with NVMe PCIe SSDs, the setting was enabled at the factory. No action is required.

  • If you are adding NVMe PCIe SSDs after-factory, you must enable hot-plug support in the BIOS. See the following procedures.

Enabling Hot-Plug Support Using the BIOS Setup Utility
Procedure

Step 1

Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

Step 2

Navigate to Advanced > PCI Subsystem Settings > NVMe SSD Hot-Plug Support.

Step 3

Set the value to Enabled.

Step 4

Save your changes and exit the utility.


Enabling Hot-Plug Support Using the Cisco IMC GUI
Procedure

Step 1

Use a browser to log in to the Cisco IMC GUI for the server.

Step 2

Navigate to Compute > BIOS > Advanced > PCI Configuration.

Step 3

Set NVME SSD Hot-Plug Support to Enabled.

Step 4

Save your changes.


Replacing a Front-Loading NVMe SSD

This topic describes how to replace 2.5- or 3.5-inch form-factor NVMe SSDs in the front-panel drive bays.


Note

OS-surprise removal is not supported. OS-informed hot-insertion and hot-removal are supported on all supported operating systems except VMware ESXi.



Note

OS-informed hot-insertion and hot-removal must be enabled in the system BIOS. See Enabling Hot-Plug Support in the System BIOS.
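
In Step 1 of the following procedure, you shut down the drive through the operating system before pulling it. One way to do this on a Linux host is to detach the drive's PCI device through sysfs, which unloads the driver cleanly. This is a sketch only; the controller name nvme0 and PCI address 0000:5e:00.0 are hypothetical, and exact paths vary by kernel and distribution:

Example:
[root@host ~]# cat /sys/class/nvme/nvme0/address
0000:5e:00.0
[root@host ~]# echo 1 > /sys/bus/pci/devices/0000:5e:00.0/remove

When the detach completes, the drive-tray LED turns off and the SSD can be removed.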


Procedure

Step 1

Remove an existing front-loading NVMe SSD:

  1. Shut down the NVMe SSD to initiate an OS-informed removal. Use your operating system interface to shut down the drive, and then observe the drive-tray LED:

    • Green—The drive is in use and functioning properly. Do not remove.

    • Green, blinking—The driver is unloading following a shutdown command. Do not remove.

    • Off—The drive is not in use and can be safely removed.

  2. Press the release button on the face of the drive tray.

  3. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  4. Remove the four drive tray screws that secure the SSD to the tray and then lift the SSD out of the tray.

Note 
If this is the first time that front-loading NVMe SSDs are being installed in the server, you must install a PCIe cable with PCIe riser 2C. See Installing PCIe Riser 2C and Cable For Front-Loading NVMe SSDs.
Step 2

Install a new front-loading NVMe SSD:

  1. Place a new SSD in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Step 3

Observe the drive-tray LED and wait until it returns to solid green before accessing the drive:

  • Off—The drive is not in use.

  • Green, blinking—The driver is initializing following hot-plug insertion.

  • Green—The drive is in use and functioning properly.
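
To confirm that the operating system re-enumerated the new SSD, you can list NVMe devices from the host. A sketch for a Linux host with the nvme-cli package installed; if the drive does not appear automatically, a PCI bus rescan forces rediscovery. Device names in the output are illustrative:

Example:
[root@host ~]# echo 1 > /sys/bus/pci/rescan
[root@host ~]# nvme list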

Figure 11. Replacing a Drive in a Drive Tray

1: Ejector lever

2: Release button

3: Drive tray screws (two on each side)

4: Drive removed from drive tray


Installing PCIe Riser 2C and Cable For Front-Loading NVMe SSDs

The front-loading NVMe SSDs interface with the server via the PCIe bus. A PCIe cable connects the front-panel drive backplane to PCIe riser 2C.


Note

Only PCIe riser version 2C has a connector that supports front-loading NVMe SSDs. For SFF versions of the server, use cable CBL-NVME-C240SFF. For the LFF version of the server, use cable CBL-NVME-C240LFF.
  • If the server was ordered with front-loading NVMe SSDs, riser 2C and the PCIe cable were preinstalled at the factory. No action is required.

  • If you are adding front-loading NVMe SSDs for the first time, you must order and install riser 2C and the PCIe cable as described in the following procedure.

Procedure

Step 1

Remove the existing PCIe riser 2 version and replace it with PCIe riser 2C. See Replacing a PCIe Riser.

Step 2

Connect the two connectors on one end of the cable to the PCIe connectors on the drive backplane.

Step 3

Route the cables through the chassis cable guides to the rear of the server as shown below.

Step 4

Connect the other end of the cable to the "Front NVMe" connector on PCIe riser 2C.

In the following figure, the colored lines represent cabling paths:

  • The red line represents the cable path from riser 2C to the front-drive backplane.

  • The blue line represents the cable path from riser 2C to the optional rear-drive backplane.

Figure 12. PCIe Cabling From PCIe Riser 2 to Drive Backplanes

1: Front NVMe cable connector (on riser version 2C only)

2: Rear NVMe cable connector (on riser version 2B, 2C, or 2D only)


Replacing Rear-Loading NVMe SSDs

This section is for replacing 2.5-inch form-factor NVMe solid-state drives (SSDs) in rear-panel drive bays.

Rear-Loading NVMe SSD Population Guidelines

The rear drive bay support differs by server PID and which type of RAID controller is used in the server for non-NVMe drives:

  • UCSC-C240-M5SX—Small form-factor (SFF) drives, with 24-drive backplane.

    • Hardware RAID—Rear drive bays support SAS or NVMe drives.

    • Embedded software RAID—Rear drive bays support NVMe drives only.

  • UCSC-C240-M5SN—SFF drives, with 24-drive backplane.

    • Rear drive bays support only NVMe SSDs.

  • UCSC-C240-M5S—SFF drives, with 8-drive backplane and DVD drive option.

    • Hardware RAID—Rear drive bays support SAS or NVMe drives.

    • Embedded software RAID—Rear drive bays support NVMe drives only.

  • UCSC-C240-M5L—Large form-factor (LFF) drives, with 12-drive backplane.

    • Hardware RAID—Rear drive bays support SAS or NVMe drives.

    • Embedded software RAID—Rear drive bays support NVMe drives only.

  • The rear drive bay numbering follows the front-drive bay numbering in each server version:

    • 8-drive server—rear bays are numbered bays 9 and 10.

    • 12-drive server—rear bays are numbered bays 13 and 14.

    • 24-drive server—rear bays are numbered bays 25 and 26.

  • When populating drives, add drives to the lowest-numbered bays first.

  • Keep an empty drive blanking tray in any unused bays to ensure proper airflow.

Rear-Loading NVMe SSD Requirements and Restrictions

Observe these requirements:

  • The server must have two CPUs. PCIe riser 2 is not available in a single-CPU system.

  • PCIe riser 2B, 2C, or 2D. Risers 2B, 2C, and 2D have the connector for the cable that connects to the rear drive backplane. It is not orderable separately.


    Note

    Riser 2D is available only in the NVMe-optimized server UCSC-C240-M5SN.


  • Rear PCIe cable and rear drive backplane. These two items are kitted together (UCSC-RNVME-240M5).

  • Hot-plug support must be enabled in the system BIOS. If you ordered the system with NVMe drives, hot-plug support is enabled at the factory.

Observe these restrictions:

  • NVMe SSDs support booting only in UEFI mode. Legacy boot is not supported. For instructions on setting up UEFI boot, see Setting Up UEFI Mode Booting in the BIOS Setup Utility or Setting Up UEFI Mode Booting in the Cisco IMC GUI.

  • You cannot control NVMe PCIe SSDs with a SAS RAID controller because NVMe SSDs interface with the server via the PCIe bus.

  • You can combine NVMe 2.5-inch SSDs and HHHL form-factor SSDs in the same system, but the same partner brand must be used. For example, two Intel NVMe SFF 2.5-inch SSDs and two HGST HHHL form-factor SSDs is an invalid configuration. A valid configuration is two HGST NVMe SFF 2.5-inch SSDs and two HGST HHHL form-factor SSDs.

  • UEFI boot is supported in all supported operating systems. Hot-insertion and hot-removal are supported in all supported operating systems except VMware ESXi.

Replacing a Rear-Loading NVMe SSD

This topic describes how to replace 2.5-inch form-factor NVMe SSDs in the rear-panel drive bays.


Note

OS-surprise removal is not supported. OS-informed hot-insertion and hot-removal are supported on all supported operating systems except VMware ESXi.



Note

OS-informed hot-insertion and hot-removal must be enabled in the system BIOS. See Enabling Hot-Plug Support in the System BIOS.


Procedure

Step 1

Remove an existing rear-loading NVMe SSD:

  1. Shut down the NVMe SSD to initiate an OS-informed removal. Use your operating system interface to shut down the drive, and then observe the drive-tray LED:

    • Green—The drive is in use and functioning properly. Do not remove.

    • Green, blinking—The driver is unloading following a shutdown command. Do not remove.

    • Off—The drive is not in use and can be safely removed.

  2. Press the release button on the face of the drive tray.

  3. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  4. Remove the four drive tray screws that secure the SSD to the tray and then lift the SSD out of the tray.

Note 
If this is the first time that rear-loading NVMe SSDs are being installed in the server, you must install PCIe riser 2B or 2C and a rear NVMe cable kit (see Installing a Rear-Loading NVMe Cable Kit).

Step 2

Install a new rear-loading NVMe SSD:

  1. Place a new SSD in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Step 3

Observe the drive-tray LED and wait until it returns to solid green before accessing the drive:

  • Off—The drive is not in use.

  • Green, blinking—The driver is initializing following hot-plug insertion.

  • Green—The drive is in use and functioning properly.

Figure 13. Replacing a Drive in a Drive Tray

1: Ejector lever

2: Release button

3: Drive tray screws (two on each side)

4: Drive removed from drive tray


Installing a Rear-Loading NVMe Cable Kit

The rear-loading NVMe SSDs interface with the server via the PCIe bus. A PCIe cable connects the rear NVMe backplane to PCIe riser 2B or 2C. The required cable is sold as a kit with the rear NVMe backplane (UCSC-RNVME-240M5).

  • If the server was ordered with rear-loading NVMe SSDs, this kit and the correct PCIe riser were preinstalled at the factory. No action is required.

  • If you are adding rear-loading NVMe SSDs for the first time, you must order and install the kit and correct PCIe riser as described in the following procedure.

Procedure

Step 1

If necessary, remove the existing PCIe riser 2 version and replace it with PCIe riser 2B or 2C. See Replacing a PCIe Riser.

Step 2

Install the rear NVMe backplane to the rear drive cage. See Replacing a Rear-Loading Drive Backplane Assembly.

Step 3

Connect the two connectors on one end of the rear NVMe cable to the PCIe connectors on the rear NVMe backplane.

Step 4

Connect the two connectors on the other end of the rear NVMe cable to the "Rear NVMe" connector on PCIe riser 2B or 2C.

In the following figure, the colored lines represent cabling paths:

  • The blue line represents the cable path from riser 2B or 2C to the rear NVMe backplane.

  • The red line represents the cable path from riser 2C (only) to the front drive backplane.

Figure 14. PCIe Cabling From PCIe Riser 2 to Drive Backplanes

1: Front NVMe cable connector (on riser version 2C only)

2: Rear NVMe cable connector (on riser version 2B or 2C)


Replacing a Rear-Loading Drive Backplane Assembly

Although all server versions have the rear-drive cage installed as part of the chassis at the factory, you can use it only if you have ordered or installed a rear backplane.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the server from the rack.
Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Remove any existing rear drive backplane assembly:

  1. Remove any drives from the existing rear drive backplane and set them aside.

  2. Disconnect the CONN REAR cable from the backplane.

  3. Lift the hinged rear backplane retainer on the top of the rear backplane cage. This metal retainer is marked blue.

  4. Grasp the black plastic fingergrip on the rear backplane and lift the backplane straight up to remove it from the motherboard socket and cage.

Step 5

Install a new rear backplane or kit:

  1. Grasp the new rear backplane by the black plastic fingergrip on its frame.

  2. Lower the new backplane into the guide channels on the cage until its edge connector touches the socket on the motherboard.

  3. Push down on the top of the backplane until its securing clips click and the edge connector is firmly seated in the motherboard socket.

  4. Close the hinged rear backplane retainer on the top of the cage.

  5. Connect the CONN REAR cable from your SAS controller or PCIe riser to the socket on the backplane.

  6. Install drives to the rear bays.

Step 6

Replace the top cover to the server.

Step 7

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing HHHL Form-Factor NVMe Solid State Drives

This section is for replacing half-height, half-length (HHHL) form-factor NVMe SSDs in the PCIe risers.

HHHL SSD Population Guidelines

Observe the following population guidelines when installing HHHL form-factor NVMe SSDs:

  • Two-CPU systems—You can populate up to 6 HHHL form-factor SSDs, using PCIe slots 1 – 6.

  • One-CPU systems—In a single-CPU system, PCIe riser 2 is not available. Therefore, the maximum number of HHHL form-factor SSDs you can populate is 3, using PCIe slots 1 – 3.

HHHL Form-Factor NVMe SSD Requirements and Restrictions

Observe these requirements:

  • All versions of the server support HHHL form-factor NVMe SSDs.

Observe these restrictions:

  • You cannot boot from an HHHL form-factor NVMe SSD.

  • You cannot control HHHL NVMe SSDs with a SAS RAID controller because NVMe SSDs interface with the server via the PCIe bus.

  • You can combine NVMe SFF 2.5- or 3.5-inch SSDs and HHHL form-factor SSDs in the same system, but the same partner brand must be used. For example, two Intel NVMe SFF 2.5-inch SSDs and two HGST HHHL form-factor SSDs is an invalid configuration. A valid configuration is two HGST NVMe SFF 2.5-inch SSDs and two HGST HHHL form-factor SSDs.

Replacing an HHHL Form-Factor NVMe SSD


Note

In a single-CPU server, PCIe riser 2 (PCIe slots 4, 5, and 6) is not available.
Procedure

Step 1

Remove an existing HHHL form-factor NVMe SSD (or a blank filler panel) from the PCIe riser:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

  4. Use two hands to flip up and grasp the blue riser handle and the blue fingergrip area on the front edge of the riser, and then lift straight up.

  5. On the bottom of the riser, push the release latch that holds the securing plate, and then swing the hinged securing plate open.

  6. Open the hinged card-tab retainer that secures the rear-panel tab of the card.

  7. Pull evenly on both ends of the HHHL form-factor NVMe SSD to remove it from the socket on the PCIe riser.

    If the riser has no SSD, remove the blanking panel from the rear opening of the riser.

Step 2

Install a new HHHL form-factor NVMe SSD:

  1. Open the hinged, plastic card-tab retainer.

  2. Align the new SSD with the empty socket on the PCIe riser.

  3. Push down evenly on both ends of the card until it is fully seated in the socket.

  4. Ensure that the SSD’s rear panel tab sits flat against the riser rear-panel opening and then close the hinged card-tab retainer over the rear-panel tab.

  5. Close the hinged securing plate.

  6. Position the PCIe riser over its socket on the motherboard and over the chassis alignment channels.

  7. Carefully push down on both ends of the PCIe riser to fully engage its connector with the sockets on the motherboard.

  8. Replace the top cover to the server.

  9. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 15. PCIe Riser Card Securing Mechanisms

1: Release latch on hinged securing plate

2: Hinged securing plate

3: Hinged card-tab retainer


Replacing Fan Modules

The six fan modules in the server are numbered as shown in Serviceable Component Locations.


Tip

There is a fault LED on the top of each fan module. This LED lights green when the fan is correctly seated and operating properly. The LED lights amber when the fan has a fault or is not correctly seated.

Caution

You do not have to shut down or remove power from the server to replace fan modules because they are hot-swappable. However, to maintain proper cooling, do not operate the server for more than one minute with any fan module removed.

Procedure


Step 1

Remove an existing fan module:

  1. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  2. Remove the top cover from the server as described in Removing the Server Top Cover.

  3. Grasp and squeeze the fan module release latches on its top. Lift straight up to disengage its connector from the motherboard.

Step 2

Install a new fan module:

  1. Set the new fan module in place. The arrow printed on the top of the fan module should point toward the rear of the server.

  2. Press down gently on the fan module to fully engage it with the connector on the motherboard.

  3. Replace the top cover to the server.

  4. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 16. Top View of Fan Module

1: Fan module release latches

2: Fan module fault LED


Replacing CPUs and Heatsinks

This section contains the following topics:

Special Information For Upgrades to Second Generation Intel Xeon Scalable Processors


Caution

You must upgrade your server firmware to the required minimum level before you upgrade to the Second Generation Intel Xeon Scalable processors that are supported in this server. Older firmware versions cannot recognize the new CPUs, which results in a non-bootable server.


The minimum software and firmware versions required for this server to support Second Generation Intel Xeon Scalable processors are as follows:

Table 3. Minimum Requirements For Second Generation Intel Xeon Scalable processors

  • Server Cisco IMC—4.0(4)

  • Server BIOS—4.0(4)

CPU Configuration Rules

This server has two CPU sockets on the motherboard. Each CPU supports six DIMM channels (12 DIMM slots). See DIMM Population Rules and Memory Performance Guidelines.

  • The server can operate with one CPU or two identical CPUs installed.

  • At minimum, the server must have CPU 1 installed. Install CPU 1 first, and then CPU 2.

  • For Intel Xeon Scalable processors (first generation): The maximum combined memory allowed in the 12 DIMM slots controlled by any one CPU is 768 GB. To populate the 12 DIMM slots with more than 768 GB of combined memory, you must use a high-memory CPU that has a PID that ends with an "M", for example, UCS-CPU-6134M.

  • For Second Generation Intel Xeon Scalable processors: These Second Generation CPUs have three memory tiers. These rules apply on a per-socket basis:

    • If the CPU socket has up to 1 TB of memory installed, a CPU with no suffix can be used (for example, Gold 6240).

    • If the CPU socket has more than 1 TB (up to 2 TB) of memory installed, you must use a CPU with an M suffix (for example, Platinum 8276M).

    • If the CPU socket has more than 2 TB (up to 4.5 TB) of memory installed, you must use a CPU with an L suffix (for example, Platinum 8270L).

  • The following restrictions apply when using a single-CPU configuration:

    • Any unused CPU socket must have its factory-installed socket dust cover in place.

    • The maximum number of DIMMs is 12 (only CPU 1 channels A, B, C, D, E, F).

    • PCIe riser 2 (slots 4, 5, 6) is unavailable.

    • You must use PCIe riser 1B (UCSC-PCI-1B-C240M5) to have support for all three slots (PCIe 1, 2, 3). In PCIe riser 1 (UCSC-PCI-1-C240M5), slot 3 is unavailable because it is controlled by CPU 2.

    • Front- and rear-loading NVMe drives are unavailable (they require PCIe riser 2B or 2C).

  • The following NVIDIA GPUs are not supported with Second Generation Intel Xeon Scalable processors:

    • NVIDIA Tesla P4

    • NVIDIA Tesla P100 12G

    • NVIDIA Tesla P100 16G

Tools Required For CPU Replacement

You need the following tools and equipment for this procedure:

  • T-30 Torx driver—Supplied with replacement CPU.

  • #1 flat-head screwdriver—Supplied with replacement CPU.

  • CPU assembly tool—Supplied with replacement CPU. Orderable separately as Cisco PID UCS-CPUAT=.

  • Heatsink cleaning kit—Supplied with replacement CPU. Orderable separately as Cisco PID UCSX-HSCK=.

    One cleaning kit can clean up to four CPUs.

  • Thermal interface material (TIM)—Syringe supplied with replacement CPU. Use only if you are reusing your existing heatsink (new heatsinks have a pre-applied pad of TIM). Orderable separately as Cisco PID UCS-CPU-TIM=.

    One TIM kit covers one CPU.

See also Additional CPU-Related Parts to Order with RMA Replacement CPUs.

Replacing a CPU and Heatsink


Caution

CPUs and their sockets are fragile and must be handled with extreme care to avoid damaging pins. The CPUs must be installed with heatsinks and thermal interface material to ensure cooling. Failure to install a CPU correctly might result in damage to the server.


An instructive video is available for this procedure: CPU and Heatsink Replacement in Cisco UCS M5 Servers

Procedure

Step 1

Remove the existing CPU/heatsink assembly from the server:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

  4. Use the T-30 Torx driver that is supplied with the replacement CPU to loosen the four captive nuts that secure the assembly to the motherboard standoffs.

    Note 
    Alternate loosening the heatsink nuts evenly so that the heatsink remains level as it is raised. Loosen the heatsink nuts in the order shown on the heatsink label: 4, 3, 2, 1.
  5. Lift straight up on the CPU/heatsink assembly and set it heatsink-down on an antistatic surface.

    Figure 17. Removing the CPU/Heatsink Assembly

    1: Heatsink

    2: Heatsink captive nuts (two on each side)

    3: CPU carrier (below heatsink in this view)

    4: CPU socket on motherboard

    5: T-30 Torx driver

Step 2

Separate the heatsink from the CPU assembly (the CPU assembly includes the CPU and the plastic CPU carrier):

  1. Place the heatsink with CPU assembly so that it is oriented upside-down.

    Note the thermal-interface material (TIM) breaker location. TIM BREAKER is stamped on the CPU carrier next to a small slot.

    Figure 18. Separating the CPU Assembly From the Heatsink

    1: CPU carrier

    2: CPU

    3: TIM BREAKER slot in CPU carrier

    4: CPU-carrier inner latch nearest to the TIM breaker slot

    5: #1 flat-head screwdriver inserted into TIM breaker slot

  2. Pinch inward on the CPU-carrier clip that is nearest the TIM breaker slot and then push up to disengage the clip from its slot in the heatsink corner.

  3. Insert the blade of a #1 flat-head screwdriver into the slot marked TIM BREAKER.

    Note 

    In the following step, do not pry on the CPU surface. Use gentle rotation to lift on the plastic surface of the CPU carrier at the TIM breaker slot. Use caution to avoid damaging the heatsink surface.

  4. Gently rotate the screwdriver to lift up on the CPU until the TIM on the heatsink separates from the CPU.

    Note 

    Do not allow the screwdriver tip to touch or damage the green CPU substrate.

  5. Pinch the CPU-carrier clip at the corner opposite the TIM breaker and push up to disengage the clip from its slot in the heatsink corner.

  6. On the remaining two corners of the CPU carrier, gently pry outward on the outer latches and then lift the CPU assembly from the heatsink.

    Note 

    Handle the CPU assembly by the plastic carrier only. Do not touch the CPU surface. Do not separate the CPU from the plastic carrier.

Step 3

The new CPU assembly is shipped on a CPU assembly tool. Take the new CPU assembly and CPU assembly tool out of the carton.

If the CPU assembly and CPU assembly tool become separated, note the alignment features for correct orientation. The pin 1 triangle on the CPU carrier must be aligned with the angled corner on the CPU assembly tool.

Caution 

CPUs and their sockets are fragile and must be handled with extreme care to avoid damaging pins.

Figure 19. CPU Assembly Tool, CPU Assembly, and Heatsink Alignment Features

1: CPU assembly tool

2: CPU assembly (CPU in plastic carrier frame)

3: Heatsink

4: Angled corner on heatsink (pin 1 alignment feature)

5: Triangle cut into plastic carrier (pin 1 alignment feature)

6: Angled corner on CPU assembly tool (pin 1 alignment feature)

Step 4

Apply new TIM to the heatsink:

Note 

The heatsink must have new TIM on the heatsink-to-CPU surface to ensure proper cooling and performance.

  • If you are installing a new heatsink, it is shipped with a pre-applied pad of TIM. Go to step 5.

  • If you are reusing a heatsink, you must remove the old TIM from the heatsink and then apply new TIM to the CPU surface from the supplied syringe. Continue with the following steps.

  1. Apply the cleaning solution that is included with the heatsink cleaning kit (UCSX-HSCK=) to the old TIM on the heatsink and let it soak for at least 15 seconds.

  2. Wipe all of the TIM off the heatsink using the soft cloth that is included with the heatsink cleaning kit. Be careful to avoid scratching the heatsink surface.

  3. Using the syringe of TIM provided with the new CPU (UCS-CPU-TIM=), apply 1.5 cubic centimeters (1.5 ml) of thermal interface material to the top of the CPU. Use the pattern shown below to ensure even coverage.

    Figure 20. Thermal Interface Material Application Pattern
Step 5

With the CPU assembly on the CPU assembly tool, set the heatsink onto the CPU assembly. Note the Pin 1 alignment features for correct orientation. Push down gently until you hear the corner clips of the CPU carrier click onto the heatsink corners.

Note 

Use only the correct heatsink for your CPUs to ensure proper cooling. There are two different heatsinks: UCSC-HS-C240M5 for standard-performance CPUs 150 W and less; UCSC-HS2-C240M5 for high-performance CPUs above 150 W. Note the wattage described on the heatsink label.

Caution 
In the following step, use extreme care to avoid touching or damaging the CPU contacts or the CPU socket pins.
Step 6

Install the CPU/heatsink assembly to the server:

  1. Lift the heatsink with attached CPU assembly from the CPU assembly tool.

  2. Align the assembly over the CPU socket on the motherboard.

    Note the alignment features. The pin 1 angled corner on the heatsink must align with the pin 1 angled corner on the CPU socket. The CPU-socket posts must align with the guide-holes in the assembly.

    Figure 21. Installing the Heatsink/CPU Assembly to the CPU Socket

    1

    Guide hole in assembly (two)

    4

    Angled corner on heatsink (pin 1 alignment feature)

    2

    CPU socket alignment post (two)

    5

    Angled corner on socket (pin 1 alignment feature)

    3

    CPU socket leaf spring

    -

  3. Set the heatsink with CPU assembly down onto the CPU socket.

  4. Use the T-30 Torx driver that is supplied with the replacement CPU to tighten the four captive nuts that secure the heatsink to the motherboard standoffs.

    Note 

    Alternate tightening the heatsink nuts evenly so that the heatsink remains level while it is lowered. Tighten the heatsink nuts in the order shown on the heatsink label: 1, 2, 3, 4. The captive nuts must be fully tightened so that the leaf springs on the CPU socket lie flat.

  5. Replace the top cover to the server.

  6. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Additional CPU-Related Parts to Order with RMA Replacement CPUs

When a return material authorization (RMA) of the CPU is done on a Cisco UCS C-Series server, additional parts might not be included with the CPU spare. The TAC engineer might need to add the additional parts to the RMA to help ensure a successful replacement.


Note

The following items apply to CPU replacement scenarios. If you are replacing a system chassis and moving existing CPUs to the new chassis, you do not have to separate the heatsink from the CPU. See Additional CPU-Related Parts to Order with RMA Replacement System Chassis.


  • Scenario 1—You are reusing the existing heatsinks:

    • Heat sink cleaning kit (UCSX-HSCK=)

      One cleaning kit can clean up to four CPUs.

    • Thermal interface material (TIM) kit for M5 servers (UCS-CPU-TIM=)

      One TIM kit covers one CPU.

  • Scenario 2—You are replacing the existing heatsinks:


    Caution

    Use only the correct heatsink for your CPUs to ensure proper cooling. There are two different heatsinks: UCSC-HS-C240M5= for CPUs 150 W and less; UCSC-HS2-C240M5= for CPUs above 150 W.
    • Heat sink: UCSC-HS-C240M5= for CPUs 150 W and less; UCSC-HS2-C240M5= for CPUs above 150 W

      New heatsinks have a pre-applied pad of TIM.

    • Heat sink cleaning kit (UCSX-HSCK=)

      One cleaning kit can clean up to four CPUs.

  • Scenario 3—You have a damaged CPU carrier (the plastic frame around the CPU):

    • CPU Carrier: UCS-M5-CPU-CAR=

    • #1 flat-head screwdriver (for separating the CPU from the heatsink)

    • Heatsink cleaning kit (UCSX-HSCK=)

      One cleaning kit can clean up to four CPUs.

    • Thermal interface material (TIM) kit for M5 servers (UCS-CPU-TIM=)

      One TIM kit covers one CPU.

A CPU heat sink cleaning kit is good for up to four CPU and heat sink cleanings. The cleaning kit contains two bottles of solution, one to clean the CPU and heat sink of old TIM and the other to prepare the surface of the heat sink.

New heat sink spares come with a pre-applied pad of TIM. It is important to clean any old TIM off the CPU surface prior to installing the heat sinks. Therefore, even when you are ordering new heat sinks, you must order the heat sink cleaning kit.

Additional CPU-Related Parts to Order with RMA Replacement System Chassis

When a return material authorization (RMA) of the system chassis is done on a Cisco UCS C-Series server, you move existing CPUs to the new chassis.


Note

Unlike previous generation CPUs, the M5 server CPUs do not require you to separate the heatsink from the CPU when you move the CPU-heatsink assembly. Therefore, no additional heatsink cleaning kit or thermal-interface material items are required.


  • The only tool required for moving a CPU/heatsink assembly is a T-30 Torx driver.

To move a CPU to a new chassis, use the procedure in Moving an M5 Generation CPU.

Moving an M5 Generation CPU

Tool required for this procedure: T-30 Torx driver


Caution

When you receive a replacement server for an RMA, it includes dust covers on all CPU sockets. These covers protect the socket pins from damage during shipping. You must transfer these covers to the system that you are returning, as described in this procedure.


Procedure

Step 1

When moving an M5 CPU to a new server, you do not have to separate the heatsink from the CPU. Perform the following steps:

  1. Use a T-30 Torx driver to loosen the four captive nuts that secure the assembly to the board standoffs.

    Note 
    Alternate loosening the heatsink nuts evenly so that the heatsink remains level as it is raised. Loosen the heatsink nuts in the order shown on the heatsink label: 4, 3, 2, 1.
  2. Lift straight up on the CPU/heatsink assembly to remove it from the board.

  3. Set the CPUs with heatsinks aside on an anti-static surface.

    Figure 22. Removing the CPU/Heatsink Assembly

    1

    Heatsink

    4

    CPU socket on motherboard

    2

    Heatsink captive nuts (two on each side)

    5

    T-30 Torx driver

    3

    CPU carrier (below heatsink in this view)

    -

Step 2

Transfer the CPU socket covers from the new system to the system that you are returning:

  1. Remove the socket covers from the replacement system. Grasp the two recessed finger-grip areas marked "REMOVE" and lift straight up.

    Note 

    Keep a firm grasp on the finger-grip areas at both ends of the cover. Do not make contact with the CPU socket pins.

    Figure 23. Removing a CPU Socket Dust Cover

    1

    Finger-grip areas marked "REMOVE"

    -

  2. With the wording on the dust cover facing up, set it in place over the CPU socket. Make sure that all alignment posts on the socket plate align with the cutouts on the cover.

    Caution 

    In the next step, do not press down anywhere on the cover except the two points described. Pressing elsewhere might damage the socket pins.

  3. Press down on the two circular markings next to the word "INSTALL" that are closest to the two threaded posts (see the following figure). Press until you feel and hear a click.

    Note 

    You must press until you feel and hear a click to ensure that the dust covers do not come loose during shipping.

    Figure 24. Installing a CPU Socket Dust Cover

    -

    Press down on the two circular marks next to the word INSTALL.

    -

Step 3

Install the CPUs to the new system:

  1. On the new board, align the assembly over the CPU socket, as shown below.

    Note the alignment features. The pin 1 angled corner on the heatsink must align with the pin 1 angled corner on the CPU socket. The CPU-socket posts must align with the guide-holes in the assembly.

    Figure 25. Installing the Heatsink/CPU Assembly to the CPU Socket

    1

    Guide hole in assembly (two)

    4

    Angled corner on heatsink (pin 1 alignment feature)

    2

    CPU socket alignment post (two)

    5

    Angled corner on socket (pin 1 alignment feature)

    3

    CPU socket leaf spring

    -

  2. On the new board, set the heatsink with CPU assembly down onto the CPU socket.

  3. Use a T-30 Torx driver to tighten the four captive nuts that secure the heatsink to the board standoffs.

    Note 

    Alternate tightening the heatsink nuts evenly so that the heatsink remains level while it is lowered. Tighten the heatsink nuts in the order shown on the heatsink label: 1, 2, 3, 4. The captive nuts must be fully tightened so that the leaf springs on the CPU socket lie flat.


Replacing Memory DIMMs


Caution

DIMMs and their sockets are fragile and must be handled with care to avoid damage during installation.



Caution

Cisco does not support third-party DIMMs. Using non-Cisco DIMMs in the server might result in system problems or damage to the motherboard.



Note

To ensure the best server performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace DIMMs.


DIMM Population Rules and Memory Performance Guidelines

This topic describes the rules and guidelines for maximum memory performance.

DIMM Slot Numbering

The following figure shows the numbering of the DIMM slots on the motherboard.

Figure 26. DIMM Slot Numbering
DIMM Population Rules

Observe the following guidelines when installing or replacing DIMMs for maximum performance:

  • Each CPU supports six memory channels.

    • CPU 1 supports channels A, B, C, D, E, F.

    • CPU 2 supports channels G, H, J, K, L, M.

  • Each channel has two DIMM sockets (for example, channel A = slots A1, A2).

  • In a single-CPU configuration, populate the channels for CPU1 only (A, B, C, D, E, F).

  • For optimal performance, populate DIMMs in the order shown in the following table, depending on the number of CPUs and the number of DIMMs per CPU. If your server has two CPUs, balance DIMMs evenly across the two CPUs as shown in the table. A quick-reference sketch that encodes this population order appears after the DIMM mixing rules below.


    Note

    The table below lists recommended configurations. Using 5, 7, 9, 10, or 11 DIMMs per CPU is not recommended.


    Table 4. DIMM Population Order

    Number of DIMMs per CPU (Recommended Configurations)

    Populate CPU 1 Slots

    Populate CPU 2 Slots

    Blue #1 Slots

    Black #2 Slots

    Blue #1 Slots

    Black #2 Slots

    1

    (A1)

    -

    (G1)

    -

    2

    (A1, B1)

    -

    (G1, H1)

    -

    3

    (A1, B1, C1)

    -

    (G1, H1, J1)

    -

    4

    (A1, B1); (D1, E1)

    -

    (G1, H1); (K1, L1)

    -

    6

    (A1, B1); (C1, D1); (E1, F1)

    -

    (G1, H1); (J1, K1); (L1, M1)

    -

    8

    (A1, B1); (D1, E1)

    (A2, B2); (D2, E2)

    (G1, H1); (K1, L1)

    (G2, H2); (K2, L2)

    12

    (A1, B1); (C1, D1); (E1, F1)

    (A2, B2); (C2, D2); (E2, F2)

    (G1, H1); (J1, K1); (L1, M1)

    (G2, H2); (J2, K2); (L2, M2)

  • The maximum combined memory allowed in the 12 DIMM slots controlled by any one CPU is 768 GB (for example, twelve 64 GB DIMMs). To populate the 12 DIMM slots with more than 768 GB of combined memory, you must use a high-memory CPU that has a PID that ends with an "M", for example, UCS-CPU-6134M.

  • Memory mirroring reduces the amount of memory available by 50 percent because only one of the two populated channels provides data. When memory mirroring is enabled, you must install DIMMs in even numbers of channels.

  • NVIDIA M-Series GPUs are supported only in servers with less than 1 TB of memory.

  • NVIDIA P-Series GPUs are supported in servers with 1 TB or more of memory.

  • AMD FirePro S7150 X2 GPUs are supported only in servers with less than 1 TB of memory.

  • Observe the DIMM mixing rules shown in the following table.

    Table 5. DIMM Mixing Rules

    DIMM Parameter

    DIMMs in the Same Channel

    DIMMs in the Same Bank

    DIMM Capacity

    For example, 8 GB, 16 GB, 32 GB, 64 GB, 128 GB

    You can mix different capacity DIMMs in the same channel (for example, A1, A2).

    You cannot mix DIMMs with different capacities or revisions in the same bank (for example, A1, B1). The revision value depends on the manufacturer; two DIMMs with the same PID can have different revisions.

    DIMM speed

    For example, 2666 MHz

    You can mix speeds, but all DIMMs run at the speed of the slowest DIMM or CPU installed in the channel.

    You cannot mix DIMMs with different speeds or revisions in the same bank (for example, A1, B1). The revision value depends on the manufacturer; two DIMMs with the same PID can have different revisions.

    DIMM type

    RDIMMs or LRDIMMs

    You cannot mix DIMM types in a channel.

    You cannot mix DIMM types in a bank.
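
As a quick reference for scripting, the population order in Table 4 can be encoded in a small lookup. The following Python sketch is an illustration only, not a Cisco utility; it reproduces Table 4 directly and rejects the DIMM counts that the table does not recommend:

    # Encodes Table 4: recommended slot population per number of DIMMs per CPU.
    POPULATION_ORDER = {
        1:  ["A1"],
        2:  ["A1", "B1"],
        3:  ["A1", "B1", "C1"],
        4:  ["A1", "B1", "D1", "E1"],
        6:  ["A1", "B1", "C1", "D1", "E1", "F1"],
        8:  ["A1", "B1", "D1", "E1", "A2", "B2", "D2", "E2"],
        12: ["A1", "B1", "C1", "D1", "E1", "F1",
             "A2", "B2", "C2", "D2", "E2", "F2"],
    }

    # CPU 2 uses channels G, H, J, K, L, M in place of A, B, C, D, E, F.
    CPU2_CHANNELS = str.maketrans("ABCDEF", "GHJKLM")

    def slots_to_populate(dimms_per_cpu, cpu=1):
        """Return the recommended slots, or raise for non-recommended counts."""
        if dimms_per_cpu not in POPULATION_ORDER:
            raise ValueError(f"{dimms_per_cpu} DIMMs per CPU is not a recommended configuration")
        slots = POPULATION_ORDER[dimms_per_cpu]
        return [s.translate(CPU2_CHANNELS) for s in slots] if cpu == 2 else slots

    print(slots_to_populate(8, cpu=2))  # ['G1', 'H1', 'K1', 'L1', 'G2', 'H2', 'K2', 'L2']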

Memory Mirroring

The CPUs in the server support memory mirroring only when an even number of channels are populated with DIMMs. If one or three channels are populated with DIMMs, memory mirroring is automatically disabled.

Memory mirroring reduces the amount of memory available by 50 percent because only one of the two populated channels provides data. The second, duplicate channel provides redundancy.

Replacing DIMMs

Identifying a Faulty DIMM

Each DIMM socket has a corresponding DIMM fault LED, directly in front of the DIMM socket. See Internal Diagnostic LEDs for the locations of these LEDs. When the server is in standby power mode, these LEDs light amber to indicate a faulty DIMM.

Procedure

Step 1

Remove an existing DIMM:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

  4. Remove the air baffle that covers the front ends of the DIMM slots to provide clearance.

  5. Locate the DIMM that you are removing, and then open the ejector levers at each end of its DIMM slot.

Step 2

Install a new DIMM:

Note 

Before installing DIMMs, see the memory population rules for this server: DIMM Population Rules and Memory Performance Guidelines.

  1. Align the new DIMM with the empty slot on the motherboard. Use the alignment feature in the DIMM slot to correctly orient the DIMM.

  2. Push down evenly on the top corners of the DIMM until it is fully seated and the ejector levers on both ends lock into place.

  3. Replace the top cover to the server.

  4. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing Intel Optane DC Persistent Memory Modules

This topic contains information for replacing Intel Optane Data Center Persistent memory modules (DCPMMs), including population rules and methods for verifying functionality. DCPMMs have the same form-factor as DDR4 DIMMs and they install to DIMM slots.


Caution

DCPMMs and their sockets are fragile and must be handled with care to avoid damage during installation.



Note

To ensure the best server performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace DCPMMs.



Note

Intel Optane DC persistent memory modules require Second Generation Intel Xeon Scalable processors. You must upgrade the server firmware and BIOS to version 4.0(4) or later and install the supported Second Generation Intel Xeon Scalable processors before installing DCPMMs.


DCPMMs can be configured to operate in one of three modes (a provisioning sketch follows this list):

  • Memory Mode: The module operates as a 100% memory module. Data is volatile, and DRAM acts as a cache for the DCPMMs.

  • App Direct Mode: The module operates as a solid-state disk storage device. Data is saved and is non-volatile.

  • Mixed Mode (25% Memory Mode + 75% App Direct): The module operates with 25% capacity used as volatile memory and 75% capacity used as non-volatile storage.
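
The operating mode is set by provisioning a memory-allocation goal on the modules. As a rough illustration, the following Python sketch maps each mode to a goal created with Intel's open-source ipmctl utility from the OS. The ipmctl flag syntax shown is an assumption based on common ipmctl releases; verify it against your version, and note that a new goal takes effect only after a reboot:

    import subprocess

    # Assumed ipmctl goal arguments for each DCPMM mode (verify against
    # your ipmctl version). Requires root privileges on the host OS.
    GOALS = {
        "memory":     ["ipmctl", "create", "-goal", "MemoryMode=100"],
        "app-direct": ["ipmctl", "create", "-goal", "PersistentMemoryType=AppDirect"],
        "mixed":      ["ipmctl", "create", "-goal", "MemoryMode=25",
                       "PersistentMemoryType=AppDirect"],
    }

    def set_dcpmm_goal(mode):
        """Create a goal for the requested mode; reboot the host to apply it."""
        subprocess.run(GOALS[mode], check=True)

    # Example: 25% volatile memory + 75% persistent storage (Mixed Mode).
    # set_dcpmm_goal("mixed")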

Intel Optane DC Persistent Memory Module Population Rules and Performance Guidelines

This topic describes the rules and guidelines for maximum memory performance when using Intel Optane DC persistent memory modules (DCPMMs) with DDR4 DRAM DIMMs.

DIMM Slot Numbering

The following figure shows the numbering of the DIMM slots on the server motherboard.

Figure 27. DIMM Slot Numbering
Configuration Rules

Observe the following rules and guidelines:

  • To use DCPMMs in this server, two CPUs must be installed.

  • Intel Optane DC persistent memory modules require Second Generation Intel Xeon Scalable processors. You must upgrade the server firmware and BIOS to version 4.0(4) or later and then install the supported Second Generation Intel Xeon Scalable processors before installing DCPMMs.

  • The DCPMMs run at 2666 MHz. If you have 2933 MHz RDIMMs or LRDIMMs in the server and you add DCPMMs, the main memory speed clocks down to 2666 MHz to match the speed of the DCPMMs.

  • Each DCPMM draws 18 W sustained, with a 20 W peak.

  • When using DCPMMs in a server:

    • The DDR4 DIMMs installed in the server must all be the same size.

    • The DCPMMs installed in the server must all be the same size and must have the same SKU.

  • The following table shows supported DCPMM configurations for this server. Fill the DIMM slots for CPU 1 and CPU 2 as shown, depending on which DCPMM:DRAM ratio you want to populate.

Figure 28. Supported DCPMM Configurations for Dual-CPU Configurations

Installing Intel Optane DC Persistent Memory Modules


Note

DCPMM configuration is always applied to all DCPMMs in a region, including a replacement DCPMM. You cannot provision a specific replacement DCPMM on a preconfigured server.

Understand which mode your DCPMM is operating in. App Direct mode has some additional considerations in this procedure.



Caution

Replacing a DCPMM in App-Direct mode requires all data to be wiped from the DCPMM. Make sure to back up or offload data before attempting this procedure.


Procedure

Step 1

For App Direct mode, back up the existing data stored in all Optane DIMMs to some other storage.

Step 2

For App Direct mode, remove the Persistent Memory policy, which automatically removes goals and namespaces from all Optane DIMMs.

Step 3

Remove an existing DCPMM:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

  4. Remove the air baffle that covers the front ends of the DIMM slots to provide clearance.

    Caution 

    If you are moving DCPMMs with active data (persistent memory) from one server to another as in an RMA situation, each DCPMM must be installed to the identical position in the new server. Note the positions of each DCPMM or temporarily label them when removing them from the old server.

  5. Locate the DCPMM that you are removing, and then open the ejector levers at each end of its DIMM slot.

Step 4

Install a new DCPMM:

Note 

Before installing DCPMMs, see the population rules for this server: Intel Optane DC Persistent Memory Module Population Rules and Performance Guidelines.

  1. Align the new DCPMM with the empty slot on the motherboard. Use the alignment feature in the DIMM slot to correctly orient the DCPMM.

  2. Push down evenly on the top corners of the DCPMM until it is fully seated and the ejector levers on both ends lock into place.

  3. Reinstall the air baffle.

  4. Replace the top cover to the server.

  5. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Step 5

Perform post-installation actions:

Note 

If your Persistent Memory policy is Host Controlled, you must perform the following actions from the OS side.

  • If the existing configuration is in 100% Memory mode, and the new DCPMM is also in 100% Memory mode (the factory default), the only action is to ensure that all DCPMMs are at the latest, matching firmware level.

  • If the existing configuration is fully or partly in App-Direct mode and the new DCPMM is also in App-Direct mode, ensure that all DCPMMs are at the latest, matching firmware level, and re-provision the DCPMMs by creating a new goal.

    • For App Direct mode, reapply the Persistent Memory policy.

    • For App Direct mode, restore all the offloaded data to the DCPMMs.

  • If the existing configuration and the new DCPMM are in different modes, ensure that all DCPMMs are at the latest, matching firmware level, and re-provision the DCPMMs by creating a new goal.

There are a number of tools for configuring goals, regions, and namespaces.
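
On Linux, for example, Intel's open-source ipmctl and ndctl utilities cover most of these tasks. The following Python sketch is an illustration under the assumption that both tools are installed and run with root privileges (command syntax can vary by tool version); it inventories the modules, lists App Direct regions, and creates a namespace:

    import subprocess

    def run(cmd):
        """Run a command and return its stdout."""
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Inventory the modules; useful for confirming health and firmware
    # levels after a replacement.
    print(run(["ipmctl", "show", "-dimm"]))

    # After a goal has been applied and the server rebooted, list the
    # resulting App Direct regions.
    print(run(["ipmctl", "show", "-region"]))

    # Create a namespace in a region with ndctl. fsdax mode exposes the
    # persistent memory as a DAX-capable block device; "region0" is an
    # assumed name, so substitute a region from the listing above.
    print(run(["ndctl", "create-namespace", "--mode=fsdax", "--region=region0"]))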


Server BIOS Setup Utility Menu for DCPMM


Caution

Potential data loss: If you change the mode of a currently installed DCPMM from App Direct or Mixed Mode to Memory Mode, any data in persistent memory is deleted.


DCPMMs can be configured by using the server's BIOS Setup Utility, Cisco IMC, Cisco UCS Manager, or OS-related utilities.

The server BIOS Setup Utility includes menus for DCPMMs. They can be used to view or configure DCPMM regions, goals, and namespaces, and to update DCPMM firmware.

To open the BIOS Setup Utility, press F2 when prompted onscreen during a system boot.

The DCPMM menu is on the Advanced tab of the utility:

Advanced > Intel Optane DC Persistent Memory Configuration

From this tab, you can access other menus:

  • DIMMs: Displays the installed DCPMMs. From this page, you can update DCPMM firmware and configure other DCPMM parameters.

    • Monitor health

    • Update firmware

    • Configure security

      You can enable security mode and set a password so that the DCPMM configuration is locked. When you set a password, it applies to all installed DCPMMs. Security mode is disabled by default.

    • Configure data policy

  • Regions: Displays regions and their persistent memory types. When using App Direct mode with interleaving, the number of regions is equal to the number of CPU sockets in the server. When using App Direct mode without interleaving, the number of regions is equal to the number of DCPMMs in the server.

    From the Regions page, you can configure memory goals that tell the DCPMM how to allocate resources.

    • Create goal config

  • Namespaces: Displays namespaces and allows you to create or delete them when persistent memory is used. Namespaces can also be created when creating goals. A namespace provisions persistent memory from the selected region only.

    Existing namespace attributes such as the size cannot be modified. You can only add or delete namespaces.

  • Total capacity: Displays the total DCPMM resource allocation across the server.

Updating the DCPMM Firmware Using the BIOS Setup Utility

You can update the DCPMM firmware from the BIOS Setup Utility if you know the path to the .bin files. The firmware update is applied to all installed DCPMMs.

  1. Navigate to Advanced > Intel Optane DC Persistent Memory Configuration > DIMMs > Update firmware

  2. Under File:, provide the file path to the .bin file.

  3. Select Update.
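
The same firmware image can also be staged from the OS. The following Python sketch is a minimal illustration, assuming Intel's ipmctl utility is installed and that the image path is valid; verify the exact flags against your ipmctl version:

    import subprocess

    FW_IMAGE = "/root/dcpmm-firmware.bin"  # assumed path to the .bin file

    # Show the firmware version currently active on each module.
    subprocess.run(["ipmctl", "show", "-dimm", "-firmware"], check=True)

    # Stage the new image on all installed DCPMMs; as with the BIOS
    # utility, the update applies to every module. Reboot to activate.
    subprocess.run(["ipmctl", "load", "-source", FW_IMAGE, "-dimm"], check=True)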

Replacing a Mini-Storage Module

The mini-storage module plugs into a motherboard socket to provide additional internal storage. The module is available in two different versions:

  • SD card carrier—provides two SD card sockets.

  • M.2 SSD Carrier—provides two M.2 form-factor SSD sockets.


Note

The Cisco IMC firmware does not include an out-of-band management interface for the M.2 drives installed in the M.2 version of this mini-storage module (UCS-MSTOR-M2). The M.2 drives are not listed in Cisco IMC inventory, nor can they be managed by Cisco IMC. This is expected behavior.


Replacing a Mini-Storage Module Carrier

This topic describes how to remove and replace a mini-storage module carrier. The carrier has one media socket on its top and one socket on its underside. Use the following procedure for any type of mini-storage module carrier (SD card or M.2 SSD).

Procedure

Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the server from the rack.
Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Remove a carrier from its socket:

  1. Locate the mini-storage module carrier in its socket just in front of power supply 1.

  2. Push outward on the securing clips that hold each end of the carrier.

  3. Lift both ends of the carrier to disengage it from the socket on the motherboard.

  4. Set the carrier on an anti-static surface.

Step 5

Install a carrier to its socket:

  1. Position the carrier over the socket, with the carrier's connector facing down. The two alignment pegs on the motherboard must align with the two holes in the carrier.

  2. Gently push down the socket end of the carrier so that the two pegs go through the two holes on the carrier.

  3. Push down on the carrier so that the securing clips click over it at both ends.

Step 6

Replace the top cover to the server.

Step 7

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 29. Mini-Storage Module Carrier Socket

1

Location of socket on motherboard

3

Securing clips

2

Alignment pegs

-


Replacing an SD Card in a Mini-Storage Carrier For SD

This topic describes how to remove and replace an SD card in a mini-storage carrier for SD (PID UCS-MSTOR-SD). The carrier has one SD card socket on its top and one socket on its underside.

Population Rules For Mini-Storage SD Cards

  • You can use one or two SD cards in the carrier.

  • Dual SD cards can be configured in a RAID 1 array through the Cisco IMC interface.

  • SD socket 1 is on the top side of the carrier; SD socket 2 is on the underside of the carrier (the same side as the carrier's motherboard connector).

Procedure

Step 1

Power off the server and then remove the mini-storage module carrier from the server as described in Replacing a Mini-Storage Module Carrier.

Step 2

Remove an SD card:

  1. Push on the top of the SD card, and then release it to allow it to spring out from the socket.

  2. Grasp and remove the SD card from the socket.

Step 3

Install a new SD card:

  1. Insert the new SD card into the socket with its label side facing up (away from the carrier).

  2. Press on the top of the SD card until it clicks in the socket and stays in place.

Step 4

Install the mini-storage module carrier back into the server and then power it on as described in Replacing a Mini-Storage Module Carrier.


Replacing an M.2 SSD in a Mini-Storage Carrier For M.2

This topic describes how to remove and replace an M.2 SATA or NVMe SSD in a mini-storage carrier for M.2 (PID UCS-MSTOR-M2). The carrier has one M.2 SSD socket on its top and one socket on its underside.

Population Rules For Mini-Storage M.2 SSDs

  • Both M.2 SSDs must be either SATA or NVMe; do not mix types in the carrier.

  • You can use one or two M.2 SSDs in the carrier.

  • M.2 socket 1 is on the top side of the carrier; M.2 socket 2 is on the underside of the carrier (the same side as the carrier's motherboard connector).

  • Dual SATA M.2 SSDs can be configured in a RAID 1 array through the BIOS Setup Utility's embedded SATA RAID interface. See Embedded SATA RAID Controller.


    Note

    You cannot control the M.2 SATA SSDs in the server with a HW RAID controller.



    Note

    The embedded SATA RAID controller requires that the server is set to boot in UEFI mode rather than Legacy mode.


Procedure

Step 1

Power off the server and then remove the mini-storage module carrier from the server as described in Replacing a Mini-Storage Module Carrier.

Step 2

Remove an M.2 SSD:

  1. Use a #1 Phillips-head screwdriver to remove the single screw that secures the M.2 SSD to the carrier.

  2. Remove the M.2 SSD from its socket on the carrier.

Step 3

Install a new M.2 SSD:

  1. Insert the new M.2 SSD connector-end into the socket on the carrier with its label side facing up.

  2. Press the M.2 SSD flat against the carrier.

  3. Install the single screw that secures the end of the M.2 SSD to the carrier.

Step 4

Install the mini-storage module carrier back into the server and then power it on as described in Replacing a Mini-Storage Module Carrier.


Replacing a Micro SD Card

There is one socket for a Micro SD card on the top of PCIe riser 1.


Caution

To avoid data loss, we do not recommend that you hot-swap the Micro SD card while it is operating, as indicated by its activity LED turning amber. The activity LED turns amber when the Micro SD card is updating or deleting.

Procedure


Step 1

Remove an existing Micro SD card:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

  4. Locate the Micro SD card. The socket is on the top of PCIe riser 1, under a plastic cover.

  5. Use your fingertip to push the retainer on the plastic socket cover open far enough to provide access to the Micro SD card, then push down and release the Micro SD card to make it spring up.

  6. Grasp the Micro SD card and lift it from the socket.

Step 2

Install a new Micro SD card:

  1. While holding the retainer on the plastic cover open with your fingertip, align the new Micro SD card with the socket.

  2. Gently push down on the card until it clicks and locks in place in the socket.

  3. Replace the top cover to the server.

  4. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 30. Location of Internal Micro SD Card Socket

1

Location of Micro SD card socket on the top of PCIe riser 1

3

Plastic retainer (push aside to access socket)

2

Micro SD card socket under plastic retainer

4

Micro SD activity LED


Replacing an Internal USB Drive

This section includes procedures for installing a USB drive and for enabling or disabling the internal USB port.

Replacing a USB Drive


Caution

We do not recommend that you hot-swap the internal USB drive while the server is powered on because of the potential for data loss.
Procedure

Step 1

Remove an existing internal USB drive:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

  4. Locate the USB socket on the motherboard, in front of the power supplies.

  5. Grasp the USB drive and pull it vertically to free it from the socket.

Step 2

Install a new internal USB drive:

  1. Align the USB drive with the socket.

  2. Push the USB drive vertically to fully engage it with the socket.

  3. Replace the top cover to the server.

  4. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 31. Location of Internal USB Port

1

Location of vertical USB socket on motherboard

-


Enabling or Disabling the Internal USB Port

The factory default is that all USB ports on the server are enabled. However, the internal USB port can be enabled or disabled in the server BIOS.

Procedure

Step 1

Enter the BIOS Setup Utility by pressing the F2 key when prompted during bootup.

Step 2

Navigate to the Advanced tab.

Step 3

On the Advanced tab, select USB Configuration.

Step 4

On the USB Configuration page, select USB Ports Configuration.

Step 5

Scroll to USB Port: Internal, press Enter, and then choose either Enabled or Disabled from the dialog box.

Step 6

Press F10 to save and exit the utility.


Replacing the RTC Battery


Warning

There is danger of explosion if the battery is replaced incorrectly. Replace the battery only with the same or equivalent type recommended by the manufacturer. Dispose of used batteries according to the manufacturer’s instructions.

[Statement 1015]



Warning

Recyclers: Do not shred the battery! Make sure you dispose of the battery according to appropriate regulations for your country or locale.



Caution

Removing the RTC battery impacts the following:
  • The real-time clock is reset to its default value.

  • The CMOS settings of the server are lost. Reconfigure the system settings after you replace the RTC battery.


The real-time clock (RTC) battery retains system settings when the server is disconnected from power. The battery type is CR2032. Cisco supports the industry-standard CR2032 battery, which can be purchased from most electronic stores.

Procedure


Step 1

Remove the RTC battery:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

  4. Remove PCIe riser 1 from the server to provide clearance to the RTC battery socket that is on the motherboard. See Replacing a PCIe Riser.

  5. Locate the horizontal RTC battery socket.

  6. Remove the battery from the socket on the motherboard. Gently pry the securing clip to the side to provide clearance, then lift up on the battery.

Step 2

Install a new RTC battery:

  1. Insert the battery into its socket and press down until it clicks in place under the clip.

    Note 

    The positive side of the battery marked “3V+” should face up.

  2. Replace PCIe riser 1 to the server. See Replacing a PCIe Riser.

  3. Replace the top cover to the server.

  4. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 32. RTC Battery Location on Motherboard

1

RTC battery in horizontal socket on motherboard

-


Replacing Power Supplies

The server can have one or two power supplies. When two power supplies are installed they are redundant as 1+1.

This section includes procedures for replacing AC and DC power supply units.

Replacing AC Power Supplies


Note

If you have ordered a server with power supply redundancy (two power supplies), you do not have to power off the server to replace a power supply because they are redundant as 1+1.

Note

Do not mix power supply types or wattages in the server. Both power supplies must be identical.
Procedure

Step 1

Remove the power supply that you are replacing or a blank panel from an empty bay:

  1. Perform one of the following actions:

    • If you are replacing a power supply in a server that has only one power supply, shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

    • If you are replacing a power supply in a server that has two power supplies, you do not have to shut down the server because the power supplies are redundant as 1+1.

  2. Remove the power cord from the power supply that you are replacing.

  3. Grasp the power supply handle while pinching the release lever toward the handle.

  4. Pull the power supply out of the bay.

Step 2

Install a new power supply:

  1. Grasp the power supply handle and insert the new power supply into the empty bay.

  2. Push the power supply into the bay until the release lever locks.

  3. Connect the power cord to the new power supply.

  4. If you shut down the server, press the Power button to boot the server to main power mode.

Figure 33. Replacing AC Power Supplies

1

Power supply release lever

2

Power supply handle


Replacing DC Power Supplies


Note

This procedure is for replacing DC power supplies in a server that already has DC power supplies installed. If you are installing DC power supplies to the server for the first time, see Installing DC Power Supplies (First Time Installation).



Warning

A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.

Statement 1022



Warning

This product requires short-circuit (overcurrent) protection, to be provided as part of the building installation. Install only in accordance with national and local wiring regulations.

Statement 1045



Warning

Installation of the equipment must comply with local and national electrical codes.

Statement 1074



Note

If you are replacing DC power supplies in a server with power supply redundancy (two power supplies), you do not have to power off the server to replace a power supply because they are redundant as 1+1.

Note

Do not mix power supply types or wattages in the server. Both power supplies must be identical.
Procedure

Step 1

Remove the DC power supply that you are replacing or a blank panel from an empty bay:

  1. Perform one of the following actions:

    • If you are replacing a power supply in a server that has only one DC power supply, shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

    • If you are replacing a power supply in a server that has two DC power supplies, you do not have to shut down the server.

  2. Remove the power cord from the power supply that you are replacing. Lift the connector securing clip slightly and then pull the connector from the socket on the power supply.

  3. Grasp the power supply handle while pinching the release lever toward the handle.

  4. Pull the power supply out of the bay.

Step 2

Install a new DC power supply:

  1. Grasp the power supply handle and insert the new power supply into the empty bay.

  2. Push the power supply into the bay until the release lever locks.

  3. Connect the power cord to the new power supply. Press the connector into the socket until the securing clip clicks into place.

  4. If you shut down the server, press the Power button to boot the server to main power mode.

Figure 34. Replacing DC Power Supplies

1

Keyed cable connector (CAB-48DC-40A-8AWG)

3

PSU status LED

2

Keyed DC input socket

-


Installing DC Power Supplies (First Time Installation)


Note

This procedure is for installing DC power supplies to the server for the first time. If you are replacing DC power supplies in a server that already has DC power supplies installed, see Replacing DC Power Supplies.



Warning

A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.

Statement 1022



Warning

This product requires short-circuit (overcurrent) protection, to be provided as part of the building installation. Install only in accordance with national and local wiring regulations.

Statement 1045



Warning

Installation of the equipment must comply with local and national electrical codes.

Statement 1074



Note

Do not mix power supply types or wattages in the server. Both power supplies must be identical.

Caution

As instructed in the first step of this wiring procedure, turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.
Procedure

Step 1

Turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.

Note 

The required DC input cable is Cisco part CAB-48DC-40A-8AWG. This 3-meter cable has a 3-pin connector on one end that is keyed to the DC input socket on the power supply. The other end of the cable has no connector so that you can wire it to your facility’s DC power.

Step 2

Wire the non-terminated end of the cable to your facility’s DC power input source.

Step 3

Connect the terminated end of the cable to the socket on the power supply. The connector is keyed so that the wires align for correct polarity and ground.

Step 4

Restore DC power at your facility’s circuit breaker.

Step 5

Press the Power button to boot the server to main power mode.

Figure 35. Replacing DC Power Supplies

1

Keyed cable connector (CAB-48DC-40A-8AWG)

3

PSU status LED

2

Keyed DC input socket

-

Step 6

See Grounding for DC Power Supplies for information about additional chassis grounding.


Grounding for DC Power Supplies

AC power supplies have internal grounding, so no additional grounding is required when the supported AC power cords are used.

When using a DC power supply, additional grounding of the server chassis to the earth ground of the rack is available. Two screw holes for use with your dual-hole grounding lug and grounding wire are supplied on the chassis rear panel.


Note

The grounding points on the chassis are sized for 10-32 screws. You must provide your own screws, grounding lug, and grounding wire. The grounding lug must be dual-hole lug that fits 10-32 screws. The grounding cable that you provide must be 14 AWG (2 mm), minimum 60° C wire, or as permitted by the local code.

Replacing a PCIe Riser

This server has two toolless PCIe risers for horizontal installation of PCIe cards. Each riser is available in multiple versions. See PCIe Slot Specifications for detailed descriptions of the slots and features in each riser version.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the server from the rack.
Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Remove the PCIe riser that you are replacing:

  1. Grasp the flip-up handle on the riser and the blue forward edge, and then lift up evenly to disengage its circuit board from the socket on the motherboard. Set the riser on an antistatic surface.

  2. If the riser has a card installed, remove the card from the riser. See Replacing a PCIe Card.

Step 5

Install a new PCIe riser:

Note 

The PCIe risers are not interchangeable. If you plug a PCIe riser into the wrong socket, the server will not boot. Riser 1 must plug into the motherboard socket labeled “RISER1.” Riser 2 must plug into the motherboard socket labeled “RISER2.”

  1. If you removed a card from the old PCIe riser, install the card to the new riser. See Replacing a PCIe Card.

  2. Position the PCIe riser over its socket on the motherboard and over its alignment slots in the chassis.

  3. Carefully push down on both ends of the PCIe riser to fully engage its circuit board connector with the socket on the motherboard.

Step 6

Replace the top cover to the server.

Step 7

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 36. PCIe Riser Alignment Features

1

Riser handling points (flip-up handle and blue forward edge)

3

Riser 1 alignment features in chassis

2

Riser 2 alignment features in chassis


Replacing a PCIe Card


Note

Cisco supports all PCIe cards qualified and sold by Cisco. PCIe cards not qualified or sold by Cisco are the responsibility of the customer. Although Cisco will always stand behind and support the C-Series rack-mount servers, customers using standard, off-the-shelf, third-party cards must go to the third-party card vendor for support if any issue with that particular card occurs.

PCIe Slot Specifications

The server contains two toolless PCIe risers for horizontal installation of PCIe cards. Each riser is orderable in multiple versions.

  • Riser 1 contains PCIe slots 1, 2, and 3 and is available in two different options:

    • Option 1—Slots 1 (x8), 2 (x16), and 3 (x8). Slots 1 and 2 are controlled by CPU 1; slot 3 is controlled by CPU 2 and is unavailable in a single-CPU configuration.

    • Option 1B—Slots 1 (x8), 2 (x8), and 3 (x8). All slots are controlled by CPU 1.

  • Riser 2 contains PCIe slots 4, 5, and 6 and is available in four different options:

    • Option 2A—Slots 4 (x16), 5 (x16), and 6 (x8).

    • Option 2B—Slots 4 (x8), 5 (x16), and 6 (x8); includes one PCIe cable connector for rear-loading NVMe SSDs.

    • Option 2C—Slots 4 (x8), 5 (x8), and 6 (x8); includes one PCIe cable connector for rear-loading NVMe SSDs, plus one PCIe cable connector for front-loading NVMe SSDs.

    • Option 2D—Slots 4 (x16), 5 (x8), and 6 (x8); includes one PCIe cable connector for rear-loading NVMe SSDs.


      Note

      Riser 2D is shipped only in the NVMe-optimized server version UCSC-C240-M5SN; it is not orderable separately. In the UCSC-C240-M5SN configuration, PCIe slot 4 is dedicated for the NVMe switch card that controls front-loading NVMe drives in front bays 1 - 8.


Figure 37. Rear Panel, Showing PCIe Slot Numbering

The following tables describe the specifications for the slots.

Table 6. PCIe Riser 1 (UCSC-PCI-1-C240M5) PCIe Expansion Slots

Slot Number

Electrical Lane Width

Connector Length

Maximum Card Length

Card Height (Rear Panel Opening)

NCSI Support

Double-Wide GPU Card Support

1

Gen-3 x8

x24 connector

¾ length

Full height

Yes 1

No

2

Gen-3 x16

x24 connector

Full length

Full height

Yes

Yes

3 2

Gen-3 x8

x16 connector

Full length

Full height

No

No

Micro SD card slot

One socket for Micro SD card on the top of the riser.

1 NCSI is supported in only one slot at a time. If a GPU card is present in slot 2, NCSI support automatically moves to slot 1.
2 Slot 3 is not available in a single-CPU system.
Table 7. PCIe Riser 1B (UCSC-PCI-1B-C240M5) PCIe Expansion Slots

Slot Number

Electrical Lane Width

Connector Length

Maximum Card Length

Card Height (Rear Panel Opening)

NCSI Support

Double-Wide GPU Card Support

1

Gen-3 x8

x24 connector

¾ length

Full height

Yes 3

No

2

Gen-3 x8

x24 connector

Full length

Full height

Yes

Yes

3

Gen-3 x8

x16 connector

Full length

Full height

No

No

Micro SD card slot

One socket for Micro SD card on top of the riser.

3 NCSI is supported in only one slot at a time. If a GPU card is present in slot 2, NCSI support automatically moves to slot 1.

Note

Riser 2 is not available in a single-CPU system.


Table 8. PCIe Riser 2A (UCSC-PCI-2A-C240M5) PCIe Expansion Slots

Slot Number

Electrical Lane Width

Connector Length

Maximum Card Length

Card Height (Rear Panel Opening)

NCSI Support

Double-Wide GPU Card Support

4

Gen-3 x16

x24 connector

¾ length

Full height

Yes 4

No

5

Gen-3 x16

x24 connector

Full length

Full height

Yes

Yes

6

Gen-3 x8

x16 connector

Full length

Full height

No

No

4 NCSI is supported in only one slot at a time. If a GPU card is present in slot 5, NCSI support automatically moves to slot 4.
Table 9. PCIe Riser 2B (UCSC-PCI-2B-C240M5) PCIe Expansion Slots

Slot Number

Electrical Lane Width

Connector Length

Maximum Card Length

Card Height (Rear Panel Opening)

NCSI Support

Double-Wide GPU Card Support

4

Gen-3 x8

x24 connector

¾ length

Full height

Yes 5

No

5

Gen-3 x16

x24 connector

Full length

Full height

Yes

Yes

6

Gen-3 x8

x16 connector

Full length

Full height

No

No

Cable connector for rear NVMe SSDs

Gen-3 x8

To rear drive backplane; supports rear-loading NVMe SSDs.

5 NCSI is supported in only one slot at a time. If a GPU card is present in slot 5, NCSI support automatically moves to slot 4.
Table 10. PCIe Riser 2C (UCSC-PCI-2C-C240M5) PCIe Expansion Slots

Slot Number

Electrical Lane Width

Connector Length

Maximum Card Length

Card Height (Rear Panel Opening)

NCSI Support

Double-Wide GPU Card Support

4

Gen-3 x8

x24 connector

¾ length

Full height

Yes 6

No

5

Gen-3 x8

x24 connector

Full length

Full height

Yes

No

6

Gen-3 x8

x16 connector

Full length

Full height

No

No

Cable connector for rear NVMe SSDs

Gen-3 x8

To rear drive backplane; supports rear-loading NVMe SSDs.

Cable connector for front NVMe SSDs

Gen-3 x8

To front drive backplane; supports front-loading NVMe SSDs.

6 NCSI is supported in only one slot at a time. If a GPU card is present in slot 5, NCSI support automatically moves to slot 4.
Table 11. PCIe Riser 2D (Not Orderable Separately) PCIe Expansion Slots

Slot Number

Electrical Lane Width

Connector Length

Maximum Card Length

Card Height (Rear Panel Opening)

NCSI Support

Double-Wide GPU Card Support

4

(Dedicated slot for NVMe-switch card)

Gen-3 x16

x24 connector

¾ length

Full height

Yes

No

5

Gen-3 x8

x24 connector

Full length

Full height

Yes 7

No

6

Gen-3 x8

x16 connector

Full length

Full height

No

No

Cable connector for rear NVMe SSDs

Gen-3 x8

To rear drive backplane; supports rear-loading NVMe SSDs.

7 NCSI is supported in only one slot at a time.

Note

Riser 2D is shipped only in the NVMe-optimized server version UCSC-C240-M5SN; it is not orderable separately. In the UCSC-C240-M5SN configuration, slot 4 is dedicated for the NVMe switch card that controls front-loading NVMe drives in front bays 1 - 8.


Replacing a PCIe Card


Note

If you are installing a Cisco UCS Virtual Interface Card, there are prerequisite considerations. See Cisco Virtual Interface Card (VIC) Considerations.



Note

RAID controller cards install into a dedicated motherboard socket. See Replacing a SAS Storage Controller Card (RAID or HBA).



Note

For instructions on installing or replacing double-wide GPU cards, see GPU Card Installation.


Procedure

Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the server from the rack.
Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Remove the PCIe card that you are replacing:

  1. Remove any cables from the ports of the PCIe card that you are replacing.

  2. Use two hands to flip up and grasp the blue riser handle and the blue fingergrip area on the front edge of the riser, and then lift straight up.

  3. On the bottom of the riser, push the release latch that holds the securing plate, and then swing the hinged securing plate open.

  4. Open the hinged card-tab retainer that secures the rear-panel tab of the card.

  5. Pull evenly on both ends of the PCIe card to remove it from the socket on the PCIe riser.

    If the riser has no card, remove the blanking panel from the rear opening of the riser.

Step 5

Install a new PCIe card:

  1. With the hinged card-tab retainer open, align the new PCIe card with the empty socket on the PCIe riser.

  2. Push down evenly on both ends of the card until it is fully seated in the socket.

  3. Ensure that the card’s rear panel tab sits flat against the riser rear-panel opening and then close the hinged card-tab retainer over the card’s rear-panel tab.

  4. Swing the hinged securing plate closed on the bottom of the riser. Ensure that the clip on the plate clicks into the locked position.

  5. Position the PCIe riser over its socket on the motherboard and over the chassis alignment channels.

  6. Carefully push down on both ends of the PCIe riser to fully engage its connector with the socket on the motherboard.

Step 6

Replace the top cover to the server.

Step 7

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 38. PCIe Riser Card Securing Mechanisms

1

Release latch on hinged securing plate

3

Hinged card-tab retainer

2

Hinged securing plate

-


Cisco Virtual Interface Card (VIC) Considerations

This section describes VIC card support and special considerations for this server.


Note

If you use the Cisco Card NIC mode, you must also make a VIC Slot setting that matches where your VIC is installed. The options are Riser1, Riser2, and Flex-LOM. See NIC Mode and NIC Redundancy Settings for more information about NIC modes.

If you want to use the Cisco UCS VIC card for Cisco UCS Manager integration, see also the Cisco UCS C-Series Server Integration with Cisco UCS Manager Guides for details about supported configurations, cabling, and other requirements.

Table 12. VIC Support and Considerations in This Server

VIC

How Many Supported in Server

Slots That Support VICs

Primary Slot For Cisco UCS Manager Integration

Primary Slot For Cisco Card NIC Mode

Minimum Cisco IMC Firmware

Cisco UCS VIC 1385

UCSC-PCIE-C40Q-03

2 PCIe

PCIe 2

PCIe 5

PCIe 2

PCIe 2

3.1(1)

Cisco UCS VIC 1455

UCSC-PCIE-C25Q-04

2 PCIe

PCIe 2

PCIe 5

PCIe 2

PCIe 2

4.0(1)

Cisco UCS VIC 1495

UCSC-PCIE-C100-04

2 PCIe

PCIe 2

PCIe 5

PCIe 2

PCIe 2

4.0(2)

Cisco UCS VIC 1387

UCSC-MLOM-C40Q-03

1 mLOM

mLOM

mLOM

mLOM

3.1(1)

Cisco UCS VIC 1457

UCSC-MLOM-C25Q-04

1 mLOM

mLOM

mLOM

mLOM

4.0(1)

Cisco UCS VIC 1497

UCSC-MLOM-C100-04

1 mLOM

mLOM

mLOM

mLOM

4.0(2)

  • A total of 3 VICs are supported in the server: 2 PCIe style, and 1 mLOM style.


    Note

    Single wire management is supported on only one VIC at a time. If multiple VICs are installed on a server, only one slot has NCSI enabled at a time. For single wire management, priority goes to the mLOM slot, then slot 2, then slot 5 for NCSI management traffic. When multiple cards are installed, connect the single-wire management cables in the priority order mentioned above.


  • The primary slot for a VIC card in PCIe riser 1 is slot 2. The secondary slot for a VIC card in PCIe riser 1 is slot 1.


    Note

    The NCSI protocol is supported in only one slot at a time in each riser. If a GPU card is present in slot 2, NCSI automatically shifts from slot 2 to slot 1.


  • The primary slot for a VIC card in PCIe riser 2 is slot 5. The secondary slot for a VIC card in PCIe riser 2 is slot 4.


    Note

    The NCSI protocol is supported in only one slot at a time in each riser. If a GPU card is present in slot 5, NCSI automatically shifts from slot 5 to slot 4.



    Note

    PCIe riser 2 is not available in a single-CPU system.


Replacing an mLOM Card

The server supports a modular LOM (mLOM) card to provide additional rear-panel connectivity. The mLOM socket is on the motherboard, under the storage controller card.

The mLOM socket provides a Gen-3 x16 PCIe lane. The socket remains powered when the server is in 12 V standby power mode and it supports the network communications services interface (NCSI) protocol.


Note

If your mLOM card is a Cisco UCS Virtual Interface Card (VIC), see Cisco Virtual Interface Card (VIC) Considerations for more information and support details.

Procedure


Step 1

Remove any existing mLOM card (or a blanking panel):

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

  4. Remove any storage controller (RAID or HBA card) to provide clearance to the mLOM socket on the motherboard. See Replacing a SAS Storage Controller Card (RAID or HBA).

  5. Loosen the single captive thumbscrew that secures the mLOM card to the threaded standoff on the chassis floor.

  6. Slide the mLOM card horizontally to free it from the socket, then lift it out of the server.

Step 2

Install a new mLOM card:

  1. Set the mLOM card on the chassis floor so that its connector is aligned with the motherboard socket.

  2. Push the card horizontally to fully engage the card's edge connector with the socket.

  3. Tighten the captive thumbscrew to secure the card to the chassis floor.

  4. Return the storage controller card to the server. See Replacing a SAS Storage Controller Card (RAID or HBA).

  5. Replace the top cover to the server.

  6. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 39. Location of the mLOM Card Socket Below the Storage Controller Card

1

Position of horizontal mLOM card socket

2

Position of mLOM card thumbscrew


Replacing a SAS Storage Controller Card (RAID or HBA)

For hardware-based storage control, the server can use a Cisco modular SAS RAID controller or SAS HBA that plugs into a dedicated, vertical socket on the motherboard.

Storage Controller Card Firmware Compatibility

Firmware on the storage controller (RAID or HBA) must be verified for compatibility with the current Cisco IMC and BIOS versions that are installed on the server. If not compatible, upgrade or downgrade the storage controller firmware using the Host Upgrade Utility (HUU) for your firmware release to bring it to a compatible level.


Note

For servers running in standalone mode only: After you replace controller hardware (UCSC-RAID-M5, UCSC-RAID-M5HD, UCSC-SAS-M5, or UCSC-SAS-M5HD), you must run the Cisco UCS Host Upgrade Utility (HUU) to update the controller firmware, even if the firmware Current Version is the same as the Update Version. This is necessary to program the controller's suboem-id to the correct value for the server SKU. If you do not do this, drive enumeration might not display correctly in the software. This issue does not affect servers controlled in UCSM mode.


See the HUU guide for your Cisco IMC release for instructions on downloading and using the utility to bring server components to compatible levels: HUU Guides.
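As a quick pre-check before deciding whether to run HUU, you can display the storage adapter inventory and its running firmware from the Cisco IMC CLI. This is a minimal sketch; the exact columns in the output vary by Cisco IMC release:

# Display the installed storage controllers and their firmware package versions
Server# scope chassis
Server /chassis # show storageadapter

Compare the reported firmware package against the versions listed in the HUU guide for your Cisco IMC release.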

Replacing a SAS Storage Controller Card (RAID or HBA)

For detailed information about storage controllers in this server, see Storage Controller Considerations.

The chassis includes a plastic mounting bracket that the card must be attached to before installation.

Procedure

Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

Step 2

Remove any existing storage controller card from the server:

Note 

The chassis includes a plastic mounting bracket that the card must be attached to before installation. During replacement, you must remove the old card from the bracket and then install the new card to the bracket before installing this assembly to the server.

  1. Disconnect SAS/SATA cables and any supercap cable from the existing card.

  2. Lift up on the card's blue ejector lever to unseat it from the motherboard socket.

  3. Lift straight up on the card's carrier frame to disengage the card from the motherboard socket and to disengage the frame from two pegs on the chassis wall.

  4. Remove the existing card from its plastic carrier bracket. Carefully push the retainer tabs aside and then lift the card from the bracket.

Step 3

Install a new storage controller card:

  1. Install the new card to the plastic carrier bracket. Make sure that the retainer tabs close over the edges of the card.

  2. Position the assembly over the chassis and align the card edge with the motherboard socket. At the same time, align the two slots on the back of the carrier bracket with the pegs on the chassis inner wall.

  3. Push on both corners of the card to seat its connector in the motherboard socket. At the same time, ensure that the slots on the carrier frame engage with the pegs on the inner chassis wall.

  4. Fully close the blue ejector lever on the card to lock the card into the socket.

  5. Connect SAS/SATA cables and any supercap cable to the new card.

    If this is a first-time installation, see Storage Controller Cable Connectors and Backplanes for cabling instructions.

Step 4

Replace the top cover to the server.

Step 5

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Step 6

If your server is running in standalone mode, use the Cisco UCS Host Upgrade Utility to update the controller firmware and program the correct suboem-id for the controller.

Note 

For servers running in standalone mode only: After you replace controller hardware (UCSC-RAID-M5, UCSC-RAID-M5HD, UCSC-SAS-M5, or UCSC-SAS-M5HD), you must run the Cisco UCS Host Upgrade Utility (HUU) to update the controller firmware, even if the firmware Current Version is the same as the Update Version. This is necessary to program the controller's suboem-id to the correct value for the server SKU. If you do not do this, drive enumeration might not display correctly in the software. This issue does not affect servers controlled in UCSM mode.

See the HUU guide for your Cisco IMC release for instructions on downloading and using the utility to bring server components to compatible levels: HUU Guides.

Figure 40. Replacing a Storage Controller Card

1

Blue ejector lever on card top edge

2

Pegs on inner chassis wall (two)


Replacing the Supercap (RAID Backup)

This server supports installation of one supercap unit. The unit mounts to a bracket on the removable air baffle.

In the case of a sudden power loss, the supercap provides power while the contents of the disk write-back cache DRAM are offloaded to NAND flash, where the data is retained for approximately three years.

Procedure


Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

Step 2

Remove an existing supercap:

  1. Disconnect the supercap cable from the existing supercap.

  2. Push aside the securing tab that holds the supercap to its bracket on the air baffle.

  3. Lift the supercap free of the bracket and set it aside.

Step 3

Install a new supercap:

  1. Push aside the black plastic securing tab on the air baffle and set the new supercap into the mounting bracket. Release the tab so that it closes over the top edge of the supercap.

  2. Connect the supercap cable from the RAID controller card to the connector on the new supercap.

Step 4

Replace the top cover to the server.

Step 5

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 41. Supercap Bracket on Air Baffle

1

Supercap bracket on removable air baffle

2

Securing tab


Replacing a SATA Interposer Card (8-Drive Server Only)


Note

The only version of this server that supports controlling front-loading drives with embedded SATA RAID is the SFF 8-drive version (UCSC-C240-M5S).


For software-based storage control that uses the server's embedded SATA controller to control front-loading drives, the server requires a SATA interposer card that plugs into a dedicated socket on the motherboard (the same socket used for SAS storage controllers).


Note

You cannot use a hardware RAID controller card and the embedded software RAID controller to control front drives at the same time. See Storage Controller Considerations for details about RAID support.


Procedure


Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

Step 2

Remove any existing SATA interposer card from the server:

Note 

A SATA interposer card for this server is preinstalled inside a plastic carrier-frame that helps to secure the card to the inner chassis wall. You do not have to remove this plastic carrier frame from the existing card.

  1. Disconnect PCIe cables from the existing card.

  2. Lift up on the card's blue ejector lever to unseat it from the motherboard socket.

  3. Lift straight up on the card's carrier frame to disengage the card from the motherboard socket and to disengage the frame from pegs on the chassis wall.

Step 3

Install a new SATA interposer card:

  1. Carefully align the card edge with the motherboard socket. At the same time, align the two slots on the back of the carrier frame with the pegs on the chassis inner wall.

  2. Push on both corners of the card to seat its connector in the motherboard socket. At the same time, ensure that the slots on the carrier frame engage with the pegs on the inner chassis wall.

  3. Fully close the blue ejector lever on the card to lock the card into the socket.

  4. Connect PCIe cables to the new card.

    If this is a first-time installation, see Storage Controller Cable Connectors and Backplanes for cabling instructions.

Step 4

Replace the top cover to the server.

Step 5

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 42. Replacing SATA Interposer Card

1

Blue ejector lever on card top edge

2

Pegs on inner chassis wall (four)


Replacing a Boot-Optimized M.2 RAID Controller Module

The Cisco Boot-Optimized M.2 RAID Controller module connects to the mini-storage module socket on the motherboard. It includes slots for two SATA M.2 drives, plus an integrated 6-Gbps SATA RAID controller that can control the SATA M.2 drives in a RAID 1 array.

Cisco Boot-Optimized M.2 RAID Controller Considerations

Review the following considerations:


Note

The Cisco Boot-Optimized M.2 RAID Controller is not supported when the server is used as a compute-only node in Cisco HyperFlex configurations.


  • Cisco IMC and Cisco UCS Manager version 4.0(4) or later is required to support this controller.

  • This controller supports RAID 1 (single volume) and JBOD mode.


    Note

    Do not use the server's embedded SW MegaRAID controller to configure RAID settings when using this controller module. Instead, you can use the following interfaces:

    • Cisco IMC 4.0(4a) and later

    • BIOS HII utility, BIOS 4.0(4a) and later

    • Cisco UCS Manager 4.0(4a) and later (UCS Manager-integrated servers)


  • A SATA M.2 drive in slot 1 (the top) is the first SATA device; a SATA M.2 drive in slot 2 (the underside) is the second SATA device.

    • The name of the controller in the software is MSTOR-RAID.

    • A drive in Slot 1 is mapped as drive 253; a drive in slot 2 is mapped as drive 254.

  • When using RAID, we recommend that both SATA M.2 drives be the same capacity. If drives of different capacities are used, the smaller of the two capacities is used to create the volume, and the rest of the drive space is unusable.

    JBOD mode supports mixed capacity SATA M.2 drives.

  • Hot-plug replacement is not supported. The server must be powered off.

  • Monitoring of the controller and installed SATA M.2 drives can be done using Cisco IMC and Cisco UCS Manager. They can also be monitored using other utilities such as UEFI HII, PMCLI, XMLAPI, and Redfish (see the sketch after this list).

  • Updating firmware of the controller and the individual drives: for standalone servers, use the Cisco Host Upgrade Utility (HUU); for servers integrated with Cisco UCS Manager, use the Cisco UCS Manager firmware management interface.

  • The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not supported.

  • If you replace a single SATA M.2 drive that was part of a RAID volume, rebuild of the volume is auto-initiated after the user accepts the prompt to import the configuration. If you replace both drives of a volume, you must create a RAID volume and manually reinstall any OS.

  • We recommend that you erase drive contents before creating volumes on used drives from another server. The configuration utility in the server BIOS includes a SATA secure-erase function.

  • The server BIOS includes a configuration utility specific to this controller that you can use to create and delete RAID volumes, view controller properties, and erase the physical drive contents. Access the utility by pressing F2 when prompted during server boot. Then navigate to Advanced > Cisco Boot Optimized M.2 RAID Controller.
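As an example of the out-of-band monitoring mentioned above, the controller inventory can be read through the standard Redfish API. This is a hedged sketch: <cimc-ip>, <user>, <password>, and <system-id> are placeholders, and the exact resource paths can vary by Cisco IMC release. The controller is reported under its software name, MSTOR-RAID:

# List the systems collection, then the storage resources of a system
curl -k -u <user>:<password> https://<cimc-ip>/redfish/v1/Systems
curl -k -u <user>:<password> https://<cimc-ip>/redfish/v1/Systems/<system-id>/Storage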

Replacing a Cisco Boot-Optimized M.2 RAID Controller

This topic describes how to remove and replace a Cisco Boot-Optimized M.2 RAID Controller. The controller board has one M.2 socket on its top (Slot 1) and one M.2 socket on its underside (Slot 2).

Procedure

Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the server from the rack.
Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Remove a controller from its motherboard socket:

  1. Locate the controller in its socket just in front of power supply 1.

  2. At each end of the controller board, push outward on the clip that secures the carrier.

  3. Lift both ends of the controller to disengage it from the socket on the motherboard.

  4. Set the carrier on an anti-static surface.

Figure 43. Cisco Boot-Optimized M.2 RAID Controller on Motherboard

1

Location of socket on motherboard

3

Securing clips

2

Alignment pegs

-

Step 5

If you are transferring SATA M.2 drives from the old controller to the replacement controller, do that before installing the replacement controller:

Note 

Any previously configured volume and data on the drives are preserved when the M.2 drives are transferred to the new controller. The system will boot the existing OS that is installed on the drives.

  1. Use a #1 Phillips-head screwdriver to remove the single screw that secures the M.2 drive to the carrier.

  2. Lift the M.2 drive from its socket on the carrier.

  3. Position the replacement M.2 drive over the socket on the controller board.

  4. Angle the M.2 drive downward and insert the connector-end into the socket on the carrier. The M.2 drive's label must face up.

  5. Press the M.2 drive flat against the carrier.

  6. Install the single screw that secures the end of the M.2 SSD to the carrier.

  7. Turn the controller over and install the second M.2 drive.

Figure 44. Cisco Boot-Optimized M.2 RAID Controller, Showing M.2 Drive Installation
Step 6

Install the controller to its socket on the motherboard:

  1. Position the controller over the socket, with the controller's connector facing down and at the same end as the motherboard socket. Two alignment pegs must match with two holes on the controller.

  2. Gently push down the socket end of the controller so that the two pegs go through the two holes on the controller.

  3. Push down on the controller so that the securing clips click over it at both ends.

Step 7

Replace the top cover to the server.

Step 8

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing a Chassis Intrusion Switch

The chassis intrusion switch is an optional security feature that logs an event in the system event log (SEL) whenever the cover is removed from the chassis.
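Because cover-removal events are recorded in the SEL, you can review them with any standard IPMI tool. A minimal sketch using the open-source ipmitool utility over the LAN interface; <cimc-ip>, <user>, and <password> are placeholders, and IPMI over LAN must be enabled in Cisco IMC:

# Read the system event log; intrusion events appear as Physical Security entries
ipmitool -I lanplus -H <cimc-ip> -U <user> -P <password> sel list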

Procedure


Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

Step 2

Remove an existing intrusion switch:

  1. Disconnect the intrusion switch cable from the socket on the motherboard.

  2. Use a #1 Phillips-head screwdriver to loosen and remove the single screw that holds the switch mechanism to the chassis wall.

  3. Slide the switch mechanism straight up to disengage it from the clips on the chassis.

Step 3

Install a new intrusion switch:

  1. Slide the switch mechanism down into the clips on the chassis wall so that the screw holes line up.

  2. Use a #1 Phillips-head screwdriver to install the single screw that secures the switch mechanism to the chassis wall.

  3. Connect the switch cable to the socket on the motherboard.

Step 4

Replace the cover to the server.

Step 5

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 45. Replacing a Chassis Intrusion Switch

1

Intrusion switch location

-


Installing a Trusted Platform Module (TPM)

The trusted platform module (TPM) is a small circuit board that plugs into a motherboard socket and is then permanently secured with a one-way screw. The socket location is on the motherboard below PCIe riser 2.

TPM Considerations

  • This server supports either TPM version 1.2 or TPM version 2.0. The TPM 2.0, UCSX-TPM2-002B(=), is compliant with Federal Information Processing Standard (FIPS) 140-2. FIPS support existed previously, but FIPS 140-2 is now supported.

  • Field replacement of a TPM is not supported; you can install a TPM after-factory only if the server does not already have a TPM installed.

  • If there is an existing TPM 1.2 installed in the server, you cannot upgrade to TPM 2.0. If there is no existing TPM in the server, you can install TPM 2.0.

  • If the TPM 2.0 becomes unresponsive, reboot the server.

Installing and Enabling a TPM


Note

Field replacement of a TPM is not supported; you can install a TPM after-factory only if the server does not already have a TPM installed.

This topic contains the following procedures, which must be followed in this order when installing and enabling a TPM:

  1. Installing the TPM Hardware

  2. Enabling the TPM in the BIOS

  3. Enabling the Intel TXT Feature in the BIOS

Installing TPM Hardware

Note

For security purposes, the TPM is installed with a one-way screw. It cannot be removed with a standard screwdriver.
Procedure

Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution 
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

Step 2

Remove PCIe riser 2 from the server to provide clearance to the TPM socket on the motherboard.

Step 3

Install a TPM:

  1. Locate the TPM socket on the motherboard.

  2. Align the connector that is on the bottom of the TPM circuit board with the motherboard TPM socket. Align the screw hole on the TPM board with the screw hole that is adjacent to the TPM socket.

  3. Push down evenly on the TPM to seat it in the motherboard socket.

  4. Install the single one-way screw that secures the TPM to the motherboard.

Step 4

Replace PCIe riser 2 to the server. See Replacing a PCIe Riser.

Step 5

Replace the cover to the server.

Step 6

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Step 7

Continue with Enabling the TPM in the BIOS.

Figure 46. Location of the TPM Socket

1

TPM socket location on motherboard, below PCIe riser 2

-


Enabling the TPM in the BIOS

After hardware installation, you must enable TPM support in the BIOS.


Note

You must set a BIOS Administrator password before performing this procedure. To set this password, press the F2 key when prompted during system boot to enter the BIOS Setup utility. Then navigate to Security > Set Administrator Password and enter the new password twice as prompted.


Procedure

Step 1

Enable TPM Support:

  1. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.

  2. Log in to the BIOS Setup Utility with your BIOS Administrator password.

  3. On the BIOS Setup Utility window, choose the Advanced tab.

  4. Choose Trusted Computing to open the TPM Security Device Configuration window.

  5. Change TPM SUPPORT to Enabled.

  6. Press F10 to save your settings and reboot the server.

Step 2

Verify that TPM support is now enabled:

  1. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.

  2. Log into the BIOS Setup utility with your BIOS Administrator password.

  3. Choose the Advanced tab.

  4. Choose Trusted Computing to open the TPM Security Device Configuration window.

  5. Verify that TPM SUPPORT and TPM State are Enabled.

Step 3

Continue with Enabling the Intel TXT Feature in the BIOS.
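After an operating system is installed, you can optionally confirm that the enabled TPM is visible to the host. A minimal sketch for Linux; device naming can vary by distribution and kernel version:

# Kernel messages report the TPM during boot
dmesg | grep -i tpm
# An enabled TPM is exposed as a character device
ls -l /dev/tpm*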


Enabling the Intel TXT Feature in the BIOS

Intel Trusted Execution Technology (TXT) provides greater protection for information that is used and stored on the business server. A key aspect of that protection is the provision of an isolated execution environment and associated sections of memory where operations can be conducted on sensitive data, invisibly to the rest of the system. Intel TXT provides for a sealed portion of storage where sensitive data such as encryption keys can be kept, helping to shield them from being compromised during an attack by malicious code.

Procedure

Step 1

Reboot the server and watch for the prompt to press F2.

Step 2

When prompted, press F2 to enter the BIOS Setup utility.

Step 3

Verify that the prerequisite BIOS values are enabled:

  1. Choose the Advanced tab.

  2. Choose Intel TXT(LT-SX) Configuration to open the Intel TXT(LT-SX) Hardware Support window.

  3. Verify that the following items are listed as Enabled:

    • VT-d Support (default is Enabled)

    • VT Support (default is Enabled)

    • TPM Support

    • TPM State

  4. Do one of the following:

    • If VT-d Support and VT Support are already enabled, skip to step 4.

    • If VT-d Support and VT Support are not enabled, continue with the next steps to enable them.

  5. Press Escape to return to the BIOS Setup utility Advanced tab.

  6. On the Advanced tab, choose Processor Configuration to open the Processor Configuration window.

  7. Set Intel (R) VT and Intel (R) VT-d to Enabled.

Step 4

Enable the Intel Trusted Execution Technology (TXT) feature:

  1. Return to the Intel TXT(LT-SX) Hardware Support window if you are not already there.

  2. Set TXT Support to Enabled.

Step 5

Press F10 to save your changes and exit the BIOS Setup utility.
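As an optional cross-check from a booted Linux OS, you can verify that the CPU exposes the capabilities these BIOS settings rely on. A minimal sketch; vmx and smx are the standard /proc/cpuinfo flags for Intel VT and the Safer Mode Extensions used by Intel TXT:

# Each command prints its flag name once if the capability is present
grep -o -m 1 'vmx' /proc/cpuinfo
grep -o -m 1 'smx' /proc/cpuinfo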


Removing the PCB Assembly (PCBA)

The PCBA is secured to the server's sheet-metal tray by M3.5x0.6 mm screws. You must disconnect the PCBA from the tray before recycling it.

Before you begin


Note

For Recyclers Only! This procedure is not a standard field-service option. This procedure is for recyclers who will be reclaiming the electronics for proper disposal to comply with local eco design and e-waste regulations.


To remove the printed circuit board assembly (PCBA), the following requirements must be met:

  • The server must be disconnected from facility power.

  • The server must be removed from the equipment rack.

  • The server's top cover must be removed. See Removing the Server Top Cover.

Procedure


Step 1

Locate the PCBA's mounting screws.

The following figure shows the location of the mounting screws.

Figure 47. Screw Locations for Removing the UCS C240 M5 PCBA
Step 2

Using a screwdriver, remove the screws.

Step 3

Remove the PCBA and dispose of it properly.


Service Headers and Jumpers

This server includes two blocks of headers (J38, J39) that you can jumper for certain service and debug functions.

This section contains the following topics:

  • Using the Clear CMOS Header (J38, Pins 9 - 10)

  • Using the BIOS Recovery Header (J38, Pins 11 - 12)

  • Using the Clear Password Header (J38, Pins 13 - 14)

  • Using the Boot Alternate Cisco IMC Image Header (J39, Pins 1 - 2)

  • Using the Reset Cisco IMC Password to Default Header (J39, Pins 3 - 4)

  • Using the Reset Cisco IMC to Defaults Header (J39, Pins 5 - 6)

Figure 48. Location of Service Header Blocks J38 and J39

1

Location of header block J38

6

Location of header block J39

2

J38 pin 1 arrow printed on motherboard

7

J39 pin 1 arrow printed on motherboard

3

Clear CMOS: J38 pins 9 - 10

8

Boot Cisco IMC from alternate image: J39 pins 1 - 2

4

Recover BIOS: J38 pins 11 - 12

9

Reset Cisco IMC password to default: J39 pins 3 - 4

5

Clear password: J38 pins 13 - 14

10

Reset Cisco IMC to defaults: J39 pins 5 - 6

Using the Clear CMOS Header (J38, Pins 9 - 10)

You can use this header to clear the server’s CMOS settings in the case of a system hang. For example, if the server hangs because of incorrect settings and does not boot, use this jumper to invalidate the settings and reboot with defaults.


Caution

Clearing the CMOS removes any customized settings and might result in data loss. Make a note of any necessary customized settings in the BIOS before you use this clear CMOS procedure.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the server from the rack.
Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Install a two-pin jumper across J38 pins 9 and 10.

Step 5

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note 
You must allow the entire server to reboot to main power mode to complete the reset. The state of the jumper cannot be determined without the host CPU running.
Step 7

Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 8

Remove the top cover from the server.

Step 9

Remove the jumper that you installed.

Note 
If you do not remove the jumper, the CMOS settings are reset to the defaults every time you power-cycle the server.
Step 10

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.


Using the BIOS Recovery Header (J38, Pins 11 - 12)

Depending on which stage the BIOS becomes corrupted, you might see different behavior.

  • If the BIOS BootBlock is corrupted, you might see the system get stuck on the following message:

    Initializing and configuring memory/hardware
  • If it is a non-BootBlock corruption, a message similar to the following is displayed:

    ****BIOS FLASH IMAGE CORRUPTED****
    Flash a valid BIOS capsule file using Cisco IMC WebGUI or CLI interface.
    IF Cisco IMC INTERFACE IS NOT AVAILABLE, FOLLOW THE STEPS MENTIONED BELOW.
    1. Connect the USB stick with bios.cap file in root folder.
    2. Reset the host.
    IF THESE STEPS DO NOT RECOVER THE BIOS
    1. Power off the system.
    2. Mount recovery jumper.
    3. Connect the USB stick with bios.cap file in root folder.
    4. Power on the system.
    Wait for a few seconds if already plugged in the USB stick.
    REFER TO SYSTEM MANUAL FOR ANY ISSUES.

Note

As indicated by the message shown above, there are two procedures for recovering the BIOS. Try procedure 1 first. If that procedure does not recover the BIOS, use procedure 2.

Procedure 1: Reboot With bios.cap Recovery File

Procedure

Step 1

Download the BIOS update package and extract it to a temporary location.

Step 2

Copy the contents of the extracted recovery folder to the root directory of a USB drive. The recovery folder contains the bios.cap file that is required in this procedure.

Note 
The bios.cap file must be in the root directory of the USB drive. Do not rename this file. The USB drive must be formatted with either the FAT16 or FAT32 file system.
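For example, on a Linux workstation you could prepare the drive as follows. This is a hedged sketch: /dev/sdX1 is a placeholder for your actual USB partition, and the path to the extracted recovery folder will differ on your system:

# CAUTION: mkfs erases the target device; verify the device node first
sudo mkfs.vfat -F 32 /dev/sdX1
sudo mount /dev/sdX1 /mnt
# bios.cap must sit in the root directory of the drive, unrenamed
sudo cp recovery/bios.cap /mnt/
sudo umount /mnt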
Step 3

Insert the USB drive into a USB port on the server.

Step 4

Reboot the server.

Step 5

Return the server to main power mode by pressing the Power button on the front panel.

The server boots with the updated BIOS boot block. When the BIOS detects a valid bios.cap file on the USB drive, it displays this message:

Found a valid recovery file...Transferring to Cisco IMC
System would flash the BIOS image now...
System would restart with recovered image after a few seconds...
Step 6

Wait for the server to complete the BIOS update, and then remove the USB drive from the server.

Note 
During the BIOS update, Cisco IMC shuts down the server and the screen goes blank for about 10 minutes. Do not unplug the power cords during this update. Cisco IMC powers on the server after the update is complete.

Procedure 2: Use BIOS Recovery Header and bios.cap File

Procedure

Step 1

Download the BIOS update package and extract it to a temporary location.

Step 2

Copy the contents of the extracted recovery folder to the root directory of a USB drive. The recovery folder contains the bios.cap file that is required in this procedure.

Note 
The bios.cap file must be in the root directory of the USB drive. Do not rename this file. The USB drive must be formatted with either the FAT16 or FAT32 file system.
Step 3

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server. Disconnect power cords from all power supplies.

Step 4

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the server from the rack.
Step 5

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 6

Install a two-pin jumper across J38 pins 11 and 12.

Step 7

Reconnect AC power cords to the server. The server powers up to standby power mode.

Step 8

Insert the USB thumb drive that you prepared in Step 2 into a USB port on the server.

Step 9

Return the server to main power mode by pressing the Power button on the front panel.

The server boots with the updated BIOS boot block. When the BIOS detects a valid bios.cap file on the USB drive, it displays this message:

Found a valid recovery file...Transferring to Cisco IMC
System would flash the BIOS image now...
System would restart with recovered image after a few seconds...
Step 10

Wait for the server to complete the BIOS update, and then remove the USB drive from the server.

Note 
During the BIOS update, Cisco IMC shuts down the server and the screen goes blank for about 10 minutes. Do not unplug the power cords during this update. Cisco IMC powers on the server after the update is complete.
Step 11

After the server has fully booted, power off the server again and disconnect all power cords.

Step 12

Remove the jumper that you installed.

Note 
If you do not remove the jumper, after recovery completion you see the prompt, “Please remove the recovery jumper.”
Step 13

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.


Using the Clear Password Header (J38, Pins 13 - 14)

You can use this header to clear the administrator password.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server. Disconnect power cords from all power supplies.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the server from the rack.
Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Install a two-pin jumper across J38 pins 13 and 14.

Step 5

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note 
You must allow the entire server to reboot to main power mode to complete the reset. The state of the jumper cannot be determined without the host CPU running.
Step 7

Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 8

Remove the top cover from the server.

Step 9

Remove the jumper that you installed.

Note 
If you do not remove the jumper, the password is cleared every time you power-cycle the server.
Step 10

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.


Using the Boot Alternate Cisco IMC Image Header (J39, Pins 1 - 2)

You can use this Cisco IMC debug header to force the system to boot from an alternate Cisco IMC image.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server. Disconnect power cords from all power supplies.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the server from the rack.
Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Install a two-pin jumper across J39 pins 1 and 2.

Step 5

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note 

When you next log in to Cisco IMC, you see a message similar to the following:

'Boot from alternate image' debug functionality is enabled.  
CIMC will boot from alternate image on next reboot or input power cycle.
Note 
If you do not remove the jumper, the server will boot from an alternate Cisco IMC image every time that you power cycle the server or reboot Cisco IMC.
Step 7

To remove the jumper, press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 8

Remove the top cover from the server.

Step 9

Remove the jumper that you installed.

Step 10

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.


Using the Reset Cisco IMC Password to Default Header (J39, Pins 3 - 4)

You can use this Cisco IMC debug header to force the Cisco IMC password back to the default.
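After the reset takes effect, you can log in with the factory-default credentials and then set a new password. A minimal sketch over SSH; <cimc-ip> is a placeholder, and admin/password are the usual Cisco IMC factory defaults (confirm against the documentation for your release):

# Log in to the Cisco IMC CLI with the default account
ssh admin@<cimc-ip>
# When prompted, enter the default password: password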

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server. Disconnect power cords from all power supplies.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the server from the rack.
Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Install a two-pin jumper across J39 pins 3 and 4.

Step 5

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note 

When you next log in to Cisco IMC, you see a message similar to the following:

'Reset to default CIMC password' debug functionality is enabled.  
On input power cycle, CIMC password will be reset to defaults.
Note 
If you do not remove the jumper, the server will reset the Cisco IMC password to the default every time that you power cycle the server. The jumper has no effect if you reboot Cisco IMC.
Step 7

To remove the jumper, press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 8

Remove the top cover from the server.

Step 9

Remove the jumper that you installed.

Step 10

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.


Using the Reset Cisco IMC to Defaults Header (J39, Pins 5 - 6)

You can use this Cisco IMC debug header to force the Cisco IMC settings back to the defaults.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server. Disconnect power cords from all power supplies.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution 
If you cannot safely view and access the component, remove the server from the rack.
Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Install a two-pin jumper across J39 pins 5 and 6.

Step 5

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note 

When you next log in to Cisco IMC, you see a message similar to the following:

'CIMC reset to factory defaults' debug functionality is enabled.  
On input power cycle, CIMC will be reset to factory defaults.
Note 
If you do not remove the jumper, the server will reset the Cisco IMC to the default settings every time that you power cycle the server. The jumper has no effect if you reboot Cisco IMC.
Step 7

To remove the jumper, press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 8

Remove the top cover from the server.

Step 9

Remove the jumper that you installed.

Step 10

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.