Servicing the Server

This chapter contains the following topics.

Status LEDs and Buttons

This section contains information for interpreting LED states.

Front-Panel LEDs

Figure 1. Front Panel LEDs
Table 1. Front Panel LEDs, Definition of States

LED Name

States

1

SAS

SAS/SATA drive fault

Note

 
NVMe solid state drive (SSD) drive tray LEDs have different behavior than SAS/SATA drive trays.
  • Off—The hard drive is operating properly (no fault).

  • Amber—Drive fault detected.

  • Amber, blinking—The device is rebuilding.

  • Amber, blinking with one-second interval—Drive locate function activated in the software.

2

SAS

SAS/SATA drive activity LED

  • Off—There is no hard drive in the hard drive tray (no access, no fault).

  • Green—The hard drive is ready.

  • Green, blinking—The hard drive is reading or writing data.

1

NVMe

NVMe SSD drive fault

Note

 
NVMe solid state drive (SSD) drive tray LEDs have different behavior than SAS/SATA drive trays.
  • Off—The drive is not in use and can be safely removed.

  • Green—The drive is in use and functioning properly.

  • Green, blinking—The driver is initializing following insertion, or the driver is unloading following an eject command.

  • Amber—The drive has failed.

  • Amber, blinking—A drive Locate command has been issued in the software.

2

NVMe

NVMe SSD activity

  • Off—No drive activity.

  • Green, blinking—There is drive activity.

3

Power button/LED

  • Off—There is no AC power to the server.

  • Amber—The server is in standby power mode. Power is supplied only to the Cisco IMC and some motherboard functions.

  • Green—The server is in main power mode. Power is supplied to all server components.

4

Unit identification

  • Off—The unit identification function is not in use.

  • Blue—The unit identification function is activated.

5

System health

  • Green—The server is running in normal operating condition.

  • Green, blinking—The server is performing system initialization and memory check.

  • Amber, steady—The server is in a degraded operational state (minor fault). For example:

    • Power supply redundancy is lost.

    • CPUs are mismatched.

    • At least one CPU is faulty.

    • At least one DIMM is faulty.

    • At least one drive in a RAID configuration failed.

  • Amber, blinking—The server is experiencing a critical fault. For example:

    • Boot Failure

    • Fatal Processor and/or bus error detected

    • Loss of I/O

    • Over Temperature Condition

6

Fan status

  • Green—All fan modules are operating properly.

  • Amber—Fans are operating in a degraded state. For example, one of the fans has a fault.

  • Amber, blinking—Two or more fan modules have a fault.

7

Temperature status

  • Green—The server is operating at normal temperature, or the temperature sensor detects no error conditions.

  • Amber, steady—One or more temperature sensors breached a warning threshold.

  • Amber, blinking—One or more temperature sensors breached a critical threshold.

8

Power supply status

  • Green—All power supplies are operating normally, and no error condition is detected.

  • Amber, steady—One or more power supplies are in a degraded operational state.

  • Amber, blinking—One or more power supplies are in a critical fault state.

9

Network link activity

  • Off—The Ethernet LOM port link is idle.

  • Green—One or more Ethernet LOM ports are link-active, but there is no activity on any of the links.

  • Green, blinking—One or more Ethernet LOM ports are link-active, with activity.

Rear-Panel LEDs

Figure 2. Rear Panel LEDs
Table 2. Rear Panel LEDs, Definition of States

LED Name

States

1

Unit identification LED

  • Off—The unit identification function is not in use.

  • Blue, blinking—The unit identification function is activated.

2

1-Gb Ethernet dedicated management link speed

  • Off—Link speed is 10 Mbps.

  • Amber—Link speed is 100 Mbps.

  • Green—Link speed is 1 Gbps.

3

1-Gb Ethernet dedicated management link status

  • Off—No link is present.

  • Green—Link is active.

  • Green, blinking—Traffic is present on the active link.

4

Serial port

5

Power supply status (one LED each power supply unit)

AC power supplies:

  • Off—No AC input (12 V main power off, 12 V standby power off).

  • Green, blinking—12 V main power off; 12 V standby power on.

  • Green, solid—12 V main power on; 12 V standby power on.

  • Amber, blinking—Warning threshold detected but 12 V main power on.

  • Amber, solid—Critical error detected; 12 V main power off (for example, over-current, over-voltage, or over-temperature failure).

DC power supply (UCSC-PSUV2-1050DC):

  • Off—No DC input (12 V main power off, 12 V standby power off).

  • Green, blinking—12 V main power off; 12 V standby power on.

  • Green, solid—12 V main power on; 12 V standby power on.

  • Amber, blinking—Warning threshold detected but 12 V main power on.

  • Amber, solid—Critical error detected; 12 V main power off (for example, over-current, over-voltage, or over-temperature failure).

6

SAS

SAS/SATA drive fault

Note

 
NVMe solid state drive (SSD) drive tray LEDs have different behavior than SAS/SATA drive trays.
  • Off—The hard drive is operating properly.

  • Amber—Drive fault detected.

  • Amber, blinking—The device is rebuilding.

  • Amber, blinking with one-second interval—Drive locate function activated in the software.

7

SAS

SAS/SATA drive activity LED

  • Off—There is no hard drive in the hard drive tray (no access, no fault).

  • Green—The hard drive is ready.

  • Green, blinking—The hard drive is reading or writing data.

6

NVMe

NVMe SSD drive fault

Note

 
NVMe solid state drive (SSD) drive tray LEDs have different behavior than SAS/SATA drive trays.
  • Off—The drive is not in use and can be safely removed.

  • Green—The drive is in use and functioning properly.

  • Green, blinking—The driver is initializing following insertion, or the driver is unloading following an eject command.

  • Amber—The drive has failed.

  • Amber, blinking—A drive Locate command has been issued in the software.

7

NVMe

NVMe SSD activity

  • Off—No drive activity.

  • Green, blinking—There is drive activity.

Internal Diagnostic LEDs

The server has internal fault LEDs for CPUs, DIMMs, and fan modules, located at the base of each of those components.

1

Fan module fault LEDs (one on the top of each fan module)

  • Amber—Fan has a fault or is not fully seated.

  • Green—Fan is OK.

3

DIMM fault LEDs (one behind each DIMM socket on the motherboard)

These LEDs operate only when the server is in standby power mode.

  • Amber—DIMM has a fault.

  • Off—DIMM is OK.

2

CPU fault LEDs (one behind each CPU socket on the motherboard).

These LEDs operate only when the server is in standby power mode.

  • Amber—CPU has a fault.

  • Off—CPU is OK.


Preparing For Component Installation

This section includes information and tasks that help prepare the server for component installation.

Required Equipment For Service Procedures

The following tools and equipment are used to perform the procedures in this chapter:

  • T-30 Torx driver (supplied with replacement CPUs for heatsink removal)

  • #1 flat-head screwdriver (used during CPU or heatsink replacement)

  • #1 Phillips-head screwdriver (for M.2 SSD and intrusion switch replacement)

  • Electrostatic discharge (ESD) strap or other grounding equipment such as a grounded mat

Shutting Down and Removing Power From the Server

The server can run in either of two power modes:

  • Main power mode—Power is supplied to all server components and any operating system on your drives can run.

  • Standby power mode—Power is supplied only to the service processor and certain components. It is safe for the operating system and data to remove power cords from the server in this mode.


Caution


After a server is shut down to standby power, electric current is still present in the server. To completely remove power, you must disconnect all power cords from the power supplies in the server, as directed in the service procedures.

You can shut down the server by using either the front-panel power button or the software management interfaces.


Removing the Server Top Cover

Procedure


Step 1

Remove the top cover:

  1. If the cover latch is locked, slide the lock sideways to unlock it.

    When the latch is unlocked, the handle pops up so that you can grasp it.

  2. Lift on the end of the latch so that it pivots vertically to 90 degrees.

  3. Simultaneously, slide the cover back and lift the top cover straight up from the server and set it aside.

Step 2

Replace the top cover:

  1. With the latch in the fully open position, place the cover on top of the server about one-half inch (1.27 cm) behind the lip of the front cover panel.

  2. Slide the cover forward until the latch makes contact.

  3. Press the latch down to the closed position. The cover is pushed forward to the closed position as you push down the latch.

  4. Lock the latch by sliding the lock button sideways to the left.

    Locking the latch ensures that the latch handle does not protrude when you slide the server into the rack.

Figure 3. Removing the Top Cover

1

Cover lock

2

Cover latch handle


Serial Number Location

The serial number for the server is printed on a label on the top of the server, near the front. See Removing the Server Top Cover.

Hot Swap vs Hot Plug

Some components can be removed and replaced without shutting down and removing power from the server. This type of replacement has two varieties: hot-swap and hot-plug.

  • Hot-swap replacement—You do not have to shut down the component in the software or operating system. This applies to the following components:

    • SAS/SATA hard drives

    • SAS/SATA solid state drives

    • Cooling fan modules. With a single fan failure, the other fans throttle up to compensate. Because fan replacement requires opening the top cover, you must complete the replacement procedure within 60 seconds.

    • Power supplies (when redundant as 1+1)

  • Hot-plug replacement—You must take the component offline before removing it. This applies to the following component:

    • NVMe PCIe solid state drives

Removing and Replacing Components


Warning


Blank faceplates and cover panels serve three important functions: they prevent exposure to hazardous voltages and currents inside the chassis; they contain electromagnetic interference (EMI) that might disrupt other equipment; and they direct the flow of cooling air through the chassis. Do not operate the system unless all cards, faceplates, front covers, and rear covers are in place.

Statement 1029



Caution


When handling server components, handle them only by carrier edges and use an electrostatic discharge (ESD) wrist-strap or other grounding device to avoid damage.

Tip


You can press the unit identification button on the front panel or rear panel to turn on a flashing, blue unit identification LED on both the front and rear panels of the server. This button allows you to locate the specific server that you are servicing when you go to the opposite side of the rack. You can also activate these LEDs remotely by using the Cisco IMC interface.

This section describes how to install and replace server components.

Serviceable Component Locations

This topic shows the locations of the field-replaceable components and service-related items. The view in the following figure shows the server with the top cover removed.

Figure 4. Cisco UCS C240 M8 Server, Serviceable Component Locations

Note


The preceding illustration shows a server with three half-height rear risers. The server also supports two full-height, full-width risers (not shown).


1

Front-loading drive bays.

2

RAID Module slot.

3

Cooling fan modules (six, hot-swappable fan modules in a single fan tray)

4

CPU socket 2

5

DIMM sockets on motherboard (16 per CPU)

See the Cisco UCS Intel M8 Memory Guide for DIMM slot numbering.

Note

 

An air baffle rests on top of the DIMMs and CPUs when the server is operating. The air baffle is not displayed in this illustration.

6

M.2 RAID Controllers, two individual pieces

7

PCIe riser 3 (PCIe slots 7 and 8 numbered from bottom to top), with the following options:

  • 3A (Default Option)—Slot 7 (x24 mechanical, x8 electrical, Gen 4)

    Slot 8 (x16 mechanical, x8 electrical, Gen 4)

  • 3B (Storage Option)—Slots 7 and 8, both support x4 electrical, Gen 4.

    Both slots can accept universal SFF HDDs or NVMe SSDs.

  • 3C (GPU Option)—Slot 7 (x24 mechanical, x16 electrical). Slot 7 can support a full height, full length GPU card.

8

PCIe riser 2 (PCIe slots 4, 5, 6 numbered from bottom to top), with the following options:

  • 2A (Default Option)—Slot 4 (x24 mechanical, x8 electrical, Gen 4). NCSI is supported on one slot at a time. Supports a full height, ¾ length card.

    Slot 5 (x24 mechanical, x16 electrical, Gen 4). NCSI is supported on one slot at a time. Supports one full height, full length card.

    Slot 6 (x16 mechanical, x8 electrical, Gen 4). Supports a full height, full length card.

  • 2C—Slot 4 (x24 mechanical, x16 electrical, Gen 5). NCSI is supported on one slot at a time. Supports a full-height, full-length card.

    Slot 5 (x24 mechanical, x16 electrical, Gen 5). Supports a full-height, full-length card.

9

PCIe riser 1 (PCIe slots 1, 2, 3 numbered from bottom to top), with the following options:

  • 1A (Default Option)—Slot 1 (x24 mechanical, x8 electrical, Gen 4) NCSI is supported on one slot at a time. Supports full height, ¾ length card.

    Slot 2 (x24 mechanical, x16 electrical, Gen 4). NCSI is supported on one slot at a time. Supports full height, full length GPU card.

    Slot 3 (x16 mechanical, x8 electrical, Gen 4) Supports full height, full length card.

  • 1B (Storage Option)—Slot 1 supports an M.2 NVMe RAID card

    Slot 2 (x4 electrical), supports universal 2.5-inch HDD NVMe drive

    Slot 3 (x4 electrical), supports universal 2.5-inch HDD NVMe drive

  • 1C—Slot 1 (x24 mechanical, x16 electrical, Gen 5). NCSI is supported on one slot at a time. Supports a full-height, ¾-length card.

    Slot 2 (x24 mechanical, x16 electrical, Gen 5) Supports a full-height, full-length card.

10

RAID controller card

11

CPU socket 1

12

SuperCap Module (under fan tray)

The Technical Specifications Sheets for all versions of this server, which include supported component part numbers, are at Cisco UCS Servers Technical Specifications Sheets (scroll down to Technical Specifications).

Replacing the Air Duct

The server has an air duct under the top sheet metal cover. The air duct ensures proper cooling and air flow across the server from intake (the cool aisle of the data center) to exhaust (the hot aisle in the data center). The air duct is in the middle of the server and covers the CPU and DIMMs.

The server has two different revisions of air duct (A0 and B0) with subtle differences.


Note


You might be able to find the revision on packaging, such as an ESD bag, or on a label on the air duct (if present).


To help you identify each revision of air duct, compare the illustrations that follow.

  • One air duct (UCSC-GPUAD-C240M7= Rev A0) is used for servers populated with Intel Fourth Generation Xeon Scalable Processors. In the following illustrations, notice the lower mesh on the front of the air duct and the legs at the rear of the air duct, which are different from the Rev B0 air duct. Also, the front wall (not pictured) is different from the Rev B0 air duct.


    Note


    The A0 air duct will be phased out so that only the B0 air duct will be available.


    Figure 5. Air Duct for Intel Fourth Generation Xeon Scalable Processors (Rev A0), Front View
    Figure 6. Air Duct for Intel Fourth Generation Xeon Scalable Processors (Rev A0), Rear View
  • One air duct (UCSC-GPUAD-C240M7= Rev B0) is required for servers that have Intel Fifth Generation Xeon Scalable Processors.

    • By default, this air duct is pre-installed at the factory for new servers with Intel Fifth Generation Xeon Scalable Processors.

    • This air duct is required if you will be upgrading your server from Intel Fourth Generation Xeon Scalable Processors to Intel Fifth Generation Xeon Scalable Processors.

    • When upgrading to Intel Fifth Generation Xeon Scalable Processors, you must order the Rev B0 air duct from Cisco.

    In the following illustrations, notice the lower mesh on the front of the air duct and the legs at the rear of the air duct, which are different from the Rev A0 air duct. Also, the front wall (not pictured) is different from the Rev A0 air duct.

    Figure 7. Air Duct for Intel Fifth Generation Xeon Scalable Processors, Front View
    Figure 8. Air Duct for Intel Fifth Generation Xeon Scalable Processors, Rear View

To replace the server's air duct, use the following procedures:

Removing the Air Duct

Use this procedure to remove the air duct when needed.


Note


Your air duct might be somewhat different than shown in this topic based on its revision level (A0 or B0), but the overall procedure is applicable.


Before you begin
The air duct has triangular alignment features that match with similar features on the sidewall of the server. Notice their locations. You will use them to aid reinstalling the air duct.
Procedure

Step 1

Remove the server top cover.

Step 2

Locate the detents for the air duct, which are rectangular or semi-circular cutouts in the air baffle that provide grasp points for your fingers.

When removing the air duct, always grasp the detents closest to the chassis sidewalls (left and right).

Step 3

Grasp the left and right detents, then lift the air duct out of the chassis.

Note

 

You might need to slide the air duct towards the back of the server while lifting the air duct up.


What to do next

When you are done servicing the server, install the air duct. See Installing the Air Duct.

Installing the Air Duct

The air duct sits behind the front-loading drive cage and covers the CPU and DIMMs in the middle of the server.

Procedure

Step 1

Orient the air duct as shown.

Step 2

Match the alignment features on each side of the air duct with their corresponding feature on the chassis sidewall.

Note

 

Notice that the sheetmetal of the chassis wall has notches that accept the tabs on the air duct.

Step 3

Lower the air duct into place and gently press down to ensure that all of its edges sit flush.

If the air duct is not seated correctly, it can obstruct installing the server's top cover.

Step 4

When the air duct is correctly seated, attach the server's top cover.

The server top cover should sit flush so that the metal tabs on the top cover match the indents in the top edges of the air duct.


Replacing the RV Baffle

If the SFF configuration of the server (UCSC-C240-M8SX) has SAS/SATA controllers, a special baffle, the RV baffle, is required. This baffle is in addition to either the standard CPU air baffle or the GPU air blocker.

The RV baffle sits between the fans and the drive backplane to control airflow and is held in place by notches in the baffle that seat into the top of the sheetmetal walls of the server.

Replacing the RV baffle is a tool-less procedure. Use the following tasks to replace the RV baffle.

Removing the RV Baffle

The RV baffle sits between the front drive backplane and the server fan tray on UCSC-C240-M8SX servers with SAS/SATA controllers. The top of the baffle has parts molded at a right angle that hold the baffle in place while it rests on top of the server's sheetmetal sidewalls.

Removing the RV baffle is a tool-less procedure, and the RV baffle can be removed independently of the fan tray. However, when the UCSC-C240-M8SX server is operating, the RV baffle must be in place.

Use this task to remove the RV baffle.


Caution


Although the fans in the server's fan tray are hooded to shield the fan blades, make sure to keep your fingers away from the fans while performing this procedure!


Procedure

Step 1

If you have not already done so, remove the server top cover.

See Removing the Server Top Cover.

Step 2

Using your fingers, grasp the RV baffle and lift it out of the server.


Installing the RV Baffle

Use this procedure to install the RV baffle between the SFF backplane and the fan tray for UCSC-C240-M8SX servers with SAS/SATA controllers.

Procedure

Step 1

Align the RV baffle with the fan module so that the flat edges of each will sit flush against each other.

Step 2

Holding the RV baffle level, lower it into the gap between the fan modules and the backplane.

The baffle should fit into the gap and the tabs on the top of the baffle should sit easily on the top of the server's sidewalls.


Replacing Front-Loading SAS/SATA Drives


Note


You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable.

To replace rear-loading SAS/SATA drives, see Replacing Rear-Loading SAS/SATA Drives.

Front-Loading SAS/SATA Drive Population Guidelines

The server is orderable in the following different versions, each with a different front panel/drive-backplane configuration.

Drive bay numbering is shown in the following figures.

Figure 9. Cisco UCSC-C240-M8SX, Drive Bay Numbering
Figure 10. Cisco UCSC-C240-M8E3S, Drive Bay Numbering
Figure 11. Cisco UCSC-C240-M8L, Drive Bay Numbering

Observe these drive population guidelines for optimum performance:

  • When populating drives, add drives to the lowest-numbered bays first.


    Note


    For diagrams of which drive bays are controlled by particular controller cables on the backplane, see Storage Controller Cable Connectors and Backplanes.
  • Front-loading drives are hot-pluggable, but each drive requires a 10-second delay between hot removal and hot insertion.

  • Keep an empty drive blanking tray in any unused bays to ensure proper airflow.

  • You can mix SAS/SATA hard drives and SAS/SATA SSDs in the same server. However, you cannot configure a logical volume (virtual drive) that contains a mix of hard drives and SSDs. That is, when you create a logical volume, it must contain all SAS/SATA hard drives or all SAS/SATA SSDs.

4K Sector Format SAS/SATA Drives Considerations

  • You must boot 4K sector format drives in UEFI mode, not legacy mode. See the procedures in this section.

  • Do not configure 4K sector format and 512-byte sector format drives as part of the same RAID volume.

  • For operating system support on 4K sector drives, see the interoperability matrix tool for your server: Hardware and Software Interoperability Matrix Tools



Setting Up UEFI Mode Booting in the BIOS Setup Utility

Procedure

Step 1

Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

Step 2

Go to the Boot Options tab.

Step 3

Set UEFI Boot Options to Enabled.

Step 4

Under Boot Option Priorities, set your OS installation media (such as a virtual DVD) as your Boot Option #1.

Step 5

Go to the Advanced tab.

Step 6

Select LOM and PCIe Slot Configuration.

Step 7

Set the PCIe Slot ID: HBA Option ROM to UEFI Only.

Step 8

Press F10 to save changes and exit the BIOS setup utility. Allow the server to reboot.

Step 9

After the OS installs, verify the installation:

  1. Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

  2. Go to the Boot Options tab.

  3. Under Boot Option Priorities, verify that the OS you installed is listed as your Boot Option #1.


Setting Up UEFI Mode Booting in the Cisco IMC GUI

Procedure

Step 1

Use a web browser and the IP address of the server to log into the Cisco IMC GUI management interface.

Step 2

Navigate to Server > BIOS.

Step 3

Under Actions, click Configure BIOS.

Step 4

In the Configure BIOS Parameters dialog, select the Advanced tab.

Step 5

Go to the LOM and PCIe Slot Configuration section.

Step 6

Set the PCIe Slot: HBA Option ROM to UEFI Only.

Step 7

Click Save Changes. The dialog closes.

Step 8

Under BIOS Properties, set Configured Boot Order to UEFI.

Step 9

Under Actions, click Configure Boot Order.

Step 10

In the Configure Boot Order dialog, click Add Local HDD.

Step 11

In the Add Local HDD dialog, enter the information for the 4K sector format drive and make it first in the boot order.

Step 12

Save changes and reboot the server. The changes you made will be visible after the system reboots.


Replacing a Front-Loading SAS/SATA Drive


Note


You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable.
Procedure

Step 1

Remove the drive that you are replacing or remove a blank drive tray from the bay:

  1. Press the release button on the face of the drive tray.

  2. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  3. If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.

Step 2

Install a new drive:

  1. Place a new drive in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Figure 12. Replacing a Drive in a Drive Tray

1

Ejector lever

3

Drive tray screws (two on each side)

2

Release button

4

Drive removed from drive tray


Replacing Rear-Loading SAS/SATA Drives


Note


You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable.

Rear-Loading SAS/SATA Drive Population Guidelines

The rear drive bay support differs by server PID and which type of RAID controller is used in the server:

  • UCSC-C240-M8SX supports up to 4 SFF SAS/SATA or NVMe drives (direct-attached or RAID controlled) with Riser 1B and 3B.

  • UCSC-C240-M8L supports up to 4 SFF SAS/SATA or NVMe drives (direct-attached or RAID controlled) with Riser 1B and 3.

  • UCSC-C240-M8E3S supports up to 4 EDSFF E3.S 1TB NVMe drives (direct-attached only) with Riser 1D and 3D.

  • Rear bays are numbered 101 through 104, with bay 101 at the bottom left, bay 102 at the top left, bay 103 at the bottom right, and bay 104 at the top right.

  • When populating drives, add drives to the lowest-numbered bays first.

  • Keep an empty drive blanking tray in any unused bays to ensure proper airflow.

  • You can mix SAS/SATA hard drives and SAS/SATA SSDs in the same server. However, you cannot configure a logical volume (virtual drive) that contains a mix of hard drives and SSDs. That is, when you create a logical volume, it must contain all SAS/SATA hard drives or all SAS/SATA SSDs.

Replacing a Rear-Loading SAS/SATA Drive


Note


You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable.
Procedure

Step 1

Remove the drive that you are replacing or remove a blank drive tray from the bay:

  1. Press the release button on the face of the drive tray.

  2. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  3. If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.

Step 2

Install a new drive:

  1. Place a new drive in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Figure 13. Replacing a Drive in a Drive Tray

1

Ejector lever

3

Drive tray screws (two on each side)

2

Release button

4

Drive removed from drive tray


Basic Troubleshooting: Reseating a SAS/SATA Drive

A false-positive UBAD (Unconfigured Bad) error can sometimes occur on SAS/SATA HDDs installed in the server.

  • Only drives that are managed by the UCS MegaRAID controller are affected.

  • Drives can be affected regardless of where they are installed in the server (front-loaded, rear-loaded, and so on).

  • Both SFF and LFF form factor drives can be affected.

  • Drives installed in all Cisco UCS C-Series servers can be affected.

  • Drives can be affected regardless of whether they are configured for hotplug or not.

  • The UBAD error is not always terminal, so the drive is not always defective or in need of repair or replacement. However, it is also possible that the error is terminal, and the drive will need replacement.

Before submitting the drive to the RMA process, it is a best practice to reseat the drive. If the false UBAD error exists, reseating the drive can clear it. If successful, reseating the drive reduces inconvenience, cost, and service interruption, and optimizes your server uptime.


Note


Reseat the drive only if a UBAD error occurs. Other errors are transient, and you should not attempt diagnostics and troubleshooting without the assistance of Cisco personnel. Contact Cisco TAC for assistance with other drive errors.


To reseat the drive, see Reseating a SAS/SATA Drive.

Reseating a SAS/SATA Drive

Sometimes, SAS/SATA drives can throw a false UBAD error, and reseating the drive can clear the error.

Use the following procedure to reseat the drive.


Caution


This procedure might require powering down the server. Powering down the server will cause a service interruption.


Before you begin

Before attempting this procedure, be aware of the following:

  • Before reseating the drive, it is a best practice to back up any data on it.

  • When reseating the drive, make sure to reuse the same drive bay.

    • Do not move the drive to a different slot.

    • Do not move the drive to a different server.

    • If you do not reuse the same slot, the Cisco management software (for example, Cisco IMM) might require a rescan/rediscovery of the server.

  • When reseating the drive, allow 20 seconds between removal and reinsertion.

Procedure

Step 1

Attempt a hot reseat of the affected drive(s). Choose the appropriate option:

  1. For a front-loading drive, see Replacing a Front-Loading SAS/SATA Drive

  2. For a rear-loading drive, see Replacing a Rear-Loading SAS/SATA Drive

Step 2

During boot up, watch the drive's LEDs to verify correct operation.

See Status LEDs and Buttons.

Step 3

If the error persists, cold reseat the drive, which requires a server power down. Choose the appropriate option:

  1. Use your server management software to gracefully power down the server.

    See the appropriate Cisco management software documentation.

  2. If server power down through software is not available, you can power down the server by pressing the power button.

    See Status LEDs and Buttons.

  3. Reseat the drive as documented in Step 1.

  4. When the drive is correctly reseated, restart the server, and check the drive LEDs for correct operation as documented in Step 2.

Step 4

If hot and cold reseating the drive (if necessary) does not clear the UBAD error, choose the appropriate option:

  1. Contact Cisco TAC Support for assistance with troubleshooting.

  2. Begin an RMA of the errored drive.


Replacing Front-Loading NVMe SSDs

This section is for replacing NVMe solid-state drives (SSDs) in front-panel drive bays.

Front-Loading NVMe SSD Population Guidelines

The front drive bay support for 2.5-inch NVMe SSDs differs by server PID:

  • UCSC-C240-M8S:

    • Drive bays 1 through 24 support SFF U.3 NVMe SSDs when the server is configured with a Tri-Mode Storage Controller.

    • Drive bays 1 through 4 and 21 through 24 also support direct-attached SFF U.3 NVMe SSDs.

  • UCSC-C240-M8E3S:

    • Drive bays 1 through 32 support direct-attached EDSFF E3.S 1TB NVMe SSDs.

  • For a single-CPU server, front-loading NVMe drives can be populated in bays 21 through 24.

Front-Loading NVMe SSD Requirements and Restrictions

Observe these requirements:

  • Hot-plug support must be enabled in the system BIOS. If you ordered the system with NVMe drives, hot-plug support is enabled at the factory.

Observe these restrictions:

Enabling Hot-Plug Support in the System BIOS

Hot-plug (OS-informed hot-insertion and hot-removal) is disabled in the system BIOS by default.

  • If the system was ordered with NVMe PCIe SSDs, the setting was enabled at the factory. No action is required.

  • If you are adding NVMe PCIe SSDs after-factory, you must enable hot-plug support in the BIOS. See the following procedures.

Enabling Hot-Plug Support Using the BIOS Setup Utility
Procedure

Step 1

Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

Step 2

Navigate to Advanced > PCI Subsystem Settings > NVMe SSD Hot-Plug Support.

Step 3

Set the value to Enabled.

Step 4

Save your changes and exit the utility.


Enabling Hot-Plug Support Using the Cisco IMC GUI
Procedure

Step 1

Use a browser to log in to the Cisco IMC GUI for the server.

Step 2

Navigate to Compute > BIOS > Advanced > PCI Configuration.

Step 3

Set NVMe SSD Hot-Plug Support to Enabled.

Step 4

Save your changes.


Replacing a Front-Loading NVMe SSD

This topic describes how to replace NVMe SSDs in the front-panel drive bays.


Note


OS-surprise removal is not supported. OS-informed hot-insertion and hot-removal are supported on all supported operating systems except VMware ESXi.



Note


OS-informed hot-insertion and hot-removal must be enabled in the system BIOS. See Enabling Hot-Plug Support in the System BIOS.


Procedure

Step 1

Remove an existing front-loading NVMe SSD:

  1. Shut down the NVMe SSD to initiate an OS-informed removal. Use your operating system interface to shut down the drive, and then observe the drive-tray LED:

    • Green—The drive is in use and functioning properly. Do not remove.

    • Green, blinking—The driver is unloading following a shutdown command. Do not remove.

    • Off—The drive is not in use and can be safely removed.

  2. Press the release button on the face of the drive tray.

  3. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  4. Remove the four drive tray screws that secure the SSD to the tray and then lift the SSD out of the tray.

Step 2

Install a new front-loading NVMe SSD:

  1. Place a new SSD in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Step 3

Observe the drive-tray LED and wait until it returns to solid green before accessing the drive:

  • Off—The drive is not in use.

  • Green, blinking—The driver is initializing following hot-plug insertion.

  • Green—The drive is in use and functioning properly.

Figure 14. Replacing a Drive in a Drive Tray

1

Ejector lever

3

Drive tray screws (two on each side)

2

Release button

4

Drive removed from drive tray


Replacing an E3.S Drive

The server supports 16 or 32 E3.S front-loading drives. Additionally, up to four E3.S drives are supported in PCIe risers, two each in Riser 1 and Riser 3, at the rear of the server.

Use the following tasks to replace an E3.S drive.

E3.S PCIe Requirements and Restrictions

For 7.5mm E3.S PCIe SSD drives, be aware of the following:

  • UEFI boot mode can be configured through the Boot Order Policy setting in the Server Policy supported by Cisco Intersight Managed Mode (IMM). For instructions about setting up UEFI boot mode through Cisco IMM, go to:

    Cisco Intersight Managed Mode Configuration Guide

  • PCIe SSDs cannot be controlled with a SAS RAID controller because PCIe SSDs interface with the compute node via the PCIe bus.

  • UEFI boot is supported in all supported operating systems.

Removing an E3.S Drive

Use this task to remove an E3.S PCIe drive from the server.


Caution


Do not operate the system with an empty drive bay. If you remove a drive, you must reinsert a drive or cover the empty drive bay with a drive blank.


Before you begin

Caution


To prevent data loss, make sure that you know the state of the system before removing a drive.


Procedure

Step 1

Push the ejector button to disengage the locking mechanism and partially open the ejector arm.

Step 2

Gently swing the ejector arm to the right, into the open position, until the ejector arm stops.

Caution

 

Do not twist or tilt the ejector arm while it is in the open position. The ejector arm stops when it is fully open; do not force it past its stopping point.

Step 3

Grasping the ejector arm, gently start sliding the drive out of the drive bay. When the drive is partially removed, you can grasp the drive carrier with your fingers to slide it completely out of the drive bay.

Step 4

Place the drive on an antistatic mat or antistatic foam if you are not immediately reinstalling it in another server.


What to do next

Cover the empty drive bay. Choose the appropriate option:

  • Installing an E3.S Drive

  • Install a drive blanking panel to maintain proper airflow and keep contaminant and particulate matter out of the drive bay if it will remain empty.

Installing an E3.S Drive

Caution


For hot installation of drives, after the original drive is removed, you must wait for 20 seconds before installing a drive. Failure to allow this 20-second wait period causes the management software to display incorrect drive inventory information. If incorrect drive information is displayed, remove the affected drive(s), wait for 20 seconds, then reinstall them.


To install an E3.S PCIe drive, follow this procedure:

Procedure

Step 1

Place the drive ejector arm into the open position by pushing the ejector button.

Step 2

Orient the drive so that the ejector is on the right side.

Step 3

Align the drive with the drive bay and, keeping it level, gently slide the drive into the empty drive bay until it seats into place.

If you need to, you can press on the middle of the drive to seat it into the bay.

Step 4

Push the drive ejector arm into the closed position.

You should feel the ejector arm click into place when it is in the closed position.


Replacing Mid-Mounted SAS/SATA Drives (LFF Server)

Mid-mounted drives are supported on the LFF server only. These drives connect directly to the midplane, so there are no cables to disconnect as part of the replacement procedure.

Mid-mounted drives can be hot swapped and hot inserted, so you do not need to disconnect facility power.

Procedure


Step 1

Open the server top cover.

Step 2

Grasp the handle for the mid-mount drive cage, and swing the cage cover open.

When the cage cover is open, it will be pointing up at a 90-degree angle.

Step 3

Grasping the cage cover handle, pull up on the drive cage until the bottom row of drives clears the top of the server.

When pulling on the mid-mount drive cage, it will arc upward.

Step 4

Grasp the drive handle and pull the drive out of the mid-mount drive cage.

Step 5

Orient the drive so that the handle is at the bottom and align it with its drive bay.

Step 6

Holding the drive level, slide it into the drive bay until it connects with the midplane.

Step 7

Push down on the drive cage so that it seats into the server.

Step 8

Grasp the handle and close the server cage cover.

Note

 

Make sure that the server cage cover is completely closed, and the server cage is completely seated in the server. When the server cage is completely seated, its top is flush with the fans and rear PCI riser cages.

Step 9

Install the server's top cover.

If the server's top cover does not close easily, check that the mid-mount drive cage is completely seated into the server.


Replacing Rear-Loading NVMe SSDs

This section is for replacing NVMe solid-state drives (SSDs) in rear-panel drive bays.

Rear-Loading NVMe SSD Population Guidelines

The rear drive bay support differs by server PID and which type of RAID controller is used in the server for non-NVMe drives:

  • UCSC-C240-M8S: Supports up to 4 SFF SAS/SATA or NVMe drives (direct-attached or RAID controlled) with Riser 1B and 3B.

  • UCSC-C240-M8E3S: Supports up to 4 EDSFF E3.S 1TB NVMe drives (direct-attached only) with Riser 1D and 3D.

  • UCSC-C240-M8L: Supports up to 4 SFF SAS/SATA or NVMe drives (direct-attached or RAID controlled) with Riser 1B and 3B.

  • Rear bays are numbered 101 through 104, with bay 101 at the bottom left, bay 102 at the top left, bay 103 at the bottom right, and bay 104 at the top right.

  • When populating drives, add drives to the lowest-numbered bays first.

  • Drives are hot pluggable, but each drive requires a 10-second delay between hot removal and hot insertion.

  • Keep an empty drive blanking tray in any unused bays to ensure proper airflow.

Rear-Loading NVMe SSD Requirements and Restrictions

Observe these requirements:

  • The server must have two CPUs if the server contains more than 2 rear drives.

  • The server must have the correct storage risers for the configuration.

  • The server must have the rear PCIe cable and rear drive backplane installed.

  • Hot-plug support must be enabled in the system BIOS. If you ordered the system with NVMe drives, hot-plug support is enabled at the factory.

Observe these restrictions:

Replacing a Rear-Loading NVMe SSD

This topic describes how to replace NVMe SSDs in the rear-panel drive bays.


Note


OS-surprise removal is not supported. OS-informed hot-insertion and hot-removal are supported on all supported operating systems except VMware ESXi.



Note


OS-informed hot-insertion and hot-removal must be enabled in the system BIOS. See Enabling Hot-Plug Support in the System BIOS.


Procedure

Step 1

Remove an existing rear-loading NVMe SSD:

  1. Shut down the NVMe SSD to initiate an OS-informed removal. Use your operating system interface to shut down the drive, and then observe the drive-tray LED:

    • Green—The drive is in use and functioning properly. Do not remove.

    • Green, blinking—The driver is unloading following a shutdown command. Do not remove.

    • Off—The drive is not in use and can be safely removed.

  2. Press the release button on the face of the drive tray.

  3. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  4. Remove the four drive tray screws that secure the SSD to the tray and then lift the SSD out of the tray.

Note

 

If this is the first time that rear-loading NVMe SSDs are being installed in the server, you must install PCIe riser 2B or 2C and a rear NVMe cable kit.

Step 2

Install a new rear-loading NVMe SSD:

  1. Place a new SSD in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Step 3

Observe the drive-tray LED and wait until it returns to solid green before accessing the drive:

  • Off—The drive is not in use.

  • Green, blinking—The driver is initializing following hot-plug insertion.

  • Green—The drive is in use and functioning properly.

Figure 15. Replacing a Drive in a Drive Tray

1

Ejector lever

3

Drive tray screws (two on each side)

2

Release button

4

Drive removed from drive tray


Replacing Fan Modules

The six fan modules in the server are numbered as shown in Serviceable Component Locations.


Tip


There is a fault LED on the top of each fan module. This LED lights green when the fan is correctly seated and is operating OK. The LED lights amber when the fan has a fault or is not correctly seated.

Caution


You do not have to shut down or remove power from the server to replace fan modules because they are hot-swappable. However, to maintain proper cooling, do not operate the server for more than one minute with any fan module removed.

Procedure


Step 1

Remove an existing fan module:

  1. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  2. Remove the top cover from the server as described in Removing the Server Top Cover.

  3. Grasp and squeeze the fan module release latches on its top. Lift straight up to disengage its connector from the motherboard.

Step 2

Install a new fan module:

  1. Set the new fan module in place. The arrow printed on the top of the fan module should point toward the rear of the server.

  2. Press down gently on the fan module to fully engage it with the connector on the motherboard.

  3. Replace the top cover to the server.

  4. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 16. Top View of Fan Module

1

Fan module release latches

2

Fan module fault LED

  • Green—The fan module is operating properly.

  • Amber—The fan module has a fault or is not correctly seated.


Replacing the Fan Tray

The server has a fan tray that contains the six individual fan modules. Individual fan modules can be replaced, and the fan tray can also be completely removed if needed.

To remove individual fan modules, see Replacing Fan Modules.

Use the following procedure to replace the fan tray.

Removing the Fan Tray

The fan tray can be removed either with all fan modules in place, or when some, or all, of the fan modules have been removed.

Procedure

Step 1

Rotate the tool-less middle lockdown screws that secure the fan tray to the chassis.

  1. Locate the lockdown screws that secure the fan tray to the server.

  2. Grasp the screws and rotate them a quarter of a turn (90 degrees) to loosen the screws.

Step 2

Unhinge the handle on both sides of the fan tray.

Step 3

Remove the fan tray from the server.

  1. Grasp the handles at the top of the fan tray.

  2. Holding the fan tray level, lift the fan tray up until it is removed from the chassis.


What to do next

Reinsert the fan tray into the chassis. See Installing the Fan Tray.

Installing the Fan Tray

You can install the fan tray with or without fans installed. Use the following procedure to install the fan tray.

Procedure

Step 1

Install the fan tray.

  1. Align the fan tray with the guides on the inside of the chassis.

  2. Make sure that the system cable is organized on both sides and will not obstruct installation.

  3. Holding the fan tray by the handles, slide it into place in the chassis.

  4. Push down and rotate the middle lockdown screw clockwise to lock the fan tray into the chassis receiving bracket.

Step 2

Close the top cover, or perform additional procedures, if needed.


Replacing CPUs and Heatsinks

This section contains the following topics:

CPU Configuration Rules

This server has two CPU sockets on the motherboard. Each CPU supports 8 DIMM channels (16 DIMM slots). See the Cisco UCS Intel M8 Memory Guide.

  • The server can operate with one CPU, or two identical CPUs installed.

  • The minimum configuration is that the server must have at least CPU 1 installed. Install CPU 1 first, and then CPU 2.

  • The following restrictions apply when using a single-CPU configuration:

    • Any unused CPU socket must have the socket dust cover from the factory in place.

    • The maximum number of DIMMs is 16 (only CPU 1 channels A through H).

  • Two different form factors exist for heatsinks, a low profile and a high profile. The server can be ordered with either, but you cannot mix high-profile and low-profile heatsinks in the same server. A single server must have all of one type.

    The CPU and heatsink installation procedure is different depending on the type of heatsink used in your server.

    • Low profile (UCSC-HSLP-C220M8), which has 4 T30 Torx screws on the main heatsink, and 2 Phillips-head screws on the extended heatsink.

      This heat sink is required for UCSC-C240-M8L servers or any C240 M8 servers that contain UCSC-GPUAD-240M8.

    • High profile (UCSC-HSHP-240M8), which has 4 T30 Torx screws.

Tools Required For CPU Replacement

You need the following tools and equipment for this procedure:

  • T-30 Torx driver—Supplied with replacement CPU.

  • #1 flat-head screwdriver—Supplied with replacement CPU.

  • CPU assembly tool—Supplied with replacement CPU. Orderable separately as Cisco PID UCS-CPUAT=.

  • Heatsink cleaning kit—Supplied with replacement CPU. Orderable separately as Cisco PID UCSX-HSCK=.

    One cleaning kit can clean up to four CPUs.

  • Thermal interface material (TIM)—Syringe supplied with replacement CPU. Use only if you are reusing your existing heatsink (new heatsinks have a pre-applied pad of TIM). Orderable separately as Cisco PID UCS-CPU-TIM=.

    One TIM kit covers one CPU.

See also Additional CPU-Related Parts to Order with RMA Replacement CPUs.

Removing CPUs and Heat Sinks

Use the following procedure to remove an installed CPU and heatsink from the server. With this procedure, you will remove the CPU from the motherboard, disassemble individual components, then place the CPU and heatsink into the fixture that came with the CPU.

Procedure

Step 1

Choose the appropriate method to loosen the securing screws, based on whether the CPU has a high-profile or low-profile heatsink.

  • For a CPU with a high-profile heatsink, proceed to step a.

  • For a CPU with a low-profile heatsink, skip to step 2.

  1. Using a T30 Torx driver, loosen all the securing nuts.

  2. Push the rotating wires towards each other to move them to the unlocked position.

    Caution

     

    Make sure that the rotating wires are as far inward as possible. When fully unlocked, the bottom of the rotating wire disengages and allows the removal of the CPU assembly. If the rotating wires are not fully in the unlocked position, you can feel resistance when attempting to remove the CPU assembly.

  3. Grasp the CPU and heatsink along the edge of the carrier and lift the CPU and heatsink off of the motherboard.

    Caution

     
    While lifting the CPU assembly, make sure not to bend the heatsink fins. Also, if you feel any resistance when lifting the CPU assembly, verify that the rotating wires are completely in the unlocked position.
  4. Go to step 3.

Step 2

Remove the CPU.

  1. Using a #2 Phillips screwdriver, loosen the two Phillips head screws for the extended heatsink.

  2. Using a T30 Torx driver, loosen the four Torx securing nuts.

  3. Push the rotating wires towards each other to move them to the unlocked position.

    Caution

     

    Make sure that the rotating wires are as far inward as possible. When fully unlocked, the bottom of the rotating wire disengages and allows the removal of the CPU assembly. If the rotating wires are not fully in the unlocked position, you can feel resistance when attempting to remove the CPU assembly.

  4. Grasp the CPU and heatsink along the edge of the carrier and lift the CPU and heatsink off of the motherboard.

    Caution

     
    While lifting the CPU assembly, make sure not to bend the heatsink fins. Also, if you feel any resistance when lifting the CPU assembly, verify that the rotating wires are completely in the unlocked position.
  5. Go to step 3.

Step 3

Put the CPU assembly on a rubberized mat or other ESD-safe work surface.

When placing the CPU on the work surface, the heatsink label should be facing up. Do not rotate the CPU assembly upside down.

Step 4

Attach a CPU dust cover to the CPU socket.

  1. Align the posts on the CPU bolstering plate with the cutouts at the corners of the dust cover.

  2. Lower the dust cover and simultaneously press down on the edges until it snaps into place over the CPU socket.

    Caution

     

    Do not press down in the center of the dust cover!



Step 5

Detach the CPU from the CPU carrier by disengaging CPU clips and using the TIM breaker.

  1. Turn the CPU assembly upside down, so that the heatsink is pointing down.

    This step enables access to the CPU securing clips.

  2. Gently lift the TIM breaker in a 90-degree upward arc to partially disengage the CPU clips on this end of the CPU carrier.

  3. Lower the TIM breaker into the u-shaped securing clip to allow easier access to the CPU carrier.

    Note

     

    Make sure that the TIM breaker is completely seated in the securing clip.

  4. Gently pull up on the outer edge of the CPU carrier (2) so that you can disengage the second pair of CPU clips near both ends of the TIM breaker.

    Caution

     

    Be careful when flexing the CPU carrier! If you apply too much force you can damage the CPU carrier. Flex the carrier only enough to release the CPU clips. Make sure to watch the clips while performing this step so that you can see when they disengage from the CPU carrier.

  5. Gently pull up on the outer edge of the CPU carrier so that you can disengage the pair of CPU clips (3 in the following illustration) which are opposite the TIM breaker.

  6. Grasp the CPU carrier along the short edges and lift it straight up to remove it from the heatsink.

Step 6

Transfer the CPU and carrier to the fixture.

  1. When all the CPU clips are disengaged, grasp the carrier, and lift it and the CPU to detach them from the heatsink.

    Note

     

    If the carrier and CPU do not lift off of the heatsink, attempt to disengage the CPU clips again.

  2. Flip the CPU and carrier right-side up so that the words PRESS are visible.

  3. Align the posts on the fixture and the pin 1 locations on the CPU carrier and the fixture (1 in the following illustration).

  4. Lower the CPU and CPU carrier onto the fixture.



Step 7

Use the provided cleaning kit (UCSX-HSCK) to remove all of the thermal interface material (thermal grease) from the CPU, CPU carrier, and heatsink.

Important

 

Make sure to use only the Cisco-provided cleaning kit, and make sure that no thermal grease is left on any surfaces, corners, or crevices. The CPU, CPU carrier, and heatsink must be completely clean.


What to do next

Choose the appropriate option:

  • If you will be installing a CPU, go to Installing the CPUs and Heatsinks.

  • If you will not be installing a CPU, verify that a CPU socket cover is installed. This option is valid only for CPU socket 2 because CPU socket 1 must always be populated in a runtime deployment.

Installing the CPUs and Heatsinks

Use this procedure to install a CPU if you have removed one, or if you are installing a CPU in an empty CPU socket. To install the CPU, you will move the CPU to the fixture, then attach the CPU assembly to the CPU socket on the server motherboard.

Procedure

Step 1

Remove the CPU socket dust cover on the server motherboard.

  1. Push the two vertical tabs inward to disengage the dust cover.

  2. While holding the tabs in, lift the dust cover up to remove it.

  3. Store the dust cover for future use.

    Caution

     

    Do not leave an empty CPU socket uncovered. If a CPU socket does not contain a CPU, you must install a CPU dust cover.

Step 2

Grasp the CPU fixture on the edges labeled PRESS, lift it out of the tray, and place the CPU assembly on an ESD-safe work surface.

Step 3

Apply new TIM.

Note

 
The heatsink must have new TIM on the heatsink-to-CPU surface to ensure proper cooling and performance.
  • If you are installing a new heatsink, it is shipped with a pre-applied pad of TIM. Go to step 4.

  • If you are reusing a heatsink, you must remove the old TIM from the heatsink and then apply new TIM to the CPU surface from the supplied syringe. Continue with step a below.

  1. Apply the Bottle #1 cleaning solution that is included with the heatsink cleaning kit (UCSX-HSCK=), as well as the spare CPU package, to the old TIM on the heatsink and let it soak for at least 15 seconds.

  2. Wipe all of the TIM off the heatsink using the soft cloth that is included with the heatsink cleaning kit. Be careful to avoid scratching the heatsink surface.

  3. Completely clean the bottom surface of the heatsink using Bottle #2 to prepare the heatsink for installation.

  4. Using the syringe of TIM provided with the new CPU (UCS-CPU-TIM=), apply 1.5 cubic centimeters (1.5 ml) of thermal interface material to the top of the CPU. Use the pattern shown in the following figure to ensure even coverage.

    Figure 17. Thermal Interface Material Application Pattern

    Caution

     

    Use only the correct heatsink for your CPU and server configuration. For non-GPU servers, use UCSC-HSHP-C240M7. For a GPU or GPU-ready configuration, use UCSC-HSLP-C220M7.

Step 4

Attach the heatsink to the CPU fixture.

  1. Make sure the rotating wires are in the unlocked position so that the feet of the wires do not impede installing the heatsink.

  2. Grasp the heatsink by the fins and align the pin 1 location of the heatsink with the pin 1 location on the CPU fixture, then lower the heatsink onto the CPU fixture.

Step 5

Install the CPU assembly onto the CPU motherboard socket.

  1. Push the rotating wires (1 in the following image) to the unlocked position so that they do not obstruct installation.

  2. Grasp the heatsink by the fins, align the pin 1 location on the heatsink with the pin 1 location on the CPU socket (2 in the following image), then seat the heatsink onto the CPU socket.

  3. Holding the CPU assembly level, lower it onto the CPU socket.

  4. Push the rotating wires away from each other to lock the CPU assembly into the CPU socket.

    Caution

     

    Make sure that you close the rotating wires completely before using the Torx driver to tighten the securing nuts.

  5. Choose the appropriate option to secure the CPU to the socket.

    • For a CPU with a high-profile heatsink, set the T30 Torx driver to 12 in-lb of torque and tighten the 4 securing nuts to secure the CPU to the motherboard (4).

    • For a CPU with a low-profile heatsink, set the T30 Torx driver to 12 in-lb of torque and tighten the 4 securing nuts to secure the CPU to the motherboard (3) first. Then, set the torque driver to 6 in-lb of torque and tighten the two Phillips head screws for the extended heatsink (4).


Additional CPU-Related Parts to Order with RMA Replacement CPUs

When a return material authorization (RMA) of the CPU is done on a Cisco UCS C-Series server, additional parts might not be included with the CPU spare. The TAC engineer might need to add the additional parts to the RMA to help ensure a successful replacement.


Note


The following items apply to CPU replacement scenarios. If you are replacing a system chassis and moving existing CPUs to the new chassis, you do not have to separate the heatsink from the CPU. See Additional CPU-Related Parts to Order with RMA Replacement System Chassis.


  • Scenario 1—You are reusing the existing heatsinks:

    • Heat sink cleaning kit (UCSX-HSCK=)

      One cleaning kit can clean up to four CPUs.

    • Thermal interface material (TIM) kit (UCS-CPU-TIM=)

      One TIM kit covers one CPU.

  • Scenario 2—You are replacing the existing heatsinks:


    Caution


    Use only the correct heatsink for your CPUs to ensure proper cooling. There are two different heatsinks: a low-profile heatsink (UCSC-HSLP-C220M8), which is used on the LFF version of the server or with GPUs on any version of the C240 M8 server, and a high-profile heatsink (UCSC-HSHP-C240M8), which is used with a non-GPU configuration of the SFF or E3.S versions of the server.
    • New heatsinks have a pre-applied pad of TIM.

    • Heat sink cleaning kit (UCSX-HSCK=)

      One cleaning kit can clean up to four CPUs.

  • Scenario 3—You have a damaged CPU carrier (the plastic frame around the CPU):

    • CPU Carrier

    • #1 flat-head screwdriver (for separating the CPU from the heatsink)

    • Heatsink cleaning kit (UCSX-HSCK=)

      One cleaning kit can clean up to four CPUs.

    • Thermal interface material (TIM) kit (UCS-CPU-TIM=)

      One TIM kit covers one CPU.

A CPU heat sink cleaning kit is good for up to four CPU and heat sink cleanings. The cleaning kit contains two bottles of solution, one to clean the CPU and heat sink of old TIM and the other to prepare the surface of the heat sink.

New heat sink spares come with a pre-applied pad of TIM. It is important to clean any old TIM off of the CPU surface prior to installing the heat sinks. Therefore, even when you are ordering new heat sinks, you must order the heat sink cleaning kit.

Additional CPU-Related Parts to Order with RMA Replacement System Chassis

When a return material authorization (RMA) of the system chassis is done on a Cisco UCS C-Series server, you move existing CPUs to the new chassis.

The only tool required for moving a CPU/heatsink assembly is a T-30 Torx driver.

Cabling For RAID Cards

The server supports single-controller and dual-controller RAID card configurations. The following topics show cabling diagrams for the supported RAID card configurations.

Cisco Trimode M1 24G RAID Controller With 4GB FBWC 32 Drives

The following diagram shows cabling pertinent to the Cisco Trimode M1 24G RAID Controller with 4GB FBWC 32 Drives.

Cable

Color

Cisco Part Number

Notes

MCIO cable

(Y cable x16 to x8 + x8)

Light Green

The single-connector end of the cable connects to the P-2 motherboard connector near rear riser 3. The dual-connector end of the cable connects to the NVME-B and NVME-D connectors on the HDD backplane.

MCIO cable

(Y cable x16 to x8 + x8)

Blue

The single-connector end of the cable connects to the P2 connector on the motherboard near the rear riser. The dual-connector end of the cable connects to RAID controller2/HBA2.

HDD backplane CFG cable

Ruby Red

Connects motherboard to HDD backplane

HDD Backplane Power cable, 2

Red

Cisco Trimode M1 24G HBA Controller with 4GB FBWC 16 Drives

The following diagram shows cabling pertinent to a Cisco Trimode M1 24G HBA Controller with 4GB FBWC 16 Drives.

Cable

Color

Cisco Part Number

Notes

MCIO NVMe cable (x8)

Light Green

The single-connector end of the cable connects to the P-2 motherboard connector near rear riser 3. The dual-connector end of the cable connects to the NVME-B and NVME-D connectors on the HDD backplane.

MCIO NVMe cable (x8)

Brown

The single-connector end of the cable connects to the P2 connector on the motherboard near the rear riser. The dual-connector end of the cable connects to RAID controller2/HBA2.

MCIO NVMe Y cable

(x16 to x8)

Blue

Slimline SAS Cable

Light Blue

HDD backplane power cable

Turquoise

Connects motherboard to HDD backplane

HDD Backplane Power CFG cable

Red

Replacing RAID Card and Cables

This section describes how to replace RAID cards and cables.

Removing Cisco Trimode M1 24G RAID Controller W/4GB FBWC 32 Drives

The RAID card is located in the front of the server. This process requires you to disconnect the RAID cables from the card and motherboard before removing the RAID card.

Procedure

Step 1

Remove the Fan Module and Air Baffle.

Figure 18. Remove the Fan Module and Air Baffle

Step 2

Disconnect and remove the connector cable from the server motherboard.

Figure 19. Disconnect the Connector Cable

Step 3

Disconnect the right-side RAID module from the sled carrier.

Figure 20. Disconnecting the Right-Side module

Step 4

Remove the right-side RAID module from the server motherboard.

Figure 21. Removing Right-Side RAID Module

Step 5

Remove RAID card from carrier.

  1. Loosen the five screws connecting the RAID card to the module carrier.

  2. Remove the RAID card from the module carrier.

    Figure 22. Removing RAID Card from Carrier

What to do next

Install the replacement RAID card. Go to Installing Cisco Trimode M1 24G RAID Controller W/4GB FBWC 32 Drives.

Installing Cisco Trimode M1 24G RAID Controller W/4GB FBWC 32 Drives

The RAID card is located in the front of the server. This process requires you to connect the RAID card and cables to the motherboard.

Procedure

Step 1

Remove the Fan Module and Air Baffle.

Figure 23. Remove the Fan Module and Air Baffle

Step 2

Connect the RAID connector cable to the server motherboard.

Figure 24. Connect the RAID Connector Cable

Step 3

Install the right-side RAID module to the sled carrier.

Figure 25. Installing the Right-Side module

Step 4

Tighten the right-side RAID module screws to the server motherboard.


Removing Cisco Trimode M1 24G HBA Controller W/4GB FBWC 16 Drives

The RAID card is located in the front of the server. This process requires you to disconnect the RAID cables from the card and motherboard before removing the RAID card.

Procedure

Step 1

Remove the Fan Module and the Air Baffle from the server.

Figure 26. Remove the Fan Module and Air Baffle

Step 2

Remove the connector cable from the server motherboard.

Figure 27. Remove Connector Cables

Step 3

Release the right-side RAID module from the sled carrier.

  1. Lift the sled carrier lever.

  2. Loosen the screws on the RAID card sled carrier.

Figure 28. Disconnecting Right RAID Card from Carrier

Step 4

Remove the right side RAID module from the server motherboard.

Figure 29. Removing the Right-Side RAID Module

Step 5

Disconnect and remove the RAID card connector cable from the server motherboard.

Figure 30. Remove RAID Card Connector Cable

Step 6

Release the left-side RAID module from the sled carrier.

  1. Lift the sled carrier lever.

  2. Loosen the screws on the RAID card sled carrier.

Figure 31. Disconnecting Left side RAID Card from Carrier

Step 7

Remove the RAID card from the module sled carrier.

  1. Loosen the screws on the sled carrier.

  2. Remove RAID card.

Figure 32. Removing RAID Card from Module Sled Carrier

Installing Cisco Trimode M1 24G HBA Controller W/4GB FBWC 16 Drives

The RAID card is located in the front of the server. This process requires you to connect the RAID card and cables to the motherboard.

Procedure

Step 1

Attach the RAID card to the module sled carrier.

Figure 33. Attach the RAID Card to the Module Sled Carrier

Step 2

Insert and attach the RAID module to the sled carrier.

Step 3

Attach the RAID card connector cables to the server motherboard.

Step 4

Attach the RAID card module to the server motherboard.

Step 5

Attach the second connector cable to the RAID card and the server motherboard.

Step 6

Install the Fan Module and the Air Baffle to the server.

Step 7

Tighten the RAID module screws to the server motherboard.


What to do next

Check the RAID card and connectors and make sure all connections are secure.

Replacing Memory DIMMs

The server supports DDR5 288-pin registered DIMMs (RDIMMs). Eight memory channels per socket are supported, with two DIMMs per channel, for a total of 32 DIMM slots on the motherboard.

When installing or replacing DIMMs, be aware of the following:


Caution


DIMMs and their sockets are fragile and must be handled with care to avoid damage during installation.



Caution


Cisco does not support third-party DIMMs. Using non-Cisco DIMMs in the server might result in system problems or damage to the motherboard.



Note


To ensure the best server performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace DIMMs.

For detailed information about supported memory usage, mixing, and population guidelines, see the Cisco UCS Intel M8 Memory Guide.


Replacing DIMMs

Identifying a Faulty DIMM

Each DIMM socket has a corresponding DIMM fault LED, directly in front of the DIMM socket. See Internal Diagnostic LEDs for the locations of these LEDs. When the server is in standby power mode, these LEDs light amber to indicate a faulty DIMM.
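
If the server is reachable over its management network, you can also check the system event log (SEL) for memory errors before you power the server down. The following Python sketch is illustrative only and is not part of the official procedure; it assumes that the open-source ipmitool utility is installed on your workstation, that IPMI over LAN is enabled on the Cisco IMC, and that the host address and credentials shown are placeholders for your own values.

    # Hedged sketch: list SEL records that mention memory or DIMM faults.
    # The management IP and credentials below are placeholders.
    import subprocess

    CIMC_HOST = "10.0.0.100"
    CIMC_USER = "admin"
    CIMC_PASS = "password"

    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", CIMC_HOST,
         "-U", CIMC_USER, "-P", CIMC_PASS, "sel", "elist"],
        capture_output=True, text=True, check=True,
    )

    # Print only the SEL records that look memory-related.
    for line in result.stdout.splitlines():
        if "memory" in line.lower() or "dimm" in line.lower():
            print(line)

Correlate any reported DIMM location with the amber DIMM fault LED before removing a module.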

Procedure

Step 1

Remove an existing DIMM:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

  4. Remove the air baffle that covers the front ends of the DIMM slots to provide clearance.

  5. Locate the DIMM that you are removing, and then open the ejector levers at each end of its DIMM slot.

Step 2

Install a new DIMM:

Note

 

Before installing DIMMs, see the memory population rules for this server. Go to the Cisco UCS Intel M8 Memory Guide.

  1. Align the new DIMM with the empty slot on the motherboard. Use the alignment feature in the DIMM slot to correctly orient the DIMM.

  2. Push down evenly on the top corners of the DIMM until it is fully seated and the ejector levers on both ends lock into place.

  3. Replace the top cover to the server.

  4. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing the RTC Battery


Warning


There is danger of explosion if the battery is replaced incorrectly. Replace the battery only with the same or equivalent type recommended by the manufacturer. Dispose of used batteries according to the manufacturer’s instructions.

[Statement 1015]



Warning


Recyclers: Do not shred the battery! Make sure you dispose of the battery according to appropriate regulations for your country or locale.


The real-time clock (RTC) battery retains system settings when the server is disconnected from power. The battery type is CR2032. Cisco supports the industry-standard CR2032 battery, which can be purchased from most electronic stores.

Procedure


Step 1

Remove the RTC battery:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

  4. Remove PCIe riser 2 from the server to provide clearance to the RTC battery socket that is on the motherboard. See Replacing a PCIe Riser.

  5. Locate the horizontal RTC battery socket.

  6. Remove the battery from the socket on the motherboard. Gently pry the securing clip to the side to provide clearance, then lift up on the battery.

Step 2

Install a new RTC battery:

  1. Insert the battery into its socket and press down until it clicks in place under the clip.

    Note

     

    The positive side of the battery marked “3V+” should face up.

  2. Replace PCIe riser 2 to the server. See Replacing a PCIe Riser.

  3. Replace the top cover to the server.

  4. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing Power Supplies

When two power supplies are installed, they are redundant as 1+1 by default, but they also support cold redundancy mode. Cold redundancy (CR) suspends power delivery on one or more power supplies and forces the remainder of the load to be supplied by the active PSU(s). As a result, overall power efficiency improves because the active PSUs operate closer to the load range where they are most efficient.
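
Before and after swapping a power supply, you can confirm PSU presence and health remotely from the IPMI sensor records. The following Python sketch is illustrative only; it assumes that ipmitool is installed, that IPMI over LAN is enabled on the Cisco IMC, and that the host address and credentials are placeholders.

    # Hedged sketch: read the power supply sensor records over IPMI.
    import subprocess

    CIMC_HOST = "10.0.0.100"
    CIMC_USER = "admin"
    CIMC_PASS = "password"

    psu_status = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", CIMC_HOST,
         "-U", CIMC_USER, "-P", CIMC_PASS, "sdr", "type", "Power Supply"],
        capture_output=True, text=True, check=True,
    )

    # One line per power supply sensor, including presence and failure states.
    print(psu_status.stdout)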

This section includes procedures for replacing AC and DC power supply units.

Supported Power Supplies

The Cisco UCS C240 M8 supports the following power supplies.


Caution


Do not mix PSU types in the same server. Both PSUs must be the same type and wattage.


For detailed information, see Power Specifications.

PSU Type

Supported In

Notes

1050 W DC

All UCS C240 M8 models

One power supply is mandatory; one more can be added for 1 + 1 redundancy as long as the power supplies are the same.

1200 W AC

All UCS C240 M8 models

One power supply is mandatory; one more can be added for 1 + 1 redundancy as long as the power supplies are the same.

1600 W AC

All UCS C240 M8 models

One power supply is mandatory; one more can be added for 1 + 1 redundancy as long as the power supplies are the same.

2300 W AC

All UCS C240 M8 models

One power supply is mandatory; one more can be added for 1 + 1 redundancy as long as the power supplies are the same.

Replacing AC Power Supplies


Note


If you have ordered a server with power supply redundancy (two power supplies), you do not have to power off the server to replace a power supply because they are redundant as 1+1.

Note


Do not mix power supply types or wattages in the server. Both power supplies must be identical.

Caution


DO NOT interchange power supplies of any earlier Cisco UCS servers (for example, any Cisco UCS C240 M6 server power supplies) with the Cisco UCS C240 M8 server.


Procedure

Step 1

Remove the power supply that you are replacing or a blank panel from an empty bay:

  1. Perform one of the following actions:

    • If you are replacing a power supply in a server that has only one power supply, shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

    • If you are replacing a power supply in a server that has two power supplies, you do not have to shut down the server.

  2. Remove the power cord from the power supply that you are replacing.

  3. Grasp the power supply handle while pinching the release lever toward the handle.

  4. Pull the power supply out of the bay.

Step 2

Install a new power supply:

  1. Grasp the power supply handle and insert the new power supply into the empty bay.

  2. Push the power supply into the bay until the release lever locks.

  3. Connect the power cord to the new power supply.

  4. Only if you shut down the server, press the Power button to boot the server to main power mode.


Replacing DC Power Supplies


Note


This procedure is for replacing DC power supplies in a server that already has DC power supplies installed. If you are installing DC power supplies to the server for the first time, see Installing DC Power Supplies (First Time Installation).



Warning


A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.

Statement 1022



Warning


This product requires short-circuit (overcurrent) protection, to be provided as part of the building installation. Install only in accordance with national and local wiring regulations.

Statement 1045



Warning


Installation of the equipment must comply with local and national electrical codes.

Statement 1074



Note


If you are replacing DC power supplies in a server with power supply redundancy (two power supplies), you do not have to power off the server to replace a power supply because they are redundant as 1+1.

Note


Do not mix power supply types or wattages in the server. Both power supplies must be identical.
Procedure

Step 1

Remove the DC power supply that you are replacing or a blank panel from an empty bay:

  1. Perform one of the following actions:

    • If you are replacing a power supply in a server that has only one DC power supply, shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

    • If you are replacing a power supply in a server that has two DC power supplies, you do not have to shut down the server.

  2. Remove the power cord from the power supply that you are replacing. Lift the connector securing clip slightly and then pull the connector from the socket on the power supply.

  3. Grasp the power supply handle while pinching the release lever toward the handle.

  4. Pull the power supply out of the bay.

Step 2

Install a new DC power supply:

  1. Grasp the power supply handle and insert the new power supply into the empty bay.

  2. Push the power supply into the bay until the release lever locks.

  3. Connect the power cord to the new power supply. Press the connector into the socket until the securing clip clicks into place.

  4. Only if you shut down the server, press the Power button to boot the server to main power mode.

Figure 34. Replacing DC Power Supplies

1

Keyed cable connector (CAB-48DC-40A-8AWG)

3

PSU status LED

2

Keyed DC input socket

-


Installing DC Power Supplies (First Time Installation)


Note


This procedure is for installing DC power supplies to the server for the first time. If you are replacing DC power supplies in a server that already has DC power supplies installed, see Replacing DC Power Supplies.



Warning


A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.

Statement 1022



Warning


This product requires short-circuit (overcurrent) protection, to be provided as part of the building installation. Install only in accordance with national and local wiring regulations.

Statement 1045



Warning


Installation of the equipment must comply with local and national electrical codes.

Statement 1074



Note


Do not mix power supply types or wattages in the server. Both power supplies must be identical.

Caution


As instructed in the first step of this wiring procedure, turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.
Procedure

Step 1

Turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.

Note

 

The required DC input cable is Cisco part CAB-48DC-40A-8AWG. This 3-meter cable has a 3-pin connector on one end that is keyed to the DC input socket on the power supply. The other end of the cable has no connector so that you can wire it to your facility’s DC power.

Step 2

Wire the non-terminated end of the cable to your facility’s DC power input source.

Step 3

Connect the terminated end of the cable to the socket on the power supply. The connector is keyed so that the wires align for correct polarity and ground.

Step 4

Restore DC power at your facility’s circuit breaker.

Step 5

Press the Power button to boot the server to main power mode.

Figure 35. Replacing DC Power Supplies

1

Keyed cable connector (CAB-48DC-40A-8AWG)

3

PSU status LED

2

Keyed DC input socket

-

Step 6

See Grounding for DC Power Supplies for information about additional chassis grounding.


Grounding for DC Power Supplies

AC power supplies have internal grounding and so no additional grounding is required when the supported AC power cords are used.

When using a DC power supply, additional grounding of the server chassis to the earth ground of the rack is available. Two screw holes for use with your dual-hole grounding lug and grounding wire are supplied on the chassis rear panel.


Note


The grounding points on the chassis are sized for 10-32 screws. You must provide your own screws, grounding lug, and grounding wire. The grounding lug must be dual-hole lug that fits 10-32 screws. The grounding cable that you provide must be 14 AWG (2 mm), minimum 60° C wire, or as permitted by the local code.

Replacing a PCIe Riser

This server has three toolless PCIe risers for horizontal installation of PCIe cards. Each riser is available in multiple versions. See PCIe Slot Specifications for detailed descriptions of the slots and features in each riser version.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Remove the PCIe riser that you are replacing:

  1. Grasp the flip-up handle on the riser and the blue forward edge, and then lift up evenly to disengage its circuit board from the socket on the motherboard. Set the riser on an antistatic surface.

  2. If the riser has a card installed, remove the card from the riser. See Replacing a PCIe Card.

Step 5

Install a new PCIe riser:

Note

 

The PCIe risers are not interchangeable. If you plug a PCIe riser into the wrong socket, the server will not boot. Riser 1 must plug into the motherboard socket labeled “RISER1.” Riser 2 must plug into the motherboard socket labeled “RISER2.” Riser 3 must plug into its corresponding riser 3 socket.

  1. If you removed a card from the old PCIe riser, install the card to the new riser. See Replacing a PCIe Card.

  2. Position the PCIe riser over its socket on the motherboard and over its alignment slots in the chassis.

  3. Carefully push down on both ends of the PCIe riser to fully engage its circuit board connector with the socket on the motherboard.

Step 6

Replace the top cover to the server.

Step 7

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing a PCIe Card


Note


Cisco supports all PCIe cards qualified and sold by Cisco. PCIe cards not qualified or sold by Cisco are the responsibility of the customer. Although Cisco will always stand behind and support the C-Series rack-mount servers, customers using standard, off-the-shelf, third-party cards must go to the third-party card vendor for support if any issue with that particular card occurs.

PCIe Slot Specifications

The server contains three toolless PCIe risers for horizontal installation of PCIe cards. Each riser is orderable in multiple versions.

  • Riser 1 (controlled by CPU 1) contains PCIe slots 1, 2, and 3, and is available with the following different options:

    • Riser 1A —Slot 1 (x8 Gen 5), 2 (x16 Gen 5), 3 (x8 Gen 5).

    • Riser 1B (SFF server and LFF server)—Slots 1 (Reserved), 2 (x4 Gen4) and 3 (x4 Gen 4) for SFF drive bays

    • Riser 1C—Slots 1 (x16 Gen 5) and 2 (x16 Gen 5)

    • Riser 1D (EDSFF server only)—Slot 1 (Reserved), 2 (x4 Gen5), and 3 (x4 Gen 5)

  • Riser 2 (controlled by CPU 2) contains PCIe slots 4, 5, and 6, and is available with the following different options:

    • Riser 2A—Slots 4 (x8 Gen 5), 5 (x16 Gen 5), and 6 (x8 Gen 5).

    • Riser 2C—Slots 4 (x16 Gen 5) and 5 (x16 Gen 5)

  • Riser 3 (controlled by CPU 2) contains PCIe slots 7 and 8, and is available in the following different options:

    • Riser 3A—Slots 7 (x8 Gen 5) and 8 (x8 Gen 5).

    • Riser 3B (SFF and LFF server)—Slots 1 (Reserved), 2 (x4 Gen 4) and 3 (x4 Gen 4) for SFF drive bays.

    • Riser 3C—Slot 7 (x16 Gen 5).

    • Riser 3D (EDSFF server only)— Slots 1 (Reserved), 2 (x4 Gen 5), and 3 (x4 Gen 5) for EDSFF E3.S 1TB drive bays

The following illustration shows the PCIe slot numbering.

Figure 36. Rear Panel, Showing PCIe Slot Numbering

Replacing a PCIe Card


Note


If you are installing a Cisco UCS Virtual Interface Card, there are prerequisite considerations. See Cisco Virtual Interface Card (VIC) Considerations.



Note


RAID controller cards install into a dedicated motherboard socket. See Replacing a Storage Controller Card (RAID or HBA).


Procedure

Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Remove the PCIe card that you are replacing:

  1. Remove any cables from the ports of the PCIe card that you are replacing.

  2. Use two hands to flip up and grasp the blue riser handle and the blue finger grip area on the front edge of the riser, and then lift straight up.

  3. On the bottom of the riser, push the release latch that holds the securing plate, and then swing the hinged securing plate open.

  4. Open the hinged card-tab retainer that secures the rear-panel tab of the card.

  5. Pull evenly on both ends of the PCIe card to remove it from the socket on the PCIe riser.

    If the riser has no card, remove the blanking panel from the rear opening of the riser.

Step 5

Install a new PCIe card:

  1. With the hinged card-tab retainer open, align the new PCIe card with the empty socket on the PCIe riser.

  2. Push down evenly on both ends of the card until it is fully seated in the socket.

  3. Ensure that the card’s rear panel tab sits flat against the riser rear-panel opening and then close the hinged card-tab retainer over the card’s rear-panel tab.

  4. Swing the hinged securing plate closed on the bottom of the riser. Ensure that the clip on the plate clicks into the locked position.

  5. Position the PCIe riser over its socket on the motherboard and over the chassis alignment channels.

  6. Carefully push down on both ends of the PCIe riser to fully engage its connector with the sockets on the motherboard.

Step 6

Replace the top cover to the server.

Step 7

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 37. PCIe Riser Card Securing Mechanisms

1

Release latch on hinged securing plate

3

Hinged card-tab retainer

2

Hinged securing plate

-


Cisco Virtual Interface Card (VIC) Considerations

This section describes VIC card support and special considerations for this server.


Note


If you use the Cisco Card NIC mode, you must also make a VIC Slot setting that matches where your VIC is installed. The options are Riser1, Riser2, and Flex-LOM. See NIC Mode and NIC Redundancy Settings for more information about NIC modes.

If you want to use the Cisco UCS VIC card for Cisco UCS Manager integration, see also the Cisco UCS C-Series Server Integration with Cisco UCS Manager Guides for details about supported configurations, cabling, and other requirements.

  • A total of 3 VICs are supported in the server: 2 PCIe style, and 1 mLOM style. A maximum of one VIC is supported per riser.

  • VIC slot priority depends on which risers are selected:

    • When VIC is selected with Riser 1A/2A, VIC slots should be populated in the following order: mLOM, Riser 1A slot 2, Riser 2A slot 5, Riser 1A slot 1, Riser 2A slot 4.

    • When VIC is selected with Riser 1C/2C, VIC slots should be populated in the following order: mLOM, Riser 1C slot 1, Riser 2C slot 4


    Note


    Single wire management is supported on only one VIC at a time. If multiple VICs are installed on a server, only one slot has NCSI enabled at a time. For single wire management, priority goes to the MLOM slot, then slot 2, then slot 5 for NCSI management traffic. When multiple cards are installed, connect the single-wire management cables in the priority order mentioned above.


  • The NCSI protocol is supported on only one slot at a time in each riser.

    • If a GPU card is present in slot 2, NCSI automatically shifts from slot 2 to slot 1.

    • If a GPU card is present in slot 5, NCSI automatically shifts from slot 5 to slot 4.


    Note


    PCIe riser 2 is not available in a single-CPU system.


Replacing an mLOM Card

The server supports a modular LOM (mLOM) card to provide additional rear-panel connectivity. The mLOM socket is on the motherboard, under the storage controller card.

The mLOM socket provides a Gen-3 x16 PCIe lane. The socket remains powered when the server is in 12 V standby power mode, and it supports the network communications services interface (NCSI) protocol.


Note


If your mLOM card is a Cisco UCS Virtual Interface Card (VIC), see Cisco Virtual Interface Card (VIC) Considerations for more information and support details.

Procedure


Step 1

Remove any existing mLOM card (or a blanking panel):

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

  4. Remove any storage controller (RAID or HBA card) to provide clearance to the mLOM socket on the motherboard. See Replacing a Storage Controller Card (RAID or HBA).

  5. Loosen the single captive thumbscrew that secures the mLOM card to the threaded standoff on the chassis floor.

  6. Slide the mLOM card horizontally to free it from the socket, then lift it out of the server.

Step 2

Install a new mLOM card:

  1. Set the mLOM card on the chassis floor so that its connector is aligned with the motherboard socket.

  2. Push the card horizontally to fully engage the card's edge connector with the socket.

  3. Tighten the captive thumbscrew to secure the card to the chassis floor.

  4. Return the storage controller card to the server. See Replacing a Storage Controller Card (RAID or HBA).

  5. Replace the top cover to the server.

  6. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing an OCP Card

As a hardware option, the server can be configured with an Open Compute Project (OCP) 3.0 NIC in the rear mezzanine mLOM slot. To support this option, the server requires the Intel Ethernet Network Adapter X710 OCP 3.0 card.

If this is the first time an OCP card is being installed into the server, you will need to install the OCP interposer, which is available as part of a kit (UCSC-OCP3-KIT-D=).


Note


In addition to an OCP card, the server can support a Cisco mLOM in the rear mezzanine mLOM slot. The server can support either an OCP card or an mLOM, but not both. For information about replacing an mLOM, see Replacing an mLOM Card.


See the following topics.

Cisco VIC mLOM and OCP Card Replacement Considerations

In Cisco UCS C240 M8 servers, the Cisco IMC network connection may be lost while replacing Cisco VIC mLOM and OCP cards in the following situations:

  • If an OCP card is replaced by a Cisco VIC card in the mLOM Slot and the NIC mode is set to Shared OCP or Shared OCP Extended.

  • If a Cisco VIC Card in the mLOM Slot is replaced by an OCP Card and the NIC mode is set to Cisco-card MLOM.

Follow these recommendations while replacing Cisco VIC mLOM or OCP cards in Cisco UCS C240 M8 servers to avoid loss of connectivity:

  • Before replacing the card, configure any NIC mode that has a connected network, other than Cisco card MLOM, Shared OCP, or Shared OCP Extended. After replacing the card, configure the appropriate NIC mode (a hedged CLI sketch follows this list).

    To set the NIC mode, refer to the Server NIC Configuration section in the Configuration Guides for your Cisco IMC release.

  • Or, after replacing the card, configure the appropriate NIC mode using Cisco IMC Configuration Utility/F8.

    See Connecting to the Server Remotely For Setup.

  • Or, after replacing the card, restore the factory default settings using the Cisco IMC Configuration Utility (F8), and then perform the following steps:

    1. After the server reboots, boot the system to the Cisco IMC Configuration Utility (F8) and change the default password.

    2. Configure the appropriate NIC mode settings.
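
The following Python sketch illustrates the CLI-based approach described in the first recommendation in the preceding list. It is a hedged example, not an official procedure: the host address, credentials, and the exact CLI keywords (for example, cisco_card and riser1) are assumptions, so verify them against the Server NIC Configuration section of the CLI Configuration Guide for your Cisco IMC release before using anything similar.

    # Hedged sketch: send NIC mode commands to the Cisco IMC CLI over SSH.
    # All values and CLI keywords below are placeholders or assumptions.
    import time
    import paramiko

    CIMC_HOST = "10.0.0.100"
    CIMC_USER = "admin"
    CIMC_PASS = "password"

    commands = [
        "scope cimc",
        "scope network",
        "set mode cisco_card",   # assumed keyword for the Cisco card NIC mode
        "set vic-slot riser1",   # assumed keyword matching the Riser1/Riser2/Flex-LOM options
        "commit",
    ]

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(CIMC_HOST, username=CIMC_USER, password=CIMC_PASS, look_for_keys=False)

    shell = client.invoke_shell()            # the Cisco IMC presents an interactive CLI
    for cmd in commands:
        shell.send((cmd + "\n").encode())
        time.sleep(1)                        # crude pacing; a robust script waits for prompts

    print(shell.recv(65535).decode(errors="replace"))
    client.close()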

Table 3. Factory Default Settings

VIC in mLOM slot

Intel OCP 3.0 NIC (Intel X710) in mLOM Slot

VIC in Riser Slot

Dedicated Management Port

NIC Mode for CIMC Access

Yes

No

No

Yes

Cisco Card mode with the card in mLOM Slot

No

Yes

No

Yes

Shared OCP Extended

No

Yes

Yes

Yes

Shared OCP Extended

No

No

Yes

Yes

Cisco Card with VIC SLOT based on precedence:

For C240 M8:

  1. Riser 1 - Slot 2

  2. Riser 2 - Slot 5

  3. Riser 1 - Slot 1

  4. Riser 2 - Slot 4

No

No

No

Yes

Dedicated

Removing an OCP Card

The OCP card (UCSC-O-ID10GC) mounts into the rear mezzanine mLOM slot. You will need to open the server top cover to remove or install the OCP card.

Use the following procedure to remove the OCP card from a server with full-height risers.

Before you begin

Gather a #2 Phillips screwdriver.

Procedure

Step 1

If you have not removed the server's top cover, do so now.

See Removing the Server Top Cover.

Step 2

Remove the OCP bracket.

  1. Locate the two screws that secure the bracket to the server sheetmetal.

  2. Using a #2 Phillips screwdriver, loosen the screws.

  3. Remove the screws and lift the bracket off of the server.

  4. Holding the OCP card level, slide it out of the server.

Step 3

Choose the appropriate option:


Installing an OCP Card

The OCP 3.0 card (UCSC-O-ID10GC) installs into the rear mezzanine mLOM slot and connects to an adapter, not directly to the motherboard. To install the OCP card, the server's top cover must be opened to gain access to screws that secure the OCP card in place.

Use the following task to install an OCP 3.0 card.

Before you begin

Gather a #2 Phillips screwdriver.

Procedure

Step 1

If you have not removed the server's top cover, do so now.

See Removing the Server Top Cover.

Step 2

Install the OCP card.

  1. Holding the OCP card level, slide it into the slot on the rear of the server.

  2. Install the OCP bracket onto the server, making sure to align the screw holes at each end of the bracket with the screw holes on the OCP/mLOM slot.

Step 3

Using a #2 Phillips screwdriver, tighten the screws to secure the OCP bracket and OCP card to the server.


What to do next

Replace the server top cover.

Replacing a Storage Controller Card (RAID or HBA)

For hardware-based storage control, the server can use a Cisco modular RAID controller or HBA that plugs into a dedicated, vertical socket on the motherboard.

Storage Controller Card Firmware Compatibility

Firmware on the storage controller (RAID or HBA) must be verified for compatibility with the current Cisco IMC and BIOS versions that are installed on the server. If not compatible, upgrade or downgrade the storage controller firmware using the Host Upgrade Utility (HUU) for your firmware release to bring it to a compatible level.


Note


For servers running in standalone mode only: After you replace controller hardware, you must run the Cisco UCS Host Upgrade Utility (HUU) to update the controller firmware, even if the firmware Current Version is the same as the Update Version. Running HUU is necessary to program any controller-specific values to the storage controller for the specific server. If you do not run HUU, the storage controller might not be discovered.


See the HUU guide for your Cisco IMC release for instructions on downloading and using the utility to bring server components to compatible levels: HUU Guides.
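
As a quick cross-check from the host operating system, you can read the controller's running firmware package before comparing it against the HUU bundle for your release. The following Python sketch is illustrative only; it assumes that the Broadcom storcli64 utility is installed, that the controller enumerates as /c0, and that the firmware field name shown applies to your controller model.

    # Hedged sketch: report the controller firmware package with storcli64.
    # Adjust the utility path and controller index for your system.
    import subprocess

    output = subprocess.run(
        ["storcli64", "/c0", "show"],
        capture_output=True, text=True, check=True,
    ).stdout

    # MegaRAID-based controllers typically report a "FW Package Build" line.
    for line in output.splitlines():
        if "FW Package Build" in line:
            print(line.strip())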

Backplane Connectors

The server can be ordered with different storage hardware, each of which has a specific front drive backplane. Refer to the information in this topic when cabling storage controllers (RAID or HBA) for data storage drives.

The backplane connectors are labeled so that they are easily identifiable when you are working in the server. For reference, each supported backplane type is shown here.

For each backplane type:

  • The rear view shows the backplane connectors that face the interior of the server and can receive cables.

  • The front view shows the backplane connectors that face the front-loading drives and enumerates the connector for each drive bay.


Note


The server also has M.2 boot drives, which are managed by a Boot-Optimized RAID Controller; this controller does not affect the data storage drives and does not connect to the drive backplane.


SFF Drive Backplane (UCSC-240-M8SX)

The SFF backplane has the following connectors.

E3S Drive Backplane (UCSC-240-M8E3S)

The NVMe backplane has the following connectors.

LFF Drive Backplane (UCSC-240-M8L)

The LFF backplane has the following connectors.

Removing the Dual Storage Controller Cards

The front RAID assembly can contain either a single Storage controller card in a single tray, or two Storage controller cards each in its own tray. Use this procedure to remove each of the Storage controller cards. This procedure assumes that you have removed power from the server and removed the top cover.

Procedure

Step 1

Locate the dual Storage controller cards.

Each Storage controller card has its own tray, as shown.

Step 2

Remove the fan tray.

For more information, see Removing the Fan Tray.

Step 3

Disconnect the cables.

  1. For each Storage controller card, grasp the ribbon cable connector and disconnect it from the RAID card.

    You can leave the other end of the ribbon cable attached to the motherboard.

  2. For each Storage controller card, grasp the connector for the rear-drive cable, and disconnect it from the card.

    You can leave the other end of the rear-drive cable attached.

    1

    SAS cable connection on Storage controller card.

    2

    SAS cable that connects to rear drives in Riser 3B

    3

    SAS cable connection on rear Riser 3B.

    4

    SAS cable connection on Storage controller card.

    5

    Ribbon cables connecting Storage controller cards to motherboard.

    6

    SAS cable that connects to rear drives in Riser 1B

    7

    SAS cable connection on rear Riser 1B.

Step 4

Remove the Storage controller cards.

  1. Grasp the cable that leads to the rear drives and disconnect it from each card.

  2. Grasp the handle at the top of each card tray, and gently push it towards the rear of the server.

    The handle should slide to the open position. This step disconnects the Storage controller card from a socket on an interior wall.

  3. Using a #2 Phillips screwdriver, loosen the captive screws at the edges of the trays.

  4. Grasp each card tray by the handle and lift the Storage controller cards out of the chassis.


What to do next

Reinsert the dual Storage controller cards. Go to Installing the Dual Storage Controller Cards.

Installing the Dual Storage Controller Cards

Use this procedure to install the dual Storage controller cards into the server. Each Storage controller card is contained in its own tray, which is replaceable.

Procedure

Step 1

Grasp each card tray by the handle.

Step 2

Install the Storage controller cards.

  1. Make sure that the handle of the tray is in the open position.

  2. Make sure that the cables do not obstruct installing the Storage controller cards.

  3. Orient the Storage controller card so that the thumbscrews align with their threaded standoffs on the motherboard.

  4. Holding the card tray by the handle, keep the tray level and lower it into the server.

  5. Using a #2 Phillips head screwdriver, tighten the screws at the edges of each tray.

  6. Gently push the handle of the tray towards the front of the server.

    This step seats each Storage controller card into its socket on the interior wall. You might feel some resistance as the card meets the socket. This resistance is normal.

Step 3

Reconnect the cables.

Step 4

Reinsert the fan tray.

For more information, see Installing the Fan Tray.


What to do next

Perform other maintenance tasks, if needed, or replace the top cover and restore facility power.

Removing the Storage Controller Card

The front Storage controller assembly can contain either a single controller card in a single tray, or two controller cards, each in its own tray. Use this procedure to remove the single Storage controller card. This procedure assumes that you have removed power from the server and removed the top cover.

Procedure

Step 1

Locate the Storage controller card.

Step 2

Remove the fan tray.

For more information, see Removing the Fan Tray.

Step 3

Disconnect the cables.

  1. Grasp the ribbon cable connectors and disconnect them from the Storage controller card.

    You can leave the other end of the ribbon cable attached to the motherboard.

  2. Grasp the connector for the rear-drive cables (1 and 4) and disconnect them from the Storage controller card.

    You can leave the other end of the rear-drive cable attached.

    1

    Storage controller card connector for rear drives (Riser 3B)

    2

    SAS/SATA cable for rear drives

    3

    Connector for PCI Riser 3

    4

    Storage controller card connector for rear drives (Riser 1B)

    5

    Connector for PCI Riser 1

    6

    SAS/SATA cable for rear drives

Step 4

Remove the Storage controller card.

  1. Using both hands, grasp the handle at the top of the card tray, and gently push it towards the rear of the server.

    The handle should slide to the open position. This step disconnects the Storage controller card from a socket on an interior wall.

  2. Using a #2 Phillips screwdriver, loosen the captive screws at the edges of the tray.

  3. Using both hands, grasp the tray's handle, and keeping the Storage controller card tray level, lift it out of the chassis.


What to do next

Reinsert the Storage controller card. Go to Installing the Storage Controller Card.

Installing the Storage Controller Card

Use this procedure to install the single Storage controller card into the server. The Storage controller card is contained in a tray, which is replaceable.

Procedure

Step 1

Grasp the card tray by the handle.

Step 2

Install the Storage controller card.

  1. Make sure that the handle of the tray is in the open position.

  2. Make sure that the cables do not obstruct installing the Storage controller card.

  3. Orient the Storage controller card so that the thumbscrews align with their threaded standoffs on the motherboard.

  4. Using both hands, hold the card tray by the handle, keep the tray level, and lower it into the server.

  5. Using a #2 Phillips head screwdriver, tighten the screws at the edges of the tray.

  6. Using both hands, make sure to apply equal pressure to both sides of the handle, and gently push the handle of the tray towards the front of the server.

    This step seats the Storage controller card into its sockets on the interior wall. You might feel some resistance as the card meets the socket. This resistance is normal.

Step 3

Reconnect the cables.

Step 4

Reinsert the fan tray.

For more information, see Installing the Fan Tray.


What to do next

Perform other maintenance tasks, if needed, or replace the top cover and restore facility power.

Verify Cabling

After installing a Storage controller card, the cabling between the card(s) and rear drives should be as follows.

  • For a 24-drive server, verify the following:

    • the SAS/SATA cable is connected to the controller card and Riser 3B

    • the SAS/SATA cable is connected to the controller card and the Riser 1B

    • both ribbon cables are connected to the controller card and the motherboard

    1

    SAS cable connection on Storage controller card.

    2

    SAS cable that connects to rear drives in Riser 3B

    3

    SAS cable connection on rear Riser 3B.

    4

    SAS cable connection on Storage controller card.

    5

    Ribbon cables connecting Storage controller cards to motherboard.

    6

    SAS cable that connects to rear drives in Riser 1B

    7

    SAS cable connection on rear Riser 1B.

Cabling for UCSC-240-M8SX Servers

The UCSC-240-M8SX servers have multiple different storage controller options for front- and rear-loading drives. Use the cabling diagrams in this topic to connect the storage controllers to local storage.


Note


For information about the location and identity of the different SFF backplane connectors, see Backplane Connectors.


Cisco UCSC-240-M8SX with No Storage Controller

Use this diagram to connect a UCSC-240-M8SX with 24 front-loading SFF drives. This configuration has no front storage controller.

Cable

Color

Notes

MCIO NVMe Y cable to drive backplane cables, 2

(x16 to x8)

Yellow

The single-connector end of the cable connects to the P-2 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the NVME-A2 and NVME-A1 connectors on the backplane.

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the NVME-B2 and NVME-B1 connectors on the backplane.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD backplane PWR cables, 3

Pink

Connects motherboard to HDD backplane power connectors (PWR-1 by CPU 1, PWR-2, and PWR-3 by CPU 2)

Cisco UCSC-240-M8SX with Dual UCSC-RAID-M1L16

Use this diagram to connect a UCSC-240-M8SX with 24 front-loading SFF drives. This configuration features two 16-drive Cisco 24G Tri-Mode M1 RAID controllers (UCSC-RAID-M1L16).

Cable

Color

Notes

MCIO NVMe Y cable to drive backplane cables, 2

(x16 to x8)

Yellow

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the NVME-A2 and NVME-A1 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the NVME-B2 and NVME-B1 connectors on the backplane.

MCIO NVMe Y cable to storage controller cables, 2

Blue

The single-connector end of the cable connects to the P-2 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the PCIe-1 and PCIe-2 connectors on the storage controller.

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the PCIe-1 and PCIe-2 connectors on the storage controller.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD backplane PWR cables, 3

Pink

Connects motherboard to HDD backplane power (PWR-1 by CPU 1, PWR-2, and PWR-3 by CPU2)

SuperCap cables, 2

Lavender

Connects each Supercap to the storage controller.

Cisco UCSC-240-M8SX with Dual UCSC-HBA-M1L16

Use this diagram to connect a UCSC-240-M8SX with 24 front-loading SFF drives. This configuration features two 16-drive Cisco 24G Tri-Mode M1 HBA controllers (UCSC-HBA-M1L16).

Cable

Color

Notes

MCIO NVMe Y cable to drive backplane cables, 2

(x16 to x8)

Yellow

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the NVME-A2 and NVME-A1 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the NVME-B2 and NVME-B1 connectors on the backplane.

MCIO NVMe Y cable to storage controller cables, 2

Blue

The single-connector end of the cable connects to the P-2 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the PCIe-1 and PCIe-2 connectors on the storage controller.

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the PCIe-1 and PCIe-2 connectors on the storage controller.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD backplane PWR cables, 3

Pink

Connects motherboard to HDD backplane power (PWR-1 by CPU 1, PWR-2, and PWR-3 by CPU2)

Cisco UCSC-240-M8SX with Dual UCSC-HBA-M1L16 and Rear Drives

Use this diagram to connect a UCSC-240-M8SX with 24 front-loading SFF drives and 4 rear SFF drives. This configuration features two 16-drive Cisco 24G Tri-Mode M1 HBA controllers (UCSC-HBA-M1L16).

Cable

Color

Notes

MCIO NVMe Y cable to drive backplane cables, 2

(x16 to x8)

Yellow

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the NVME-A1 and NVME-A2 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the NVME-B1 and NVME-B2 connectors on the backplane.

MCIO NVMe Y cable to storage controller cables, 2

Blue

The single-connector end of the cable connects to the P-2 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the PCIe-1 and PCIe-2 connectors on the storage controller.

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the PCIe-1 and PCIe-2 connectors on the storage controller.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD backplane PWR cables, 3

Pink

Connects motherboard to HDD backplane power (PWR-1 by CPU 1, PWR-2, and PWR-3 by CPU2)

Slimline SAS Cable, 4

Cyan

Connects the storage controller 1 (by CPU 2) to the rear SAS connector for PCI Riser 3

Connects storage controller 2 (by CPU 1) to the rear SAS connector for PCIe Riser 1

Cisco UCSC-240-M8SX with a Single CPU and Dual UCSC-HBA-M1L16

Use this diagram to connect a UCSC-240-M8SX with 24 front-loading SFF drives. This single-CPU configuration features two 16-drive Cisco 24G Tri-Mode M1 HBA controllers (UCSC-HBA-M1L16).

Cable

Color

Notes

MCIO NVMe Y cable to drive backplane cables, 2

(x16 to x8)

Yellow

The single-connector end of the cable connects to the P-2 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the NVME-B2 connector on the backplane and the PCIe-1 connector on storage controller 2.

MCIO NVMe Y cable to storage controller cables, 2

Blue

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the NVME-B1 connector on the backplane and the PCIe-1 connector on storage controller 1.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD backplane PWR cables, 3

Pink

Connects motherboard to HDD backplane power (PWR-1 by CPU 1, PWR-2, and PWR-3 by CPU2)

Cisco UCSC-240-M8SX with UCSC-RAID-MP1L32

Use this diagram to connect a UCSC-240-M8SX with 24 front-loading SFF drives. This configuration features the 32-drive Cisco 24G Tri-Mode M1 RAID controller (UCSC-RAID-MP1L32).

Cable

Color

Notes

MCIO NVMe Y cable to drive backplane cables, 2

(x16 to x8)

Yellow

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the NVMe-A2 and NVMe-A1 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the NVMe-B2 and NVMe-B1 connectors on the backplane.

MCIO NVMe Y cable to storage controller cable, 1

Blue

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the PCIe-1 and PCIe-2 connector on the storage controller.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD backplane PWR cables, 3

Pink

Connects motherboard to HDD backplane power (PWR-1 by CPU 1, PWR-2, and PWR-3 by CPU2)

SuperCap Cable, 1

Lavender

Connects the Supercap to the storage controller

Cisco UCSC-240-M8SX with UCSC-RAID-MP1L32 and Rear Drives

Use this diagram to connect a UCSC-240-M8SX with 24 front-loading SFF drives and four rear SFF drives. This configuration features the 32-drive Cisco 24G Tri-Mode M1 RAID controller (UCSC-RAID-MP1L32).

Cable

Color

Notes

MCIO NVMe Y cable to drive backplane cables, 2

(x16 to x8)

Yellow

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the NVMe-A2 and NVMe-A1 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the NVMe-B2 and NVMe-B1 connectors on the backplane.

MCIO NVMe Y cable to storage controller cable

Blue

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the PCIe-1 and PCIe-2 connector on the storage controller.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD backplane PWR cables, 3

Pink

Connects motherboard to HDD backplane power (PWR-1 by CPU 1, PWR-2, and PWR-3 by CPU2)

SuperCap Cable, 1

Lavender

Connects the Supercap to the storage controller

Slimline SAS Cable, 4

Cyan

Connects the storage controller (by CPU 2) to the rear SAS connector for PCI Riser 3

Connects the storage controller (by CPU 1) to the rear SAS connector for PCIe Riser 1

Cisco UCSC-240-M8SX with a Single CPU and UCSC-RAID-MP1L32

Use this diagram to connect a UCSC-240-M8SX with 24 front-loading SFF drives in a single-CPU configuration. This configuration features the 32-drive Cisco 24G Tri-Mode M1 RAID controller (UCSC-RAID-MP1L32).

Cable

Color

Notes

MCIO NVMe Y cable to drive backplane cable

(x16 to x8)

Yellow

The single-connector end of the cable connects to the P-2 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the NVMe-B2 and NVMe-B1 connectors on the backplane.

MCIO NVMe Y cable to storage controller cable

Blue

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the PCIe-1 and PCIe-2 connector on the storage controller.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD Backplane PWR cables, 3

Pink

Connects motherboard to HDD backplane power (PWR-1 by CPU 1, PWR-2, and PWR-3 by CPU 2)

SuperCap Cable, 1

Lavender

Connects the Supercap to the storage controller

Cabling for UCSC-240-M8E3S Servers

The UCSC-240-M8E3S servers have multiple different storage controller options for front- and rear-loading drives. Use the cabling diagrams in this topic to connect the storage controllers to local storage.


Note


For information about the location and identity of the different NVMe backplane connectors, see Backplane Connectors.


Cisco UCSC-240-M8E3S with 16 1T x4 Signaling Front Drives and Rear Drives

Use this diagram to connect a UCSC-240-M8E3S with 16 1 TB x4 signaling front drives and four E3.S rear drives.

Cable

Color

Notes

MCIO NVMe Y cable to drive backplane, Front MCIO to backplane cables, 4

(x16 to x8)

Blue

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the P5 and P6 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the P7 and P8 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the P9 and P10 connectors on the backplane.

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the P11 and P12 connectors on the backplane.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD Backplane PWR cables, 4

Pink

Connects motherboard to HDD backplane power connectors (PWR-3 and PWR-4 by CPU 2, and PWR-1 and PWR-2 by CPU 1)

Rear E3.S x8 cables, 4

Lavender

Connects to the Rear SSD1 and Rear SSD2 connector on the motherboard and PCIe Riser 3

Connects to the Rear SSD1 and Rear SSD2 connector on the motherboard and PCIe Riser 1

Cisco UCSC-240-M8E3S with 24 1T x4 Signaling Front Drives

Use this diagram to connect a UCSC-240-M8E3S with 24 1 TB x4 signaling front drives.

Cable

Color

Notes

MCIO NVMe X cable to drive backplane, Riser 2E to backplane cables, 2

(x16 to x8)

Yellow

Connects the P1 through P4 connectors on the backplane to the connectors on Riser 2E

MCIO NVMe Y cable to drive backplane, Front MCIO to backplane cables, 4

(x16 to x8)

Blue

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the P5 and P6 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the P7 and P8 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the P9 and P10 connectors on the backplane.

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the P11 and P12 connectors on the backplane.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD Backplane PWR cables, 4

Pink

Connects motherboard to HDD backplane power connectors (PWR-3 and PWR-4 by CPU 2, and PWR-1 and PWR-2 by CPU 1)

Cisco UCSC-240-M8E3S with 32 1T x4 Signaling Front Drives

Use this diagram to connect a UCSC-240-M8E3S with 32 1 TB x4 signaling front drives.

Cable

Color

Notes

MCIO NVMe X cable to drive backplane, Riser 1E and 2E to backplane cables, 2

(x16 to x8)

Yellow

Connects the P1 through P4 connectors on the backplane to the connectors on Riser 2E

Connects the P13 through P16 connectors on the backplane to the connectors on Riser 1E

MCIO NVMe Y cable to drive backplane, Front MCIO to backplane cables, 4

(x16 to x8)

Blue

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the P5 and P6 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the P7 and P8 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the P9 and P10 connectors on the backplane.

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the P11 and P12 connectors on the backplane.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD Backplane PWR cables, 4

Pink

Connects motherboard to HDD backplane power connectors (PWR-3 and PWR-4 by CPU 2, and PWR-1 and PWR-2 by CPU 1)

Cisco UCSC-240-M8E3S with 16 1T x2 Signaling Front Drives and Rear Drives

Use this diagram to connect a UCSC-240-M8E3S with 16 1 TB x2 signaling front drives plus four E3.S rear drives.

Cable

Color

Notes

MCIO NVMe Y cable to drive backplane, Front MCIO to backplane cables, 2

(x16 to x8)

Blue

The single-connector end of the cable connects to the P-2 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the P9 through P12 connectors on the backplane.

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 2 DIMM block. The dual-connector end of the cable connects to the P13 through P16 connectors on the backplane.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD Backplane PWR cables, 4

Pink

Connects motherboard to HDD backplane power connectors (PWR-3 and PWR-4 by CPU 2, and PWR-1 and PWR-2 by CPU 1)

Rear E3.S x8 cables, 4

Lavender

Connects to the Rear SSD1 and Rear SSD2 connector on the motherboard and PCIe Riser 3

Connects to the Rear SSD1 and Rear SSD2 connector on the motherboard and PCIe Riser 1

Cisco UCSC-240-M8E3S with 32 1T/2T x2 Signaling Front Drives and Rear Drives

Use this diagram to connect a UCSC-240-M8E3S with 32 1 TB or 2 TB x2 signaling front drives plus four E3.S rear drives.

Cable

Color

Notes

MCIO NVMe Y cable to drive backplane, Front MCIO to backplane cables, 4

(x16 to x8)

Blue

The single-connector end of the cable connects to the P-1 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the P1 through P4 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the P5 through P8 connectors on the backplane.

The single-connector end of the cable connects to the P-2 motherboard connector in front of the central DIMM block. The dual-connector end of the cable connects to the P9 through P12 connectors on the backplane.

The single-connector end of the cable connects to the P-1 motherboard connector in front of the CPU 1 DIMM block. The dual-connector end of the cable connects to the P13 through P16 connectors on the backplane.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD Backplane PWR cables, 4

Pink

Connects motherboard to HDD backplane power connectors (PWR-3 and PWR-4 by CPU 2, and PWR-1 and PWR-2 by CPU 1)

Rear E3.S x8 cables, 4

Lavender

Connects to the Rear SSD1 and Rear SSD2 connector on the motherboard and PCIe Riser 3

Connects to the Rear SSD1 and Rear SSD2 connector on the motherboard and PCIe Riser 1

Cabling for UCSC-240-M8L Servers

The UCSC-240-M8L servers have multiple different storage controller options for front- and rear-loading drives. Use the cabling diagrams in this topic to connect the storage controllers to local storage.


Note


For information about the location and identity of the different LFF backplane connectors, see Backplane Connectors.


Cisco UCSC-240-M8L

Use this diagram to connect a UCSC-240-M8L with 12 large form factor front-loading drives.

Cable

Color

Notes

Slimline SAS cable (x4) 2-in-1 cable, 1

Front BP & Mid BP connection to LFF storage controller

Orange

The single-connector end of the cable connects to the J23 connector on the storage controller.

The dual end of the cable connects to the SAS B1 and SAS B2 connectors on the backplane.

Slimline SAS cable (x4) 2-in-1 cable, 1

Front BP & Mid BP connection to LFF storage controller

Blue

The single-connector end of the cable connects to the J22 connector on the storage controller.

The dual end of the cable connects to the SAS A1 and SAS A2 connectors on the backplane.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD Backplane PWR cables, 2

Pink

Connects motherboard to HDD backplane power connectors (PWR-1 by CPU 1 and PWR-2 by CPU 2)

Cisco UCSC-240-M8L with Four LFF Mid-Mount and Four Rear Drives Plus an LFF Storage Controller

Use this diagram to connect a UCSC-240-M8L with 12 large form factor front-loading drives, four mid-mount drives, and four rear drives. This configuration features an LFF storage controller in a PCIe slot in Riser 1.

Cable

Color

Notes

Slimline SAS cable (x4) 2-in-1 cable, 1

Front BP & Mid BP connection to LFF storage controller

Orange

The single-connector end of the cable connects to the J23 connector on the storage controller.

The dual end of the cable connects to the SAS B1 on the front backplane and SAS B2 connector on the middle backplane.

Slimline SAS cable (x4) 2-in-1 cable, 1

Front BP & Mid BP connection to LFF storage controller

Blue

The single-connector end of the cable connects to the J22 connector on the storage controller.

The dual end of the cable connects to the SAS A1 and SAS A2 connectors on the backplane.

HDD backplane CFG cable

Red

Connects motherboard to HDD backplane by the CFG connector

HDD Backplane PWR cables, 2

Pink

Connects motherboard to HDD backplane power connectors (PWR-1 by CPU 1 and PWR-2 by CPU 2)

Slimline SAS cable (x4) 2-in-1 cable, 1

(Miami River to Rear HDD BP)

Cyan

The single-connector end of the cable connects to the J6 connector on the storage controller.

The dual end of the cable connects to the Rear SAS 1 and Rear SAS 3 on the rear drive backplane.

Middle backplane CFG cable

Green

Connects to the mid-mount drive backplane by the CFG connector.

Middle backplane Power cable

Yellow

Connects to the mid-mount drive backplane PWR connector

Replacing the Supercap (RAID Backup)

This server supports installation of two Supercap units in 24-drive configurations. Each unit mounts to a bracket on the removable air baffle.

The Supercap provides approximately three years of backup for the disk write-back cache DRAM in the case of a sudden power loss by offloading the cache to the NAND flash.

Procedure


Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

  4. Locate the SuperCap unit(s) as shown below.

Step 2

Remove an existing Supercap:

  1. Disconnect the Supercap cable from the RAID cable.

  2. Push aside the securing tab that holds the Supercap to its bracket.

  3. Lift the Supercap free of the bracket and set it aside.

Step 3

Install a new Supercap:

  1. Orient the SuperCap unit so that the SuperCap cable and the RAID cable connectors meet.

  2. Make sure that the RAID cable does not obstruct the SuperCap when you install it, then insert the new Supercap into the mounting bracket.

    Make sure that the SuperCap unit is securely inserted into its bracket.

  3. Connect the cable from the RAID controller card to the connector on the Supercap cable.

Step 4

Replace the top cover to the server.

Step 5

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing a Boot-Optimized M.2 RAID Controller Module

The Cisco Boot-Optimized M.2 RAID Controller module includes slots for two SATA M.2 drives, plus an integrated 6-Gbps SATA RAID controller that can control the SATA M.2 drives in a RAID 1 array. Although two M.2 SSDs are recommended, the server can operate on only one.

Multiple RAID controller options are available on the server. The following topics cover the M.2 boot-optimized RAID controllers, which are available and install to different locations on the server.

Table 4. Boot Optimized M.2 RAID Controllers

PID

Description

M.2 Drive Location

UCSC-M2RR-240M8

UCS C240 M8 RAID Controller for Hot-Swap M.2 modules (by Riser 3)

Two separate modules are installable vertically in two slots, one slot on each side of PCIe Riser 3 slot.

UCSC-M2RM-M8

UCS C2X0 M8 Boot RAID Controller for Hot-Swap M.2 Modules (in MLOM slot)

One module containing two M.2 SSDs is installable in the mLOM/OCP card slot.

Note

 

This Boot RAID module prevents the server from accepting an mLOM/OCP card.

UCS-M2-HWRAID2

Cisco Boot-Optimized RAID Controller for Internal M.2 modules.

Inside the server under Riser 3


Note


An internal M.2 boot-optimized RAID controller is available and installs to different locations of the server depending on which server type you order. For more information, see Replacing the Internal Boot-Optimized M.2 RAID Controller.


Cisco Boot-Optimized M.2 RAID Controller Considerations

Review the following considerations:

  • This controller supports RAID 1 (single volume) and JBOD mode.


    Note


    Do not use the server's embedded SW MegaRAID controller to configure RAID settings when using this controller module. Instead, you can use the following interfaces:

    • Cisco IMC 4.3(6) and later

    • BIOS HII utility, BIOS 4.3(6) and later

    • Cisco UCS Manager 4.3(6) and later (UCS Manager-integrated servers)


  • The M.2 module locations differ depending on which RAID controller is installed.

    Table 5. M.2 Drive Locations and Identifiers

    PID

    Location

    Identifier

    UCSC-M2RR-240M8

    Near Riser 3 at the rear of the server

    The left drive is the first device (slot 1, drive 253) and the right drive is the second device (slot 2, drive 254).

    UCSC-M2RM-M8

    mLOM Slot

    The left drive is the first device (slot 1, drive 253) and the right drive is the second device (slot 2, drive 254).

    UCS-M2-HWRAID2

    Inside the server by Riser 3 (LFF)

    The left drive is the first device (slot 1, drive 251) and the right drive is the second device (slot 2, drive 252).

  • When using RAID, we recommend that both SATA M.2 drives be the same capacity. If different capacities are used, the smaller of the two drives is used to create the volume, and the remaining space on the larger drive is unusable. For example, pairing a 240 GB SSD with a 960 GB SSD yields a 240 GB RAID 1 volume.

    JBOD mode supports mixed capacity SATA M.2 drives.

  • Whether the M.2 SSDs are hot-pluggable depends on which controller is used.

    • The UCSC-M2RR-240M8 and UCSC-M2RM-M8 support hot-plug (OS informed removal or installation) of their M.2 SSDs.

    • The UCS-M2-HWRAID2 does not support hot-plug replacements of M.2 SSDs.

  • Monitoring of the controller and installed SATA M.2 drives can be done using Cisco IMC and Cisco UCS Manager. They can also be monitored using other utilities such as UEFI HII, PMCLI, XMLAPI, and Redfish; a Redfish monitoring sketch follows this list.

  • Updating firmware of the controller and the individual drives is done through the standard Cisco server firmware utilities (for example, the Cisco Host Upgrade Utility for standalone servers, or Cisco UCS Manager for integrated servers).

  • The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not supported.

  • If you replace a single SATA M.2 drive that was part of a RAID volume, rebuild of the volume is auto-initiated after the user accepts the prompt to import the configuration. If you replace both drives of a volume, you must create a RAID volume and manually reinstall any OS.

  • We recommend that you erase drive contents before creating volumes on used drives from another server. The configuration utility in the server BIOS includes a SATA secure-erase function.

  • The server BIOS includes a configuration utility specific to this controller that you can use to create and delete RAID volumes, view controller properties, and erase the physical drive contents. Access the utility by pressing F2 when prompted during server boot. Then navigate to Advanced > Cisco Boot Optimized M.2 RAID Controller.
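
The following minimal Python sketch illustrates one way to monitor the controller and drive health out of band through the Redfish service on the Cisco IMC, as mentioned in the monitoring item above. The host address and credentials are placeholder assumptions, and the exact resource layout can vary by Cisco IMC release; the paths shown follow the standard DMTF Redfish schema (Systems, Storage, Drives).

  # Minimal sketch: list storage controllers and drive health through the Cisco IMC
  # Redfish service. The host, credentials, and TLS handling are illustrative
  # assumptions; resource paths follow the DMTF Redfish schema and can vary by release.
  import requests

  CIMC_HOST = "https://cimc.example.com"   # hypothetical Cisco IMC address
  AUTH = ("admin", "password")             # hypothetical credentials

  def get(path):
      # Fetch a Redfish resource as JSON (verify=False is for lab use only).
      response = requests.get(CIMC_HOST + path, auth=AUTH, verify=False, timeout=30)
      response.raise_for_status()
      return response.json()

  for member in get("/redfish/v1/Systems").get("Members", []):
      system = get(member["@odata.id"])
      storage_link = system.get("Storage", {}).get("@odata.id")
      if not storage_link:
          continue
      for storage_ref in get(storage_link).get("Members", []):
          storage = get(storage_ref["@odata.id"])
          print("Controller:", storage.get("Name"),
                "Health:", storage.get("Status", {}).get("Health"))
          for drive_ref in storage.get("Drives", []):
              drive = get(drive_ref["@odata.id"])
              print("  Drive:", drive.get("Model"),
                    "CapacityBytes:", drive.get("CapacityBytes"),
                    "Health:", drive.get("Status", {}).get("Health"))

An HTTP error raised by the helper, or a Health value other than OK, indicates a condition worth investigating with the utilities listed above.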

Installing the UCSC-M2RR-240M8 Boot Optimized RAID Controller

The server offers one boot-optimized RAID controller that hosts two individual M.2 modules, with each module containing a single M.2 NVMe SSD. The controller supports RAID of boot info, OS, and logging. Boot RAID controllers do not affect data I/O to the user storage drives (SFF, NVMe, or LFF) in the front, middle (for LFF servers), or rear of the server.

Use this procedure to install the two boot-optimized M.2 RAID controller modules.

Before you begin

To install the RAID controllers, you will need to cable them to the rear SATA controller card on the motherboard. Make sure that you have the appropriate cables.

  • Cables are different lengths, and they are labelled to help identify which cable attaches to which USB connector on the rear SATA controller card.

  • Cables have different connectors. The long cable has two right angle connectors, but the short cable has one right angle connector and one straight through connector.

  • Cable connectors are USB type and have the word TOP on each connector end to help you correctly orient and connect the cables.

  • If you are replacing the Boot Optimized RAID Controller, or if you are switching between rear hot-swappable Boot RAID Controllers, you will replace the M.2 modules, cables, and the Rear SATA controller card. All these parts are included in the replacement kit.

Procedure

Step 1

Locate the two controller slots on the rear of the server.

Step 2

If you have not already done so, remove the top cover.

See Removing the Server Top Cover.

Step 3

Connect the cables to the rear of each controller.

Step 4

Install the RAID controller modules.

  1. Slide the tabs on the front into the notches in the rear wall (1).

    The tab on the top of the controller inserts into a vertical notch, and the tab on the bottom of the controller inserts into a horizontal notch.

  2. Align the captive thumbscrew at the rear of the controller with its screwhole and tighten the captive screw (2).

Step 5

Connect the cables to the rear SATA controller card.

  1. Route the longer cable, which is closest to the server sidewall, through the notch in the baffle.

  2. Gather both cables, and pass the loose ends through the cable retainer (1).

  3. Connect each end of the cable to the rear SATA controller card (2).

    The long cable connects to the far connector, and the short cable connects to the close connector.


What to do next

Choose the appropriate option:

  • Install the M.2 Modules into the E1.S cages, if needed.

  • If you are done with field-service, replace the top cover.

Removing the UCSC-M2RR-240M8 Boot Optimized RAID Controller

Use this procedure to remove one, or both, of the hot swappable, UCSC-M2RR-240M8 boot optimized RAID controllers.

Procedure

Step 1

Locate the two M.2 module slots on the rear of the server.

Step 2

If you have not already done so, remove the top cover.

See Removing the Server Top Cover.

Step 3

Disconnect the cables from the rear SATA controller card.

  1. Detach each connector from the controller card (1).

  2. Carefully slide each cable through the cable retainer (2).

Step 4

Carefully remove the cable from the notch in the baffle.

Step 5

Disconnect the cables from the rear of each E1.S cage.

Step 6

Remove each E1.S cage.

  1. Using your fingers, remove the thumbscrews at the rear of each cage (1).

  2. Slide the cages out of their slots, making sure that the bottom (horizontal) tabs are clear of their notches (2).

  3. Lift the E1.S cages out of the server.


What to do next

Choose the appropriate option:

Removing UCSC-M2RR-240M8 Boot Controller M.2 Modules

The M.2 modules for the UCSC-M2RR-240M8 Boot RAID Controller are hot swappable. Each module consists of an E1.S carrier and the M.2 NVMe SSD on it. The modules install into an E1.S cage.

The M.2 modules for boot optimized RAID controllers can be individually removed and inserted. Each M.2 module has an ejector lever, and by using the ejector lever, you can detach the module from the E1.S cage.

Removal is toolless, and each module is accessible from the rear of the server. You do not need to remove the server's top cover to install or remove the RAID controller or its M.2 modules.

Use the following procedure to replace one, or both, of the M.2 Modules. You can remove the M.2 modules regardless of whether a RAID controller is installed in the server or not. If the server is running, it is a best practice to replace only one M.2 module at a time.

Procedure

Step 1

Grasp the ejector lever at the bottom of the M.2 module, and gently swing it up (1).

Caution

 

Do not swing the ejector more than 45 degrees. If you force the ejector past its range of motion, you risk damaging the ejector.

Step 2

When the module is dislodged from the E1.S cage, grasp the top and bottom edges and gently slide the module out of the cage.

Caution

 

Make sure to hold the module level while removing it! Do not lift, twist, or rotate the module while removing it.


Installing the UCSC-M2RR-240M8 Boot Controller M.2 Modules

The UCSC-M2RR-240M8 Boot RAID Controller consists of two individual M.2 modules. Each M.2 module consists of an E1.S carrier and a single M.2 NVMe SSD on it. The M.2 module installs into an E1.S cage.

Each M.2 module is field-replaceable. The M.2 modules are keyed so that they can install only one way into the E1.S cage.

The M.2 modules can be installed into the cage whether the controller is installed in the server or has been removed. Installation is toolless, and each module is accessible from the rear of the server. You do not need to remove the server's top cover to install or remove the Boot RAID controller or its M.2 modules.

Use this procedure to install one, or both, of the Boot RAID controller's M.2 modules.

Before you begin
  • Cables are different lengths, and they are labelled to help identify which cable attaches to which USB connector on the rear SATA controller card.

  • Cables have different connectors. The long cable has two right angle connectors, but the short cable has one right angle connector and one straight through connector.

  • Cable connectors are USB type and have the word TOP on each connector end to help you correctly orient and connect the cables.

  • If you are replacing the Boot Optimized RAID Controller, or if you are switching between rear hot-swappable Boot RAID Controllers, you will replace the M.2 modules, cables, and the Rear SATA controller card. All these parts are included in the replacement kit.

Procedure

Step 1

For each module, make sure that the ejector is in the open position.

Step 2

Orient the module so that it is facing you and the ejector is on the top.

Step 3

Holding the module vertical, gently slide it into the E1.S cage until you feel some resistance (1).

This resistance is normal. It occurs when the module's rear connector contacts its socket on the cage.

Step 4

When you feel resistance, push the ejector down (the closed position) until the ejector is flush with the faceplate of the cage (2).


Installing the UCSC-M2RM-M8 Boot RAID Controller

The server offers one boot-optimized RAID controller that can be installed into the mLOM/VIC slot on the rear of the server. The controller hosts two M.2 modules, each with a single M.2 NVMe SSD. The controller supports RAID for boot info, OS, and logging. Boot RAID controllers do not affect data I/O to the user storage drives (SFF, NVMe, or LFF) in the front, middle (for LFF servers), or rear of the server.

Use this procedure to install the controller.

Before you begin

To install the UCSC-M2RM-M8 RAID controller, you will need to cable it to the rear SATA controller card on the motherboard. Make sure that you have the appropriate cables. Cable connectors are USB type and have the word TOP on each connector end to help you correctly orient and connect the cables.

To perform this procedure, you will need a #2 Phillips screwdriver.

Procedure

Step 1

Locate the mLOM slot on the rear of the server.

Step 2

If you have not already done so, remove the top cover.

See Removing the Server Top Cover.

Step 3

Remove PCIe Riser 1.

Step 4

Install the controller.

  1. Slide the card into the mLOM slot.

  2. Insert the two screws into the RAID controller bracket.

  3. Tighten the screws to secure the controller to the server.

Step 5

Connect the cables.

  1. Connect both ends of the cables to the rear SATA controller card (1).

    Make sure to connect the right-angle connectors to the interposer with the word TOP facing up.

  2. Route the cables, one at a time, through the cable stay (2).

  3. Connect both ends of the cables to the controller (3).

    Make sure to connect the straight-through connectors to the controller with the word TOP facing up.

Step 6

When completely installed and cabled, the RAID controller should appear as shown.

Step 7

Install PCIe Riser 1.

Step 8

Install the server top cover.


Removing the UCSC-M2RM-M8 Boot RAID Controller

The server offers one boot-optimized RAID controller that can install into the mLOM/VIC slot on the rear of the server.

Use this procedure to remove the UCSC-M2RM-M8 Boot RAID controller.

Before you begin

To perform this procedure, you will need a #2 Phillips screwdriver.

Procedure

Step 1

Locate the mLOM slot on the rear of the server.

Step 2

If you have not already done so, remove the top cover.

See Removing the Server Top Cover.

Step 3

Remove PCIe Riser 1.

Step 4

Disconnect the cables.

  1. Detach the cables from the rear of the E1.S cage (3).

  2. Choose the appropriate option:

    • If you will be re-installing a controller, skip to Step 5.

    • If you will be completely removing the controller, feed both cables through the cable retainer so that it no longer restrains them (2), then disconnect the cables from the rear controller card (1).

Step 5

Loosen the screws that secure the cage to the server.

  1. Using a #2 Phillips screwdriver, loosen the screws on the bracket.

  2. Using your fingers, loosen the two thumbscrews at the rear of the cage.

Step 6

Remove the cage.

  1. Remove two screws from the bracket.

  2. Slide the cage out of the server.

Step 7

If you will not be installing the controller, an mLOM, or an OCP card, install a blank filler panel.

Step 8

Install Riser 1.

Step 9

Replace the server top cover.


Installing UCSC-M2RM-M8 Boot RAID Controller M.2 Modules

The server's mLOM/VIC slot can support a hot-swappable, boot optimized M.2 RAID Controller. The controller consists of two M.2 NVMe modules, an E1.S cage, a rear storage controller card, and USB cables. Each M.2 module has one M.2 NVMe SSD that sits on an E1.S carrier. Each M.2 module is field-replaceable. The M.2 modules are keyed so that they can install only one way into the E1.S cage.

The M.2 modules can be installed when the cage is in the server or when the cage is removed. Installation is toolless, and each M.2 module is accessible from the rear of the server. You do not need to remove the server's top cover to install or remove the controller or its M.2 modules.

Use this procedure to install one, or both, of the controller's M.2 modules.

Procedure

Step 1

For each module, make sure that the ejector is in the open position.

Step 2

Orient the module so that it is facing you and the ejector is on the right side.

Step 3

Holding the module level, gently slide it into the cage until you feel some resistance.

This resistance is normal. It occurs when the module's rear connector contacts its socket on the cage.

Step 4

When you feel resistance, push the ejector horizontally to the left (the closed position) until the ejector is flush with the faceplate of the cage.


Removing UCSC-M2RM-M8 Boot RAID Controller M.2 Modules

The M.2 modules for the UCSC-M2RM-M8 Boot RAID Controller are hot swappable. Each module consists of an E1.S carrier and an M.2 NVMe SSD on it.

The M.2 modules for boot optimized RAID controllers can be individually removed and inserted. Each M.2 module has an ejector lever, and by using the ejector lever, you can detach the module from the cage.

Removal is toolless, and each module is accessible from the rear of the server. You do not need to remove the server's top cover to install or remove the M.2 modules.

Use the following procedure to replace one, or both, of the M.2 Modules. You can remove the M.2 modules regardless of whether the cage is installed in the server or not. If the server is running, it is a best practice to replace only one M.2 module at a time.

Procedure

Step 1

Grasp the left edge of the module and gently swing the ejector horizontally to the right (1).

Caution

 

Do not swing the ejector more than 45 degrees. If you force the ejector past its range of motion, you risk damaging the ejector.

Step 2

When the module is dislodged from the cage, grasp the module by the edges and slide it out of the cage (2).

Caution

 

Make sure to hold the module level while removing it! Do not lift, twist, or rotate the module while removing it.


Replacing Boot-Optimized M.2 SSDs

This topic describes how to remove and replace an M.2 SATA or NVMe SSD in an E1.S carrier for M.2 (UCS-HWRAID-M2-D). The carrier is mounted on the riser, and has one M.2 SSD socket on either side (front or back) of the vertical riser.

Population Rules For Mini-Storage M.2 SSDs

  • Both M.2 SSDs must be either SATA or NVMe; do not mix types in the carrier.

  • You can use one or two M.2 SSDs in the carrier. It is a best practice to use two SSDs.

  • M.2 socket 1 is on the front side of the carrier, which faces PCIe riser 3; M.2 socket 2 is on the back of the carrier, which faces PCIe riser 2.

Before you begin

You will need a #1 Phillips screwdriver to complete this procedure.

Procedure

Step 1

Remove the M.2 controllers from the server.

See the appropriate topic.

Step 2

Remove an M.2 SSD:

Note

 

The following procedure shows removal and installation of the M.2 SSD on one of the Riser 3 M.2 modules, but the process is the same for the M.2 SSDs on the mLOM Boot RAID controller and the internal Boot RAID controller.

  1. Use a #1 Phillips-head screwdriver to remove the single screw that secures the M.2 SSD to the carrier (1).

  2. Remove the M.2 SSD from its socket on the carrier (2).

    To get the SSD to unseat from the socket on the carrier, you might need to slightly tilt or angle the screw-end of the M.2 as you lift it off of the carrier.

Step 3

Install a new M.2 SSD:

  1. Insert the new M.2 SSD connector-end into the socket on the carrier with its label side facing up.

    To get the SSD to seat into the socket on the carrier, you might need to slightly tilt or angle the screw-end of the M.2 as you lower it onto the carrier.

  2. Press the M.2 SSD flat against the carrier.

  3. Install the single screw that secures the end of the M.2 SSD to the carrier.


Replacing the Internal Boot-Optimized M.2 RAID Controller

The server offers an internal Cisco Boot-Optimized M.2 RAID Controller module, which includes slots for two SATA M.2 drives, plus an integrated 6-Gbps SATA RAID controller that can control the SATA M.2 drives in a RAID 1 array. Although two M.2 SSDs are recommended, the server can operate on only one.

The boot-optimized RAID controller is a module that consists of a carrier and M.2 SSDs. The drives install onto the carrier, and the carrier installs into the module. Each controller supports up to two NVMe SSDs. The boot-optimized M.2 RAID controller sits on the CPU air baffle and connects to the RAID controller card on the motherboard.

1

Air Baffle Top Cover

2

Internal M.2 Boot Optimized RAID Controller

3

M.2 Carrier

4

Air Baffle

The internal Boot RAID controller is installed in different locations depending on which server model you have.

Table 6. Internal Boot Optimized M.2 RAID Controllers

Description

PID

Installation Location

UCS C240 M8 Internal M.2 Module

UCSC-M2I-240M8

For the 24 SFF or 32 E3.S models, the M.2 module is installed on the CPU air baffle and is cabled to the RAID controller module on the motherboard.

UCS C240 M8L Internal M.2 Module

UCSC-M2I-240M8L

For the 16 LFF model, the M.2 module is installed vertically on the motherboard and is cabled to the RAID controller.


Note


Two options of hot swappable, rear-loading M.2 boot-optimized RAID controller are available. Each option installs to a different location of the server depending on which server type you order. For more information, see Replacing a Boot-Optimized M.2 RAID Controller Module.


Cisco Boot-Optimized M.2 RAID Controller Considerations

Review the following considerations:

  • This controller supports RAID 1 (single volume) and JBOD mode.


    Note


    Do not use the server's embedded SW MegaRAID controller to configure RAID settings when using this controller module. Instead, you can use the following interfaces:

    • Cisco IMC 4.3(6) and later

    • BIOS HII utility, BIOS 4.3(6) and later

    • Cisco UCS Manager 4.3(6) and later (UCS Manager-integrated servers)


  • The M.2 module locations differ depending on which RAID controller is installed.

    Table 7. M.2 Drive Locations and Identifiers

    PID

    Location

    Identifier

    UCSC-M2RR-240M8

    Near Riser 3 at the rear of the server

    The left drive is the first device (slot 1, drive 253) and the right drive is the second device (slot 2, drive 254).

    UCSC-M2RM-M8

    mLOM Slot

    The left drive is the first device (slot 1, drive 253) and the right drive is the second device (slot 2, drive 254).

    UCS-M2-HWRAID2

    Inside the server by Riser 3 (LFF)

    The left drive is the first device (slot 1, drive 251) and the right drive is the second device (slot 2, drive 252).

  • When using RAID, we recommend that both SATA M.2 drives are the same capacity. If different capacities are used, the smaller capacity of the two drives is used to create a volume and the rest of the drive space is unusable.

    JBOD mode supports mixed capacity SATA M.2 drives.

  • Whether the M.2 SSDs are hot-pluggable depends on which controller is used.

    • The UCSC-M2RR-240M8 and UCSC-M2RM-M8 support hot-plug (OS informed removal or installation) of their M.2 SSDs.

    • The UCS-M2-HWRAID2 does not support hot-plug replacements of M.2 SSDs.

  • Monitoring of the controller and installed SATA M.2 drives can be done using Cisco IMC and Cisco UCS Manager. They can also be monitored using other utilities such as UEFI HII, PMCLI, XMLAPI, and Redfish.

  • Updating firmware of the controller and the individual drives is done through the standard Cisco server firmware utilities (for example, the Cisco Host Upgrade Utility for standalone servers, or Cisco UCS Manager for integrated servers).

  • The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not supported.

  • If you replace a single SATA M.2 drive that was part of a RAID volume, rebuild of the volume is auto-initiated after the user accepts the prompt to import the configuration. If you replace both drives of a volume, you must create a RAID volume and manually reinstall any OS.

  • We recommend that you erase drive contents before creating volumes on used drives from another server. The configuration utility in the server BIOS includes a SATA secure-erase function.

  • The server BIOS includes a configuration utility specific to this controller that you can use to create and delete RAID volumes, view controller properties, and erase the physical drive contents. Access the utility by pressing F2 when prompted during server boot. Then navigate to Advanced > Cisco Boot Optimized M.2 RAID Controller.

Removing Internal Boot-Optimized RAID Controller From SFF and E3S Servers

The internal boot-optimized RAID controller is a module that consists of a carrier and M.2 SSDs. Each controller supports up to two NVMe SSDs. The SSDs install onto the carrier, and the carrier sits on the interior of the server on the air baffle (duct) and connects to the motherboard.

Use this task to remove the internal boot-optimized RAID controller (internal controller) when it is installed on SFF or E3.S configurations of the server.

Procedure

Step 1

If you have not already removed the server top cover, do so now.

Go to Removing the Server Top Cover.

Step 2

Remove PCIe riser 2.

Step 3

Remove the fan components.

  1. Remove the RV baffle for the fans (1).

  2. Remove the Fan Modules (2).

Step 4

Disconnect the controller cable from the motherboard.

  1. Using a screwdriver, remove the cable detainer (1).

  2. Unplug the cable (2).

  3. Gently remove the cable from the notch in the air duct (3).

Step 5

Remove the air duct.

See Removing the Air Duct.

Step 6

Grasp and lift the RAID controller out of the air duct.

Step 7

Choose the appropriate option.

  • If you will replace the internal controller, see Installing the Internal Boot-Optimized RAID Controller in SFF and E3S Servers.

  • If you are completely removing the internal controller and will not install another, disconnect the cable from the M.2 carrier.

Step 8

Choose the appropriate option:


Installing the Internal Boot-Optimized RAID Controller in SFF and E3S Servers

The internal boot-optimized RAID controller is a module that consists of a carrier and M.2 SSDs. Each controller supports up to two NVMe SSDs. The SSDs install onto the carrier, and the carrier sits on the interior of the server on the air baffle (duct) and connects to the motherboard.

Use this task to install the internal boot-optimized RAID controller (internal controller) in SFF or E3.S configurations of the server.

Procedure

Step 1

Install the air duct.

See Installing the Air Duct.

Step 2

Lower the controller onto the air duct.

The controller sits in a rectangular area in the center of the air duct.

Step 3

Attach the cable to the motherboard.

  1. Gently route the cable through the notch in the air duct (1).

  2. Connect the cable to the motherboard connector (2).

  3. Using a #2 Phillips screwdriver, install the cable detainer to secure the cable in place (3).

Step 4

Attach the cable to the controller.

Step 5

Install the fan components.

  1. Install the Fan Modules.

  2. Install the RV baffle for the fans.

Step 6

Install PCIe riser 2.

Step 7

Replace the server's top cover.


Removing Internal Boot RAID Controller SSDs from SFF and E3S Servers

The SSDs on the internal boot controller for SFF and E3S configurations of the server sit horizontally in a module in the server's air duct. The controller consists of a carrier, which is a tray that holds the SSDs, and the SSDs themselves. A maximum of two SSDs are held in place horizontally, one on each side of the carrier, by pressure clips, and the SSDs are attached to the carrier by a connector at one end and a retaining screw at the other.

Use this task to remove one, or both, of the SSDs from the internal Boot-Optimized RAID Controller.

Before you begin

You will need a #1 Phillips screwdriver to complete this procedure.

Procedure

Step 1

Remove the server's top cover.

See Removing the Server Top Cover.

Step 2

Remove the internal Boot RAID Controller.

See Removing Internal Boot-Optimized RAID Controller From SFF and E3S Servers.

Step 3

When the internal controller is disconnected from the server, release the carrier.

  1. Press outward on the retaining clip at each end of the controller to release the carrier.

  2. While holding the clips out, grasp the edges of the carrier and lift it off of the controller.

Step 4

Release the M.2 SSD.

  1. Using a #1 Phillips screwdriver, remove the retaining screw to detach the M.2 SSD from the carrier.

    Note

     

    You might need to gently tilt or angle the SSD to disconnect it from its connector on the carrier.

  2. Lift the SSD off of the carrier.

Step 5

Repeat steps 3 and 4 as needed to replace the other M.2 SSD.


What to do next

When all SSDs have been removed, install the appropriate number of SSDs. See Installing Internal Boot RAID Controller SSDs for SFF and E3S Servers.

Installing Internal Boot RAID Controller SSDs for SFF and E3S Servers

The SSDs on the internal boot controller for SFF and E3S configurations of the server install horizontally onto each side of a carrier. When installed, the SSDs are attached to the carrier by a connector at one end and a retaining screw at the other end.

Use this procedure to install one, or both, of the M.2 SSDs onto the carrier, then install the carrier onto the controller. You will need to install the controller back into the server.

Before you begin

You will need a #2 Phillips screwdriver to complete this procedure.

Procedure

Step 1

Install the SSD.

  1. Orient the SSD with the carrier so that the retaining screwhole in the SSD lines up with the screwhole in the carrier.

  2. Lower the SSD onto the carrier and insert the screw into the SSD screwhole.

  3. Using a #2 Phillips screwdriver, tighten the screw to secure the SSD to the carrier.

Step 2

If needed, repeat step 1 to install the additional SSD.

Step 3

When the correct number of SSDs are installed, attach the carrier to the controller.

  1. Align the notches at each end of the carrier with the retaining clips on the controller.

  2. Lower the carrier onto the controller until the carrier is fully seated on the controller.

    You should feel some resistance as the retaining clips meet the controller. This resistance is normal due to some pressure required to flex the retaining clips outward enough to allow the carrier to snap into place.

Step 4

Install the internal Boot RAID Controller.

See Installing the Internal Boot-Optimized RAID Controller in SFF and E3S Servers.

Step 5

Replace the server's top cover.


Removing the Internal Boot-Optimized RAID Controller from LFF Servers

The internal boot-optimized RAID controller is a module that consists of a carrier and M.2 SSDs. Each controller supports up to two NVMe SSDs. The controller board has one M.2 socket on its top (Slot 1) and one M.2 socket on its underside (Slot 2). The SSDs install onto sockets on the carrier, and the carrier sits on the interior of the server and connects to the motherboard.

Use this task to remove the internal boot-optimized RAID controller (internal controller) when it is installed on 16 LFF drive configurations of the server.

Before you begin

You will need a #2 Phillips screwdriver to complete this task.

Procedure

Step 1

Locate the controller in its socket between PCIe Risers 2 and 3.

Figure 38. Cisco Boot-Optimized M.2 RAID Controller on Motherboard

Step 2

Remove PCIe Riser 3 and Riser 2.

Caution

 

Do not twist or angle the controller while removing it. Do not apply any sideways force to the part.

Step 3

Remove the controller.

  1. Using a #2 Phillips screwdriver, remove the cable detainer that secures the cable to the motherboard (1).

  2. Disconnect the cable from its motherboard connector (2).

  3. Using the screwdriver, remove the captive screw on the controller.

  4. Lift the controller and cable off of the motherboard.

Step 4

If you will be removing or replacing the M.2 SSDs, go to Removing Internal Boot RAID Controller SSDs from LFF Servers.


Installing the Internal Boot-Optimized RAID Controller in LFF Servers

The internal boot-optimized RAID controller is a module that consists of a carrier and M.2 SSDs. Each controller supports up to two NVMe SSDs. The controller board has one M.2 socket on its top (Slot 1) and one M.2 socket on its underside (Slot 2). The SSDs install onto sockets on the carrier, and the carrier sits on the interior of the server and connects to the motherboard.

Before you begin

You will need a #2 Phillips screwdriver to complete this task.

Procedure

Step 1

Orient the controller so that the captive screw on the controller aligns with the screwhole on the motherboard.

Step 2

Holding the controller level, lower it onto the motherboard.

Note

 

The bottom of the controller has an elongated slot that receives a catch pin on the inside of the server. When lowering the controller into place make sure that these alignment features meet.

Step 3

Connect the cable to the motherboard connector.

Step 4

Holding the card vertical, use the screwdriver to tighten the securing screw.

Caution

 

Do not twist or angle the controller while installing it. Keep the controller vertical so that the screws and alignment features meet and no sideways force is applied to the part.

Step 5

Replace PCIe Riser 2 and Riser 3.

Step 6

Replace the server's top cover.


Removing Internal Boot RAID Controller SSDs from LFF Servers

The SSDs on the internal boot controller for the LFF configuration of the server sit vertically on the controller between PCIe Riser 3 and PCIe Riser 2. The controller consists of a carrier, which is a tray that holds the SSDs, and the SSDs themselves. A maximum of two SSDs are attached vertically, one on each side of the carrier, by a connector at one end and a retaining screw at the other.

Use this task to remove one, or both, of the SSDs from the internal Boot-Optimized RAID Controller.

Before you begin

You will need a #2 Phillips screwdriver to complete this procedure.

Procedure

Step 1

Remove the server's top cover.

See Removing the Server Top Cover.

Step 2

Remove the M.2 SSDs.

  1. Using a #2 Phillips screwdriver, loosen the captive screws and remove the M.2 module.

  2. At each end of the controller board, push outward on the clip that secures the carrier.

  3. Lift both ends of the carrier to disengage it from the controller.

Step 3

Repeat step 2 to remove the additional M.2 SSD, if needed.


What to do next

Install the proper number of M.2 SSDs. See Installing the Internal Boot RAID Controller SSDs for an LFF Server.

Installing the Internal Boot RAID Controller SSDs for an LFF Server

The SSDs for the LFF server's internal Cisco Boot-Optimized RAID Controller are installed vertically on a carrier that sits on the controller. The carrier holds the M.2 SSDs, which are attached by a retaining screw at one end and a connector at the other end. When you install the SSDs, you will need to seat them into the connector, then secure them with the retaining screw.

Use this procedure to install one, or both, of the M.2 SSDs onto the carrier, then install the carrier onto the controller. You will need to install the controller back into the server.

Before you begin

You will need a #2 Phillips screwdriver to complete this procedure.

Procedure

Step 1

Install the M.2 SSD.

  1. Orient the SSD with the carrier so that the retaining screwhole in the SSD lines up with the screwhole in the carrier.

  2. Lower the SSD onto the carrier and insert the screw into the SSD screwhole.

  3. Using a #2 Phillips screwdriver, tighten the screw to secure the SSD to the carrier.

Step 2

When the correct number of SSDs are installed, attach the carrier to the controller.

  1. Align the notches at each end of the carrier with the retaining clips on the controller.

  2. Lower the carrier onto the controller until the carrier is fully seated on the controller.

    You should feel some resistance as the retaining clips meet the controller. This resistance is normal due to some pressure required to flex the retaining clips outward enough to allow the carrier to snap into place.

Step 3

Replace the top cover to the server.

Step 4

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing the Rear SATA Controller Card

The server has a rear SATA controller card that connects the hot swappable Cisco Boot-Optimized M.2 Boot RAID Controllers to the motherboard. The interposer features four separate USB-type connectors that support cabling the following rear Boot RAID controllers:

  • UCS C240 M8 Rear Hot-Plug M.2 Module for Riser 3, UCSC-M2RR-240M8

  • UCS C220/240 M8 Rear Hot-plug M.2 module (MLOM), UCSC-M2RM-M8

Use the following tasks to replace the rear SATA controller card:

Removing the Rear SATA Controller Card

The M.2 interposer (rear SATA controller) card is located near the rear of the server under PCIe Riser 1. When the card is removed, hot swappable Cisco Boot-Optimized RAID Controllers at the rear of the chassis are not supported.

Use this procedure to remove the rear SATA controller card.

Before you begin

You will need a #2 Phillips screwdriver to complete this procedure.

Procedure

Step 1

Using a #2 Phillips screwdriver, remove the thumbscrew that secures the card to the motherboard.

Step 2

Disconnect the card from the motherboard connectors.

Step 3

Lift the card off of the motherboard.


Installing the Rear SATA Controller Card

The rear SATA controller card is located at the rear of the server under Riser 1. The card is attached to the motherboard and supports cabling either of the rear hot swappable Cisco Boot-Optimized RAID controllers.

Use this procedure to install the rear SATA controller card.

Before you begin

You will need a #2 Phillips screwdriver to complete this procedure.

Procedure

Step 1

Orient the card so that the thumbscrew aligns with the screwhole on the motherboard.

In this position, the male connectors on the card align with the female connectors on the motherboard.

Step 2

Lower the card onto the motherboard.

Step 3

Attach the connectors on the card to the motherboard connectors.

Step 4

Using a #2 Phillips screwdriver, secure the card to the motherboard.


Replacing a Chassis Intrusion Switch

The chassis intrusion switch is an optional security feature that logs an event in the system event log (SEL) whenever the cover is removed from the chassis.

Procedure


Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

Step 2

Remove an existing intrusion switch:

  1. Disconnect the intrusion switch cable from the socket on the motherboard.

  2. Use a #1 Phillips-head screwdriver to loosen and remove the single screw that holds the switch mechanism to the chassis wall.

  3. Slide the switch mechanism straight up to disengage it from the clips on the chassis.

Step 3

Install a new intrusion switch:

  1. Slide the switch mechanism down into the clips on the chassis wall so that the screwholes line up.

  2. Use a #1 Phillips-head screwdriver to install the single screw that secures the switch mechanism to the chassis wall.

  3. Connect the switch cable to the socket on the motherboard.

Step 4

Replace the cover to the server.

Step 5

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 39. Replacing a Chassis Intrusion Switch

1

Intrusion switch location

-


Installing a Trusted Platform Module (TPM)

The trusted platform module (TPM) is a small circuit board that plugs into a motherboard socket and is then permanently secured with a one-way screw. The socket location is on the motherboard below PCIe riser 2.

TPM Considerations

  • This server supports TPM version 2.0 (UCSX-TPM-002D-D) as defined by the Trusted Computing Group (TCG). The TPM is also SPI-based.

  • Field replacement of a TPM is not supported; you can install a TPM after-factory only if the server does not already have a TPM installed.

  • If the TPM 2.0 becomes unresponsive, reboot the server.

Installing TPM Hardware


Note


For security purposes, the TPM is installed with a one-way screw. It cannot be removed with a standard screwdriver.
Procedure

Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing the Server Top Cover.

Step 2

Remove PCIe riser 2 from the server to provide clearance to the TPM socket on the motherboard.

Step 3

Install a TPM:

  1. Locate the TPM socket on the motherboard.

  2. Align the connector that is on the bottom of the TPM circuit board with the motherboard TPM socket. Align the screw hole on the TPM board with the screw hole that is adjacent to the TPM socket.

  3. Push down evenly on the TPM to seat it in the motherboard socket.

  4. Install the single one-way screw that secures the TPM to the motherboard.

Step 4

Replace PCIe riser 2 to the server. See Replacing a PCIe Riser.

Step 5

Replace the cover to the server.

Step 6

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Step 7

Continue with Enabling the TPM in the BIOS.


Enabling the TPM in the BIOS

After hardware installation, you must enable TPM support in the BIOS.


Note


You must set a BIOS Administrator password before performing this procedure. To set this password, press the F2 key when prompted during system boot to enter the BIOS Setup utility. Then navigate to Security > Set Administrator Password and enter the new password twice as prompted.


Procedure

Step 1

Enable TPM Support:

  1. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.

  2. Log in to the BIOS Setup Utility with your BIOS Administrator password.

  3. On the BIOS Setup Utility window, choose the Advanced tab.

  4. Choose Trusted Computing to open the TPM Security Device Configuration window.

  5. Change TPM SUPPORT to Enabled.

  6. Press F10 to save your settings and reboot the server.

Step 2

Verify that TPM support is now enabled (an optional OS-level check is sketched after these substeps):

  1. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.

  2. Log into the BIOS Setup utility with your BIOS Administrator password.

  3. Choose the Advanced tab.

  4. Choose Trusted Computing to open the TPM Security Device Configuration window.

  5. Verify that TPM SUPPORT and TPM State are Enabled.
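
As an optional OS-level cross-check after completing the verification above, you can confirm that the host operating system detects the enabled TPM. The short Python sketch below assumes a Linux host operating system; /sys/class/tpm is the standard Linux sysfs location for TPM devices, and tpm0 is the kernel's default device name.

  # Minimal sketch (assumes a Linux host OS): confirm that the enabled TPM is
  # visible to the operating system. /sys/class/tpm is the standard sysfs class
  # directory for TPM devices; tpm0 is the default kernel device name.
  import os

  tpm_class = "/sys/class/tpm"
  devices = sorted(os.listdir(tpm_class)) if os.path.isdir(tpm_class) else []
  if devices:
      print("TPM device(s) visible to the OS:", ", ".join(devices))
  else:
      print("No TPM device found; verify that TPM SUPPORT is Enabled in the BIOS.")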

Step 3

Continue with Enabling the Intel TXT Feature in the BIOS.


Enabling the Intel TXT Feature in the BIOS

Intel Trusted Execution Technology (TXT) provides greater protection for information that is used and stored on the business server. A key aspect of that protection is the provision of an isolated execution environment and associated sections of memory where operations can be conducted on sensitive data, invisibly to the rest of the system. Intel TXT provides for a sealed portion of storage where sensitive data such as encryption keys can be kept, helping to shield them from being compromised during an attack by malicious code.

Procedure

Step 1

Reboot the server and watch for the prompt to press F2.

Step 2

When prompted, press F2 to enter the BIOS Setup utility.

Step 3

Verify that the prerequisite BIOS values are enabled:

  1. Choose the Advanced tab.

  2. Choose Intel TXT(LT-SX) Configuration to open the Intel TXT(LT-SX) Hardware Support window.

  3. Verify that the following items are listed as Enabled:

    • VT-d Support (default is Enabled)

    • VT Support (default is Enabled)

    • TPM Support

    • TPM State

  4. Do one of the following:

    • If VT-d Support and VT Support are already enabled, skip to step 4.

    • If VT-d Support and VT Support are not enabled, continue with the next steps to enable them.

  5. Press Escape to return to the BIOS Setup utility Advanced tab.

  6. On the Advanced tab, choose Processor Configuration to open the Processor Configuration window.

  7. Set Intel(R) VT and Intel(R) VT-d to Enabled.

Step 4

Enable the Intel Trusted Execution Technology (TXT) feature:

  1. Return to the Intel TXT(LT-SX) Hardware Support window if you are not already there.

  2. Set TXT Support to Enabled.

Step 5

Press F10 to save your changes and exit the BIOS Setup utility.
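
After the reboot, you can optionally confirm from a Linux host that the processor virtualization and TXT capabilities are exposed to the operating system. The following is a minimal sketch; it assumes an Intel CPU on Linux, where VT-x is reported as the "vmx" flag and TXT (Safer Mode Extensions) as the "smx" flag in /proc/cpuinfo. This check supplements, but does not replace, the BIOS verification above.

#!/usr/bin/env python3
# Minimal sketch: check Intel VT and TXT related CPU flags on a Linux host.
# Assumption: Intel CPUs report VT-x as "vmx" and TXT (Safer Mode Extensions)
# as "smx" in /proc/cpuinfo.

def cpu_flags():
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags

if __name__ == "__main__":
    flags = cpu_flags()
    print("VT-x (vmx):", "present" if "vmx" in flags else "absent")
    print("TXT (smx):", "present" if "smx" in flags else "absent")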


Service Headers and Jumpers

This server includes blocks of headers and switches (SW4, CN5) that you can use for certain service and debug functions.

This section contains the following topics:

  • Using the Clear CMOS Switch (SW4, Switch 9)

  • Using the Clear BIOS Password Switch (SW4, Switch 6)

  • Using the Boot Alternate Cisco IMC Image Header (CN5, Pins 1-2)

Figure: Location of Service Header Block CN5 and DIP Switch Block SW4

1 - Location of header block CN5
2 - Boot Alternate Cisco IMC Header: CN5 pins 1 - 2
3 - Location of SW4 DIP switches
4 - Clear BIOS Password Switch (SW4 Switch 6)
5 - Clear CMOS Switch (SW4 Switch 9)

Using the Clear CMOS Switch (SW4, Switch 9)

You can use this switch to clear the server’s CMOS settings in the case of a system hang. For example, if the server hangs because of incorrect settings and does not boot, use this switch to invalidate the settings and reboot with defaults.

You will find it helpful to refer to the location of the SW4 switch block. See Service Headers and Jumpers.


Caution


Clearing the CMOS removes any customized settings and might result in data loss. Make a note of any necessary customized settings in the BIOS before you use this clear CMOS procedure.
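
One way to record the current settings is to export the BIOS attributes through the Cisco IMC Redfish interface before you clear the CMOS. The following is a minimal sketch, assuming the Redfish service is enabled on the Cisco IMC; the address and credentials shown are placeholders, and the attributes that are exposed vary by platform and firmware level.

#!/usr/bin/env python3
# Minimal sketch: save the current BIOS attributes via Redfish before
# clearing the CMOS. Assumptions: the Cisco IMC Redfish service is enabled,
# CIMC and AUTH below are placeholders, and the standard Redfish Bios
# resource exposes an "Attributes" object.
import json
import requests

CIMC = "https://10.0.0.100"   # placeholder Cisco IMC address
AUTH = ("admin", "password")  # placeholder credentials

def save_bios_attributes(outfile="bios_settings.json"):
    s = requests.Session()
    s.auth = AUTH
    s.verify = False  # many Cisco IMCs use self-signed certificates
    # Find the first system resource, then follow its Bios link.
    systems = s.get(f"{CIMC}/redfish/v1/Systems").json()["Members"]
    system = s.get(f"{CIMC}{systems[0]['@odata.id']}").json()
    bios = s.get(f"{CIMC}{system['Bios']['@odata.id']}").json()
    attributes = bios.get("Attributes", {})
    with open(outfile, "w") as f:
        json.dump(attributes, f, indent=2)
    print(f"Saved {len(attributes)} BIOS attributes to {outfile}")

if __name__ == "__main__":
    save_bios_attributes()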

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Using your finger, gently push the SW4 switch 9 to the side marked ON.

Step 5

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note

 
You must allow the entire server to reboot to main power mode to complete the reset. The state of the switch cannot be determined without the host CPU running.

Step 7

Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 8

Remove the top cover from the server.

Step 9

Using your finger, gently push switch 9 to its original position (OFF).

Note

 
If you do not reset the switch to its original position (OFF), the CMOS settings are reset to the defaults every time you power-cycle the server.

Step 10

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.


Using the Clear BIOS Password Switch (SW4, Switch 6)

You can use this switch to clear the BIOS password.

You will find it helpful to refer to the location of the SW4 switch block. See Service Headers and Jumpers.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server. Disconnect power cords from all power supplies.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Using your finger, gently slide the SW4 switch 6 to the ON position.

Step 5

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note

 
You must allow the entire server to reboot to main power mode to complete the reset. The state of the switch cannot be determined without the host CPU running.

Step 7

Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 8

Remove the top cover from the server.

Step 9

Reset the switch to its original position (OFF).

Note

 
If you do not reset the switch to its original position (OFF), the BIOS password is cleared every time you power-cycle the server.

Step 10

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.


Using the Boot Alternate Cisco IMC Image Header (CN5, Pins 1-2)

You can use this Cisco IMC debug header to force the system to boot from an alternate Cisco IMC image.

You will find it helpful to refer to the location of the CN5 header. See Service Headers and Jumpers.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server. Disconnect power cords from all power supplies.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

Remove the top cover from the server as described in Removing the Server Top Cover.

Step 4

Install a two-pin jumper across CN5 pins 1 and 2.

Step 5

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note

 

When you next log in to Cisco IMC, you see a message similar to the following:

'Boot from alternate image' debug functionality is enabled.  
CIMC will boot from alternate image on next reboot or input power cycle.

Note

 
If you do not remove the jumper, the server will boot from an alternate Cisco IMC image every time that you power cycle the server or reboot Cisco IMC.

Step 7

To remove the jumper, press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 8

Remove the top cover from the server.

Step 9

Remove the jumper that you installed.

Step 10

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.
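
To confirm remotely which Cisco IMC image the server booted, one option is to read the manager firmware version through the Redfish interface and compare it with the versions of your primary and alternate images. The following is a minimal sketch; the address, credentials, and manager resource shown are placeholders, and it relies on the standard Redfish Manager resource reporting a FirmwareVersion property.

#!/usr/bin/env python3
# Minimal sketch: read the running Cisco IMC firmware version via Redfish.
# Assumptions: Redfish is enabled on the Cisco IMC, CIMC and AUTH below are
# placeholders, and the Manager resource reports "FirmwareVersion".
import requests

CIMC = "https://10.0.0.100"   # placeholder Cisco IMC address
AUTH = ("admin", "password")  # placeholder credentials

def running_firmware_version():
    s = requests.Session()
    s.auth = AUTH
    s.verify = False  # many Cisco IMCs use self-signed certificates
    managers = s.get(f"{CIMC}/redfish/v1/Managers").json()["Members"]
    manager = s.get(f"{CIMC}{managers[0]['@odata.id']}").json()
    return manager.get("FirmwareVersion", "unknown")

if __name__ == "__main__":
    print("Running Cisco IMC firmware version:", running_firmware_version())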