Maintaining the Server

This chapter contains the following topics:

Status LEDs and Buttons

This section contains information for interpreting front, rear, and internal LED states.

Front-Panel LEDs

Figure 1. Front Panel LEDs
Table 1. Front Panel LEDs, Definition of States

LED Name

States

1

Power button/LED

  • Off—There is no AC power to the server.

  • Amber—The server is in standby power mode. Power is supplied only to the Cisco IMC and some motherboard functions.

  • Green—The server is in main power mode. Power is supplied to all server components.

2

Unit identification

  • Off—The unit identification function is not in use.

  • Blue, blinking—The unit identification function is activated.

3

System health

  • Green—The server is running in normal operating condition.

  • Green, blinking—The server is performing system initialization and memory check.

  • Amber, steady—The server is in a degraded operational state (minor fault). For example:

    • Power supply redundancy is lost.

    • CPUs are mismatched.

    • At least one CPU is faulty.

    • At least one DIMM is faulty.

    • At least one drive in a RAID configuration failed.

  • Amber, 2 blinks—There is a major fault with the system board.

  • Amber, 3 blinks—There is a major fault with the memory DIMMs.

  • Amber, 4 blinks—There is a major fault with the CPUs.

4

Power supply status

  • Green—All power supplies are operating normally.

  • Amber, steady—One or more power supplies are in a degraded operational state.

  • Amber, blinking—One or more power supplies are in a critical fault state.

5

Fan status

  • Green—All fan modules are operating properly.

  • Amber, blinking—One or more fan modules breached the non-recoverable threshold.

6

Network link activity

  • Off—The Ethernet LOM port link is idle.

  • Green—One or more Ethernet LOM ports are link-active, but there is no activity.

  • Green, blinking—One or more Ethernet LOM ports are link-active, with activity.

7

Temperature status

  • Green—The server is operating at normal temperature.

  • Amber, steady—One or more temperature sensors breached the critical threshold.

  • Amber, blinking—One or more temperature sensors breached the non-recoverable threshold.

Rear-Panel LEDs

Figure 2. Rear Panel LEDs
Table 2. Rear Panel LEDs, Definition of States

LED Name

States

1

1-Gb/10-Gb Ethernet link speed (on both LAN1 and LAN2)

  • Off—Link speed is 100 Mbps.

  • Amber—Link speed is 1 Gbps.

  • Green—Link speed is 10 Gbps.

2

1-Gb/10-Gb Ethernet link status (on both LAN1 and LAN2)

  • Off—No link is present.

  • Green—Link is active.

  • Green, blinking—Traffic is present on the active link.

3

1-Gb Ethernet dedicated management link speed

  • Off—Link speed is 10 Mbps.

  • Amber—Link speed is 100 Mbps.

  • Green—Link speed is 1 Gbps.

4

1-Gb Ethernet dedicated management link status

  • Off—No link is present.

  • Green—Link is active.

  • Green, blinking—Traffic is present on the active link.

5

Rear unit identification

  • Off—The unit identification function is not in use.

  • Blue, blinking—The unit identification function is activated.

6

Power supply status (one LED per power supply unit)

AC power supplies:

  • Off—No AC input (12 V main power off, 12 V standby power off).

  • Green, blinking—12 V main power off; 12 V standby power on.

  • Green, solid—12 V main power on; 12 V standby power on.

  • Amber, blinking—Warning threshold detected but 12 V main power on.

  • Amber, solid—Critical error detected; 12 V main power off (for example, over-current, over-voltage, or over-temperature failure).

DC power supplies:

  • Off—No DC input (12 V main power off, 12 V standby power off).

  • Green, blinking—12 V main power off; 12 V standby power on.

  • Green, solid—12 V main power on; 12 V standby power on.

  • Amber, blinking—Warning threshold detected but 12 V main power on.

  • Amber, solid—Critical error detected; 12 V main power off (for example, over-current, over-voltage, or over-temperature failure).

7

1-Gb Ethernet dedicated management port

  • Off—No link is present.

  • Green—Link is active.

  • Green, blinking—Traffic is present on the active link.

8

COM port (RJ-45 connector)

-

9

VGA display port (DB15 connector)

-

Internal Diagnostic LEDs

The server has internal fault LEDs for CPUs, DIMMs, and fan modules.

Figure 3. Internal Diagnostic LED Locations

1

Fan module fault LEDs (one behind each fan connector on the motherboard)

  • Amber—Fan has a fault or is not fully seated.

  • Green—Fan is OK.

2

CPU fault LEDs (one behind each CPU socket on the motherboard)

These LEDs operate only when the server is in standby power mode.

  • Amber—CPU has a fault.

  • Off—CPU is OK.

3

DIMM fault LEDs (one behind each DIMM socket on the motherboard)

These LEDs operate only when the server is in standby power mode.

  • Amber—DIMM has a fault.

  • Off—DIMM is OK.

Preparing For Component Installation

This section includes information and tasks that help prepare the server for component installation.

Required Equipment For Service Procedures

The following tools and equipment are used to perform the procedures in this chapter:

  • T-30 Torx driver (supplied with replacement CPUs for heatsink removal)

  • #1 flat-head screwdriver (supplied with replacement CPUs for heatsink removal)

  • #1 Phillips-head screwdriver (for M.2 SSD and intrusion switch replacement)

  • Electrostatic discharge (ESD) strap or other grounding equipment such as a grounded mat

Shutting Down and Removing Power From the Server

The server can run in either of two power modes:

  • Main power mode—Power is supplied to all server components and any operating system on your drives can run.

  • Standby power mode—Power is supplied only to the service processor and certain components. In this mode, you can safely remove power cords from the server without affecting the operating system or data.


Caution


After a server is shut down to standby power, electric current is still present in the server. To completely remove power as directed in some service procedures, you must disconnect all power cords from all power supplies in the server.


You can shut down the server by using the front-panel power button or the software management interfaces.

Shutting Down Using the Power Button

Procedure

Step 1

Check the color of the Power button/LED:

  • Amber—The server is already in standby mode, and you can safely remove power.

  • Green—The server is in main power mode and must be shut down before you can safely remove power.

Step 2

Invoke either a graceful shutdown or a hard shutdown:

Caution

 
To avoid data loss or damage to your operating system, you should always invoke a graceful shutdown of the operating system.
  • Graceful shutdown—Press and release the Power button. The operating system performs a graceful shutdown, and the server goes to standby mode, which is indicated by an amber Power button/LED.

  • Emergency shutdown—Press and hold the Power button for 4 seconds to force the main power off and immediately enter standby mode.

Step 3

If a service procedure instructs you to completely remove power from the server, disconnect all power cords from the power supplies in the server.


Shutting Down Using The Cisco IMC GUI

You must log in with user or admin privileges to perform this task.

Procedure

Step 1

In the Navigation pane, click the Server tab.

Step 2

On the Server tab, click Summary.

Step 3

In the Actions area, click Power Off Server.

Step 4

Click OK.

The operating system performs a graceful shutdown, and the server goes to standby mode, which is indicated by an amber Power button/LED.

Step 5

If a service procedure instructs you to completely remove power from the server, disconnect all power cords from the power supplies in the server.


Shutting Down Using The Cisco IMC CLI

You must log in with user or admin privileges to perform this task.

Procedure

Step 1

At the server prompt, enter:

Example:
server# scope chassis

Step 2

At the chassis prompt, enter:

Example:
server/chassis# power shutdown

The operating system performs a graceful shutdown, and the server goes to standby mode, which is indicated by an amber Power button/LED.

Step 3

If a service procedure instructs you to completely remove power from the server, disconnect all power cords from the power supplies in the server.
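
Optionally, before you disconnect the power cords, you can confirm from the Cisco IMC CLI that the server has reached standby power. The following is a hedged sketch; the available commands and output fields can vary by Cisco IMC release.

Example:
server# scope chassis
server/chassis# show detail

In the output, a Power value of off indicates that main power is off and only standby power is present.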


Removing Top Cover

Procedure


Step 1

Remove the top cover:

  1. If the cover latch is locked, slide the lock sideways to unlock it.

    When the latch is unlocked, the handle pops up so that you can grasp it.

  2. Lift the end of the latch so that it pivots vertically to 90 degrees.

  3. Simultaneously slide the cover back, lift it straight up from the server, and set it aside.

Step 2

Replace the top cover:

  1. With the latch in the fully open position, place the cover on top of the server a few inches behind the lip of the front cover panel.

  2. Slide the cover forward until the latch makes contact.

  3. Press the latch down to the closed position. The cover is pushed forward to the closed position as you push down the latch.

  4. Lock the latch by sliding the lock button sideways to the left.

    Locking the latch ensures that the latch handle does not protrude when you slide the server into the rack.

Figure 4. Removing the Top Cover

1

Cover lock

2

Cover latch handle


Serial Number Location

The serial number for the server is printed on a label on the top of the server, near the front. See Removing Top Cover.

Hot Swap vs Hot Plug

Some components can be removed and replaced without shutting down and removing power from the server. This type of replacement has two varieties: hot-swap and hot-plug.

  • Hot-swap replacement—You do not have to shut down the component in the software or operating system. This applies to the following components:

    • SAS/SATA hard drives

    • SAS/SATA solid state drives

    • Cooling fan modules

    • Power supplies (when redundant as 1+1)

  • Hot-plug replacement—You must take the component offline in the operating system before removing it. This applies to the following component:

    • NVMe PCIe solid state drives

Removing and Replacing Components


Warning


Blank faceplates and cover panels serve three important functions: they prevent exposure to hazardous voltages and currents inside the chassis; they contain electromagnetic interference (EMI) that might disrupt other equipment; and they direct the flow of cooling air through the chassis. Do not operate the system unless all cards, faceplates, front covers, and rear covers are in place.

Statement 1029



Caution


When handling server components, handle them only by carrier edges and use an electrostatic discharge (ESD) wrist-strap or other grounding device to avoid damage.

Tip


You can press the unit identification button on the front panel or rear panel to turn on a flashing, blue unit identification LED on both the front and rear panels of the server. This button allows you to locate the specific server that you are servicing when you go to the opposite side of the rack. You can also activate these LEDs remotely by using the Cisco IMC interface.
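
For example, from the Cisco IMC CLI you can turn the unit identification (locator) LED on or off. This is a minimal sketch; command syntax can vary slightly between Cisco IMC releases.

Example:
server# scope chassis
server/chassis# set locator-led on
server/chassis*# commit

Set locator-led off and commit again to turn the LED off.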

This section describes how to install and replace server components.

Serviceable Component Locations

This topic shows the locations of the field-replaceable components and service-related items. The view in the following figure shows the server with the top cover removed.

Figure 5. Cisco UCS C220 M6 Server, Full Height, Full Width PCIe Cards, Serviceable Component Locations

Note


The chassis has an internal USB drive under PCIe riser 1 (not shown). See Replacing a USB Drive.


1

Front-loading drive bays 1–10 support SAS/SATA drives.

2

M6 modular RAID card or SATA Interposer card

3

Cooling fan modules, eight.

Each fan is hot-swappable

4

SuperCap module mounting bracket

The SuperCap module (not shown) that mounts into this location provides RAID write-cache backup.

5

DIMM sockets on motherboard, 32 total, 16 per CPU

Eight DIMM sockets are placed between each CPU and the server sidewall (16 total), and 16 DIMM sockets are placed between the two CPUs.

6

Motherboard CPU socket two (CPU2)

7

M.2 module connector

Supports a boot-optimized RAID controller with connectors for up to two SATA M.2 SSDs

8

Power Supply Units (PSUs), two

9

PCIe riser slot 2

Accepts 1 full height, full width PCIe riser card.

Includes PCIe cable connectors for front-loading NVMe SSDs (x8 lane)

10

PCIe riser slot 1

Accepts 1 full height, full width (x16 lane) PCIe riser card

Note

 

The chassis supports an internal USB drive (not shown) at this PCIe slot. See Replacing a USB Drive.

11

Modular LOM (mLOM) card bay on chassis floor (x16 PCIe lane)

The mLOM card bay sits below PCIe riser slot 1.

12

Motherboard CPU socket one (CPU1)

13

Front Panel Controller board

The view in the following figure shows the individual component locations and numbering, including the HHHL PCIe slots.

Figure 6. Cisco UCS C220 M6 Server, Half Height, Half Length PCIe Slots, Serviceable Component Locations

1

Front-loading drive bays 1–10 support SAS/SATA drives.

2

M6 modular RAID card or SATA Interposer card

3

Cooling fan modules, eight.

Each fan is hot-swappable

4

SuperCap module mounting bracket

The SuperCap module (not shown) that mounts into this location provides RAID write-cache backup.

5

DIMM sockets on motherboard, 32 total, 16 per CPU

Eight DIMM sockets are placed between each CPU and the server sidewall (16 total), and 16 DIMM sockets are placed between the two CPUs.

6

Motherboard CPU socket

CPU2 is the top socket.

7

M.2 module connector

Supports a boot-optimized RAID controller with connectors for up to two SATA M.2 SSDs

8

Power Supply Units (PSUs), two

9

PCIe riser slot 3

Accepts 1 half height, half width PCIe riser card.

10

PCIe riser slot 2

Accepts 1 half height, half width PCIe riser card.

11

PCIe riser slot 1

Accepts 1 half height, half width PCIe riser card

Note

 

The chassis supports an internal USB drive (not shown) at this PCIe slot. See Replacing a USB Drive.

12

Modular LOM (mLOM) card bay on chassis floor (x16 PCIe lane)

The mLOM card bay sits below PCIe riser slot 1.

13

Motherboard CPU socket

CPU1 is the bottom socket

14

Front Panel Controller board


The Technical Specifications Sheets for all versions of this server, which include supported component part numbers, are at Cisco UCS Servers Technical Specifications Sheets (scroll down to Technical Specifications).

Replacing SAS/SATA Hard Drives or Solid-State Drives


Note


You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because they are hot-swappable. To replace an NVMe PCIe SSD drive, which must be shut down before removal, see Replacing a Front-Loading NVMe SSD.

SAS/SATA Drive Population Guidelines

The server is orderable in two different versions, each with a different front panel/drive-backplane configuration.

  • Cisco UCS C220 M6 SAS/SATA—Small form-factor (SFF) drives, with 10-drive backplane. Supports up to 10 2.5-inch SAS/SATA drives.

  • Cisco UCS C220 M6 NVMe—SFF drives, with 10-drive backplane. Supports up to 10 2.5-inch NVMe-only SSDs.

Drive bay numbering is shown in the following figures.

Figure 7. Small Form-Factor Drive Versions, Drive Bay Numbering

Observe these drive population guidelines for optimum performance:

  • When populating drives, add drives to the lowest-numbered bays first.

  • Keep an empty drive blanking tray in any unused bays to ensure proper airflow.

  • You can mix SAS/SATA hard drives and SAS/SATA SSDs in the same server. However, you cannot configure a logical volume (virtual drive) that contains a mix of hard drives and SSDs. That is, when you create a logical volume, it must contain all SAS/SATA hard drives or all SAS/SATA SSDs.

4K Sector Format SAS/SATA Drives Considerations

  • You must boot 4K sector format drives in UEFI mode, not legacy mode. UEFI mode is the system default. If the boot mode has been changed, use the following procedure to change it back to UEFI mode.

  • Do not configure 4K sector format and 512-byte sector format drives as part of the same RAID volume.

  • For operating system support on 4K sector drives, see the interoperability matrix tool for your server: Hardware and Software Interoperability Matrix Tools
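
If you are not sure whether a drive is formatted with 4K sectors, the operating system can usually report the sector sizes. The following Linux command is a hedged example only; the device name /dev/sdb is a placeholder for your drive.

Example:
lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdb

A 4K-native drive reports 4096 for both the physical and logical sector size, while a 512-byte native drive reports 512 for both.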



Setting Up UEFI Mode Booting in the BIOS Setup Utility

UEFI mode is the system default. Use this procedure if the mode has been changed and must be set back to UEFI mode.
Procedure

Step 1

Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

Step 2

Go to the Boot Options tab.

Step 3

Set Boot Mode to UEFI Mode.

Step 4

Under Boot Option Priorities, set your OS installation media (such as a virtual DVD) as your Boot Option #1.

Step 5

Press F10 to save changes and exit the BIOS setup utility. Allow the server to reboot.

Step 6

After the OS installs, verify the installation:

  1. Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

  2. Go to the Boot Options tab.

  3. Under Boot Option Priorities, verify that the OS you installed is listed as your Boot Option #1.
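
After the operating system is installed, you can also verify from within a Linux OS that it booted in UEFI mode. This check is a hedged example and is not part of the BIOS setup utility procedure.

Example:
ls /sys/firmware/efi

If the directory exists, the operating system booted in UEFI mode; if it does not exist, the operating system booted in legacy BIOS mode.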


Replacing a SAS/SATA Drive

Procedure

Step 1

Remove the drive that you are replacing or remove a blank drive tray from the bay:

  1. Press the release button on the face of the drive tray.

  2. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  3. If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift the drive out of the tray.

Step 2

Install a new drive:

  1. Place a new drive in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Figure 8. Replacing a Drive in a Drive Tray

1

Ejector lever

2

Release button

3

Drive tray screws (two on each side)

4

Drive removed from drive tray


Basic Troubleshooting: Reseating a SAS/SATA Drive

Sometimes it is possible for a false positive UBAD (Unconfigured Bad) error to occur on SAS/SATA HDDs installed in the server.

  • Only drives that are managed by the UCS MegaRAID controller are affected.

  • Drives can be affected regardless of where they are installed in the server (front-loaded, rear-loaded, and so on).

  • Both SFF and LFF form factor drives can be affected.

  • Drives installed in Cisco UCS C-Series servers of generation M3 and later can be affected.

  • Drives can be affected regardless of whether they are configured for hotplug or not.

  • The UBAD error is not always terminal, so the drive is not always defective or in need of repair or replacement. However, it is also possible that the error is terminal, and the drive will need replacement.

Before submitting the drive to the RMA process, it is a best practice to reseat the drive. If the false UBAD error exists, reseating the drive can clear it. If successful, reseating the drive reduces inconvenience, cost, and service interruption, and optimizes your server uptime.


Note


Reseat the drive only if a UBAD error occurs. Other errors are transient, and you should not attempt diagnostics and troubleshooting without the assistance of Cisco personnel. Contact Cisco TAC for assistance with other drive errors.


To reseat the drive, see Reseating a SAS/SATA Drive.

Reseating a SAS/SATA Drive

Sometimes, SAS/SATA drives can throw a false UBAD error, and reseating the drive can clear the error.

Use the following procedure to reseat the drive.


Caution


This procedure might require powering down the server. Powering down the server will cause a service interruption.


Before you begin

Before attempting this procedure, be aware of the following:

  • Before reseating the drive, it is a best practice to back up any data on it.

  • When reseating the drive, make sure to reuse the same drive bay.

    • Do not move the drive to a different slot.

    • Do not move the drive to a different server.

    • If you do not reuse the same slot, the Cisco management software (for example, Cisco IMM) might require a rescan/rediscovery of the server.

  • When reseating the drive, allow 20 seconds between removal and reinsertion.

Procedure

Step 1

Attempt a hot reseat of the affected drive(s). For a front-loading drive, see Replacing a SAS/SATA Drive.

Step 2

During boot up, watch the drive's LEDs to verify correct operation.

See Status LEDs and Buttons.

Step 3

If the error persists, cold reseat the drive, which requires a server power down. Choose the appropriate option:

  1. Use your server management software to gracefully power down the server.

    See the appropriate Cisco management software documentation.

  2. If server power down through software is not available, you can power down the server by pressing the power button.

    See Status LEDs and Buttons.

  3. Reseat the drive as documented in Step 1.

  4. When the drive is correctly reseated, restart the server, and check the drive LEDs for correct operation as documented in Step 2.

Step 4

If hot and cold reseating the drive (if necessary) does not clear the UBAD error, choose the appropriate option:

  1. Contact Cisco Systems for assistance with troubleshooting.

  2. Begin an RMA of the errored drive.


Replacing a Front-Loading NVMe SSD

This section is for replacing 2.5-inch form-factor NVMe solid-state drives (SSDs) in front-panel drive bays.

Front-Loading NVMe SSD Population Guidelines

The server supports the following front drive bay configurations with 2.5-inch NVMe SSDs:

  • UCS C220 M6 with SFF drives, a 10-drive backplane. Drive bays 1–10 support 2.5-inch NVMe-only SSDs.

Front-Loading NVME SSD Requirements and Restrictions

Observe these requirements:

  • The server must have two CPUs. PCIe riser 2 is not available in a single-CPU system. PCIe riser 2 has connectors for the cable that connects to the front-panel drive backplane.

  • PCIe cable CBL-NVME-C220FF. This is the cable that carries the PCIe signal from the front-panel drive backplane to PCIe riser 2. This cable is for all versions of this server.

  • Hot-plug support must be enabled in the system BIOS. If you ordered the system with NVMe drives, hot-plug support is enabled at the factory.

  • The NVMe-optimized, SFF 10-drive version, supports NVMe drives only. This version of the server comes with an NVMe-switch card factory-installed in the internal mRAID riser for support of NVMe drives in slots 3 - 10. The NVMe drives in slots 1 and 2 are supported by PCIe riser 2. The NVMe switch card is not orderable separately.

Observe these restrictions:

  • NVMe SFF 2.5-inch SSDs support booting only in UEFI mode. Legacy boot is not supported. For instructions on setting up UEFI boot, see 4K Sector Format SAS/SATA Drives Considerations.

  • You cannot control NVMe PCIe SSDs with a SAS RAID controller because NVMe SSDs interface with the server via the PCIe bus.

  • UEFI boot is supported in all supported operating systems. Hot-insertion and hot-removal are supported in all supported operating systems except VMware ESXi.

Enabling Hot-Plug Support in the System BIOS

Hot-plug (OS-informed hot-insertion and hot-removal) is disabled in the system BIOS by default.

  • If the system was ordered with NVMe PCIe SSDs, the setting was enabled at the factory. No action is required.

  • If you are adding NVMe PCIe SSDs after-factory, you must enable hot-plug support in the BIOS. See the following procedures.

Enabling Hot-Plug Support Using the BIOS Setup Utility
Procedure

Step 1

Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.

Step 2

Navigate to Advanced > PCI Subsystem Settings > NVMe SSD Hot-Plug Support.

Step 3

Set the value to Enabled.

Step 4

Save your changes and exit the utility.


Enabling Hot-Plug Support Using the Cisco IMC GUI
Procedure

Step 1

Use a browser to log in to the Cisco IMC GUI for the server.

Step 2

Navigate to Compute > BIOS > Advanced > PCI Configuration.

Step 3

Set NVME SSD Hot-Plug Support to Enabled.

Step 4

Save your changes.


Replacing a Front-Loading NVMe SSD

This topic describes how to replace 2.5-inch form-factor NVMe SSDs in the front-panel drive bays.


Note


OS-surprise removal is not supported. OS-informed hot-insertion and hot-removal are supported on all supported operating systems except VMware ESXi.



Note


OS-informed hot-insertion and hot-removal must be enabled in the system BIOS. See Enabling Hot-Plug Support in the System BIOS.


Procedure

Step 1

Remove an existing front-loading NVMe SSD:

  1. Shut down the NVMe SSD to initiate an OS-informed removal. Use your operating system interface to shut down the drive (a hedged Linux example follows this step), and then observe the drive-tray LED:

    • Green—The drive is in use and functioning properly. Do not remove.

    • Green, blinking—The driver is unloading following a shutdown command. Do not remove.

    • Off—The drive is not in use and can be safely removed.

  2. Press the release button on the face of the drive tray.

  3. Grasp and open the ejector lever and then pull the drive tray out of the slot.

  4. Remove the four drive tray screws that secure the SSD to the tray and then lift the SSD out of the tray.

Note

 
If this is the first time that front-loading NVMe SSDs are being installed in the server, you must install PCIe cable CBL-NVME-C220FF before installing the drive. See Cabling NVMe Drives (UCS C220 M6 10 SFF Drives Only).
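
The exact commands for the OS-informed shutdown in substep 1 vary by operating system. The following Linux sketch is illustrative only; the mount point and PCI address are placeholders that you must replace with the values for your drive.

Example:
umount /mnt/nvme-data
echo 1 > /sys/bus/pci/devices/0000:3b:00.0/remove

After the PCI device is removed and the drive-tray LED turns off, the drive can be pulled safely.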

Step 2

Install a new front-loading NVMe SSD:

  1. Place a new SSD in the empty drive tray and install the four drive-tray screws.

  2. With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.

  3. Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.

Step 3

Observe the drive-tray LED and wait until it returns to solid green before accessing the drive:

  • Off—The drive is not in use.

  • Green, blinking—The driver is initializing following hot-plug insertion.

  • Green—The drive is in use and functioning properly.

Figure 9. Replacing a Drive in a Drive Tray

1

Ejector lever

2

Release button

3

Drive tray screws (two on each side)

4

Drive removed from drive tray


Cabling NVMe Drives (UCS C220 M6 10 SFF Drives Only)

When adding or replacing NVMe front-loading drives, a "Y" cable is required to connect the drives from the backplane to the server motherboard. The "Y" cable has two connectors for the backplane side (connectors B1 and B2) and only one connector (NVMe B) for the motherboard. The connectors are keyed, and they are different at each end of the cable to prevent improper installation. The backplane connector IDs are silkscreened onto the interior of the server.

For this task, you need the NVMe "Y" cable (74-124686-01) which is available through CBL-FNVME-220M6=.

Before you begin

Specific cables are required to add or replace the front-loading NVMe drives in 10-SFF drive servers. This procedure is for Cisco UCS C220 M6 10 SFF-drive servers only.

Procedure

Step 1

Remove the server top cover.

See Removing Top Cover.

Step 2

Locate the NVMe backplane connectors.

1

Connector B1

2

Connector B2

3

Motherboard connector


Step 3

Orient the cable correctly and lower it into place, but do not attach it yet.

Step 4

Step 5

Pass the NVMe B motherboard connector through the rectangular cutout in the fan cage's sheet metal.

Note

 

To pass the NVMe B connector through the cutout, rotate the connector so that it is horizontal.

To provide enough slack in the cable, make sure that you have not attached the NVMe B1 and B2 connectors yet.

Step 6

Attach the cable.

  1. Attach the cable to the motherboard.

  2. Attach the cable to B1 and B2 connectors.

Step 7

Replace the top cover.


Replacing Fan Modules

The eight fan modules in the server are numbered as shown in Cisco UCS C220 M6 Server, Full Height, Full Width PCIe Cards, Serviceable Component Locations.


Tip


Each fan module has a fault LED next to the fan connector on the motherboard. This LED lights green when the fan is correctly seated and is operating OK. The LED lights amber when the fan has a fault or is not correctly seated.

Caution


You do not have to shut down or remove power from the server to replace fan modules because they are hot-swappable. However, to maintain proper cooling, do not operate the server for more than one minute with any fan module removed.

Procedure


Step 1

Remove an existing fan module:

  1. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  2. Remove the top cover from the server as described in Removing Top Cover.

  3. Grasp the fan module at its front and rear finger-grips. Lift straight up to disengage its connector from the motherboard.

Step 2

Install a new fan module:

  1. Set the new fan module in place. The arrow printed on the top of the fan module should point toward the rear of the server.

  2. Press down gently on the fan module to fully engage it with the connector on the motherboard.

  3. Replace the top cover to the server.

  4. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing Riser Cages

The server can support either three half-height PCIe riser cages or two full-height PCIe riser cages in the rear PCIe slots.


Note


If you need to remove the mLOM card to install riser cages, see Replacing an mLOM Card.


By using a Cisco replacement kit, you can change your server's rear PCIe riser configuration from three half-height riser cages to two full-height riser cages, or from two full-height riser cages back to three half-height riser cages. To perform this replacement, see the following topics:

Required Equipment for Replacing Riser Cages

To replace the server's three half-height (HH) rear PCIe riser cages with two full-height (FH) rear PCIe riser cages, you will need to obtain the C220 M6 GPU Riser Bracket assembly kit (UCSC-GPURKIT-C220=), which contains the following required parts:

  • FH rear wall (1)

  • Countersink Phillips flathead screws, M3 x 0.5 (4)

  • FH Riser Cage 1

  • FH Riser Cage 2


Note


To remove and install screws, you also need a #2 Phillips screwdriver, which is not provided by Cisco.


Removing Half Height Riser Cages

This task enables switching from 3 HH rear PCIe riser cages to 2 FH rear PCIe riser cages. To complete this procedure, make sure that you have the required equipment. See Required Equipment for Replacing Riser Cages.

Procedure


Step 1

Remove the server top cover to gain access to the PCIe riser cages.

See Removing Top Cover.

Step 2

Remove the three rear PCIe riser cages.

  1. Locate the riser cages.

  2. Using a #2 Phillips screwdriver or your fingers, loosen the captive thumbscrew on each riser cage.

1

Rear Riser Cage 1

2

Rear Riser Cage 2

3

Rear Riser cage 3

4

Riser Cage Thumbscrews, three total (one per riser cage)

Step 3

Using a #2 Phillips screwdriver, remove the four screws that secure the half height rear wall and mLOM bracket to the chassis sheet metal.

Note

 
One of the screws is located behind the rear wall, so it might be difficult to see when you are facing the server's rear riser slots.
Figure 10. Locations of Securing Screws, Facing Rear Riser Slots
Figure 11. Locations of Securing Screws, Alternate View

Step 4

Remove the half height rear wall and mLOM bracket.

  1. Grasp each end of the half height rear wall and remove it.

  2. Grasp each end of the mLOM bracket and remove it.

Step 5

Save the three HH riser cages and the half height rear wall.


What to do next

Install the two full-height riser cages. See Installing Full Height Riser Cages.

Installing Full Height Riser Cages

Use this task to install 2 FH rear riser cages after 3 HH rear riser cages are removed.

Before beginning this procedure, see Required Equipment for Replacing Riser Cages.

Procedure


Step 1

Install the mLOM bracket.

Step 2

Install the full-height rear wall.

  1. Orient the full-height rear wall as shown, making sure the folded metal tab is facing up.

  2. Align the screw holes in the FH rear wall with the screw holes in the server sheet metal.

  3. Holding the rear wall level, seat it onto the server sheet metal, making sure that the screw holes line up.

Step 3

Using a #2 Phillips screwdriver, install the four screws that secure the mLOM bracket and the FH rear wall to the server sheet metal.

Caution

 

Tighten the screws to 4 lbs-in of torque. Do not overtighten the screws or you risk stripping them.

Figure 12. Installing Securing Screws, Facing Rear Riser Slots
Figure 13. Installing Securing Screws, Alternative View

Step 4

Install the two full height riser cages.

  1. Align riser cages 1 and 2 over their PCIe slots, making sure that the captive thumbscrews are aligned with their screw holes.

  2. Holding each riser cage level, lower it into its PCIe slot, then tighten the thumbscrew by using a #2 Phillips screwdriver or your fingers.

    Caution

     

    Tighten the screws to 4 lbs-in of torque. Do not overtighten the screws or you risk stripping them.

Step 5

Replace the server's top cover.


Removing Full Height Riser Cages

This task enables switching from 2 FH rear PCIe cages to 3 HH rear PCIe cages. To complete this procedure, make sure that you have the required equipment. See Required Equipment for Replacing Riser Cages.

Procedure


Step 1

Remove the server top cover to gain access to the PCIe riser cages.

See Removing Top Cover.

Step 2

Remove the two rear PCIe riser cages.

  1. Locate the riser cages.

  2. Using a #2 Phillips screwdriver or your fingers, loosen the captive thumbscrew on each riser cage.

1

Rear Riser Cage 1

2

Rear Riser Cage 2

3

Rear Riser cage 3

4

Riser Cage Thumbscrews, two total (one per riser cage)

Step 3

Using a #2 Phillips screwdriver, remove the four screws that secure the full height rear wall and mLOM bracket to the chassis sheet metal.

Note

 
One of the screws is located behind the rear wall, so it might be difficult to see when you are facing the server's rear riser slots.
Figure 14. Locations of Securing Screws, Facing Rear Riser Slots
Figure 15. Locations of Securing Screws, Alternate View

Step 4

Remove the full height rear wall and mLOM bracket.

  1. Grasp each end of the full height rear wall and remove it.

    Figure 16. Removing the Full Height Rear Wall
  2. Grasp each end of the mLOM bracket and remove it.

Figure 17. Remove mLOM Bracket

Step 5

Save the two FH riser cages and the full height rear wall.


What to do next

Install the three half-height riser cages. See Installing Half Height Riser Cages.

Installing Half Height Riser Cages

Use this task to install 3 HH rear riser cages after 2 FH rear riser cages are removed.

Before beginning this procedure, see Required Equipment for Replacing Riser Cages.

Procedure


Step 1

Install the mLOM bracket.

Step 2

Install the half-height rear wall.

  1. Orient the half-height rear wall as shown, making sure the folded metal tab is facing up.

  2. Align the screw holes in the HH rear wall with the screw holes in the server sheet metal.

  3. Holding the rear wall level, seat it onto the server sheet metal, making sure that the screw holes line up.

Step 3

Using a #2 Phillips screwdriver, install the four screws that secure the mLOM bracket and the HH rear wall to the server sheet metal.

Caution

 

Tighten screws to 4 lbs-in. Do not overtighten screws or you risk stripping them!

Figure 18. Installing Securing Screws, Facing Rear Riser Slots
Figure 19. Installing Securing Screws, Alternative View

Step 4

Install the three half height riser cages.

  1. Align riser cages 1, 2, and 3 over their PCIe slots, making sure that the captive thumbscrews are aligned with their screw holes.

  2. Holding each riser cage level, lower it into its PCIe slot, then tighten the thumbscrew by using a #2 Phillips screwdriver or your fingers.

Step 5

Ensure the three riser cages are securely seated on the motherboard.

Step 6

Replace the server's top cover.


Replacing CPUs and Heatsinks

This section contains CPU configuration rules and the procedure for replacing CPUs and heatsinks:

CPU Configuration Rules

This server has two CPU sockets on the motherboard. Each CPU supports eight DIMM channels (16 DIMM slots). See DIMM Slot Numbering.

  • The server can operate with one CPU, or two identical CPUs installed.

  • The minimum configuration is that the server must have at least CPU 1 installed. Install CPU 1 first, and then CPU 2.

  • The following restrictions apply when using a single-CPU configuration:

    • Any unused CPU socket must have the protective dust cover from the factory in place.

    • The maximum number of DIMMs is 16 (only CPU 1 channels A, B, C, D, E, F, G, and H).

    • PCIe riser 2 (slot 2) is unavailable.

    • Front-loading NVMe drives are unavailable (they require PCIe riser 2).

  • One type of CPU heatsink is available for this server, the low profile heatsink (UCSC-HSLP-M6). This heatsink has four T30 Torx screws on the main heatsink, and 2 Phillips-head screws on the extended heatsink.

Tools Required For CPU Replacement

You need the following tools and equipment for this procedure:

  • T-30 Torx driver—Supplied with replacement CPU.

  • #1 flat-head screwdriver—Supplied with replacement CPU.

  • #2 Phillips screwdriver.

  • CPU assembly tool—Supplied with replacement CPU. Orderable separately as Cisco PID UCS-CPUAT=.

  • Heatsink cleaning kit—Supplied with replacement CPU. Orderable separately as Cisco PID UCSX-HSCK=.

    One cleaning kit can clean up to four CPUs.

  • Thermal interface material (TIM)—Syringe supplied with replacement CPU. Use only if you are reusing your existing heatsink (new heatsinks have a pre-applied pad of TIM). Orderable separately as Cisco PID UCS-CPU-TIM=.

    One TIM kit covers one CPU.

See also Additional CPU-Related Parts to Order with RMA Replacement CPUs.

Removing CPUs and Heat Sinks

Use the following procedure to remove an installed CPU and heatsink from the server. With this procedure, you will remove the CPU from the motherboard, disassemble individual components, then place the CPU and heatsink into the fixture that came with the CPU.

Procedure


Step 1

Detach the CPU and heatsink (the CPU assembly) from the CPU socket.

  1. Using a #2 Phillips screwdriver, loosen the two captive screws at the far end of the heatsink.

  2. Using a T30 Torx driver, loosen all the securing nuts.

  3. Push the rotating wires towards each other to move them to the unlocked position. The rotating wire locked and unlocked positions are labeled on the top of the heatsink.

    Caution

     

    Make sure that the rotating wires are as far inward as possible. When fully unlocked, the bottom of the rotating wire disengages and allows the removal of the CPU assembly. If the rotating wires are not fully in the unlocked position, you can feel resistance when attempting to remove the CPU assembly.

  4. Grasp the heatsink along the edge of the fins and lift the CPU assembly off of the motherboard.

    Caution

     
    While lifting the CPU assembly, make sure not to bend the heatsink fins. Also, if you feel any resistance when lifting the CPU assembly, verify that the rotating wires are completely in the unlocked position.

Step 2

Put the CPU assembly on a rubberized mat or other ESD-safe work surface.

When placing the CPU on the work surface, the heatsink label should be facing up. Do not rotate the CPU assembly upside down.

Ensure that the heatsink sits level on the work surface.

Step 3

Attach a CPU dust cover (UCS-CPU-M6-CVR=) to the CPU socket.

  1. Align the posts on the CPU bolstering plate with the cutouts at the corners of the dust cover.

  2. Lower the dust cover and simultaneously press down on the edges until it snaps into place over the CPU socket.

    Caution

     

    Do not press down in the center of the dust cover!



Step 4

Detach the CPU from the CPU carrier.

  1. Turn the CPU assembly upside down, so that the heatsink is pointing down.

    This step enables access to the CPU securing clips.

  2. Gently lift the TIM breaker (1 in the following illustration) in a 90-degree upward arc to partially disengage the CPU clips on this end of the CPU carrier.

  3. Lower the TIM breaker into the u-shaped securing clip to allow easier access to the CPU carrier.

    Note

     

    Make sure that the TIM breaker is completely seated in the securing clip.

  4. Gently pull up on the extended edge of the CPU carrier (1) so that you can disengage the second pair of CPU clips near both ends of the TIM breaker.

    Caution

     

    Be careful when flexing the CPU carrier! If you apply too much force you can damage the CPU carrier. Flex the carrier only enough to release the CPU clips. Make sure to watch the clips while performing this step so that you can see when they disengage from the CPU carrier.

  5. Gently pull up on the opposite edge of the CPU carrier (2) so that you can disengage the pair of CPU clips.

Step 5

When all the CPU clips are disengaged, grasp the carrier, and lift it and the CPU to detach them from the heatsink.

Note

 

If the carrier and CPU do not lift off of the heatsink, attempt to disengage the CPU clips again.

Step 6

Use the provided cleaning kit (UCSX-HSCK) to remove all of the thermal interface barrier (thermal grease) from the CPU, CPU carrier, and heatsink.

Important

 

Make sure to use only the Cisco-provided cleaning kit, and make sure that no thermal grease is left on any surfaces, corners, or crevices. The CPU, CPU carrier, and heatsink must be completely clean.

Step 7

Transfer the CPU and carrier to the fixture.

  1. Flip the CPU and carrier right-side up.

  2. Align the CPU and carrier with the fixture.

  3. Lower the CPU and CPU carrier onto the fixture.


What to do next

Choose the appropriate option:

  • If you will be installing a CPU, go to Installing the CPUs and Heatsinks.

  • If you will not be installing a CPU, verify that a CPU socket cover is installed. This option is valid only for CPU socket 2 because CPU socket 1 must always be populated in a runtime deployment.

Installing the CPUs and Heatsinks

Use this procedure to install a CPU if you have removed one, or if you are installing a CPU in an empty CPU socket. To install the CPU, you will move the CPU to the fixture, and then attach the CPU assembly to the CPU socket on the server motherboard.

Procedure


Step 1

Remove the CPU socket dust cover (UCS-CPU-M6-CVR=) on the server motherboard.

  1. Push the two vertical tabs inward to disengage the dust cover.

  2. While holding the tabs in, lift the dust cover up to remove it.

  3. Store the dust cover for future use.

    Caution

     

    Do not leave an empty CPU socket uncovered. If a CPU socket does not contain a CPU, you must install a CPU dust cover.

Step 2

Grasp the CPU fixture on the edges labeled PRESS, lift it out of the tray, and place the CPU assembly on an ESD-safe work surface.

Step 3

Apply new TIM.

Note

 
The heatsink must have new TIM on the heatsink-to-CPU surface to ensure proper cooling and performance.
  • If you are installing a new heatsink, it is shipped with a pre-applied pad of TIM. Go to step 4.

  • If you are reusing a heatsink, you must remove the old TIM from the heatsink and then apply new TIM to the CPU surface from the supplied syringe. Continue with substep 1 below.

  1. Apply the Bottle #1 cleaning solution that is included with the heatsink cleaning kit (UCSX-HSCK=), as well as with the spare CPU package, to the old TIM on the heatsink and let it soak for at least 15 seconds.

  2. Wipe all of the TIM off the heatsink using the soft cloth that is included with the heatsink cleaning kit. Be careful to avoid scratching the heatsink surface.

  3. Completely clean the bottom surface of the heatsink using Bottle #2 to prepare the heatsink for installation.

  4. Using the syringe of TIM provided with the new CPU (UCS-CPU-TIM=), apply 1.5 cubic centimeters (1.5 ml) of thermal interface material to the top of the CPU. Use the pattern shown in the following figure to ensure even coverage.

    Figure 20. Thermal Interface Material Application Pattern

    Caution

     

    Use only the correct heatsink for your CPU: heatsink UCSC-HSLP-M6=.

Step 4

Attach the heatsink to the CPU.

  1. Align the CPU and heatsink.

  2. Lower the heatsink onto the CPU.

  3. Close the rotating wires to lock the heatsink into place on the TIM grease.

Step 5

Install the CPU to the motherboard.

  1. Push the rotating wires to the unlocked position so that they do not obstruct installation.

  2. Holding the CPU assembly by the heatsink fins, align it with the posts on the socket.

  3. Lower the CPU onto the motherboard socket.

  4. Set the T30 Torx driver to 12 in-lb of torque and tighten the 4 securing nuts to secure the CPU to the motherboard (3) first. Then, set the torque driver to 6 in-lb of torque and tighten the two Phillips head screws for the extended heatsink (4).


Additional CPU-Related Parts to Order with RMA Replacement CPUs

When a return material authorization (RMA) of the CPU is done on a Cisco UCS C-Series server, additional parts might not be included with the CPU spare. The TAC engineer might need to add the additional parts to the RMA to help ensure a successful replacement.


Note


The following items apply to CPU replacement scenarios. If you are replacing a system chassis and moving existing CPUs to the new motherboard, you do not have to separate the heatsink from the CPU.


  • Scenario 1—You are reusing the existing heatsinks:

    • Heatsink cleaning kit (UCSX-HSCK=)

      One cleaning kit can clean up to four CPUs.

    • Thermal interface material (TIM) kit for M6 servers (UCS-CPU-TIM=)

      One TIM kit covers one CPU.

  • Scenario 2—You are replacing the existing heatsinks:

    • Use heatsink UCSC-HSLP-M6=

      New heatsinks have a pre-applied pad of TIM.

    • Heatsink cleaning kit (UCSX-HSCK=)

      One cleaning kit can clean up to four CPUs.

  • Scenario 3—You have a damaged CPU carrier (the plastic frame around the CPU):

    • CPU Carrier: UCS-M6-CPU-CAR=

    • #1 flat-head screwdriver (for separating the CPU from the heatsink)

    • Heatsink cleaning kit (UCSX-HSCK=)

      One cleaning kit can clean up to four CPUs.

    • Thermal interface material (TIM) kit for M6 servers (UCS-CPU-TIM=)

      One TIM kit covers one CPU.

A CPU heatsink cleaning kit is good for up to four CPU and heatsink cleanings. The cleaning kit contains two bottles of solution, one to clean the CPU and heatsink of old TIM and the other to prepare the surface of the heatsink.

New heatsink spares come with a pre-applied pad of TIM. It is important to clean any old TIM off of the CPU surface prior to installing the heatsinks. Therefore, even when you are ordering new heatsinks, you must order the heatsink cleaning kit.

Replacing Memory DIMMs


Caution


DIMMs and their sockets are fragile and must be handled with care to avoid damage during installation.



Note


DIMMs and their slots are keyed to insert only one way. Make sure to align the notch on the bottom of the DIMM with the key in the DIMM slot. If you are seating a DIMM in a slot and feel resistance, remove the DIMM and verify that its notch is properly aligned with the slot's key.



Caution


Cisco does not support third-party DIMMs. Using non-Cisco DIMMs in the server might result in system problems or damage to the motherboard.



Note


To ensure the best server performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace DIMMs.


DIMM Population Rules and Memory Performance Guidelines

The following sections provide partial information for memory usage, mixing, and population guidelines. For detailed information about memory usage and population, download the Cisco UCS C220/C240/B200 M6 Memory Guide.

DIMM Slot Numbering

The following figure shows the numbering of the DIMM slots on the motherboard.

DIMM Population Rules

Observe the following guidelines when installing or replacing DIMMs for maximum performance:

  • The Cisco UCS C220 M6 supports registered DIMMs (RDIMMs), load-reduced DIMMs (LRDIMMs), and Intel® Optane™ Persistent Memory Modules (PMEMs).

  • Each CPU supports eight memory channels, A through H.

    • CPU 1 supports channels P1 A1, P1 A2, P1 B1, P1 B2, P1 C1, P1 C2, P1 D1, P1 D2, P1 E1, P1 E2, P1 F1, P1 F2, P1 G1, P1 G2, P1 H1, and P1 H2.

    • CPU 2 supports channels P2 A1, P2 A2, P2 B1, P2 B2, P2 C1, P2 C2, P2 D1, P2 D2, P2 E1, P2 E2, P2 F1, P2 F2, P2 G1, P2 G2, P2 H1, and P2 H2.

  • When one DIMM is used, it must be populated in DIMM slot 1 (farthest away from the CPU) of a given channel.

  • When single- or dual-rank DIMMs are populated in two-DIMMs-per-channel (2DPC) configurations, always populate the higher rank DIMM first, starting from the farthest slot. For a 2DPC example, first populate dual-rank DIMMs in DIMM slot 1, and then populate single-rank DIMMs in DIMM slot 2.

  • Each channel has two DIMM sockets (for example, channel A = slots A1, A2).

  • In a single-CPU configuration, populate the channels for CPU1 only (P1 A1 through P1 H2).

  • For optimal performance, populate DIMMs in the order shown in the following table, depending on the number of CPUs and the number of DIMMs per CPU. If your server has two CPUs, balance DIMMs evenly across the two CPUs as shown in the table. DIMMs for CPU 1 and CPU 2 (when populated) must always be configured identically.


    Note


    The section below lists recommended configurations. Using 5, 7, 9, 10, or 11 DIMMs per CPU is not recommended.


  • Cisco memory from previous generation servers (DDR3 and DDR4) is not compatible with the server.

  • Memory can be configured in any number of DIMMs as pairs, although for optimal performance, see the following document: https://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/c220-c240-b200-m6-memory-guide.pdf.

  • DIMM mixing is supported, but not when Intel Optane Persistent Memory modules are installed.

    • LRDIMMs cannot be mixed with RDIMMs

    • RDIMMs can be mixed with RDIMMs, and LRDIMMs can be mixed with LRDIMMs, but mixing of non-3DS and 3DS LRDIMMs is not allowed in the same channel, across different channels, or across different sockets.

    • Allowed mixing must be in pairs of similar quantities (for example, 8x32GB and 8x64GB, 8x16GB and 8x64GB, or 8x16GB and 8x32GB). Mixing of 10x32GB and 6x64GB, for example, is not allowed.

  • DIMMs are keyed. To properly install them, make sure that the notch on the bottom of the DIMM lines up with the key in the slot.

  • Populate all slots with a DIMM or DIMM blank. A DIMM slot cannot be empty.

Memory Population Order

The Cisco UCS C220 M6 server has two memory options, DIMMs only or DIMMs plus Intel Optane PMem 200 series memory.

Memory slots are color-coded, blue and black. The color-coded channel population order is blue slots first, then black. DIMMs for CPU 1 and CPU 2 (when populated) must always be configured identically.

The following tables show the memory population order for each memory option.

Table 3. DIMMs Population Order

Number of DDR4 DIMMs per CPU (Recommended Configurations), and the slots to populate on each CPU. Populate the blue #1 slots first, then the black #2 slots; CPU 1 (P1) and CPU 2 (P2), when both are present, must be populated identically.

  • 1 DIMM: blue #1 slots (A1)

  • 2 DIMMs: blue #1 slots (A1, E1)

  • 4 DIMMs: blue #1 slots (A1, C1); (E1, G1)

  • 6 DIMMs: blue #1 slots (A1, C1); (D1, E1); (G1, H1)

  • 8 DIMMs: blue #1 slots (A1, C1); (D1, E1); (G1, H1); (B1, F1)

  • 12 DIMMs: blue #1 slots (A1, C1); (D1, E1); (G1, H1), plus black #2 slots (A2, C2); (D2, E2); (G2, H2)

  • 16 DIMMs: all blue #1 slots populated (A1 through H1), plus all black #2 slots populated (A2 through H2)

Table 4. DIMM Plus Intel Optane PMem 200 Series Memory Population Order

Total number of DIMMs per CPU, with the DDR4 DIMM slots and the Intel Optane PMem 200 series slots to populate:

  • 4 DIMMs plus 4 PMEMs per CPU: DDR4 DIMM slots A0, C0, E0, G0; Intel Optane PMem 200 series slots B0, D0, F0, H0

  • 8 DIMMs plus 1 PMEM per CPU: DDR4 DIMM slots A0, B0, C0, D0, E0, F0, G0, H0; Intel Optane PMem 200 series slot A1

  • 8 DIMMs plus 4 PMEMs per CPU: DDR4 DIMM slots A0, B0, C0, D0, E0, F0, G0, H0; Intel Optane PMem 200 series slots A1, C1, E1, G1

  • 8 DIMMs plus 8 PMEMs per CPU: DDR4 DIMM slots A0, B0, C0, D0, E0, F0, G0, H0; Intel Optane PMem 200 series slots A1, B1, C1, D1, E1, F1, G1, H1

Memory Mirroring

The CPUs in the server support memory mirroring only when an even number of channels are populated with DIMMs. If one or three channels are populated with DIMMs, memory mirroring is automatically disabled.

Memory mirroring reduces the amount of memory available by 50 percent because only one of the two populated channels provides data. The second, duplicate channel provides redundancy.
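
For example, a server populated with 16 x 32 GB DIMMs (512 GB installed) presents approximately 256 GB of usable memory to the operating system when mirroring is enabled.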

Replacing DIMMs

Identifying a Faulty DIMM

Each DIMM socket has a corresponding DIMM fault LED, directly in front of the DIMM socket. See Internal Diagnostic LEDs for the locations of these LEDs. When the server is in standby power mode, these LEDs light amber to indicate a faulty DIMM.

Procedure


Step 1

Remove an existing DIMM:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing Top Cover

  4. Remove the air baffle that covers the front ends of the DIMM slots to provide clearance.

  5. Locate the DIMM that you are removing, and then open the ejector levers at each end of its DIMM slot.

Step 2

Install a new DIMM:

Note

 
Before installing DIMMs, see the memory population rules for this server: DIMM Population Rules and Memory Performance Guidelines.
  1. Align the new DIMM with the empty slot on the motherboard. Use the alignment feature in the DIMM slot to correctly orient the DIMM.

  2. Push down evenly on the top corners of the DIMM until it is fully seated and the ejector levers on both ends lock into place.

  3. Replace the top cover to the server.

  4. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing Intel Optane DC Persistent Memory Modules

This topic contains information for replacing Intel Optane Data Center Persistent Memory modules (DCPMMs), including population rules. DCPMMs have the same form-factor as DDR4 DIMMs and they install to DIMM slots.


Caution


DCPMMs and their sockets are fragile and must be handled with care to avoid damage during installation.



Note


To ensure the best server performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace DCPMMs.


DCPMMs can be configured to operate in one of three modes:

  • Memory Mode (default): The module operates as a 100% memory module. Data is volatile, and DRAM acts as a cache for the DCPMMs. This is the factory default setting.

  • App Direct Mode: The module operates as a solid-state disk storage device. Data is saved and is non-volatile.

  • Mixed Mode (25% Memory Mode + 75% App Direct): The module operates with 25% capacity used as volatile memory and 75% capacity used as non-volatile storage.
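
After the modules are physically installed, the operating mode is provisioned from software. As one hedged example, Intel's ipmctl utility (assuming it is available in your operating system or UEFI shell) can display the current allocation and request a mode change.

Example:
ipmctl show -memoryresources
ipmctl create -goal MemoryMode=100

The first command shows how the installed capacity is currently allocated; the second requests 100 percent Memory Mode (for App Direct Mode, a goal such as PersistentMemoryType=AppDirect is used instead). The new goal takes effect at the next reboot.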

Intel Optane DC Persistent Memory Module Population Rules and Performance Guidelines

This topic describes the rules and guidelines for maximum memory performance when using Intel Optane DC persistent memory modules (DCPMMs) with DDR4 DRAM DIMMs.

DIMM Slot Numbering

The following figure shows the numbering of the DIMM slots on the server motherboard.

Configuration Rules

Observe the following rules and guidelines:

  • To use DCPMMs in this server, two CPUs must be installed.

  • When using DCPMMs in a server:

    • The DDR4 DIMMs installed in the server must all be the same size.

    • The DCPMMs installed in the server must all be the same size and must have the same SKU.

  • The DCPMMs run at 2666 MHz. If you have 2933 MHz RDIMMs or LRDIMMs in the server and you add DCPMMs, the main memory speed clocks down to 2666 MHz to match the speed of the DCPMMs.

  • Each DCPMM draws 18 W sustained, with a 20 W peak.

  • Intel Optane Persistent Memory supports the following memory modes:

    • App Direct Mode, in which the PMEM operates as a solid-state disk storage device. Data is saved and is non-volatile. Both PMEM and DIMM capacities count towards the CPU capacity limit.

    • Memory Mode, in which the PMEM operates as a 100% memory module. Data is volatile and DRAM acts as a cache for PMEMs. Only the PMEM capacity counts towards the CPU capacity limit. This is the factory default mode.

PMEM and DRAM Support

  • Both DRAMs and PMEMs are supported in the Cisco UCS C220 M6 rack server.

  • Each CPU has 16 DIMM sockets and supports the following maximum memory capacities:

    • 4 TB using 16 x 256 GB DRAMs, or

    • 6 TB using 8 x 256 GB DRAMs and 8 x 512 GB Intel® Optane™ Persistent Memory Modules (PMEMs)

  • If DRAMs/PMEMs are mixed, the following configurations are the only ones supported per CPU socket:

    • 4 DRAMs and 4 PMEMs

    • 8 DRAMs and 4 PMEMs

    • 8 DRAMs and 1 PMEM

    • 8 DRAMs and 8 PMEMs

  • Supported capacities are:

    • DRAM: 32 GB, 64 GB, 128 GB, or 256 GB

    • PMEM: 128 GB, 256 GB, or 512 GB

Installing Intel Optane DC Persistent Memory Modules


Note


DCPMM configuration is always applied to all DCPMMs in a region, including a replacement DCPMM. You cannot provision a specific replacement DCPMM on a preconfigured server.


Procedure


Step 1

Remove an existing DCPMM:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing Top Cover.

  4. Remove the air baffle that covers the front ends of the DIMM slots to provide clearance.

    Caution

     

    If you are moving DCPMMs with active data (persistent memory) from one server to another as in an RMA situation, each DCPMM must be installed to the identical position in the new server. Note the positions of each DCPMM or temporarily label them when removing them from the old server.

  5. Locate the DCPMM that you are removing, and then open the ejector levers at each end of its DIMM slot.

Step 2

Install a new DCPMM:

Note

 

Before installing DCPMMs, see the population rules for this server: Intel Optane DC Persistent Memory Module Population Rules and Performance Guidelines.

  1. Align the new DCPMM with the empty slot on the motherboard. Use the alignment feature in the DIMM slot to correctly orient the DCPMM.

  2. Push down evenly on the top corners of the DCPMM until it is fully seated and the ejector levers on both ends lock into place.

  3. Reinstall the air baffle.

  4. Replace the top cover to the server.

  5. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Step 3

Perform post-installation actions:

  • If the existing configuration is in 100% Memory mode, and the new DCPMM is also in 100% Memory mode (the factory default), the only action is to ensure that all DCPMMs are at the latest, matching firmware level.

  • If the existing configuration is fully or partly in App Direct mode and the new DCPMM is also in App Direct mode, ensure that all DCPMMs are at the latest matching firmware level, and then re-provision the DCPMMs by creating a new goal.

  • If the existing configuration and the new DCPMM are in different modes, then ensure that all DCPMMs are at the latest matching firmware level and also re-provision the DCPMMs by creating a new goal.

There are a number of tools for configuring goals, regions, and namespaces, including the server BIOS Setup Utility, Cisco IMC, Cisco UCS Manager, and OS-level utilities.
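
For example, on a Linux host with the open-source ipmctl and ndctl utilities installed (an assumption about your environment; equivalent operations are available in the BIOS Setup Utility, Cisco IMC, and Cisco UCS Manager), a re-provisioning pass might look like the following sketch:

  # Verify that all DCPMMs are discovered and report matching firmware
  ipmctl show -dimm

  # Create an App Direct goal; the goal is applied on the next reboot
  ipmctl create -goal PersistentMemoryType=AppDirect

  # After rebooting, confirm the resulting regions
  ipmctl show -region

  # Create a namespace on a region so the persistent memory is usable by the OS
  ndctl create-namespace --region=region0 --mode=fsdax

The region name (region0 here) is an example only; list the available regions with ndctl list --regions before creating namespaces.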


Server BIOS Setup Utility Menu for DCPMM


Caution


Potential data loss: If you change the mode of a currently installed DCPMM from App Direct or Mixed Mode to Memory Mode, any data in persistent memory is deleted.


DCPMMs can be configured by using the server's BIOS Setup Utility, Cisco IMC, Cisco UCS Manager, or OS-related utilities.

The server BIOS Setup Utility includes menus for DCPMMs. They can be used to view or configure DCPMM regions, goals, and namespaces, and to update DCPMM firmware.

To open the BIOS Setup Utility, press F2 when prompted during a system boot.

The DCPMM menu is on the Advanced tab of the utility:

Advanced > Intel Optane DC Persistent Memory Configuration

From this tab, you can access other menu items:

  • DIMMs: Displays the installed DCPMMs. From this page, you can update DCPMM firmware and configure other DCPMM parameters.

    • Monitor health

    • Update firmware

    • Configure security

      You can enable security mode and set a password so that the DCPMM configuration is locked. When you set a password, it applies to all installed DCPMMs. Security mode is disabled by default.

    • Configure data policy

  • Regions: Displays regions and their persistent memory types. When using App Direct mode with interleaving, the number of regions is equal to the number of CPU sockets in the server. When using App Direct mode without interleaving, the number of regions is equal to the number of DCPMMs in the server.

    From the Regions page, you can configure memory goals that tell the DCPMM how to allocate resources.

    • Create goal config

  • Namespaces: Displays namespaces and allows you to create or delete them when persistent memory is used. Namespaces can also be created when creating goals. Namespace provisioning of persistent memory applies only to the selected region.

    Existing namespace attributes such as the size cannot be modified. You can only add or delete namespaces.

  • Total capacity: Displays the total resource allocation across the server.

Updating the DCPMM Firmware Using the BIOS Setup Utility

You can update the DCPMM firmware from the BIOS Setup Utility if you know the path to the .bin files. The firmware update is applied to all installed DCPMMs.

  1. Navigate to Advanced > Intel Optane DC Persistent Memory Configuration > DIMMs > Update firmware

  2. Under File:, provide the file path to the .bin file.

  3. Select Update.
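
If OS-level tools are used instead of the BIOS Setup Utility, a comparable update can be performed with the open-source ipmctl utility. This is a sketch under the assumption that ipmctl is installed on the host; the file path is a placeholder.

  # Show installed DCPMMs and their current firmware versions
  ipmctl show -dimm

  # Stage the firmware image on all DCPMMs; it activates on the next power cycle
  ipmctl load -source /path/to/dcpmm-firmware.bin -dimm

Whichever method you use, apply the same image to all installed DCPMMs so that firmware levels match.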

Replacing a Mini-Storage Module

The mini-storage module plugs into a motherboard socket to provide additional internal storage. The module is available in two different versions:

  • SD card carrier—provides two SD card sockets.

  • M.2 SSD Carrier—provides two M.2 form-factor SSD sockets.


Note


The Cisco IMC firmware does not include an out-of-band management interface for the M.2 drives installed in the M.2 version of this mini-storage module (UCS-MSTOR-M2). The M.2 drives are not listed in Cisco IMC inventory, nor can they be managed by Cisco IMC. This is expected behavior.


Replacing a Mini-Storage Module Carrier

This topic describes how to remove and replace a mini-storage module carrier. The carrier has one media socket on its top and one socket on its underside. Use the following procedure for any type of mini-storage module carrier (SD card or M.2 SSD).

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

Remove the top cover from the server as described in Removing Top Cover.

Step 4

Remove a carrier from its socket:

  1. Locate the mini-storage module carrier in its socket just in front of power supply 1.

  2. At each end of the carrier, push outward on the clip that secures the carrier.

  3. Lift both ends of the carrier to disengage it from the socket on the motherboard.

  4. Set the carrier on an anti-static surface.

Step 5

Install a carrier to its socket:

  1. Position the carrier over the socket, with the carrier's connector facing down and at the same end as the motherboard socket. Two alignment pegs must match with two holes on the carrier.

  2. Gently push down the socket end of the carrier so that the two pegs go through the two holes on the carrier.

  3. Push down on the carrier so that the securing clips click over it at both ends.

Step 6

Replace the top cover to the server.

Step 7

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

1 Location of socket on motherboard

2 Alignment pegs

3 Securing clips


Replacing an SD Card in a Mini-Storage Carrier For SD

This topic describes how to remove and replace an SD card in a mini-storage carrier for SD (PID UCS-MSTOR-SD). The carrier has one SD card slot on its top and one slot on its underside.

Population Rules For Mini-Storage SD Cards

  • You can use one or two SD cards in the carrier.

  • Dual SD cards can be configured in a RAID 1 array through the Cisco IMC interface.

  • SD slot 1 is on the top side of the carrier; SD slot 2 is on the underside of the carrier (the same side as the carrier's motherboard connector).

Procedure


Step 1

Power off the server and then remove the mini-storage module carrier from the server as described in Replacing a Mini-Storage Module Carrier.

Step 2

Remove an SD card:

  1. Push on the top of the SD card, and then release it to allow it to spring out from the socket.

  2. Grasp and remove the SD card from the socket.

Step 3

Install a new SD card:

  1. Insert the new SD card into the socket with its label side facing up.

  2. Press on the top of the SD card until it clicks in the socket and stays in place.

Step 4

Install the mini-storage module carrier back into the server and then power it on as described in Replacing a Mini-Storage Module Carrier.


Replacing an M.2 SSD in a Mini-Storage Carrier For M.2

This topic describes how to remove and replace an M.2 SATA or M.2 NVMe SSD in a mini-storage carrier for M.2 (UCS-MSTOR-M2). The carrier has one M.2 SSD socket on its top and one socket on its underside.

Population Rules For Mini-Storage M.2 SSDs

  • Both M.2 SSDs must be either SATA or NVMe; do not mix types in the carrier.

  • You can use one or two M.2 SSDs in the carrier.

  • M.2 socket 1 is on the top side of the carrier; M.2 socket 2 is on the underside of the carrier (the same side as the carrier's motherboard connector).

Procedure


Step 1

Power off the server and then remove the mini-storage module carrier from the server as described in Replacing a Mini-Storage Module Carrier.

Step 2

Remove an M.2 SSD:

  1. Use a #1 Phillips-head screwdriver to remove the single screw that secures the M.2 SSD to the carrier.

  2. Remove the M.2 SSD from its socket on the carrier.

Step 3

Install a new M.2 SSD:

  1. Angle the M.2 SSD downward and insert the connector-end into the socket on the carrier. The M.2 SSD's label must face up.

  2. Press the M.2 SSD flat against the carrier.

  3. Install the single screw that secures the end of the M.2 SSD to the carrier.

Step 4

Install the mini-storage module carrier back into the server and then power it on as described in Replacing a Mini-Storage Module Carrier.


Replacing an Internal USB Drive

This section includes procedures for installing a USB 3.0 drive and for enabling or disabling the internal USB port.

Replacing a USB Drive


Caution


We do not recommend that you hot-swap the internal USB drive while the server is powered on because of the potential for data loss.

Procedure


Step 1

Remove an existing internal USB drive:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing Top Cover.

  4. Locate the USB socket on the motherboard, directly in front of PCIe riser 1.

  5. Grasp the USB drive and pull it horizontally to free it from the socket.

Step 2

Install a new internal USB drive:

  1. Align the USB drive with the socket.

  2. Push the USB drive horizontally to fully engage it with the socket.

  3. Replace the top cover to the server.

  4. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 21. Location of Internal USB Port

Location of horizontal USB socket on motherboard


Enabling or Disabling the Internal USB Port

The factory default is that all USB ports on the server are enabled. However, the internal USB port can be enabled or disabled in the server BIOS.

Procedure


Step 1

Enter the BIOS Setup Utility by pressing the F2 key when prompted during bootup.

Step 2

Navigate to the Advanced tab.

Step 3

On the Advanced tab, select USB Configuration.

Step 4

On the USB Configuration page, select USB Ports Configuration.

Step 5

Scroll to USB Port: Internal, press Enter, and then choose either Enabled or Disabled from the dialog box.

Step 6

Press F10 to save and exit the utility.


Replacing the RTC Battery


Warning


There is danger of explosion if the battery is replaced incorrectly. Replace the battery only with the same or equivalent type recommended by the manufacturer. Dispose of used batteries according to the manufacturer’s instructions.

[Statement 1015]



Warning


Recyclers: Do not shred the battery! Make sure you dispose of the battery according to appropriate regulations for your country or locale.


The real-time clock (RTC) battery retains system settings when the server is disconnected from power. The battery type is CR2032. Cisco supports the industry-standard CR2032 battery, which can be ordered from Cisco (PID N20-MBLIBATT) or purchased from most electronic stores.

Procedure


Step 1

Remove the RTC battery:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing Top Cover.

  4. Locate the RTC battery. The vertical socket is directly in front of PCIe riser 2.

  5. Remove the battery from the socket on the motherboard. Gently pry the securing clip on one side open to provide clearance, then lift straight up on the battery.

Step 2

Install a new RTC battery:

  1. Insert the battery into its holder and press down until it clicks in place under the clip.

    Note

     
    The flat, positive side of the battery marked “3V+” should face left as you face the server front.
  2. Replace the top cover to the server.

  3. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

1 RTC battery in vertical socket


Replacing Power Supplies

The server can have one or two 80 PLUS Titanium-rated power supplies. When two power supplies are installed, they are redundant as 1+1 by default, but they also support cold redundancy mode. Cold redundancy (CR) suspends power delivery on one or more power supplies and forces the remainder of the load to be supplied by the active PSU(s). As a result, overall power efficiency improves because the active power supplies operate closer to their most efficient load range.

The server supports up to two of the following hot-swappable power supplies:

  • 1050 W (AC), Cisco PID UCSC-PSU1-1050W

  • 1050 W V2 (DC), Cisco PID UCSC-PSUV2-1050DC

  • 1600 W (AC), Cisco PID UCSC-PSU1-1600W

  • 2300 W (AC), Cisco PID UCSC-PSU-2300W

One power supply is mandatory, and one more can be added for 1 + 1 redundancy. You cannot mix AC and DC power supplies in the same server.

This section includes procedures for replacing AC and DC power supply units. See the following topics:

Replacing AC Power Supplies


Note


If you have ordered a server with power supply redundancy (two power supplies), you do not have to power off the server to replace a power supply because they are redundant as 1+1.

Note


Do not mix power supply types or wattages in the server. Both power supplies must be identical.

Procedure


Step 1

Remove the power supply that you are replacing or a blank panel from an empty bay:

  1. Perform one of the following actions:

    • If you are replacing a power supply in a server that has only one power supply, shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

    • If you are replacing a power supply in a server that has two power supplies, you do not have to shut down the server.

  2. Remove the power cord from the power supply that you are replacing.

  3. Grasp the power supply handle while pinching the release lever toward the handle.

  4. Pull the power supply out of the bay.

Step 2

Install a new power supply:

  1. Grasp the power supply handle and insert the new power supply into the empty bay.

  2. Push the power supply into the bay until the release lever locks.

  3. Connect the power cord to the new power supply.

  4. If you shut down the server, press the Power button to boot the server to main power mode.

1 Power supply release lever

2 Power supply handle


Replacing DC Power Supplies


Note


This procedure is for replacing DC power supplies in a server that already has DC power supplies installed. If you are installing DC power supplies to the server for the first time, see Installing DC Power Supplies (First Time Installation).



Warning


A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.

Statement 1022



Warning


This product requires short-circuit (overcurrent) protection, to be provided as part of the building installation. Install only in accordance with national and local wiring regulations.

Statement 1045



Warning


Installation of the equipment must comply with local and national electrical codes.

Statement 1074



Note


If you are replacing DC power supplies in a server with power supply redundancy (two power supplies), you do not have to power off the server to replace a power supply because they are redundant as 1+1.

Note


Do not mix power supply types or wattages in the server. Both power supplies must be identical.

Procedure


Step 1

Remove the DC power supply that you are replacing or a blank panel from an empty bay:

  1. Perform one of the following actions:

    • If you are replacing a power supply in a server that has only one DC power supply, shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

    • If you are replacing a power supply in a server that has two DC power supplies, you do not have to shut down the server.

  2. Remove the power cord from the power supply that you are replacing. Lift the connector securing clip slightly and then pull the connector from the socket on the power supply.

  3. Grasp the power supply handle while pinching the release lever toward the handle.

  4. Pull the power supply out of the bay.

Step 2

Install a new DC power supply:

  1. Grasp the power supply handle and insert the new power supply into the empty bay.

  2. Push the power supply into the bay until the release lever locks.

  3. Connect the power cord to the new power supply. Press the connector into the socket until the securing clip clicks into place.

  4. If you shut down the server, press the Power button to boot the server to main power mode.

Figure 22. Replacing DC Power Supplies

1 Keyed cable connector (CAB-48DC-40A-8AWG)

2 Keyed DC input socket

3 PSU status LED


Installing DC Power Supplies (First Time Installation)


Note


This procedure is for installing DC power supplies to the server for the first time. If you are replacing DC power supplies in a server that already has DC power supplies installed, see Replacing DC Power Supplies.



Warning


A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.

Statement 1022



Warning


This product requires short-circuit (overcurrent) protection, to be provided as part of the building installation. Install only in accordance with national and local wiring regulations.

Statement 1045



Warning


Installation of the equipment must comply with local and national electrical codes.

Statement 1074



Note


Do not mix power supply types or wattages in the server. Both power supplies must be identical.

Caution


As instructed in the first step of this wiring procedure, turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.

Procedure


Step 1

Turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.

Note

 
The required DC input cable is Cisco part CAB-48DC-40A-8AWG. This 3-meter cable has a 3-pin connector on one end that is keyed to the DC input socket on the power supply. The other end of the cable has no connector so that you can wire it to your facility’s DC power.

Step 2

Wire the non-terminated end of the cable to your facility’s DC power input source.

Step 3

Connect the terminated end of the cable to the socket on the power supply. The connector is keyed so that the wires align for correct polarity and ground.

Step 4

Restore DC power from your facility’s circuit breaker.

Step 5

Press the Power button to boot the server to main power mode.

Figure 23. Installing DC Power Supplies

1 Keyed cable connector (CAB-48DC-40A-8AWG)

2 Keyed DC input socket

3 PSU status LED

Step 6

See Grounding for DC Power Supplies for information about additional chassis grounding.


Grounding for DC Power Supplies

AC power supplies have internal grounding and so no additional grounding is required when the supported AC power cords are used.

When using a DC power supply, additional grounding of the server chassis to the earth ground of the rack is available. Two screw holes for use with your dual-hole grounding lug and grounding wire are supplied on the chassis rear panel.


Note


The grounding points on the chassis are sized for 10-32 screws. You must provide your own screws, grounding lug, and grounding wire. The grounding lug must be a dual-hole lug that fits 10-32 screws. The grounding cable that you provide must be 14 AWG (2 mm), minimum 60°C wire, or as permitted by the local code.

Replacing a PCIe Card


Note


If you are installing a Cisco UCS Virtual Interface Card, there are prerequisite considerations. See Cisco Virtual Interface Card (VIC) Considerations.

Note


RAID controller cards install into a separate mRAID riser. See Replacing a SAS Storage Controller Card (RAID or HBA).

Procedure


Step 1

Remove an existing PCIe card (or a blank filler panel) from the PCIe riser:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing Top Cover.

  4. Remove any cables from the ports of the PCIe card that you are replacing.

  5. Use two hands to grasp the external riser handle and the blue area at the front of the riser.

  6. Lift straight up to disengage the riser's connectors from the two sockets on the motherboard. Set the riser upside-down on an antistatic surface.

  7. Open the hinged plastic retainer that secures the rear-panel tab of the card.

  8. Pull evenly on both ends of the PCIe card to remove it from the socket on the PCIe riser.

    If the riser has no card, remove the blanking panel from the rear opening of the riser.

Step 2

Install a new PCIe card:

  1. With the hinged tab retainer open, align the new PCIe card with the empty socket on the PCIe riser.

    PCIe riser 1/slot 1 has a long-card guide at the front end of the riser. Use the slot in the long-card guide to help support a full-length card.

  2. Push down evenly on both ends of the card until it is fully seated in the socket.

  3. Ensure that the card’s rear panel tab sits flat against the riser rear-panel opening and then close the hinged tab retainer over the card’s rear-panel tab.

  4. Position the PCIe riser over its two sockets on the motherboard and over the two chassis alignment channels.

    Figure 24. PCIe Riser Alignment Features

    1 Blue riser handle

    2 Riser alignment features in chassis

  5. Carefully push down on both ends of the PCIe riser to fully engage its two connectors with the two sockets on the motherboard.

  6. Replace the top cover to the server.

  7. Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


PCIe Slot Specifications

The server contains two PCIe slots on one riser assembly for horizontal installation of PCIe cards. Both slots support the NCSI protocol and 12V standby power.

The following tables describe the specifications for the slots.

Table 5. PCIe Riser 1/Slot 1

Slot number: 1

Electrical lane width: Gen-3 x16

Connector length: x24 connector

Maximum card length: ¾ length

Card height (rear-panel opening): Full-height

NCSI support: Yes

Micro SD card slot: One socket for a Micro SD card

Table 6. PCIe Riser 2/Slot 2

Slot number: 2

Electrical lane width: Gen-3 x16

Connector length: x24 connector

Maximum card length: ½ length

Card height (rear-panel opening): ½ height

NCSI support: Yes

PCIe cable connector for front-panel NVMe SSDs: Gen-3 x8; the other end of the cable connects to the front drive backplane to support front-panel NVMe SSDs.

Note


Riser 2/Slot 2 is not available in single-CPU configurations.


Cisco Virtual Interface Card (VIC) Considerations

This section describes VIC card support and special considerations for this server.


Note


If you use the Cisco Card NIC mode, you must also make a VIC Slot setting that matches where your VIC is installed. The options are Riser1, Riser2, and Flex-LOM. See NIC Mode and NIC Redundancy Settings for more information about NIC modes.
  • If you want to use the Cisco UCS VIC card for Cisco UCS Manager integration, see also the Cisco UCS C-Series Server Integration with Cisco UCS Manager Guides for details about supported configurations, cabling, and other requirements.

  • C-Series servers support a maximum of three (3) VIC adapters, one mLOM and two PCIe.

    Each compatible riser supports only one NCSI-capable card, whether a Cisco VIC or a third-party advanced network adapter (NVIDIA ConnectX, Intel X700/X800, and so on), installed in the higher-numbered compatible slot on that riser.

    PCIe x16 slots are recommended and preferred for high-performance networking, including Cisco VICs. If a GPU or other non-networking add-in card occupies the x16 slot on the riser, a VIC can be placed in the x8 alternate slot listed in the support table. The performance of 100-Gbps network interfaces may be degraded in an x8 slot, and this configuration is not recommended.

    If a third-party network adapter with NCSI occupies the x16 slot, a VIC in the x8 slot of that riser is not supported; the system boots with the VIC installed, but the VIC is not detected and is not functional.

    This consideration applies to Cisco 15000 Series VICs only.

Table 7. VIC Support and Considerations in This Server

  • Cisco UCS VIC 15425 (UCSC-P-V5Q50G): 2 supported (PCIe); slots that support VICs: PCIe 2, PCIe 5; primary slot for Cisco UCS Manager integration: PCIe 2; primary slot for Cisco Card NIC mode: PCIe 2; minimum Cisco IMC firmware: 4.0(1).

  • Cisco UCS VIC 15235 (UCSC-P-V5D200G): 2 supported (PCIe); slots that support VICs: PCIe 2, PCIe 5; primary slot for Cisco UCS Manager integration: PCIe 2; primary slot for Cisco Card NIC mode: PCIe 2; minimum Cisco IMC firmware: 4.0(2).

  • Cisco UCS VIC 1385 (UCSC-PCIE-C40Q-03): 2 supported (PCIe); slots that support VICs: PCIe 1, PCIe 2; primary slot for Cisco UCS Manager integration: PCIe 1; primary slot for Cisco Card NIC mode: PCIe 1; minimum Cisco IMC firmware: 3.1(1).

  • Cisco UCS VIC 1455 (UCSC-PCIE-C25Q-04): 2 supported (PCIe); slots that support VICs: PCIe 1, PCIe 2; primary slot for Cisco UCS Manager integration: PCIe 1; primary slot for Cisco Card NIC mode: PCIe 1; minimum Cisco IMC firmware: 4.0(1).

  • Cisco UCS VIC 1495 (UCSC-PCIE-C100-04): 2 supported (PCIe); slots that support VICs: PCIe 1, PCIe 2; primary slot for Cisco UCS Manager integration: PCIe 1; primary slot for Cisco Card NIC mode: PCIe 1; minimum Cisco IMC firmware: 4.0(2).

  • Cisco UCS VIC 1387 (UCSC-MLOM-C40Q-03): 1 supported (mLOM); slot that supports VICs: mLOM; primary slot for Cisco UCS Manager integration: mLOM; primary slot for Cisco Card NIC mode: mLOM; minimum Cisco IMC firmware: 3.1(1).

  • Cisco UCS VIC 1457 (UCSC-MLOM-C25Q-04): 1 supported (mLOM); slot that supports VICs: mLOM; primary slot for Cisco UCS Manager integration: mLOM; primary slot for Cisco Card NIC mode: mLOM; minimum Cisco IMC firmware: 4.0(1).

  • Cisco UCS VIC 1497 (UCSC-MLOM-C100-04): 1 supported (mLOM); slot that supports VICs: mLOM; primary slot for Cisco UCS Manager integration: mLOM; primary slot for Cisco Card NIC mode: mLOM; minimum Cisco IMC firmware: 4.0(2).

Replacing an mLOM Card

The server supports a modular LOM (mLOM) card to provide additional rear-panel connectivity. The horizontal mLOM socket is on the motherboard, under a PCIe riser.

The mLOM socket provides a Gen-3 x16 PCIe lane. The socket remains powered when the server is in 12 V standby power mode, and it supports the network communications services interface (NCSI) protocol.

The mLOM replacement procedure differs slightly depending on whether your server has 2 full-height (FH) or 3 half-height (HH) riser cages. Use the following procedures to replace an mLOM:

Removing an mLOM Card (2FH Riser Cages)

Use the following task to remove an mLOM card from a server with 2 full height riser cages.

Before you begin

You will find it helpful to have a #2 Phillips screwdriver for this task.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

If full height riser cages are present, remove them now.

See Removing Full Height Riser Cages.

Step 4

If you have not already removed the riser cage rear wall, remove it now.

  1. Using a #2 Phillips screwdriver, remove the two countersink screws.

  2. Grasp each end of the full height rear wall and remove it.

Step 5

If you have not removed the existing mLOM bracket, remove it now.

  1. Using a #2 Phillips screwdriver, remove the two countersink screws that hold the mLOM bracket in place.

  2. Lift the mLOM bracket straight up to remove it from the server.

Step 6

Remove the mLOM card.

  1. Loosen the two captive thumbscrews that secure the mLOM card to the threaded standoff on the chassis floor.

  2. Slide the mLOM card horizontally to disconnect it from the socket, then lift it out of the server.

Step 7

If you are not installing an mLOM, install the filler panel in the mLOM slot as shown below. Otherwise, go to Installing an mLOM Card (2FH Riser Cages).

  1. Lower the filler panel onto the server, aligning the screwholes.

  2. Using a #2 Phillips screwdriver, insert and tighten the screws.

    Caution

     

    Tighten screws to 4 lbs-in. Do not overtighten screws or you risk stripping them!


Installing an mLOM Card (2FH Riser Cages)

Use the following task to install an mLOM card in a server with 2 full height riser cages.

Before you begin

You will find it helpful to have a #2 Phillips screwdriver for this task.

Procedure


Step 1

Install the mLOM card into the mLOM slot.

  1. Holding the mLOM level, slide it into the slot until it seats into the PCI connector.

  2. Using a #2 Phillips screwdriver, tighten the captive screws to secure the mLOM to the server.

Step 2

Install the mLOM bracket.

  1. Lower the mLOM bracket onto the mLOM, aligning the screwholes.

  2. Using a #2 Phillips screwdriver, insert and tighten the screws.

    Caution

     

    Tighten the screws to 4 lbs-in of torque. Do not overtighten the screws or you risk stripping them.

Step 3

Install the full-height rear wall.

  1. Orient the full-height rear wall as shown, making sure the folded metal tab is facing up.

  2. Align the screw holes in the FH rear wall with the screw holes in the server sheet metal.

  3. Holding the rear wall level, seat it onto the server sheet metal, making sure that the screw holes line up.

  4. Using a #2 Phillips screwdriver, insert and tighten the countersink screws.

    Caution

     

    Tighten the screws to 4 lbs-in of torque. Do not overtighten the screws or you risk stripping them.

Step 4

Install the two full height riser cages.

  1. Align riser cages 1 and 2 over their PCIe slots, making sure that the captive thumbscrews are aligned with their screw holes.

  2. Holding each riser cage level, lower it into its PCIe slot, then tighten the thumbscrew by using a #2 Phillips screwdriver or your fingers.

    Caution

     

    Tighten the screws to 4 lbs-in of torque. Do not overtighten the screws or you risk stripping them.

Step 5

Reinstall the server.

  1. Replace the server's top cover.

  2. If needed, reinstall the server in the rack.

  3. If needed, reconnect any cables.


Removing an mLOM Card (3HH Riser Cages)

Use the following task to remove an mLOM card from a server with 3 half-height riser cages.

Before you begin

You will find it helpful to have a #2 Phillips screwdriver for this task.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

If half-height riser cages are present, remove them now.

See Removing Half Height Riser Cages.

Step 4

If you have not already removed the half-height rear wall, remove it now.

  1. Using a #2 Phillips screwdriver, remove the four countersink screws.

  2. Grasp each end of the half-height rear wall and lift it off of the server.

Step 5

If you have not removed the existing mLOM bracket, remove it now.

  1. Using a #2 Phillips screwdriver, remove the two countersink screws that hold the mLOM bracket in place.

  2. Lift the mLOM bracket to remove it from the server.

Step 6

Remove the mLOM card.

  1. Loosen the two captive thumbscrews that secure the mLOM card to the threaded standoff on the chassis floor.

  2. Slide the mLOM card horizontally to disconnect it from the socket, then lift it out of the server.

Step 7

If you are not installing an mLOM, install the filler panel in the mLOM slot as shown below. Otherwise, go to Installing an mLOM Card (3HH Riser Cages).

  1. Lower the filler panel onto the server, aligning the screwholes.

  2. Lower the half-height rear wall onto the server, aligning the screwholes.

  3. Using a #2 Phillips screwdriver, insert and tighten the four countersink screws.

    Note

     

    Two screw holes overlap on the rear wall and the filler panel. When installing the screws, make sure that the screws sink through both parts and tighten into the sheet metal.

    Caution

     

    Tighten screws to 4 lbs-in. Do not overtighten screws or you risk stripping them!


Installing an mLOM Card (3HH Riser Cages)

Use this task to install an mLOM card in a server that has 3 half-height risers.

Before you begin

You will find it helpful to have a #2 Phillips screwdriver for this task.

Procedure


Step 1

Install the mLOM card into the mLOM slot.

  1. Holding the mLOM level, slide it into the slot until it seats into the PCI connector.

  2. Using a #2 Phillips screwdriver, tighten the captive screws to secure the mLOM to the server.

Step 2

Install the mLOM bracket.

  1. Lower the mLOM bracket onto the mLOM, aligning the screw holes.

  2. Using a #2 Phillips screwdriver, insert and tighten the screws.

    Caution

     

    Tighten the screws to 4 lbs-in of torque. Do not overtighten the screws or you risk stripping them.

Step 3

Install the half-height rear wall.

  1. Orient the half-height rear wall as shown.

  2. Align the screw holes in the half-height rear wall with the screw holes in the server sheet metal.

  3. Holding the rear wall level, seat it onto the server sheet metal, making sure that the screw holes line up.

  4. Using a #2 Phillips screwdriver, insert and tighten the countersink screws.

    Caution

     

    Tighten the screws to 4 lbs-in of torque. Do not overtighten the screws or you risk stripping them.

Step 4

Install the three half-height riser cages.

  1. Align riser cages 1, 2, and 3 over their PCIe slots, making sure that the captive thumbscrews are aligned with their screw holes.

  2. Holding each riser cage level, lower it into its PCIe slot, then tighten the thumbscrew by using a #2 Phillips screwdriver or your fingers.

    Caution

     

    Tighten the screws to 4 lbs-in of torque. Do not overtighten the screws or you risk stripping them.

Step 5

Reinstall the server.

  1. Replace the server's top cover.

  2. If needed, reinstall the server in the rack.

  3. If needed, reconnect any cables.


Replacing an mRAID Riser (Riser 3)

The server has a dedicated internal riser that is used for either a Cisco modular storage controller card (RAID or HBA) or the SATA interposer card for embedded software RAID. This riser plugs into a dedicated motherboard socket and provides a horizontal socket for the installed card.

This riser can be ordered as the following options:

  • UCSC-XRAIDR-220M5—Replacement unit for this mRAID riser.

  • UCSC-MRAID1GB-KIT—Kit for first-time addition of this riser (includes RAID controller, SuperCap, and SuperCap cable).

    See also Replacing a SAS Storage Controller Card (RAID or HBA).

    See also Replacing the Supercap (RAID Backup).

  • UCSC-SATA-KIT-M5—Kit for first-time addition of this riser (includes SATA interposer for embedded software RAID and SATA cables).

    See also Replacing a SATA Interposer Card.

  • The NVMe-optimized, SFF 10-drive version, UCSC-220-M5SN, supports NVMe drives only and so does not use SAS or SATA RAID. This version of the server comes with an NVMe-switch card factory-installed in the internal mRAID riser to support NVMe drives in front-loading bays 3 - 10. The NVMe switch card is not orderable separately.

Procedure


Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing Top Cover.

Step 2

Remove the existing mRAID riser:

  1. Using both hands, grasp the external blue handle on the rear of the riser and the blue finger-grip on the front end of the riser.

  2. Lift the riser straight up to disengage it from the motherboard socket.

  3. Set the riser upside down on an antistatic surface.

  4. Remove any card from the riser. Open the blue card-ejector lever that is on the edge of the card and then pull the card straight out from its socket on the riser.

Step 3

Install a new mRAID riser:

  1. Install your card into the new riser. Close the card-ejector lever on the card to lock it into the riser.

  2. Connect cables to the installed card.

  3. Align the riser with the socket on the motherboard. At the same time, align the two slots on the back side of the bracket with the two pegs on the inner chassis wall.

  4. Push down gently to engage the riser with the motherboard socket. The metal riser bracket must also engage the two pegs that secure it to the chassis wall.

Step 4

Replace the top cover to the server.

Step 5

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 26. mRAID Riser (Internal Riser 3) Location

1 External blue handle

2 Two pegs on inner chassis wall

3 Card-ejector lever


Replacing a SAS Storage Controller Card (RAID or HBA)

For hardware-based storage control, the server can use a Cisco modular SAS RAID controller or SAS HBA that plugs into a dedicated, vertical socket on the motherboard.

Depending on the server configuration, the server supports up to two RAID cards. RAID cards are numbered 1 and 2, and they are located behind the front-loading drives as shown.

Storage Controller Card Firmware Compatibility

Firmware on the storage controller (RAID or HBA) must be verified for compatibility with the current Cisco IMC and BIOS versions that are installed on the server. If not compatible, upgrade or downgrade the storage controller firmware using the Cisco Host Upgrade Utility (HUU) for your firmware release to bring it to a compatible level.

See the HUU guide for your Cisco IMC release for instructions on downloading and using the utility to bring server components to compatible levels: HUU Guides.


Note


For servers running in standalone mode only: After you replace controller hardware, you must run the Cisco Host Upgrade Utility (HUU) to update the controller firmware, even if the firmware Current Version is the same as the Update Version. This is necessary to program the controller's suboem-id to the correct value for the server SKU. If you do not do this, drive enumeration might not display correctly in the software. This issue does not affect servers controlled in UCSM mode.


Replacing a SAS Storage Controller Card (RAID or HBA)

Before you begin

You will find it helpful to know the locations of RAID Card 1 and RAID Card 2. See Replacing a SAS Storage Controller Card (RAID or HBA).

Procedure


Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing Top Cover.

Step 2

Remove any existing card from the riser:

Note

 

The chassis includes a plastic mounting bracket that the card must be attached to before installation. During replacement, you must remove the old card from the bracket and then install the new card to the bracket before installing this assembly to the server.

  1. Disconnect SAS/SATA cables and any Supercap cable from the existing card.

  2. Lift up on the card's blue ejector lever to unseat it from the motherboard socket.

  3. Lift straight up on the card's carrier frame to disengage the card from the motherboard socket and to disengage the frame from two pegs on the chassis wall.

  4. Remove the existing card from its plastic carrier bracket. Carefully push the retainer tabs aside and then lift the card from the bracket.

Step 3

Install a new storage controller card to the riser:

  1. Install the new card to the plastic carrier bracket. Make sure that the retainer tabs close over the edges of the card.

  2. Position the assembly over the chassis and align the card edge with the motherboard socket. At the same time, align the two slots on the back of the carrier bracket with the pegs on the chassis inner wall.

  3. Push on both corners of the card to seat its connector in the riser socket. At the same time, ensure that the slots on the carrier frame engage with the pegs on the inner chassis wall.

  4. Fully close the blue ejector lever on the card to lock the card into the socket.

  5. Connect SAS/SATA cables and any Supercap cable to the new card.

Step 4

Replace the top cover to the server.

Step 5

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

If this is a first-time installation, see Storage Controller and Backplane Connectors for cabling instructions.

Step 6

If your server is running in standalone mode, use the Cisco UCS Host Upgrade Utility to update the controller firmware and program the correct suboem-id for the controller.

Note

 

For servers running in standalone mode only: After you replace controller hardware, you must run the Cisco UCS Host Upgrade Utility (HUU) to update the controller firmware, even if the firmware Current Version is the same as the Update Version. This is necessary to program the controller's suboem-id to the correct value for the server SKU. If you do not do this, drive enumeration might not display correctly in the software. This issue does not affect servers controlled in UCSM mode.

See the HUU guide for your Cisco IMC release for instructions on downloading and using the utility to bring server components to compatible levels: HUU Guides.


Replacing a Boot-Optimized M.2 RAID Controller Module

The Cisco Boot-Optimized M.2 RAID Controller module (UCSC-HWRAID) connects to the mini-storage module socket on the motherboard. It holds up to two SATA M.2 drives and includes an integrated 6-Gbps SATA RAID controller that can control the SATA M.2 drives in a RAID 1 array.

The server supports the following SATA M.2 drives:

  • 240 GB M.2 SATA SSD (UCSC-M2-240GB)

  • 960 GB M.2 SATA SSD (UCSC-M2-960GB)

Cisco Boot-Optimized M.2 RAID Controller Considerations

Review the following considerations:


Note


The Cisco Boot-Optimized M.2 RAID Controller is not supported when the server is used as a compute-only node in Cisco HyperFlex configurations.


  • The minimum versions of Cisco IMC and Cisco UCS Manager that support this controller are 4.0(4) and later.

  • This controller supports RAID 1 (single volume) and JBOD mode.


    Note


    Do not use the server's embedded SW MegaRAID controller to configure RAID settings when using this controller module. Instead, you can use the following interfaces:

    • Cisco IMC 4.2(1) and later

    • BIOS HII utility, BIOS 4.2(1) and later

    • Cisco UCS Manager 4.2(1) and later (UCS Manager-integrated servers)


    The name of the controller in the software is MSTOR-RAID.

  • The controller supports only 240 GB and 960 GB M.2 SSDs. The M.2 SATA SSDs must be identical. You cannot mix M.2 drives with different capacities. For example, one 240 GB M.2 and one 960 GB M.2 is an unsupported configuration.

  • The Boot-Optimized RAID controller supports VMWare, Windows, and Linux Operating Systems only.

  • A SATA M.2 drive in slot 1 (the top) is the first SATA device; a SATA M.2 drive in slot 2 (the underside) is the second SATA device.

    • A drive in Slot 1 is mapped as drive 253; a drive in slot 2 is mapped as drive 254.

  • It is recommended that M.2 SATA SSDs be used as boot-only devices.

  • When using RAID, we recommend that both SATA M.2 drives are the same capacity. If different capacities are used, the smaller capacity of the two drives is used to create a volume and the rest of the drive space is unusable.

    JBOD mode supports mixed capacity SATA M.2 drives.

  • Hot-plug replacement is not supported. The server must be powered off.

  • Monitoring of the controller and installed SATA M.2 drives can be done using Cisco IMC and Cisco UCS Manager. They can also be monitored using other utilities such as UEFI HII, PMCLI, XMLAPI, and Redfish.

  • Cisco IMC and Cisco UCS Manager support configuring volumes as well as monitoring the controller and the installed SATA M.2 drives.

  • Updating firmware of the controller and the individual drives: For standalone servers, use the Cisco Host Upgrade Utility (HUU). For servers integrated with Cisco UCS Manager, use the Cisco UCS Manager firmware management capabilities.

  • The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not supported.

  • If you replace a single SATA M.2 drive that was part of a RAID volume, rebuild of the volume is auto-initiated after the user accepts the prompt to import the configuration. If you replace both drives of a volume, you must create a RAID volume and manually reinstall any OS.

  • We recommend that you erase drive contents before creating volumes on used drives from another server. The configuration utility in the server BIOS includes a SATA secure-erase function.

  • The server BIOS includes a configuration utility specific to this controller that you can use to create and delete RAID volumes, view controller properties, and erase the physical drive contents. Access the utility by pressing F2 when prompted during server boot. Then navigate to Advanced > Cisco Boot Optimized M.2 RAID Controller.

  • The boot-optimized RAID controller is not supported when the server is used as a compute node in HyperFlex configurations.

Replacing a Cisco Boot-Optimized M.2 RAID Controller

This topic describes how to remove and replace a Cisco Boot-Optimized M.2 RAID Controller. The controller board has one M.2 socket on its top (Slot 1) and one M.2 socket on its underside (Slot 2).

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

Remove the top cover from the server as described in Removing Top Cover.

Step 4

Grasp and remove the air baffle located between CPU 2 and PCIe Riser 3.

Step 5

Remove a controller from its motherboard socket:

  1. Locate the controller in its socket just behind CPU 2.

  2. At each end of the controller board, push outward on the clip that secures the carrier.

  3. Lift both ends of the controller to disengage it from the socket on the motherboard.

  4. Set the carrier on an anti-static surface.

Step 6

If you are transferring SATA M.2 drives from the old controller to the replacement controller, do that before installing the replacement controller:

Note

 

Any previously configured volume and data on the drives are preserved when the M.2 drives are transferred to the new controller. The system will boot the existing OS that is installed on the drives.

  1. Use a #1 Phillips-head screwdriver to remove the single screw that secures the M.2 drive to the carrier.

  2. Lift the M.2 drive from its socket on the carrier.

  3. Position the M.2 drive over the socket on the replacement controller board.

  4. Angle the M.2 drive downward and insert the connector-end into the socket on the carrier. The M.2 drive's label must face up.

  5. Press the M.2 drive flat against the carrier.

  6. Install the single screw that secures the end of the M.2 SSD to the carrier.

  7. Turn the controller over and install the second M.2 drive.

Figure 27. Cisco Boot-Optimized M.2 RAID Controller, Showing M.2 Drive Installation

Step 7

Install the controller to its socket on the motherboard:

  1. Position the controller over the socket, with the controller's connector facing down and at the same end as the motherboard socket. Two alignment pegs must match with two holes on the controller.

  2. Gently push down the socket end of the controller so that the two pegs go through the two holes on the controller.

  3. Push down on the controller so that the securing clips click over it at both ends.

Step 8

Replace the top cover to the server.

Step 9

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing the Supercap (RAID Backup)

This server supports installation of one Supercap unit (UCS-SCAP-M6). The unit mounts to a bracket that is in the middle of the row of cooling fan modules.

Procedure


Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing Top Cover.

Step 2

Remove an existing Supercap:

  1. Locate the Supercap modules near the RAID card by the front-loading drives.

  2. Disconnect the Supercap cable connector from the RAID cable connector.

  3. Push aside the securing tab and open the hinged door that secures the Supercap to its bracket.

  4. Lift the Supercap free of the bracket and set it aside.

Step 3

Install a new Supercap:

  1. Orient the Supercap so that its cable connector is facing the RAID cable connector.

  2. Make sure that the RAID cable will not obstruct installation, then insert the new Supercap into the mounting bracket.

    Note

     

    You must feed the Supercap cable and connector through the open space in the tray so that the Supercap cable can connect to the RAID cable.

  3. Connect the Supercap cable from the RAID controller card to the connector on the new Supercap cable.

  4. Close the hinged plastic bracket over the Supercap. Push down until the securing tab clicks.

Step 4

Replace the top cover to the server.

Step 5

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Replacing a SATA Interposer Card

For software-based storage control that uses the server's embedded SATA controller, the server requires a SATA interposer card that plugs into a horizontal socket on a dedicated mRAID riser.

The SATA Interposer card (UCSC-SATAIN-220M6) supports Advanced Host Control Interface (AHCI) by default. AHCI supports SATA-only drives. A maximum of 8 SATA drives is supported with AHCI, and this configuration requires a SATA interposer card, which plugs directly into the drive backplane. The SATA Interposer supports drives in slots 1-4 and 6-9.

Procedure


Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing Top Cover.

Step 2

Remove the mRAID riser from the server:

  1. Using both hands, grasp the external blue handle on the rear of the riser and the blue finger-grip on the front end of the riser.

  2. Lift the riser straight up to disengage it from the motherboard socket.

  3. Set the riser upside down on an antistatic surface.

Step 3

Remove any existing card from the riser:

  1. Disconnect cables from the existing card.

  2. Open the blue card-ejector lever on the back side of the card to eject it from the socket on the riser.

  3. Pull the card from the riser and set it aside.

Step 4

Install a new card to the riser:

  1. With the riser upside down, set the card on the riser.

  2. Push on both corners of the card to seat its connector in the riser socket.

  3. Close the card-ejector lever on the card to lock it into the riser.

Step 5

Return the riser to the server:

  1. Align the connector on the riser with the socket on the motherboard. At the same time, align the two slots on the back side of the bracket with the two pegs on the inner chassis wall.

  2. Push down gently to engage the riser connector with the motherboard socket. The metal riser bracket must also engage the two pegs that secure it to the chassis wall.

Step 6

Reconnect the cables to their connectors on the new card.

Step 7

Replace the top cover to the server.

Step 8

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Figure 28. mRAID Riser Location

Replacing a Chassis Intrusion Switch

The chassis intrusion switch is an optional security feature that logs an event in the system event log (SEL) whenever the cover is removed from the chassis.
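
Intrusion events can be reviewed in the SEL through the Cisco IMC interfaces or through standard IPMI tooling. The following is a minimal sketch, assuming ipmitool is installed on the host (or on a management station, with IPMI over LAN enabled on the Cisco IMC); the IP address and credentials shown are placeholders only.

  # Read the system event log in-band from the host operating system
  ipmitool sel elist

  # Or read it out-of-band over the LAN (address and credentials are placeholders)
  ipmitool -I lanplus -H 192.0.2.10 -U admin -P password sel elist

  # Filter for chassis intrusion entries
  ipmitool sel elist | grep -i intrusion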

Procedure


Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing Top Cover.

Step 2

Remove an existing intrusion switch:

  1. Disconnect the intrusion switch cable from the socket on the motherboard.

  2. Use a #1 Phillips screwdriver to loosen and remove the single screw that holds the switch mechanism to the chassis wall.

  3. Slide the switch mechanism straight up to disengage it from the clips on the chassis.

Step 3

Install a new intrusion switch:

  1. Slide the switch mechanism down into the clips on the chassis wall so that the screw hole lines up.

  2. Use a #1 Phillips screwdriver to install the single screw that secures the switch mechanism to the chassis wall.

  3. Connect the switch cable to the socket on the motherboard.

Step 4

Replace the cover to the server.

Step 5

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.


Installing a Trusted Platform Module (TPM)

A Trusted Platform Module (TPM) is a computer chip (microcontroller) that can securely store artifacts used to authenticate the platform (server). These artifacts can include passwords, certificates, or encryption keys. A TPM can also be used to store platform measurements that help ensure that the platform remains trustworthy. Authentication (ensuring that the platform can prove that it is what it claims to be) and attestation (a process helping to prove that a platform is trustworthy and has not been breached) are necessary steps to ensure safer computing in all environments.

The trusted platform module (TPM) plugs into a motherboard socket and is then permanently secured with a one-way screw.

TPM Considerations

  • This server supports either TPM version 1.2 or TPM version 2.0 (UCSX-TPM-002C) as defined by the Trusted Computing Group (TCG). The TPM is also SPI-based.

  • Field replacement of a TPM is not supported; you can install a TPM after-factory only if the server does not already have a TPM installed.

  • If there is an existing TPM 1.2 installed in the server, you cannot upgrade to TPM 2.0. If there is no existing TPM in the server, you can install TPM 2.0.

  • If a server with a TPM is returned, the replacement server must be ordered with a new TPM.

  • If the TPM 2.0 becomes unresponsive, reboot the server.

Installing and Enabling a TPM


Note


Field replacement of a TPM is not supported; you can install a TPM after-factory only if the server does not already have a TPM installed.

This topic contains the following procedures, which must be followed in this order when installing and enabling a TPM:

  1. Installing the TPM Hardware

  2. Enabling the TPM in the BIOS

  3. Enabling the Intel TXT Feature in the BIOS

Installing TPM Hardware


Note


For security purposes, the TPM is installed with a one-way screw. It cannot be removed with a standard screwdriver.

The TPM socket is located under PCIe riser cage 2.

Procedure

Step 1

Prepare the server for component installation:

  1. Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

  2. Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

    Caution

     
    If you cannot safely view and access the component, remove the server from the rack.
  3. Remove the top cover from the server as described in Removing Top Cover.

Step 2

Check if there is a card installed in PCIe riser 2:

  • If no card is installed in PCIe riser 2, you can access the TPM socket. Go to the next step.

  • If a card is installed in PCIe riser 2, remove the PCIe riser assembly from the chassis to provide clearance before continuing with the next step. See Replacing a PCIe Card for instructions on removing the PCIe riser.

Step 3

Install a TPM:

  1. Locate the TPM socket on the motherboard, as shown below.

    1  TPM socket location on the motherboard, under any card installed in PCIe riser 2

  2. Align the connector that is on the bottom of the TPM circuit board with the motherboard TPM socket. Align the screw hole on the TPM board with the screw hole that is adjacent to the TPM socket.

  3. Push down evenly on the TPM to seat it in the motherboard socket.

  4. Install the single one-way screw that secures the TPM to the motherboard.

  5. If you removed the PCIe riser assembly to provide clearance, return it to the server now.

Step 4

Replace the cover to the server.

Step 5

Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.

Step 6

Continue with Enabling the TPM in the BIOS.


Enabling the TPM in the BIOS

After hardware installation, you must enable TPM support in the BIOS.


Note


You must set a BIOS Administrator password before performing this procedure. To set this password, press the F2 key when prompted during system boot to enter the BIOS Setup utility. Then navigate to Security > Set Administrator Password and enter the new password twice as prompted.


Procedure

Step 1

Enable TPM Support:

  1. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.

  2. Log in to the BIOS Setup Utility with your BIOS Administrator password.

  3. On the BIOS Setup Utility window, choose the Advanced tab.

  4. Choose Trusted Computing to open the TPM Security Device Configuration window.

  5. Change TPM SUPPORT to Enabled.

  6. Press F10 to save your settings and reboot the server.

Step 2

Verify that TPM support is now enabled:

  1. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.

  2. Log into the BIOS Setup utility with your BIOS Administrator password.

  3. Choose the Advanced tab.

  4. Choose Trusted Computing to open the TPM Security Device Configuration window.

  5. Verify that TPM SUPPORT and TPM State are Enabled.
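
Optionally, once the server has booted an operating system, you can also confirm that the TPM is visible outside the BIOS. This is a minimal sketch for a Linux host; the kernel TPM driver and the tpm2-tools package are assumptions, not requirements of this procedure.

  # Confirm that the kernel detected a TPM device
  dmesg | grep -i tpm
  ls /dev/tpm*

  # With tpm2-tools installed, query basic TPM 2.0 properties
  tpm2_getcap properties-fixed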

Step 3

Continue with Enabling the Intel TXT Feature in the BIOS.


Enabling the Intel TXT Feature in the BIOS

Intel Trusted Execution Technology (TXT) provides greater protection for information that is used and stored on the business server. A key aspect of that protection is the provision of an isolated execution environment and associated sections of memory where operations can be conducted on sensitive data, invisibly to the rest of the system. Intel TXT provides for a sealed portion of storage where sensitive data such as encryption keys can be kept, helping to shield them from being compromised during an attack by malicious code.
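
Before or after changing these BIOS settings, you can check from a Linux host whether the CPU reports the related capabilities. This is a minimal sketch; the flag names (vmx for Intel VT, smx for the Safer Mode Extensions used by Intel TXT) are standard /proc/cpuinfo flags, and the commands assume a Linux environment with ordinary shell utilities.

  # vmx = Intel VT support, smx = Safer Mode Extensions (used by Intel TXT)
  grep -o -w -e vmx -e smx /proc/cpuinfo | sort | uniq -c

  # VT-d (IOMMU) activity appears in the kernel log once it is enabled in the BIOS
  dmesg | grep -i -e dmar -e iommu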

Procedure

Step 1

Reboot the server and watch for the prompt to press F2.

Step 2

When prompted, press F2 to enter the BIOS Setup utility.

Step 3

Verify that the prerequisite BIOS values are enabled:

  1. Choose the Advanced tab.

  2. Choose Intel TXT(LT-SX) Configuration to open the Intel TXT(LT-SX) Hardware Support window.

  3. Verify that the following items are listed as Enabled:

    • VT-d Support (default is Enabled)

    • VT Support (default is Enabled)

    • TPM Support

    • TPM State

  4. Do one of the following:

    • If VT-d Support and VT Support are already enabled, skip to step 4.

    • If VT-d Support and VT Support are not enabled, continue with the next steps to enable them.

  5. Press Escape to return to the BIOS Setup utility Advanced tab.

  6. On the Advanced tab, choose Processor Configuration to open the Processor Configuration window.

  7. Set Intel (R) VT and Intel (R) VT-d to Enabled.

Step 4

Enable the Intel Trusted Execution Technology (TXT) feature:

  1. Return to the Intel TXT(LT-SX) Hardware Support window if you are not already there.

  2. Set TXT Support to Enabled.

Step 5

Press F10 to save your changes and exit the BIOS Setup utility.


Removing the Trusted Platform Module (TPM)

The TPM module is attached to the printed circuit board assembly (PCBA). You must disconnect the TPM module from the PCBA before recycling the PCBA. The TPM module is secured to a threaded standoff by a tamper-resistant screw. If you do not have the correct tool for the screw, you can use a pair of pliers to remove it.

Before you begin


Note


For Recyclers Only! This procedure is not a standard field-service option. This procedure is for recyclers who will be reclaiming the electronics for proper disposal to comply with local eco design and e-waste regulations.


To remove the TPM, the following requirements must be met for the server:

  • It must be disconnected from facility power.

  • It must be removed from the equipment rack.

  • The top cover must be removed. If the top cover is not removed, see Removing Top Cover.

Procedure


Step 1

Locate the TPM module.

The following illustration shows the location of the TPM module's screw.

Figure 29. Screw Location for Removing the TPM Module

Step 2

Using the pliers, grip the head of the screw and turn it counterclockwise until the screw releases.

Step 3

Remove the TPM module and dispose of it properly.


What to do next

Remove the PCBA. See Recycling the PCB Assembly (PCBA).

Recycling the PCB Assembly (PCBA)

The PCBA is secured to the server's sheet metal through the following:

  • 13 M3.5x0.6mm Torx screws.

  • 2 M3.5x0.6mm Torx thumb screws.

You must disconnect the PCBA from the tray before recycling the PCBA.

Before you begin


Note


For Recyclers Only! This procedure is not a standard field-service option. This procedure is for recyclers who will be reclaiming the electronics for proper disposal to comply with local eco design and e-waste regulations.


To remove the printed circuit board assembly (PCBA), the following requirements must be met:

  • The server must be disconnected from facility power.

  • The server must be removed from the equipment rack.

  • The server's top cover must be removed. See Removing Top Cover.

Gather the following tools before you start this procedure:

  • Pliers

  • T10 Torx screwdriver

Procedure


Step 1

If you have not removed the TPM module, do so now.

Step 2

When the TPM module is detached, locate the PCBA's screws.

The following figure shows the location of the screws.

Figure 30. Screw Locations for Removing the PCBA

Step 3

Using a T10 Torx driver, remove all of the indicated screws.

Step 4

Remove the PCBA and dispose of it properly.


Service Headers and Jumpers

This server includes a block of DIP switches (SW12) and a header block (CN3) that you can use for certain service and debug functions.

This section contains the following topics:

Figure 31. Location of Service Header Blocks SW12 and CN3

  1  Location of header block CN3
  2  Boot Alternate Cisco IMC Header: CN3 pins 1 - 2
  3  System Secure Firmware Erase Header: CN3 pins 3 - 4
  4  Location of SW12 DIP switches
  5  BIOS Recovery Switch (SW12 Switch 5)
  6  Clear BIOS Password Switch (SW12 Switch 6)
  7  Clear CMOS Switch (SW12 Switch 9)

Using the Clear CMOS Switch (SW12, Switch 9)

You can use this switch to clear the server’s CMOS settings in the case of a system hang. For example, if the server hangs because of incorrect settings and does not boot, use this switch to invalidate the settings and reboot with defaults.

You will find it helpful to refer to the location of the SW12 switch block. See Service Headers and Jumpers.


Caution


Clearing the CMOS removes any customized settings and might result in data loss. Make a note of any necessary customized settings in the BIOS before you use this clear CMOS procedure.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

Remove the top cover from the server as described in Removing Top Cover.

Step 4

Using your finger, gently push the SW12 switch 9 to the side marked ON.

Step 5

Using your finger, gently push switch 9 to its original position (OFF).

Note

 
If you do not reset the switch to its original position (OFF), the CMOS settings are reset to the defaults every time you power-cycle the server.

Step 6

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to main power mode, indicated when the Power LED is green.

Note

 
You must allow the entire server to reboot to main power mode to complete the reset (that is, to restore the CMOS settings to the defaults). The state of the switch cannot be determined without the host CPU running.

Using the BIOS Recovery Switch (SW12, Switch 5)

Depending on the stage at which the BIOS becomes corrupted, you might see different behavior.

  • If the BIOS BootBlock is corrupted, you might see the system get stuck on the following message:

    Initializing and configuring memory/hardware
  • If it is a non-BootBlock corruption, a message similar to the following is displayed:

    ****BIOS FLASH IMAGE CORRUPTED****
    Flash a valid BIOS capsule file using Cisco IMC WebGUI or CLI interface.
    IF Cisco IMC INTERFACE IS NOT AVAILABLE, FOLLOW THE STEPS MENTIONED BELOW.
    1. Connect the USB stick with bios.cap file in root folder.
    2. Reset the host.
    IF THESE STEPS DO NOT RECOVER THE BIOS
    1. Power off the system.
    2. Mount recovery jumper.
    3. Connect the USB stick with bios.cap file in root folder.
    4. Power on the system.
    Wait for a few seconds if already plugged in the USB stick.
    REFER TO SYSTEM MANUAL FOR ANY ISSUES.

Note


As indicated by the message shown above, there are two procedures for recovering the BIOS. Try procedure 1 first. If that procedure does not recover the BIOS, use procedure 2.
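
Both recovery procedures in the message above depend on a USB stick that has the bios.cap recovery file in its root folder. The following is a minimal sketch of preparing such a stick on a Linux workstation; the device name (/dev/sdb1), the FAT32 filesystem, and the source path of bios.cap are illustrative assumptions only. Use the BIOS capsule file that matches your server's BIOS release, and verify the device name carefully before formatting.

  # WARNING: formatting erases the stick; /dev/sdb1 is an example device name only
  sudo mkfs.vfat -F 32 /dev/sdb1

  # Mount the stick and copy the BIOS capsule file to its root folder
  sudo mount /dev/sdb1 /mnt
  sudo cp /path/to/bios.cap /mnt/    # source path is an example
  sudo umount /mnt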

Using the Clear BIOS Password Switch (SW12, Switch 6)

You can use this switch to clear the BIOS password.

You will find it helpful to refer to the location of the SW12 switch block. See Service Headers and Jumpers.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server. Disconnect power cords from all power supplies.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

Remove the top cover from the server as described in Removing Top Cover.

Step 4

Using your finger, gently slide the SW12 switch 6 to the ON position.

Step 5

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note

 
You must allow the entire server to reboot to main power mode to complete the reset. The state of the switch cannot be determined without the host CPU running.

Step 7

Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 8

Remove the top cover from the server.

Step 9

Reset the switch to its original position (OFF).

Note

 
If you do not reset the switch to its original position (OFF), the BIOS password is cleared every time you power-cycle the server.

Step 10

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.


Using the Boot Alternate Cisco IMC Image Header (CN3, Pins 1-2)

You can use this Cisco IMC debug header to force the system to boot from an alternate Cisco IMC image.

You will find it helpful to refer to the location of the CN3 header. See Service Headers and Jumpers.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server. Disconnect power cords from all power supplies.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

Remove the top cover from the server as described in Removing Top Cover.

Step 4

Install a two-pin jumper across CN3 pins 1 and 2.

Step 5

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note

 

When you next log in to Cisco IMC, you see a message similar to the following:

'Boot from alternate image' debug functionality is enabled.  
CIMC will boot from alternate image on next reboot or input power cycle.

Note

 
If you do not remove the jumper, the server will boot from an alternate Cisco IMC image every time that you power cycle the server or reboot Cisco IMC.
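
If the two Cisco IMC images are at different firmware versions, one quick way to see which image is currently running is to query the management controller's firmware revision and compare it against your known primary and backup versions. This is a minimal sketch, assuming ipmitool is installed on the host with in-band access to the Cisco IMC.

  # Report the running management controller firmware revision
  ipmitool mc info | grep -i 'firmware revision'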

Step 7

To remove the jumper, press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 8

Remove the top cover from the server.

Step 9

Remove the jumper that you installed.

Step 10

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.


Using the System Firmware Secure Erase Header (CN3, Pins 3-4)

You can use this header to securely erase system firmware from the server.

You will find it helpful to refer to the location of the CN3 header. See Service Headers and Jumpers.

Procedure


Step 1

Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server. Disconnect power cords from all power supplies.

Step 2

Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach cables from the rear panel to provide clearance.

Caution

 
If you cannot safely view and access the component, remove the server from the rack.

Step 3

Remove the top cover from the server as described in Removing Top Cover.

Step 4

Install a two-pin jumper across CN3 pins 3 and 4.

Step 5

Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode, indicated when the Power LED on the front panel is amber.

Step 6

Return the server to main power mode by pressing the Power button on the front panel. The server is in main power mode when the Power LED is green.

Note

 
You must allow the entire server to reboot to main power mode to complete the reset. The state of the jumper cannot be determined without the host CPU running.

Step 7

Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the server to remove all power.

Step 8

Remove the top cover from the server.

Step 9

Remove the jumper that you installed.

Note

 
If you do not remove the jumper, the system firmware secure erase is performed every time you power-cycle the server.

Step 10

Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the server by pressing the Power button.