Replacing Server Node and I/O Expander Components

This chapter contains procedures for replacing components internal to the server node and the optional I/O expander.

Preparing For Component Replacement

This section contains procedures that are referred to from the replacement procedures.

Removing an S3260 M5 Server Node or I/O Expander Top Cover

The optional I/O expander and the server node use the same top cover. If an I/O expander is attached to the top of the server node, the top cover is on the I/O expander, as shown in the side view in the following figure. In this case, there is also an intermediate cover between the server node and the I/O expander.

Figure 1. Side View, Server Node With Attached I/O Expander

1: Top cover. In this view, the top cover is on the attached I/O expander.

2: Intermediate cover. In this view, the intermediate cover is attached to the server node.

3: Intermediate cover securing screws (two on each side)


Note

You do not have to power off the chassis in this procedure. Replacement with the chassis powered on is supported if you shut down the server node before removal.


Procedure


Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.
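If you prefer to shut the node down remotely rather than from the physical power button, a standard IPMI client is one option. The following is a minimal sketch, not the documented Cisco procedure; it assumes IPMI over LAN is enabled on the node's BMC, and the address and credentials shown are placeholders:

# Request a graceful (ACPI soft) OS shutdown of the node;
# 10.0.0.50, admin, and password are hypothetical values.
ipmitool -I lanplus -H 10.0.0.50 -U admin -P password chassis power soft

# Confirm the node reports powered off before removing it.
ipmitool -I lanplus -H 10.0.0.50 -U admin -P password chassis power status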

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Remove the top cover from the server node or the I/O expander (if present):

  1. Lift the cover latch handle to an upright position.

  2. Turn the cover latch handle 90 degrees to release the lock.

  3. Slide the cover toward the rear (toward the rear-panel buttons) and then lift it from the server node or I/O expander (if present).

Figure 2. Top View, Server Node or I/O Expander Top Cover

1: Top cover on server node or I/O expander

2: Cover latch handle (shown in closed, flat position)

Step 4

Reinstall the top cover:

  1. Set the cover in place on the server node or I/O expander (if present), offset about one inch toward the rear. Pegs on the inside of the cover must engage the tracks on the server node or I/O expander base.

  2. Push the cover forward until it stops.

  3. Turn the latch handle 90 degrees to close the lock.

  4. Fold the latch handle flat.

Step 5

Reinstall the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.

Step 6

Power on the server node.


Removing an I/O Expander From a Server Node

The topics in this section describe how to remove an I/O expander and its intermediate cover from a server node, and then reinstall them, so that you can access the components inside the server node.


Note

You do not have to power off the chassis in this procedure. Replacement with the chassis powered on is supported if you shut down the server node before removal.


Disassembling the I/O Expander Assembly

Procedure

Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node with attached I/O expander from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Remove the top cover from the I/O expander:

  1. Lift the cover latch handle to an upright position.

  2. Turn the cover latch handle 90 degrees to release the lock.

  3. Slide the cover toward the rear (toward the rear-panel buttons) and then lift it from the I/O expander.

Figure 3. Top View, Server Node or I/O Expander Top Cover

1: Top cover on server node or I/O expander

2: Cover latch handle (shown in closed, flat position)

Step 4

Remove the I/O expander from the server node:

  1. Remove the five screws that secure the I/O expander to the top of the server node.

    Figure 4. I/O Expander Securing Screws (Five)
  2. Use two small flat-head screwdrivers (1/4-inch or equivalent) to help separate the connector on the underside of the I/O expander from the socket on the server node board.

    Insert a screwdriver about ½ inch into the “REMOVAL SLOT” that is marked with an arrow on each side of the I/O expander. Then lift up evenly on both screwdrivers at the same time to separate the connectors and lift the I/O expander about ½ inch.

  3. Grasp the two handles on the I/O expander board and lift it straight up.

Figure 5. Separating the I/O Expander From the Server Node

1: Side view showing REMOVAL SLOT for screwdriver insertion (one on each side of the I/O expander)

2: Rear view of server node with I/O expander

Step 5

Remove the intermediate cover from the server node:

  1. Remove the four screws that secure the intermediate cover and set them aside. There are two screws on each side of the intermediate cover.

  2. Slide the intermediate cover toward the rear (toward the rear-panel buttons) and then lift it from the server node.

Step 6

Install the server node (with attached I/O expander, if present) to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.

Step 7

Power on the server node.


Reassembling the I/O Expander Assembly

Before you begin

This procedure assumes that you have already taken apart the I/O expander/server node assembly.

Procedure

Step 1

Reinstall the intermediate cover to the server node:

  1. Set the intermediate cover in place on the server node, offset about one inch toward the rear. Pegs on the inside of the cover must engage the tracks on the server node base.

  2. Push the cover forward until it stops.

  3. Reinstall the four screws that secure the intermediate cover.

Step 2

Reinstall the I/O expander to the server node:

Caution 

Use caution to align all features of the I/O expander with the intermediate cover and server node before mating the connector on the underside of the expander with the socket on the server board. The connector can be damaged if alignment is not correct.

  1. Carefully align the I/O expander with the alignment pegs on the top of the intermediate cover.

  2. Set the I/O expander down on the intermediate cover and lower it gently to mate the mezzanine connector and the socket on the server board.

    Figure 6. Reinstalling the I/O Expander to the Server Node

    1: Intermediate cover attached to server node

    2: Intermediate cover screws (two on each side of cover)

    3: I/O expander mezzanine connector

    4: Alignment pegs on intermediate cover

  3. If an NVMe SSD is present in the right-hand socket (IOENVMe2) of the I/O expander, you must remove it to access the PRESS HERE plate in the next step.

    See Replacing an NVMe SSD in the I/O Expander to remove it, then return to the next step.

  4. Press down firmly on the plastic plate marked “PRESS HERE” to fully seat the connector to the server node board.

    Figure 7. I/O Expander, Showing PRESS HERE Plate
  5. If you removed an NVMe SSD to access the PRESS HERE plate, reinstall it now.

    See Replacing an NVMe SSD in the I/O Expander.

Caution 

Before you reinstall the I/O expander securing screws, you must use the alignment tool (UCSC-C3K-M4IOTOOL) in the next step to ensure alignment of the connectors that plug into the internal chassis backplane. Failure to ensure alignment might damage the sockets on the backplane.

Step 3

Insert the four pegs of the alignment tool into the holes that are built into the connector side of the server node and I/O expander. Ensure that the alignment tool fits into all four holes and lies flat.

The alignment tool is shipped with systems that are ordered with an I/O expander. It is also shipped with I/O expander replacement spares. You can order the tool using Cisco PID UCSC-C3K-M4IOTOOL.

Figure 8. Using the I/O Expander Alignment Tool

1: Alignment tool

2: Connector side of server node and I/O expander

3: Alignment tool installed

Step 4

Reinstall and tighten the five screws that secure the I/O expander to the top of the server node.

Step 5

Remove the alignment tool.

Step 6

Reinstall the top cover to the I/O expander:

  1. Set the cover in place on the server node or I/O expander (if present), offset about one inch toward the rear. Pegs on the inside of the cover must engage the tracks on the server node or I/O expander base.

  2. Push the cover forward until it stops.

  3. Turn the latch handle 90 degrees to close the lock.

  4. Fold the latch handle flat.

Step 7

Reinstall the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.

Step 8

Power on the server node.


Component Replacement Procedures

This section contains procedures for replacing components in the S3260 M5 server node and the optional I/O expander.

Replacing CPUs and Heatsinks

Each server node contains two CPUs.

Special Information For Upgrades to Second Generation Intel Xeon Scalable Processors


Caution

You must upgrade your server firmware to the required minimum level before you upgrade to the Second Generation Intel Xeon Scalable Processors that are supported in this server. Older firmware versions cannot recognize the new CPUs, which would result in a non-bootable server.


The minimum software and firmware versions required for this server to support Second Generation Intel Xeon Scalable processors are as follows:

Table 1. Second Generation Intel Xeon Scalable Processor Minimum Requirements

Software or Firmware                            | Minimum Version
------------------------------------------------|----------------
Server Cisco IMC                                | 4.0(4)
Server BIOS                                     | 4.0(4)
Cisco UCS Manager (UCS-integrated servers only) | 4.0(4)

Do one of the following actions:

  • If your server's firmware and Cisco UCS Manager software are already at the required minimums shown above (or later), you can replace the CPU hardware by using the procedure in this section.

  • If your server's firmware and Cisco UCS Manager software are earlier than the required levels, use the instructions in the Cisco UCS C- and S-Series M5 Servers Upgrade Guide For Next Gen Intel Xeon Processors to upgrade your software. After you upgrade the software, return to this section as directed to replace the CPU hardware.
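If you want to confirm the running versions before proceeding, you can check from the Cisco IMC CLI. A minimal sketch, assuming the show cimc detail and show bios detail commands are available in your Cisco IMC release (the CLI command set varies by version):

Server# show cimc detail
Server# show bios detail

Compare the reported Cisco IMC and BIOS versions against Table 1 before replacing the CPU hardware.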

CPU Configuration Rules

  • A server node must have two CPUs to operate.

  • For Intel Xeon Scalable processors (first generation): The maximum combined memory allowed in the DIMM slots controlled by any one CPU is 768 GB. To populate the DIMM slots with more than 768 GB of combined memory, you must use a high-memory CPU that has a PID that ends with an "M", for example, UCS-CPU-6134M.

  • For Second Generation Intel Xeon Scalable processors: These Second Generation CPUs have three memory tiers. These rules apply on a per-socket basis:

    • If the CPU socket has up to 1 TB of memory installed, a CPU with no suffix can be used (for example, Gold 6240).

    • If the CPU socket has more than 1 TB (up to 2 TB) of memory installed, you must use a CPU with an M suffix (for example, Platinum 8276M).

    • If the CPU socket has more than 2 TB (up to 4.5 TB) of memory installed, you must use a CPU with an L suffix (for example, Platinum 8270L).

Tools Required For CPU Replacement

You need the following tools and equipment for this procedure:

  • T-30 Torx driver—Supplied with replacement CPU.

  • #1 flat-head screwdriver—Supplied with replacement CPU.

  • CPU assembly tool—Supplied with replacement CPU. Orderable separately as Cisco PID UCS-CPUAT=.

  • Heatsink cleaning kit—Supplied with replacement CPU. Orderable separately as Cisco PID UCSX-HSCK=.

  • Thermal interface material (TIM)—Syringe supplied with replacement CPU. Use only if you are reusing your existing heatsink (new heatsinks have a pre-applied pad of TIM). Orderable separately as Cisco PID UCS-CPU-TIM=.

See also Additional CPU-Related Parts to Order with RMA Replacement CPUs.

Replacing a CPU and Heatsink


Caution

CPUs and their sockets are fragile and must be handled with extreme care to avoid damaging pins. The CPUs must be installed with heatsinks and thermal interface material to ensure cooling. Failure to install a CPU correctly might result in damage to the server.


Procedure

Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Do one of the following to access the component inside the server node:

  • If the server node has no I/O expander attached, remove the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander attached, remove the I/O expander and the intermediate cover as described in Disassembling the I/O Expander Assembly.

Step 4

Remove the existing CPU/heatsink assembly from the server:

  1. Use the T-30 Torx driver that is supplied with the replacement CPU to loosen the four captive nuts that secure the assembly to the motherboard standoffs.

    Note 

    Alternate loosening the heatsink nuts evenly so that the heatsink remains level as it is raised. Loosen the heatsink nuts in the order shown on the heatsink label: 4, 3, 2, 1.

  2. Lift straight up on the CPU/heatsink assembly and set it heatsink-down on an antistatic surface.

    Figure 9. Removing the CPU/Heatsink Assembly

    1: Heatsink

    2: Heatsink captive nuts (two on each side)

    3: CPU carrier (below heatsink in this view)

    4: CPU socket on motherboard

    5: T-30 Torx driver

Step 5

Separate the heatsink from the CPU assembly (the CPU assembly includes the CPU and the CPU carrier):

  1. Place the heatsink with CPU assembly so that it is oriented upside-down as shown below.

    Note the thermal-interface material (TIM) breaker location. TIM BREAKER is stamped on the CPU carrier next to a small slot.

    Figure 10. Separating the CPU Assembly From the Heatsink

    1: CPU carrier

    2: CPU

    3: TIM BREAKER slot in CPU carrier

    4: CPU-carrier inner-latch nearest to the TIM breaker slot

    5: #1 flat-head screwdriver inserted into TIM breaker slot

  2. Pinch inward on the CPU-carrier inner-latch that is nearest the TIM breaker slot and then push up to disengage the clip from its slot in the heatsink corner.

  3. Insert the blade of a #1 flat-head screwdriver into the slot marked TIM BREAKER.

    Caution 

    In the following step, do not pry on the CPU surface. Use gentle rotation to lift on the plastic surface of the CPU carrier at the TIM breaker slot. Use caution to avoid damaging the heatsink surface.

  4. Gently rotate the screwdriver to lift up on the CPU until the TIM on the heatsink separates from the CPU.

    Note 

    Do not allow the screwdriver tip to touch or damage the green CPU substrate.

  5. Pinch the CPU-carrier inner-latch at the corner opposite the TIM breaker and push up to disengage the clip from its slot in the heatsink corner.

  6. On the remaining two corners of the CPU carrier, gently pry outward on the outer-latches and then lift the CPU-assembly from the heatsink.

    Note 
    Handle the CPU-assembly by the plastic carrier only. Do not touch the CPU surface. Do not separate the CPU from the carrier.
Step 6

The new CPU assembly is shipped on a CPU assembly tool. Take the new CPU assembly and CPU assembly tool out of the carton.

If the CPU assembly and CPU assembly tool become separated, note the alignment features shown below for correct orientation. The pin 1 triangle on the CPU carrier must be aligned with the angled corner on the CPU assembly tool.

Caution 

CPUs and their sockets are fragile and must be handled with extreme care to avoid damaging pins.

Figure 11. CPU Assembly Tool, CPU Assembly, and Heatsink Alignment Features

1: CPU assembly tool

2: CPU assembly (CPU in plastic carrier)

3: Heatsink

4: Angled corner on heatsink (pin 1 alignment feature)

5: Triangle cut into carrier (pin 1 alignment feature)

6: Angled corner on CPU assembly tool (pin 1 alignment feature)

Step 7

Apply new TIM to the heatsink:

Note 
The heatsink must have new TIM on the heatsink-to-CPU surface to ensure proper cooling and performance.
  • If you are installing a new heatsink, it is shipped with a pre-applied pad of TIM. Skip to Step 8.

  • If you are reusing a heatsink, you must remove the old TIM from the heatsink and then apply new TIM to the CPU surface from the supplied syringe. Continue with substep 1 below.

  1. Apply the cleaning solution that is included with the heatsink cleaning kit (UCSX-HSCK=) to the old TIM on the heatsink and let it soak for at least 15 seconds.

  2. Wipe all of the TIM off the heatsink using the soft cloth that is included with the heatsink cleaning kit. Be careful to avoid scratching the heatsink surface.

  3. Using the syringe of TIM provided with the new CPU (UCS-CPU-TIM=), apply 1.5 cubic centimeters (1.5 ml) of thermal interface material to the top of the CPU. Use the pattern shown below to ensure even coverage.

    Figure 12. Thermal Interface Material Application Pattern
Step 8

With the CPU assembly on the CPU assembly tool, set the heatsink onto the CPU assembly. Note the pin 1 alignment features for correct orientation. Push down gently until you hear the corner clips of the CPU carrier click onto the heatsink corners.

Caution 

In the following step, use extreme care to avoid touching or damaging the CPU contacts or the CPU socket pins.

Step 9

Install the CPU/heatsink assembly to the server:

  1. Lift the heatsink with attached CPU assembly from the CPU assembly tool.

  2. Align the CPU with heatsink over the CPU socket on the motherboard, as shown below.

    Note the alignment features. The pin 1 angled corner on the heatsink must align with the pin 1 angled corner on the CPU socket. The CPU-socket posts must align with the guide-holes in the assembly.

    Figure 13. Installing the Heatsink/CPU Assembly to the CPU Socket

    1: Guide hole in assembly (two)

    2: CPU socket alignment post (two)

    3: CPU socket leaf spring

    4: Angled corner on heatsink (pin 1 alignment feature)

    5: Angled corner on socket (pin 1 alignment feature)

  3. Set the heatsink with CPU assembly down onto the CPU socket.

  4. Use the T-30 Torx driver that is supplied with the replacement CPU to tighten the four captive nuts that secure the heatsink to the motherboard standoffs.

    Caution 
    Alternate tightening the heatsink nuts evenly so that the heatsink remains level while it is lowered. Tighten the heatsink nuts in the order shown on the heatsink label: 1, 2, 3, 4. The captive nuts must be fully tightened so that the leaf springs on the CPU socket lie flat.
Step 10

Return the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.

Step 11

Power on the server node.


Additional CPU-Related Parts to Order with RMA Replacement CPUs

When a return material authorization (RMA) of the CPU is done on a server node, additional parts might not be included with the CPU spare. The TAC engineer might need to add the additional parts to the RMA to help ensure a successful replacement.


Note

The following items apply to CPU replacement scenarios. If you are replacing a server node and moving existing CPUs to the new board, you do not have to separate the heatsink from the CPU. See Additional CPU-Related Parts to Order with RMA Replacement Server Node.


  • Scenario 1—You are reusing the existing heatsinks:

    • Heatsink cleaning kit (UCSX-HSCK=)

    • Thermal interface material (TIM) kit for M5 servers (UCS-CPU-TIM=)

  • Scenario 2—You are replacing the existing heatsinks:


    Caution

    Use only the correct heatsink for your CPUs to ensure proper cooling.
    • Heatsink: UCS-S3260-M5HS=

    • Heatsink cleaning kit (UCSX-HSCK=)

  • Scenario 3—You have a damaged CPU carrier (the plastic frame around the CPU):

    • CPU Carrier: UCS-M5-CPU-CAR=

    • #1 flat-head screwdriver (for separating the CPU from the heatsink)

    • Heatsink cleaning kit (UCSX-HSCK=)

    • Thermal interface material (TIM) kit for M5 servers (UCS-CPU-TIM=)

A CPU heatsink cleaning kit is good for up to four CPU and heatsink cleanings. The cleaning kit contains two bottles of solution, one to clean the CPU and heatsink of old TIM and the other to prepare the surface of the heatsink.

New heatsink spares come with a pre-applied pad of TIM. It is important to clean any old TIM off of the CPU surface prior to installing the heatsinks. Therefore, even when you are ordering new heatsinks, you must order the heatsink cleaning kit.

Additional CPU-Related Parts to Order with RMA Replacement Server Node

When a return material authorization (RMA) of a server node is done, you move the existing CPUs to the new server node.


Note

Unlike previous generation CPUs, the M5 server CPUs do not require you to separate the heatsink from the CPU when you move the CPU-heatsink assembly. Therefore, no additional heatsink cleaning kit or thermal-interface material items are required.


  • The only tool required for moving a CPU/heatsink assembly is a T-30 Torx driver.

To move a CPU to a new server node, use the procedure in Moving an M5 Generation CPU.

Moving an M5 Generation CPU

Tool required for this procedure: T-30 Torx driver


Caution

When you receive a replacement server for an RMA, it includes dust covers on all CPU sockets. These covers protect the socket pins from damage during shipping. You must transfer these covers to the system that you are returning, as described in this procedure.


Procedure

Step 1

When moving an M5 CPU to a new server, you do not have to separate the heatsink from the CPU. Perform the following steps:

  1. Use a T-30 Torx driver to loosen the four captive nuts that secure the assembly to the board standoffs.

    Note 
    Alternate loosening the heatsink nuts evenly so that the heatsink remains level as it is raised. Loosen the heatsink nuts in the order shown on the heatsink label: 4, 3, 2, 1.
  2. Lift straight up on the CPU/heatsink assembly to remove it from the board.

  3. Set the CPUs with heatsinks aside on an anti-static surface.

    Figure 14. Removing the CPU/Heatsink Assembly

    1: Heatsink

    2: Heatsink captive nuts (two on each side)

    3: CPU carrier (below heatsink in this view)

    4: CPU socket on motherboard

    5: T-30 Torx driver

Step 2

Transfer the CPU socket covers from the new system to the system that you are returning:

  1. Remove the socket covers from the replacement system. Grasp the two recessed finger-grip areas marked "REMOVE" and lift straight up.

    Note 

    Keep a firm grasp on the finger-grip areas at both ends of the cover. Do not make contact with the CPU socket pins.

    Figure 15. Removing a CPU Socket Dust Cover

    1: Finger-grip areas marked "REMOVE"

  2. With the wording on the dust cover facing up, set it in place over the CPU socket. Make sure that all alignment posts on the socket plate align with the cutouts on the cover.

    Caution 

    In the next step, do not press down anywhere on the cover except the two points described. Pressing elsewhere might damage the socket pins.

  3. Press down on the two circular markings next to the word "INSTALL" that are closest to the two threaded posts (see the following figure). Press until you feel and hear a click.

    Note 

    You must press until you feel and hear a click to ensure that the dust covers do not come loose during shipping.

    Figure 16. Installing a CPU Socket Dust Cover

    Press down on the two circular marks next to the word INSTALL.

Step 3

Install the CPUs to the new system:

  1. On the new board, align the assembly over the CPU socket, as shown below.

    Note the alignment features. The pin 1 angled corner on the heatsink must align with the pin 1 angled corner on the CPU socket. The CPU-socket posts must align with the guide-holes in the assembly.

    Figure 17. Installing the Heatsink/CPU Assembly to the CPU Socket

    1: Guide hole in assembly (two)

    2: CPU socket alignment post (two)

    3: CPU socket leaf spring

    4: Angled corner on heatsink (pin 1 alignment feature)

    5: Angled corner on socket (pin 1 alignment feature)

  2. On the new board, set the heatsink with CPU assembly down onto the CPU socket.

  3. Use a T-30 Torx driver to tighten the four captive nuts that secure the heatsink to the board standoffs.

    Note 

    Alternate tightening the heatsink nuts evenly so that the heatsink remains level while it is lowered. Tighten the heatsink nuts in the order shown on the heatsink label: 1, 2, 3, 4. The captive nuts must be fully tightened so that the leaf springs on the CPU socket lie flat.


Replacing Memory DIMMs

There are 14 DIMM sockets on the server node board, with 7 DIMMs controlled by each CPU.


Caution

DIMMs and their sockets are fragile and must be handled with care to avoid damage during installation.



Caution

Cisco does not support third-party DIMMs. Using non-Cisco DIMMs in the system might result in system problems or damage to the motherboard.



Note

To ensure the best system performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace the memory.


For additional information about troubleshooting DIMM memory issues, see the document Troubleshoot DIMM Memory Issues in UCS.

DIMM Sockets

The following figure shows the DIMM sockets and how they are numbered on an S3260 M5 server node board.

  • A server node has 14 DIMM sockets (7 for each CPU).

  • Channels are labeled with letters as shown in the following figure. For example, channel A = DIMM sockets A1, A2.

  • Channels A and G use two DIMMs per channel (DPC); all other channels use one DPC.
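For example, CPU 1 controls channels A through F: channel A holds two DIMMs (A1 and A2) and channels B through F hold one DIMM each, giving 2 + 5 = 7 sockets. CPU 2 controls channels G, H, J, K, L, and M in the same arrangement, with channel G holding G1 and G2.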

Figure 18. S3260 M5 DIMM and CPU Numbering

DIMM Population Rules

Observe the following guidelines when installing or replacing DIMMs:

  • For optimal performance, spread DIMMs evenly across both CPUs and all channels. Populate the DIMM slots of each CPU identically.

  • For optimal performance, populate DIMMs in the order shown in the following table, depending on the number of DIMMs per CPU.


    Note

    The table below lists recommended configurations. Using 5 DIMMs per CPU is not recommended.


    Table 2. DIMM Population Order

    Number of DIMMs per CPU | Populate CPU 1 Blue #1 Slots | CPU 1 Black #2 Slot | Populate CPU 2 Blue #1 Slots | CPU 2 Black #2 Slot
    ------------------------|------------------------------|---------------------|------------------------------|--------------------
    1                       | (A1)                         | -                   | (G1)                         | -
    2                       | (A1, B1)                     | -                   | (G1, H1)                     | -
    3                       | (A1, B1, C1)                 | -                   | (G1, H1, J1)                 | -
    4                       | (A1, B1); (D1, E1)           | -                   | (G1, H1); (K1, L1)           | -
    6                       | (A1, B1); (C1, D1); (E1, F1) | -                   | (G1, H1); (J1, K1); (L1, M1) | -
    7                       | (A1, B1); (C1, D1); (E1, F1) | (A2)                | (G1, H1); (J1, K1); (L1, M1) | (G2)

  • Observe the DIMM mixing rules in the following table.

Table 3. DIMM Mixing Rules

DIMM Parameter | DIMMs in the Same Channel | DIMMs in the Same Bank
---------------|---------------------------|-----------------------
DIMM capacity  | You can mix different capacity DIMMs in the same channel (for example, A1, A2). | You can mix different capacity DIMMs in the same bank (for example, A1, B1, C1). However, for optimal performance, DIMMs in the same bank should have the same capacity.
DIMM speed     | You can mix speeds, but DIMMs will run at the speed of the slowest DIMMs/CPUs installed in the channel. | You can mix speeds, but DIMMs will run at the speed of the slowest DIMMs/CPUs installed in the bank.
DIMM type      | You cannot mix DIMM types in a channel. | You cannot mix DIMM types in a bank.

Memory Mirroring Mode

When you enable memory mirroring mode, the memory subsystem simultaneously writes identical data to two channels. If a memory read from one of the channels returns incorrect data due to an uncorrectable memory error, the system automatically retrieves the data from the other channel. A transient or soft error in one channel does not affect the mirrored data and operation continues.

Memory mirroring reduces the amount of memory available to the operating system by 50 percent because only one of the two populated channels provides data.
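For example, if a server node with 448 GB of installed memory (14 x 32 GB DIMMs) runs in memory mirroring mode, the operating system sees approximately 224 GB.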

Replacing DIMMs

Procedure

Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Do one of the following to access the component inside the server node:

  • If the server node has no I/O expander attached, remove the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander attached, remove the I/O expander and the intermediate cover as described in Disassembling the I/O Expander Assembly.

Step 4

Locate the faulty DIMM and remove it from the socket on the riser by opening the ejector levers at both ends of the DIMM socket.

Step 5

Install a new DIMM:

Note 

Before installing DIMMs, refer to the population guidelines. See DIMM Population Rules.

  1. Align the new DIMM with the socket on the riser. Use the alignment key in the DIMM socket to correctly orient the DIMM.

  2. Push the DIMM into the socket until it is fully seated and the ejector levers on either side of the socket lock into place.

Step 6

Do one of the following:

  • If the server node has no I/O expander, reinstall the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander, reinstall the intermediate cover and I/O expander as described in Reassembling the I/O Expander Assembly.

Step 7

Return the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.

Step 8

Power on the server node.


Replacing Intel Optane DC Persistent Memory Modules

This topic contains information for replacing Intel Optane Data Center Persistent Memory modules (DCPMMs), including population rules. DCPMMs have the same form-factor as DDR4 DIMMs and they install to DIMM slots.


Note

DCPMMs require Second Generation Intel Xeon Scalable processors. You must upgrade the server firmware and BIOS to version 4.0(4) or later and install the supported Second Generation Intel Xeon Scalable processors before installing DCPMMs.



Caution

DCPMMs and their sockets are fragile and must be handled with care to avoid damage during installation.



Note

To ensure the best server performance, it is important that you are familiar with memory performance guidelines and population rules before you install or replace DCPMMs.


In this server, DCPMMs can be configured to operate in one mode at this time:

  • App Direct Mode: The module operates as a solid-state disk storage device. Data is saved and is non-volatile.


    Note

    In this server, App Direct Mode can be used non-interleaved only.


Intel Optane DC Persistent Memory Module Population Rules and Performance Guidelines

This topic describes the rules and guidelines for maximum memory performance when using Intel Optane DC persistent memory modules (DCPMMs) with DDR4 DRAM DIMMs.

DIMM Slot Numbering

The following figure shows the numbering of the DIMM slots on the server motherboard.

Figure 19. DIMM Slot Numbering
Configuration Rules

Observe the following rules and guidelines:

  • To use DCPMMs in this server node, two CPUs must be installed.

  • Intel Optane DC persistent memory modules require Second Generation Intel Xeon Scalable processors. You must upgrade the server firmware and BIOS to version 4.0(4) or later and then install the supported Second Generation Intel Xeon Scalable processors before installing DCPMMs.

  • When using DCPMMs in a server:

    • The DDR4 DIMMs installed in the server must all be the same size.

    • The DCPMMs installed in the server must all be the same size and must have the same SKU.

  • The DCPMMs run at 2666 MHz. If you have 2933 MHz RDIMMs or LRDIMMs in the server and you add DCPMMs, the main memory speed clocks down to 2666 MHz to match the speed of the DCPMMs.

  • Each DCPMM draws 18 W sustained, with a 20 W peak.

  • When App Direct mode is used in this server, it must be non-interleaved.

  • The following figure shows the supported DCPMM configuration for this server. Fill the DIMM slots for CPU 1 and CPU 2 as shown.

Figure 20. Supported DCPMM Configuration For App Direct Mode (Non-Interleaved), Dual-CPU

Installing Intel Optane DC Persistent Memory Modules


Note

DCPMM configuration is always applied to all DCPMMs in a region, including a replacement DCPMM. You cannot provision a specific replacement DCPMM on a preconfigured server.

Understand which mode your DCPMM is operating in. App Direct mode has some additional considerations in this procedure.



Caution

Replacing a DCPMM in App Direct mode requires all data to be wiped from the DCPMM. Make sure to back up or offload data before attempting this procedure.


Procedure

Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Do one of the following to access the component inside the server node:

  • If the server node has no I/O expander attached, remove the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander attached, remove the I/O expander and the intermediate cover as described in Disassembling the I/O Expander Assembly.

Caution 

If you are moving DCPMMs with active data (persistent memory) from one server to another as in an RMA situation, each DCPMM must be installed to the identical position in the new server. Note the positions of each DCPMM or temporarily label them when removing them from the old server.

Step 4

For App Direct mode, back up the existing data stored in all Optane DIMMs to some other storage.

Step 5

For App Direct mode, remove the Persistent Memory policy, which automatically removes goals and namespaces from all Optane DIMMs.

Step 6

Locate the faulty DCPMM and remove it from the socket on the riser by opening the ejector levers at both ends of the DIMM socket.

Step 7

Install a new DCPMM:

Note 

Before installing DCPMMs, refer to the population guidelines. See Intel Optane DC Persistent Memory Module Population Rules and Performance Guidelines.

  1. Align the new DCPMM with the socket on the riser. Use the alignment key in the DIMM socket to correctly orient the DCPMM.

  2. Push the DCPMM into the socket until it is fully seated and the ejector levers on either side of the socket lock into place.

Step 8

Do one of the following:

  • If the server node has no I/O expander, reinstall the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander, reinstall the intermediate cover and I/O expander as described in Reassembling the I/O Expander Assembly.

Step 9

Return the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.

Step 10

Power on the server node.

Step 11

Perform post-installation actions:

Note 

If your Persistent Memory policy is Host Controlled, you must perform the following actions from the OS side.

  • If the existing configuration is fully or partly in App Direct mode and the new DCPMM is also in App Direct mode, ensure that all DCPMMs are at the latest matching firmware level, and re-provision the DCPMMs by creating a new goal.

    • For App Direct mode, reapply the Persistent Memory policy.

    • For App Direct mode, restore all the offloaded data to the DCPMMs.

There are a number of tools for configuring goals, regions, and namespaces. See Server BIOS Setup Utility Menu for DCPMM.
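For host-controlled policies on a Linux host, the open-source ipmctl and ndctl utilities cover these tasks. The following is a minimal sketch under that assumption; the region name and modes shown are illustrative:

# Inventory the installed modules and current regions.
ipmctl show -dimm
ipmctl show -region

# Create a goal for non-interleaved App Direct mode (the only mode
# this server supports); the goal is applied at the next reboot.
ipmctl create -goal PersistentMemoryType=AppDirectNotInterleaved

# After rebooting, create a namespace on a region and verify it.
ndctl create-namespace --mode=fsdax --region=region0
ndctl list -N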


Server BIOS Setup Utility Menu for DCPMM

DCPMMs can be configured by using the server's BIOS Setup Utility, Cisco IMC, Cisco UCS Manager, or OS-related utilities.

The server BIOS Setup Utility includes menus for DCPMMs. They can be used to view or configure DCPMM regions, goals, and namespaces, and to update DCPMM firmware.

To open the BIOS Setup Utility, press F2 when prompted onscreen during a system boot.

The DCPMM menu is on the Advanced tab of the utility:

Advanced > Intel Optane DC Persistent Memory Configuration

From this tab, you can access other menus:

  • DIMMs: Displays the installed DCPMMs. From this page, you can update DCPMM firmware and configure other DCPMM parameters.

    • Monitor health

    • Update firmware

    • Configure security

      You can enable security mode and set a password so that the DCPMM configuration is locked. When you set a password, it applies to all installed DCPMMs. Security mode is disabled by default.

    • Configure data policy

  • Regions: Displays regions and their persistent memory types. When using App Direct mode with interleaving, the number of regions is equal to the number of CPU sockets in the server. When using App Direct mode without interleaving, the number of regions is equal to the number of DCPMMs in the server.

    From the Regions page, you can configure memory goals that tell the DCPMM how to allocate resources.

    • Create goal config

  • Namespaces: Displays namespaces and allows you to create or delete them when persistent memory is used. Namespaces can also be created when creating goals. Namespace provisioning of persistent memory applies only to the selected region.

    Existing namespace attributes such as the size cannot be modified. You can only add or delete namespaces.

  • Total capacity: Displays the total DCPMM resource allocation across the server.

Updating the DCPMM Firmware Using the BIOS Setup Utility

You can update the DCPMM firmware from the BIOS Setup Utility if you know the path to the .bin files. The firmware update is applied to all installed DCPMMs.

  1. Navigate to Advanced > Intel Optane DC Persistent Memory Configuration > DIMMs > Update firmware

  2. Under File:, provide the file path to the .bin file.

  3. Select Update.
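If the host OS is available, you can also verify the firmware on the modules from Linux with the ipmctl utility, assuming it is installed:

# Show active and staged firmware versions for all DCPMMs.
ipmctl show -firmware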

Replacing a Storage Controller

The Cisco storage controller card (RAID or HBA) connects to a mezzanine-style socket on the server board. See Supported Storage Controllers for information about the controllers supported in the S3260 M5 server node.


Note

When using S3260 M5 server nodes in the chassis, the storage controllers are supported in the server nodes only. The controllers are not supported in the I/O expander.



Note

Do not mix different storage controllers in the same system. If the system has two server nodes, they must both contain the same controller.


Procedure


Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Do one of the following to access the component inside the server node:

  • If the server node has no I/O expander attached, remove the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander attached, remove the I/O expander and the intermediate cover as described in Disassembling the I/O Expander Assembly.

Step 4

Remove a storage controller card:

  1. Loosen the captive thumbscrews that secure the card and its bracket to the board.

  2. Grasp the card at both ends and lift it evenly to disengage the connector on the underside of the card from the mezzanine socket.

    Note 

    For the supported Cisco RAID controller card, the two supercap backup units come already attached to the bracket of a new RAID card, so you do not have to remove them separately.

    Figure 21. Screw Locations on Storage Controller (Six)
Step 5

Install a new storage card:

Note 

The storage controllers supported in this S3260 M5 server node require chassis part number 68-5286-06 or later. The chassis motherboard in earlier chassis versions does not support this controller. You can determine the chassis part number by looking on the part-number label on the top-front of the chassis, or by using the inventory-all command, as shown in the following example using the Cisco IMC CLI:

Server# scope chassis
Server/chassis# inventory-refresh
Server/chassis# inventory-all

CHS_FRU(ID1)
Board Mfg             : Cisco Systems Inc
Board Product         : N/A
Board Serial          : FCH20317RVA
Board Part Number     : 73-16125-03
Board Extra           : A12V02
Board Extra           : 0000000000
Product Manufacturer  : Cisco Systems Inc
Product Name          : N/A
Product Part Number   : 68-5286-07
Product Serial        : FCH20317RVA
  1. Align the card over the mezzanine socket on the server board while aligning the six thumbscrews with the standoffs.

  2. Press down on both ends of the card to engage the connector on the underside of the card with the mezzanine socket.

  3. Tighten the six captive screws that secure the card to the board.

Step 6

Do one of the following:

  • If the server node has no I/O expander, reinstall the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander, reinstall the intermediate cover and I/O expander as described in Reassembling the I/O Expander Assembly.

Step 7

Return the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.

Step 8

Power on the server node.

Step 9

Update the controller firmware using the Host Upgrade Utility. This ensures that the firmware version on the card is compatible with the latest system firmware.

See the Cisco Host Upgrade Utility User Guide For S3260 Storage Servers for instructions on updating the firmware.


Replacing a Supercap Unit on a RAID Controller

The dual RAID controller UCS-S3260-DRAID uses two supercap units (RAID backup) that mount to the controller bracket.

Each supercap provides approximately 3 years of backup for the disk write-back cache DRAM in the case of sudden power loss by offloading the cache to the NAND flash.

The PID for the spare supercap is UCSC-SCAP-M5=.

Procedure


Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Do one of the following to access the component inside the server node:

  • If the server node has no I/O expander attached, remove the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander attached, remove the I/O expander and the intermediate cover as described in Disassembling the I/O Expander Assembly.

Step 4

Remove a supercap unit:

  1. Press the securing clip on the bracket toward the center of the board and then lift the supercap from the bracket.

  2. Disconnect the supercap cable from the RAID controller cable.

Figure 22. Supercap Units on RAID Controller

1: Supercap cable connector

2: Supercap release clip

3: Supercap unit in clips on bracket (one on each side)

Step 5

Install a new supercap unit:

  1. Connect the new supercap cable to the RAID controller cable to which the old supercap was connected.

  2. Press the securing clip on the bracket toward the center of the board while you set the supercap in place. Release the securing clip.

Step 6

Do one of the following:

  • If the server node has no I/O expander, reinstall the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander, reinstall the intermediate cover and I/O expander as described in Reassembling the I/O Expander Assembly.

Step 7

Return the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.

Step 8

Power on the server node.


Replacing an NVMe SSD in the Server Node

The optional NVMe SSD sled for the server node can hold up to two NVMe SSDs. The sled might be under a storage controller card, if one is installed in the server node.

To replace an NVMe SSD in the I/O expander, see Replacing an NVMe SSD in the I/O Expander.


Note

At the current time, you can have NVMe SSDs in either the I/O expander or the server node, but not both.

All NVMe SSDs in the system (in the server nodes and/or the I/O expander) must be of the same partner brand. For example, two Intel NVMe SSDs in the server node and two HGST NVMe SSDs in the I/O expander is an invalid configuration because of driver incompatibility.



Note

NVMe SSDs are bootable in UEFI mode; legacy booting is not supported.


Procedure


Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Do one of the following to access the component inside the server node:

  • If the server node has no I/O expander attached, remove the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander attached, remove the I/O expander and the intermediate cover as described in Disassembling the I/O Expander Assembly.

Step 4

Remove the NVMe SSD sled from the server node:

  1. Do one of the following:

    • If there is no storage controller card installed in the server node, continue with the next step.

    • If there is a storage controller card installed in the server node, you must remove it to provide clearance to the sled. Remove the storage controller card as described in Replacing a Storage Controller before you continue with the next step.

  2. Loosen the thumbscrew on the rear panel of the server node until the threads clear the sled.

  3. Loosen the thumbscrew on the top of the sled.

  4. Lift on the ribbons labeled "LIFT HERE" to disengage the two connectors on the underside of the sled from the sockets on the board.

Step 5

Remove the NVMe SSD (or a filler panel) from the sled:

  1. Remove the four screws that secure the SSD to the sled. There are two screws on each side of the SSD.

  2. Lift gently on the rear of the SSD and then pull it free from the connector in the sled.

Figure 23. NVMe SSD Sled in the Server Node

1: Thumbscrew on rear panel of server node

2: Thumbscrew on sled

3: Sled removed from server, showing securing screws for upper NVMe SSD 2 (two on each side of sled)

4: Sled removed from server, showing securing screws for lower NVMe SSD 1 (two on each side of sled)

Step 6

Install a new NVMe SSD to the sled:

  1. Set the SSD in place, then gently push its connector into the connector on the sled. Ensure that the SSD is fully seated and that the screw holes on the sides of the sled align with the screw holes on the SSD.

  2. Install the four securing screws.

Step 7

Install the NVMe SSD sled to the server node:

  1. Gently set the sled in place so that the two connectors on the underside of the sled align with the two sockets on the server node board.

  2. Press down on the ribbon labeled “PRESS HERE TO INSTALL” to fully seat the connectors in the sockets.

  3. Tighten the thumbscrew on the top of the sled.

  4. Tighten the thumbscrew on the rear panel of the server node.

Step 8

If you removed a storage controller to provide clearance, reinstall it to the server node using the procedure in Replacing a Storage Controller.

Step 9

Do one of the following:

  • If the server node has no I/O expander, reinstall the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander, reinstall the intermediate cover and I/O expander as described in Reassembling the I/O Expander Assembly.

Step 10

Return the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.

Step 11

Power on the server node.


Replacing the RTC Battery

The real-time clock (RTC) battery retains system settings when the server is disconnected from power. The battery type is CR2032. Cisco supports the industry-standard CR2032 battery, which can be purchased from Cisco or most electronic stores.


Note

When the RTC battery is removed or it completely loses power, settings that were stored in the BMC of the server node are lost. You must reconfigure the BMC settings after installing a new battery.



Warning

Recyclers: Do not shred the battery! Make sure you dispose of the battery according to appropriate regulations for your country or locale.


Procedure


Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Do one of the following to access the component inside the server node:

  • If the server node has no I/O expander attached, remove the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander attached, remove the I/O expander and the intermediate cover as described in Disassembling the I/O Expander Assembly.

Step 4

Remove the server node RTC battery:

  1. Locate the RTC battery in its vertical socket.

    Figure 24. RTC Battery Socket Location

    1: RTC battery vertical socket

  2. Do one of the following:

    • If there is no storage controller card installed in the server node, continue with the next step.

    • If there is a storage controller card installed in the server node, you must remove it to provide clearance to the battery, which is under the storage controller card bracket. Remove the storage controller card as described in Replacing a Storage Controller before you continue with the next step.

  3. Pull the battery retaining clip away from the battery and pull the battery from the socket.

Step 5

Install a new RTC battery:

  1. Pull the retaining clip away from the battery socket and insert the battery in the socket.

    Note 

    The flat, positive side of the battery marked “+” should face the retaining clip.

  2. Push the battery into the socket until it is fully seated and the retaining clip clicks over the top of the battery.

Step 6

If you removed a storage controller to provide clearance, reinstall it to the server node using the procedure in Replacing a Storage Controller.

Step 7

Do one of the following:

  • If the server node has no I/O expander, reinstall the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander, reinstall the intermediate cover and I/O expander as described in Reassembling the I/O Expander Assembly.

Step 8

Return the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.

Step 9

Power on the server node.

Step 10

Reconfigure the BMC settings for this node.


Installing a Trusted Platform Module (TPM)

The trusted platform module (TPM) is a small circuit board that plugs into a server board socket and is then permanently secured with a one-way screw.

TPM Considerations

  • This server supports either TPM version 1.2 or TPM version 2.0. The TPM 2.0, UCSX-TPM2-002B(=), is compliant with Federal Information Processing Standard (FIPS) 140-2. FIPS support existed previously, but FIPS 140-2 is now supported.

  • Field replacement of a TPM is not supported; you can install a TPM after-factory only if the server does not already have a TPM installed.

  • If there is an existing TPM 1.2 installed in the server, you cannot upgrade to TPM 2.0. If there is no existing TPM in the server, you can install TPM 2.0.

  • If the TPM 2.0 becomes unresponsive, reboot the server.

Installing and Enabling a TPM


Note

Field replacement of a TPM is not supported; you can install a TPM after-factory only if the server does not already have a TPM installed.

This topic contains the following procedures, which must be followed in this order when installing and enabling a TPM:

  1. Installing the TPM Hardware

  2. Enabling the TPM in the BIOS

  3. Enabling the Intel TXT Feature in the BIOS

Installing TPM Hardware

Note

For security purposes, the TPM is installed with a one-way screw. It cannot be removed with a standard screwdriver.
Procedure

Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Do one of the following to access the component inside the server node:

  • If the server node has no I/O expander attached, remove the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander attached, remove the I/O expander and the intermediate cover as described in Disassembling the I/O Expander Assembly.

Step 4

Install a TPM:

  1. Locate the TPM socket on the server board, as shown below.

  2. Do one of the following:

    • If there is no storage controller card installed in the server node, continue with the next step.

    • If there is a storage controller card installed in the server node, you must remove it to provide clearance to the TPM socket, which is under the storage controller card bracket. Remove the storage controller card as described in Replacing a Storage Controller before you continue with the next step.

  3. Align the connector that is on the bottom of the TPM circuit board with the server board TPM socket. Align the screw hole on the TPM board with the screw hole that is adjacent to the TPM socket.

  4. Push down evenly on the TPM to seat it in the socket.

  5. Install the single one-way screw that secures the TPM to the motherboard.

Step 5

If you removed a storage controller to provide clearance, reinstall it to the server node using the procedure in Replacing a Storage Controller.

Step 6

Do one of the following:

  • If the server node has no I/O expander, reinstall the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander, reinstall the intermediate cover and I/O expander as described in Reassembling the I/O Expander Assembly.

Step 7

Return the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lie flat and their latches lock into the rear of the server node.

Step 8

Power on the server node.

Figure 25. Location of the TPM Socket

1: TPM socket location on server board

Step 9

Continue with Enabling the TPM in the BIOS.


Enabling the TPM in the BIOS

After hardware installation, you must enable TPM support in the BIOS.


Note

You must set a BIOS Administrator password before performing this procedure. To set this password, press the F2 key when prompted during system boot to enter the BIOS Setup utility. Then navigate to Advanced > Security > Set Administrator Password and enter the new password twice as prompted.


Procedure

Step 1

Enable TPM Support:

  1. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.

  2. Log in to the BIOS Setup Utility with your BIOS Administrator password.

  3. On the BIOS Setup Utility window, choose the Advanced tab.

  4. Choose Trusted Computing to open the TPM Security Device Configuration window.

  5. Change TPM SUPPORT to Enabled.

  6. Press F10 to save your settings and reboot the server.

Step 2

Verify that TPM support is now enabled (an optional OS-level cross-check is sketched after this list):

  1. Watch during bootup for the F2 prompt, and then press F2 to enter BIOS setup.

  2. Log in to the BIOS Setup utility with your BIOS Administrator password.

  3. Choose the Advanced tab.

  4. Choose Trusted Computing to open the TPM Security Device Configuration window.

  5. Verify that TPM SUPPORT and TPM State are Enabled.
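
After the server boots an operating system, you can optionally cross-check that the TPM is visible to the OS. The following is a minimal sketch for a Linux host; it uses the generic Linux /sys/class/tpm interface (a standard kernel facility, not a Cisco-specific tool):

    # Minimal sketch: confirm a TPM is visible to a booted Linux OS.
    # Assumes a Linux kernel with TPM support; the paths are the
    # generic /sys/class/tpm interface, not Cisco-specific.
    from pathlib import Path

    def tpm_devices():
        """Return the TPM device nodes that the kernel has registered."""
        base = Path("/sys/class/tpm")
        return sorted(base.glob("tpm*")) if base.exists() else []

    devices = tpm_devices()
    if not devices:
        print("No TPM visible to the OS -- recheck TPM SUPPORT in the BIOS.")
    for dev in devices:
        # The description file is present on some platforms only,
        # so fall back to the device name if it is missing.
        desc = dev / "device" / "description"
        label = desc.read_text().strip() if desc.exists() else dev.name
        print(f"{dev.name}: {label}")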

Step 3

Continue with Enabling the Intel TXT Feature in the BIOS.


Enabling the Intel TXT Feature in the BIOS

Intel Trusted Execution Technology (TXT) provides greater protection for information that is used and stored on the business server. A key aspect of that protection is the provision of an isolated execution environment and associated sections of memory where operations can be conducted on sensitive data, invisibly to the rest of the system. Intel TXT provides for a sealed portion of storage where sensitive data such as encryption keys can be kept, helping to shield them from being compromised during an attack by malicious code.

Procedure

Step 1

Reboot the server and watch for the prompt to press F2.

Step 2

When prompted, press F2 to enter the BIOS Setup utility.

Step 3

Verify that the prerequisite BIOS values are enabled:

  1. Choose the Advanced tab.

  2. Choose Intel TXT(LT-SX) Configuration to open the Intel TXT(LT-SX) Hardware Support window.

  3. Verify that the following items are listed as Enabled:

    • VT-d Support (default is Enabled)

    • VT Support (default is Enabled)

    • TPM Support

    • TPM State

  4. Do one of the following:

    • If VT-d Support and VT Support are already enabled, skip to Step 4.

    • If VT-d Support and VT Support are not enabled, continue with the next steps to enable them.

  5. Press Escape to return to the BIOS Setup utility Advanced tab.

  6. On the Advanced tab, choose Processor Configuration to open the Processor Configuration window.

  7. Set Intel (R) VT and Intel (R) VT-d to Enabled.

Step 4

Enable the Intel Trusted Execution Technology (TXT) feature:

  1. Return to the Intel TXT(LT-SX) Hardware Support window if you are not already there.

  2. Set TXT Support to Enabled.

Step 5

Press F10 to save your changes and exit the BIOS Setup utility.
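
After the server reboots, you can optionally confirm from a booted Linux OS that the virtualization and TXT-related CPU flags are exposed. This is a minimal sketch; the vmx and smx flag names come from the generic Linux /proc/cpuinfo interface and are Intel designations, not Cisco-specific values:

    # Minimal sketch: check that the VT (vmx) and TXT safer-mode (smx)
    # CPU flags are exposed after enabling the BIOS settings above.
    def cpu_flags():
        """Return the feature-flag set for the first CPU listed."""
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    for name, meaning in (("vmx", "Intel VT"),
                          ("smx", "Intel TXT safer-mode extensions")):
        state = "present" if name in flags else "MISSING"
        print(f"{name} ({meaning}): {state}")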


Replacing an I/O Expander

The server node with optional I/O expander is accessed from the rear of the system, so you do not have to pull the system out from the rack.


Note

You do not have to power off the chassis in this procedure. Replacement with the chassis powered on is supported if you shut down the server node before removal.


Procedure


Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node with attached I/O expander from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Remove the I/O expander top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

Step 4

Remove the I/O expander from the server node:

  1. Remove the five screws that secure the I/O expander to the top of the server node.

    Figure 26. I/O Expander Securing Screws (Five)
  2. Use two small flat-head screwdrivers (1/4-inch or equivalent) to help separate the connector on the underside of the I/O expander from the socket on the server node board.

    Insert a screwdriver about ½ inch into the “REMOVAL SLOT” that is marked with an arrow on each side of the I/O expander. Then lift up evenly on both screwdrivers at the same time to separate the connectors and lift the I/O expander about ½ inch.

  3. Grasp the two handles on the I/O expander board and lift it straight up.

Figure 27. Separating the I/O Expander From the Server Node

1

Side view showing REMOVAL SLOT for screwdriver insertion (one on each side of the I/O expander)

2

Rear view of server node with I/O expander

Step 5

Install a new I/O expander to the server node:

Caution 

Use caution to align all features of the I/O expander with the intermediate cover and server node before mating the connector on the underside of the expander with the socket on the server board. The connector can be damaged if alignment is not correct.

  1. Carefully align the I/O expander with the alignment pegs on the top of the intermediate cover.

  2. Set the I/O expander down on the intermediate cover and lower it gently to mate the mezzanine connector and the socket on the server board.

    Figure 28. Reinstalling the I/O Expander to the Server Node

    1

    Intermediate cover on server node

    3

    I/O expander mezzanine connector

    2

    Intermediate cover screws (two on each side of cover)

    4

    Alignment pegs on intermediate cover

  3. If an NVMe SSD is present in the right-hand socket (IOENVMe2) of the I/O expander, you must remove it to access the PRESS HERE plate in the next step.

    See Replacing an NVMe SSD in the I/O Expander to remove it, then return to the next step.

  4. Press down firmly on the plastic plate marked “PRESS HERE” to fully seat the connector to the server node board.

    Figure 29. I/O Expander, Showing PRESS HERE Plate
  5. If you removed an NVMe SSD to access the PRESS HERE plate, reinstall it now.

    See Replacing an NVMe SSD in the I/O Expander.

Caution 

Before you reinstall the I/O expander securing screws, you must use the alignment tool (UCSC-C3K-M4IOTOOL) in the next step to ensure alignment of the connectors that plug into the internal chassis backplane. Failure to ensure alignment might damage the sockets on the backplane.

Step 6

Insert the four pegs of the alignment tool into the holes that are built into the connector side of the server node and I/O expander. Ensure that the alignment tool fits into all four holes and lies flat.

The alignment tool is shipped with systems that are ordered with an I/O expander. It is also shipped with I/O expander replacement spares. You can order the tool using Cisco PID UCSC-C3K-M4IOTOOL.

Figure 30. Using the I/O Expander Alignment Tool

1

Alignment Tool

3

Alignment tool installed

2

Connector side of server node and I/O expander

-

Step 7

Reinstall and tighten the five screws that secure the I/O expander to the top of the server node.

Step 8

Remove the alignment tool.

Step 9

Reinstall the top cover to the I/O expander:

  1. Set the cover in place on the I/O expander, offset about one inch toward the rear. Pegs on the inside of the cover must engage the tracks on the I/O expander base.

  2. Push the cover forward until it stops.

  3. Turn the latch handle 90 degrees to close the lock.

  4. Fold the latch handle flat.

Step 10

Reinstall the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lay flat and their latches lock into the rear of the server node.

Step 11

Power on the server node.

Step 12

Update the I/O expander (IOE) firmware by using the Host Upgrade Utility to ensure that the firmware version on the IOE is compatible with the system firmware.

See the Cisco Host Upgrade Utility User Guide for S3260 Storage Servers for instructions on updating the firmware.


Adding an I/O Expander After-Factory


Note

This procedure is for adding an I/O expander to an S3260 M5 server node. If you are replacing an existing I/O expander, see Replacing an I/O Expander.


Tools Required

This procedure requires:

  • I/O expander alignment tool UCSC-C3K-M4IOTOOL. This tool is included with the I/O expander spare.

  • I/O expander kit UCS-S3260-IOLID. This kit includes:

    • One intermediate cover for the server node, including four cover screws

    • One threaded support post, with screw

Note

When an I/O expander is installed, the server node must occupy lower server bay 1. The I/O expander occupies upper server bay 2.


Procedure


Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove any server node (or disk expansion tray) from upper server bay 2 and set it on an antistatic work surface.

Step 3

Remove any server node (or disk expansion tray) from lower server bay 1 and set it on an antistatic work surface.

Step 4

Remove the top cover from the server node to which you will install an I/O expander:

  1. Lift the latch handle to an upright position, and then turn the latch handle 90 degrees to release the lock.

  2. Slide the cover toward the rear (toward the rear-panel buttons) and then lift it from the server node.

Figure 31. Top View, Server Node Top Cover

1

Top cover on server node

2

Cover latch handle (shown in closed, flat position)

Step 5

If there is a storage controller card installed in the server node, remove it to provide clearance in the next step. (If there is no storage controller card, skip to the next step.)

  1. Loosen the captive thumbscrews that secure the controller card to the board.

  2. Grasp the card at both ends and lift it evenly to disengage the connector on the underside of the card from the mezzanine socket.

    Figure 32. Screw Locations on Storage Controller (Six)
Step 6

Move the cover latch assembly from the controller-card bracket in the server node to a bracket in the I/O expander. You will install the top cover to the I/O expander at the end of this procedure, so the I/O expander requires the cover latch.

Remove the two Phillips-head screws that secure the cover latch assembly to the controller card bracket in the server node. See the following figure.

  • If your I/O expander has a storage controller card, use the two Phillips-head screws to install the cover latch assembly to the bracket of that controller card.

  • If your I/O expander does not have a storage controller card, use the two Phillips-head screws to install the cover latch assembly to the bracket on the I/O expander board. See the following figure.

Figure 33. Cover Latch Assembly Screws in Server Node and I/O Expander

1

Two securing screws on cover latch assembly in server node (shown installed on controller card bracket)

2

Two securing screws on cover latch assembly in I/O expander (shown installed to bracket on I/O expander board)

Step 7

Remove one Phillips-head screw from the server node board. See the following figure for the screw location.

Step 8

Install the threaded metal support post from the kit, using one screw.

Set the post against the edge of the server board where you removed the screw in the prior step. The flange with the screw hole must sit flat on top of the board. See the following figure.

Figure 34. Support Post and Screw, S3260 M5 Server Node

1

Phillips-head screw location (remove this screw)

3

Support post screw

2

Side view of support post, showing installation position on edge of server board

-

Step 9

If you removed a storage controller card, install it back to the server node now:

  1. Align the card over the mezzanine socket and the standoffs.

  2. Press down on both ends of the card to engage the connector on the underside of the card with the mezzanine socket.

  3. Tighten the captive screws that secure the card to the board.

Step 10

Install the intermediate cover from the kit to the server node. Set the intermediate cover in place and then install its four securing screws (two on each side).

Step 11

Install the I/O expander to the server node:

Caution 

Use caution to align all features of the I/O expander with the server node before mating the connector on the underside of the expander with the socket on the server board. The connector can be damaged if alignment is not correct.

  1. Carefully align the I/O expander with the alignment pegs on the top of the intermediate cover.

  2. Set the I/O expander down on the server node intermediate cover and push down gently to mate the connectors.

    Figure 35. Reinstalling the I/O Expander to the Server Node

    1

    Intermediate cover on server node

    3

    I/O expander mezzanine connector

    2

    Intermediate cover screws (two on each side of cover)

    4

    Alignment pegs on intermediate cover

  3. If an NVMe SSD is present in the right-hand socket (IOENVMe2) of the I/O expander, you must remove it to access the PRESS HERE plate in the next step.

    See Replacing an NVMe SSD in the I/O Expander to remove it, then return to the next step.

  4. Press down firmly on the plastic plate marked “PRESS HERE” to fully seat the connectors.

    Figure 36. I/O Expander, Showing PRESS HERE Plate
  5. If you removed an NVMe SSD to access the PRESS HERE plate, reinstall the SSD now.

    Caution 

    Before you install the I/O expander securing screws, you must use the supplied alignment tool (UCSC-C3K-M4IOTOOL) in the next step to ensure alignment of the connectors that plug into the internal chassis backplane. Failure to ensure alignment might damage the sockets on the backplane.

Step 12

Insert the four pegs of the alignment tool into the holes that are built into the connector side of the server node and I/O expander. Ensure that the alignment tool fits into all four holes and lies flat.

The alignment tool is shipped with I/O expander spares. You can also order the tool using Cisco PID UCSC-C3K-M4IOTOOL.

Figure 37. Using the I/O Expander Alignment Tool

1

Alignment Tool

3

Alignment tool installed

2

Connector side of server node and I/O expander

-

Step 13

Install and tighten the five screws that secure the I/O expander to the top of the server node. The screw at the center of the board edge screws into the support post that you installed earlier in this procedure.

Figure 38. I/O Expander Securing Screws (Five)
Step 14

Remove the alignment tool.

Step 15

Install the top cover that you removed from the server node onto the I/O expander:

  1. Set the cover in place on the I/O expander, offset about one inch toward the rear. Pegs on the inside of the cover must engage the tracks on the I/O expander base.

  2. Push the cover forward until it stops.

  3. Turn the latch handle 90 degrees to close the lock and then fold the latch handle flat.

Step 16

Install the server node with attached I/O expander to the chassis:

  1. With the two ejector levers open, align the server node and I/O expander with the two empty bays.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lay flat and their latches lock into the rear of the server node.

Step 17

Power on the server node.

Step 18

Update the I/O expander (IOE) firmware by using the Host Upgrade Utility to ensure that the firmware version on the IOE is compatible with the system firmware.

See the Cisco Host Upgrade Utility User Guide for S3260 Storage Servers for instructions on updating the firmware.


Replacing a PCIe Card in the I/O Expander

The optional I/O expander has two horizontal PCIe sockets.

PCIe Slot Specifications

The following table describes the specifications for the PCIe slots.

Table 4. PCIe Expansion Slots in the I/O Expander

Slot Number  | Electrical Lane Width | Connector Length | Maximum Card Length | Card Height (Rear Panel Opening) | NCSI Support
1 (IOESlot1) | Gen-3 x8              | x16 connector    | ¾ length            | ½ height                         | No
2 (IOESlot2) | Gen-3 x8              | x16 connector    | ¾ length            | ½ height                         | No
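
For planning purposes, the slot constraints in Table 4 can be captured in a small lookup structure. The following is an illustrative sketch only; the field values are transcribed from the table, while the Slot type and the card_fits() helper are hypothetical, not a Cisco tool:

    # Illustrative sketch: encode the Table 4 slot constraints and
    # check whether a candidate card fits. Values are transcribed
    # from Table 4; the Slot type and card_fits() are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Slot:
        name: str
        electrical_lanes: int  # Gen-3 electrical width (x8)
        connector_lanes: int   # physical connector width (x16)
        ncsi: bool

    SLOTS = (
        Slot("1 (IOESlot1)", 8, 16, False),
        Slot("2 (IOESlot2)", 8, 16, False),
    )

    def card_fits(slot, card_lanes, half_height, three_quarter_length_or_less):
        """A card fits if it is half-height, 3/4 length or shorter, and
        its edge connector is no wider than the physical x16 connector."""
        return (half_height and three_quarter_length_or_less
                and card_lanes <= slot.connector_lanes)

    for slot in SLOTS:
        print(slot.name, "accepts a half-height, 3/4-length x8 card:",
              card_fits(slot, 8, True, True))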

Replacing a PCIe Card

Procedure

Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Remove the I/O expander top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

Step 4

Remove an existing PCIe card (or a filler panel if no card is present):

  1. Release the card-tab retainer. On the inside of the I/O expander, pull the spring-loaded plunger on the card-tab retainer inward and then rotate the card-tab retainer 90 degrees to the open position.

  2. Slide the PCIe card horizontally to free its edge connector from the socket, and then lift the card out from the I/O expander.

Figure 39. PCIe Card Sockets in the I/O Expander

1

PCIe card sockets

3

Card-tab retainer (outside the I/O expander)

2

Spring-loaded plunger on card-tab retainer (inside the I/O expander)

Step 5

Install a new PCIe card:

  1. With the card-tab retainer in the open position, set the card in the I/O expander and align its edge connector with the socket.

  2. Slide the card horizontally to fully engage the edge connector with the socket. The card’s tab should sit flat against the rear-panel opening.

  3. Close the card-tab retainer. Rotate the retainer 90 degrees until it clicks and locks.

Step 6

Replace the I/O expander top cover.

  1. Set the cover in place on the I/O expander, offset about one inch toward the rear. Pegs on the inside of the cover must engage the tracks on the I/O expander base.

  2. Push the cover forward until it stops.

  3. Turn the latch handle 90 degrees to close the lock.

  4. Fold the latch handle flat.

  5. Reinstall the four screws that secure the top cover to the I/O expander.

Step 7

Return the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lay flat and their latches lock into the rear of the server node.

Step 8

Power on the server node.
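
After the server node powers on and boots an operating system, you can optionally confirm that the replacement card enumerates. A minimal sketch for a Linux host follows; lspci is the standard pciutils command, and the search string is an example placeholder, not a Cisco identifier:

    # Minimal sketch: list PCIe devices matching an example search
    # string to confirm the replacement card enumerates. Assumes a
    # Linux host with pciutils (lspci) installed.
    import subprocess

    def pci_lines():
        out = subprocess.run(["lspci"], capture_output=True, text=True, check=True)
        return out.stdout.splitlines()

    SEARCH = "Ethernet"  # example placeholder: substitute text matching your card
    matches = [line for line in pci_lines() if SEARCH in line]
    print(f"{len(matches)} PCIe device(s) matching {SEARCH!r}:")
    for line in matches:
        print(" ", line)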


Replacing an NVMe SSD in the I/O Expander

The I/O expander has two sockets for NVMe SSDs. The software designations of the NVMe PCIe SSD sockets inside the I/O expander are IOENVMe1 and IOENVMe2.


Note

At the current time, you can have NVMe SSDs in either the I/O expander or the server node, but not both.

All NVMe SSDs in the system (in the server nodes and/or the I/O expander) must be of the same partner brand. For example, two Intel NVMe SSDs in the server node and two HGST NVMe SSDs in the I/O expander is an invalid configuration because of driver incompatibility.



Note

NVMe SSDs are bootable in UEFI mode; legacy booting is not supported.


Procedure


Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Remove the I/O expander top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

Step 4

Remove an NVMe SSD:

  1. Remove the single screw that secures the drive to its bracket.

  2. Slide the drive horizontally to disengage it from its socket, then lift it from the I/O expander.

Figure 40. NVMe SSDs in the I/O Expander

1

Single securing screw (one for each SSD)

2

NVMe PCIe SSD sockets IOENVMe1 and IOENVMe2

Step 5

Install a new NVMe SSD:

  1. Set the drive in its bracket, then slide it forward to engage its connector with the socket.

  2. Install the single screw that secures the drive to the bracket.

Step 6

Reinstall the I/O expander top cover.

Step 7

Reinstall the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lay flat and their latches lock into the rear of the server node.

Step 8

Power on the server node.
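
After power-on, you can optionally confirm from a booted Linux OS that the replacement SSD enumerates and that the same-partner-brand rule above is not violated. This is a minimal sketch using the generic Linux /sys/class/nvme interface; treating the first word of the model string as the brand is a heuristic assumption, not a Cisco rule:

    # Minimal sketch: list NVMe controllers and flag mixed brands.
    # Uses the generic Linux /sys/class/nvme interface; the first
    # token of the model string is used as a brand heuristic.
    from pathlib import Path

    base = Path("/sys/class/nvme")
    controllers = sorted(base.glob("nvme*")) if base.exists() else []

    models = {}
    for ctrl in controllers:
        model_file = ctrl / "model"
        if model_file.exists():
            models[ctrl.name] = model_file.read_text().strip()

    for name, model in models.items():
        print(f"{name}: {model}")

    brands = {m.split()[0] for m in models.values() if m.split()}
    if len(brands) > 1:
        print("WARNING: mixed NVMe brands detected:", ", ".join(sorted(brands)))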


Service Headers on the Server Node Board

The server node board includes headers that you can jumper for certain service functions.

Service Header Locations

There are two 2-pin service headers on the server node board that are supported for use.

  • Header P10 = Password reset

  • Header P11 = Clear CMOS

Figure 41. Service Headers on the Server Node Board

1

Header P10 = Password reset

2

Header P11 = Clear CMOS

Using the Password Reset Header P10

You can use a jumper on header P10 to clear the administrator password.

Procedure


Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Do one of the following to access the component inside the server node:

  • If the server node does not have an I/O expander attached, remove the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander attached, remove the I/O expander and the intermediate cover as described in Removing an I/O Expander From a Server Node.

Step 4

Locate header P10 and install a jumper across pins 1 and 2.

Step 5

Return the server node to the chassis (do not reinstall an I/O expander to the server node for this reboot):

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lay flat and their latches lock into the rear of the server node.

Step 6

Power on the server node.

Step 7

After the server node has booted fully, shut it down again.

Step 8

Remove the server node from the system.

Step 9

Remove the jumper from the header pins.

Note 

If you do not remove the jumper, the Cisco IMC resets the password each time that you boot the server node.

Step 10

Do one of the following:

  • If you removed the server node top cover, reinstall it as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If you removed an I/O expander, reinstall it as described in Replacing an I/O Expander.

Step 11

Return the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lay flat and their latches lock into the rear of the server node.

Step 12

Power on the server node.


Using the Clear CMOS Header P11

You can use a jumper on header P11 to clear the CMOS settings.

Procedure


Step 1

Shut down the server node by using the software interface or by pressing the node power button, as described in Shutting Down an S3260 M5 Server Node.

Step 2

Remove the server node (with attached I/O expander, if present) from the system:

  1. Grasp the two ejector levers and pinch their latches to release the levers.

  2. Rotate both levers to the outside at the same time to evenly disengage the server node from its midplane connectors.

  3. Pull the server node straight out from the system.

Step 3

Do one of the following to access the component inside the server node:

  • If the server node does not have an I/O expander attached, remove the server node top cover as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If the server node has an I/O expander attached, remove the I/O expander and the intermediate cover as described in Removing an I/O Expander From a Server Node.

Step 4

Locate header P11 and install a jumper across pins 1 and 2.

Step 5

Return the server node to the chassis (do not reinstall an I/O expander to the server node for this reboot):

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lay flat and their latches lock into the rear of the server node.

Step 6

Power on the server node.

Step 7

After the server node has booted fully, shut it down again.

Step 8

Remove the server node from the system.

Step 9

Remove the jumper from the header pins.

Note 

If you do not remove the jumper, the Cisco IMC clears the settings each time that you boot the server node.

Step 10

Do one of the following:

  • If you removed the server node top cover, reinstall it as described in Removing an S3260 M5 Server Node or I/O Expander Top Cover.

  • If you removed an I/O expander, reinstall it as described in Replacing an I/O Expander.

Step 11

Return the server node to the chassis:

  1. With the two ejector levers open, align the server node with the empty bay.

  2. Push the server node into the bay until it engages with the midplane connectors and is flush with the chassis.

  3. Rotate both ejector levers toward the center until they lay flat and their latches lock into the rear of the server node.

Step 12

Power on the server node.