
Cisco UCS B-Series Blade Servers

Cisco UCS B420 M3 High Performance Blade Server Installation and Service Note


Cisco UCS B420 M3 High Performance Blade Server

This document describes how to install and service the Cisco UCS B420 M3 High Performance Blade Server, a full-width blade server. Up to four of these high-density, four-socket blade servers can reside in a Cisco UCS 5108 Blade Server chassis.

The B420 M3 Blade Server has the following features:

  • Up to four CPUs from the Intel Xeon E5-4600 processor family, with up to 32 cores per server

  • 48 DIMM slots for registered ECC DIMMs, with up to 1.5-TB memory capacity (using 32-GB LRDIMMs)

  • 3 adapter connectors for up to 160-Gb/s bandwidth:

    • One dedicated connector for the Cisco VIC 1240 modular LAN-on-motherboard (mLOM)

    • Two connectors for the Cisco VIC 1280, VIC Port Expander, or third-party network adapter cards

  • Four hot-plug drive bays that support SAS or SATA SSD drives

  • LSI 2208R controller that provides RAID 0, 1, 5, and 10 with an optional 1-GB flash-backed write cache

Figure 1. B420 M3 Blade Server Front Panel

1 - Drive bay 1
2 - Drive bay 2
3 - Drive bay 3
4 - Drive bay 4
5 - Left ejector handle
6 - Asset tag (each server has a blank plastic tag that pulls out of the front panel so you can add your own asset tracking label without interfering with the intended air flow)
7 - Right ejector handle
8 - Power button and LED
9 - Network link status LED
10 - Blade health LED
11 - Local console connection
12 - Reset button access
13 - Beaconing button and LED

LEDs

Server LEDs indicate whether the blade server is in active or standby mode, the status of the network link, the overall health of the blade server, and whether the server is set to give a flashing blue beaconing indication.

The removable drives also have LEDs indicating hard disk access activity and disk health.

Table 1 Blade Server LEDs

Power
  Off - Power off.
  Green - Normal operation.
  Amber - Standby.

Link
  Off - None of the network links are up.
  Green - At least one network link is up.

Health
  Off - Power off.
  Green - Normal operation.
  Amber - Minor error.
  Blinking amber - Critical error.

Beaconing
  Off - Beaconing not enabled.
  Blinking blue at 1 Hz - Beaconing to locate a selected blade. If the LED is not blinking, the blade is not selected. You can initiate beaconing in UCS Manager or by using the Locator button.

Activity (disk drive)
  Off - Inactive.
  Green - Outstanding I/O to the disk drive.
  Flashing amber at 4 Hz - Rebuild in progress (the Health LED flashes in unison), or drive identification is active.

Health (disk drive)
  Off - Either no fault is detected or the drive is not installed.
  Flashing amber at 4 Hz - Rebuild active. If the Activity LED is also flashing amber, a drive rebuild is in progress.
  Amber - Fault detected.
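The LED states above amount to a simple lookup. The following Python sketch (illustrative only, not Cisco software; the function name is ours) encodes the blade-level LED meanings from Table 1:

```python
# Illustrative lookup of the blade-level LED meanings from Table 1.
# Keys are (led, state), both lowercase.
BLADE_LEDS = {
    ("power", "off"): "Power off",
    ("power", "green"): "Normal operation",
    ("power", "amber"): "Standby",
    ("link", "off"): "None of the network links are up",
    ("link", "green"): "At least one network link is up",
    ("health", "off"): "Power off",
    ("health", "green"): "Normal operation",
    ("health", "amber"): "Minor error",
    ("health", "blinking amber"): "Critical error",
    ("beaconing", "off"): "Beaconing not enabled",
    ("beaconing", "blinking blue 1 hz"): "Beaconing to locate a selected blade",
}

def led_meaning(led: str, state: str) -> str:
    """Return the documented meaning of an LED state (case-insensitive)."""
    try:
        return BLADE_LEDS[(led.lower(), state.lower())]
    except KeyError:
        raise ValueError(f"Unknown LED/state: {led}/{state}") from None
```

For example, `led_meaning("Health", "Blinking Amber")` returns the critical-error meaning.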

Buttons

The Reset button is just inside the chassis and must be pressed using the tip of a paper clip or a similar item. Hold the button down for five seconds, and then release it to restart the server if other methods of restarting are not working.

The beaconing function for an individual server can be turned on or off by pressing the combined beaconing button and LED.

The power button and LED allows you to manually take a server temporarily out of service while leaving it in a state from which it can be restarted quickly. If the desired power state for a service profile associated with a blade server or an integrated rack-mount server is set to "off," using the power button or Cisco UCS Manager to reset the server causes the desired power state of the server to become out of sync with the actual power state, and the server may shut down unexpectedly at a later time. To safely reboot a server from a power-down state, use the Boot Server action in Cisco UCS Manager.

Connectors

The console port allows a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port uses the KVM dongle cable (N20-BKVM) which provides a connection into a Cisco UCS blade server; it has a DB9 serial connector, a VGA connector for a monitor, and dual USB ports for a keyboard and mouse. With this cable, you can create a direct connection to the operating system and the BIOS running on a blade server. A KVM cable ships standard with each blade chassis accessory kit.

Figure 2. KVM Cable for Blade Servers

1 - Connector to blade server slot
2 - DB9 serial connector
3 - VGA connection for a monitor
4 - 2-port USB connector for a mouse and keyboard

Hard Drive Replacement

Each blade has up to four front-accessible, hot-pluggable, 2.5-inch SAS or SATA drive bays. Unused hard drive bays should always be covered with cover plates (N20-BBLKD) to ensure proper cooling and ventilation.

You can remove and install blade server hard drives without removing the blade server from the chassis.

The drives supported in this blade server come with the drive sled attached. Spare drive sleds are not available. A list of currently supported drives is in the specification sheets at: http://www.cisco.com/en/US/products/ps10280/products_data_sheets_list.html

Before upgrading or adding an HDD to a running system, check the service profile in Cisco UCS Manager and make sure the new hardware configuration will be within the parameters allowed by the service profile.


Caution


To prevent ESD damage, wear grounding wrist straps during these procedures and handle modules by the carrier edges only.


RAID Considerations

Each blade contains an LSI SAS 2208R RAID controller embedded in the motherboard that is not separately replaceable. The controller has 6 Gbps SAS connectivity and supports RAID 0/1/5/10.

If the drive being replaced was part of a RAID array, Cisco recommends using a new drive of identical size, model, and manufacturer to replace the failed drive. This recommendation comes from the industry standard practice of using drives of the same capacity when creating RAID volumes. If drives of different capacities are used, only the usable portion of the smallest drive is used on each drive that makes up the RAID volume.
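The capacity rule above can be sketched as a quick calculation. This illustrative Python helper (not a Cisco tool; the function name is ours) estimates usable capacity for the RAID levels the LSI 2208R supports, with every member limited to the smallest drive's size:

```python
# Illustrative sketch: usable capacity of a RAID volume on a controller
# supporting RAID 0, 1, 5, and 10. When drive sizes are mixed, every
# member contributes only the capacity of the smallest drive.
def raid_usable_gb(level: int, drive_sizes_gb: list) -> float:
    """Return the approximate usable capacity in GB for a RAID volume."""
    n = len(drive_sizes_gb)
    member = min(drive_sizes_gb)  # smallest drive bounds every member
    if level == 0:
        return n * member                 # striping, no redundancy
    if level == 1:
        if n != 2:
            raise ValueError("RAID 1 uses exactly two drives")
        return member                     # full mirror
    if level == 5:
        if n < 3:
            raise ValueError("RAID 5 needs at least three drives")
        return (n - 1) * member           # one drive's worth of parity
    if level == 10:
        if n < 4 or n % 2:
            raise ValueError("RAID 10 needs an even count of four or more drives")
        return (n // 2) * member          # mirrored pairs, then striped
    raise ValueError(f"RAID {level} is not supported by this controller")
```

For example, a RAID 5 set built from 300-GB, 300-GB, and 600-GB drives yields only 2 x 300 = 600 GB usable.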

For hard disk and RAID troubleshooting information, see the Cisco UCS Manager B-Series Troubleshooting Guide.

Removing a Blade Server Hard Drive

To remove a hard drive from a blade server, follow these steps:

Procedure
    Step 1   Push the button to release the ejector, and then pull the hard drive from its slot.
    Step 2   Place the hard drive on an antistatic mat or antistatic foam if you are not immediately reinstalling it in another server.
    Step 3   Install a hard disk drive blank faceplate to keep dust out of the blade server if the slot will remain empty.
    Figure 3. Removing and Installing a Drive


    Installing a Blade Server Hard Drive

    Procedure
      Step 1   Place the hard drive lever into the open position by pushing the release button.
      Figure 4. Installing a Hard Drive in a Blade Server

      Step 2   Gently slide the hard drive into the opening in the blade server until it seats into place.
      Step 3   Push the hard drive lever into the closed position.

      You can use Cisco UCS Manager to format and configure RAID services. For details, see the Configuration Guide for the version of Cisco UCS Manager that you are using. The configuration guides are available at the following URL: http://www.cisco.com/en/US/products/ps10281/products_installation_and_configuration_guides_list.html

      If you need to move a RAID cluster, see the Cisco UCS Manager B-Series Troubleshooting Guide.


      Blade Server Removal and Installation

      Before performing any internal operations on this blade server, you must remove it from the chassis.


      Caution


      To prevent ESD damage, wear grounding wrist straps during these procedures and handle modules by the carrier edges only.


      Powering Off Blade Servers Using the Power Button


      Tip


      You can also shut the server down remotely using Cisco UCS Manager. For details, see the Configuration Guide for the version of Cisco UCS Manager that you are using. The configuration guides are available at the following URL: http://www.cisco.com/en/US/products/ps10281/products_installation_and_configuration_guides_list.html


      Procedure
        Step 1   For each server in the chassis that you want to power off, check the color of the Power Status LED.
        • Green indicates that the server is running and must be shut down before it can be safely powered off. Go to Step 2.

        • Amber indicates that the server is already in standby mode and can be safely powered off. Go to Step 3.

        Step 2   Press and release the Power button, then wait until the Power Status LED changes to amber.

        The operating system performs a graceful shutdown and the server goes to standby mode.

        Caution   

        To avoid data loss or damage to your operating system, you should always invoke a graceful shutdown of the operating system.

        Step 3   (Optional) If you are shutting down all blade servers in a chassis, disconnect the power cords from the chassis to completely power off the servers.
        Step 4   Remove the appropriate servers from the chassis.

        Removing a Blade Server

        You must decommission the server using Cisco UCS Manager before physically removing the blade server.

        Procedure
          Step 1   Turn off the blade server using either Cisco UCS Manager or the power button.
          Step 2   Completely loosen the captive screws on the front of the blade.
          Step 3   Remove the blade from the chassis by pulling the ejector levers on the blade until the blade server unseats.
          Step 4   Slide the blade part of the way out of the chassis, and place your other hand under the blade to support its weight.
          Step 5   Once removed, place the blade on an antistatic mat or antistatic foam if you are not immediately reinstalling it into another slot.
          Step 6   If the slot is to remain empty, reinstall the slot divider (N20-CDIVV) and install two blank faceplates (N20-CBLKB1) to ensure proper ventilation and cooling.

          Installing a Blade Server

          If the chassis will house both full-width and half-width servers, install the full-width blade servers in the upper slots of the chassis. A chassis with four full-width servers is fully supported.

          Procedure
            Step 1   If necessary, remove the slot divider (N20-CDIVV) from the chassis.
            1. Simultaneously pull up on the left side catch and push down on the right side catch as shown in callout 1 of the following figure.
            2. Pull the slot divider out of the chassis as shown in callout 2 of the following figure. Keep the slot divider in case it is needed at another time.
              Figure 5. Removing a Slot Divider

              Tip   

              To reinstall the slot divider, align it with the dimples in the slot top and bottom and slide it back in until it clicks into place.

            Step 2   Grasp the front of the blade server and place your other hand under the blade to support it.
            Figure 6. Positioning a Blade Server in the Chassis

            Step 3   Open the ejector levers in the front of the blade server.
            Step 4   Gently slide the blade into the opening until you cannot push it any farther.
            Step 5   Press the ejector levers so that they catch the edge of the chassis and press the blade server all the way in.
            Step 6   Tighten the captive screw on the front of the blade to no more than 3 in-lbs. Tightening with bare fingers only is unlikely to strip or damage the captive screws.

            Secure Digital (SD) Card Access

            SD card slots are provided, and one or two SD cards can be installed. If two SD cards are installed, they can be used in mirrored mode (Cisco UCS Manager 2.2.x or later is required).


            Note


            Do not mix 16 GB and 32 GB SD cards in the same server.



            Note


            Due to technical limitations, the 32 GB SD card has only 16 GB usable capacity (regardless of mirroring) in this particular server.


            Figure 7. SD Card Slot Locations

            Removing a Blade Server Cover

            Replacing the cover is the reverse of removing the cover.

            Procedure
              Step 1   Press and hold the button down as shown in the figure below.
              Step 2   While holding the back end of the cover, pull the cover back and then up.
              Figure 8. Opening a B420 M3 Blade Server


              Air Baffles

              The air baffles (UCSB-BAFF-B420-M3=) shown below ship with this server; they direct and improve air flow for the server components. No tools are necessary to install them: place them over the DIMMs and align them to the standoffs.


              Caution


              Be sure that the tabs on the baffles seat in the slots provided on the motherboard; otherwise, it may be difficult to replace the server cover, or the motherboard might be damaged.


              Figure 9. Cisco UCS B420 Air Baffles

              Internal Components

              Figure 10. Inside View of the Blade Server

               1 - Hard drive bay 1
               2 - Hard drive bay 2
               3 - Hard drive bay 3
               4 - Hard drive bay 4
               5 - CMOS battery
               6 - Internal USB connector. Cisco UCS-USBFLSH-S-4GB= is recommended; if you use another USB drive, it must be no wider than 0.8 inches (20 mm) and no longer than 1.345 inches (34 mm) to provide the clearance needed to install or remove it. Third-party USB flash memory is allowed but is not supported by Cisco and is used at your own risk.
               7 - Diagnostics button
               8 - Transferable flash module (TFM) for flash-backed write cache. The flash-backed write cache feature is not supported at the initial server release.
               9 - DIMM slots for CPU 1
               10 - DIMM slots for CPU 2
               11 - DIMM slots for CPU 3
               12 - DIMM slots for CPU 4
               13 - mLOM card. This slot is shown in Cisco UCS Manager as "Adapter 1" but is listed in the BIOS as "mLOM." The VIC 1240 is an adapter with a specific footprint that can be used only in this slot.
               14 - Adapter card. This slot is shown in Cisco UCS Manager as "Adapter 2" and in the BIOS as "Mezz 1." Mixing adapter types is supported.
               15 - Supercap for flash-backed write cache. The flash-backed write cache feature is not supported at the initial server release.
               16 - Adapter card. This slot is shown in Cisco UCS Manager as "Adapter 3" and in the BIOS as "Mezz 2." Mixing adapter types is supported.


              Note


              • A squeeze-to-remove retaining clip is provided to secure the internal USB flash memory; the clip must always be securely fastened when the flash memory is in use. Do not use third-party memory that does not fit in the clip.

              • Use of this server may require an upgrade to the FEX in the chassis. The 2104XP fabric extender is not compatible when any Cisco-certified adapter is installed in slot 1 or slot 2. If a VIC 1240 mLOM card is installed, you will have connectivity through the mLOM, but other adapters will not be recognized.
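As a quick sanity check of the USB drive clearance limits called out for the internal connector, the following illustrative Python snippet (the function name is ours, not Cisco's) compares a drive's dimensions against the stated envelope:

```python
# Illustrative check of the documented clearance envelope for the
# internal USB connector: width <= 0.8 in (20 mm), length <= 1.345 in (34 mm).
MAX_WIDTH_IN = 0.8
MAX_LENGTH_IN = 1.345

def usb_drive_fits(width_in: float, length_in: float) -> bool:
    """True if the drive is within the documented clearance envelope."""
    return width_in <= MAX_WIDTH_IN and length_in <= MAX_LENGTH_IN
```

A typical low-profile drive (for example 0.7 in wide, 1.3 in long) fits; anything wider than 0.8 in does not.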


              Diagnostics Button and LEDs

              At blade start-up, POST diagnostics test the CPUs, DIMMs, HDDs, and adapter cards, and any failure notifications are sent to UCS Manager. You can view these notifications in the System Error Log or in the output of the show tech-support command. If errors are found, an amber diagnostic LED also lights next to the failed component. During run time, the blade BIOS, component drivers, and OS all monitor for hardware faults, and the amber diagnostic LED lights for a component if an uncorrectable error occurs or if correctable errors (such as a host ECC error) exceed the allowed threshold.

              LED states are saved; if you remove the blade from the chassis, the LED values persist for up to 10 minutes. Pressing the LED diagnostics button on the motherboard causes the LEDs that currently show a component fault to light for up to 30 seconds for easier component identification. LED fault values are reset when the blade is reinserted into the chassis and booted, and the diagnostic process begins anew.

              If DIMM insertion errors are detected, they may cause the blade discovery to fail and errors will be reported in the server POST information, which is viewable using the UCS Manager GUI or CLI. UCS blade servers require specific rules to be followed when populating DIMMs in a blade server, and the rules depend on the blade server model. Refer to the documentation for a specific blade server for those rules.

              HDD status LEDs are on the front face of the HDD. Faults on the CPU, DIMMs, or adapter cards also cause the server health LED to light solid amber for minor error conditions or blinking amber for critical error conditions.

              Working Inside the Blade Server

              Installing a Motherboard CMOS Battery

              All Cisco UCS blade servers use a CR2032 battery (Cisco PID N20-MBLIBATT=) to preserve BIOS settings while the server is powered down.


              Warning


              There is danger of explosion if the battery is replaced incorrectly. Replace the battery only with the same or equivalent type recommended by the manufacturer. Dispose of used batteries according to the manufacturer’s instructions.



              To install or replace a motherboard complementary metal-oxide semiconductor (CMOS) battery, follow these steps:

              Procedure
                Step 1   Remove the old CMOS battery:
                1. Power off the blade, remove it from the chassis, and remove the top cover.
                2. Push the battery socket retaining clip away from the battery.
                3. Lift the battery from the socket. Use needle-nose pliers to grasp the battery if there is not enough clearance for your fingers.
                Step 2   Install a motherboard CMOS battery:
                1. Push the battery socket retaining clip away from where the battery fits in the housing.
                2. Insert the new battery into the socket with the battery’s positive (+) marking facing away from the retaining clip. Ensure that the retaining clip can click over the top of the battery to secure it in the housing.
                3. Replace the top cover.
                4. Replace the server in the chassis and power on the blade by pressing the Power button.

                Removing a CPU and Heat Sink

                You will use these procedures to move a CPU from one server to another, to replace a faulty CPU, or to upgrade from one CPU to another.


                Note


                The CPU pick and place tool is required to prevent damage to the connection pins between the motherboard and the CPU. Do not attempt this procedure without the required tool, which is included with each CPU option kit.


                Procedure
                  Step 1   Unscrew the four captive screws securing the heat sink to the motherboard.

                  Loosen one screw by a quarter turn, then move to the next screw. Continue loosening until the heat sink can be lifted off.

                  Step 2   Remove the heat sink.

                  Remove the existing thermal compound from the bottom of the heat sink using the cleaning kit (UCSX-HSCK= ) included with each CPU option kit. Follow the instructions on the two bottles of cleaning solvent.

                  Step 3   Unhook the first socket hook. See callout 3 in the following figure.
                  Step 4   Unhook the second socket hook. See callout 4 in the following figure.
                  Step 5   Open the socket latch. See callout 5 in the following figure.
                  Figure 11. Removing the Heat Sink and Accessing the CPU Socket

                  Step 6   Press the central button on the provided CPU pick and place tool (UCS-CPU-EP-PNP=) to release the catch.

                  The CPU pick and place tool is included with each CPU option kit, or the tool may be purchased separately.

                  Step 7   Remove an old CPU as follows:
                  1. Place the CPU pick and place tool on the CPU socket aligned with the arrow pointing to the CPU registration mark.
                  2. Press the button/handle on the tool to grasp the installed CPU.
                  3. Lift the tool and CPU straight up.
                  Figure 12. Proper Alignment of CPU Pick and Place Tool

                  1

                  Alignment mark on the button/handle of the pick and place tool

                  2

                  Alignment mark on the socket


                  Installing a New CPU and Heat Sink

                  Before installing a new CPU in a server, verify the following:

                  • The CPU is supported for that given server model. This may be verified via the server's Technical Specifications ordering guides or by the relevant release of the Cisco UCS Capability Catalog.

                  • A BIOS update is available and installed that supports the CPU and the given server configuration.

                  • If the server will be managed by Cisco UCS Manager, the service profile for this server in Cisco UCS Manager will recognize and allow the new CPU.


                  Caution


                  The Pick-and-Place tools used in this procedure are required to prevent damage to the contact pins between the motherboard and the CPU. Do not attempt this procedure without the required tools, which are included with each CPU option kit. If you do not have the tool, you can order a spare: Cisco PID UCS-CPU-EP-PNP= for 10-, 8-, 6-, 4-, or 2-core CPUs (green); UCS-CPU-EP2-PNP= for v2 12-core CPUs (purple).


                  Procedure
                    Step 1   (Optional) If you are installing a CPU in a socket that was shipped empty, the socket has a protective cap intended to prevent bent or touched contact pins. Use the provided pick-and-pull cap tool like a pair of tweezers: grasp the protective cap and pivot as shown.
                    Figure 13. Protective Cap Removal

                    Step 2   Release the catch on the pick and place tool by pressing the handle/button.
                    Step 3   Remove the new CPU from the packaging, and load it into the pick and place tool as follows:
                    1. Confirm that the pedestal is set up correctly for your processor. The pedestal ships configured with the markings “LGA2011-R1” facing upward, and this is the correct orientation.
                    2. Place the CPU on the pedestal. The CPU corners should fit snugly at the pedestal corners and the notches should meet the pegs perfectly.
                    3. Place the CPU pick and place tool on the CPU pedestal aligned with the A1 arrow pointing to the A1 registration mark on the pedestal.
                    4. Press the button/handle on the tool to grasp the CPU.
                    5. Lift the tool and CPU straight up off of the pedestal.
                      Figure 14. Loading the Pick and Place Tool

                      1

                      Alignment mark on the pick and place tool, CPU and pedestal

                    Step 4   Place the CPU and tool on the CPU socket with the registration marks aligned as shown.
                    Step 5   Press the button/handle on the pick and place tool to release the CPU into the socket.
                    Figure 15. Using the CPU Pick and Place Tool to Insert the CPU

                    1

                    Alignment mark on the tool button/handle

                    2

                    Alignment mark on the CPU socket

                    Step 6   Close the socket latch. See callout 1 in the following figure.
                    Step 7   Secure the first hook. See callout 2 in the following figure.
                    Step 8   Secure the second hook. See callout 3 in the following figure.
                    Figure 16. Replacing the Heat Sink (B200 M3 Shown)

                    Step 9   Using the syringe of thermal grease provided with replacement CPUs and servers (UCS-CPU-GREASE=), add 2 cubic centimeters of thermal grease to the top of the CPU where it will contact the heat sink. Use the pattern shown. This should require half the contents of the syringe.
                    Caution   

                    The thermal grease has very specific thermal properties, and thermal grease from other sources should not be substituted. Using other thermal grease may lead to damage.

                    Note   

                    CPU spares come with two syringes of thermal grease: one with a blue cap and one with a red cap. The syringe with the blue cap is UCS-CPU-GREASE=, which is used with this server.

                    Figure 17. Thermal Grease Application Pattern

                    Step 10   Replace the heat sink. See callout 4.
                    Caution   

                    On certain models, heat sinks are keyed to fit into the plastic baffle extending from the motherboard. Do not force a heat sink if it does not fit well; rotate it and re-orient the heat sink.

                    Step 11   Secure the heat sink to the motherboard by tightening the four captive screws a quarter turn at a time in an X pattern as shown in the upper right.

                    Installing Memory

                    To install a DIMM into the blade server, follow these steps:

                    Procedure
                      Step 1   Press the DIMM into its slot evenly on both ends until it clicks into place.

                      DIMMs are keyed; if gentle force is not sufficient, make sure the notch on the DIMM is correctly aligned.

                      Note   

                      Be sure that the notch in the DIMM aligns with the slot. If the notch is misaligned you may damage the DIMM, the slot, or both.

                      Step 2   Press the DIMM connector latches inward slightly to seat them fully.

                      Supported DIMMs

                      The DIMMs supported in this blade server are constantly being updated. A list of currently supported and available DIMMs is in the specification sheets at:

                      http://www.cisco.com/en/US/products/ps10280/products_data_sheets_list.html

                      Cisco does not support third-party memory DIMMs, and in some cases their use may irreparably damage the server and require an RMA and down time.

                      Memory Arrangement

                      The Cisco UCS B420 high-performance blade server contains 48 slots for installing DIMMs, 12 for each CPU. Each CPU's 12 DIMM slots are spread over 4 channels. Each populated CPU needs at least one DIMM; DIMMs installed in slots for an absent CPU are not recognized. For optimal performance, distribute DIMMs evenly across all CPUs. DIMM connector latches are color-coded blue, black, and white, and we recommend that you install memory in roughly that order.

                      Figure 18. Memory Slots Within the Blade Server

                      1

                      DIMMs for CPU 1

                      2

                      DIMMs for CPU 2

                      3

                      DIMMs for CPU 3

                      4

                      DIMMs for CPU 4

                      Channels

                      Each CPU has 4 channels, each consisting of 3 DIMM slots. Each channel is identified by a letter, and each slot within a channel is identified by a number: 0, 1, or 2.

                      The DIMM slots are contiguous to their associated CPU. When installing DIMMs, you must add them in the configurations shown in the following table.

                      Table 2 UCS B420 M3 DIMM Slot Population

DIMMs per CPU | CPU 1 slots | CPU 2 slots | CPU 3 slots | CPU 4 slots | Color coding
1  | A0 | E0 | I0 | M0 | Blue
2  | A0, B0 | E0, F0 | I0, J0 | M0, N0 | Blue
3  | A0, B0, C0 | E0, F0, G0 | I0, J0, K0 | M0, N0, O0 | Blue
4  | A0, B0, C0, D0 | E0, F0, G0, H0 | I0, J0, K0, L0 | M0, N0, O0, P0 | Blue
5  | Not recommended for performance reasons.
6  | A0, B0, C0, A1, B1, C1 | E0, F0, G0, E1, F1, G1 | I0, J0, K0, I1, J1, K1 | M0, N0, O0, M1, N1, O1 | Blue, Black
7  | Not recommended for performance reasons.
8  | A0, B0, C0, D0, A1, B1, C1, D1 | E0, F0, G0, H0, E1, F1, G1, H1 | I0, J0, K0, L0, I1, J1, K1, L1 | M0, N0, O0, P0, M1, N1, O1, P1 | Blue, Black
9  | A0, B0, C0, A1, B1, C1, A2, B2, C2 | E0, F0, G0, E1, F1, G1, E2, F2, G2 | I0, J0, K0, I1, J1, K1, I2, J2, K2 | M0, N0, O0, M1, N1, O1, M2, N2, O2 | Blue, Black, White
10 | Not recommended for performance reasons.
11 | Not recommended for performance reasons.
12 | A0, B0, C0, D0, A1, B1, C1, D1, A2, B2, C2, D2 | E0, F0, G0, H0, E1, F1, G1, H1, E2, F2, G2, H2 | I0, J0, K0, L0, I1, J1, K1, L1, I2, J2, K2, L2 | M0, N0, O0, P0, M1, N1, O1, P1, M2, N2, O2, P2 | Blue, Black, White
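The population rules in Table 2 follow a regular pattern: fill slot 0 across the channels first (blue latches), then slot 1 (black), then slot 2 (white). The sketch below (illustrative, not a Cisco utility; names are ours) encodes that pattern and reproduces the table's slot lists:

```python
# Illustrative encoding of Table 2. Channel letters are A-D (CPU 1),
# E-H (CPU 2), I-L (CPU 3), M-P (CPU 4); slot numbers 0-2 correspond
# to the blue, black, and white latches.
CHANNELS = {1: "ABCD", 2: "EFGH", 3: "IJKL", 4: "MNOP"}

# (channels used, banks used) for each recommended DIMMs-per-CPU count
PLAN = {
    1: (1, 1), 2: (2, 1), 3: (3, 1), 4: (4, 1),
    6: (3, 2), 8: (4, 2), 9: (3, 3), 12: (4, 3),
}

def dimm_slots(cpu: int, dimms_per_cpu: int) -> list:
    """Return the slot IDs to populate for one CPU, e.g. ['A0', 'B0']."""
    if dimms_per_cpu not in PLAN:
        raise ValueError(
            f"{dimms_per_cpu} DIMMs per CPU is not recommended for performance reasons")
    n_channels, n_banks = PLAN[dimms_per_cpu]
    letters = CHANNELS[cpu][:n_channels]
    # Fill bank 0 across channels first, then bank 1, then bank 2.
    return [f"{ch}{bank}" for bank in range(n_banks) for ch in letters]
```

For example, `dimm_slots(1, 6)` yields A0, B0, C0, A1, B1, C1, matching the 6-DIMM row for CPU 1.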

                      Figure 19. Physical Representation of DIMMs and CPUs

                      Figure 20. Logical Representation of Channels



                      Memory Performance

                      When configuring your server, consider the following:

                       • DIMMs within the blade server can be of different sizes, but mixing DIMM speeds causes the faster DIMMs to run at the speed of the slowest DIMM.

                      • No mixing of DIMM type (LRDIMM, RDIMM) is allowed.

                      • Your selected CPUs can have some effect on performance. All CPUs used must be of the same type.
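                      The clocking rule above can be sketched in one line: mixed-speed DIMMs all operate at the speed of the slowest installed module. A minimal, purely illustrative example:

```python
# Sketch of the DIMM clocking rule: with mixed speeds, every DIMM runs
# at the speed of the slowest installed module.
def effective_speed(dimm_speeds_mts):
    """Return the operating speed (MT/s) for a set of installed DIMMs."""
    return min(dimm_speeds_mts)

print(effective_speed([1600, 1600, 1333]))  # prints 1333
```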

                      Bandwidth and Performance

                      You can achieve maximum bandwidth, performance, and system memory by using the following configuration:

                      • DDR3 at 1600 megatransfers per second (MT/s) across four channels

                      • 12 DIMMs per CPU (48 DIMMs total)

                      • Maximum capacity of 1536 GB (using 32-GB DIMMs)

                      Performance is less than optimal if the following memory configurations are used:

                      • Mixing DIMM sizes and densities

                      • Unevenly populating DIMMs between CPUs

                      Depending on the application, performance loss might or might not be noticeable or measurable.
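                      The capacity and bandwidth figures above can be checked with simple arithmetic. The sketch below assumes the standard 64-bit (8-byte) DDR3 data path per channel; it is a worked check, not part of the configuration procedure.

```python
# Worked check of the maximum-configuration figures above.
# Assumption: each DDR3 channel transfers 8 bytes (64 bits) per transfer.
dimm_gb = 32                 # largest supported LRDIMM
dimms_total = 48             # 12 DIMMs per CPU x 4 CPUs
mt_per_s = 1600              # DDR3-1600
channels_per_cpu = 4

capacity_gb = dimm_gb * dimms_total                       # total memory
per_cpu_bw_gbs = mt_per_s * 8 * channels_per_cpu / 1000   # GB/s per CPU

print(capacity_gb)      # prints 1536
print(per_cpu_bw_gbs)   # prints 51.2
```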

                      Installing an mLOM Adapter


                      Note


                      You must remove the adapter card to service the mLOM.


                      To install an mLOM on the blade server, follow these steps:

                      Procedure
                        Step 1   Position the mLOM’s board connector above the motherboard connector and align the captive screw to the standoff post on the motherboard.
                        Step 2   Firmly press the mLOM’s board connector into the motherboard connector.
                        Step 3   Tighten the captive screw.
                        Tip   

                        To remove an mLOM, reverse the above procedure. You might find it helpful when removing the connector from the motherboard to gently rock the board along the length of the connector until it loosens.

                        Figure 21. Installing an mLOM


                        Installing an Adapter Card

                        The network adapters and interface cards all use the same installation process, and the supported models are updated regularly. A list of currently supported and available models for this server is in the specification sheets at this URL:

                        http://www.cisco.com/en/US/products/ps10280/products_data_sheets_list.html


                        Note


                        If a VIC 1240 mLOM is not installed, you must have an adapter card installed.



                        Note


                        Use of the adapters available for this server might require an upgrade to the FEX in the chassis. The 2104XP FEX is not compatible with any Cisco-certified adapter installed in slot 1 or slot 2. If a VIC 1240 mLOM card is installed, you will have connectivity through the mLOM but other adapters will not be recognized. Use of all three slots requires Cisco UCS 2200 series FEXes.


                        If you are switching from one type of adapter card to another, before you physically perform the switch make sure that you download the latest device drivers and load them into the server’s operating system. For more information, see the firmware management chapter of one of the Cisco UCS Manager software configuration guides.

                        Adapter cards can be installed in either slot 1 or slot 2; they can be of the same type or a mixed configuration.


                        Note


                        Cisco UCS Manager will recognize adapters in these slots as “Adapter 2” and “Adapter 3,” and counts the mLOM as “Adapter 1.” This numbering does not match the markings on the motherboard.


                        The Cisco UCS 785GB or 365GB MLC Fusion-io Drive and the LSI 400GB SLC WarpDrive have the same form factor as M3 adapter cards and can be installed and removed using the same procedures. Using these drives in a B200 M3 or B22 M3 blade server requires the presence of a VIC 1240 mLOM to provide blade I/O. They will not work in M1 and M2 generation Cisco UCS servers, and they can be mixed with an adapter in the B420 M3 server. These drives appear in Cisco UCS Manager as regular SSDs.

                        Procedure
                          Step 1   Position the adapter board connector above the motherboard connector and align the two adapter captive screws to the standoff posts (see callout 1) on the motherboard.
                          Step 2   Firmly press the adapter connector into the motherboard connector (see callout 2).
                          Step 3   Tighten the two captive screws (see callout 3).
                          Tip   

                          Removing an adapter card is the reverse of installing it. You might find it helpful when removing the connector from the motherboard to gently rock the board along the length of the connector until it loosens.

                          Figure 22. Installing an Adapter Card

                          Figure 23. Installing an Adapter Card


                          Installing the Flash-Backed Write Cache and Supercap

                          The flash-backed write cache (FBWC) is a backup solution that protects disk write-cache data on the RAID controller during a long-term power loss. It has two components: the TFM memory, and the Supercap module that provides emergency power. The TFM installs into a dedicated slot, and the installation steps are identical to installing a DIMM. The flash-backed write cache feature and its components are not supported at the initial server release.

                          Verify whether replacement is required by using the show raid-battery detail command in the CLI.

                          To install the Supercap module, follow these steps:

                          Procedure
                            Step 1   Using Cisco UCS Manager, perform a graceful shutdown of the server. Without a graceful shutdown, data can be permanently lost.
                            Step 2   Remove the server from the chassis.
                            Step 3   Remove the top cover from the server.
                            Step 4   Remove the adapter in slot 2.
                            Step 5   With a No.1 Phillips screwdriver, remove the four screws holding the top plate of the Supercap’s enclosure.
                            Step 6   Angle the top plate up and remove the tabs from the slots at the rear. Set the plate aside.
                            Step 7   Press the clip at the end of the Supercap’s wires into the clip attached to the enclosure.
                            Step 8   Place the Supercap inside the enclosure.
                            Step 9   Slide the tabs on the top plate into the slots at the rear of the Supercap enclosure.
                            Step 10   With a No.1 Phillips screwdriver, replace the four screws and attach the top plate to the enclosure as shown below.
                            Step 11   Replace the adapter, top cover, and the server in the chassis. Cisco UCS Manager reestablishes management of the server and the service profile.
                            Figure 24. Supercap Installation


                            Installing and Enabling a Trusted Platform Module

                            The Trusted Platform Module (TPM, Cisco Product ID UCSX-TPM2-001) is a component that can securely store artifacts used to authenticate the server. These artifacts can include passwords, certificates, or encryption keys. A TPM can also be used to store platform measurements that help ensure that the platform remains trustworthy. Authentication (ensuring that the platform can prove that it is what it claims to be) and attestation (a process helping to prove that a platform is trustworthy and has not been breached) are necessary steps to ensure safer computing in all environments. It is a requirement for the Intel Trusted Execution Technology (TXT) security feature, which must be enabled in the BIOS settings for a server equipped with a TPM.

                            Intel Trusted Execution Technology (TXT) provides greater protection for information that is used and stored on the business server. A key aspect of that protection is the provision of an isolated execution environment and associated sections of memory where operations can be conducted on sensitive data, invisibly to the rest of the system. Intel TXT provides for a sealed portion of storage where sensitive data such as encryption keys can be kept, helping to shield them from being compromised during an attack by malicious code.


                            Note


                            TPM installation is supported after-factory. However, a TPM installs with a one-way screw and cannot be replaced or moved to another server. If a server with a TPM is returned, the replacement server must be ordered with a new TPM.
                            Procedure
                              Step 1   Install the TPM hardware.
                              1. Power off, decommission, and remove the blade server from the chassis.
                              2. Remove the blade server cover.
                              3. Install the TPM to the TPM socket on the server motherboard and secure it using the one-way screw that is provided. See the figure below for the location of the TPM socket.
                              4. Return the blade server to the chassis, power it on, and allow it to be automatically reacknowledged, reassociated, and recommissioned.
                              5. Continue with enabling TPM support in the server BIOS in the next step.
                              Figure 25. TPM Socket Location

                              1  Front of server
                              2  TPM socket on motherboard

                              Step 2   Enable TPM Support in the BIOS.
                              1. Enable Quiet Mode in the BIOS policy of the server’s service profile.
                              2. Establish a direct connection to the server, either by connecting a keyboard, monitor, and mouse to the front panel using a KVM dongle (N20-BKVM) or by other means.
                              3. Reboot the server.
                              4. Press F2 during reboot to enter the BIOS setup screens.
                              5. On the Advanced tab, select Trusted Computing and press Enter to open the TPM Security Device Configuration window.
                              6. Set the TPM Support option to Enabled.
                              7. Press F10 to save and exit. Allow the server to reboot, but watch for the prompt to press F2 in the next step.
                              Step 3   Enable TPM State in the BIOS.
                              1. Press F2 during reboot to enter the BIOS setup screens.
                              2. On the Advanced tab, select Trusted Computing and press Enter to open the TPM Security Device Configuration window.
                              3. Set the TPM State option to Enabled.
                              4. Press F10 to save and exit. Allow the server to reboot, but watch for the prompt to press F2 in the next step.
                              Step 4   Verify that TPM Support and TPM State are enabled.
                              1. Press F2 during reboot to enter the BIOS setup screens.
                              2. On the Advanced tab, select Trusted Computing and press Enter to open the TPM Security Device Configuration window.
                              3. Verify that TPM Support and TPM State are set to Enabled.
                              4. Continue with enabling the Intel TXT feature in the next step.
                              Step 5   Enable the Intel TXT feature in the BIOS.
                              1. Choose the Advanced tab.
                              2. Choose Intel TXT (LT-SX) Configuration to open the Intel TXT (LT-SX) Hardware Support window.
                              3. Set TXT Support to Enabled.
                              4. Verify that the following items are listed as Enabled:
                                • VT Support (default is Enabled)

                                • VT-d Support (default is Enabled)

                                • TPM Support

                                • TPM State

                                If VT Support and VT-d Support are not enabled, return to the Advanced tab, select Processor Configuration, and then set Intel (R) VT and Intel (R) VT-d to Enabled.

                              5. Press F10 to save and exit.

                              Server Troubleshooting

                              For general troubleshooting information, see the Cisco UCS Manager B-Series Troubleshooting Guide.

                              Server Configuration

                              Cisco UCS blade servers are intended to be configured and managed using Cisco UCS Manager. For details, see the Configuration Guide for the version of Cisco UCS Manager that you are using. The configuration guides are available at the following URL: http://www.cisco.com/en/US/products/ps10281/products_installation_and_configuration_guides_list.html

                              Physical Specifications for the Cisco UCS B420 M3

                              Specification    Value
                              Height           1.95 inches (50 mm)
                              Width            16.50 inches (419.1 mm)
                              Depth            24.4 inches (620 mm)
                              Weight           34.5 lbs (15.65 kg)

                              The system weight listed here is an estimate for a fully configured system and will vary depending on peripheral devices installed.
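                              As a quick sanity check, the inch and millimetre values in the table agree to within rounding (1 inch = 25.4 mm); this snippet is illustrative only.

```python
# Verify that the listed inch/millimetre pairs are consistent
# (1 inch = 25.4 mm; the published mm values are rounded).
specs = {
    "Height": (1.95, 50.0),
    "Width": (16.50, 419.1),
    "Depth": (24.4, 620.0),
}
for name, (inches, mm) in specs.items():
    print(f"{name}: {inches} in = {inches * 25.4:.1f} mm (listed as {mm} mm)")
```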