Cisco UCS 5108 Server Chassis Installation Guide
Overview

System Overview

The Cisco UCS 5108 server chassis and its components are part of the Cisco Unified Computing System (UCS), which combines the Cisco UCS 5108 server chassis, up to two I/O modules, and the Cisco UCS fabric interconnects to provide advanced options and capabilities in server and data management. All servers are managed through the Cisco UCS Manager GUI or CLI.
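As an illustration of that management model, chassis and blade inventory can be browsed from the UCS Manager CLI. This is a sketch only: the hostname (UCS-A) and chassis number are examples, and command output varies by UCS Manager release.

```
UCS-A# show chassis
UCS-A# scope chassis 1
UCS-A /chassis # show server
```

The first command lists all chassis in the management domain; the last lists the blade servers discovered in the scoped chassis.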

The Cisco UCS 5108 server chassis system consists of the following components:

  • Cisco UCS 5108 server chassis–AC version (UCSB-5108-AC2 and N20-C6508)
  • Cisco UCS 5108 server chassis–DC version (UCSB-5108-DC2 and UCSB-5108-DC)
  • Cisco UCS 2104XP I/O Module (N20-I6584)—Up to two I/O modules, each providing four ports of 10-Gb Ethernet, Cisco Data Center Ethernet, and Fibre Channel over Ethernet (FCoE) connection to the fabric interconnect
  • Cisco UCS 2208XP I/O Module (UCS-IOM-2208XP)—Up to two I/O modules, each providing eight universal ports configurable as a 10-Gb Ethernet, Cisco Data Center Ethernet, or Fibre Channel over Ethernet (FCoE) connection to the fabric interconnect
  • Cisco UCS 2204XP I/O Module (UCS-IOM-2204XP)—Up to two I/O modules, each providing four universal ports configurable as a 10-Gb Ethernet, Cisco Data Center Ethernet, or Fibre Channel over Ethernet (FCoE) connection to the fabric interconnect
  • A number of SFP+ choices using copper or optical fiber
  • Power supplies (N20-PAC5-2500W, UCSB-PSU-2500ACPL or UCSB-PSU-2500DC48)—Up to four 2500 Watt hot-swappable power supplies
  • Fan modules (N20-FAN5)—Eight hot-swappable fan modules
  • UCS B-series blade servers, including
    • Cisco UCS B200 blade servers (N20-B6620-1 for M1 or N20-B6625-1 for M2)—Up to eight half-width blade servers, each containing two CPUs and holding up to two hard drives capable of RAID 0 or 1
    • Cisco UCS B200 M3 blade servers (UCSB-B200-M3)—Up to eight half-width blade servers, each containing two CPUs and holding up to two hard drives capable of RAID 0 or 1
    • Cisco UCS B22 blade servers (UCSB-B22-M3)—Up to eight half-width blade servers, each containing two CPUs and holding up to two hard drives capable of RAID 0 or 1
    • Cisco UCS B230 blade servers (N20-B6730)—Up to eight half-width blade servers, each containing two CPUs and holding up to two SSD drives capable of RAID 0 or 1
    • Cisco UCS B250 blade servers (N20-B6620-2 for M1 or N20-B6625-2 for M2)—Up to four full-width blade servers, each containing two CPUs and holding up to two hard drives capable of RAID 0 or 1
    • Cisco UCS B440 blade servers (N20-B6740-2)—Up to four full-width blade servers, each containing four CPUs and holding up to four hard drives capable of RAID 0, 1, 5, and 6
    • Cisco UCS B420 blade servers (UCSB-B420-M3)—Up to four full-width blade servers, each containing four CPUs and holding up to four hard drives capable of RAID 0, 1, 5, and 10
    • Cisco UCS B260 M4 blade servers (UCSB-EX-M4-1C)—Up to four full-width blade servers, each containing two CPUs and a SAS RAID controller
    • Cisco UCS B460 M4 blade servers (UCSB-EX-M4-1A)—Up to two full-width blade servers, each containing four CPUs and SAS RAID controllers

For smaller solutions, the Cisco UCS 6324 Fabric Interconnect can be used in the I/O slots at the back of the Cisco UCS 5108 chassis. The 6324 Fabric Interconnect is supported only in the UCSB-5108-AC2 and UCSB-5108-DC2 versions of the 5100 Series chassis.

The smaller solution consists of the following components:

  • Cisco UCS 5108 server chassis–AC version (UCSB-5108-AC2)
  • Cisco UCS 5108 server chassis–DC version (UCSB-5108-DC2)
  • Cisco UCS 6324 Fabric Interconnect for the UCS Mini system (UCS-FI-M-6324)—Up to two integrated fabric interconnect modules, each providing four SFP+ ports of 10-Gigabit Ethernet and Fibre Channel over Ethernet (FCoE), and a QSFP+ port
  • A number of SFP+ choices using copper or optical fiber
  • Power supplies (UCSB-PSU-2500ACDV, UCSB-PSU-2500DC48, and UCSB-PSU-2500HVDC)—Up to four 2500 Watt, hot-swappable power supplies
  • Fan modules (N20-FAN5)—Eight hot-swappable fan modules
  • UCS B-Series blade servers, including the following:
    • Cisco UCS B200 M3 blade servers (UCSB-B200-M3)—Up to eight half-width blade servers, each containing two CPUs and holding up to two hard drives capable of RAID 0 or 1
  • UCS C-Series rack servers, including the following:
    • Cisco UCS C240 M3 rack servers (UCSC-C240-M3) and Cisco UCS C220 M3 rack servers—Up to seven rack servers, in any combination of C240 M3 and C220 M3

Features and Benefits

The Cisco UCS 5108 server chassis revolutionizes the use and deployment of blade-based systems. By incorporating unified fabric, integrated embedded management, and fabric extender technology, the Cisco Unified Computing System enables a chassis with fewer physical components, no need for independent chassis management, and greater energy efficiency than traditional blade server chassis.

This simplicity eliminates the need for dedicated chassis management and blade switches, reduces cabling, and enables the Cisco Unified Computing System to scale to 40 chassis without adding complexity. The Cisco UCS 5108 server chassis is a critical component in delivering the Cisco Unified Computing System benefits of data center simplicity and IT responsiveness.

Table 1 Features and Benefits

Feature: Management by Cisco UCS Manager
  • Reduces total cost of ownership by removing management modules from the chassis, making the chassis stateless.
  • Provides a single, highly available management domain for all system chassis, reducing administrative tasks.

Feature: Unified fabric
  • Decreases TCO by reducing the number of network interface cards (NICs), host bus adapters (HBAs), switches, and cables needed.

Feature: Support for one or two Cisco UCS 2100 Series or Cisco UCS 2200 Series FEXes, and support for one or two Cisco UCS 6324 Fabric Interconnects in the UCS Mini chassis
  • Eliminates switches from the chassis, including the complex configuration and management of those switches, allowing a system to scale without adding complexity and cost.
  • Allows use of two I/O modules for redundancy or aggregation of bandwidth.
  • Enables bandwidth scaling based on application needs; blades can be configured from 1.25 Gbps to 40 Gbps or more.

Feature: Auto discovery
  • Requires no configuration; like all components in the Cisco Unified Computing System, chassis are automatically recognized and configured by Cisco UCS Manager.

Feature: High-performance midplane
  • Provides investment protection for new fabric extenders and future blade servers.
  • Supports up to 2x 40 Gigabit Ethernet for every blade server slot.
  • Provides 8 blades with 1.2 Tbps of available Ethernet throughput for future I/O requirements. (The Cisco UCS 6324 Fabric Interconnect supports only 512 Gbps.)
  • Provides a reconfigurable chassis to accommodate a variety of form factors and functions.

Feature: Redundant, hot-swappable power supplies and fans
  • Provides high availability in multiple configurations.
  • Increases serviceability.
  • Provides uninterrupted service during maintenance.
  • Available configured for AC or DC environments (mixing is not supported).

Feature: Hot-pluggable blade servers, FEXes, and fabric interconnects
  • Provides uninterrupted service during maintenance and server deployment.

Feature: Comprehensive monitoring
  • Provides extensive environmental monitoring on each chassis.
  • Allows use of user thresholds to optimize environmental management of the chassis.

Feature: Efficient front-to-back airflow
  • Helps reduce power consumption and increase component reliability.

Feature: Tool-free installation
  • Requires no specialized tools for chassis installation.
  • Provides mounting rails for easy installation and servicing.

Feature: Mixed blade configurations
  • Allows up to 8 half-width or 4 full-width blade servers, or any combination thereof, for outstanding flexibility. When configured with the 6324 Fabric Interconnect, only 8 half-width B200 M3 blades are supported.

Components

Cisco UCS 5108 Server Chassis

The Cisco UCS 5100 Series Blade Server Chassis is a scalable and flexible blade server chassis for today’s and tomorrow’s data center that helps reduce total cost of ownership. Two versions can be configured for AC power environments (N20-C6508 and UCSB-5108-AC2) and two for DC power environments (UCSB-5108-DC and UCSB-5108-DC2). An additional version (UCSB-5108-HVDC) can be configured for 200 to 380 VDC environments.

The chassis is six rack units (6 RU) high and mounts in an industry-standard 19-inch rack with square holes (such as the Cisco R Series racks) or in round-hole racks when an adapter is used. It can house up to eight half-width Cisco UCS B-Series Blade Servers and accommodates both half-width and full-width blade form factors.

Up to four hot-swappable AC, DC, or HVDC power supplies are accessible from the front of the chassis. These power supplies can be configured to support nonredundant, N+1 redundant, and grid-redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for I/O modules. A passive backplane provides support for up to 80 Gbps of I/O bandwidth to each half-width blade and 160 Gbps of I/O bandwidth to each full-width blade.

Scalability is dependent on both hardware and software. For more information, see FEX Upgrade Considerations and the appropriate UCS software release notes.
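The redundancy mode described above is set through a power supply policy in Cisco UCS Manager rather than on the chassis itself. As a sketch only (exact command names and accepted values vary by UCS Manager release), the CLI configuration looks like this:

```
UCS-A# scope org /
UCS-A /org # scope psu-policy
UCS-A /org/psu-policy # set redundancy grid
UCS-A /org/psu-policy # commit-buffer
```

In releases that support them, `set redundancy` also accepts values for nonredundant and N+1 operation.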

LEDs

LEDs on the chassis indicate system connectivity and failure warnings. See LED Locations for details. There is also a flashing blue Beaconing LED and button that can be triggered manually or remotely from UCS Manager.

Buttons

The beaconing LED is also an on/off button for the beaconing feature. When beaconing is triggered, it is observable remotely from UCS Manager.
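The chassis beaconing (locator) LED can also be toggled remotely. The following is a hedged sketch of the UCS Manager CLI; the chassis number is an example:

```
UCS-A# scope chassis 1
UCS-A /chassis # enable locator-led
UCS-A /chassis # commit-buffer
```

Use `disable locator-led` (followed by `commit-buffer`) to turn beaconing off again.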

Connectors

There are no user connectors such as RJ-45 ports on the chassis itself.

Midplane

The integral chassis midplane supports the following:

  • 320 G total bandwidth to each of two I/O modules
  • Auto-discovery of all components
  • Redundant data and management paths
  • 10GBASE-KR

The midplane is an entirely passive device.

Blade Servers

The Cisco UCS B-Series Blade Servers are based on industry-standard server technologies and provide the following:

  • Up to two or four Intel multi-core processors, depending on the server
  • Front-accessible, hot-swappable hard drives or solid-state disk (SSD) drives
  • Depending on the server, support is available for up to three adapter card connections for up to 160 Gbps of redundant I/O throughput
  • Industry-standard double-data-rate 3 (DDR3) memory
  • Remote management through an integrated service processor that also executes policy established in Cisco UCS Manager software
  • Local keyboard, video, and mouse (KVM) and serial console access through a front console port on each server
  • Out-of-band access by remote KVM, Secure Shell (SSH), and virtual media (vMedia) as well as Intelligent Platform Management Interface (IPMI)

The Cisco UCS B-Series offers multiple blade server models. The supported processor family is indicated by M1, M2, M3, or M4 designations on the model.

Cisco UCS B200 Blade Servers

For full service and installation instructions, see the Cisco UCS B200 Blade Server Installation and Service Note. You can install up to eight UCS B200 M1 or M2 Blade Servers in a chassis.

Figure 1. Cisco UCS B200 M1 and M2



1   Paper tab for server name or serial numbers
2   Blade ejector handle
3   Ejector captive screw
4   Hard drive bay 1
5   Hard drive bay 2
6   Power button and LED
7   Network link status LED
8   Blade health LED
9   Console connector
10  Reset button access
11  Beaconing LED and button

LEDs

The LED indicators show whether the blade server is in active or standby mode, the status of the network link, the overall health of the blade server, and whether the server is set to give a flashing blue beaconing indication. See Interpreting LEDs for details.

The removable hard disks also have LEDs indicating hard disk access activity and hard disk health.

Buttons

The Reset button is just inside the chassis and must be pressed using the tip of a paper clip or a similar item. Hold the button down for five seconds and then release it to restart the server if other methods of restarting are not working.

The beaconing function for an individual server can be turned on or off by pressing the combination button and LED. See Interpreting LEDs for details.

The power button and LED allow you to manually take a server temporarily out of service while leaving it in a state from which it can be restarted quickly.
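The beaconing LED on an individual blade can also be driven from the UCS Manager CLI instead of the front-panel button. A sketch, assuming chassis 1, blade 2 (adjust the numbers for your installation):

```
UCS-A# scope server 1/2
UCS-A /chassis/server # enable locator-led
UCS-A /chassis/server # commit-buffer
```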

Connectors

A console port gives a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port uses the KVM dongle device included in the chassis accessory kit. See KVM Cable for more information.

Cisco UCS B200 M3 Blade Servers

For full service and installation instructions, see the Cisco UCS B200 M3 Blade Server Installation and Service Note. You can install up to eight UCS B200 M3 Blade Servers in a chassis.

Figure 2. Cisco UCS B200 M3



1   Asset tag (see note 1)
2   Blade ejector handle
3   Ejector captive screw
4   Hard drive bay 1
5   Hard drive bay 2
6   Power button and LED
7   Network link status LED
8   Blade health LED
9   Console connector
10  Reset button access
11  Beaconing LED and button

1 Each server has a blank plastic tag that pulls out of the front panel, provided so you can add your own asset-tracking label without interfering with the intended air flow.

LEDs

The LED indicators show whether the blade server is in active or standby mode, the status of the network link, the overall health of the blade server, and whether the server is set to give a flashing blue beaconing indication. See Interpreting LEDs for details.

The removable hard disks also have LEDs indicating hard disk access activity and hard disk health.

Buttons

The Reset button is just inside the chassis and must be pressed using the tip of a paper clip or a similar item. Hold the button down for five seconds and then release it to restart the server if other methods of restarting are not working.

The beaconing function for an individual server can be turned on or off by pressing the combination button and LED. See Interpreting LEDs for details.

The power button and LED allow you to manually take a server temporarily out of service while leaving it in a state from which it can be restarted quickly.

Connectors

A console port gives a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port uses the KVM dongle device included in the chassis accessory kit. See KVM Cable for more information.

Cisco UCS B22 M3 Blade Servers

For full service and installation instructions, see the Cisco UCS B22 Blade Server Installation and Service Note. You can install up to eight UCS B22 M3 Blade Servers in a chassis.

Figure 3. Cisco UCS B22 M3



1   Asset tag (see note 2)
2   Blade ejector handle
3   Ejector captive screw
4   Hard drive bay 1
5   Hard drive bay 2
6   Power button and LED
7   Network link status LED
8   Blade health LED
9   Console connector
10  Reset button access
11  Beaconing LED and button

2 Each server has a blank plastic asset tag that pulls out of the front panel, provided so you can add your own asset tracking label without interfering with the intended air flow.

LEDs

The LED indicators show whether the blade server is in active or standby mode, the status of the network link, the overall health of the blade server, and whether the server is set to give a flashing blue beaconing indication. See Interpreting LEDs for details.

The removable hard disks also have LEDs indicating hard disk access activity and hard disk health.

Buttons

The Reset button is just inside the chassis and must be pressed using the tip of a paper clip or a similar item. Hold the button down for five seconds and then release it to restart the server if other methods of restarting are not working.

The beaconing function for an individual server can be turned on or off by pressing the combination button and LED. See Interpreting LEDs for details.

The power button and LED allow you to manually take a server temporarily out of service while leaving it in a state from which it can be restarted quickly.

Connectors

A console port gives a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port uses the KVM dongle device included in the chassis accessory kit. See KVM Cable for more information.

Cisco UCS B230 Blade Servers

For full service and installation instructions, see the Cisco UCS B230 Blade Server Installation and Service Note. You can install up to eight UCS B230 Blade Servers in a chassis.

Figure 4. Cisco UCS B230 (N20-B6730) Front Panel

1   SSD 1 Activity LED
2   SSD 1 Fault/Locate LED
3   SSD sled in Bay 1
4   SSD 2 Activity LED
5   SSD 2 Fault LED
6   Ejector lever captive screw
7   Ejector lever
8   SSD sled in Bay 2
9   Beaconing LED and button
10  System Activity LED
11  Blade health LED
12  Reset button access
13  Power button and LED
14  Console connector
15  Asset tag

LEDs

The LED indicators show whether the blade server is in active or standby mode, the status of the network link, the overall health of the blade server, and whether the server is set to give a flashing blue beaconing indication. See Interpreting LEDs for details.

The removable hard disks also have LEDs indicating hard disk access activity and hard disk health.

Buttons

The Reset button is just inside the chassis and must be pressed using the tip of a paper clip or a similar item. Hold the button down for five seconds and then release it to restart the server if other methods of restarting are not working.

The beaconing function for an individual server can be turned on or off by pressing the combination button and LED. See Interpreting LEDs for details.

The power button and LED allow you to manually take a server temporarily out of service while leaving it in a state from which it can be restarted quickly.

Connectors

A console port gives a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port uses the KVM dongle device included in the chassis accessory kit. See KVM Cable for more information.

Cisco UCS B250 Blade Servers

For full service and installation instructions, see the Cisco UCS B250 Blade Server Installation and Service Note.

Figure 5. Cisco UCS B250



1   Hard drive bay 1
2   Hard drive bay 2
3   Left ejector captive screw
4   Left blade ejector handle
5   Paper tab for server name or serial numbers
6   Right blade ejector handle
7   Right ejector captive screw
8   Power button and LED
9   Network link status LED
10  Blade health LED
11  Console connector
12  Reset button access
13  Beaconing LED and button

LEDs

The LED indicators show whether the blade server is in active or standby mode, the status of the network link, the overall health of the blade server, and whether the server is set to give a flashing blue beaconing indication. See Interpreting LEDs for details.

The removable hard disks also have LEDs indicating hard disk access activity and hard disk health.

Buttons

The Reset button is just inside the chassis and must be pressed using the tip of a paper clip or a similar item. Hold the button down for five seconds and then release it to restart the server if other methods of restarting are not working.

The beaconing function for an individual server can be turned on or off by pressing the combination button and LED. See Interpreting LEDs for details.

The power button and LED allow you to manually take a server temporarily out of service while leaving it in a state from which it can be restarted quickly.

Connectors

A console port gives a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port uses the KVM dongle device included in the chassis accessory kit. See KVM Cable for more information.

Cisco UCS B440 Blade Servers

For full service and installation instructions, see the Cisco UCS B440 High Performance Blade Server Installation and Service Note.

Figure 6. Cisco UCS B440



1   Hard drive bay 1
2   Hard drive bay 2
3   Hard drive bay 3
4   Hard drive bay 4
5   RAID battery backup module (BBU)
6   Left ejector thumbscrew
7   Left ejector handle
8   Right ejector handle
9   Right ejector thumbscrew
10  Power button and LED
11  Network link status LED
12  Blade health LED
13  Local console connection
14  Reset button access
15  Locate button and LED

LEDs

The LED indicators show whether the blade server is in active or standby mode, the status of the network link, the overall health of the blade server, and whether the server is set to give a flashing blue beaconing indication. See Interpreting LEDs for details.

The removable hard disks also have LEDs indicating hard disk access activity and hard disk health.

Buttons

The Reset button is just inside the chassis and must be pressed using the tip of a paper clip or a similar item. Hold the button down for five seconds and then release it to restart the server if other methods of restarting are not working.

The beaconing function for an individual server can be turned on or off by pressing the combination button and LED. See Interpreting LEDs for details.

The power button and LED allow you to manually take a server temporarily out of service while leaving it in a state from which it can be restarted quickly.

Connectors

A console port gives a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port uses the KVM dongle device included in the chassis accessory kit. See KVM Cable for more information.

Cisco UCS B420 M3 High Performance Blade Server

For full service and installation instructions, see the Cisco UCS B420 M3 High Performance Blade Server Installation and Service Note. You can install up to four UCS B420 M3 High Performance Blade Servers in a chassis.

Figure 7. Cisco UCS B420 M3



1   Hard drive bay 1
2   Hard drive bay 2
3   Hard drive bay 3
4   Hard drive bay 4
5   Left ejector handle
6   Asset tag (see note 3)
7   Right ejector handle
8   Power button and LED
9   Network link status LED
10  Blade health LED
11  Console connector
12  Reset button access
13  Beaconing LED and button

3 Each server has a blank plastic asset tag that pulls out of the front panel, provided so you can add your own asset tracking label without interfering with the intended air flow.

LEDs

The LED indicators show whether the blade server is in active or standby mode, the status of the network link, the overall health of the blade server, and whether the server is set to give a flashing blue beaconing indication. See Interpreting LEDs for details.

The removable hard disks also have LEDs indicating hard disk access activity and hard disk health.

Buttons

The Reset button is just inside the chassis and must be pressed using the tip of a paper clip or a similar item. Hold the button down for five seconds and then release it to restart the server if other methods of restarting are not working.

The beaconing function for an individual server can be turned on or off by pressing the combination button and LED. See Interpreting LEDs for details.

The power button and LED allow you to manually take a server temporarily out of service while leaving it in a state from which it can be restarted quickly.

Connectors

A console port gives a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port uses the KVM dongle device included in the chassis accessory kit. See KVM Cable for more information.

Cisco UCS B260 M4 Scalable Blade Server

You can install up to four UCS B260 M4 Blade Servers in the Cisco UCS 5108 server chassis.

Figure 8. Cisco UCS B260 M4 Scalable Blade Server



1   Drive bay 1
2   Drive bay 2
3   Reset button access
4   Beaconing button and LED
5   Local console connection
6   Blade health LED
7   Network link status LED
8   Power button and LED
9   Right ejector handle
10  UCS Scalability Terminator
11  Left ejector handle
12  Asset tag

Each server has a blank plastic tag that pulls out of the front panel so you can add your own asset tracking label without interfering with the intended air flow.

Cisco UCS B460 M4 Blade Server

The UCS B460 M4 Blade Server is a four-socket blade server that consists of two UCS Scalable M4 Blade Modules that are attached together with the UCS Scalability Connector. Up to two Cisco UCS B460 M4 Blade Servers can be installed in the Cisco UCS 5108 chassis.

Figure 9. Cisco UCS B460 M4 Blade Server

1   Drive bay 1
2   Drive bay 2
3   Drive bay 3
4   UCS Scalability Connector
5   Drive bay 4

Adapter Cards

Depending on the server model, one to three adapter cards reside in each blade server, providing failover connectivity to each FEX in the chassis. The following models are available, and others are released on an ongoing basis:

Cisco UCS Virtual Interface Card 1240

The Cisco UCS Virtual Interface Card 1240 is a four-port 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) designed exclusively for the M3 generation of Cisco UCS B-Series blade servers. When used in combination with an optional port expander, the Cisco UCS VIC 1240 can be expanded to eight ports of 10 Gigabit Ethernet.

The Cisco UCS VIC 1240 enables a policy-based, stateless, agile server infrastructure that can present up to 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1240 supports Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment.

Cisco UCS Virtual Interface Card 1280

The Cisco UCS Virtual Interface Card 1280 (UCS-VIC-M82-8P) is an eight-port 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)-capable mezzanine card designed exclusively for Cisco UCS B-Series Blade Servers. The card enables a policy-based, stateless, agile server infrastructure that can present up to 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS Virtual Interface Card 1280 supports Cisco Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS Fabric Interconnect ports to virtual machines, simplifying server virtualization deployment.

Cisco UCS M81KR Virtual Interface Card

The Cisco UCS M81KR Virtual Interface Card is a virtualization-optimized Fibre Channel over Ethernet (FCoE) adapter card. The virtual interface card is a dual-port 10 Gigabit Ethernet adapter card that supports up to 128 Peripheral Component Interconnect Express (PCIe) standards-compliant virtual interfaces that can be dynamically configured so that both their interface type (network interface card [NIC] or host bus adapter [HBA]) and identity (MAC address and worldwide name [WWNN]) are established using just-in-time provisioning. In addition, the Cisco UCS M81KR supports network interface virtualization and Cisco VN-Link technology.

The Cisco UCS M81KR is designed for both traditional operating system and virtualization environments. It is optimized for virtualized environments, for organizations that seek increased mobility in their physical environments, and for data centers that want reduced TCO through NIC, HBA, cabling, and switch reduction.

The Cisco UCS M81KR presents up to 128 virtual interfaces to the operating system on a given blade. The 128 virtual interfaces can be dynamically configured by Cisco UCS Manager as either Fibre Channel or Ethernet devices. Deployment of applications using multiple Ethernet and Fibre Channel interfaces is no longer constrained by the available physical adapters. To an operating system or a hypervisor running on a Cisco UCS B-Series Blade Server, the virtual interfaces appear as regular PCIe devices.

The Cisco UCS M81KR has built-in architectural support enabling the virtual machine to directly access the adapter. I/O bottlenecks and memory performance can be improved by providing virtual machines direct access to hardware I/O devices, eliminating the overhead of embedded software switches.

The Cisco UCS M81KR also brings adapter consolidation to physical environments. The adapter can be defined as multiple different NICs and HBAs. For example, one adapter card can replace two quad-port NICs and two single-port HBAs, resulting in fewer NICs, HBAs, switches, and cables.

Cisco UCS 82598KR-CI 10 Gigabit Ethernet Adapter

The Cisco UCS 82598KR-CI 10 Gigabit Ethernet adapter is based on the Intel 82598 10 Gigabit Ethernet controller, which is designed for efficient high-performance Ethernet transport. It provides a solution for data center environments that need low-latency 10 Gigabit Ethernet transport capability, and a dual-port connection to the midplane of the blade server chassis.

The Cisco UCS 82598KR-CI supports Intel Input/Output Acceleration Technology (I/OAT) as well as virtual queues for I/O virtualization. The adapter is energy efficient and can also help reduce CPU utilization by providing large segment offload (LSO) and TCP segmentation offload (TSO). The Cisco UCS 82598KR-CI uses Intel Virtual Machine Device Queue (VMDq) technology for the efficient routing of packets to the appropriate virtual machine.

Cisco UCS M71KR-E Emulex Converged Network Adapter

The Cisco UCS M71KR-E Emulex Converged Network Adapter (CNA) is an Emulex-based Fibre Channel over Ethernet (FCoE) adapter card that provides connectivity for Cisco UCS B-Series Blade Servers in the Cisco Unified Computing System.

Designed specifically for the Cisco UCS blades, the adapter provides a dual-port connection to the midplane of the blade server chassis. The Cisco UCS M71KR-E uses an Intel 82598 10 Gigabit Ethernet controller for network traffic and an Emulex 4-Gbps Fibre Channel controller for Fibre Channel traffic all on the same adapter card. The Cisco UCS M71KR-E presents two discrete Fibre Channel host bus adapter (HBA) ports and two Ethernet network ports to the operating system.

The Cisco UCS M71KR-E provides both 10 Gigabit Ethernet and 4-Gbps Fibre Channel functions using drivers from Emulex, providing:

  • Compatibility with current Emulex adapter-based SAN environments and drivers
  • Consolidation of LAN and SAN traffic over the same adapter card and fabric, reducing the overall number of network interface cards (NICs), HBAs, cables, and switches
  • Integrated management with Cisco UCS Manager

Cisco UCS M71KR-Q QLogic Converged Network Adapter

The Cisco UCS M71KR-Q QLogic Converged Network Adapter (CNA) is a QLogic-based Fibre Channel over Ethernet (FCoE) adapter card that provides connectivity for Cisco UCS B-Series Blade Servers in the Cisco Unified Computing System.

Designed specifically for the Cisco UCS blades, the adapter provides a dual-port connection to the midplane of the blade server chassis. The Cisco UCS M71KR-Q uses an Intel 82598 10 Gigabit Ethernet controller for network traffic and a QLogic 4-Gbps Fibre Channel controller for Fibre Channel traffic, all on the same adapter card. The Cisco UCS M71KR-Q presents two discrete Fibre Channel host bus adapter (HBA) ports and two Ethernet network ports to the operating system.

The Cisco UCS M71KR-Q provides both 10 Gigabit Ethernet and 4-Gbps Fibre Channel functions using drivers from QLogic, providing:

  • Compatibility with current QLogic adapter-based SAN environments and drivers
  • Consolidation of LAN and SAN traffic over the same adapter card and fabric, reducing the overall number of network interface cards (NICs), HBAs, cables, and switches
  • Integrated management with Cisco UCS Manager

Cisco UCS 6324 Fabric Interconnect

The Cisco UCS 6324 Fabric Interconnect (UCS-FI-M-6324) is an integrated fabric interconnect and I/O module. It can be configured only with the UCSB-5108-AC2 and UCSB-5108-DC2 versions of the chassis.

Figure 10. Cisco UCS 6324 Fabric Interconnect

  1. Management port
  2. Power-on LED
  3. USB port
  4. Port LEDs
  5. QSFP+ licensed server port
  6. Console management port
  7. Ejector captive screws
  8. Four SFP+ unified ports

The Cisco UCS 6324 Fabric Interconnect connects directly to external Cisco Nexus switches through 10-Gigabit Ethernet ports and Fibre Channel over Ethernet (FCoE) ports.

The Cisco UCS 6324 Fabric Interconnect fits into the back of the Cisco UCS Mini chassis. Each Cisco UCS Mini chassis can support up to two UCS 6324 Fabric Interconnects, which enables increased capacity as well as redundancy.

Cisco UCS 2104XP FEXes

Figure 11. Cisco UCS 2104 IO Module

  1. Fabric extender status indicator LED
  2. Link status indicator LEDs
  3. Connection ports (to the fabric interconnect)
  4. Captive screws for the insertion latches

Cisco UCS 2100 Series FEXes bring the unified fabric into the blade server enclosure, providing 10 Gigabit Ethernet connections between blade servers and the fabric interconnect, simplifying diagnostics, cabling, and management.

The Cisco UCS 2104 (N20-I6584) extends the I/O fabric between the fabric interconnects and the Cisco UCS 5100 Series Blade Server Chassis, enabling a lossless and deterministic Fibre Channel over Ethernet (FCoE) fabric to connect all blades and chassis together. Because the FEX is similar to a distributed line card, it does not do any switching and is managed as an extension of the fabric interconnects. This approach removes switching from the chassis, reducing overall infrastructure complexity and enabling the Cisco Unified Computing System to scale to many chassis without multiplying the number of switches needed. It reduces TCO and allows all chassis to be managed as a single, highly available management domain.

The Cisco 2100 Series also manages the chassis environment (the power supply and fans as well as the blades) in conjunction with the fabric interconnect. Therefore, separate chassis management modules are not required.

Cisco UCS 2100 Series FEXes fit into the back of the Cisco UCS 5100 Series chassis. Each Cisco UCS 5100 Series chassis can support up to two FEXes, enabling increased capacity as well as redundancy.

LEDs

There are port activity LEDs and an LED that indicates connectivity to the servers in the chassis.

Buttons

No buttons are on the FEX.

Connectors

I/O ports support SFP+ 10 Gb Ethernet connections. There is also a console connection for use by Cisco diagnostic technicians. It is not intended for customer use.

Cisco UCS 2200 Series FEXes

Figure 12. Cisco UCS 2208 FEX (UCS-IOM-2208XP)

  1. Fabric extender status indicator LED
  2. Link status indicator LEDs
  3. Connection ports (to the fabric interconnect)
  4. Captive screws for the insertion latches

Figure 13. Cisco UCS 2204XP FEX

  1. Fabric extender status indicator LED
  2. Link status indicator LEDs
  3. Connection ports (to the fabric interconnect)
  4. Captive screws for the insertion latches

Cisco UCS 2200 Series FEXes bring the unified fabric into the blade server enclosure, providing 10 Gigabit Ethernet connections between blade servers and the fabric interconnect, simplifying diagnostics, cabling, and management.

The Cisco UCS 2200 Series extends the I/O fabric between the fabric interconnects and the Cisco UCS 5100 Series Blade Server Chassis, enabling a lossless and deterministic Fibre Channel over Ethernet (FCoE) fabric to connect all blades and chassis together. Because the FEX is similar to a distributed line card, it does not do any switching and is managed as an extension of the fabric interconnects. This approach removes switching from the chassis, reducing overall infrastructure complexity and enabling the Cisco Unified Computing System to scale to many chassis without multiplying the number of switches needed. It reduces TCO and allows all chassis to be managed as a single, highly available management domain.

The Cisco 2200 Series also manages the chassis environment (the power supply and fans as well as the blades) in conjunction with the fabric interconnect. Therefore, separate chassis management modules are not required.

Cisco UCS 2200 Series FEXes fit into the back of the Cisco UCS 5100 Series chassis. Each Cisco UCS 5100 Series chassis can support up to two FEXes, enabling increased capacity as well as redundancy.

LEDs

There are port activity LEDs and an LED that indicates connectivity to the servers in the chassis.

Buttons

No buttons are on the FEX.

Connectors

I/O ports support SFP+ 10 Gb Ethernet connections. There is also a console connection for use by Cisco diagnostic technicians. It is not intended for customer use.

Power Distribution Unit (PDU)

The AC PDU (N01-UAC1) provides load balancing between the installed power supplies, as well as distributing power to the other chassis components. DC versions of the chassis use a different PDU with appropriate connectors. The PDU is not field-serviceable, and converting an AC chassis to a DC chassis by swapping the PDU is not supported, as the PDU is not separately orderable.

LEDs

No LEDs are on the PDU.

Buttons

No buttons are on the PDU.

Connectors

The AC version of the PDU has four power connectors rated for 15.5 A, 200-240 V at 50-60 Hz. Use only power cords that are certified by the relevant country safety authority or that are installed by a licensed or certified electrician in accordance with the relevant electrical codes. All connectors, plugs, receptacles, and cables must be rated to at least the amperage of the inlet connector on the PSU or be independently fused in accordance with the relevant electrical code. See Supported AC Power Cords and Plugs for more information.

The DC version of the PDU has eight dual-post lug power connections, four positive and four negative. A single dual-post lug grounding connection is also provided. The HDVC version of the PDU uses one Andersen SAF-D-GRID(R) connector per power supply.

Fan Modules

The chassis can accept up to eight fan modules (N20-FAN5). A chassis must have filler plates in place if no fan will be installed in a slot for an extended period.

LEDs

There is one LED that indicates the fan module’s operational state. See Interpreting LEDs for details.

Buttons and Connectors

No buttons or connectors are on a fan module.

Power Supplies

Different power supplies are available to work with the AC (UCSB-PSU-2500ACPL or N20-PAC5-2500W) or DC (UCSB-PSU-2500DC48) versions of the chassis.

When configured with the Cisco UCS 6324 Fabric Interconnect, only the following power supplies are supported: UCSB-PSU-2500ACDV dual-voltage supply and UCSB-PSU-2500DC48 -48V DC power supply.

To determine the number of power supplies needed for a given configuration, use the Cisco UCS Power Calculator tool.

LEDs

Two LEDs indicate power connection presence, power supply operation, and fault states. See Interpreting LEDs for details.

Buttons

There are no buttons on a power supply.

Connectors

The power connections are at the rear of the chassis on the PDU, with different types for AC, DC, or HVDC input. Four hot-swappable power supplies are accessible from the front of the chassis. These power supplies can be configured to support non-redundant, N+1 redundant, and grid-redundant configurations.

Power Supply Redundancy

Power supply redundancy functions identically for AC and DC configured systems. When planning power supply redundancy, take the following into consideration:

  • AC power supplies are all single phase and have a single input for connectivity to the customer power source (a rack PDU such as the Cisco RP Series PDU or equivalent).
  • The number of power supplies required to power a chassis varies depending on the following factors:
    • The total "Maximum Draw" required to power all the components configured within that chassis—such as I/O modules, fans, blade servers (CPU and memory configuration of the blade servers).
    • The Desired Power Redundancy for the chassis. The supported power configurations are non-redundant, N+1 redundancy (or any requirement greater than N+1), and grid redundancy.

To configure redundancy, see the Configuration Guide for the version of Cisco UCS Manager that you are using. The configuration guides are available at the following URL: http:/​/​www.cisco.com/​en/​US/​products/​ps10281/​products_​installation_​and_​configuration_​guides_​list.html.

Non-redundant Mode

In non-redundant mode, the system may go down with the loss of any power supply or power grid associated with a particular chassis. We do not recommend running a production system in non-redundant mode. To operate in non-redundant mode, each chassis should have at least two power supplies installed. Supplies that are not used by the system are placed into standby; which supplies go into standby depends on the installation order, not on the slot number. The load is balanced across the active power supplies, excluding any supplies in standby.

When using Cisco UCS Release 1.3(1) or earlier releases, small configurations that use less than 2500 W may be powered up on a single power supply. When using Cisco UCS Release 1.4(1) and later releases, the chassis requires a minimum of two power supplies.


Note


In a non-redundant system, power supplies can be in any slot. Installing fewer than the required number of power supplies results in undesired behavior such as server blade shutdown. Installing more than the required number of power supplies may result in lower power supply efficiency. At most, this mode requires two power supplies.


N+1 Redundancy

In an N+1 redundancy configuration, the chassis contains enough power supplies to satisfy non-redundant operation, plus one additional power supply for redundancy. All the power supplies that are participating in N+1 redundancy are turned on and equally share the power load for the chassis. If any additional power supplies are installed, Cisco UCS Manager recognizes these unnecessary power supplies and places them on standby.

If a power supply should fail, the surviving supplies can provide power to the chassis. In addition, UCS Manager turns on any "turned-off" power supplies to bring the system back to N+1 status.

To provide N+1 protection, the following number of power supplies is recommended:

  • Three power supplies are recommended if the power configuration for that chassis requires greater than 2500 W or if using UCS Release 1.4(1) and later releases
  • Two power supplies are sufficient if the power configuration for that chassis requires less than 2500 W or the system is using UCS Release 1.3(1) or earlier releases
  • Four power supplies are recommended when running the dual-voltage power supply from a 100-120 V source.

Adding an additional power supply to either of these configurations will provide an extra level of protection. Cisco UCS Manager turns on the extra power supply in the event of a failure and restores N+1 protection.


Note


An N+1 redundant system has either two or three power supplies, which may be in any slot.


Grid Redundancy

The grid redundant configuration is sometimes used when you have two power sources to power a chassis or you require greater than N+1 redundancy. If one source fails (which causes a loss of power to one or two power supplies), the surviving power supplies on the other power circuit continue to provide power to the chassis. A common reason for using grid redundancy is if the rack power distribution is such that power is provided by two PDUs and you want the grid redundancy protection in the case of a PDU failure.

To provide grid redundant (or greater than N+1) protection, the following number of power supplies is recommended:

  • Four power supplies are recommended if the power configuration for that chassis requires greater than 2500W or if using Cisco UCS Release 1.4(1) and later releases
  • Two power supplies are recommended if the power configuration for that chassis requires less than 2500W or the system is using Cisco UCS Release 1.3(1) or earlier releases

Note


Both grids in a power redundant system should have the same number of power supplies. If your system is configured for grid redundancy, slots 1 and 2 are assigned to grid 1 and slots 3 and 4 are assigned to grid 2. If there are only two power supplies (PS) in a grid-redundant chassis, they should be in slots 1 and 3. Slot and cord connection numbering is shown below.
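The sizing rules described in this section can be sketched as a small helper. This is a hypothetical illustration only, not a Cisco tool; the function name and parameters are assumptions, and real sizing should be done with the Cisco UCS Power Calculator.

```python
# Hypothetical sketch of the recommended power-supply counts described
# in this section. Assumes 2500 W supplies and the release-dependent
# rules stated above; the 100-120 V dual-voltage case (four supplies
# for N+1) is not modeled.

def recommended_psus(max_draw_w: float, mode: str,
                     ucs_14_or_later: bool = True) -> int:
    """Return the recommended number of power supplies for a chassis."""
    # More than one supply is needed when the draw exceeds a single
    # 2500 W supply, and always on UCS Release 1.4(1) and later.
    needs_two = max_draw_w > 2500 or ucs_14_or_later
    if mode == "non-redundant":
        return 2 if needs_two else 1   # at most two in this mode
    if mode == "n+1":
        return 3 if needs_two else 2   # non-redundant count plus one
    if mode == "grid":
        return 4 if needs_two else 2   # equal supplies on each grid
    raise ValueError(f"unknown redundancy mode: {mode!r}")
```

For example, a chassis drawing 3000 W with N+1 redundancy would call for three supplies; in grid mode, the supplies split evenly between grid 1 (slots 1 and 2) and grid 2 (slots 3 and 4).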


Figure 14. Power Supply Bay and Connector Numbering

LEDs

LEDs on both the chassis and the modules installed within the chassis identify operational states, both separately and in combination with other LEDs.

LED Locations

Figure 15. LEDs on a Cisco UCS 5108 Server Chassis—Front View

Figure 16. LEDs on the Cisco UCS 5108 Server Chassis—Rear View

Figure 17. Cisco UCS 5108 Server Chassis—Rear View with the Cisco UCS 6324 Fabric Interconnect

Interpreting LEDs

Table 2 Chassis, Fan, and Power Supply LEDs

Beaconing LED and button
  • Off: Beaconing not enabled.
  • Blinking blue 1 Hz: Beaconing to locate a selected chassis. If the LED is not blinking, the chassis is not selected. You can initiate beaconing in UCS Manager or with the button.

Chassis connections
  • Off: No power.
  • Amber: No I/O module is installed or the I/O module is booting.
  • Green: Normal operation.

Chassis health
  • Solid amber: Indicates a component failure or a major over-temperature alarm.

Fan Module
  • Off: No power to the chassis or the fan module was removed from the chassis.
  • Amber: Fan module restarting.
  • Green: Normal operation.
  • Blinking amber: The fan module has failed.

Power Supply OK
  • Off: No power to the slot.
  • Green: Normal operation.
  • Blinking green: AC power is present but the power supply is either in redundancy standby mode or is not fully seated.

Power Supply Fail
  • Off: Normal operation.
  • Amber: Over-voltage failure or over-temperature alarm.

Table 3 I/O Module LEDs

Body
  • Off: No power.
  • Green: Normal operation.
  • Amber: Booting or minor temperature alarm.
  • Blinking amber: POST error or other error condition.

Port 1-4
  • Off: Link down.
  • Green: Link up and operationally enabled.
  • Amber: Link up and administratively disabled.
  • Blinking amber: POST error or other error condition.

Table 4 Cisco UCS 6324 Fabric Interconnect LEDs

Body
  • Off: No power.
  • Green: Normal operation.
  • Amber: Booting or minor temperature alarm.
  • Blinking amber: Stopped due to user intervention or unable to come online, or major temperature alarm.

Port 1-4
  • Off: Link enabled but not connected.
  • Green: Link connected.
  • Amber: Operator disabled.
  • Blinking amber: Disabled due to error.

Table 5 Blade Server LEDs

Power
  • Off: Power off.
  • Green: Normal operation.
  • Amber: Standby.

Link
  • Off: None of the network links are up.
  • Green: At least one network link is up.

Health
  • Off: Power off.
  • Green: Normal operation.
  • Amber: Minor error.
  • Blinking amber: Critical error.

Activity (Disk Drive)
  • Off: Inactive.
  • Green: Outstanding I/O to disk drive.

Health (Disk Drive)
  • Off: No fault.
  • Amber: Some fault.