Product Overview

Table Of Contents

Product Overview

Features and Benefits

Components

Cisco UCS 5108 Server Chassis (N20-C6508 or UCSB-5108-DC)

LEDs

Buttons

Connectors

Midplane

Blade Servers

Cisco UCS B200 Blade Servers

Cisco UCS B230 Blade Servers

Cisco UCS B250 Blade Servers

Cisco UCS B440 Blade Servers

Adapter Cards

Cisco UCS Virtual Interface Card 1280

Cisco UCS M81KR Virtual Interface Card

Cisco UCS 82598KR-CI 10 Gigabit Ethernet Adapter

Cisco UCS M71KR-E Emulex Converged Network Adapter

Cisco UCS M71KR-Q QLogic Converged Network Adapter

Cisco UCS 2104XP I/O Modules (N20-I6584)

LEDs

Buttons

Connectors

Cisco UCS 2200 Series I/O Modules

LEDs

Buttons

Connectors

Power Distribution Unit (PDU) (N01-UAC1)

LEDs

Buttons

Connectors

Fan Modules (N20-FAN5)

LEDs

Buttons and Connectors

Power Supplies (N20-PAC5-2500W and UCSB-PSU-2500DC48)

LEDs

Buttons

Connectors

Power Supply Redundancy

LEDs

LED Locations

Interpreting LEDs


Product Overview


The Cisco UCS 5108 server chassis and its components are part of the Cisco Unified Computing System (UCS), which combines the chassis, its two I/O modules, and the Cisco UCS Fabric Interconnects to provide advanced options and capabilities in server and data management. All servers are managed through the Cisco UCS Manager GUI or CLI.

The Cisco UCS 5108 server chassis system (Figure 1-1) consists of the following components:

Cisco UCS 5108 server chassis-AC version (N20-C6508)

Cisco UCS 5108 server chassis-DC version (UCSB-5108-DC)

Cisco UCS B200 blade servers (N20-B6620-1 for M1 or N20-B6625-1 for M2)—up to eight half-width blade servers, each containing two CPUs and holding up to two hard drives capable of RAID 0 or 1

Cisco UCS B230 blade servers (N20-B6730)—up to eight half-width blade servers, each containing two CPUs and holding up to two SSD drives capable of RAID 0 or 1

Cisco UCS B250 blade servers (N20-B6620-2 for M1 or N20-B6625-2 for M2)—up to four full-width blade servers, each containing two CPUs and holding up to two hard drives capable of RAID 0 or 1

Cisco UCS B440 blade servers (N20-B6740-2)—up to four full-width blade servers, each containing four CPUs and holding up to four hard drives capable of RAID 0, 1, 5, and 6

Cisco UCS 2104XP I/O Module (N20-I6584)—up to two I/O modules, each providing four ports of 10-Gb Ethernet, Cisco Data Center Ethernet, and Fibre Channel over Ethernet (FCoE) connection to the fabric interconnect

Cisco UCS 2208XP I/O Module (UCS-IOM-2208XP)—up to two I/O modules, each providing eight universal ports configurable as a 10-Gb Ethernet, Cisco Data Center Ethernet, or Fibre Channel over Ethernet (FCoE) connection to the fabric interconnect

Cisco UCS 2204XP I/O Module (UCS-IOM-2204XP)—up to two I/O modules, each providing four universal ports configurable as a 10-Gb Ethernet, Cisco Data Center Ethernet, or Fibre Channel over Ethernet (FCoE) connection to the fabric interconnect

A number of SFP+ choices from copper to fiber

Power supplies (N20-PAC5-2500W, UCSB-PSU-2500ACPL, or N20-DC-2500)—up to four 2500 W hot-swappable power supplies

Fan modules (N20-FAN5)—eight hot-swappable fan modules

Figure 1-1 View of a Fully Populated Cisco UCS 5108 Server Chassis (AC Version Shown)

The rear of the chassis contains eight hot-swappable fans, four power connectors (one for each power supply) in a replaceable power distribution unit, and two I/O bays for I/O modules.

Features and Benefits

The Cisco UCS 5108 revolutionizes the use and deployment of blade-based systems. By incorporating unified fabric, embedded management, and I/O module technology, the Cisco Unified Computing System enables the chassis to have fewer physical components, require no independent management, and be more energy efficient than traditional blade server chassis.

This simplicity eliminates the need for dedicated chassis management and blade switches, reduces cabling, and enables the Cisco Unified Computing System to scale to 40 chassis without adding complexity. The Cisco UCS 5108 chassis is a critical component in delivering the Cisco Unified Computing System benefits of data center simplicity and IT responsiveness.

Table 1-1 summarizes the features and benefits of the Cisco UCS 5108.

Table 1-1 Features and Benefits

Feature: Management by Cisco UCS Manager
Benefit: Reduces total cost of ownership by removing management modules from the chassis, making the chassis stateless. Provides a single, highly available management domain for all system chassis, reducing administrative tasks.

Feature: Unified fabric
Benefit: Decreases TCO by reducing the number of network interface cards (NICs), host bus adapters (HBAs), switches, and cables needed.

Feature: Support for one or two Cisco UCS 2100 Series or Cisco UCS 2200 Series I/O modules
Benefit: Eliminates switches from the chassis, along with the complex configuration and management of those switches, allowing a system to scale without adding complexity and cost. Allows use of two I/O modules for redundancy or aggregation of bandwidth. Enables bandwidth scaling based on application needs; blades can be configured from 1.25 Gbps to 10 Gbps or more.

Feature: Auto discovery
Benefit: Requires no configuration; like all components in the Cisco Unified Computing System, chassis are automatically recognized and configured by Cisco UCS Manager.

Feature: High-performance midplane
Benefit: Provides investment protection. Supports up to 2x 40 Gb Ethernet for every blade server slot when available. Provides 8 blades with 1.2 terabits (Tb) of available Ethernet throughput for future I/O requirements. Provides a reconfigurable chassis to accommodate a variety of form factors and functions.

Feature: Redundant hot-swappable power supplies and fans
Benefit: Provides high availability in multiple configurations. Increases serviceability. Provides uninterrupted service during maintenance. Available configured for AC or DC environments (mixing not supported).

Feature: Hot-pluggable blade servers and I/O modules
Benefit: Provides uninterrupted service during maintenance and server deployment.

Feature: Comprehensive monitoring
Benefit: Provides extensive environmental monitoring on each chassis. Allows use of user thresholds to optimize environmental management of the chassis.

Feature: Efficient front-to-back airflow
Benefit: Helps reduce power consumption and increase component reliability.

Feature: Tool-free installation
Benefit: Requires no specialized tools for chassis installation. Provides mounting rails for easy installation and servicing.

Feature: Mixed blade configurations
Benefit: Allows up to 8 half-width or 4 full-width blade servers, or any combination thereof, for maximum flexibility.


Components

Cisco UCS 5108 Server Chassis (N20-C6508 or UCSB-5108-DC)

The Cisco UCS 5100 Series Blade Server Chassis is a scalable and flexible blade server chassis for today's and tomorrow's data center while helping reduce total cost of ownership. It is available configured for AC or DC power environments.

Cisco's first blade server chassis offering, the Cisco UCS 5108 Blade Server Chassis (Figure 1-1), is six rack units (6 RU) high and can mount in an industry-standard 19-inch rack with square holes (such as the Cisco R Series Racks) or in round hole racks when an adapter is used. The chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half- and full-width blade form factors.

Four single-phase, hot-swappable AC or DC power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support nonredundant, N+1 redundant, and grid-redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors (one per power supply), and two I/O bays for I/O modules. A passive midplane provides up to an effective maximum of 20 Gbps of I/O bandwidth per server slot and up to 40 Gbps of I/O bandwidth for two slots; the midplane is built to eventually support 80 Gbps per slot.

Scalability is dependent on both hardware and software. For more information, refer to the "I/OM Upgrade Considerations" section and to the appropriate UCS software release notes.

LEDs

There are two LEDs on the chassis, indicating system connectivity and failure warnings. See LED Locations for details. There is also a flashing blue beaconing LED that can be triggered manually or remotely from UCS Manager.

Buttons

The beaconing function LED is also a feature on/off button. When triggered, beaconing of the server chassis is observable remotely from UCS Manager.

Connectors

There are no user connectors such as RJ-45 ports on the chassis itself.

Midplane

The integral chassis midplane supports:

40G total bandwidth to each of two I/O Modules

Auto-discovery of all components

Redundant data and management paths

10G Base-KR

The midplane is an entirely passive device.

Blade Servers

The Cisco UCS B-Series Blade Servers are based on industry-standard server technologies and provide:

Up to two or four Intel multi-core processors, depending on the server

Front-accessible, hot-swappable hard drives or solid-state disk (SSD) drives

Support for up to one or two dual-port adapter card connections for up to 40 Gbps of redundant I/O throughput

Industry-standard double-data-rate 3 (DDR3) memory

Remote management through an integrated service processor that also executes policy established in Cisco UCS Manager software

Local keyboard, video, and mouse (KVM) and serial console access through a front console port on each server

Out-of-band access by remote KVM, Secure Shell (SSH) Protocol, and virtual media (vMedia) as well as Intelligent Platform Management Interface (IPMI)

The Cisco UCS B-Series offers four blade server models.

Cisco UCS B200 Blade Servers

For full service and installation instructions, refer to the Cisco UCS B200 Blade Server Installation and Service Note. You may install up to eight UCS B200 Blade Servers in a chassis.

1. Paper tab for server name or serial numbers
2. Blade ejector handle
3. Ejector captive screw
4. Hard drive bay 1
5. Hard drive bay 2
6. Power button and LED
7. Network link status LED
8. Blade health LED
9. Console connector
10. Reset button access
11. Beaconing LED and button

LEDs

The LED indicators indicate whether the blade server is in active or standby mode, the status of the network link, the overall health of the blade server, and whether the server is set to give a flashing blue beaconing indication. See Interpreting LEDs for details.

The removable hard disks also have LEDs indicating hard disk access activity and hard disk health.

Buttons

The Reset button is just inside the chassis and must be pressed using the tip of a paper clip or a similar item. Hold the button down for five seconds and then release it to restart the server if other methods of restarting are not working.

The beaconing function for an individual server may be turned on or off by pressing the combination button and LED. See Interpreting LEDs for details.

The power button and LED allows you to manually take a server temporarily out of service but leave it in a state where it can be restarted quickly.

Connectors

A console port is provided to give a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port uses the KVM dongle device included in the chassis accessory kit. See KVM Cable for more information.

Cisco UCS B230 Blade Servers

For full service and installation instructions, refer to the Cisco UCS B230 Blade Server Installation and Service Note. You may install up to eight UCS B230 Blade Servers in a chassis.

Figure 2 Cisco UCS B230 (N20-B6730) Front Panel

1. SSD 1 Activity LED
2. SSD 1 Fault/Locate LED
3. SSD sled in Bay 1
4. SSD 2 Activity LED
5. SSD 2 Fault LED
6. Ejector lever captive screw
7. Ejector lever
8. SSD sled in Bay 2
9. Beaconing LED and button
10. System Activity LED
11. Blade health LED
12. Reset button access
13. Power button and LED
14. Console connector
15. Asset tag

LEDs

The LED indicators indicate whether the blade server is in active or standby mode, the status of the network link, the over all health of the blade server, and whether the server is set to give a flashing blue beaconing indication. See Interpreting LEDs for details.

The removable SSD drives also have LEDs on the server front panel indicating disk activity and health.

Buttons

The Reset button is just inside the chassis and must be pressed using the tip of a paper clip or a similar item. Hold the button down for five seconds and then release it to restart the server if other methods of restarting are not working.

The beaconing function for an individual server may be turned on or off by pressing the combination button and LED. See Interpreting LEDs for details.

The power button and LED allows you to manually take a server temporarily out of service but leave it in a state where it can be restarted quickly.

Connectors

A console port is provided to give a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port uses the KVM dongle device included in the chassis accessory kit. See KVM Cable for more information.

Cisco UCS B250 Blade Servers

For full service and installation instructions, refer to the Cisco UCS B250 Blade Server Installation and Service Note.

1. Hard drive bay 1
2. Hard drive bay 2
3. Left ejector captive screw
4. Left blade ejector handle
5. Paper tab for server name or serial numbers
6. Right blade ejector handle
7. Right ejector captive screw
8. Power button and LED
9. Network link status LED
10. Blade health LED
11. Console connector
12. Reset button access
13. Beaconing LED and button

LEDs

The LED indicators indicate whether the blade server is in active or standby mode, the status of the network link, the overall health of the blade server, and whether the server is set to give a flashing blue beaconing indication. See Interpreting LEDs for details.

The removable hard disks also have LEDs indicating hard disk access activity and hard disk health.

Buttons

The Reset button is just inside the chassis and must be pressed using the tip of a paper clip or a similar item. Hold the button down for five seconds and then release it to restart the server if other methods of restarting are not working.

The beaconing function for an individual server may be turned on or off by pressing the combination button and LED. See Interpreting LEDs for details.

The power button and LED allows you to manually take a server temporarily out of service but leave it in a state where it can be restarted quickly.

Connectors

A console port is provided to give a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port uses the KVM dongle device included in the chassis accessory kit. See KVM Cable for more information.

Cisco UCS B440 Blade Servers

For full service and installation instructions, refer to the Cisco UCS B440 High Performance Blade Server Installation and Service Note.

1. Hard drive bay 1
2. Hard drive bay 2
3. Hard drive bay 3
4. Hard drive bay 4
5. RAID battery backup module (BBU)
6. Left ejector thumbscrew
7. Left ejector handle
8. Right ejector handle
9. Right ejector thumbscrew
10. Power button and LED
11. Network link status LED
12. Blade health LED
13. Local console connection
14. Reset button access
15. Locate button and LED

LEDs

The LED indicators indicate whether the blade server is in active or standby mode, the status of the network link, the overall health of the blade server, and whether the server is set to give a flashing blue beaconing indication. See Interpreting LEDs for details.

The removable hard disks also have LEDs indicating hard disk access activity and hard disk health.

Buttons

The Reset button is just inside the chassis and must be pressed using the tip of a paper clip or a similar item. Hold the button down for five seconds and then release it to restart the server if other methods of restarting are not working.

The beaconing function for an individual server may be turned on or off by pressing the combination button and LED. See Interpreting LEDs for details.

The power button and LED allows you to manually take a server temporarily out of service but leave it in a state where it can be restarted quickly.

Connectors

A console port is provided to give a direct connection to a blade server to allow operating system installation and other management tasks to be done directly rather than remotely. The port uses the KVM dongle device included in the chassis accessory kit. See KVM Cable for more information.

Adapter Cards

Cisco UCS Virtual Interface Card 1280

The Cisco UCS Virtual Interface Card 1280 (UCS-VIC-M82-8P) is an eight-port 10 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)-capable mezzanine card designed exclusively for Cisco UCS B-Series Blade Servers. The card enables a policy-based, stateless, agile server infrastructure that can present up to 256 PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS Virtual Interface Card 1280 supports Cisco Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS Fabric Interconnect ports to virtual machines, simplifying server virtualization deployment.

Cisco UCS M81KR Virtual Interface Card

The Cisco UCS M81KR Virtual Interface Card is a virtualization-optimized Fibre Channel over Ethernet (FCoE) adapter card. The virtual interface card is a dual-port 10 Gigabit Ethernet adapter card that supports up to 128 Peripheral Component Interconnect Express (PCIe) standards-compliant virtual interfaces that can be dynamically configured so that both their interface type (network interface card [NIC] or host bus adapter [HBA]) and identity (MAC address and worldwide name [WWNN]) are established using just-in-time provisioning. In addition, the Cisco UCS M81KR supports network interface virtualization and Cisco VN-Link technology.

Unique to the Cisco Unified Computing System, the Cisco UCS M81KR is designed for both traditional operating system and virtualization environments. It is optimized for virtualized environments, for organizations that seek increased mobility in their physical environments, and for data centers that want reduced TCO through NIC, HBA, cabling, and switch reduction.

The Cisco UCS M81KR presents up to 128 virtual interfaces to the operating system on a given blade. The 128 virtual interfaces can be dynamically configured by Cisco UCS Manager as either Fibre Channel or Ethernet devices. Deployment of applications using multiple Ethernet and Fibre Channel interfaces is no longer constrained by the available physical adapters. To an operating system or a hypervisor running on a Cisco UCS B-Series Blade Server, the virtual interfaces appear as regular PCIe devices.

The Cisco UCS M81KR has built-in architectural support enabling the virtual machine to directly access the adapter. I/O bottlenecks and memory performance can be improved by providing virtual machines direct access to hardware I/O devices, eliminating the overhead of embedded software switches.

The Cisco UCS M81KR also brings adapter consolidation to physical environments. The adapter can be defined as multiple different NICs and HBAs. For example, one adapter card can replace two quad-port NICs and two single-port HBAs, resulting in fewer NICs, HBAs, switches, and cables.

Cisco UCS 82598KR-CI 10 Gigabit Ethernet Adapter

The Cisco UCS 82598KR-CI 10 Gigabit Ethernet adapter is based on the Intel 82598 10 Gigabit Ethernet controller, which is designed for efficient high-performance Ethernet transport. It provides a solution for data center environments that need low-latency 10 Gigabit Ethernet transport capability, and a dual-port connection to the midplane of the blade server chassis.

The Cisco UCS 82598KR-CI supports Intel Input/Output Acceleration Technology (I/OAT) as well as virtual queues for I/O virtualization. The adapter is energy efficient and can also help reduce CPU utilization by providing large segment offload (LSO) and TCP segmentation offload (TSO). The Cisco UCS 82598KR-CI uses Intel Virtual Machine Device Queue (VMDq) technology for the efficient routing of packets to the appropriate virtual machine.

Cisco UCS M71KR-E Emulex Converged Network Adapter

The Cisco UCS M71KR-E Emulex Converged Network Adapter (CNA) is an Emulex-based Fibre Channel over Ethernet (FCoE) adapter card that provides connectivity for Cisco UCS B-Series Blade Servers in the Cisco Unified Computing System.

Designed specifically for the Cisco UCS blades, the adapter provides a dual-port connection to the midplane of the blade server chassis. The Cisco UCS M71KR-E uses an Intel 82598 10 Gigabit Ethernet controller for network traffic and an Emulex 4-Gbps Fibre Channel controller for Fibre Channel traffic all on the same adapter card. The Cisco UCS M71KR-E presents two discrete Fibre Channel host bus adapter (HBA) ports and two Ethernet network ports to the operating system.

The Cisco UCS M71KR-E provides both 10 Gigabit Ethernet and 4-Gbps Fibre Channel functions using drivers from Emulex, providing:

Compatibility with current Emulex adapter-based SAN environments and drivers

Consolidation of LAN and SAN traffic over the same adapter card and fabric, reducing the overall number of network interface cards (NICs), HBAs, cables, and switches

Integrated management with Cisco UCS Manager

Cisco UCS M71KR-Q QLogic Converged Network Adapter

The Cisco UCS M71KR-Q QLogic Converged Network Adapter (CNA) is a QLogic-based Fibre Channel over Ethernet (FCoE) adapter card that provides connectivity for Cisco UCS B-Series Blade Servers in the Cisco Unified Computing System.

Designed specifically for the Cisco UCS blades, the adapter provides a dual-port connection to the midplane of the blade server chassis. The Cisco UCS M71KR-Q uses an Intel 82598 10 Gigabit Ethernet controller for network traffic and a QLogic 4-Gbps Fibre Channel controller for Fibre Channel traffic, all on the same adapter card. The Cisco UCS M71KR-Q presents two discrete Fibre Channel host bus adapter (HBA) ports and two Ethernet network ports to the operating system.

The Cisco UCS M71KR-Q provides both 10 Gigabit Ethernet and 4-Gbps Fibre Channel functions using drivers from QLogic, providing:

Compatibility with current QLogic adapter-based SAN environments and drivers

Consolidation of LAN and SAN traffic over the same adapter card and fabric, reducing the overall number of network interface cards (NICs), HBAs, cables, and switches

Integrated management with Cisco UCS Manager

Cisco UCS 2104XP I/O Modules (N20-I6584)

Figure 1-3 Cisco UCS 2104 IO Module

1. Fabric extender status indicator LED
2. Link status indicator LEDs
3. Connection ports (to the fabric interconnect)
4. Captive screws for the insertion latches


Cisco UCS 2100 Series I/O Modules bring the unified fabric into the blade server enclosure, providing 10 Gigabit Ethernet connections between blade servers and the fabric interconnect, simplifying diagnostics, cabling, and management.

The Cisco UCS 2100 Series extends the I/O fabric between the fabric interconnects and the Cisco UCS 5100 Series Blade Server Chassis, enabling a lossless and deterministic Fibre Channel over Ethernet (FCoE) fabric to connect all blades and chassis together. Since the I/O Module is similar to a distributed line card, it does not do any switching and is managed as an extension of the fabric interconnects. This approach removes switching from the chassis, reducing overall infrastructure complexity and enabling the Cisco Unified Computing System to scale to many chassis without multiplying the number of switches needed, reducing TCO and allowing all chassis to be managed as a single, highly available management domain.

The Cisco 2100 Series also manages the chassis environment (the power supply and fans as well as the blades) in conjunction with the fabric interconnect. Therefore, separate chassis management modules are not required.

Cisco UCS 2100 Series I/O Modules fit into the back of the Cisco UCS 5100 Series chassis. Each Cisco UCS 5100 Series chassis can support up to two I/O modules, enabling increased capacity as well as redundancy.

LEDs

There are four port activity LEDs and an LED indicating connectivity to the servers in the chassis.

Buttons

There are no buttons on the I/O module.

Connectors

There are four I/O ports supporting SFP+ 10 Gb Ethernet connections. There is also a console connection for use by Cisco diagnostic technicians; it is not intended for customer use.

Cisco UCS 2200 Series I/O Modules

Figure 1-4 Cisco UCS 2208 IO Module (UCS-IOM-2208XP)

1. Fabric extender status indicator LED
2. Link status indicator LEDs
3. Connection ports (to the fabric interconnect)
4. Captive screws for the insertion latches


Figure 1-5 Cisco UCS 2204XP IO Module

1. Fabric extender status indicator LED
2. Link status indicator LEDs
3. Connection ports (to the fabric interconnect)
4. Captive screws for the insertion latches


Cisco UCS 2200 Series I/O Modules bring the unified fabric into the blade server enclosure, providing 10 Gigabit Ethernet connections between blade servers and the fabric interconnect, simplifying diagnostics, cabling, and management.

The Cisco UCS 2200 Series extends the I/O fabric between the fabric interconnects and the Cisco UCS 5100 Series Blade Server Chassis, enabling a lossless and deterministic Fibre Channel over Ethernet (FCoE) fabric to connect all blades and chassis together. Since the I/O Module is similar to a distributed line card, it does not do any switching and is managed as an extension of the fabric interconnects. This approach removes switching from the chassis, reducing overall infrastructure complexity and enabling the Cisco Unified Computing System to scale to many chassis without multiplying the number of switches needed, reducing TCO and allowing all chassis to be managed as a single, highly available management domain.

The Cisco 2200 Series also manages the chassis environment (the power supply and fans as well as the blades) in conjunction with the fabric interconnect. Therefore, separate chassis management modules are not required.

Cisco UCS 2200 Series I/O Modules fit into the back of the Cisco UCS 5100 Series chassis. Each Cisco UCS 5100 Series chassis can support up to two I/O modules, enabling increased capacity as well as redundancy.

LEDs

There are port activity LEDs, and an LED indicating connectivity to the servers in the chassis.

Buttons

There are no buttons on the I/O module.

Connectors

There are I/O ports supporting SFP+ 10 Gb Ethernet connections. There is also a console connection for use by Cisco diagnostic technicians; it is not intended for customer use.

Power Distribution Unit (PDU) (N01-UAC1)

The AC PDU provides load balancing between the installed power supplies, as well as distributing power to the other chassis components. DC versions of the chassis use a different PDU with appropriate connectors.

LEDs

There are no LEDs on the PDU.

Buttons

There are no buttons on the PDU.

Connectors

There are four power connectors rated for 15.5 A, 200-240 V @ 50-60 Hz. Use only Cisco-approved power cords; different power cords are available for many countries and applications. See Supported AC Power Cords and Plugs for more information.

Fan Modules (N20-FAN5)

The chassis can accept up to eight fan modules. A chassis must have filler plates in place if a fan slot is left empty for an extended period.

LEDs

There is one LED indicating the fan module's operational state. See Interpreting LEDs for details.

Buttons and Connectors

There are no buttons or connectors on a fan module.

Power Supplies (N20-PAC5-2500W and UCSB-PSU-2500DC48)

To determine the number of power supplies needed for a given configuration, refer to the Cisco UCS Power Calculator.

LEDs

There are two LEDs indicating power connection, AC or DC power supply operation, and fault states. See Interpreting LEDs for details.

Buttons

There are no buttons on a power supply.

Connectors

Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power supplies are 92 percent efficient and can be configured to support nonredundant, N+1 redundant, and grid-redundant configurations.

Power Supply Redundancy

Power supply redundancy functions identically for AC- and DC-configured systems. When considering power supply redundancy, take the following into account:

Power supplies are all single phase and have a single input for connectivity to the customer power source (a rack PDU such as the Cisco RP Series PDU or equivalent).

The number of power supplies required to power a chassis varies depending on the following factors:

The total "Maximum Draw" required to power all the components configured within that chassis, such as I/O modules, fans, and blade servers (including the CPU and memory configuration of each blade server).

The desired power redundancy for the chassis. The supported power configurations are non-redundant, N+1 redundancy (or any requirement greater than N+1), and grid redundancy.

To configure redundancy, use the UCS Manager GUI or use the scope psu-policy CLI command to enter PSU policy mode, then use the appropriate option for the set redundancy {grid | n-plus-1 | non-redund} command.
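Using the command names above, a CLI session that sets grid redundancy might look like the following sketch (the exact prompts and the commit step depend on your UCS Manager release):

```
UCS-A# scope psu-policy
UCS-A /psu-policy # set redundancy grid
UCS-A /psu-policy* # commit-buffer
UCS-A /psu-policy #
```

Substitute n-plus-1 or non-redund for grid as needed; the change takes effect once the buffer is committed.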

Non-redundant Mode

In a non-redundant or combined mode, all installed power supplies are turned on and balance the load evenly. Smaller configurations (requiring less than 2500 W) can be powered by a single power supply when the system is using UCS Release 1.3(1) or earlier. However, a single power supply cannot provide redundancy; if either the power input or the power supply fails, the system will immediately shut down. Larger configurations (with peak requirements between 2500 and 5000 W) require two or more power supplies even in non-redundant mode.

When using UCS Release 1.4(1) and later, the chassis requires a minimum of two power supplies.


Note In a non-redundant system, power supplies can be in any slot. Installing fewer than the required number of power supplies results in undesired behavior such as server blade shutdown. Installing more than the required number of power supplies may result in lower power supply efficiency. At most, this mode requires two power supplies.


N+1 Redundancy

The N+1 redundancy configuration implies that the chassis contains a total number of power supplies to satisfy non-redundancy, plus one additional power supply for redundancy. All the power supplies that are participating in N+1 redundancy are turned on and equally share the power load for the chassis. If any additional power supplies are installed, UCS Manager recognizes these "unnecessary" power supplies and places them on standby.

If a power supply should fail, the surviving supplies can provide power to the chassis. In addition, UCS Manager turns on any "turned-off" power supplies to bring the system back to N+1 status.

To provide N+1 protection, the following number of power supplies is recommended:

Three power supplies are recommended if the power configuration for that chassis requires greater than 2500W or if using UCS Release 1.4(1) and later

Two power supplies are sufficient if the power configuration for that chassis requires less than 2500W or the system is using UCS Release 1.3(1) or earlier

Adding an additional power supply to either of these configurations will provide an extra level of protection. UCS Manager turns on the extra power supply in the event of a failure, and restores N+1 protection.


Note An N+1 redundant system has either two or three power supplies, which may be in any slot.
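The N+1 behavior described above can be sketched as follows. This is an illustrative model, not UCS Manager code; the function name and return shape are invented for the example, while the sizing rules come from this section:

```python
import math

def n_plus_1_plan(installed, peak_watts, release_1_4_or_later=True):
    """Split installed supplies into active and standby under N+1.

    The active set is the non-redundant requirement plus one spare;
    any extra supplies are placed on standby, mirroring the handling
    UCS Manager applies to "unnecessary" supplies.
    """
    base = max(1, math.ceil(peak_watts / 2500))  # non-redundant need
    if release_1_4_or_later:
        base = max(base, 2)
    active = min(installed, base + 1)
    return {
        "active": active,
        "standby": installed - active,
        "n_plus_1": installed >= base + 1,
    }
```

With four supplies and a 3000 W load, three supplies are active and one is on standby, and N+1 protection is satisfied.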


Grid Redundancy

The grid-redundant configuration is used when the chassis is powered from two power sources or when greater than N+1 redundancy is required. If one source fails (causing a loss of power to one or two power supplies), the surviving power supplies on the other power circuit continue to power the chassis. A common reason for using grid redundancy is a rack power distribution in which power is provided by two PDUs and you want protection against a PDU failure.

To provide grid redundant (or greater than N+1) protection, the following number of power supplies is recommended:

Four power supplies are recommended if the power configuration for that chassis requires greater than 2500W or if using UCS Release 1.4(1) and later

Two power supplies are recommended if the power configuration for that chassis requires less than 2500W or the system is using UCS Release 1.3(1) or earlier


Note Both grids in a grid-redundant system should have the same number of power supplies. When a system is configured for grid redundancy, slots 1 and 2 are assigned to grid 1, and slots 3 and 4 are assigned to grid 2. If only two power supplies are installed in a grid-redundant chassis, they should be in slots 1 and 3. Slot and cord connection numbering is shown in Figure 1-6.


Figure 1-6 Power Supply Bay and Connector Numbering
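The slot-to-grid assignment in the note above can be expressed as a small check. The helper names here are hypothetical; the slot mapping itself comes from this section:

```python
def grid_of_slot(slot):
    """Return the grid (1 or 2) for a power supply slot (1-4):
    slots 1 and 2 feed grid 1, slots 3 and 4 feed grid 2."""
    if slot not in (1, 2, 3, 4):
        raise ValueError("slot must be between 1 and 4")
    return 1 if slot <= 2 else 2

def grids_balanced(populated_slots):
    """True when both grids hold the same, nonzero number of
    supplies, as recommended for grid redundancy."""
    counts = {1: 0, 2: 0}
    for slot in populated_slots:
        counts[grid_of_slot(slot)] += 1
    return counts[1] == counts[2] and counts[1] >= 1
```

A two-supply grid-redundant chassis therefore populates slots 1 and 3, one per grid.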

LEDs

LEDs on both the chassis and the modules installed within the chassis identify operational states, both separately and in combination with other LEDs. This section includes the following topics:

LED Locations

Interpreting LEDs

LED Locations

Figure 1-7 shows the front view of a fully populated Cisco UCS 5108 server chassis and the location of single LEDs and groups of associated LEDs.

Figure 1-7 LEDs on a Cisco UCS 5108 Server Chassis—Front View

1  Server health and blade network LEDs (see Table 1-4)

2  Server identification (blue)

3  Chassis identification (blue), chassis health, and blade network LEDs (see Table 1-4)

4  Power supply OK and fail LEDs (see Table 1-2)

5  Hard drive status and hard drive activity LEDs (see Table 1-4)

Figure 1-8 shows the rear view of a fully populated Cisco UCS 5108 server chassis and the location of single LEDs and groups of associated LEDs.

Figure 1-8 LEDs on the Cisco UCS 5108 Server Chassis—Rear View

1  Chassis identification (blue), chassis health, and blade network LEDs (see Table 1-4)

2  I/O module port status for ports 1-4 (see Table 1-3)

3  Fan status (see Table 1-2)

(AC Version Shown)

Interpreting LEDs

This section describes how to interpret the LEDs on various parts of the Cisco UCS 5108 server chassis.

Table 1-2 Chassis, Fan, and Power Supply LEDs

Beaconing (LED and button)
  Off: Beaconing not enabled.
  Blinking blue (1 Hz): Beaconing to locate a selected chassis. If the LED is not blinking, the chassis is not selected. You can initiate beaconing in UCS Manager or with the button.

Chassis connections
  Off: No power.
  Amber: No I/O module is installed, or the I/O module is booting.
  Green: Normal operation.

Chassis health
  Blinking amber: Component failure or a major over-temperature alarm.

Fan module
  Off: No power to the chassis, or the fan module was removed from the chassis.
  Amber: Fan module restarting.
  Green: Normal operation.
  Blinking amber: The fan module has failed.

Power supply OK
  Off: No power to the slot.
  Green: Normal operation.
  Blinking green: AC power is present, but the power supply is either in redundancy standby mode or is not fully seated.

Power supply Fail
  Off: Normal operation.
  Amber: Over-voltage failure or over-temperature alarm.


Table 1-3 I/O Module LEDs

Body
  Off: No power.
  Green: Normal operation.
  Amber: Booting, or a minor temperature alarm.
  Blinking amber: POST error or other error condition.

Ports 1-4
  Off: Link down.
  Green: Link up and operationally enabled.
  Amber: Link up and administratively disabled.
  Blinking amber: POST error or other error condition.


Table 1-4 Blade Server LEDs

Power
  Off: Power off.
  Green: Normal operation.
  Amber: Standby.

Link
  Off: None of the network links are up.
  Green: At least one network link is up.

Health
  Off: Power off.
  Green: Normal operation.
  Amber: Minor error.
  Blinking amber: Critical error.

Activity (disk drive)
  Off: Inactive.
  Green: Outstanding I/O to the disk drive.

Health (disk drive)
  Off: No fault.
  Amber: Fault detected.