ASR 5000 Hardware Platform Overview

This chapter provides information on the hardware components that comprise ASR 5000 platforms.
Chassis Configurations
The system is designed to scale from a minimum configuration, as shown in the table below, to a fully-loaded redundant configuration containing a maximum of 48 cards.
 
Note that if Session Recovery is enabled, the minimum number of packet processing cards per chassis increases from one to four cards. Three packet processing cards are active and one packet processing card is standby (redundant). This minimal configuration is designed to protect against software failures only. In addition to increased hardware requirements, Session Recovery may reduce subscriber session capacity, performance, and data throughput.
Important: For Release 9.0, only PDSN and HA are supported on the PPC.
Notes:
1. These numbers represent the minimum number of components with no redundancy.
2. These numbers represent the minimum number of components with hardware redundancy. Additional components are required if Session Recovery is to be supported.
*1:1 redundancy is supported for these cards; however, some subscriber sessions and accounting information may be lost in the event of a hardware or software failure, even though the system remains operational.
**The physical maximum number of half-height line cards you can install is 28; however, redundant configurations may use fewer than the physical maximum number of line cards since they are not required behind standby PSCs or PSC2s.
***The 10 Gigabit Ethernet Line Card is a full-height line card that takes up the upper and lower slots in the back of the chassis. Use the upper slot number only when referring to installed XGLCs. Slot numbering for other installed half-height cards is maintained: 17 to 32 and 33 to 48, regardless of the number of installed XGLCs.
 
Chassis Components (front and rear views)
This diagram shows exploded views of the front and rear chassis components. They are described below:
Chassis and Sub-component Identification Key
Chassis: Supports 16 front-loading slots for application cards and 32 rear-loading slots for line cards. To support the XGLC, a full-height line card, remove the half-height guide from the rear slots.
Mounting brackets: Support installation in a standard 19-inch rack or telecommunications cabinet. Standard and mid-mount options are supported. In addition, each bracket contains an electro-static discharge jack for use when handling equipment.
Upper fan tray: Draws air up through the chassis for cooling and ventilation. It then exhausts air through the vents at the upper-rear of the chassis.
Upper bezel: Covers the upper fan tray bay.
Lower fan tray cover: Secures the lower fan tray assembly in place. The cover also provides an air baffle that allows air to enter the chassis.
Lower bezel: Covers the lower fan tray bay.
Lower fan tray assembly: Draws air through the chassis’ front and sides for cooling and ventilation. It is equipped with a particulate air filter to prevent dust and debris from entering the system.
Power Filter Units (PFUs): Each of the system’s two PFUs provides -48 VDC power to the chassis and its associated cards. Each load-sharing PFU operates independently of the other to ensure maximum power feed redundancy.
ASR 5000 Chassis Descriptions
Slot Numbering
The ASR 5000 chassis features a 48-slot design with 16 front-loading slots for application cards and 32 rear-loading slots (16 upper and 16 lower) for line cards.
 
Front Slot Numbering Scheme for Application Cards
The rear of the chassis features a half-slot design that supports up to 32 line cards:
 
Rear Slot Numbering Scheme for Line Cards
The following table shows the front slot numbers and their corresponding rear slot numbers.
Front and Rear Slot Numbering Relationship
Rear Slot Numbering for Half-Height Line Cards
Rear-installed line cards must be installed directly behind their respective front-loaded application card. For example, an application card in Slot 1 must have a corresponding line card in Slot 17. The redundant line card for this configuration would be placed in Slot 33. This establishes a directly mapped communication path through the chassis midplane between the application and line cards.
To help identify which rear slot corresponds with the front-loaded application card, note that the upper rear slot numbers are equal to the slot number of the front-loaded card plus 16. For example, to insert a line card to support an application card installed in slot 1, add 16 to the slot number of the front-loaded application card (Slot 1 + 16 slots = Slot 17). Slot 17 is the upper right-most slot on the rear of the chassis, directly behind Slot 1.
For lower rear slot numbers, add 32. Again, a redundant line card for an application card in Slot 1 would be (Slot 1 + 32 = Slot 33). Slot 33 is the lower right-most slot on the rear of the chassis, also behind Slot 1.
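As a quick check of this rule, the short Python sketch below (illustrative only; it is not part of any system software) computes both rear slot numbers for a given front slot:

# Illustrative only: apply the +16 (upper rear) / +32 (lower rear) rule
# that maps a front application card slot to its line card slots.
def rear_slots(front_slot):
    """Return (upper_rear_slot, lower_rear_slot) for a front slot 1 through 16."""
    if not 1 <= front_slot <= 16:
        raise ValueError("front application card slots are numbered 1 through 16")
    return front_slot + 16, front_slot + 32

print(rear_slots(1))   # (17, 33): line card in slot 17, redundant line card in slot 33
print(rear_slots(16))  # (32, 48)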
Rear Slot Numbering with Full-height Line Cards
ASR 5000 systems may be configured with 10 Gigabit Ethernet Line Cards (XGLCs). These are full-height line cards for which the half-height card guide is removed in order to accommodate the cards. In this case, only the upper slot number is used to refer to the XGLC. For half-height cards installed with the XGLCs, the half-height slot numbering scheme is maintained.
For example, XGLCs installed in slots 17 and 32 also take up slots 33 and 48, but are referred to as cards in slots 17 and 32 only. In the same configuration, the SPIOs are installed in slots 24 and 25, and the RCCs in slots 40 and 41.
Mounting Options
The chassis is designed for installation in a standard 19-inch wide (48.26 cm) equipment rack. Additional rack hardware (such as extension brackets) may be used to install the chassis in a standard 23-inch (58.42 cm) rack. Each chassis is 24.50 inches (62.23 cm) high, which equates to 14 Rack Mount Units (RMUs; 1 RMU = 1.75 in/4.45 cm).
You can mount a maximum of three chassis in a standard 48 RMU (7 feet) equipment rack or telco cabinet provided that all system cooling and ventilation requirements are met. A fully-loaded rack with three chassis installed has approximately 5.5 inches (13.97 cm, 3.14 RMUs) of vertical space remaining.
To ensure all Central Office (CO) requirements and regulations are met, Nortel Networks currently mounts two PDSN 16000 shelves in a PTE 2000 frame measuring 600 mm (23.6-inch) wide by 900 mm (35.4-inch) deep by 2125 mm (6.97-feet) high.
There are two options for mounting the chassis in a standard equipment rack or telecommunications cabinet:
Standard: In this configuration, the flanges of the mounting brackets are flush with the front of the chassis. This is the default configuration as shipped.
Mid-mount: In this configuration, the flanges of the mounting brackets are recessed from the front of the chassis. To do this, install the mounting brackets toward the middle of the chassis on either side.
Caution: When planning chassis installation, take care to ensure that equipment rack or cabinet hardware does not hinder air flow at any of the intake or exhaust vents. Additionally, ensure that the rack/cabinet hardware, as well as the ambient environment, allow the system to function within the required limits. For more information, refer to the Environmental Specifications chapter of this guide.
Midplane Architecture
Separating the front and rear chassis slots is the midplane. The connectors on the midplane provide intra-chassis communications, power connections, and data transport paths between the various installed cards.
The midplane also contains two separate -48 VDC busses (not shown) that distribute redundant power to each card within the chassis.
 
Midplane/Switch Fabric Architecture
Midplane and Bus Descriptions
The following sections provide descriptions for each bus:
320 Gbps Switch Fabric
Located on the System Management Card (SMC), this IP-based, or packetized, switch fabric provides a transport path for user data throughout the system. The 320 Gbps switch fabric establishes inter-card communication between the SMCs and other application cards within the chassis, and their respective line cards.
32 Gbps Control Bus
The Control Bus features redundant 32 Gbps Ethernet paths that interconnect all control and management processors within the system. The bus uses a full-duplex Gigabit Ethernet (GE) switching hierarchy from both SMCs to each of the 14 application card slots in the chassis. Each application card is provisioned with a GE switch to meet its specific needs. This bus also interconnects the two SMC modules.
System Management Bus
The System Management Bus supports management access to each component within the chassis. It provides a communication path from each SMC to every card in the system supporting a 1 Mbps transfer rate to each card. This allows the SMCs to manage several low-level system functions, such as supplying power, monitoring temperature, board status, pending card removals, and data path errors, and controlling redundant/secondary path switchovers, card resets, and other failover features. Additionally, the System Management Bus monitors and controls the fan trays, power filter units, and alarming functions.
280 Gbps Redundancy Bus
The Redundancy Bus consists of multiple, full-duplex serial links providing packet processing card-to-line card redundancy through the chassis’ Redundancy Crossbar Cards (RCCs) as shown below.
 
 
Logical View of RCC Links for Failover
Each RCC facilitates 28 links: 14 to the line cards and 14 to the packet processing cards.
Each serial link facilitates up to 5 Gbps symbol rate, equivalent to 4 Gbps of user data traffic, in each direction. Therefore, the Redundancy Bus provides 140 Gbps symbol rate (112 Gbps user data) of throughput per RCC, 280 Gbps symbol rate (224 Gbps user data) total for both.
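These totals follow directly from the per-link figures. The arithmetic below simply restates the numbers from this section (illustrative only):

# Illustrative arithmetic only, using the per-link figures stated above.
links_per_rcc = 28          # serial links facilitated by each RCC
symbol_gbps_per_link = 5    # symbol rate per link, each direction
user_gbps_per_link = 4      # user data rate per link, each direction

per_rcc_symbol = links_per_rcc * symbol_gbps_per_link   # 140 Gbps per RCC
per_rcc_user = links_per_rcc * user_gbps_per_link       # 112 Gbps per RCC
total_symbol = 2 * per_rcc_symbol                       # 280 Gbps for both RCCs
total_user = 2 * per_rcc_user                           # 224 Gbps for both RCCs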
OC-48 TDM Bus
The system also hosts a dual OC-48 TDM bus consisting of 128 independent TDM paths, each carrying 512 DS0 channels. This bus supports voice services on the system. Higher-speed TDM traffic requirements are addressed using the system’s data fabric.
SPIO Cross-Connect Bus
To provide redundancy between Switch Processor I/O (SPIO) cards, the system possesses a physical interconnect between the ports on the SPIOs. This cross-connect allows management traffic or alarm outputs to be migrated from an active SPIO experiencing a failure to the redundant SPIO.
While it is recommended that an SPIO be installed directly behind its corresponding SMC, this bus allows either SMC to utilize either SPIO.
Power Filter Units
Located at the bottom rear of the chassis are slots for two 165A Power Filter Unit (PFU) assemblies. Each PFU provides DC power from the Central Office (CO) battery sub-system to the chassis and its associated cards. Each load-sharing PFU operates independently of the other to ensure maximum power feed redundancy. The maximum input operating voltage range of the PFU is -40 VDC to -60 VDC; the nominal range is -48 VDC to -60 VDC.
Important: In the event that the CO has AC power only, a separate rack mount AC to DC converter is required.
The following drawing shows the PFU and its connectors. Refer to the Cabling the Power Filter Units chapter for information on installing and cabling the PFUs.
 
Power Filter Unit
Power Filter Unit Component Descriptions
Fan Tray Assemblies
There are two fan tray assemblies within the chassis. A lower fan tray provides air intake and an upper fan tray exhausts warmed air from the chassis. Each fan tray is connected to both PFUs to ensure power feed redundancy. Both fan tray assemblies are variable speed units that are automatically adjusted based on temperature or failover situations.
Thermal sensors monitor temperatures within the chassis. In the event of a fan failure or other temperature-related condition, the System Management Card (SMC) directs all operable fans in the system to switch to high speed and generates an alarm.
Lower Fan Tray
The lower fan tray assembly contains multiple fans and pulls air into the chassis from the lower front and sides of the chassis. The air is then pushed upward across the various cards and midplane within the chassis to support vertical convection cooling.
 
Lower Fan Tray Assembly
Air Filter Assembly
The chassis supports a replaceable particulate air filter that meets UL 94-HF-1 standards for NEBS-compliant electronics filtering applications. This filter is mounted at the top of the lower fan tray assembly, providing ingress filtering to remove contaminants before they enter the system. Temperature sensors measure the temperature at various points throughout the chassis. The system monitors this information, and if it detects a clogged filter, generates a maintenance alarm.
 
Particulate Air Filter
Important: A replacement air filter is shipped with each chassis. It is recommended that a minimum of one replacement air filter for each deployed chassis be kept on site. This ensures that qualified service personnel can quickly replace the filter when needed.
Upper Fan Tray
The upper fan tray unit contains multiple fans that exhaust air from the upper rear and sides of the chassis.
 
Upper Fan Tray Assembly
Chassis Airflow
Airflow within the chassis is designed per Telcordia recommendations to ensure the proper vertical convection cooling of the system. Detailed information is located in the Chassis Air Flow section of the Environmental Specifications chapter of this guide.
ASR 5000 Application Cards
The following application cards are supported by the system.
System Management Card
The System Management Card (SMC) is used with packet processing cards in the ASR 5000 hardware platform. The SMC serves as the primary controller, initializing the entire system and loading the software’s configuration image into other cards in the chassis as applicable.
SMCs are installed in chassis slots 8 and 9. During normal operation, the SMC in slot 8 serves as the primary card and the SMC in slot 9 serves as the secondary. Each SMC has a dual-core central processing unit (CPU) and 4 GB of random access memory (RAM).
There is a single PC-card slot on the SMC that supports removable ATA Type I or Type II PCMCIA cards for temporary storage. Use these cards to load and store configuration data, software updates, buffer accounting information, and store diagnostic or troubleshooting information.
There is also a type II CompactFlash™ slot on the SMC that hosts configuration files, software images, and the session limiting/feature use license keys for the system.
The SMC provides the following major functions:
The front panel of the SMC and its major components is shown below:
 
System Management Card (SMC)
SMC Callout Descriptions
Card Ejector Levers—Use to insert/remove card to/from chassis.
Interlock Switch—When pulled downward, the interlock switch notifies the system to safely power down card prior to removal.
Card Level Status LEDs—Show the status of the card. (See Applying Power and Verifying Installation for definitions).
System Level Status LEDs—Show the status of overall system health and/or maintenance requirements. (See Applying Power and Verifying Installation for definitions).
PC-Card/PCMCIA Slot—Stores or moves software, diagnostics, and other information.
System Alarm Speaker—Sounds an audible alarm when specific system failures occur.
Alarm Cut-Off (ACO)—Press and release this recessed toggle switch to reset the system alarm speaker and other audible or visual alarm indicators connected to the CO Alarm interface on the SPIO.
SMC RAID Support
Each SMC is equipped with a hard disk, commonly referred to as a Small Form Factor (SFF) disk.
Important: The hard disk is not physically accessible. Disk failure constitutes SMC failure.
To access physical RAID details, such as disk manufacturer, serial number, number of partitions, disk size, and so on, enter the show hd raid verbose command in Exec mode of the CLI.
If there is a redundant SMC in the chassis, the disk on the standby SMC mirrors the disk on the active SMC, forming an active Redundant Array of Inexpensive Disks (RAID).
Use the HD RAID commands in the Command Line Interface Reference to configure RAID. RAID control mechanisms allow xDR charging data to be written to the hard disks on both the active and standby SMCs for later upload to a suitable local or remote storage server. Configuring CDR, EDR, and UDR storage is described in the Command Line Interface Reference.
Event logs related to disk and RAID include disk name, serial number and RAID UUID for reference. They are generated at the Critical, Error, Warning, and Informational levels. For more information on configuring and viewing log files, refer to Configuring and Viewing System Logs in the System Administration Guide.
Event logs at the Critical level are generated for service-affecting events such as:
Event logs at the Error level are generated for important failures:
Event logs at Warning level are generated for important abnormal cases:
Event logs at the Informational level are generated for normal situations:
The hard disk supports SNMP notifications. These are described in the SNMP MIB Manual.
Packet Processing Cards: PSC, PSC2, and PPC
The Packet Services Cards, PSC and PSC2, and Packet Processing Card (PPC) are used with the System Management Card (SMC) in the ASR 5000 hardware platform. These cards provide the packet processing and forwarding capabilities within a system. Each packet processing card type supports multiple contexts, which allows you to overlap or assign duplicate IP address ranges in different contexts.
Important: For Release 9.0, the PPC card is limited to CDMA and HA functionality.
Specialized hardware engines support parallel distributed processing for compression, classification, traffic scheduling, forwarding, packet filtering, and statistics.
The packet processing cards use control processors to perform packet-processing operations, and a dedicated high-speed network processing unit (NPU). The NPU does the following:
The following sections describe the differences between the PSC and PSC2 cards.
Packet Services Card (PSC) Description
Each PSC has two x86-based control processor (CP) subsystems that perform the bulk of the packet-based user service processing. The main x86 CP contains 4 cores split across two chips and is equipped with 16 GB of RAM. Therefore, a fully-loaded system consisting of 14 PSCs provides 224 GB of RAM dedicated to packet processing tasks. The second CP is in the NPU. This CP contains 1.5 GB of memory, but only 512 MB is available to the OS for use in session processing. The hardware encryption components are part of the standard PSC hardware.
To take advantage of the distributed processing capabilities of the system, you can add additional PSCs to the chassis without their supporting line cards, if desired. This results in increased packet handling and control transaction processing capabilities. Another advantage is a decrease in CPU utilization when the system performs processor-intensive tasks such as encryption or data compression.
PSCs can be installed in chassis slots 1 through 7 and 10 through 16. Each installed PSC can either be allocated as active, available to the system for session processing, or redundant, a standby component available in the event of a failure.
The front panel of the PSC and its major components is shown below:
 
Packet Services Card (PSC)
PSC Callout Descriptions
Card Ejector Levers—Use to insert/remove card to/from chassis.
Interlock Switch—When pulled downward, the interlock switch notifies the system to safely power down card prior to removal.
Card Level Status LEDs—Show the current status of the card. (See Applying Power and Verifying Installation for definitions.)
Packet Services Card 2 (PSC2) Description
The Packet Services Card 2 (PSC2) is the next-generation packet forwarding card for the ASR 5000. The PSC2 provides increased aggregate throughput and performance, and a higher number of subscriber sessions.
The PSC2 has been enhanced with a faster network processor unit, featuring two quad-core x86 2.5 GHz CPUs and 32 GB of RAM. These processors run a single copy of the operating system and appear as a single CPU in the show cpu table command (CPU0). The operating system running on the PSC2 treats the two dual-core processors as a 4-way multi-processor. You can see this in the output of the show cpu info verbose command.
The PSC2 provides 2 to 2.7 times the data throughput of the original PSC, and the switch fabric interface has been doubled. A second-generation data transport field-programmable gate array (DT2 FPGA, abbreviated as DT2) connects the PSC2’s NPU bus to the switch fabric interface. The FPGA also provides a bypass path between the line card or Redundancy Crossbar Card (RCC) and the switch fabric for ATM traffic. Traffic from the line cards or the RCC is received over the FPGA’s serial links and is sent to the NPU on its switch fabric interface. The traffic destined for the line cards or RCC is diverted from the NPU interface and sent over the serial links.
The DT2 FPGA also connects to the control processor subsystem via a PCI-E bus. The PCI-E interface allows the control processors to perform register accesses to the FPGA and some components attached to it, and also allows DMA operations between the NPU and the control processors’ memory. A statistics engine is provided in the FPGA. Two reduced-latency DRAM (RLDRAM) chips attached to the FPGA provide 64 MB of storage for counters.
The PSC2 has a 2.5 Gbps security processor that provides the highest performance for cryptographic acceleration of next-generation IP Security (IPsec), Secure Sockets Layer (SSL), and wireless LAN/WAN security applications with the latest security algorithms.
Interoperability
It is not recommended that you mix PSC2s with PSCs or PPCs, since this prevents the PSC2 from operating at its full potential. Due to the different processor speeds and memory configurations, the PSC2 cannot operate at its full capacity when combined in a chassis with PSCs or PPCs.
The system will reduce the performance of the PSC2 to that of a PSC or PPC if either of those cards is in the system. This is due to the different performance and switch fabric configurations. A system booting up with mixed cards will default to the slower performance mode. A PSC or PPC added to a running PSC2 system will be taken offline. A PSC2 added to a running PSC or PPC system will start up in this slower mode.
The PSC2 is capable of dynamically adjusting the line card connection mode to support switching between XGLCs and non-XGLCs with minimal service interruption.
Redundancy
 
Capacity
 
Power Estimate
 
The front panel of the PSC2 and its major components is shown below:
 
Packet Services Card 2 (PSC2)
PSC2 Callout Descriptions
Card Ejector Levers—Use to insert/remove card to/from chassis.
Interlock Switch—When pulled downward, the interlock switch notifies the system to safely power down card prior to removal.
Card Level Status LEDs—Show the current status of the card. (See Applying Power and Verifying Installation for definitions)
Packet Processor Card (PPC) Description
The PPC features a quad-core x86 2.5 GHz CPU and 16 GB of RAM. The processor runs a single copy of the operating system. To check the CPU in the CLI, use the show cpu table command. The operating system running on the PPC treats the dual-core processor as a 2-way multi-processor. You can see this in the output of the show cpu info verbose command.
A second-generation data transport field-programmable gate array (DT2 FPGA, abbreviated as DT2) connects the PPC’s NPU bus to the switch fabric interface. The FPGA also provides a bypass path between the line card or Redundancy Crossbar Card (RCC) and the switch fabric for ATM traffic. Traffic from the line cards or the RCC is received over the FPGA’s serial links and is sent to the NPU on its switch fabric interface. The traffic destined for the line cards or RCC is diverted from the NPU interface and sent over the serial links.
The DT2 FPGA also connects to the control processor subsystem via a PCI-E bus. The PCI-E interface allows the control processors to perform register accesses to the FPGA and some components attached to it, and also allows DMA operations between the NPU and the control processors’ memory. A statistics engine is provided in the FPGA. Two reduced-latency DRAM (RLDRAM) chips attached to the FPGA provide 64 MB of storage for counters.
The PPC has a 2.5 Gbps security processor that provides the highest performance for cryptographic acceleration of next-generation IP Security (IPsec), Secure Sockets Layer (SSL), and wireless LAN/WAN security applications with the latest security algorithms.
Redundancy
 
Capacity
 
Power Estimate
 
The front panel of the PPC and its major components is shown below:
 
Packet Processor Card
PPC Callout Descriptions
Card Ejector Levers—Use to insert/remove card to/from chassis.
Interlock Switch—When pulled downward, the interlock switch notifies the system to safely power down card prior to removal.
Card Level Status LEDs—Show the current status of the card. (See Applying Power and Verifying Installation for definitions)
ASR 5000 Line Cards
The following rear-loaded cards are currently supported by the system.
Switch Processor I/O Card
The Switch Processor I/O (SPIO) card provides connectivity for local and remote management, CO alarming, and BITS timing input. SPIOs are installed in chassis slots 24 and 25, behind SMCs. During normal operation, the SPIO in slot 24 works with the active SMC in slot 8. The SPIO in slot 25 serves as a redundant component. In the event that the SMC in slot 8 fails, the redundant SMC in slot 9 becomes active and works with the SPIO in slot 24. If the SPIO in slot 24 should fail, the redundant SPIO in slot 25 takes over.
The following shows the panel of the SPIO card, its interfaces, and other major components.
 
Switch Processor I/O Card
SPIO Callout Definitions
Card Ejector Levers—Use to insert/remove card to or from the chassis.
Interlock Switch—When pulled downward, the interlock switch notifies the system to safely power down card prior to removal.
Card Level Status LEDs—Show the status of the card. See the Applying Power and Verifying Installation for definitions.
Optical Gigabit Ethernet Management LAN Interfaces—Two Small Form-factor Pluggable (SFP) optical Gigabit Ethernet interfaces to connect optical transceivers.
10/100/1000 Mbps Ethernet Management LAN Interfaces—Two RJ-45 interfaces, supporting 10/100 Mbps or 1 Gbps Ethernet.
Console Port—RJ-45 interface used for local connectivity to the command line interface (CLI). See Cabling the Switch Processor Input/Output Line Card for more information.
BITS Timing Interface—Either a BNC interface or 3-pin wire wrap connector. Used for application services that use either the optical or channelized line cards.
CO Alarm Interface—Dry contact relay switches, allowing connectivity to central office, rack, or cabinet alarms. See the Applying Power and Verifying Installation for more information.
Management LAN Interfaces
SPIO management LAN interfaces connect the system to the carrier’s management network and subsequent applications, normally located remotely in a Network Operations Center (NOC). You can use the RJ-45 10/100/1000 Mbps Ethernet interfaces or optical SFP Gigabit Ethernet interfaces.
When using the RJ-45 interfaces, CAT5 shielded twisted pair cabling is recommended.
Important: Use shielded cabling whenever possible to further protect the chassis and its installed components from ESD or other transient voltage damage.
SFP Interface Supported Cable Types
Fiber Type: Multi-mode fiber (MMF), 850 nm wavelength
Console Port
The console uses an RS-232 serial communications port to provide local management access to the command line interface (CLI). A 9-pin-to-RJ-45 console cable is supplied with each SPIO card. The console cable must provide carrier-detect when attached in a null modem configuration.
Should connection to a terminal server or other device requiring a 25-pin D-subminiature connector be required, a specialized cable can be constructed to support DB-25 to RJ-45 connectivity. Refer to the Technical Specifications chapter later in this document for the pin-outs for this cable. The baud rate for this interface is configurable between 9600 bps and 115,200 bps (default is 9600 bps).
For detailed information on using the console port, see Cabling the Switch Processor Input/Output Line Card.
BITS Timing
A Building Integrated Timing Supply (BITS) module is available on two versions of the SPIO: one supports a BITS BNC interface and the other a BITS 3-pin interface. If your system uses the optical and/or channelized line cards (for SDH/SONET), you can configure it to have the SPIO’s BITS module provide the transmit timing source, compliant with Stratum 3 requirements, for all the line cards in the chassis.
Central Office Alarm Interface
The CO alarm interface is a 10-pin connector for up to three dry-contact relay switches that trigger external alarms, such as lights, sirens, or horns, for bay, rack, or CO premises alarm situations. The three Normally Closed alarm relays can be wired to support Normally Open or Normally Closed devices, indicating minor, major, and critical alarms. Pin-outs and a sample wiring diagram for this interface are shown in the Technical Specifications chapter, later in this guide.
A CO alarm cable is shipped with the product so you can connect the CO Alarm interfaces on the SPIO card to your alarming devices. The “Y” cable design ensures CO alarm redundancy by connecting to both primary and secondary SPIO cards.
Redundancy Crossbar Card
The RCC uses 5 Gbps serial links to ensure connectivity between rear-mounted line cards and every non-SMC front-loaded application card slot in the system. This creates a high availability architecture that minimizes data loss and ensures session integrity. If a packet processing card were to experience a failure, IP traffic would be redirected between the line card and the redundant packet processing card in another slot. Each RCC connects up to 14 line cards and 14 packet processing cards, for a total of 28 bi-directional links or 56 bi-directional 2.5 Gbps serial paths.
The RCC provides each packet processing card with a full-duplex 5 Gbps link to 14 (of the maximum 28) line cards placed in the chassis. This means that each RCC is effectively a 70 Gbps full-duplex crossbar fabric, giving the two RCC configuration (for maximum failover protection) a 140 Gbps full-duplex redundancy capability.
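The crossbar capacity quoted above works out the same way; the brief calculation below restates it (illustrative only):

# Illustrative arithmetic only, restating the crossbar capacity figures above.
line_card_links_per_rcc = 14   # full-duplex 5 Gbps links from each RCC to line cards
link_gbps = 5
per_rcc_fabric = line_card_links_per_rcc * link_gbps   # 70 Gbps full-duplex per RCC
two_rcc_fabric = 2 * per_rcc_fabric                    # 140 Gbps with both RCCs installed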
The RCC located in slot 40 supports line cards in slots 17 through 23 and 26 through 32 (upper rear slots). The RCC in slot 41 supports line cards in slots 33 through 39 and 42 through 48 (lower rear slots):
 
Redundancy Crossbar Card
RCC Callout Definitions
Card Ejector Levers—Use to insert/remove a card to and from the chassis.
Interlock Switch—When pulled downward, the interlock switch notifies the system to safely power down card prior to removal.
Card Level Status LEDs—Show the status of the card. (See Applying Power and Verifying Installation for definitions).
Ethernet 10/100 Line Card
The Ethernet 10/100 line card, commonly referred to as the Fast Ethernet Line Card (FELC), is installed directly behind its respective packet processing card, providing network connectivity to the RAN interface and the packet data network. Each card has eight RJ-45 interfaces, numbered top to bottom from 1 to 8. Each of these IEEE 802.3-compliant interfaces supports auto-sensing 10/100 Mbps Ethernet. Allowable cabling includes:
 
Important: Use shielded cabling whenever possible to further protect the chassis and its installed components from ESD or other transient voltage damage.
The Ethernet 10/100 Line Card can be installed in chassis slots 17 through 23, 26 through 39, and 42 through 48. These cards are always installed directly behind their respective packet processing cards, but are not required to be placed behind any redundant packet processing cards (those operating in Standby mode).
The following shows the panel of the Ethernet 10/100 line card, identifying its interfaces and major components:
 
Ethernet 10/100 Line Card
Ethernet 10/100 Line Card Callout Definitions
Card Ejector Levers—Use to insert/remove card to/from chassis.
Card Level Status LEDs—Show the status of the card. (See Applying Power and Verifying Installation for definitions).
RJ-45 10/100 Ethernet Interfaces—Eight auto-sensing RJ-45 interfaces for R-P interface connectivity, carrying user data. Ports are numbered 1 through 8 from top to bottom.
Ethernet 1000 (Gigabit Ethernet) Line Cards
The Ethernet 1000 line card is commonly referred to as the GigE or Gigabit Ethernet Line Card (GELC). The Ethernet 1000 line card is installed directly behind its respective packet processing card, providing network connectivity to the packet data network. The type of interfaces for the Ethernet 1000 line cards is dictated by the Small Form-factor Pluggable (SFP) module installed as described below:
 
SFP Modules Supported by the Ethernet 1000 Line Cards
Fiber Type: Multi-mode fiber (MMF), 850 nm wavelength
Fiber Type: Single-mode fiber (SMF), 1310 nm wavelength
Core Size (microns)/Range: 9/32808.4 feet (10 Kilometers)
Important: Class 1 Laser Compliance Notice This product has been tested and found to comply with the limits for Class 1 laser devices for IEC825, EN60825, and 21CFR1040 specifications.
Warning: Only trained and qualified personnel should install, replace, or service this equipment. Invisible laser radiation may be emitted from the aperture of the port when no cable is connected. Avoid exposure to laser radiation and do not stare into open apertures. BE SURE TO KEEP COVER ON INTERFACE WHEN NOT IN USE.
Important: Disposal of this product should be performed in accordance with all national laws and regulations.
The Ethernet 1000 Line Cards can be installed in chassis slots 17 through 23, 26 through 39, and 42 through 48. These cards are always installed directly behind their respective packet processing cards, but they are not required behind any redundant packet processing cards (those operating in Standby mode).
The following shows the panel of the Ethernet 1000 line card with the fiber connector, identifying its interfaces and major components.
 
Ethernet 1000 Line Card
Quad Gigabit Ethernet Line Card
The 4-port Gigabit Ethernet line card is commonly referred to as the Quad-GigE Line Card or the QGLC. The QGLC is installed directly behind its associated packet processing card to provide network connectivity to the packet data network. There are several different versions of Small Form-factor Pluggable (SFP) modules available:
 
SFP Modules Supported by the QGLC
Fiber Type: Multi-mode fiber (MMF), 850 nm wavelength
Fiber Type: Single-mode fiber (SMF), 1310 nm wavelength
Core Size (microns)/Range: 9/32808.4 feet (10 Kilometers)
Important: Class 1 Laser Compliance Notice This product has been tested and found to comply with the limits for Class 1 laser devices for IEC825, EN60825, and 21CFR1040 specifications.
Warning: Only trained and qualified personnel should install, replace, or service this equipment. Invisible laser radiation may be emitted from the aperture of the port when no cable is connected. Avoid exposure to laser radiation and do not stare into open apertures. BE SURE TO KEEP COVER ON INTERFACE WHEN NOT IN USE.
Important: Disposal of this product should be performed in accordance with all national laws and regulations.
Install QGLCs in chassis slots 17 through 23, 26 through 39, and 42 through 48. Always install these cards directly behind their respective packet processing cards. They are not required behind any redundant packet processing cards (those operating in Standby mode).
The following shows the front panel of the QGLC, identifying its interfaces and major components:
 
Quad Gigabit Line Card (QGLC)
Quad Gigabit Line Card (QGLC) Callout Definitions
Card Ejector Levers—Use to insert/remove card to/from chassis.
Card Level Status LEDs—Show the status of the card. (See Applying Power and Verifying Installation for definitions)
Gigabit Ethernet Interface(s)—Gigabit Ethernet (GE) SFP modules.
10 Gigabit Ethernet Line Card
The 10 Gigabit Ethernet Line Card is commonly referred to as the XGLC. The XGLC supports higher speed connections to packet core equipment, increases effective throughput between the ASR 5000 and the packet core network, and reduces the number of physical ports needed on the ASR 5000.
The XGLC is a full-height line card, unlike the other line cards, which are half height. To install an XGLC, you must remove the half-height card guide in the rear of the chassis. Once installed, use only the upper slot number to refer to or configure the XGLC. Software refers to the XGLC by the top slot number and port; for example, 17/1, not 33/1. For half-height cards that are installed with the XGLCs, the half-height slot numbering scheme is maintained.
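Because software always refers to an XGLC by its top slot, a lower rear slot reference can be normalized by subtracting 16 (the difference between a lower rear slot and the upper rear slot above it). The helper below is hypothetical and purely illustrative of that convention:

# Illustrative only: a full-height XGLC occupies an upper rear slot (17-32) and the
# lower rear slot directly beneath it (upper slot + 16); software references always
# use the upper slot number.
def xglc_reference_slot(slot):
    return slot - 16 if 33 <= slot <= 48 else slot

print(xglc_reference_slot(33))  # 17 -> referenced as slot 17 (for example, port 17/1)
print(xglc_reference_slot(17))  # 17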
The one-port XGLC supports the IEEE 802.3-2005 revision, which defines full-duplex operation of 10 Gigabit Ethernet. When combined with a PSC or PPC, the XGLC supports a maximum sustained forwarding rate of 2.8 Gbps and can support bursts up to full line rate. When combined with a PSC2, the XGLC supports a maximum sustained forwarding rate of 6 Gbps and can support bursts up to full line rate. The XGLC supports a maximum Ethernet frame size of 3.5 KB.
The XGLC uses a Small Form-factor Pluggable (SFP+) module. The modules support one of two media types: 10GBASE-SR (Short Reach) 850 nm, 300 m over multi-mode fiber (MMF), or 10GBASE-LR (Long Reach) 1310 nm, 10 km over single-mode fiber (SMF).
The XGLC is configured and monitored via the System Management Card (SMC) over the system’s control bus. Both SMCs must be active to maintain maximum forwarding rates. A feature of the higher-speed line cards (the 10 Gigabit Ethernet Line Card or XGLC, and the Quad Gigabit Ethernet Line Card or QGLC) is the ability to use the Star Channel when the firmware needs to be upgraded. The Star Channel is a 2 x 140 Gbps redundancy bus between the packet processing card and the line card that allows a faster download. Another way to perform a firmware upgrade is via the System Management Bus, with 1 Mbps throughput, which connects the SMC to every card in the system.
Install XGLCs in chassis slots 17 through 23 and 26 through 32. These cards should always be installed directly behind their respective packet processing cards, but they are not required behind any redundant packet processing cards (those operating in Standby mode).
The supported redundancy schemes for XGLC are L3, Equal Cost Multi Path (ECMP) and 1:1 side-by-side redundancy. Refer to the “Line Card Installation” chapter for additional information.
Power Estimate: 30W maximum
 
Side by side redundancy allows two XGLC cards installed in neighboring slots to act as a redundant pair. Side by side pair slots are 17-18, 19-20, 21-22, 23-26, 27-28, 29-30, and 31-32.
Side by side redundancy works only with XGLC cards. If side by side redundancy is configured for non-XGLC cards, those cards are taken offline. If the XGLCs are not configured for side by side redundancy, they run independently without redundancy.
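Note that the pairing skips slots 24 and 25 (the SPIO slots), which is why slot 23 pairs with slot 26. The lookup below is hypothetical and simply encodes the pairs listed above for illustration:

# Illustrative only: side by side XGLC redundancy pairs listed in this section.
XGLC_PAIRS = [(17, 18), (19, 20), (21, 22), (23, 26), (27, 28), (29, 30), (31, 32)]

def redundancy_partner(slot):
    for a, b in XGLC_PAIRS:
        if slot == a:
            return b
        if slot == b:
            return a
    return None  # slot is not a member of any side by side pair

print(redundancy_partner(23))  # 26
print(redundancy_partner(24))  # None (SPIO slot, never part of an XGLC pair)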
When you first configure side by side redundancy, the higher-numbered slot’s configuration is erased and then duplicated from the lower-numbered slot. The lower-numbered top slot retains all other configuration settings. While side by side redundancy is configured, all other configuration commands work as if the side by side slots were top-bottom slots. Configuration commands directed at the bottom slots either fail with errors or are disallowed.
When you unconfigure side by side redundancy, the configuration for the higher-numbered top and bottom slots is initialized to the defaults. The lower-numbered top slot retains all other configuration settings. If you install non-XGLC cards in the slots, you may bring them back online.
 
SFP Modules Supported by the XGLC
Fiber Type: Multi-mode fiber (MMF), 850 nm wavelength
Rx Sensitivity: -11.1 dBm
Fiber Type: Single-mode fiber (SMF), 1310 nm wavelength
Core Size (microns)/Range: 9/32808.4 feet (10 Kilometers)
Important: Class 1 Laser Compliance Notice This product has been tested and found to comply with the limits for Class 1 laser devices for IEC825, EN60825, and 21CFR1040 specifications.
Warning: Only trained and qualified personnel should install, replace, or service this equipment. Invisible laser radiation may be emitted from the aperture of the port when no cable is connected. Avoid exposure to laser radiation and do not stare into open apertures. BE SURE TO KEEP COVER ON INTERFACE WHEN NOT IN USE.
Important: Disposal of this product should be performed in accordance with all national laws and regulations.
The following shows the front panel of the XGLC, identifying its interfaces and major components:
10 Gigabit Ethernet Line Card (XGLC)
10 Gigabit Ethernet Line Card (XGLC) Callout Definitions
Card Ejector Levers—Use to insert/remove card to/from chassis.
Card Level Status LEDs—Show the status of the card. (See Applying Power and Verifying Installation for definitions)
10 Gigabit Ethernet Interface—10 Gigabit Ethernet SFP+ module. 10GBASE-SR and 10GBASE-LR interfaces are supported, depending on the SFP+ module installed.
Optical Line Cards (OLC and OLC2)
There are two optical fiber line cards: the OLC and the OLC2. The OLC is labeled ATM/POS OC-3. The OLC2 is labeled OLC2 OC-3/STM-1 Multi Mode (or Single Mode, depending on SFP type). Both cards provide either OC-3 or STM-1 signaling, and both support ATM. The primary difference between the two cards is that the OLC2 is RoHS 6/6 compliant. RoHS stands for Restriction of Hazardous Substances; it is the European Union directive that restricts the use of six hazardous substances in the manufacture of electrical components.
 
The OLC/OLC2 support both SDH and SONET. The basic unit of framing in SDH is STM-1 (Synchronous Transport Module level - 1), which operates at 155.52 Mbit/s. SONET refers to this basic unit as STS-3c (Synchronous Transport Signal - 3, concatenated), but its high-level functionality, frame size, and bit-rate are the same as STM-1.
SONET offers an additional basic unit of transmission, STS-1 (Synchronous Transport Signal - 1), operating at 51.84 Mbit/s—exactly one third of an STM-1/STS-3c. The OLC/OLC2 concatenates three STS-1 (OC-1) frames to provide transmission speeds up to 155.52 Mb/s with payload rates of 149.76 Mb/s and overhead rates of 5.76 Mb/s.
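The rates quoted above are related by simple multiples; the arithmetic below restates them for clarity (illustrative only):

# Illustrative arithmetic only, restating the SONET/SDH rates above (Mbit/s).
sts1_rate = 51.84
stm1_rate = 3 * sts1_rate                  # 155.52 Mbit/s (STM-1 / STS-3c / OC-3)
payload_rate = 149.76                      # payload rate quoted above
overhead_rate = stm1_rate - payload_rate   # 5.76 Mbit/s of overhead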
The OLC/OLC2 optical fiber line cards support network connectivity through Iu or IuPS interfaces to the UMTS Terrestrial Radio Access Network (UTRAN). These interfaces are commonly used with our SGSN products to provide either non-IP 3G traffic or all IP 3G traffic (for all-IP packet-based networking) over ATM (Asynchronous Transfer Mode).
 
Each OLC/OLC2 provides four physical interfaces (ports), numbered top to bottom from 1 to 4 and populated by Small Form-factor Pluggable (SFP) modules that include LC-type, Bellcore GR-253-CORE-compliant connectors. The Optical (ATM) Line Card supports two types of SFP modules (ports) and applicable cabling, but each card supports only one type at a time, as indicated in the following table:
Fiber Types: Single-mode optical fiber
Wavelength: 1310 nm
Core Size: 9 micrometers
Cladding Diameter: 125 micrometers
Range: Intermediate/21 kilometers
Attenuation: 0.25 dB/KM
Min/Max Tx Power: -15 dBm/-8 dBm
Fiber Types: Multi-mode optical fiber
Wavelength: 1310 nm
Core Size: 62.5 micrometers
Cladding Diameter: 125 micrometers
Range: Short/2 kilometers
Min/Max Tx Power: -19 dBm/-14 dBm
 
Install the OLC/OLC2 directly behind its respective (Active) packet processing card. You may optionally install an OLC/OLC2 behind a redundant packet processing card (those operating in Standby mode). As with other line cards, install the Optical (ATM) Line Card in slots 17 through 23, 26 through 39, and 42 through 48.
The following figures show the panel of the OLC and OLC2 Optical (ATM) Line Cards, indicating their ports and major components.
 
OLC Optical (ATM) Line Card
 
OLC2 Optical (ATM) Line Card
Optical (ATM) Line Card Callout Definitions
Card Ejector Levers—Use to insert/remove card to/from chassis.
Interlock Switch—When pulled downward, the interlock switch notifies the system to safely power down card prior to removal.
Card Level Status LEDs—Show the status of the card. See the Applying Power and Verifying Installation for definitions.
Port connectors—Fiber LC duplex female connector.
Port Level Status LEDs—Show the status of a port. See the Applying Power and Verifying Installation for definitions.
Line Card Label—Identifies the type of SFP modules and cabling supported:
Channelized Line Cards (CLC and CLC2)
There are two types of Channelized STM-1/OC-3 optical fiber line cards. Often referred to as the CLC, CLC2, or Frame Relay line card, they provide frame relay over SONET or SDH. The CLC/CLC2 supports network connectivity through a Gb interface to connect to the Packet Control Unit (PCU) of the base station subsystem (BSS). These interfaces are commonly used with our SGSN products to provide frame relay.
Channelized Line Card (CLC)
In North America, the card supplies ANSI SONET STS-3 (optical OC-3) signaling. In Europe, the card supplies SDH STM-1 (optical OC-3). The transmission rate for the card is 155.52 Mb/s with 84 SONET channels supplying T1 and 63 SDH channels supplying E1.
Each CLC provides one optical fiber physical interface (port). The port is populated by a Small Form-factor Pluggable (SFP) module which includes an LC-type connector. The port of the CLC supports two types of SFP modules and cabling, as shown in the following table.
Channelized Line Card 2 (CLC2)
In North America, the card supplies ANSI SONET STS-3 (optical OC-3) signaling. In Europe, the card supplies SDH STM-1 (optical OC-3). The transmission rate for the card is 155.52 Mb/s, with 336 SONET channels supplying T1 and 252 SDH channels supplying E1. The CLC2 is RoHS 6/6 compliant.
 
Each CLC2 provides four optical fiber physical interfaces (ports). The ports are populated by Small Form-factor Pluggable (SFP) modules that include an LC-type connector. The ports of the CLC2 support two types of SFP modules and cabling, as shown in the following table.
 
 
Fiber Types: Single-mode optical fiber
Wavelength: 1310 nm
Core Size: 9 micrometers
Cladding Diameter: 125 micrometers
Range: Intermediate/21 kilometers
Attenuation: 0.25 dB/KM
Min/Max Tx Power: -15 dBm/-8 dBm
Fiber Types: Multi-mode optical fiber
Wavelength: 1310 nm
Core Size: 62.5 micrometers
Cladding Diameter: 125 micrometers
Range: Short/2 kilometers
Min/Max Tx Power: -19 dBm/-14 dBm
Install the CLC/CLC2 directly behind its respective (Active) packet processing card. You may optionally install CLCs/CLC2s behind a redundant (Standby) packet processing card. As with other line cards, install the Channelized Line Cards in slots 17 through 23, 26 through 39, and 42 through 48.
The following figures show the panel of the CLC and CLC2 Channelized Line Cards, identifying their interfaces and major components.
 
CLC Channelized Line Card
 
CLC2 Channelized Line Card
Channelized Line Card Callout Definitions
Card Ejector Levers—Use to insert/remove card to/from chassis.
Interlock Switch—When pulled downward, the interlock switch notifies the system to safely power down card prior to removal.
Card Level Status LEDs—Show the status of the card. See the Applying Power and Verifying Installation for definitions.
Port connectors—Fiber LC duplex female connector.
Port Level Status LEDs—Show the status of a port. See the Applying Power and Verifying Installation for definitions.
Line Card Label—Identifies the type of SFP modules and cabling supported:
Standards Compliance
The Channelized Line Card (CLC) was developed in compliance with the following standards:
 
General Application and Line Card Information
Card Interlock Switch
Each card has a switched interlock mechanism that is integrated with the upper card ejector lever. This ensures proper notification to the system before a card is removed. You cannot configure or place a card into service until you push the card interlock switch upward. This locks the upper ejector lever in place and signals the system that the card is ready for use.
Important: You must push the interlock switch upward into position before the upper attaching screw on the card will properly align with the screw hole in the chassis.
When you pull the interlock downward, it allows the upper ejector lever to be operated. This sliding lock mechanism provides notification to the system before you physically remove a card from the chassis. This allows the system time to migrate various processes on the particular operational card. The upper card ejector only operates when the slide lock is pulled downward to the unlocked position.
Caution: Failure to lower the interlock switch before operating the upper card ejector lever may result in damage to the interlock switch and possibly the card itself.
The following shows an exploded view of how the card interlock switch works in conjunction with the ejector lever.
 
Card Interlock Switch in the Lever Locked Position
 
