Cisco ASR 9000 Series Aggregation Services Router Overview and Reference Guide
Functional Description

Table of Contents

Functional Description

Router Operation

Route Switch Processor Card

Route Processor Card

Front Panel Connectors

Management LAN Ports

Console Port

Auxiliary Port

Alarm Out

Synchronization Ports

RP USB Port

Front Panel Indicators

LED Matrix Display

LED Matrix Boot Stage and Runtime Display

LED Matrix CAN Bus Controller Error Display

Push Buttons

Functional Description

Switch Fabric

Unicast Traffic

Multicast Traffic

Route Processor Functions

Processor-to-Processor Communication

Route Processor/Fabric Interconnect

Fabric Controller Card

FC Card Front Panel Indicator

Ethernet Line Cards

Functional Description

40-Port Gigabit Ethernet (40x1GE) Line Card

8-Port 10-Gigabit Ethernet (8x10GE) 2:1 Oversubscribed Line Card

4-Port 10-Gigabit Ethernet (4x10GE) Line Card

8-port 10-Gigabit Ethernet (8x10GE) 80-Gbps Line Rate Card

2-Port 10-Gigabit Ethernet + 20-port 1-Gigabit Ethernet (2x10GE + 20x1GE) Combination Line Card

16-port 10-Gigabit Ethernet (16x10GE) Oversubscribed Line Card

24-Port 10-Gigabit Ethernet Line Card

36-port 10-Gigabit Ethernet Line Card

2-port 100-Gigabit Ethernet Line Card

1-Port 100-Gigabit Ethernet Line Card

Modular Line Cards

20-port Gigabit Ethernet Modular Port Adapter

8-port 10-Gigabit Ethernet Modular Port Adapter

4-Port 10-Gigabit Ethernet Modular Port Adapter

2-port 10-Gigabit Ethernet Modular Port Adapter

2-Port 40-Gigabit Ethernet Modular Port Adapter

1-Port 40-Gigabit Ethernet Modular Port Adapter

Power System Functional Description

Power Modules

Power Module Status Indicators

System Power Redundancy

AC Power Trays

AC Tray Power Switch

AC Input Voltage Range

DC Output Levels

AC System Operation

Power Up

Power Down

DC Power Trays

DC Tray Power Switch

DC Power Tray Rear Panel

DC Power Tray Power Feed Indicator

DC System Operation

Power Up

Power Down

Cooling System Functional Description

Cooling Path

Fan Trays

Status Indicators

Fan Tray Servicing

Slot Fillers

Chassis Air Filter

Speed Control

Temperature Sensing and Monitoring

Servicing

System Shutdown

System Management and Configuration

Cisco IOS XR Software

System Management Interfaces

Command-Line Interface

Craft Works Interface

XML

SNMP

SNMP Agent

MIBs

Online Diagnostics

Functional Description

This chapter provides a functional description of the Cisco ASR 9000 Series Router, Route Switch Processor (RSP) card, Route Processor (RP) card, Fabric Controller (FC) card, Ethernet line cards, power and cooling systems, and subsystems such as management, configuration, alarms, and monitoring.

Router Operation

The ASR 9000 Series Routers are fully distributed routers that use a switch fabric to interconnect a series of chassis slots, each of which can hold one of several types of line cards. Each line card in the Cisco ASR 9000 Series has integrated I/O and forwarding engines, plus sufficient control plane resources to manage line card resources. Two slots in the chassis are reserved for RSP/RP cards to provide a single point of contact for chassis provisioning and management.

Figure 2-1 shows the platform architecture of the Cisco ASR 9010 Router, Cisco ASR 9006 Router, and Cisco ASR 9904 Router.

Figure 2-1 Cisco ASR 9010 Router, Cisco ASR 9006 Router, and Cisco ASR 9904 Router Platform Architecture

 

Figure 2-2 shows the platform architecture of the Cisco ASR 9922 Router and Cisco ASR 9912 Router.

Figure 2-2 Cisco ASR 9922 Router and Cisco ASR 9912 Router Platform Architecture

 

Figure 2-3 shows the major system components and interconnections of the Cisco ASR 9000 Series Routers.

Figure 2-3 Major System Components and Interconnections in the Cisco ASR 9000 Series Routers

 

Figure 2-4 Additional System Components in the Cisco ASR 9000 Series Routers

 

Figure 2-5 Major System Components and Interconnections in the Cisco ASR 9922 Series Router

 

Route Switch Processor Card

The RSP card is the main control and switch fabric element in the Cisco ASR 9010 Router, Cisco ASR 9006 Router, and Cisco ASR 9904 Router chassis. The RSP card provides system control, packet switching, and timing control for the system. To provide redundancy, there can be two RSP cards in the system, one as the active control RSP and the other as the standby RSP. The standby RSP takes over all control functions should the active RSP fail.

Figure 2-6 shows the front panel connectors and indicators of the RSP card.

Figure 2-6 RSP Card Front Panel Indicators and Connectors

 

 

1. Management LAN ports

2. CONSOLE and AUX ports

3. SYNC (BITS/J.211) ports

4. Alarm Out DB9 connector

5. Compact Flash type I/II

6. Alarm Cutoff (ACO) and LAMP TEST push buttons

7. Eight discrete LED indicators

8. LED matrix display

Figure 2-7 shows the front panel of the RSP-440 card.

Figure 2-7 RSP-440 Card Front Panel

 

1. SYNC (BITS/J.211) ports

2. SFP ports

3. IEEE 1588 port

4. ToD port

5. 10 MHz and 1 PPS indicators

6. Alarm Out DB9 connector

7. External USB port

8. Management LAN ports

9. CONSOLE and AUX ports

10. Alarm Cutoff (ACO) and LAMP TEST push buttons

11. Eight discrete LED indicators

12. LED matrix display

Route Processor Card

The RP card is the main control element in the Cisco ASR 9922 Router and Cisco ASR 9912 Router chassis. The switch fabric element has been moved to the FC cards. The RP card provides system control, packet switching, and timing control for the system. To provide redundancy, there are two RP cards in the system, one as the active control RP and the other as the standby RP. The standby RP takes over all control functions should the active RP fail.

Figure 2-8 shows the front panel connectors and indicators of the RP card.

Figure 2-8 RP Card Front Panel Connectors and Indicators

 

 

1. SYNC (BITS/J.211) ports

2. SFP/SFP+ ports

3. IEEE 1588 port

4. Inter-chassis nv Sync0

5. Inter-chassis nv Sync1 GPS ToD

6. 10 MHz and 1 PPS indicators

7. Alarm Out DB9 connector

8. External USB port

9. Management LAN ports

10. CONSOLE and AUX ports

11. Alarm Cutoff (ACO) and Lamp Test push buttons

12. Nine discrete LED indicators

13. LED matrix display

Front Panel Connectors

This section describes the front panel ports and connectors of the RSP/RP card.

Management LAN Ports

Two dual-speed (100M/1000M) management LAN RJ-45 connectors are provided for use as out-of-band management ports. The speed of the management LAN is autonegotiated.

Console Port

The EIA/TIA-232 RJ-45 console port provides a data circuit-terminating equipment (DCE) interface for connecting a console terminal. This port defaults to 9600 baud, 8 data bits, no parity, 2 stop bits, and no flow control.

Auxiliary Port

The EIA/TIA-232 RJ-45 auxiliary port provides a data circuit-terminating equipment (DCE) interface that supports flow control. Use this port to connect a modem, a channel service unit (CSU), or other optional equipment for Telnet management. This port defaults to 9600 baud, 8 data bits, no parity, 1 stop bit, and a software handshake.
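In shorthand, those defaults are 9600 8N2 for the console port and 9600 8N1 for the auxiliary port. A small sketch of the notation (the helper function is ours, for illustration only):

```python
def serial_shorthand(baud, data_bits, parity, stop_bits):
    # Conventional "baud dataParityStop" notation, e.g. "9600 8N2" means
    # 9600 baud, 8 data bits, No parity, 2 stop bits.
    return f"{baud} {data_bits}{parity[0].upper()}{stop_bits}"

CONSOLE = serial_shorthand(9600, 8, "none", 2)  # console port default
AUX = serial_shorthand(9600, 8, "none", 1)      # auxiliary port default
```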

Alarm Out

Alarm circuitry on the RSP/RP activates dry contact closures that are accessible through the nine-pin Alarm Out connector on the RSP/RP front panel. Each RSP/RP card drives a set of three alarm output contacts. Both normally-open and normally-closed contacts are available.

Only the active RSP/RP drives the alarm outputs. Should a switchover to the standby RSP/RP occur, the newly active RSP/RP drives the alarm outputs.

Synchronization Ports

The SYNC 0 and SYNC 1 ports are timing ports that can be configured as Building Integrated Timing System (BITS) ports. A BITS port provides a connection for an external synchronization source to establish precise frequency control at multiple network nodes, if required for your application. The RSP/RP card contains a Synchronous Equipment Timing Source (SETS) that can receive a frequency reference from an external BITS timing interface or from a clock signal recovered from any incoming interface, such as a Gigabit Ethernet, 10-Gigabit Ethernet, or SONET interface. The RSP/RP SETS circuit filters the received timing signal and uses it to drive an outgoing Ethernet interface or BITS output port.

The timing port(s) can also be configured as J.211 or DTI ports. A DOCSIS Timing Interface (DTI) port is used to connect to an external DTI server to synchronize timing and frequency across multiple routers. The timing function allows precise synchronization of real-time clocks in a network for measurements of network performance, for example, measuring delay across a VPN. The frequency reference acts like a BITS input.

RP USB Port

The RP card has a single external Universal Serial Bus (USB) port. A USB flash memory device can be inserted to load and transfer software images and files. This memory device can be used to turboboot the system or as the installation source for Package Information Envelopes (PIE) and Software Maintenance Upgrades (SMU). This memory device can also be used for users' data files, core files, and configuration backups.

Front Panel Indicators

The RSP card has eight discrete LED indicators and an LED dot-matrix display for system information. The RSP-440 adds three USB-specific LEDs. The RP has nine discrete LED indicators and an LED dot-matrix display for system information.

Table 2-1 shows the display definitions of the eight discrete LEDs on the RSP front panel and the three RSP-440 specific USB LEDs.

 

Table 2-1 RSP and RSP-440 Discrete LED Display Definitions

Power Fail (FAIL)
  • Red: Standby Power Fail LED. The LED is turned off by the Controller Area Network (CAN) bus controller after it is up and running.
  • Off: Standby power is normal.

Critical Alarm (CRIT)
  • Red: A critical alarm has occurred.
  • Off (default after reset): No critical alarm has occurred.

Major Alarm (MAJ)
  • Red: A major alarm has occurred.
  • Off (default after reset): No major alarm has occurred.

Minor Alarm (MIN)
  • Amber: A minor alarm has occurred.
  • Off (default after reset): No minor alarm has occurred.

Synchronization (SYNC)
  • Green: System timing is synchronized to an external timing source.
  • Amber: System timing is free running.
  • Off: Not used; the LED never turns off.

Internal Hard Disk Drive (HDD)
  • Green: The hard disk drive is busy/active. The LED is driven by the SAS controller.
  • Off (default after reset): The hard disk drive is not busy/active.

External Compact Flash (CF)
  • Green: Compact Flash is busy/active.
  • Off (default after reset): Compact Flash is not busy/active.

Alarm Cutoff (ACO)
  • Amber: Alarm Cutoff has been enabled. The ACO push button was pressed after at least one alarm occurred.
  • Off (default after reset): Alarm Cutoff is not enabled.

External USB 2.0 [RSP-440 only]
  • Green: External USB is busy/active.
  • Off (default after reset): External USB is not busy/active.

Internal USB 2.0 A [RSP-440 only]
  • Green: Internal USB is busy/active.
  • Off (default after reset): Internal USB is not busy/active.

Internal USB 2.0 B [RSP-440 only]
  • Green: Internal USB is busy/active.
  • Off (default after reset): Internal USB is not busy/active.

Table 2-2 lists the display definitions of the nine discrete LEDs on the RP front panel.

 

Table 2-2 RP Discrete LED Display Definitions

Power Fail (FAIL)
  • Red (default after power on): Standby Power Fail LED. The LED is turned off by the CAN bus controller after it is up and running.
  • Off: Standby power is normal.

Critical Alarm (CRIT)
  • Red: A critical alarm has occurred.
  • Off (default after reset): No critical alarm has occurred.

Major Alarm (MAJ)
  • Red: A major alarm has occurred.
  • Off (default after reset): No major alarm has occurred.

Minor Alarm (MIN)
  • Amber: A minor alarm has occurred.
  • Off (default after reset): No minor alarm has occurred.

Alarm Cutoff (ACO)
  • Amber: Alarm Cutoff has been enabled. The ACO push button was pressed after at least one alarm occurred.
  • Off (default after reset): Alarm Cutoff is not enabled.

Synchronization (SYNC)
  • Green: System timing is synchronized to an external timing source, including IEEE 1588.
  • Amber: System timing is free running.
  • Off (default after reset): Not used; the LED never turns off.

Internal Solid State Hard Disk Drive (SSD)
  • Green: The internal solid state drive (SSD0) is busy/active. The LED is driven by the SSD controller.
  • Off (default after reset): The internal solid state drive is not busy/active.

FC Fault
  • Amber: A fault has occurred on one or more of the installed FC cards. This LED is also on during the boot phase of the FC.
  • Off (default after reset): FC cards are booted up and ready.

GPS
  • Green: The GPS interface is provisioned and the ports are turned on. ToD, 1 PPS, and 10 MHz are all valid.
  • Off (default after reset): Either the interface is not provisioned or the ports are not turned on. ToD, 1 PPS, and 10 MHz are not valid.

LED Matrix Display

The LED matrix displays one row of four characters. The matrix becomes active when the CPU powers on and displays the stages of the boot process, as well as displaying runtime information during normal operation. If there are CAN Bus Controller problems, error messages are displayed.

LED Matrix Boot Stage and Runtime Display

Table 2-3 describes the RSP LED matrix displays of the stages of the boot process and runtime information.

Table 2-4 describes the RSP-440 and RP LED matrix displays of the stages of the boot process and runtime information.

Not all of these messages are visible during a successful boot because the display updates too quickly for each message to be seen. If a failure is detected during boot, the message remains visible, indicating the stage at which the boot process stopped. When possible, the RSP/RP card logs the failure information and reboots.

 

Table 2-3 RSP LED Matrix Boot Stage and Runtime Display

INIT: Card is inserted and the microcontroller is initialized.

BOOT: Card is powered on and the CPU is booting.

IMEM: Starting initialization of memory.

IGEN: Starting initialization of the card.

ICBC: Initializing communication with the microcontroller.

PDxy: Loading programmable devices (x = FPGA, y = ROMMON).

PSTx: Power-on self test x.

RMN: All tests finished and ROMMON is ready for commands.

LOAD: Downloading the Minimum Boot Image (MBI) to the CPU.

MBI: Starting execution of the MBI.

IOXR: Cisco IOS XR Software is starting execution.

ACTV: RSP role is determined to be active RSP.

STBY: RSP role is determined to be standby RSP.

PREP: Preparing disk boot.
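For log or console tooling, the boot-stage codes in Table 2-3 lend themselves to a simple lookup. A sketch (abridged to a subset of the RSP codes; the function name is ours, not part of any Cisco API):

```python
# Subset of the LED matrix boot-stage codes from Table 2-3 (RSP).
BOOT_STAGES = {
    "INIT": "Card is inserted and the microcontroller is initialized.",
    "BOOT": "Card is powered on and the CPU is booting.",
    "IMEM": "Starting initialization of memory.",
    "ICBC": "Initializing communication with the microcontroller.",
    "RMN":  "All tests finished and ROMMON is ready for commands.",
    "LOAD": "Downloading the Minimum Boot Image (MBI) to the CPU.",
    "MBI":  "Starting execution of the MBI.",
    "IOXR": "Cisco IOS XR Software is starting execution.",
    "ACTV": "RSP role is determined to be active RSP.",
    "STBY": "RSP role is determined to be standby RSP.",
}

def describe_stage(code):
    """Return the meaning of a four-character LED matrix code."""
    return BOOT_STAGES.get(code.strip().upper(), "unknown stage: " + code)
```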

 

Table 2-4 RSP-440 and RP LED Matrix Boot Stage and Runtime Display

INIT: Card is inserted and the microcontroller is initialized.

BOOT: Card is powered on and the CPU is booting.

IMEM: Starting initialization of memory.

IGEN: Starting initialization of the card.

ICBC: Initializing communication with the microcontroller.

SCPI: Board is not plugged in properly.

STID: CBC was unable to read the slot ID pins correctly.

PSEQ: CBC detected a power sequencer failure.

DBPO: CBC detected an issue during board power-up.

KPWR: CBC detected an issue during board power-up.

LGNP: CBC detected an issue during board power-up.

LGNI: CBC detected an issue during board power-up.

RMN: All tests finished and ROMMON is ready for commands.

LOAD: Downloading the Minimum Boot Image (MBI) to the CPU.

RRST: ROMMON rebooting the board after an MBI validation timeout.

MVB: ROMMON trying MBI validation boot.

MBI: Starting execution of the MBI.

IOXR: Cisco IOS XR Software is starting execution.

LDG: The RSP/RP is loading (MBI started and the card is preparing for activity).

INCP: The software or configuration is incompatible with the RSP/RP.

OOSM: The RSP/RP is in Out of Service, Maintenance mode.

ACT: The RSP/RP is active (IOS XR completely up and ready for traffic).

STBY: The RSP/RP is standby (IOS XR completely up and ready).

LED Matrix CAN Bus Controller Error Display

Table 2-5 shows the error messages the LED matrix displays if the RSP card fails one of the power on self tests.

 

Table 2-5 RSP LED Matrix CAN Bus Controller Status Display

PST1: Failed DDR RAM memory test.

PST2: Failed FPGA image cyclic redundancy check (CRC).

PST3: Failed card type and slot ID verification.

Push Buttons

Two push buttons are provided on the RSP/RP card front panel.

  • Alarm Cutoff (ACO)—ACO activation suppresses alarm outputs. When the ACO button is pushed while critical alarms are active, the ACO LED turns on and the corresponding alarm output contacts revert to the normally open (non-alarm) state, thus suppressing the alarm. If subsequent critical alarms are detected and activated after the ACO activation, the ACO function is deactivated to notify the user of the arrival of the new alarm(s). In this case, the ACO LED will turn off and any active alarms are again indicated by driving their alarm output contacts to the alarm state.
  • Lamp Test—When the Lamp Test button is pushed, the RSP/RP status LED, line card status and port LEDs, and Fan Tray LEDs light until the button is released. The LED matrix display is not affected.
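The ACO behavior above is essentially a small state machine: pressing ACO while alarms are active suppresses the alarm outputs, and any subsequently detected alarm cancels the suppression. A simplified, illustrative model (the class, attribute, and method names are ours, not Cisco's):

```python
class AlarmPanel:
    """Toy model of the Alarm Cutoff (ACO) behavior described above."""

    def __init__(self):
        self.active_alarms = set()
        self.aco = False  # True when alarm outputs are suppressed (ACO LED on)

    def raise_alarm(self, name):
        # A new alarm arriving after ACO activation deactivates the cutoff,
        # so the output contacts signal the alarm again.
        self.active_alarms.add(name)
        self.aco = False

    def press_aco(self):
        # Pressing ACO while alarms are active suppresses the outputs.
        if self.active_alarms:
            self.aco = True

    def contacts_in_alarm_state(self):
        # Contacts are driven to the alarm state only when alarms are
        # active and not suppressed.
        return bool(self.active_alarms) and not self.aco
```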

Functional Description

The switch fabric and route processor functions are combined on a single RSP card in the Cisco ASR 9010 Router, Cisco ASR 9006 Router, and Cisco ASR 9904 Router. In the Cisco ASR 9922 Router and Cisco ASR 9912 Router, the route processor functions are on the RP card, whereas the switch fabric is on the FC cards. The RSP/RP card also provides shared resources for backplane Ethernet, timing, and chassis control. Redundant RSP/RP cards provide the central point of control for chassis provisioning, management, and data-plane switching.

Switch Fabric

The switch fabric portion of the RSP card links the line cards together. The switch fabric is configured as a single stage of switching with multiple parallel planes. The fabric is responsible for getting packets from one line card to another, but has no packet processing capabilities. Each fabric plane is a single-stage, non-blocking, packet-based, store-and-forward switch. To manage fabric congestion, the RSP card also provides centralized Virtual Output Queue (VOQ) arbitration.

In systems with the RSP card, the switch fabric is capable of delivering 80 Gbps per line card slot. In systems with the RSP-440 card, the switch fabric is capable of delivering 200 Gbps per line card slot.

The switch fabric is 1+1 redundant, with one copy of the fabric on each redundant RSP card. Each RSP card carries enough switching capacity to meet the router throughput specifications, allowing for full redundancy.

In the Cisco ASR 9922 Router and Cisco ASR 9912 Router, the switch fabric element has been moved to dedicated FC cards that connect to the backplanes alongside the RP cards. The switch fabric is capable of delivering 550 Gbps per line card slot.

When five FC cards are installed in the chassis, the switch fabric is 4+1 redundant. When all seven FC cards are installed in the chassis, the switch fabric is 6+1 redundant. The switch fabric is fully redundant, with one copy of the fabric on each FC, and each FC carries enough switching capacity to meet the chassis throughput specifications.

Figure 2-9 shows the switch fabric interconnections.

Figure 2-9 Switch Fabric Interconnections

 

Figure 2-10 shows the Cisco ASR 9922 Router switch fabric.

Figure 2-10 Cisco ASR 9922 Router Switch Fabric

 

Unicast Traffic

Unicast traffic through the switch is managed by a VOQ scheduler chip. The VOQ scheduler ensures that a buffer is available at the egress of the switch to receive a packet before the packet can be sent into the switch. This mechanism ensures that all ingress line cards have fair access to an egress card, no matter how congested that egress card may be.

The VOQ mechanism is an overlay, separate from the switch fabric itself. VOQ arbitration does not directly control the switch fabric, but ensures that traffic presented to the switch will ultimately have a place to go when it exits the switch, preventing congestion in the fabric.

The VOQ scheduler is also one-for-one redundant, with one VOQ scheduler chip on each of the two redundant RSP/RP cards.
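The arbitration described above can be pictured as a credit scheme: an ingress virtual output queue may launch a packet into the fabric only while the target egress has buffer space, and credits return as the egress drains. A toy model of that idea (the class, credit counts, and method names are illustrative, not the actual arbitration protocol):

```python
from collections import deque

class VoqScheduler:
    """Toy model of VOQ arbitration: a packet is granted entry to the
    fabric only when its egress card has a free buffer (a credit)."""

    def __init__(self, egress_cards, credits_per_egress=2):
        self.credits = {e: credits_per_egress for e in egress_cards}
        self.voqs = {e: deque() for e in egress_cards}  # one queue per egress

    def enqueue(self, egress, packet):
        self.voqs[egress].append(packet)

    def schedule(self):
        granted = []
        for egress, q in self.voqs.items():
            while q and self.credits[egress] > 0:
                self.credits[egress] -= 1  # reserve an egress buffer first
                granted.append((egress, q.popleft()))
        return granted

    def egress_drained(self, egress):
        self.credits[egress] += 1  # buffer freed when the egress transmits
```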

Multicast Traffic

Multicast traffic is replicated in the switch fabric. For multicast (including unicast floods), the Cisco ASR 9000 Series Routers replicate the packet as necessary at the divergence points inside the system, so that the multicast packets can replicate efficiently without having to burden any particular path with multiple copies of the same packet.

The switch fabric has the capability to replicate multicast packets to downlink egress ports. In addition, the line cards have the capability to put multiple copies inside different tunnels or attachment circuits in a single port.

There are 64-K Fabric Multicast Groups (RSP 2-based line cards) or 128-K Fabric Multicast Groups (RSP 440-based line cards) in the system, which allow the replication to go only to the downlink paths that need them, without sending all multicast traffic to every packet processor. Each multicast group in the system can be configured as to which line card and which packet processor on that card a packet is replicated to. Multicast is not arbitrated by the VOQ mechanism, but it is subject to arbitration at congestion points within the switch fabric.
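The group-directed replication described above can be sketched with a hypothetical fabric multicast group table mapping a group ID to its (line card, packet processor) members. All IDs and names here are invented for the example; real group membership is programmed by the system:

```python
# Hypothetical fabric multicast group table: group ID -> member
# (line card, packet processor) pairs.
FGID_TABLE = {
    0x1001: {("LC0", 0), ("LC0", 1), ("LC3", 2)},
}

def replicate(fgid, packet):
    """Emit one copy per member NPU; non-member line cards receive
    nothing, so no path carries needless duplicate copies."""
    members = FGID_TABLE.get(fgid, set())
    return [(lc, npu, packet) for (lc, npu) in sorted(members)]
```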

Route Processor Functions

The Route Processor performs the ordinary chassis management functions. The ASR 9000 Series Routers run Cisco IOS XR software, so the Route Processor runs the centralized portions of the software for chassis control and management.

Secondary functions of the Route Processor include boot media, system timing (frequency and time of day) synchronization, precision clock synchronization, backplane Ethernet communication, and power control (through a separate CAN bus controller network).

The Route Processor communicates with other route processors and linecards over a switched Ethernet out-of-band channel (EOBC) for management and control purposes.

Figure 2-11 shows the route processor interconnections on the RSP.

Figure 2-12 shows the component interconnections on the RP.

Figure 2-13 shows the component interconnections on the FC.

Figure 2-11 Route Processor Interconnections

 

Figure 2-12 RP Component Interconnections

 

Figure 2-13 FC Component Interconnections

 

Processor-to-Processor Communication

The RSP/RP card communicates with the control processors on each line card through the Ethernet out-of-band channel (EOBC) Gigabit Ethernet switch. This path is used for processor-to-processor communication, such as InterProcess Communication (IPC). The active RSP/RP card also uses the EOBC to communicate with the standby RSP/RP card, if installed.

Route Processor/Fabric Interconnect

The RSP card has a fabric interface chip (FIC) attached to the switch fabric and linked to the Route Processor through a Gigabit Ethernet interface through a packet diversion FPGA. This path is used for external traffic diverted to the RSP card by line card network processors.

The packet diversion FPGA has three key functions:

  • Packet header translation between the header used by the fabric interface chip and the header exchanged with the Ethernet interface on the route processor.
  • I/O interface protocol conversion (rate-matching) between the 20-Gbps DDR bus from the fabric interface chip and the 1-Gbps interface on the processor.
  • Flow control to prevent overflow in the from-fabric buffer within the packet diversion FPGA, in case of fabric congestion.

The Route Processor communicates with the switch fabric via a FIC to process control traffic. The FIC has sufficient bandwidth to handle the control traffic and flow control in the event of fabric congestion. External traffic is diverted to the Route Processor by the line card network processors.

The RP and FC cards in the Cisco ASR 9922 Router have control interface chips and FICs attached to the backplanes that provide control plane and punt paths.

Fabric Controller Card

On the Cisco ASR 9922 Router and Cisco ASR 9912 Router, the switch fabric has been moved to FC cards.

The switch fabric is configured as a single stage of switching with multiple parallel planes. The switch fabric is responsible for transporting packets from one line card to another but has no packet processing capabilities. Each fabric plane is a single-stage, non-blocking, packet-based, store-and-forward switch. To manage fabric congestion, the RP provides centralized Virtual Output Queue (VOQ) arbitration.

The switch fabric is capable of delivering 550 Gbps per line card slot. When five FC cards are installed in the chassis, the switch fabric is 4+1 redundant. When all seven FC cards are installed in the chassis, the switch fabric is 6+1 redundant. The switch fabric is fully redundant, with one copy of the fabric on each FC, and each FC carries enough switching capacity to meet the chassis throughput specifications.

Figure 2-14 shows the FC card.

Figure 2-14 FC Card

 

Figure 2-15 shows the front panel of the FC card. The front panel has a status LED, ejector levers, ejector lever release buttons, and mounting screws.

Figure 2-15 FC Card Front Panel

 

FC Card Front Panel Indicator

The front panel of the FC card has one tri-color LED indicator for system information.

Table 2-6 lists the display definitions of the discrete LED on the FC card front panel.

 

Table 2-6 FC Card LED Display Definitions

Power Fail (FAIL)
  • Green: FC card powered on and FPGA is programmed. Note: Fabric Data Link failure is not detected, so the LED remains green; monitor CLI messages for status.
  • Red: Fault or malfunction in FC card power up or FPGA programming. Note: Once any ejector lever release button is pushed in, the FC card must be physically removed and reinserted (OIR) to restart the FC card. Until the FC card is restarted, the LED is red.
  • Amber: FC card powered on but fabric not active.
  • Off (default after reset): FC card powered off via CLI.

Ethernet Line Cards

Table 2-7 lists the Ethernet line cards available for the Cisco ASR 9000 Series Routers.


Table 2-7 Ethernet Line Cards Available for the ASR 9000 Series Routers

40-port Gigabit Ethernet (40x1GE) line card: SFP (1)

8-port 10-Gigabit Ethernet (8x10GE) 2:1 oversubscribed line card: XFP (2)

4-port 10-Gigabit Ethernet (4x10GE) line card: XFP

8-port 10-Gigabit Ethernet (8x10GE) 80G line rate card: XFP

2-port 10-Gigabit Ethernet plus 20-port Gigabit Ethernet (2x10GE + 20x1GE) combination line card: XFP for 10GE ports; SFP for 1GE ports

16-port 10-Gigabit Ethernet (16x10GE) oversubscribed line card: SFP+ (3)

24-port 10-GE DX line card, Packet Transport Optimized: SFP+

24-port 10-GE DX line card, Service Edge Optimized: SFP+

36-port 10-GE DX line card, Packet Transport Optimized: SFP+

36-port 10-GE DX line card, Service Edge Optimized: SFP+

2-port 100-GE DX line card, Packet Transport Optimized: CFP (4)

2-port 100-GE DX line card, Service Edge Optimized: CFP

1-port 100-GE DX line card, Packet Transport Optimized: CFP

1-port 100-GE DX line card, Service Edge Optimized: CFP

80-Gbps modular line card, Packet Transport Optimized: N/A

80-Gbps modular line card, Service Edge Optimized: N/A

160-Gbps modular line card, Packet Transport Optimized: N/A

160-Gbps modular line card, Service Edge Optimized: N/A

20-port GE Modular Port Adapter (MPA): SFP

8-port 10-GE MPA: SFP+

4-port 10-GE MPA: XFP

2-port 10-GE MPA: XFP

2-port 40-GE MPA: QSFP+ (5)

1-port 40-GE MPA: QSFP+ (5)

1. SFP = Gigabit Ethernet small form-factor pluggable transceiver module

2. XFP = 10-Gigabit Ethernet small form-factor pluggable transceiver module

3. SFP+ = 10-Gigabit Ethernet small form-factor pluggable transceiver module

4. CFP = 100-Gigabit Ethernet small form-factor pluggable transceiver module

5. QSFP+ = 40-Gigabit Ethernet quad small form-factor pluggable transceiver module

Functional Description

Ethernet line cards for the Cisco ASR 9000 Series Routers provide line-rate forwarding throughput for packets as small as 64 bytes. The small form-factor pluggable (SFP, SFP+, QSFP+, XFP, or CFP) transceiver module ports are polled periodically to track state changes and optical monitor values. Packet features are implemented within network processor unit (NPU) ASICs (see Figure 2-16).

Figure 2-16 General Line Card Data Plane Block Diagram

 

Most of the line cards have four NPUs per card (the 80-G line rate card has eight). The 2-port 100GE DX line card has eight NPUs per card, while the 80-Gbps modular line card, the 160-Gbps modular line card, and the modular port adapters (MPAs) they support have four NPUs per card. There are two data paths from the NPUs. The primary path is to a bridge FPGA, which manipulates the header and performs interface conversion, and then to the fabric interface ASIC, where packets are queued using VOQ and sent to the backplane toward the RSP/RP fabric. This path handles all main data, as well as control data routed to the RSP/RP card CPU. The second path is to the local CPU through a switched Gigabit Ethernet link. This link is used to process control data routed to the line card CPU or packets sent to the RSP/RP card through the fabric link.

The backplane Gigabit Ethernet links, one to each RSP/RP card, are used primarily for control plane functions such as application image download, system configuration data from the IOS XR software, statistics gathering, and line card power-up and reset control.

A CAN bus controller (CBC) supervises power supply operation and power-on reset functions. The CBC local 3.3 V regulator uses 10 V from the backplane to be operational at boot up. It then controls a power sequencer to control the power-up of the rest of the circuits on the card.

Each NPU can handle a total of approximately 25 to 30 million packets per second, accounting for ingress and egress, with a simple configuration. The more packet processing features are enabled, the fewer packets per second the pipeline can process. This corresponds to up to 15 Gbps of bidirectional packet processing capability per NPU. The minimum packet size is 64 bytes, and the maximum packet size from the external interface is 9 KB (9216 bytes). The NPU can handle frames up to 16 KB, and the bridge FPGA and fabric interface chip are designed to handle a frame size of 10 KB.
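As a rough sanity check of those figures, dividing 15 Gbps across minimum-size 64-byte packets (ignoring Ethernet preamble and interframe gap, so this is an upper bound) gives about 29.3 million packets per second, consistent with the 25 to 30 Mpps range:

```python
def max_pps(rate_gbps, frame_bytes):
    # Packets per second at a given rate and frame size, ignoring
    # Ethernet preamble and interframe gap (a simplification).
    return rate_gbps * 1e9 / (frame_bytes * 8)

mpps = max_pps(15, 64) / 1e6  # roughly 29.3 million packets per second
```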

Packet streams are processed by the NPUs and are routed either locally over the Gigabit Ethernet link to the local CPU or to the RSP/RP fabric card through two bridge FPGAs and the fabric interface chip. The total bandwidth of the path from the four NPUs to the two bridge FPGAs is 60 Gbps, as is the total bandwidth from the two bridge FPGAs to the fabric interface chip. The total bandwidth from the fabric interface chip to the backplane is 46 Gbps, redundant; the fabric interface chip connects through four 23-Gbps links to the backplane.

Each NPU can handle up to 15 Gbps of line rate traffic (depending on the packet size and processing requirements). The line cards can handle many different Ethernet protocols to provide Layer 2/Layer 3 switching. Each NPU can handle 30 Gbps of line rate data in a fully subscribed configuration. All switching between ports is handled on the RSP/RP card, which is connected through the backplane to all line cards. VOQ is implemented in the fabric interface chip both on the line cards and on the RSP/RP card, which assures that all ingress data paths have equal access to their egress data ports.

Although the fabric interface ASIC provides 80-Gbps of fabric bandwidth over the backplane, only up to 40-Gbps of usable data flows over the interface; with the added overhead traffic, the total is up to 46-Gbps.

40-Port Gigabit Ethernet (40x1GE) Line Card

The 40-port Gigabit Ethernet (40x1GE) line card has 40 SFP ports that provide 40 Gigabit Ethernet interfaces through SGMII connections to four NPUs. The 40 SFP ports are organized into four blocks of 10 ports, and each block connects to one NPU through an SGMII serial bus interface.

The 40x1GE line card is available in base, extended, and low-queue versions. All versions are functionally equivalent, with the extended version of the line card providing typically twice the service scale of the base line card.

Figure 2-17 shows a block diagram for the 40x1GE line card, and Figure 2-18 shows the front panel connectors and indicators.

Figure 2-17 40-Port Gigabit Ethernet (40x1GE) Line Card Block Diagram

 

Figure 2-18 40-Port Gigabit Ethernet (40x1GE) Line Card Front Panel

 

 

1. Ejector lever (one of two)
2. Port 0 SFP cage
3. Port Status LED (one per port)
4. Port 38 SFP cage
5. Line Card Status LED
6. Port 39 SFP cage
7. Port 1 SFP cage
8. Captive installation screw (one of two)

8-Port 10-Gigabit Ethernet (8x10GE) 2:1 Oversubscribed Line Card

The 8-port 10-Gigabit Ethernet (8x10GE) 2:1 oversubscribed line card has eight 10-Gigabit Ethernet, oversubscribed, XFP module ports. Two 10 Gigabit Ethernet ports connect to XAUI interfaces on each of the four NPUs.

The 8x10GE 2:1 oversubscribed line card is available in base, extended, and low-queue versions. All versions are functionally equivalent, with the extended version of the line card providing typically twice the service scale of the base line card.

Figure 2-19 shows the block diagram for the 8x10GE 2:1 oversubscribed line card, and Figure 2-20 shows the front panel connectors and indicators.

Figure 2-19 8-Port 10-Gigabit Ethernet (8x10GE) 2:1 Oversubscribed Line Card Block Diagram

 

Figure 2-20 8-Port 10-Gigabit Ethernet (8x10GE) 2:1 Oversubscribed Line Card Front Panel

 

 

1. Ejector lever (one of two)
2. Port 0 XFP cage
3. Port Status LED (one per port)
4. Port 7 XFP cage
5. Line Card Status LED
6. Captive installation screw (one of two)

4-Port 10-Gigabit Ethernet (4x10GE) Line Card

The 4-port 10-Gigabit Ethernet (4x10GE) line card has four 10-Gigabit Ethernet XFP module ports. One 10-Gigabit Ethernet port connects to an XAUI interface on each of the four NPUs.

The 4x10GE line card is available in base, extended, and low-queue versions. All versions are functionally equivalent, with the extended version of the line card providing typically twice the service scale of the base line card.

Figure 2-21 shows the block diagram for the 4x10GE Line card, and Figure 2-22 shows the front panel connectors and indicators.

Figure 2-21 4-Port 10-Gigabit Ethernet (4x10GE) Line Card Block Diagram

 

Figure 2-22 4-Port 10-Gigabit Ethernet (4x10GE) Line Card Front Panel

 

 

1. Ejector lever (one of two)
2. Port 0 XFP cage
3. Port Status LED (one per port)
4. Port 3 XFP cage
5. Line Card Status LED
6. Captive installation screw (one of two)

8-port 10-Gigabit Ethernet (8x10GE) 80-Gbps Line Rate Card

The 8-port 10-Gigabit Ethernet (8x10GE) 80-Gbps line rate card has eight 10-Gigabit Ethernet XFP module ports. One 10-Gigabit Ethernet port connects to an XAUI interface on each of the eight NPUs. The 8x10GE 80-Gbps line rate card supports WAN PHY and OTN modes as well as the default LAN mode.

The 8x10GE 80-Gbps line rate card is available in base, extended, and low-queue versions. All versions are functionally equivalent, with the extended version of the line card providing typically twice the service scale of the base line card.

Figure 2-23 shows the block diagram for the 8x10GE 80-G line rate card, and Figure 2-24 shows the front panel connectors and indicators.

Figure 2-23 8-Port 10-Gigabit Ethernet (8x10GE) 80-Gbps Line Rate Card Block Diagram

 

Figure 2-24 8-Port 10-Gigabit Ethernet (8x10GE) 80-Gbps Line Rate Card Front Panel

 

 

1. Ejector lever (one of two)
2. Port Status LED (one per port)
3. Port 0 XFP cage
4. Port 7 XFP cage
5. Line Card Status LED
6. Captive installation screw (one of two)

2-Port 10-Gigabit Ethernet + 20-port 1-Gigabit Ethernet (2x10GE + 20x1GE) Combination Line Card

The 2-port 10-Gigabit Ethernet + 20-port 1-Gigabit Ethernet (2x10GE + 20x1GE) combination line card has two 10-Gigabit Ethernet XFP module ports and 20 Gigabit Ethernet SFP module ports. Each port (XFP or SFP) connects to an XAUI interface on one of the four NPUs. The 2x10GE + 20x1GE combination line card supports WAN PHY and OTN modes as well as the default LAN mode.

The 2x10GE + 20x1GE combination line card is available in base, extended, and low-queue versions. All versions are functionally equivalent, with the extended version of the line card providing typically twice the service scale of the base line card.

Figure 2-25 shows the block diagram for the 2x10GE + 20x1GE combination line card, and Figure 2-26 shows the front panel connectors and indicators.

Figure 2-25 2-Port 10-Gigabit Ethernet + 20-Port Gigabit Ethernet (2x10GE + 20x1GE) Combination Line Card Block Diagram

 

Figure 2-26 2-port 10-Gigabit Ethernet + 20-Port 1-Gigabit Ethernet (2x10GE + 20x1GE) Combination Line Card Front Panel

 

 

1. Ejector lever (one of two)
2. 10GE Port 0 XFP cage
3. XFP Port Status LED (one per XFP port)
4. 1GE Port 0 SFP cage
5. SFP Port Status LED (one per SFP port)
6. 1GE Port 18 SFP cage
7. Line Card Status LED
8. 1GE Port 19 SFP cage
9. 1GE Port 1 SFP cage
10. Captive installation screw (one of two)

16-port 10-Gigabit Ethernet (16x10GE) Oversubscribed Line Card

The 16-port 10-Gigabit Ethernet (16x10GE) oversubscribed line card has sixteen 10-Gigabit Ethernet, oversubscribed, SFP+ (10-Gigabit Ethernet SFP) module ports. Two 10-Gigabit Ethernet ports connect to XAUI interfaces on each of the eight NPUs.

The 16x10GE oversubscribed line card is available in a base version.

Figure 2-27 shows the block diagram for the 16x10GE oversubscribed line card, and Figure 2-28 shows the front panel connectors and indicators.

Figure 2-27 16x10GE Oversubscribed Line Card Block Diagram

 

Figure 2-28 16-Port 10-Gigabit Ethernet (16x10GE) Oversubscribed Line Card Front Panel

 

 

1. Ejector lever (one of two)
2. Port 0 SFP+ cage
3. Port Status LED (one per port)
4. Port 8 SFP+ cage
5. Line Card Status LED
6. Port 15 SFP+ cage
7. Port 7 SFP+ cage
8. Captive installation screw (one of two)

24-Port 10-Gigabit Ethernet Line Card

The 24-port 10-Gigabit Ethernet line card provides two stacked 2x6 cage assemblies for SFP+ Ethernet optical interface modules. The 24 SFP+ modules operate at a rate of 10-Gbps.

With two RSP cards installed in the router, the 24-port 10-Gigabit Ethernet line card runs at line rate.

With a single RSP card installed in the router, the 24-port 10-Gigabit Ethernet line card is a 220-Gbps line rate card.

The 24-port 10-Gigabit Ethernet line card is available in either an -SE (Service Edge Optimized) or -TR (Packet Transport Optimized) version.

Each SFP+ cage on the 24-port 10-Gigabit Ethernet line card has an adjacent Link LED visible on the front panel. The Link LED indicates the status of the associated SFP+ port.

Figure 2-29 shows the front panel and connectors of the 24-port 10-Gigabit Ethernet line card.

Figure 2-29 24-Port 10-Gigabit Ethernet Line Card

 

Figure 2-30 24-port 10-Gigabit Ethernet (24x10GE) Line Card Front Panel

 

 

1. Ejector lever (one of two)
2. Captive installation screw (one of two)
3. Port 0 SFP+ cage
4. Port 11 SFP+ cage
5. Port 12 SFP+ cage
6. Port 23 SFP+ cage
7. Line Card Status LED

36-port 10-Gigabit Ethernet Line Card

The 36-port 10-Gigabit Ethernet line card provides three stacked 2x6 cage assemblies for SFP+ Ethernet optical interface modules. The 36 SFP+ modules operate at a rate of 10-Gbps.

The card consists of two boards: a motherboard and a daughter board. Major components on the motherboard include two network processors, a CPU, and ASICs. Major components on the daughter board include four network processors, two ASICs, six hex PHYs, and three 2x6 SFP+ cages.

With two RP cards installed in the Cisco ASR 9922 Router, the 36-port 10-Gigabit Ethernet line card runs at line rate. With a single RP card installed in the Cisco ASR 9922 Router, the 36-port 10-Gigabit Ethernet line card is a 220-Gbps line rate card.

The 36-port 10-Gigabit Ethernet line card is available in either an -SE (Service Edge Optimized) or -TR (Packet Transport Optimized) version. Both versions are functionally equivalent but vary in configuration scale and buffer capacity.

Figure 2-31 shows the front panel connectors and indicators of the 36-port 10-GE line card.

Figure 2-31 36-Port 10-Gigabit Ethernet (36x10GE) Line Card Front Panel

 

 

1. Ejector lever (one of two)
2. Captive installation screw (one of two)
3. Port 0 SFP+ cage
4. Port 11 SFP+ cage
5. Port 12 SFP+ cage
6. Port 23 SFP+ cage
7. Port 24 SFP+ cage
8. Port 35 SFP+ cage
9. Line Card Status LED

2-port 100-Gigabit Ethernet Line Card

The 2-port 100-GE line card provides two CFP cages for CFP Ethernet optical interface modules that operate at a rate of 100-Gbps.

The two CFP modules can be 100-Gigabit Ethernet multimode connections.

The 2-port 100-GE line card is available in either an -SE (Service Edge Optimized) or -TR (Packet Transport Optimized) version. Both versions are functionally equivalent, but vary in configuration scale and buffer capacity.

Each CFP cage on the 2-port 100-GE line card has an adjacent Link LED visible on the front panel. The Link LED indicates the status of the associated CFP port.

Figure 2-32 shows the front panel and connectors of the 2-port 100-GE line card.

Figure 2-32 2-Port 100-Gigabit Ethernet (2x100GE) Line Card Front Panel

 

 

1. Ejector lever (one of two)
2. Captive installation screw (one of two)
3. 100-GE CFP connector (one of two)
4. 100-GE CFP connector (two of two)
5. Line Card Status LED

1-Port 100-Gigabit Ethernet Line Card

The 1-port 100-GE line card provides one CFP cage for a CFP Ethernet optical interface module that operates at a rate of 100-Gbps. The CFP module can be a 100-Gigabit Ethernet multimode connection.

The 1-port 100-GE line card is available in either an -SE (Service Edge Optimized) or -TR (Packet Transport Optimized) version. Both versions are functionally equivalent, but vary in configuration scale and buffer capacity.

The CFP cage has an adjacent Link LED visible on the front panel. The Link LED indicates the status of the CFP port.

Figure 2-33 shows the front panel of the 1-port 100-GE line card.

Figure 2-33 1-Port 100-Gigabit Ethernet (1x100GE) Line Card Front Panel

 

1. Ejector lever (one of two)
2. Captive installation screw (one of two)
3. 100-GE Port
4. Line Card Status LED

Modular Line Cards

The modular line card is available in a two network processing unit (NPU) version with 80-Gbps throughput and a four-NPU version with 160-Gbps throughput. Each is available in either a Service Edge Optimized (-SE) or Packet Transport Optimized (-TR) version. The -SE and -TR versions are functionally equivalent but vary in configuration scale and buffer capacity.

Figure 2-34 shows a modular line card with a 20-port Gigabit Ethernet modular port adapter (MPA) installed in the lower bay. As shown in Figure 2-34, Bay 0 is the “upper” or “left” bay, and Bay 1 is the “lower” or “right” bay.

Figure 2-34 Modular Line Card

 

 

The MPA has Active/Link (A/L) LEDs visible on the front panel. Each A/L LED shows the status of both the port and the link: green means the port is enabled and the link is up; amber means the port is enabled but the link is down; off means the port is not enabled and the link is down.
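
The A/L LED states described above amount to a simple two-input mapping, which can be captured in a small helper. This is illustrative only; `al_led_color` is a hypothetical name, not a Cisco software API.

```python
# Illustrative decode of the MPA Active/Link (A/L) LED states described
# above. The function name is hypothetical, not a Cisco software API.

def al_led_color(port_enabled: bool, link_up: bool) -> str:
    if not port_enabled:
        return "off"          # port not enabled, link down
    return "green" if link_up else "amber"

print(al_led_color(True, True))    # green: port enabled, link up
print(al_led_color(True, False))   # amber: port enabled, link down
print(al_led_color(False, False))  # off: port not enabled
```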

The modular line card provides two bays that support the following MPAs:

  • 20-port GE MPA
  • 8-port 10-GE MPA
  • 4-port 10-GE MPA
  • 2-port 10-GE MPA
  • 2-port 40-GE MPA
  • 1-port 40-GE MPA

20-port Gigabit Ethernet Modular Port Adapter

The 20-port Gigabit Ethernet MPA provides 10 double-stacked SFP cages (20 ports total) that support either fiber-optic or copper Gigabit Ethernet transceivers. It also supports copper SFP modules at 10/100/1000 Mbps speeds.

Each SFP cage on the Gigabit Ethernet MPA has an adjacent A/L LED visible on the front panel. The A/L LED indicates the status of the associated SFP port.

Figure 2-35 shows an example of the 20-port Gigabit Ethernet MPA.

Figure 2-35 20-Port Gigabit Ethernet MPA

 

8-port 10-Gigabit Ethernet Modular Port Adapter

The 8-Port 10-Gigabit Ethernet modular port adapter provides eight cages for SFP+ Ethernet optical interface modules that operate at a rate of 10-Gbps.

The 8-Port 10-Gigabit Ethernet modular port adapter has the following guidelines and limitations:

  • The 8-Port 10-Gigabit Ethernet modular port adapter is supported only on the 160-Gbps modular line card (A9K-MOD160-TR and A9K-MOD160-SE).
  • The 8-Port 10-Gigabit Ethernet modular port adapter is not supported on the 80-Gbps modular line card (A9K-MOD80-TR and A9K-MOD80-SE).
  • The 8-Port 10-Gigabit Ethernet modular port adapter is not supported on the Cisco ASR 9001 Router.

Each SFP+ cage on the 8-Port 10-Gigabit Ethernet modular port adapter has an adjacent A/L (Active/Link) LED visible on the front panel. The A/L (Active/Link) LED indicates the status of the associated SFP+ port.

Figure 2-36 shows an example of the 8-Port 10-Gigabit Ethernet MPA.

Figure 2-36 8-Port 10-Gigabit Ethernet MPA

 

4-Port 10-Gigabit Ethernet Modular Port Adapter

The 4-Port 10-Gigabit Ethernet MPA provides four cages for XFP Ethernet optical interface modules that operate at a rate of 10-Gbps. The four XFP modules can be 10-Gigabit Ethernet multimode connections.

Each XFP cage on the 4-Port 10-Gigabit Ethernet MPA has an adjacent A/L LED visible on the front panel. The A/L LED indicates the status of the associated XFP port.

Figure 2-37 shows an example of the 4-Port 10-Gigabit Ethernet MPA.

Figure 2-37 4-Port 10-Gigabit Ethernet MPA

 

2-port 10-Gigabit Ethernet Modular Port Adapter

The 2-Port 10-Gigabit Ethernet MPA provides two cages for XFP Ethernet optical interface modules that operate at a rate of 10-Gbps. The two XFP modules can be 10-Gigabit Ethernet multimode connections.

Each XFP cage on the 2-Port 10-Gigabit Ethernet MPA has an adjacent A/L LED visible on the front panel. The A/L LED indicates the status of the associated XFP port.

Figure 2-38 shows an example of the 2-Port 10-Gigabit Ethernet MPA.

Figure 2-38 2-Port 10-Gigabit Ethernet MPA

 

2-Port 40-Gigabit Ethernet Modular Port Adapter

The 2-Port 40-Gigabit Ethernet MPA provides two cages for QSFP+ Ethernet optical interface modules that operate at a rate of 40 Gbps. The two QSFP+ modules can be 40-Gigabit Ethernet multimode or single mode connections.

Each QSFP+ cage on the 2-Port 40-Gigabit Ethernet MPA has an adjacent A/L LED visible on the front panel. The A/L LED indicates the status of the associated QSFP+ port.

Figure 2-39 shows an example of the 2-Port 40-Gigabit Ethernet MPA.

Figure 2-39 2-Port 40-Gigabit Ethernet MPA

 

1-Port 40-Gigabit Ethernet Modular Port Adapter

The 1-Port 40-Gigabit Ethernet modular port adapter provides a cage for a QSFP+ Ethernet optical interface module that operates at a rate of 40-Gbps. The QSFP+ module can support either a 40-Gigabit Ethernet multimode connection or a 40-Gigabit Ethernet single mode connection.

Each QSFP+ cage on the 1-Port 40-Gigabit Ethernet modular port adapter has an adjacent A/L (Active/Link) LED visible on the front panel. The A/L LED indicates the status of the associated QSFP+ port.

Figure 2-40 shows an example of the 1-Port 40-Gigabit Ethernet modular port adapter.

Figure 2-40 1-Port 40-Gigabit Ethernet Modular Port Adapter

 

Power System Functional Description

The Cisco ASR 9000 Series Routers can be powered from an AC or a DC power source. The power system is based on a distributed power architecture centered on a –54 VDC printed circuit power bus on the system backplane.

The –54 VDC system backplane power bus can be sourced from one of two options:

  • AC systems—AC/DC bulk power supply tray connected to the user’s 200 to 240 V +/- 10 percent (180 to 264 VAC) source.
  • DC systems—DC/DC bulk power supply tray connected to the user’s Central Office DC battery source (–54 VDC nominal).

The system backplane distributes DC power to each card and the fan trays. Each card has on-board DC-DC converters that convert the –54 VDC distribution bus voltage to the voltages required by that card.

The power system has single-point grounding on the –54 VDC Return; that is, the –54 VDC Return is grounded to the chassis ground on the backplane only. In the Cisco ASR 9922 Router and Cisco ASR 9912 Router, the internal –54 VDC power distribution is isolated from the central office source by the transformers inside the power modules, and the internal distribution bus likewise has single-point grounding on the –54 VDC Return.

All field replaceable modules of the power system are designed for Online Insertion and Removal (OIR), so they can be installed or removed without causing interruption to system operation.

Figure 2-41 and Figure 2-42 show block diagrams of the ASR 9010 Router AC power system with version 1 and version 2 power systems. Figure 2-43 and Figure 2-44 show block diagrams of the ASR 9010 Router DC power system with version 1 and version 2 power systems.


Note The Cisco ASR 9000 Series Routers have two available DC version 1 power modules, a 2100 W module and a 1500 W module. Both types of power modules can be used in a single chassis. The ASR 9000 Series Routers have one available DC version 2 power module (2100 W).


Figure 2-41 Cisco ASR 9010 Router AC Power System Block Diagram—Version 1 Power System

 

Figure 2-42 Cisco ASR 9010 Router AC Power System Block Diagram—Version 2 Power System

 

Figure 2-43 Cisco ASR 9010 Router DC Power System Block Diagram—Version 1 Power System

 

Figure 2-44 Cisco ASR 9010 Router DC Power System Block Diagram—Version 2 Power System

 

Figure 2-45 and Figure 2-46 show block diagrams of the Cisco ASR 9006 Router AC power system with version 1 and version 2 power systems. Figure 2-47 and Figure 2-48 show block diagrams of the Cisco ASR 9006 Router DC power system with version 1 and version 2 power systems.

Figure 2-45 Cisco ASR 9006 Router AC Power System Block Diagram—Version 1 Power System

 

Figure 2-46 Cisco ASR 9006 Router AC Power System Block Diagram—Version 2 Power System

 

Figure 2-47 Cisco ASR 9006 Router DC Power System Block Diagram—Version 1 Power System

 

Figure 2-48 Cisco ASR 9006 Router DC Power System Block Diagram—Version 2 Power System

 

Figure 2-49 and Figure 2-50 show block diagrams of the Cisco ASR 9904 Router with the AC and DC version 2 power system.

Figure 2-49 Cisco ASR 9904 Router AC Power System Block Diagram—Version 2 Power System

 

Figure 2-50 Cisco ASR 9904 Router DC Power System Block Diagram—Version 2 Power System

 

Figure 2-51 and Figure 2-52 show block diagrams of the Cisco ASR 9922 Router with AC and DC version 2 power systems.

Figure 2-51 Cisco ASR 9922 Router AC Power System Block Diagram—Version 2 Power System

 

Figure 2-52 Cisco ASR 9922 Router DC Power System Block Diagram—Version 2 Power System

 

Power Modules

Multiple AC/DC power modules can be installed in each AC/DC power tray.

Figure 2-53 shows the version 1 power module, and Figure 2-54 shows the version 2 power module.

Figure 2-53 Version 1 Power Module

 

 

1. Door latch
2. Door and ejector lever
3. LED indicators

Figure 2-54 Version 2 Power Module

 

Power Module Status Indicators

Figure 2-55 shows the status indicators for the version 1 power module and Figure 2-56 shows the status indicators for the version 2 power module. The indicator definitions follow the two figures.

Figure 2-55 Version 1 Power Module Status Indicators

 

Figure 2-56 Version 2 Power Module Status Indicators

 

 

1. Input LED
  • ON continuously when the input voltage is present and within the correct range.
  • BLINKING when the input voltage is out of the acceptable range.
  • OFF when no input voltage is present.

2. Output LED
  • ON when the power module output voltage is present.
  • BLINKING when the power module is in a power limit or overcurrent condition.

3. Fault LED
  • ON to indicate that a power supply failure has occurred.

System Power Redundancy

Both the AC and DC power systems have system power redundancy depending on the chassis configuration. Each tray can house up to four modules and can be configured for multiple power configurations. For more information about power system redundancy, see the “Power Supply Redundancy” section.

AC Power Trays

The AC power tray provides 20-A UL/CSA-rated, 16-A IEC-rated AC receptacles. The version 1 receptacle has a bail lock retention bracket to retain the power cord. The version 2 receptacle has a clamp mechanism with a screw that can be tightened to retain the power cord. DC output power from the AC power tray is connected to the router by two power blades that mate with the power bus on the backplane. System communication is through an I2C cable from the backplane.

Figure 2-57 shows the back of the version 1 AC power tray and Figure 2-58 shows the back of the version 2 power tray.

Figure 2-57 Version 1 AC Power Tray Rear Panel

 

 

1. DC output power blades
2. IEC input receptacles with retention brackets
3. Power switch
4. I2C cable from backplane

Figure 2-58 Version 2 AC Power tray Rear Panel

 

 

1. DC output power blades
2. IEC input receptacles with retention brackets
3. I2C cable from backplane

AC Tray Power Switch

Each AC power tray provides a single-pole, single-throw power switch that simultaneously powers on, or places in standby, all power modules installed in the tray. When the power modules are turned off, only the DC output power is turned off; the power module fans and LEDs still function. The power switch for the version 1 power tray is on the back of the tray, as shown in Figure 2-57. The power switch for the version 2 power tray is on the front of the tray, as shown in Figure 2-59.

Figure 2-59 Location of AC Power Switch - Version 2 Power System

 

 

1. Power switch

AC Input Voltage Range

Each AC module accepts an individual single-phase 220-VAC 20-A source. Table A-17 shows the limits of the specified AC input voltage. The voltages given are for a single-phase power source.

DC Output Levels

The output of each module is within the tolerance specifications (see Table A-19) under all combinations of input voltage variation, load variation, and environmental conditions. The combined total module output power does not exceed 3000 W.

The AC tray output capacity depends on how many modules are installed. Maximum output current is the maximum module current multiplied by the number of modules. For example, with three power supply modules installed, multiply the maximum module current by three.
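
The capacity arithmetic above can be sketched as follows. This assumes the 3000 W per-module figure quoted above and the nominal –54 VDC bus; both are taken from the text, but the helper name is hypothetical.

```python
# Sketch of the tray-capacity arithmetic above: maximum output current
# is the per-module current times the number of installed modules.
# Assumes the 3000 W per-module figure and the nominal -54 VDC bus.

BUS_VOLTAGE = 54.0        # magnitude of the nominal -54 VDC bus, volts
MODULE_POWER_W = 3000.0   # per-module output power from the text

def tray_max_current(num_modules: int) -> float:
    module_current = MODULE_POWER_W / BUS_VOLTAGE  # about 55.6 A
    return module_current * num_modules

# Three installed modules:
print(f"{tray_max_current(3):.1f} A")  # about 166.7 A
```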

AC System Operation

This section describes the normal sequence of events for system AC power up and power down.

Power Up

1. AC power is applied to the power tray by toggling the user’s AC circuit breakers to the ON position.

2. AC/DC power supplies are enabled by toggling the Power On/Off logic switch located in each of the power trays to the ON position.

3. AC/DC modules in the power trays provide –54 VDC output within six seconds after the AC is applied.

4. The soft-start circuit in the logic cards takes 100 milliseconds to charge the input capacitor of the on-board DC/DC converters.

5. The card power controller MCU enables the power sequencing of the DC/DC converters and points of load (POLs) through direct communication using the PMBus interface to digital controllers.

6. The output of the DC/DC converters ramps up to regulation within 50 milliseconds maximum after the program parameters are downloaded to each POL and the On/Off control pin has been asserted.
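
The power-up steps above can be sketched as a simple timed walk through the quoted worst-case delays. This is a hypothetical model for illustration only; the actual sequencing is performed by the card power controller MCU over PMBus, not by software like this.

```python
import time

# Hypothetical model of the AC power-up sequence above. The real
# sequencing runs on the card power controller MCU over PMBus; the
# timings are the worst-case figures quoted in the text.

POWER_UP_STEPS = [
    ("AC applied; tray modules provide -54 VDC output", 6.0),   # within 6 s
    ("soft-start charges DC/DC input capacitors", 0.1),         # 100 ms
    ("MCU programs POLs and asserts On/Off over PMBus", 0.0),
    ("DC/DC outputs ramp into regulation", 0.05),               # 50 ms max
]

def simulate_power_up(delay_scale: float = 0.0) -> float:
    """Walk the sequence and return the cumulative worst-case time."""
    elapsed = 0.0
    for step, max_seconds in POWER_UP_STEPS:
        time.sleep(max_seconds * delay_scale)  # scale 0: no real waiting
        elapsed += max_seconds
        print(f"t <= {elapsed:5.2f} s  {step}")
    return elapsed

simulate_power_up()  # worst case: about 6.15 s to regulated outputs
```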

Power Down

1. Power conversion is disabled by toggling the Power On/Off logic switch to the OFF position or unplugging the power cords from the AC power source.

2. The AC/DC modules in the power trays stay within regulation for a minimum of 15 milliseconds after the AC power is removed.

3. The –54 V supply to the logic cards ramps down to –36 V in a minimum of 15 milliseconds from the time the AC/DC modules start ramping down from their minimum regulation level.

4. The DC/DC converters turn off immediately after the On/Off control pin is deasserted.

5. The output of the DC/DC converters stays in regulation for an additional 0.1 millisecond.

DC Power Trays

The DC power tray (see Figure 2-60) provides two power feed connector banks: A feed and B feed. System communication is through an I2C cable from the backplane.

DC Tray Power Switch

Each DC power tray provides a single-pole, single-throw power switch to power on and off all of the power modules installed in the tray simultaneously. When the power modules are turned off, only the DC output power is turned off; the power module fans and LEDs still function. The power switch is on the front panel.

DC Power Tray Rear Panel

Figure 2-60 shows the rear panel of the power tray for the version 1 power system. Figure 2-61 shows the rear panel of the power tray for the version 2 power system.

Figure 2-60 DC Power Tray Rear Panel

 

 

1. DC output power blades
2. “A” feed connectors
3. “B” feed connectors
4. I2C cable from backplane
5. Primary ground
6. Power switch

Figure 2-61 DC Power Tray Rear Panel - Cisco ASR 9006 Router and Cisco ASR 9904 Router with Version 2 Power System

 

 

1. DC output power blades
2. “A” feed connectors
3. “B” feed connectors
4. I2C cable from backplane

DC Power Tray Power Feed Indicator

Figure 2-62 shows the location of the power feed indicators on the rear panel of the DC power tray for the Cisco ASR 9010 Router and Cisco ASR 9006 Router with a version 1 power system. Figure 2-63 shows the location of the power feed indicators on the rear panel of the DC power tray for the Cisco ASR 9006 Router and Cisco ASR 9904 Router with a version 2 power system.

Figure 2-62 DC Power tray Power Feed Indicator —Version 1 Power System

 

 

1. Power feed indicators

Figure 2-63 DC Power tray Power Feed Indicator —Version 2 Power System

 

 

1. Power feed indicators

DC System Operation

This section describes the normal sequence of events for system DC power up and power down.

Power Up

1. DC power is applied to the power tray by toggling the user’s DC circuit breakers to the ON position.

2. DC/DC power supplies are enabled by toggling the Power On/Off logic switch located in each power tray to the ON position.

3. DC/DC power supply modules in the power tray provide –54 VDC output within seven seconds after the DC is applied.

4. The soft-start circuit in the logic cards takes 100 milliseconds to charge the input capacitor of the on-board DC/DC converters.

5. The card power controller MCU enables the power sequencing of the DC/DC converters and POLs through direct communication using a PMBus interface to digital controllers such as the LT7510, or through a digital wrapper such as the LT2978.

6. The output of the DC/DC converters ramps up to regulation within 50 milliseconds maximum after the program parameters are downloaded to each POL and the On/Off control pin has been asserted.

Power Down

1. Power conversion is disabled by toggling the Power On/Off logic switch in the power tray to the OFF position.

2. The DC/DC modules in the power tray stay within regulation for a minimum of 3.5 milliseconds after the Power On/Off logic switch is disabled.

3. The –54 VDC supply to the logic cards ramps down to –36 VDC in a minimum of 3.5 milliseconds from the time the DC/DC modules start ramping down from their minimum regulation level.

4. The DC/DC converters power off immediately after the On/Off control pin is deasserted.

5. The output of the DC/DC converters stays in regulation for an additional 0.1 millisecond.

Cooling System Functional Description

The Cisco ASR 9000 Series Router chassis is cooled by removable fan trays. The fan trays provide full redundancy and maintain the required cooling if a single fan fails.

In the Cisco ASR 9010 Router, the two fan trays are located one above the other below the card cage and are equipped with handles for easy removal.

In the Cisco ASR 9006 Router, the two fan trays are located above the card cage, left of center, and side by side. They are covered by a fan tray door hinged at the bottom, which must be opened before removing the fan trays.

In the Cisco ASR 9904 Router, a single fan tray is located to the left of the card cage accessible from the rear, and is equipped with handles for easy removal.

In the Cisco ASR 9922 Router, the two top fan trays are located between the top and middle cages, while the two bottom fan trays are located between the middle and bottom cages. The two bottom fan trays are inserted upside down compared to the two top fan trays. In the Cisco ASR 9912 Router, the two fan trays are located above the line card cage. Each fan tray holds 12 axial fans and includes a controller that reduces the speed of the fans when the chassis temperature is within limits, thereby reducing the generation of acoustic noise. The fan controller also senses and reports individual fan failures.
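
The fan controller behavior described above (reducing fan speed when the chassis temperature is within limits, and sensing and reporting individual fan failures) can be sketched as a simple control function. This is hypothetical; the thresholds and names are invented, and the actual logic runs in the fan tray controller firmware.

```python
# Hypothetical sketch of the fan-tray controller behavior described
# above: run fans slower when chassis temperature is within limits,
# and report any individual fan that has stopped. The temperature
# threshold and duty-cycle values are invented for illustration.

NORMAL_TEMP_C = 40.0       # assumed "within limits" threshold
QUIET_PCT, FULL_PCT = 60, 100

def control_fans(chassis_temp_c: float, fan_rpms: list):
    """Return (target fan duty %, list of failed fan indexes)."""
    duty = QUIET_PCT if chassis_temp_c <= NORMAL_TEMP_C else FULL_PCT
    failed = [i for i, rpm in enumerate(fan_rpms) if rpm == 0]
    return duty, failed

# 12-fan tray, cool chassis, fan 11 has stopped spinning:
duty, failed = control_fans(35.0, [4000] * 11 + [0])
print(duty, failed)  # 60 [11]
```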

Cooling Path

The Cisco ASR 9010 Router chassis has a front-to-rear cooling path. The inlet is at the bottom front of the chassis, and the exhaust is at the upper rear.

Figure 2-64 shows the cooling path of the Cisco ASR 9010 Router chassis.

Figure 2-64 Cisco ASR 9010 Router Chassis Cooling Path—Side View

 

The Cisco ASR 9006 Router chassis has a side-to-top to rear cooling path. The inlet is at the right side of the chassis, and the exhaust is at the upper rear.

Figure 2-65 shows the cooling path of the Cisco ASR 9006 Router chassis.

Figure 2-65 Cisco ASR 9006 Router Chassis Cooling Path

 

The Cisco ASR 9904 Router has a side-to-side cooling path. The inlet is at the right side of the cage, and the exhaust is at the left side.

Figure 2-66 shows the cooling path of the Cisco ASR 9904 Router chassis.

Figure 2-66 Cisco ASR 9904 Router Chassis Cooling Path

 

The cages of the Cisco ASR 9922 Router chassis have a front-to-rear cooling path. The inlet is at the front of the middle cage, and the exhaust is at the upper and lower rear.

Figure 2-67 and Figure 2-68 show the cooling path of the Cisco ASR 9922 Router chassis.

Figure 2-67 Cisco ASR 9922 Router Chassis Cooling Path—Side View

 

Figure 2-68 Cisco ASR 9912 Router Chassis Cooling Path—Side View

 

Fan Trays

Cisco ASR 9010 Router Fan Trays

The Cisco ASR 9010 Router contains two fan trays for redundancy (see Figure 2-69). Each fan tray has an LED that indicates fan tray status. If a fan tray fails, it is possible to swap a single fan tray assembly while the system is operational. Fan tray removal does not require removal of any cables.

Figure 2-69 Cisco ASR 9010 Router Fan Tray

 

1. Fan tray status LED

  • The fan tray contains 12 axial 120-mm (4.72-in) fans. There is a fan control board at the back end of each tray with a single power/data connector that connects with the backplane.
  • The fan tray aligns through two guide pins inside the chassis, and it is secured by two captive screws. The controller board floats within the fan tray to allow for alignment tolerances.
  • A finger guard is adjacent to the front of most fans to keep fingers away from spinning fan blades during removal of the fan tray.
  • The maximum weight of the fan tray is 13.82 lb (6.29 kg).

Cisco ASR 9006 Router Fan Trays

The Cisco ASR 9006 Router contains two fan trays for redundancy (see Figure 2-70). If a fan tray fails, it is possible to swap a single fan tray assembly while the system is operational. Fan tray removal does not require removal of any cables.


Note Both fan trays are required for normal system operation for the Cisco ASR 9010 Router and Cisco ASR 9006 Router. If both fan trays in the router are pulled out or are not installed, a critical alarm is raised.


Figure 2-70 Cisco ASR 9006 Router Fan Tray

 

  • The fan tray contains six axial 92-mm (3.62-in) fans. There is a fan control board at the back end of each tray with a single power/data connector that connects with the backplane.
  • The fan tray aligns through two guide pins inside the chassis, and is secured by one captive screw. The controller board floats within the fan tray to allow for alignment tolerances.
  • A finger guard is adjacent to the front of most of the fans to keep fingers away from spinning fan blades during removal of the fan tray.
  • The maximum weight of the fan tray is 39.7 lb (18.0 kg).

Cisco ASR 9904 Router Fan Tray

The Cisco ASR 9904 Router contains a single fan tray. If the fan tray fails, it is possible to swap the fan tray assembly while the system is operational. Replace the missing fan tray within 4 minutes.

Figure 2-71 Cisco ASR 9904 Router Fan Tray

 

  • The fan tray contains twelve axial 88-mm (3.46-in) fans. There is a fan control board at the back end of the tray with a single power/data connector that connects with the backplane.
  • The fan tray aligns through two guide pins inside the chassis, and it is secured by one captive screw. The controller board floats within the fan tray to allow for alignment tolerances.
  • A finger guard is adjacent to the front of most of the fans to keep fingers away from spinning fan blades during removal of the fan tray.
  • The maximum weight of the fan tray is 11.0 lb (4.99 kg).

Cisco ASR 9922 Router and Cisco ASR 9912 Router Fan Trays

The Cisco ASR 9922 Router contains four fan trays, and the Cisco ASR 9912 Router contains three fan trays for redundancy. Each fan tray has an LED that indicates its status. If a fan tray fails, it is possible to swap a single fan tray assembly while the system is operational. Fan tray removal does not require removal of any cables.


Note Do not operate the chassis with any of the fan trays completely missing. Replace any missing fan tray within five minutes.


Figure 2-72 Cisco ASR 9922 Router and Cisco ASR 9912 Router Fan Tray

 

1

Fan tray status LED

  • The fan tray contains 12 axial 120-mm (4.72-in) fans. There is a fan control board at the back end of each tray with a single power/data connector that connects with the backplane.
  • The fan tray aligns through two guide pins inside the chassis, and it is secured by two captive screws. The controller board floats within the fan tray to allow for alignment tolerances.
  • A finger guard is adjacent to the front of most fans to keep fingers away from spinning fan blades during removal of the fan tray.
  • The maximum weight of the fan tray is 18.00 lb (8.16 kg).
  • The fan tray width increased from 16.3 inches to 17.3 inches; the overall fan tray depth remains 23 inches. The individual fan current rating increased to 2 A to support higher speeds.

Status Indicators

The fan tray has a Run/Fail status LED on the front panel to indicate fan tray status.

After the fan tray is inserted into the chassis, the LED temporarily lights yellow. During normal operation:

  • The LED lights green to indicate that all fans in the module are operating normally.
  • The LED lights red to indicate a fan failure or another fault in the fan tray module. Possible faults are:

    – A fan has stopped.

    – Fans are running below the speed required to maintain sufficient cooling.

    – The controller card has a fault.
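The status LED behavior above can be sketched as a small decision function. This is a hypothetical illustration only; the function name, fault names, and state representation are assumptions, not the actual fan tray controller firmware.

```python
def led_color(just_inserted, faults):
    """Return the fan tray status LED color for a given state.

    just_inserted -- True immediately after the tray is seated
    faults -- set of detected fault names, e.g. {"fan_stopped"}
    """
    if just_inserted:
        return "yellow"   # transient indication after insertion
    if faults:
        return "red"      # fan stopped, underspeed, or controller fault
    return "green"        # all fans operating normally

print(led_color(False, set()))           # green
print(led_color(False, {"underspeed"}))  # red
```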

Fan Tray Servicing

No cables or fibers need to be moved during installation or removal of the fan trays. Replacing fan trays does not interrupt service.

Slot Fillers

To maintain optimum cooling performance in the chassis and at the slot level, unused slots must be filled with card blanks or flow restrictors. These slot fillers are passive sheet-metal parts; software cannot detect their presence.

Chassis Air Filter

The chassis air filters in the ASR 9000 Series Routers are NEBS compliant. The filter is not serviceable, but it is a field-replaceable unit. Replacing the filter does not interrupt service.

In the Cisco ASR 9010 Router, a chassis air filter is located underneath the fan trays (see Figure 2-73).

Figure 2-73 Cisco ASR 9010 Router Chassis Air Filter

 

In the Cisco ASR 9006 Router, a chassis air filter is located along the right side of the chassis, and is accessible from the rear of the chassis (see Figure 2-74).

Figure 2-74 Cisco ASR 9006 Router Chassis Air Filter

 

 

1

Air filter

2

Thumb screw

In the Cisco ASR 9904 Router, the chassis air filter is located along the right side of the chassis, and is accessible from the rear of the chassis (see Figure 2-75).

Figure 2-75 Cisco ASR 9904 Router Air Filter

 

1

Air filter

2

Thumb screw

The Cisco ASR 9922 Router has three air filters on the middle cage (see Figure 2-76). The center air filter covers the front of the FC cards. The side air filters cover the RP cards.

Figure 2-76 ASR 9922 Router Chassis Air Filters

 

The Cisco ASR 9912 Router has three air filters on the RP/FC card cage (see Figure 2-77). The center air filter covers the front of the FC cards. The side air filters cover the RP cards.

Figure 2-77 Cisco ASR 9912 Router Chassis Air Filters

 

Figure 2-78 shows how to replace the foam media inside the center air filter.

Figure 2-78 Cisco ASR 9922 Router Chassis Center Air Filter

 

 

1

Loosen thumb screws.

2

Rotate and lower inner frame.

3

Remove foam filter media.

Figure 2-79 shows how to replace the foam media inside one of the two side air filters.

Figure 2-79 Cisco ASR 9922 Router Chassis Side Air Filter

 

 

1

Loosen thumb screws

2

Rotate and lower inner frame

3

Remove foam filter media

Speed Control

The cooling system adjusts its speed to compensate for changes in system or external ambient temperatures. To reduce operating noise, the fans have variable speeds. Speed can also vary depending on system configurations that affect total power dissipation: if lower power cards are installed, the system can run the fans at slower speeds; if higher power cards are installed, the system can run them at faster speeds.

Fan speed is managed by the RSP/RP card and the controller card in the fan tray. The RSP/RP monitors card temperatures and sends a fan speed setting to the controller card.

If the failure of a single fan within a module is detected, an alarm is raised and all the other fans in the fan tray go to full speed.

Complete failure of one fan tray causes the remaining fan tray to operate its fans at full speed continuously until a replacement fan tray is installed.
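The control behavior described above can be sketched as follows. The thresholds, the percentage scale, and the function name are illustrative assumptions for this sketch, not the shipping fan speed algorithm.

```python
FULL_SPEED = 100  # percent of maximum speed (illustrative scale)

def fan_speed(max_card_temp_c, fan_fault, peer_tray_present):
    """Compute a fan speed command, as the RSP/RP might.

    Any fan fault in the tray, or loss of the peer tray, forces
    full speed; otherwise speed scales with the hottest card.
    Thresholds here are assumed values for illustration.
    """
    if fan_fault or not peer_tray_present:
        return FULL_SPEED
    if max_card_temp_c <= 25:
        return 40            # lowest speed: quiet, low power
    if max_card_temp_c >= 55:
        return FULL_SPEED
    # linear ramp between the two thresholds
    return 40 + (max_card_temp_c - 25) * 2

print(fan_speed(20, False, True))  # 40
print(fan_speed(40, False, True))  # 70
print(fan_speed(30, True, True))   # 100
```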

Temperature Sensing and Monitoring

Temperature sensors are present on cards to monitor internal temperatures. Line cards and RSP/RP cards have their leading edge (inlet) and hottest spot continuously monitored by temperature sensors. Some cards have additional sensors located near hot components that need monitoring. Some ASICs have internal diodes that can be used to read junction temperatures.

If the ambient air temperature is within the normal operating range, the fans operate at the lowest speed possible to minimize noise and power consumption.

If the air temperature in the card cage rises, fan speed increases to provide additional cooling air to the internal components. If a fan fails, the others increase in speed to compensate.

Fan tray removal triggers environmental alarms and increases the fan speed of the remaining tray to its maximum speed.
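The sensor monitoring described above amounts to classifying the worst reading from a card's sensors against alarm thresholds. The sketch below assumes hypothetical threshold values and names; real thresholds vary per card type and are not stated in this document.

```python
# Hypothetical per-card thresholds, in degrees C; illustrative only.
MINOR, MAJOR, CRITICAL = 55, 65, 75

def thermal_alarm(sensor_readings_c):
    """Classify the worst reading from a card's temperature sensors
    (inlet, hotspot, and any ASIC junction diodes)."""
    worst = max(sensor_readings_c)
    if worst >= CRITICAL:
        return "critical"  # may trigger the system shutdown sequence
    if worst >= MAJOR:
        return "major"
    if worst >= MINOR:
        return "minor"
    return "normal"

print(thermal_alarm([32, 48, 51]))  # normal
print(thermal_alarm([32, 48, 68]))  # major
```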

Servicing

The system is populated with two fan trays for redundancy. If a fan tray failure occurs, it is possible to swap a single fan tray assembly while the system is operational.

Fan tray removal does not require removal of any cables.

Assuming a redundant configuration, removal of a fan tray results in zero packet loss.

System Shutdown

When the system reaches critical operating temperatures, it triggers a system shutdown sequence.

System Management and Configuration

The Cisco IOS XR Software on the ASR 9000 Series Routers provides the system manageability interfaces: CLI, XML, and SNMP.

Cisco IOS XR Software

The ASR 9000 Series Routers run Cisco IOS XR Software and use the manageability architecture of that operating system, which includes CLI, XML, and SNMP. Craft Works Interface (CWI), a graphical craft tool for performance monitoring, is embedded with the Cisco IOS XR Software and can be downloaded through the HTTP protocol. However, the ASR 9000 Series Routers support only a subset of CWI functionality: a user can edit the router configuration file, open Telnet/SSH application windows, and create user-defined applications.

System Management Interfaces

The system management interfaces consist of the CLI, XML, and SNMP protocols. By default, only the CLI on the console is enabled. When the management LAN port is configured, various services, such as Telnet, SSH, and SNMP, can be started and used by external clients. In addition, TFTP and Syslog clients can interact with external servers. CWI can be downloaded and installed on a PC or Solaris workstation.

For information about SNMP, see the “SNMP” section.

All system management interfaces provide fault and physical inventory information.

Command-Line Interface

The CLI supports configuration file upload and download through TFTP. The system supports generation of configuration output without any sensitive information, such as passwords and keys. The ASR 9000 Series Routers support Embedded Fault Manager (TCL-scripted policies) through CLI commands. The system also supports feature consistency between the CLI and SNMP management interfaces.
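The sensitive-information scrubbing mentioned above can be illustrated with a simple filter. The patterns, keywords, and placeholder below are assumptions for illustration, not the actual IOS XR sanitization rules.

```python
import re

# Illustrative patterns for lines that carry secrets in a config dump.
SENSITIVE = re.compile(r"^(\s*(?:password|secret|key)\s+)\S.*$",
                       re.IGNORECASE)

def sanitize(config_text):
    """Replace sensitive values with a <removed> placeholder."""
    out = []
    for line in config_text.splitlines():
        out.append(SENSITIVE.sub(r"\1<removed>", line))
    return "\n".join(out)

cfg = "hostname asr9k\nusername admin\n secret 5 $1$abcd\n"
print(sanitize(cfg))
```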

Craft Works Interface

The system supports CWI, a graphical craft tool for performance monitoring, configuration editing, and configuration rollback. CWI is embedded with Cisco IOS XR software and can be downloaded through the HTTP protocol. A user can use CWI to edit the router configuration file, create user-defined applications, and open Telnet/SSH application windows to provide CLI access.

XML

External (or XML) clients can programmatically access the configuration and operational data of the Cisco ASR 9000 Series Router using XML. The XML support includes retrieval of inventory, interfaces, alarms, and performance data. The system is capable of supporting 15 simultaneous XML/SSH sessions. The system supports alarms and event notifications over XML and also supports bulk PM retrieval and bulk alarms retrieval.

XML clients are provided with the hierarchy and possible contents of the objects that they can include in their XML requests (and can expect in the XML responses), documented in the form of an XML schema.

When the XML agent receives a request, it uses the XML Service Library to parse and process the request. The library forwards the request to the Management Data API (MDA) Client Library, which retrieves data from the SysDB. The data returned to the XML Service Library is encoded as XML responses. The agent then processes and sends the responses back to the client as the response parameter of the invoke method call. The alarm agent uses the same XML Service Library to notify external clients about configuration data changes and alarm conditions.
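The request/response flow above can be illustrated with a minimal client-side sketch. The element names (`Request`, `Get`, `Operational`, `Inventory`) are hypothetical; the actual hierarchy is defined by the IOS XR XML schema mentioned earlier.

```python
import xml.etree.ElementTree as ET

# Build a hypothetical retrieval request for inventory data.
request = ET.Element("Request")
get = ET.SubElement(request, "Get")
ET.SubElement(ET.SubElement(get, "Operational"), "Inventory")
print(ET.tostring(request, encoding="unicode"))

# Parse a (hypothetical) response and extract a value.
response = ("<Response><Inventory><Chassis>ASR-9010</Chassis>"
            "</Inventory></Response>")
print(ET.fromstring(response).findtext("Inventory/Chassis"))  # ASR-9010
```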

SNMP

The SNMP interface allows management stations to retrieve data and to get traps. It does not allow setting anything in the system.

SNMP Agent

In conformance with SMIv2 (Structure of Management Information Version 2) as noted in RFC 2580, the system supports SNMPv1, SNMPv2c, and SNMPv3 interfaces. The system supports feature consistency between the CLI and SNMP management interfaces.

The system is capable of supporting at least 10 SNMP trap destinations. Reliable SNMP Trap/Event handling is supported.

For SNMPv1 and SNMPv2c support, the system supports SNMP View to allow inclusion/exclusion of MIBs for specific community strings. The SNMP interface does not allow the SNMP SET operation.
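The per-community inclusion/exclusion described above amounts to OID-subtree filtering. The sketch below is a simplified model; the community name, view contents, and function name are illustrative assumptions, not the router's actual view configuration.

```python
# Illustrative view table keyed by community string.
VIEWS = {
    "public": {"include": ["1.3.6.1.2.1"],      # MIB-2 subtree only
               "exclude": ["1.3.6.1.2.1.4"]},   # but not the IP group
}

def oid_visible(community, oid):
    """Return True if this community string may read this OID."""
    view = VIEWS.get(community)
    if view is None:
        return False
    def matches(prefixes):
        return any(oid == p or oid.startswith(p + ".") for p in prefixes)
    return matches(view["include"]) and not matches(view["exclude"])

print(oid_visible("public", "1.3.6.1.2.1.1.1.0"))  # True  (sysDescr)
print(oid_visible("public", "1.3.6.1.2.1.4.1.0"))  # False (excluded)
```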

MIBs

The Device Management MIBs supported by the ASR 9000 Series Routers are listed at:

http://cisco.com/public/sw-center/netmgmt/cmtk/mibs.shtml

Online Diagnostics

System run-time diagnostics are used by the Cisco Technical Assistance Center (TAC) or the end user to troubleshoot a field problem and assess the state of a given system.

Some examples of the run-time diagnostics include the following:

  • Monitoring line card to RSP/RP card communication paths
  • Monitoring line card to RSP/RP card data path
  • Monitoring CPU communication with various components on the line cards and RSP/RP cards