A. The 12000 Series Internet Router is available in seven models. This table lists the hardware differences among these models:
| Model | Switch fabric capacity (Gbps) | # of slots | # of switch fabric slots | # of line card slots¹ |
|-------|-------------------------------|------------|--------------------------|-----------------------|
| 12008 | 40  | 8  | 3 SFC, 2 CSC | 7  |
| 12012 | 60  | 12 | 3 SFC, 2 CSC | 11 |
| 12016 | 80² | 16 | 3 SFC, 2 CSC | 15 |
| 12404 | 80  | 4  | 1 board³     | 3  |
| 12406 | 120 | 6  | 3 SFC, 2 CSC | 5  |
| 12410 | 200 | 10 | 5 SFC, 2 CSC | 9  |
| 12416 | 320 | 16 | 3 SFC, 2 CSC | 15 |
1 One slot is taken by the Gigabit Route Processor (GRP). If two GRPs are installed for redundancy purposes, a second slot becomes unavailable for line cards.
2 The Cisco 12016 can be upgraded to a Cisco 12416 using a switch fabric upgrade kit.
3 The 12404 has one board that contains all the Clock and Scheduler Card (CSC) and Switch Fabric Card (SFC) functionalities (functionally equivalent to one CSC and three SFCs).
The GRP can be installed in any slot. On the Cisco 12012, it is recommended that you use slots 0 and 11 for the GRP, because these slots receive less cooling and the GRP dissipates less heat than the other Line Cards (LCs).
A. The 12016 and 12416 use the same chassis. The only difference is the Clock and Scheduler Cards (CSCs) and Switch Fabric Cards (SFCs): the 12016 uses the GSR16/80-CSC and GSR16/80-SFC, while the 12416 uses the GSR16/320-CSC and GSR16/320-SFC. With the new SFCs, the 12416 supports up to 10 Gbps per slot, whereas the 12016 supports up to 2.5 Gbps per slot.
If you have a 12016 and want to upgrade it to a 12416, all you would have to do is replace the GSR16/80-CSC and GSR16/80-SFC with the new GSR16/320-CSC and GSR16/320-SFC.
A. The SFC and CSC provide the physical switch fabric for the system as well as the clocking for the Cisco cells that carry data and control packets among the line cards and route processors.
On the 12008, 12012, and 12016, you must have at least one CSC for the router to run. A single CSC with no SFCs is called a quarter-bandwidth configuration, and it works only with Engine 0 Line Cards (LCs); any other LCs in the system are automatically shut down. If you require LCs other than Engine 0, full bandwidth (three SFCs and one CSC) must be installed in the router. If redundancy is required, a second CSC is necessary. The redundant CSC takes over only if the active CSC or an SFC fails, and it can function as either a CSC or an SFC.
The 12416, 12406, 12410, and 12404 require full bandwidth.
All Cisco 12000 Series Routers have a maximum of three SFCs and two CSCs, except for the 12410, which has five dedicated SFCs and two dedicated CSCs, and the 12404, which has one board that contains all the CSC and SFC functionality. The 12404 therefore offers no switch fabric redundancy.
In the 12008, 12012, 12016, 12406, and 12416, the CSC cards also function as SFCs. That is why, to get a full bandwidth redundant configuration, you only need three SFCs and two CSCs. In the 12410, there are dedicated CSCs and SFCs. To get a full bandwidth redundant configuration, you need two CSCs and five SFCs.
Quarter bandwidth configurations can only be used on the 12008, 12012, and the 12016 if you have nothing but Engine 0 LCs in the chassis. The CSC192 and SFC192, which reside in the 12400 series chassis, do not support quarter bandwidth configurations.
A. Although they use different Switch Fabric Cards (SFCs) and Clock and Scheduler Cards (CSCs), all 12000 Series Internet Routers use the same Gigabit Route Processor (GRP) and Line Cards (LCs). The exception is the LCs based on Engine 4, such as the OC-192 POS and 10xGE, which are supported only in a 124xx chassis with the 320-Gbps switch fabric. For more details, refer to How can I determine what engine card is running in the box?
Q. With the maximum configuration for Switch Fabric Cards (SFCs) and Clock and Scheduler Cards (CSCs), what is the total capacity per slot?
A. Gigabit Route Processors (GRPs) and Line Cards (LCs) are installed from the front of the chassis and plug into a passive backplane. This backplane contains serial lines that interconnect all of the LCs to the switch fabric cards, as well as other connections for power and maintenance functions. Each 2.5 Gbps chassis slot (12008, 12012, 12016) has up to four serial line connections (1.25 Gbps each), one to each of the SFCs, providing a total capacity of 5 Gbps per slot (2.5 Gbps full duplex). The 10 Gbps chassis slots (12404, 12406, 12410, and 12416) use four sets of four serial line connections each, providing each slot with a switching capacity of 20 Gbps full duplex.
Note: In fact, each LC has five serial line connections. The fifth goes to the redundant card and carries the XOR of the data on the other SFCs for error correction. The same applies to the 124xx series.
A. These types of memory exist on the GRP:
Dynamic RAM (DRAM)
DRAM is also referred to as main or processor memory. Both the GRP and the Line Cards (LCs) contain DRAM that enables an onboard processor to run Cisco IOS® Software and store network routing tables. On the GRP, you can configure route memory ranging from the factory default of 128 MB up to the maximum configuration of 512 MB.
The processor on the GRP uses onboard DRAM to perform a variety of important tasks including these:
Running the Cisco IOS Software image
Storing and maintaining the network routing tables
Loading the Cisco IOS Software image into installed LCs
Formatting and distributing updated Cisco Express Forwarding tables (Forwarding Information Base (FIB) and adjacencies tables) to installed LCs
Monitoring temperature and voltage alarm conditions of installed cards and shutting them down when necessary
Supporting a console port that enables you to configure the router using an attached terminal
Participating in network routing protocols (together with other routers in the networking environment) to update the router's internal routing tables.
Note: 512 MB route memory configurations on the GRP are only compatible with Product Number GRP-B=. In addition, Cisco IOS Software Releases 12.0(19)S, 12.0(19)ST, or later is required and ROM Monitor (ROMmon) Release 11.2 (181) or later is also required.
Static Random Access Memory (SRAM)
SRAM provides secondary CPU cache memory. The standard GRP configuration is 512 KB. Its principal function is to act as a staging area for routing table update information to and from the LCs. SRAM is not field upgradable, which means you cannot upgrade or replace it.
GRP Flash Memory
Both the onboard and PCMCIA card-based Flash memory allow you to remotely load and store multiple Cisco IOS Software and microcode images. You can download a new image over the network or from a local server. You can then add the new image to Flash memory or replace the existing files. You can boot the routers either manually or automatically from any of the stored images. Flash memory also functions as a TFTP server to allow other servers to boot remotely from stored images or to copy them into their own Flash memory.
Onboard Flash Single Inline Memory Module (SIMM)
The onboard Flash memory (called bootflash) is located in socket U17 and contains the Cisco IOS Software boot image and other user-defined files on the GRP. This is an 8 MB SIMM, which is not field upgradable. You can neither upgrade nor replace it. It is always recommended to synchronize the boot image with the main Cisco IOS Software image.
Flash Memory Card
The Flash memory card contains the Cisco IOS Software image. A Flash memory card is available as Product Number MEM-GRP-FL20=, which is a 20 MB PCMCIA Flash memory card that ships as a spare, or as a part of a Cisco 12000 Series system. This card can be inserted into either of the two PCMCIA slots in the GRP, so that the Cisco IOS Software can be loaded into the GRP main memory. Both type 1 and type 2 PCMCIA cards can be used.
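As an illustration, copying a new image to the Flash memory card and configuring the router to boot from it could look like this sketch. The image filename is hypothetical; substitute the actual image on your system:

```
! Copy a new image from a TFTP server to the PCMCIA card in slot 0
Router#copy tftp slot0:

! Configure the router to boot that image at the next reload
Router#configure terminal
Router(config)#boot system flash slot0:gsr-p-mz.120-21.S.bin
Router(config)#config-register 0x2102
Router(config)#end
Router#copy running-config startup-config
```

The configuration register value 0x2102 makes the router honor the boot system commands in the startup configuration.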
For compatibility information between the PCMCIA Flash cards and various platforms, refer to PCMCIA Filesystem Compatibility Matrix.
Nonvolatile RAM (NVRAM)
The information stored in the NVRAM is nonvolatile, which means the information is still present in this memory after a system reload. System configuration files, software configuration register settings, and environmental monitoring logs are contained in the 512 KB NVRAM, which is backed up with built-in lithium batteries that retain the contents for a minimum of five years. NVRAM is not field upgradable, which means you can neither upgrade nor replace it.
Erasable Programmable Read Only Memory (EPROM)
The EPROM on the GRP contains a ROMmon that enables you to boot the default Cisco IOS Software image from a Flash memory card if the Flash memory SIMM does not contain a boot helper image. If no valid image is found, the boot process ends up in ROMmon mode, which is a subset of the main Cisco IOS Software, to allow basic commands. The 512 KB Flash EPROM is not field upgradable, which means you can neither upgrade nor replace it.
A. On an LC, there are two types of user-configurable LC memory:
Route or processor memory (located in Dynamic RAM (DRAM))
Packet memory (located in Synchronous Dynamic RAM (SDRAM))
LC memory configurations and memory socket locations differ, depending on the engine type of the LC. In general, all LCs share a common set of memory configuration options for processor or route memory, but support different default and maximum configurations for packet memory based on the type of engine on which the LC is built.
On LCs, the main memory can be configured from the factory default of 128 MB (Engine 0, 1, and 2) up to the maximum configuration of 256 MB, which is the default for the Engine 3 and 4 LCs.
Note: If there is not enough DRAM to load the Cisco Express Forwarding tables onto an LC, Cisco Express Forwarding is automatically disabled on that LC. Because this is the only switching method available on 12000 Series Internet Routers, the LC itself is then disabled.
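As a sketch, the per-slot Cisco Express Forwarding state and LC memory usage can be inspected with commands like these (the slot number is only an example):

```
! Check the CEF state on each line card slot
Router#show cef linecard

! Check memory usage on a specific line card (slot 1 here)
Router#execute-on slot 1 show memory summary
```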
LC packet memory temporarily stores data packets awaiting switching decisions by the LC processor. Once the LC processor makes the switching decisions, the packets are propagated into the router's switch fabric for transmission to the appropriate LC. For an LC to operate, the Dual In-line Memory Module (DIMM) sockets for both transmitting and receiving must be populated. The SDRAM DIMMs installed in a given buffer (either receive or transmit) must be the same type and size, although receive and transmit buffers can operate with different memory sizes.
| Type of Engine | Default Packet Memory | Upgradable | Upgradable to |
|----------------|-----------------------|------------|---------------|
| Engine 0 | MEM-LC-PKT-128=  | No  | - |
| Engine 1 | MEM-LC1-PKT-256= | No  | - |
| Engine 2 | MEM-LC1-PKT-256= | Yes | MEM-PKT-512-UPG= |
| Engine 3 | 512 MB (no FRU yet) | No | - |
| Engine 4 | MEM-LC4-PKT-512= | No  | - |
A. The Cisco 12000 Series offers an extensive portfolio of LCs, including core, edge, channelized edge, ATM, Ethernet, Dynamic Packet Transport (DPT), and End-of-Sale (EOS). These LCs deliver high performance, guaranteed priority packet delivery and service transparent Online-Insertion and Removal (OIR) through the Cisco 12000 Series distributed system architecture. This table lists the released LCs as of December 2001:
Core LCs

| Line Card Name | Engine | Chassis Supported | Cisco IOS Software Release | Resources |
|----------------|--------|-------------------|----------------------------|-----------|
| 1-Port OC-48 POS ISE (One-Port OC-48c/STM-16c POS/SDH ISE Line Card) | Engine 3 (ISE) | 10G, 2.5G | 12.0(21)S, 12.0(21)ST | - |
| 1-Port OC-48 POS (One-Port OC-48c/STM-16c POS/SDH Line Card) | Engine 2 | 10G, 2.5G | 12.0(10)S, 12.0(11)ST | Datasheet |
| 4-Port OC-48 POS (Four-Port OC-48c/STM-16c POS/SDH Line Card) | Engine 4 | 10G only | 12.0(15)S, 12.0(17)ST | - |
| 1-Port OC-192 POS (One-Port OC-192c/STM-64c POS/SDH Line Card) | Engine 4 | 10G only | 12.0(15)S, 12.0(17)ST | - |
Edge LCs

| Line Card Name | Engine | Chassis Supported | Cisco IOS Software Release | Resources |
|----------------|--------|-------------------|----------------------------|-----------|
| 6-Port DS3 (Six-Port DS3 Line Card) | Engine 0 | 10G, 2.5G | 12.0(10)S, 12.0(11)ST | - |
| 12-Port DS3 (Twelve-Port DS3 Line Card) | Engine 0 | 10G, 2.5G | 12.0(10)S, 12.0(11)ST | - |
| 6-Port E3 (Six-Port E3 Line Card) | Engine 0 | 10G, 2.5G | 12.0(15)S, 12.0(16)ST | - |
| 12-Port E3 (Twelve-Port E3 Line Card) | Engine 0 | 10G, 2.5G | 12.0(15)S, 12.0(16)ST | - |
| 4-Port OC-3 POS (Four-Port OC-3c/STM-1c POS/SDH Line Card) | Engine 0 | 10G, 2.5G | 12.0(05)S, 12.0(11)ST | Datasheet |
| 8-Port OC-3 POS (Eight-Port OC-3c/STM-1c POS/SDH Line Card) | Engine 2 | 10G, 2.5G | 12.0(10)S, 12.0(11)ST | - |
| 16-Port OC-3 POS (Sixteen-Port OC-3c/STM-1c POS/SDH Line Card) | Engine 2 | 10G, 2.5G | 12.0(10)S, 12.0(11)ST | - |
| 16-Port OC-3 POS ISE (Sixteen-Port OC-3c/STM-1c POS/SDH ISE) | Engine 3 (ISE) | 10G, 2.5G | 12.0(21)S, 12.0(21)ST | - |
| 1-Port OC-12 POS (One-Port OC-12c/STM-4c POS/SDH Line Card) | Engine 0 | 10G, 2.5G | 12.0(10)S, 12.0(11)ST | Datasheet |
| 4-Port OC-12 POS (Four-Port OC-12c/STM-4c POS/SDH Line Card) | Engine 2 | 10G, 2.5G | 12.0(10)S, 12.0(11)ST | Datasheet |
| 4-Port OC-12 POS ISE (Four-Port OC-12c/STM-4c POS/SDH ISE Line Card) | Engine 3 (ISE) | 10G, 2.5G | 12.0(21)S, 12.0(21)ST | - |
| 1-Port OC-48 POS ISE (One-Port OC-48c/STM-16c POS/SDH ISE Line Card) | Engine 3 (ISE) | 10G, 2.5G | 12.0(21)S, 12.0(21)ST | - |
Channelized Edge LCs
| Line Card Name | Engine | Chassis Supported | Cisco IOS Software Release | Resources |
|----------------|--------|-------------------|----------------------------|-----------|
| 2-Port CHOC-3, DS1/E1 (Two-Port Channelized OC-3/STM-1 (DS1/E1) Line Card) | Engine 0 | 10G, 2.5G | 12.0(17)S, 12.0(17)ST | Datasheet |
| 1-Port CHOC-12, DS3 (One-Port Channelized OC-12 (DS3) Line Card) | Engine 0 | 10G, 2.5G | 12.0(05)S, 12.0(11)ST | Datasheet |
| 1-Port CHOC-12, OC-3 (One-Port Channelized OC-12/STM-4 (OC-3/STM-1) Line Card) | Engine 0 | 10G, 2.5G | 12.0(05)S, 12.0(11)ST | Datasheet |
| 4-Port CHOC-12 ISE (Four-Port Channelized OC-12/STM-4 (DS3/E3, OC-3c/STM-1c) POS/SDH ISE) | Engine 3 (ISE) | 10G, 2.5G | 12.0(21)S, 12.0(21)ST | - |
| 1-Port CHOC-48 ISE (One-Port Channelized OC-48/STM-16 (DS3/E3, OC-3c/STM-1c, OC-12c/STM-4c) POS/SDH ISE Line Card) | Engine 3 (ISE) | 10G, 2.5G | 12.0(21)S, 12.0(21)ST | - |
| 6-Port Ch T3 (Six-Port Channelized T3 (T1) Line Card) | Engine 0 | 10G, 2.5G | 12.0(14)S, 12.0(14)ST | - |
ATM LCs

| Line Card Name | Engine | Chassis Supported | Cisco IOS Software Release | Resources |
|----------------|--------|-------------------|----------------------------|-----------|
| 4-Port OC-3 ATM (Four-Port OC-3c/STM-1c ATM) | Engine 0 | 10G, 2.5G | 12.0(5)S, 12.0(11)ST | - |
| 1-Port OC-12 ATM (One-Port OC-12c/STM-4c ATM) | Engine 0 | 10G, 2.5G | 12.0(7)S, 12.0(11)ST | Datasheet |
| 4-Port OC-12 ATM (Four-Port OC-12c/STM-4c ATM Line Card) | Engine 2 | 10G, 2.5G | 12.0(13)S, 12.0(14)ST | Datasheet |
Ethernet LCs

| Line Card Name | Engine | Chassis Supported | Cisco IOS Software Release | Resources |
|----------------|--------|-------------------|----------------------------|-----------|
| 8-Port FE w/ ECC (Eight-Port Fast Ethernet Line Card) | Engine 1 | 10G, 2.5G | 12.0(10)S, 12.0(16)ST | - |
| 3-Port GE (Three-Port Gigabit Ethernet Line Card) | Engine 2 | 10G, 2.5G | 12.0(11)S, 12.0(16)ST | Datasheet |
| 10-Port GE (Ten-Port Gigabit Ethernet) | Engine 4 w/ RX/TX+/density | 10G, 2.5G | 12.0(22)S, 12.0(22)ST | Datasheet |
DPT LCs

| Line Card Name | Engine | Chassis Supported | Cisco IOS Software Release | Resources |
|----------------|--------|-------------------|----------------------------|-----------|
| 2-Port OC-12 DPT (Two-Port OC-12c/STM-4c DPT) | Engine 1 | 10G, 2.5G | 12.0(10)S, 12.0(11)ST | Announcement |
| 1-Port OC-48 DPT (One-Port OC-48c/STM-16c DPT) | Engine 2 | 10G, 2.5G | 12.0(15)S, 12.0(16)ST | Datasheet, Announcement |
These LCs are no longer sold. They are listed for your reference only:
| Line Card Name | Engine | Chassis Supported | Cisco IOS Software Release |
|----------------|--------|-------------------|----------------------------|
| 1-Port OC-192c/STM-64c Enabler Card (One-Port OC-192c/STM-64c POS Enabler Card) | Engine 2 | 10G, 2.5G | 12.0(10)S, 12.0(11)ST |
| 1-Port GE w/ ECC (One-Port Gigabit Ethernet Line Card; refer to the Product Bulletin for more information) | Engine 1 | 10G, 2.5G | 12.0(10)S, 12.0(16)ST |
Note: Engine 3 LCs are capable of performing edge features at line rate. The higher the Layer 3 (L3) engine, the more packets get switched in hardware.
A. Cisco IOS Software Release 12.0(9)S added the Layer 3 (L3) Engine type to the output of the show diag command, as illustrated:

```
SLOT 1  (RP/LC 1 ): 1 Port Packet Over SONET OC-12c/STM-4c Single Mode
  MAIN: type 34,  800-2529-02 rev C0 dev 16777215
        HW config: 0x00    SW key: FF-FF-FF
  PCA:  73-2184-04 rev D0 ver 3
        HW version 1.1  S/N CAB0242ADZM
  MBUS: MBUS Agent (1)  73-2146-07 rev B0 dev 0
        HW version 1.2  S/N CAB0236A4LE
  Test hist: 0xFF    RMA#: FF-FF-FF    RMA hist: 0xFF
  DIAG: Test count: 0xFFFFFFFF    Test results: 0xFFFFFFFF
  L3 Engine: 0 - OC12 (622 Mbps)

!--- Engine 0 card.

  MBUS Agent Software version 01.40 (RAM) (ROM version is 02.02)
  Using CAN Bus A
  ROM Monitor version 10.00
  Fabric Downloader version used 13.01 (ROM version is 13.01)
  Primary clock is CSC 1
  Board is analyzed
  Board State is Line Card Enabled (IOS RUN)
  Insertion time: 00:00:11 (2w1d ago)
  DRAM size: 268435456 bytes
  FrFab SDRAM size: 67108864 bytes
  ToFab SDRAM size: 67108864 bytes
  0 crashes since restart
```
There is a shortcut command that you can use to get the same result with only the useful information:

```
Router#show diag | i (SLOT | Engine)
...
SLOT 1 (RP/LC 1 ): 1 port ATM Over SONET OC12c/STM-4c Multi Mode
  L3 Engine: 0 - OC12 (622 Mbps)
SLOT 3 (RP/LC 3 ): 3 Port Gigabit Ethernet
  L3 Engine: 2 - Backbone OC48 (2.5 Gbps)
...
```
A. Support for redundant GRPs was introduced in Cisco IOS Software Releases 12.0(5)S and 11.2(15)GS2. When two GRPs are installed in a 12000 Series Router chassis, one GRP acts as the active GRP, and the other acts as a backup, or standby, GRP. If the primary Route Processor (RP) fails or is removed from the system, the secondary GRP detects the failure and initiates a switchover. During a switchover, the secondary GRP assumes control of the router, connects with the network interfaces, and activates the local network management interface and system console.
Route Processor Redundancy
Route Processor Redundancy (RPR) is an alternative mode to High System Availability (HSA), and allows Cisco IOS Software to be booted on the standby processor prior to switchover (a "cold boot"). In RPR, the standby RP loads a Cisco IOS Software image at boot time and initializes itself in standby mode; however, although the startup configuration is synchronized to the standby RP, system changes are not. In the event of a fatal error on the active RP, the system switches to the standby processor, which reinitializes itself as the active processor, reads and parses the startup configuration, reloads all of the Line Cards (LCs), and restarts the system.
Route Processor Redundancy Plus
In RPR+ mode, the standby RP is fully initialized. The active RP dynamically synchronizes startup and the running configuration changes to standby RP, meaning that the standby RP need not be reloaded and reinitialized (a "hot boot"). Additionally, on the Cisco 10000 and 12000 Series Internet Routers, the LCs are not reset in RPR+ mode. This functionality provides a much faster switchover between the processors. Information synchronized to the standby RP includes running configuration information, startup information on the Cisco 10000 and 12000 Series Internet Routers, and changes to the chassis state such as Online Insertion and Removal (OIR) of hardware. LC, protocol, and application state information are not synchronized to the standby RP.
RPR+ was introduced in Cisco IOS Software Release 12.0(17)ST. For more information on LCs with the 12000 Series Internet Routers that support RPR+, refer to Cross-Platform Release Notes for Cisco IOS Release 12.0 S, Part 2: New Features and Important Notes. All other line cards (such as ATM and Engine 3) are reset and reloaded during an RPR+ switchover.
Stateful Switchover (SSO) mode provides all the functionality of RPR+ in that Cisco IOS Software is fully initialized on the standby RP. In addition, SSO supports synchronization of LC, protocol, and application state information between RPs for supported features and protocols (a "hot standby").
SSO is a new feature available since Cisco IOS Software Release 12.0(22)S. For more information regarding this feature, refer to Stateful Switchover.
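As a brief sketch, the redundancy mode is selected under the redundancy configuration submode; command availability depends on the Cisco IOS Software release and installed GRPs:

```
Router#configure terminal
Router(config)#redundancy
Router(config-red)#mode rpr-plus
! Or, with Cisco IOS Software Release 12.0(22)S or later:
! Router(config-red)#mode sso
Router(config-red)#end
```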
A. Depending on the features you need, Cisco IOS Software releases 11.2GS, 12.0S, or 12.0ST can be installed on a 12000 Series Internet Router. The choice must be made based on the features that are required, the hardware parts that are installed, and the available memory.
As a reference guide to decide which Cisco IOS Software to install, consult the release notes listed. They give a detailed overview of the features and hardware components that are supported for each Cisco IOS Software release.
Note: The image running on the 12000 Series Internet Router (gsr-x-xx) includes an integrated Line Card (LC) image (glc-x-x) that gets downloaded to the LCs during system initialization.
A. Support for ACLs varies with the Layer 3 (L3) Engine type of Line Card (LC). The Engine 4 LC does not support ACLs, but Engine 4+ (now in Early Field Trial (EFT)) does support them.
Q. Which Simple Network Management Protocol (SNMP) MIBs does the 12000 Series Internet Router support for network management?
A. The 12000 Series Internet Router is generally designed for high-speed packet forwarding performance in the core of an IP network. Engine 3 and Engine 4+ Line Cards (LCs) are designed for edge applications and implement enhanced IP services (such as QoS) in hardware with no performance impact.
This table summarizes support for QoS features by Engine type:
| Engine | MDRR¹ | WRED² | Marking | Notes |
|--------|-------|-------|---------|-------|
| Engine 0 | Yes - Software | Yes - Software | Rate-limit statement only | Policy-based routing can also be used |
| Engine 1 | No | No | Rate-limit statement only | Policy-based routing can also be used |
| Engine 2 | Yes - Hardware | Yes - Hardware | Single ingress rate-limit statement per interface only; no ACL | Marking, MDRR, and WRED are not available on subinterfaces |
| Engine 3 | Yes - Hardware | Yes - Hardware | Port, ACL, rate-limit | Subinterfaces are supported on Engine 3 |
| Engine 4 | Yes - Hardware | Yes - Hardware | Yes - based on port with rate-limit; not on ACL | Minimal subinterface support |
| Engine 4+ | Yes - Hardware | Yes - Hardware | Yes - like Engine 4, but also has ACL support | - |
1 MDRR = Modified Deficit Round Robin
2 WRED = Weighted Random Early Detection
The right packet scheduling mechanism for a router depends on its switching architecture. Weighted Fair Queueing (WFQ) and Class-Based WFQ (CBWFQ) are the well-known scheduling algorithms for resource allocation on Cisco router platforms with a bus-based architecture. However, they are not supported on the Cisco 12000 Series Router. Legacy priority queueing and custom queueing also are not supported by the Cisco 12000 Series Router. Instead, the Gigabit Switch Router (GSR) uses a queueing mechanism that better suits its architecture and high-speed switch fabric. That mechanism is MDRR.
Within Deficit Round Robin (DRR), each service queue has an associated quantum value - an average number of bytes served in each round - and a deficit counter initialized to the quantum value. Each non-empty flow queue is served in a round-robin fashion, scheduling on average packets of quantum bytes in each round. Packets in a service queue are served as long as the deficit counter is greater than zero. Each packet served decreases the deficit counter by a value equal to its length in bytes. A queue can no longer be served after the deficit counter becomes zero or negative. In each new round, each non-empty queue's deficit counter is incremented by its quantum value.
MDRR differs from regular DRR by adding a special low-latency queue that can be serviced in one of two modes:
Strict priority mode—The queue is serviced whenever it is non-empty. This allows the lowest delay possible for this traffic.
Alternate mode—The low-latency queue is serviced alternating between itself and the other queues.
Tip: This low-latency queue is absolutely necessary for time-sensitive traffic that needs very low delay and low jitter. For instance, if you want to deploy a Voice over IP (VoIP) network, the delay and jitter requirements are quite strict, and the only way to meet these requirements is to use strict priority mode. The Service Level Agreement (SLA) in the backbone for the Priority Queue (PQ) class requires low delay and jitter, and no loss. Alternate mode introduces more delay and thus more jitter into the PQ class. A service provider designs the PQ class so that its average usage rate never exceeds 30-50 percent. It is permissible to have bursts in the PQ class above 100 percent of egress rate. In this case, the other classes starve, but for a very short time (perhaps a few hundred µs in the worst-case scenario).
These tables list support for MDRR in the ToFab (towards the switch fabric) and FrFab (from the switch fabric) hardware queues:
| Engine | ToFab Alternate MDRR | ToFab Strict MDRR | ToFab WRED |
|--------|----------------------|-------------------|------------|
| Eng0  | No  | Yes | Yes |
| Eng1  | No  | No  | No  |
| Eng2  | Yes | Yes | Yes |
| Eng3  | Yes | Yes | Yes |
| Eng4  | Yes | Yes | Yes |
| Eng4+ | Yes | Yes | Yes |
All ToFab Class of Service (CoS) on the 12000 Series Internet Router must be configured through the legacy CoS syntax.
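For illustration, a minimal legacy CoS sketch that maps precedence 5 traffic to the low-latency queue in the ToFab direction could look like this. The cos-queue-group and slot-table names are hypothetical, and the exact syntax should be verified against your Cisco IOS Software release:

```
! Define a CoS queue group: precedence 5 traffic goes to the
! low-latency queue, serviced in strict-priority mode
Router(config)#cos-queue-group PREMIUM
Router(config-cos-que)#precedence 5 queue low-latency
Router(config-cos-que)#queue low-latency strict-priority

! Apply the queue group to traffic destined to all slots (ToFab)
Router(config)#slot-table-cos SLOT-TABLE
Router(config-slot-cos)#destination-slot all PREMIUM
Router(config)#rx-cos-slot all SLOT-TABLE
```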
| Engine | FrFab Alternate MDRR | FrFab Strict MDRR | FrFab WRED |
|--------|----------------------|-------------------|------------|
| Eng0  | No   | Yes | Yes |
| Eng1  | No   | No  | No  |
| Eng2  | Yes¹ | Yes | Yes |
| Eng3  | Yes² | Yes | Yes |
| Eng4  | Yes  | Yes | Yes |
| Eng4+ | Yes  | Yes | Yes |
1 Alternate MDRR in the FrFab direction is only usable with the legacy CoS syntax for Engine 2 LCs.
2 Engine 3/5 hardware supports per-queue egress shaping and policing. This feature provides a superset of the alternate-mode MDRR queueing.
A. The MQC simplifies the configuration of Quality of Service (QoS) features on a router running Cisco IOS Software by providing a common command-line syntax across platforms. The MQC contains these three steps:
Defining a traffic class with the class-map command
Creating a service policy by associating the traffic class with one or more QoS policies (using the policy-map command)
Attaching the service policy to the interface with the service-policy command
For more information, refer to the Modular Quality of Service Command Line Interface.
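As a brief sketch of the three steps (the class name, policy name, interface, and values are hypothetical):

```
! Step 1: define a traffic class with the class-map command
Router(config)#class-map match-all VOICE
Router(config-cmap)#match ip precedence 5

! Step 2: create a service policy with the policy-map command
Router(config)#policy-map CORE-POLICY
Router(config-pmap)#class VOICE
Router(config-pmap-c)#bandwidth percent 30

! Step 3: attach the policy to an interface with service-policy
Router(config)#interface POS1/0
Router(config-if)#service-policy output CORE-POLICY
```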
MQC on the 12000 Series Internet Router varies slightly from the implementation on other platforms. Moreover, MQC on each Layer 3 (L3) forwarding engine may vary slightly.
This table lists MQC support for all L3 Engine types of Line Cards (LCs):
| L3 Engine type | MQC support | Cisco IOS Software Release |
|----------------|-------------|----------------------------|
| Engine 0¹ | Yes³ | 12.0(15)S |
| Engine 1  | No   | - |
| Engine 2  | Yes³ | 12.0(15)S² |
| Engine 3  | Yes  | 12.0(21)S |
| Engine 4  | Yes  | 12.0(22)S |
| Engine 4+ | Yes  | 12.0(22)S |
1 The 4OC3/ATM and LC-1OC12/ATM Engine 0 line cards do not support MQC.
2 There are some exceptions concerning MQC support on some LCs:
For the eight-port OC3 ATM LC, it is supported in 12.0(22)S and later releases.
For the two-port CHOC3/STM1, it is supported since 12.0(17)S.
For the OC-48 DPT, it is supported since 12.0(18)S.
3 For Engine 0 and Engine 2, MQC supports only these commands:
match ip precedence [value]
bandwidth percent [value]
random-detect precedence [prec] [min] [max]
MQC only supports FrFab queues. ToFab queues are not supported by the MQC. As a consequence, Rx Weighted Random Early Detection (WRED) and Modified Deficit Round Robin (MDRR) can only be configured through a traditional CLI.
This is valid for all LCs. MQC does not know about ToFab Class of Service (CoS).
Rx policies cannot be used, because virtual output-queues (known as ToFab queues) are not input-queues. The reason is that the ToFab queues pertain to a destination slot or port. Input queues must be associated solely with an input interface, with no regard to the destination slot or port. On the edge engine, the only input queues are the (input) shape queues.
Engine 3 LCs support MQC as of Cisco IOS Software Release 12.0(21)S. On Engine 3, MQC can be used to configure shaped queues in the ToFab direction; regular ToFab queues can only be configured through the legacy CLI. MQC can be used to configure all FrFab queues. MQC support is available for physical and channel interface definitions in 12.0(21)S/ST, and was extended to subinterface definitions in 12.0(22)S/ST.
Note: While MQC supports Committed Access Rates (CAR), it does not support the continue function; this is a generic MQC issue and is not limited to the 12000 Series Internet Router or Engine 3 LCs.
These are the MQC implementation differences between Engine 2 and Engine 3:
Engine 2:
There is only a single level of bandwidth sharing configuration.
The bandwidth percentage in the CLI is translated internally into a quantum value, and then programmed onto the appropriate queue.
Engine 3:
There are two levels of bandwidth sharing configuration.
There is a minimum bandwidth and a quantum for every queue.
The bandwidth percentage from the CLI is translated to a rate (Kbps), depending on the underlying link rate, and is then configured directly on the queue. No conversion to a quantum value is made. The precision of this minimum bandwidth guarantee is 64 Kbps.
The quantum value is set internally corresponding to the Maximum Transmission Unit (MTU) of the interface, and is set equally for all queues. There is no MQC CLI mechanism to modify this quantum value, either directly or indirectly.
Note: The quantum value must be greater than or equal to the MTU of the interface. Internally, the quantum is expressed in units of 512 bytes. So, for the default MTU of 4470 bytes, the minimum quantum value is 9 (4470/512 is approximately 8.73, rounded up).
Q. Is Fast EtherChannel (FEC) supported on the 8xFE and the 1XGE cards for the 12000 Series Internet Router?
A. FEC is not supported on the Fast Ethernet (FE) card. Gigabit EtherChannel (GEC) is currently not supported on any of the Gigabit Ethernet (GE) Line Cards (LCs) (for example, the GE and 3xGE).
Q. Is Inter-Switch Link (ISL) or 802.1q encapsulation supported on the Gigabit Ethernet (GE) or Fast Ethernet (FE) Line Cards (LCs)?
A. Cisco IOS Software Release 12.0(6) introduced support for 802.1q on GE interfaces only. 802.1q encapsulation is supported on all the GE LCs. The 12000 Series Internet Router does not support ISL encapsulation, and no support is planned.
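A minimal 802.1q subinterface sketch on a GE LC (the VLAN ID and addressing are hypothetical examples):

```
Router(config)#interface GigabitEthernet3/0.100
Router(config-subif)#encapsulation dot1Q 100
Router(config-subif)#ip address 192.0.2.1 255.255.255.0
```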
```
router#show interface GigabitEthernet 3/0 mac-accounting
GigabitEthernet3/0 GE to LINX switch #1
  Output (431 free)
    0090.bff7.a871(1  ): 1 packets, 85 bytes, last: 44960ms ago
    00d0.6338.8800(3  ): 2 packets, 145 bytes, last: 33384ms ago
    0090.86f7.a840(9  ): 2 packets, 145 bytes, last: 12288ms ago
    0050.2afc.901c(10 ): 4 packets, 265 bytes, last: 1300ms ago
```
A. The 3xGE Line Card (LC) also supports Sampled NetFlow Accounting and Border Gateway Protocol (BGP) policy accounting.
A. Since Cisco IOS Software Release 12.0(6)S, NetFlow is supported on the Cisco 12000 Series Routers, but only on Engine 0 and 1 Line Cards (LCs). NetFlow is not supported on the Gigabit Ethernet (GE) LCs.
Since Cisco IOS Software release 12.0(7)S, NetFlow is supported on the GE LC.
Since Cisco IOS Software Release 12.0(14)S, Sampled NetFlow is supported on Engine 2 Packet-over-SONET (PoS) LCs. The Sampled NetFlow feature samples one out of every x IP packets forwarded by the router, where you define the interval x between a minimum and maximum value. Sampled packets are accounted for in the router's NetFlow flow cache. Sampling substantially decreases the CPU utilization needed for NetFlow accounting, because the majority of packets are switched faster without additional NetFlow processing.
Refer to Sampled NetFlow for more information.
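As a sketch, Sampled NetFlow is enabled globally with a sampling interval and then per interface; the interval value and interface here are only examples:

```
! Sample one out of every 100 forwarded IP packets
Router(config)#ip flow-sampling-mode packet-interval 100

! Enable sampled flow accounting on the interface
Router(config)#interface POS2/0
Router(config-if)#ip route-cache flow sampled
```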
Since Cisco IOS Software Release 12.0(14)S, NetFlow Export version 5 is also supported on the Cisco 12000 Series Internet Router. The version 5 export format can be enabled along with traditional NetFlow and Sampled NetFlow features. The NetFlow Export Version 5 feature provides the ability to export fine granularity data to the NetFlow collector. Per-flow information and statistics are maintained and uploaded to the workstation.
Since Cisco IOS Software Release 12.0(16)S, Sampled NetFlow is supported on the 3-Port GE LCs.
Since Cisco IOS Software Release 12.0(18)S, Sampled NetFlow and 128 Access Control Lists (ACLs) on the Packet Switch Application-Specific Integrated Circuit (ASIC) (PSA) can now be configured at the same time on Engine 2 Packet over SONET (PoS) LCs.
Since Cisco IOS Software Release 12.0(19)S, the NetFlow Multiple Export Destinations feature enables the configuration of multiple destinations for the NetFlow data. With this feature enabled, identical streams of NetFlow data are sent to each configured destination host. Currently, the maximum number of export destinations allowed is two.
The NetFlow Multiple Export Destinations feature is available only if NetFlow is configured.
Refer to Sampled NetFlow Details and Platform Support for more information on the platforms supported.
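Operationally, multiple export destinations simply means the exporter sends an identical copy of each export datagram to every configured collector. A minimal Python sketch, using local sockets as stand-ins for real NetFlow collectors:

```python
# Minimal model of "multiple export destinations": the router sends an
# identical copy of each export datagram to every configured collector
# (at most two destinations are allowed).
import socket

def export(datagram, destinations):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for host, port in destinations[:2]:   # enforce the two-destination cap
            sock.sendto(datagram, (host, port))
    finally:
        sock.close()

# Demo with two local sockets standing in for real collectors.
rx1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx1.bind(("127.0.0.1", 0)); rx1.settimeout(5)
rx2.bind(("127.0.0.1", 0)); rx2.settimeout(5)

export(b"netflow-datagram", [rx1.getsockname(), rx2.getsockname()])
d1, d2 = rx1.recv(64), rx2.recv(64)
print(d1 == d2)  # True -- both collectors received the same stream
```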
Q. Are Access Control Lists (ACLs) supported on Engine 2 Line Cards (LCs) (also known as Performance LCs)?
A. Yes, as of Cisco IOS Software Release 12.0(10)S. However, there are some restrictions due to the architecture of the Engine 2 LCs. The Packet Switch Application-Specific Integrated Circuit (ASIC) (PSA) is used in Engine 2 LCs for IP and Multiprotocol Label Switching (MPLS) packet forwarding. It uses an mtrie-based lookup engine, microsequencers, and other special hardware to assist in the packet forwarding process. The PSA is a pipelined ASIC, so the performance of the Engine 2 LCs depends on the cycles spent in each of its six stages. Extra cycles required to support additional features or processing result in a performance degradation of the PSA. This is why the Engine 2-based LCs cannot support all the Cisco IOS Software features simultaneously. To help customers enable certain features on the Engine 2 LC, several customized PSA microcode bundles are provided. For instance, ACLs cannot coexist with Per Interface Rate Control (PIRC).
Q. Is Multiprotocol Label Switching (MPLS) supported on the Cisco 12000 Series Internet Router?
A. Yes. The Cisco IOS Software Release 12.0S train supports traffic engineering and Tag Distribution Protocol (TDP). The Cisco IOS Software Release 12.0ST train adds support for MPLS Virtual Private Networks (VPNs) and Label Distribution Protocol (LDP). MPLS is supported over Dynamic Packet Transport (DPT) cards since Cisco IOS Software Release 12.0(9)S.
Q. How can I determine which Clock and Scheduler Card (CSC) is the active one?
A. The show controllers clock command displays the active CSC, as shown in this example:

Router#show controllers clock
Switch Card Configured 0x1F (bitmask), Primary Clock for system is CSC_1
System Fabric Clock is Redundant

Slot #    Primary    ClockMode
  0        CSC_1     Redundant
  1        CSC_1     Redundant
  2        CSC_1     Redundant
  3        CSC_1     Redundant
  4        CSC_1     Redundant
 16        CSC_1     Redundant
 17        CSC_1     Redundant
 18        CSC_1     Redundant
 19        CSC_1     Redundant
 20        CSC_1     Redundant
Q. Which commands display the Line Cards (LCs) installed in the router and their status?
A. The show gsr and show diag summary commands display the installed LCs. The first one gives you the state of the LC, whereas the second one is shorter, as shown in this example:

Router#show gsr
Slot 0  type = 1 Port SONET based SRP OC-12c/STM-4
        state = Line Card Enabled
Slot 1  type = 8 Port Fast Ethernet
        state = Line Card Enabled
Slot 2  type = 1 Port E.D. Packet Over SONET OC-48c/STM-16
        state = Line Card Enabled
Slot 3  type = Route Processor
        state = IOS Running  ACTIVE
Slot 4  type = 4 Port E.D. Packet Over SONET OC-12c/STM-4
        state = Line Card Enabled
Slot 16 type = Clock Scheduler Card(6) OC-192
        state = Card Powered
Slot 17 type = Clock Scheduler Card(6) OC-192
        state = Card Powered  PRIMARY CLOCK
Slot 18 type = Switch Fabric Card(6) OC-192
        state = Card Powered
Slot 19 type = Switch Fabric Card(6) OC-192
        state = Card Powered
Slot 20 type = Switch Fabric Card(6) OC-192
        state = Card Powered
Slot 24 type = Alarm Module(6)
        state = Card Powered
Slot 25 type = Alarm Module(6)
        state = Card Powered
Slot 28 type = Blower Module(6)
        state = Card Powered

Router#show diag summary
SLOT 0  (RP/LC 0 ): 1 Port SONET based SRP OC-12c/STM-4 Single Mode
SLOT 1  (RP/LC 1 ): 8 Port Fast Ethernet Copper
SLOT 2  (RP/LC 2 ): 1 Port E.D. Packet Over SONET OC-48c/STM-16 Single Mode/SR SC-SC connector
SLOT 3  (RP/LC 3 ): Route Processor
SLOT 4  (RP/LC 4 ): 4 Port E.D. Packet Over SONET OC-12c/STM-4 Multi Mode
SLOT 16 (CSC 0   ): Clock Scheduler Card(6) OC-192
SLOT 17 (CSC 1   ): Clock Scheduler Card(6) OC-192
SLOT 18 (SFC 0   ): Switch Fabric Card(6) OC-192
SLOT 19 (SFC 1   ): Switch Fabric Card(6) OC-192
SLOT 20 (SFC 2   ): Switch Fabric Card(6) OC-192
SLOT 24 (PS A1   ): AC PEM(s) + Alarm Module(6)
SLOT 25 (PS A2   ): AC PEM(s) + Alarm Module(6)
SLOT 28 (TOP FAN ): Blower Module(6)
Q. How can I run a command on the Line Cards (LCs) from the Gigabit Route Processor (GRP)?
A. Issue the execute-on slot <slot #> <command> command to run a command on a specific Line Card (LC), or the execute-on all <command> command to run it on all LCs.
Q. How do I connect to a Line Card (LC) console?
A. From enable mode, issue the attach <slot #> command. To exit from the LC, issue the exit command.
Q. How do I run field diagnostics on a Line Card (LC)?
A. Issue the diag <slot #> verbose command. Running diagnostics disrupts normal operation and packet forwarding on the LC. If diagnostics fail, the LC remains in a down status. To restart it, you can either issue the microcode reload <slot #> command or the hw-module slot <slot #> reload command. Diagnostics do not find problems with the Switch Fabric Cards (SFCs).
Q. How can I monitor the buffer utilization on the Line Cards (LCs)?
A. These commands can be used to monitor buffer utilization:
execute-on slot <slot #> show controllers tofab queues
execute-on slot <slot #> show controllers frfab queues
Q. What are the ToFab and FrFab queues?
A. The packet memory on the Cisco 12000 Series Routers is divided into two banks: ToFab and FrFab. The ToFab memory is used for packets that come in on one of the interfaces of the Line Card (LC) and make their way to the fabric, whereas the FrFab memory is used for packets that go out an interface of the LC after crossing the fabric.
Understanding the ToFab and FrFab queues is the most important prerequisite for efficiently troubleshooting ignored packets on the 12000 Series Internet Router.
Note: ToFab (towards the fabric) and Rx (received by the router) are two different names for the same thing, as are FrFab (from the fabric) and Tx (transmitted by the router). For example, the ToFab Buffer Management Application-Specific Integrated Circuit (ASIC) (BMA) is also referred to as the RxBMA. This document uses the ToFab/FrFab convention, but you may see the Rx/Tx nomenclature used elsewhere.

LC-Slot1#show controllers tofab queues
Carve information for ToFab buffers
 SDRAM size: 33554432 bytes, address: 30000000, carve base: 30029100
 33386240 bytes carve size, 4 SDRAM bank(s), 8192 bytes SDRAM pagesize, 2 carve(s)
 max buffer data size 9248 bytes, min buffer data size 80 bytes
 40606/40606 buffers specified/carved
 33249088/33249088 bytes sum buffer sizes specified/carved
 Qnum    Head    Tail   #Qelem  LenThresh
 ----    ----    ----   ------  ---------
 5 non-IPC free queues:
      20254/20254 (buffers specified/carved), 49.87%, 80 byte data size
        1    17297   17296   20254      65535
      12152/12152 (buffers specified/carved), 29.92%, 608 byte data size
        2    20548   20547   12152      65535
      6076/6076 (buffers specified/carved), 14.96%, 1568 byte data size
        3    32507   38582    6076      65535
      1215/1215 (buffers specified/carved), 2.99%, 4544 byte data size
        4    38583   39797    1215      65535
      809/809 (buffers specified/carved), 1.99%, 9248 byte data size
        5    39798   40606     809      65535
 IPC Queue:
      100/100 (buffers specified/carved), 0.24%, 4112 byte data size
       30       72      71     100      65535
 Raw Queue:
       31        0   17302       0      65535
 ToFab Queues:
   Dest Slot
        0        0       0       0      65535
        1        0       0       0      65535
        2        0       0       0      65535
        3        0       0       0      65535
        4        0       0       0      65535
        5        0   17282       0      65535
        6        0       0       0      65535
        7        0      75       0      65535
        8        0       0       0      65535
        9        0       0       0      65535
       10        0       0       0      65535
       11        0       0       0      65535
       12        0       0       0      65535
       13        0       0       0      65535
       14        0       0       0      65535
       15        0       0       0      65535
Multicast        0       0       0      65535
This list describes some of the key fields found in the example referenced:
Synchronous Dynamic RAM (SDRAM) size: 33554432 bytes, address: 30000000, carve base: 30029100 - The size of receive packet memory and the address location where it begins.
max buffer data size 9248 bytes, min buffer data size 80 bytes - The maximum and minimum buffer sizes.
40606/40606 buffers specified/carved - Buffers specified by the Cisco IOS Software to be carved and the number of buffers actually carved.
non-IPC free queues - The non-Inter Process Communication (IPC) buffer pools are the packet buffer pools. A packet arriving into the LC is allocated a buffer from one of these pools depending on its size. On some LCs, the buffer-carving algorithm creates only three non-IPC free queues, because the ToFab queues are carved up to the highest-supported Maximum Transmission Unit (MTU) of the particular LC. For example, Ethernet LCs support only three queues (up to the 1568-byte size) and do not need a 4544-byte pool. The example output shows five packet buffer pools of sizes 80, 608, 1568, 4544, and 9248 bytes. For each pool, additional details are provided:
20254/20254 (buffers specified/carved), 49.87%, 80 byte data size - 49.87 percent of the receive packet memory has been carved into 20254 80-byte buffers.
Qnum - The queue number.
#Qelem - The number of buffers in this queue that are still available. This is the column to check to find out which queue is backed up.
Head and Tail - A head and tail mechanism is used to ensure the queues are moving properly.
IPC Queue - Reserved for IPC messages from the LC to the Gigabit Route Processor (GRP). For an explanation on IPC, refer to Troubleshooting CEF-Related Error Messages.
Raw Queue - When an incoming packet has been assigned a buffer from a non-IPC free queue, it is enqueued on the raw queue. The raw queue is a First In, First Out (FIFO) queue processed by the LC CPU during interrupts. A very large number in the #Qelem column of the Raw Queue row indicates that too many packets are waiting for the CPU, which cannot keep up with the rate at which these packets need to be serviced. A symptom of this problem is incrementing ignored errors in the show interfaces command output. This problem is very rare.
ToFab Queues - Virtual output queues; one per destination slot, plus one for multicast traffic. The sample output above displays sixteen destination-slot queues (0 through 15). Although the 12012 contains 12 slots, it was originally designed as a 15-slot chassis; virtual output queues 13 through 15 are not used.
After the ingress LC CPU makes a packet switching decision, the packet is enqueued on the virtual output queue corresponding to the slot where the packet is destined. The number in the fourth column is the number of packets currently enqueued on a virtual output queue.
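The virtual output queue arrangement described above can be modeled as one FIFO per destination slot plus one for multicast. This simplified Python sketch (a conceptual model, not the BMA hardware behavior) shows why a backed-up destination slot does not block traffic for the other slots:

```python
from collections import deque

class ToFabQueues:
    """Simplified model of the ToFab virtual output queues: one FIFO per
    destination slot plus one for multicast. Because each egress slot has
    its own queue, a congested slot cannot block packets headed elsewhere
    (no head-of-line blocking)."""

    def __init__(self, num_slots=16):
        self.queues = {slot: deque() for slot in range(num_slots)}
        self.queues["multicast"] = deque()

    def enqueue(self, packet, dest_slot):
        # After the ingress switching decision, the packet waits only on
        # the queue for its destination slot.
        self.queues[dest_slot].append(packet)

    def dequeue(self, dest_slot):
        # Called when the fabric grants a transfer toward dest_slot.
        return self.queues[dest_slot].popleft()

voq = ToFabQueues()
voq.enqueue("pkt-A", 5)
voq.enqueue("pkt-B", 7)
voq.enqueue("pkt-C", "multicast")
print(voq.dequeue(5))  # pkt-A -- the slot 7 and multicast queues are unaffected
```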
From the GRP, issue the attach command to attach to an LC, then issue the show controller frfab queue command to display the transmit packet memory. In addition to the fields in the ToFab output, the FrFab output displays an Interface Queues section. The output varies with the type and number of interfaces on the outgoing LC.
One such queue exists for each interface on the LC. Packets destined out a specific interface are enqueued onto the corresponding interface queue.

LC-Slot1#show controller frfab queue
========= Line Card (Slot 2) =======
Carve information for FrFab buffers
 SDRAM size: 16777216 bytes, address: 20000000, carve base: 2002D100
 16592640 bytes carve size, 0 SDRAM bank(s), 0 bytes SDRAM pagesize, 2 carve(s)
 max buffer data size 9248 bytes, min buffer data size 80 bytes
 20052/20052 buffers specified/carved
 16581552/16581552 bytes sum buffer sizes specified/carved
 Qnum    Head    Tail   #Qelem  LenThresh
 ----    ----    ----   ------  ---------
 5 non-IPC free queues:
      9977/9977 (buffers specified/carved), 49.75%, 80 byte data size
        1      101   10077    9977      65535
      5986/5986 (buffers specified/carved), 29.85%, 608 byte data size
        2    10078   16063    5986      65535
      2993/2993 (buffers specified/carved), 14.92%, 1568 byte data size
        3    16064   19056    2993      65535
      598/598 (buffers specified/carved), 2.98%, 4544 byte data size
        4    19057   19654     598      65535
      398/398 (buffers specified/carved), 1.98%, 9248 byte data size
        5    19655   20052     398      65535
 IPC Queue:
      100/100 (buffers specified/carved), 0.49%, 4112 byte data size
       30       77      76     100      65535
 Raw Queue:
       31        0      82       0      65535
 Interface Queues:
        0        0       0       0      65535
        1        0       0       0      65535
        2        0       0       0      65535
        3        0       0       0      65535
This list describes some of the key fields found in the example referenced:
non-IPC free queues - These queues are the packet buffer pools of various sizes. When a packet is received over the fabric, an appropriate-sized buffer is taken from one of these queues. The packet is copied into the buffer, which is then placed on the appropriate output interface queue. Unlike the ToFab queues, the FrFab queues are carved up to the maximum MTU of the entire system to support a packet sourced from any inbound interface.
IPC queue - Reserved for IPC messages from the GRP to the LC.
Interface queues - These queues are per interface (unlike the ToFab queues, which are per destination slot). The number (65535) in the right-most column is the tx-queue-limit. This value can be tuned with the tx-queue-limit command, but only on Engine 0 LCs. The command limits the number of transmit packet buffers that a per-interface queue can occupy. Tune this value down when a particular interface is heavily congested and requires the LC to buffer a large number of excess packets.
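The effect of lowering the tx-queue-limit can be sketched as a bounded per-interface queue that drops the excess instead of consuming shared FrFab packet memory. This is a simplified Python model; the tiny limit is purely for illustration:

```python
from collections import deque

# Simplified model of the tx-queue-limit behavior: cap how many transmit
# buffers one interface queue may hold, dropping the excess so a single
# congested interface cannot exhaust the shared FrFab packet memory.
# (The default LenThresh shown in the output above is 65535.)

class InterfaceQueue:
    def __init__(self, tx_queue_limit=65535):
        self.limit = tx_queue_limit
        self.q = deque()
        self.drops = 0

    def enqueue(self, packet):
        if len(self.q) >= self.limit:
            self.drops += 1   # queue full: packet is dropped, not buffered
            return False
        self.q.append(packet)
        return True

iq = InterfaceQueue(tx_queue_limit=3)  # deliberately tiny limit for the demo
for p in range(5):
    iq.enqueue(p)
print(len(iq.q), iq.drops)  # 3 2
```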
Q. What does the service download-fl command do?
A. fl stands for fabric loader. The command instructs the Route Processor (RP) to use the bundled fabric loader to download the Cisco IOS Software image to the Line Cards (LCs). In other words, the RP comes up first and downloads the fabric loader to the LCs; the full Cisco IOS Software image is then downloaded to the LCs through the new fabric downloader. The service download-fl command takes effect after a reboot. You can read more about this at Upgrading Line Card Firmware on a Cisco 12000 Series Router.
Q. What does the idbs-rem state on a Line Card (LC) mean?
A. idbs-rem means that the Interface Descriptor Blocks (IDBs) associated with the interface have been removed. This message usually points to a bad card or a card that is inserted incorrectly. You must first try to reseat the LC or to reload it manually by issuing the hw-module slot <slot #> reload command. If the card is still not recognized, replace it.
Q. Are characteristics such as type of fiber and optical link loss budget purely a function of which Gigabit Interface Converter (GBIC) you attach, or are these also dependent on the platform or Line Card (LC)?
A. They are a factor of the GBIC and are not LC-dependent.
Q. What command should I use to check the Cyclic Redundancy Checks (CRCs) on the Switch Fabric Cards (SFCs)?
A. The show controllers fia command provides the requested information. You must check this command on the primary Gigabit Route Processor (GRP) and for all the Line Cards (LCs) by attaching to each of them separately. If all of them complain about one SFC, then first try to reseat it. If the problem still persists, replace the faulty board. If only one LC complains about one SFC on which CRCs are increasing, then that LC is most likely faulty and not the SFC.
More information is available at How To Read the Output of the show controller fia Command.
Q. How do I find the serial number of the chassis?
A. The show gsr chassis-info command can be used to find the serial number of the chassis. In this example, TBA03450002 is the serial number of this Cisco 12000 Series Internet Router.

Router#show gsr chassis-info
Backplane NVRAM [version 0x20] Contents -
 Chassis: type 12416 Fab Ver: 3
 Chassis S/N: TBA03450002
 PCA: 73-4214-3  rev: A0  dev: 4759
 HW ver: 1.0
 Backplane S/N: TBC03450002
 MAC Addr: base 0030.71F3.7C00  block size: 1024
 RMA Number: 0x00-0x00-0x00  code: 0x00  hist: 0x00
 Preferred GRP: 7
Q. What does the %TFIB-7-SCANSABORTED syslog message mean?
A. The %TFIB-7-SCANSABORTED: TFIB scan not completing syslog message relates to the Cisco Express Forwarding (CEF) scanner, which runs periodically but is also invoked immediately when the Address Resolution Protocol (ARP) table changes. Once invoked, the CEF scanner calls the TFIB scanner, which sequentially parses the ARP table and updates the TFIB database. If the TFIB scanner is already running when the CEF scanner is invoked because of an ARP table change, the CEF scanner postpones calling the TFIB scanner until the current scan finishes. If the TFIB scanner has not completed the first scan and the CEF scanner receives more than 60 requests to update the TFIB, the %TFIB-7-SCANSABORTED: TFIB scan not completing message is displayed. If the message ends with MAC string updated, as in %TFIB-7-SCANSABORTED: TFIB scan not completing. MAC string updated, it means that the adjacency string for an interface keeps changing, which is usually due to an incorrect setup or configuration.
Q. Is Gigabit EtherChannel (GEC) supported on the SPA-10xGE?
A. GEC is not supported on the SPA-10xGE or SPA-10xGE-V; interface channeling is not supported on these cards. Therefore, it is not possible to add a Gigabit Ethernet interface to a configured port channel with the channel-group port-channel-number command.
Q. Only 3.5GB can be viewed on a Gigabit Switch Router (GSR) with PRP2 equipped with 4GB of main memory. Is this normal?
A. This is expected behavior. The CPU has 4GB of effective address space. Of that 4GB, the last 256MB are mapped to the various hardware devices; this mapping is done by the system controller chip, Discovery. Thus, only 3.75GB are available to map to memory devices.
The Discovery chip supports the mapping of four memory banks, and each bank must have a size that is a power of 2. Therefore, the first three banks are configured as 1GB each and the last one as 0.5GB, which totals 3.5GB.
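The arithmetic behind the 3.5GB figure can be checked quickly in Python (binary units assumed: 1GB = 2**30 bytes):

```python
# Check of the memory-mapping arithmetic: 256MB of the 4GB address space
# is reserved for hardware devices, and the four memory banks must each
# have a power-of-two size.
GB = 1 << 30
MB = 1 << 20

addressable = 4 * GB - 256 * MB            # 3.75GB left after device mappings
banks = [1 * GB, 1 * GB, 1 * GB, GB // 2]  # largest power-of-two banks that fit
assert all(b & (b - 1) == 0 for b in banks)  # each bank is a power of two
assert sum(banks) <= addressable
print(sum(banks) / GB)  # 3.5
```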
Q. Does the SPA-5X1GE support flow control?
A. Yes, the SPA-5X1GE supports flow control. For the Fast Ethernet and Gigabit Ethernet interfaces on the Cisco 12000 Series Router, flow control is autonegotiated when autonegotiation is enabled. Thus, there is no way to enable or disable flow control through the CLI, because it is negotiated automatically.
Refer to Configuring Autonegotiation on an Interface for more information.
Refer to Cisco Technical Tips Conventions for information on conventions used in this document.