This document reviews how to configure Cisco IOS® Software congestion management and congestion avoidance features on the Cisco 12000 Series Internet Router.
After you read this document, you should be able to:
Understand why it is important to configure Modified Deficit Round Robin (MDRR) and Weighted Random Early Detection (WRED) in your core network.
Understand the implementation that underlies MDRR and WRED on the Cisco 12000 Series.
Configure MDRR and WRED using the legacy Class of Service (CoS) syntax and Modular QoS CLI (MQC).
Readers of this document should have knowledge of these topics:
A general familiarity with the Cisco 12000 Series Internet Router architecture.
In particular, an awareness of the queueing architecture and these terms:
ToFab (Towards the fabric) – which describes the receive-side queues on an inbound line card.
FrFab (From the fabric) – which describes the transmit-side queues on an outbound line card.
Note: It is also recommended that you look up How to Read the Output of the show controller frfab | tofab queue Commands on a Cisco 12000 Series Internet Router.
The information in this document is based on these software and hardware versions:
All the Cisco 12000 platforms, which include the 12008, 12012, 12016, 12404, 12406, 12410, and the 12416.
Cisco IOS Software Release 12.0(24)S1.
Note: Although the configurations in this document were tested on Cisco IOS Software Release 12.0(24)S1, any Cisco IOS software release that supports the Cisco 12000 Series Internet Router can be used.
The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command.
For more information on document conventions, refer to the Cisco Technical Tips Conventions.
Queueing methods define the packet scheduling mechanism or the order in which packets are dequeued to the interface for transmission on the physical wire. Based on the order and number of times that a queue is serviced by a scheduler function, queueing methods also support minimum bandwidth guarantees and low latencies.
It is important to ensure that a packet scheduling mechanism supports the switching architecture on which it is implemented. Weighted fair queuing (WFQ) is the well-known scheduling algorithm for resource allocation on Cisco router platforms with a bus-based architecture. However, it is not supported on the Cisco 12000 Series Internet Router. Traditional Cisco IOS software priority queueing and custom queueing also are not supported. Instead, the Cisco 12000 Series uses a special form of queueing called Modified Deficit Round Robin (MDRR), which provides relative bandwidth guarantees as well as a low latency queue. The "M" in MDRR stands for "modified": unlike plain DRR, MDRR adds a priority queue. For details on MDRR, see the MDRR Overview section.
In addition, the Cisco 12000 Series supports Weighted Random Early Detection (WRED) as a drop policy within the MDRR queues. This congestion avoidance mechanism provides an alternative to the default tail drop mechanism: it avoids congestion through controlled, early drops.
Congestion avoidance and management mechanisms such as WRED and MDRR are particularly important on the FrFab queues of relatively low-speed outbound interfaces, such as channelized line cards (LCs). The high-speed switch fabric can deliver packets to the channel groups much faster than the channel groups can transmit them. Because queueing and buffers are managed at the physical port level, backpressure on one channel can impact all other channels on that port. Thus, it is very important to manage that backpressure through WRED/MDRR, which limits the backpressure impact to the channel(s) in question. For details about how to manage outbound interface oversubscription, see Troubleshooting Ignored Packets and No Memory Drops on the Cisco 12000 Series Internet Router.
This section provides an overview of Modified Deficit Round Robin (MDRR).
With MDRR configured as the queueing strategy, non-empty queues are served one after the other, in a round-robin fashion. Each time a queue is served, a fixed amount of data is dequeued. The algorithm then services the next queue. When a queue is served, MDRR keeps track of the number of bytes of data that was dequeued in excess of the configured value. In the next pass, when the queue is served again, less data will be dequeued to compensate for the excess data that was served previously. As a result, the average amount of data dequeued per queue will be close to the configured value. In addition, MDRR maintains a priority queue that gets served on a preferential basis. MDRR is explained in greater detail in this section.
Each queue within MDRR is defined by two variables:
Quantum value – This is the average number of bytes served in each round.
Deficit counter – This is used to track how many bytes a queue has transmitted in each round. It is initialized to the quantum value.
Packets in a queue are served as long as the deficit counter is greater than zero. Each packet served decreases the deficit counter by a value equal to its length in bytes. A queue can no longer be served after the deficit counter becomes zero or negative. In each new round, the deficit counter of each non-empty queue is increased by its quantum value.
Note: In general, the quantum size for a queue must not be smaller than the maximum transmission unit (MTU) of the interface. This ensures that the scheduler always serves at least one packet from each non-empty queue.
Each MDRR queue can be given a relative weight, with one of the queues in the group defined as a priority queue. The weights assign relative bandwidth for each queue when the interface is congested. The MDRR algorithm dequeues data from each queue in a round-robin fashion if there is data in the queue to be sent.
If all the regular MDRR queues have data in them, they are serviced as follows:
0-1-2-3-4-5-6-0-1-2-3-4-5-6...
During each cycle, a queue can dequeue a quantum based on its configured weight. On Engine 0 and Engine 2 line cards, a weight value of 1 gives a queue a quantum equal to the interface MTU. For each increment above 1, the quantum increases by 512 bytes. For example, if the MTU of a particular interface is 4470 and the weight of a queue is configured to be 3, each time through the rotation 4470 + (3-1)*512 = 5494 bytes are allowed to be dequeued. Suppose two normal DRR queues, Q0 and Q1, are used: Q0 is configured with a weight of 1 and Q1 with a weight of 9. If both queues are congested, each time through the rotation Q0 would be allowed to send 4470 bytes and Q1 would be allowed to send 4470 + (9-1)*512 = 8566 bytes. This gives the traffic that goes to Q0 approximately 1/3 of the bandwidth and the traffic that goes to Q1 about 2/3 of the bandwidth.
Note: The standard de-queue formula used to compute MDRR bandwidth assignment is D = MTU + (weight-1)*512. In versions earlier than Cisco IOS Software Release 12.0(21) S/ST, Engine 4 line cards used a different dequeue formula. In order to configure the MDRR weight for a correct bandwidth assignment, ensure that you run a Cisco IOS Software Release later than 12.0(21) S/ST.
Note: The quantum formula for Engine 4+ line cards in the ToFab direction (it is not valid for FrFab) is: Quantum = BaseWeight + {BaseWeight * (QueueWeight - 1) * 512} / MTU. The BaseWeight is obtained with this formula: BaseWeight = {(MTU + 512 - 1) / 512} + 5
Note: All calculations are rounded off; that is, all decimals are ignored.
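For illustration only, assume an Engine 4+ ToFab queue with a 4470-byte MTU and a configured queue weight of 3 (values chosen purely for this example); with decimals ignored at each step:

BaseWeight = {(4470 + 512 - 1) / 512} + 5 = {4981 / 512} + 5 = 9 + 5 = 14
Quantum = 14 + {14 * (3 - 1) * 512} / 4470 = 14 + {14336 / 4470} = 14 + 3 = 17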
Note: To know whether a specific engine line card supports MDRR, see MDRR Support by Engine Type.
The Cisco 12000 Series supports a priority queue (PQ) inside MDRR. This queue provides the low delay and low jitter required by time-sensitive traffic such as Voice over IP (VoIP).
As noted above, the Cisco 12000 Series does not support weighted fair queueing (WFQ). Thus, the PQ inside MDRR operates differently from the Cisco IOS software low latency queueing (LLQ) feature available for other platforms.
A key difference is how the MDRR scheduler can be configured to service the PQ in one of two modes, as listed in table 1:
Table 1 – How to Configure the MDRR Scheduler to Service the PQ in Two Modes

| | Alternate Mode | Strict Priority Mode |
|---|---|---|
| Advantages | Here, the PQ is serviced in between the other queues. In other words, the MDRR scheduler alternately services the PQ and any other configured queues. | Here, the PQ is serviced whenever it is non-empty. This provides the lowest possible delay for matching traffic. |
| Disadvantages | This mode can introduce jitter and delay when compared to strict priority mode. | This mode can starve other queues, particularly if the matching flows are aggressive senders. |
Alternate mode exercises less control over jitter and delay. If the MDRR scheduler starts to service MTU-sized frames from a data queue and a voice packet then arrives in the PQ, the scheduler in alternate mode completely serves the non-priority queue until its deficit counter reaches zero. During this time, the PQ is not serviced, and the VoIP packets are delayed.
By contrast, in strict priority mode, the scheduler services only the current non-priority packet and then switches to the PQ. The scheduler starts to service a non-priority queue only after the PQ becomes completely empty.
It is important to note that the priority queue in alternate priority mode is serviced more than once in a cycle, and thus takes more bandwidth than other queues with the same nominal weight. How much more is a function of how many queues are defined. For example, with three queues, the low latency queue is serviced twice as often as the other queues, and it sends twice its weight per cycle. If eight queues are defined, the low latency queue is serviced seven times more often and the effective weight is seven times higher. Thus, the bandwidth the queue can take is related to how often it is served per round robin, which in turn depends on how many queues are defined overall. In alternate priority mode, the priority queue is usually configured with a small weight for this particular reason.
As an example, assume that four queues are defined: 0, 1, 2 and the priority queue. In alternate priority mode, if all queues are congested, they would be serviced as follows: 0, llq, 1, llq, 2, llq, 0, llq, 1, .... where llq stands for low latency queue.
Each time a queue is serviced, it can send up to its configured weight. Therefore, the minimum bandwidth the low latency queue can have is:
WL = weight of the low latency queue.
W0, W1, ... Wn = weights of the regular DRR queues.
n = number of regular DRR queues used for this interface.
BW = Bandwidth of the link.
For alternate priority mode, the minimum bandwidth of the low latency queue = BW * n * WL / (n * WL + Sum(W0,Wn)).
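For illustration, assume three regular DRR queues (n = 3), each with a weight of 1, and a low latency queue weight WL of 1 (values chosen only for this example):

Minimum LLQ bandwidth = BW * 3 * 1 / (3 * 1 + (1 + 1 + 1)) = BW * 3 / 6 = BW / 2

So even though the LLQ has the same nominal weight as the other queues, in alternate mode it is guaranteed roughly half of the link bandwidth, because it is serviced between every pair of regular queues.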
The weight is the only variable parameter in MDRR that can be configured. It influences the relative amount of bandwidth that a traffic class can use, and how much traffic is sent in one turn. Use of larger weights means that the overall cycle takes longer, and possibly increases latency.
Configuration Guidelines
Configure the class that has the lowest bandwidth requirement with a weight of 1 in order to keep the delay and jitter as low as possible for all classes.
Select weight values that are as low as possible. Start with a weight of 1 for the class with the lowest bandwidth. For example, when you use two classes with 50% bandwidth each, configure weights of 1 and 1. It does not make sense to use 10 and 10: the bandwidth split is the same as with 1 and 1, but the higher weight introduces more jitter.
A low weight value for the LLQ is very important, especially in alternate mode in order not to add too much delay or jitter to the other classes.
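As an illustration of these guidelines, here is a minimal cos-queue-group sketch that uses the legacy CoS syntax described later in this document (the group name and the precedence-to-queue mapping are assumptions chosen only for the example):

cos-queue-group low-weight-example
 precedence 0 queue 0
 precedence 4 queue 1
 precedence 5 queue low-latency
 queue 0 1
 !--- Weight 1 for the first 50% class.
 queue 1 1
 !--- Weight 1 for the second 50% class; 10 and 10 gives the same split but more jitter.
 queue low-latency alternate-priority 1
 !--- In alternate mode, keep the LLQ weight at 1 so it adds as little delay and jitter as possible.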
The example in this section is taken from Inside Cisco IOS® Software Architecture, Cisco Press.
Assume we have three queues:
Queue 0 – has a quantum of 1500 bytes; it's the low latency queue, configured to operate in alternate mode.
Queue 1 – has a quantum of 3000 bytes.
Queue 2 – has a quantum of 1500 bytes.
Figure 1 illustrates the initial state of the queues, along with some packets that have been received and queued.
Figure 1 – MDRR Initial State
Queue 0 is serviced first. Its quantum is added to its deficit counter; packet 1, which is 250 bytes, is transmitted, and its size is subtracted from the deficit counter. Because the deficit counter of queue 0 is still greater than 0 (1500 - 250 = 1250), packet 2 is transmitted as well, and its length subtracted from the deficit counter. The deficit counter of queue 0 is now -250, so queue 1 is serviced next. Figure 2 indicates this state.
Figure 2 – MDRR Subsequent State
The deficit counter of queue 1 is set to 3000 (0 + 3000 = 3000), and packets 4 and 5 are transmitted. With each packet transmitted, subtract the size of the packet from the deficit counter, so the deficit counter of queue 1 is reduced to 0. Figure 3 illustrates this state.
Figure 3 – MDRR State when the Deficit Counter of Queue 1 is Zero
Because we operate in alternate priority mode, queue 0 (the priority queue) is serviced next. Again, the quantum is added to the current deficit counter, and the deficit counter of queue 0 is set to the result (-250 + 1500 = 1250). Packet 3 is now transmitted, because the deficit counter is greater than 0, and queue 0 is now empty. When a queue is emptied, its deficit counter is set to 0, as shown in Figure 4.
Figure 4 – MDRR State When a Queue is Emptied
Queue 2 is serviced next; its deficit counter is set to 1500 (0 + 1500 = 1500). Packets 7 through 10 are transmitted, which leaves the deficit counter at 500 (1500 - (4*250) = 500). Because the deficit counter is still greater than 0, packet 11 is also transmitted.
When packet 11 is transmitted, queue 2 is empty, and its deficit counter is set to 0, as shown in Figure 5.
Figure 5 – MDRR State When Packet 11 is Transmitted
Queue 0 is serviced again next (because we are in alternate priority mode). Because it is empty, we service queue 1 next, and transmit packet 6.
The Cisco 12000 Series supports five line card models with unique Layer 3 (L3) forwarding Engine architectures. Support for MDRR varies with the L3 Engine type, and the type of card. For instance, there is no support for MDRR and WRED on Engine 0 ATM line cards. You can use the show diag command to determine the L3 Engine type of your installed line cards:
router#show diags | include (SLOT | Engine)
!--- The regular expression is case-sensitive.
...
SLOT 1  (RP/LC 1 ): 1 port ATM Over SONET OC12c/STM-4c Multi Mode
  L3 Engine: 0 - OC12 (622 Mbps)
SLOT 3  (RP/LC 3 ): 3 Port Gigabit Ethernet
  L3 Engine: 2 - Backbone OC48 (2.5 Gbps)
You can use either the "Legacy CoS Syntax" or the "Modular QoS Command Line Interface" (MQC) to configure MDRR on the Cisco 12000 Series. The later sections in this document discuss how to configure MDRR with Legacy CoS or Modular QoS. You must configure the ToFab queues with the legacy CoS syntax only, because they do not support the newer MQC syntax. See table 2 for details.
Table 2 – Details on MDRR on ToFab (Rx) Queues

| | Where Implemented | ToFab MDRR | ToFab Alternate PQ | ToFab Strict PQ | ToFab WRED |
|---|---|---|---|---|---|
| Eng0 | Software | No** | No** | Yes | Yes |
| Eng1 | - | No | No | No | No |
| Eng2 | Hardware | Yes | Yes | Yes | Yes |
| Eng3 | Hardware | No | Yes | Yes | Yes |
| Eng4 | Hardware | Yes | Yes | Yes | Yes |
| Eng4+ | Hardware | Yes | Yes | Yes | Yes |
** MDRR is supported on Engine 0 LCs in the ToFab (Rx) direction, but only the strict priority mode, not the alternate priority mode. The seven remaining queues are supported as usual.
Inbound interfaces maintain a separate virtual output queue per destination LC. How these queues are implemented depends on the L3 Engine type.
Engine 0 – Inbound LCs maintain eight queues per destination slot. Thus, the maximum number of queues is 16x8 = 128. Each queue can be configured separately.
Engines 2, 3, 4, and 4+ – Inbound LCs maintain eight queues per destination interface. With 16 destination slots and 16 interfaces per slot, the maximum number of queues is 16x16x8 = 2048. All interfaces on a destination slot must use the same parameters.
MDRR on the FrFab queues operates consistently whether implemented in hardware or software. All L3 Engine types support eight class queues for each outbound interface. See table 3 for details.
Table 3 – Details on MDRR on FrFab (Tx) Queues

| | Where Implemented | FrFab Alternate PQ | FrFab Strict PQ | FrFab WRED |
|---|---|---|---|---|
| Eng0 | Software1 | No | Yes | Yes |
| Eng1 | - | No | No | No |
| Eng2 | Hardware | Yes2 | Yes | Yes |
| Eng3 | Hardware | No | Yes | Yes |
| Eng4 | Hardware | Yes | Yes | Yes |
| Eng4+ | Hardware | Yes | Yes | Yes |
1Support for MDRR on FrFab queues of Engine 0 LCs is introduced in these versions of Cisco IOS software:
Cisco IOS Software Release 12.0(10)S - 4xOC3 and 1xOC12 POS, 4xOC3, and 1xCHOC12/ STM4.
Cisco IOS Software Release 12.0(15)S - 6xE3 and 12xE3.
Cisco IOS Software Release 12.0(17)S - 2xCHOC3/STM1.
2You must configure alternate MDRR in the FrFab direction with the legacy CoS syntax.
Note: The 3xGE LC supports MDRR on the ToFab queues and, as from Cisco IOS Software Release 12.0(15)S, on the FrFab queues with two restrictions, namely, a fixed quantum, and a single CoS queue for each interface. The priority queue supports a quantum that can be configured, and both strict and alternate priority modes. All three interfaces share a single PQ.
Note: See the Cisco 12000 Series Routers Release Notes for the latest information on supported QoS features on Cisco 12000 Series LCs.
Weighted Random Early Detection (WRED) is designed to prevent the harmful effects of interface congestion on network throughput.
Figure 6 – WRED Packet Drop Probability
See Weighted Random Early Detection on the Cisco 12000 Series Router for an explanation of the WRED parameters. The minimum, maximum, and mark probability parameters describe the actual Random Early Detection (RED) curve. When the weighted queue average is below the minimum threshold, no packets are dropped. When the weighted queue average is above the maximum threshold, all packets are dropped until the average drops below the maximum threshold. When the average is between the minimum and maximum thresholds, the drop probability increases along a straight line from 0 at the minimum threshold to 1/mark-probability-denominator at the maximum threshold.
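As a worked example, take the OC12 values from table 4: a minimum threshold of 375 packets, a maximum threshold of 2423 packets, and a mark probability denominator of 1.

Weighted average of 300 packets: below the minimum threshold, so no packets are dropped.
Weighted average of 1399 packets (halfway between the thresholds): drop probability is approximately 0.5 * (1/1) = 0.5.
Weighted average above 2423 packets: every packet is dropped until the average falls back below the maximum threshold.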
The difference between RED and WRED is that WRED can selectively discard lower-priority traffic when the interface begins to get congested, and can provide differentiated performance characteristics for different Classes of Service (CoS). By default, WRED uses a different RED profile for each weight (IP precedence - 8 profiles). It drops less important packets more aggressively than more important packets.
It is a complex challenge to tune WRED parameters to manage the queue depth, and depends on many factors, which include:
Offered traffic load and profile.
Ratio of load to available capacity.
Behavior of traffic in the presence of congestion.
These factors vary network by network and, in turn, depend on the offered services and on the customers who use those services. Thus, we cannot make recommendations that apply to specific customer environments. However, table 4 describes generally recommended values based on the bandwidth of the link. In that case, we do not differentiate the dropping characteristics between the different classes of service.
Table 4 – Recommended Values Based on the Bandwidth of the Link

| Bandwidth | Theoretical BW (kbps) | Physical BW1 (kbps) | Minimum Threshold (packets) | Maximum Threshold (packets) |
|---|---|---|---|---|
| OC3 | 155000 | 149760 | 94 | 606 |
| OC12 | 622000 | 599040 | 375 | 2423 |
| OC48 | 2400000 | 2396160 | 1498 | 9690 |
| OC192 | 10000000 | 9584640 | 5991 | 38759 |
1Physical SONET rate
Several constraints are taken into account to compute the above threshold values: link utilization must be maximized while the average queue depth is minimized, and the difference between the maximum and minimum thresholds must be a power of two (due to a hardware limitation). Based on experience and simulation, the maximum instantaneous depth of a queue controlled by RED stays below two times the maximum threshold (and below one times the maximum threshold for OC48 and above). However, the exact determination of these values is beyond the scope of this document.
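As a quick check of the power-of-two constraint against the values in table 4 (maximum threshold minus minimum threshold):

OC3:   606 - 94 = 512 = 2^9
OC12:  2423 - 375 = 2048 = 2^11
OC48:  9690 - 1498 = 8192 = 2^13
OC192: 38759 - 5991 = 32768 = 2^15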
Note: The Exponential weighting constant value need not be configured on Engine 2 and above line cards, since hardware queue sampling is used instead. For Engine 0 LCs, these values can be configured:
ds3 – 9
oc3 – 10
oc12 – 12
Note: WRED is not supported on Engine 1 LCs.
As the following sections explain, you can use both the legacy CoS syntax and the newer MQC syntax to configure WRED.
The Cisco 12000 Series legacy Class of Service (CoS) syntax uses a cos-queue-group template to define a set of CoS definitions. You then apply the template to ToFab and FrFab queues on inbound or outbound interfaces, respectively.
The cos-queue-group command creates a named template of MDRR and WRED parameters. Here are the available configuration parameters at the CLI:
Router(config)#cos-queue-group oc12
Router(config-cos-que)#?
Static cos queue commands:
  default                         Set a command to its defaults
  dscp                            Set per DSCP parameters, Engine 3 only
  exit                            Exit from COS queue group configuration mode
  exponential-weighting-constant  Set group's RED exponential weight constant.
                                  (Not used by engine 0, 1 or 2 line cards)
  no                              Negate a command or set its defaults
  precedence                      Set per precedence parameters
  queue                           Set individual queue parmeters
  random-detect-label             Set RED drop criteria
  traffic-shape                   Enable Traffic Shaping on a COS queue group
With MDRR, you can map the IP precedence to MDRR queues and configure the special low latency queue. You can use the precedence parameter under the cos-queue-group command for this:
precedence <0-7> queue [ <0-6> | low-latency]
You can map a particular IP precedence to a regular MDRR queue (queue 0 to 6) or you can map it to the priority queue. The above command allows you to map several IP precedences to the same queue.
Note: It is recommended that you use precedence 5 for the low latency queue. Precedence 6 is used for routing updates.
You can give each MDRR queue a relative weight, with one of the queues in the group defined as a priority queue. You can use the queue command under the cos-queue-group to do this:
queue <0-6> <1-2048>
queue low-latency [alternate-priority | strict-priority] <1-2048>
!--- The weight option is not available with strict priority.
Use the random-detect-label command under the cos-queue-group to define WRED parameters:
random-detect-label <label> <minimum-threshold> <maximum-threshold> <mark-probability denominator>
Here is an example of a cos-queue-group named oc12. It defines three traffic classes: queue 0, 1, and low-latency. It maps IP precedence values 0 - 3 to queue 0, precedence values 4, 6, and 7 to queue 1, and precedence 5 to the low-latency queue. Queue 1 is assigned more bandwidth.
Configuration Example:

cos-queue-group oc12
!--- Creation of cos-queue-group called "oc12".
 precedence 0 queue 0
 !--- Map precedence 0 to queue 0.
 precedence 0 random-detect-label 0
 !--- Use RED profile 0 on queue 0.
 precedence 1 queue 0
 precedence 1 random-detect-label 0
 precedence 2 queue 0
 precedence 2 random-detect-label 0
 precedence 3 queue 0
 precedence 3 random-detect-label 0
 !--- Precedence 1, 2 and 3 also go into queue 0.
 precedence 4 queue 1
 precedence 4 random-detect-label 1
 precedence 6 queue 1
 precedence 6 random-detect-label 1
 precedence 7 queue 1
 precedence 7 random-detect-label 1
 precedence 5 queue low-latency
 !--- Map precedence 5 to special low latency queue.
 !--- We do not intend to drop any traffic from the LLQ. We have an SLA
 !--- that commits not to drop on this queue. You want to give it all
 !--- the bandwidth it requires.
 random-detect-label 0 375 2423 1
 !--- Minimum threshold 375 packets, maximum threshold 2423 packets.
 !--- Drop probability at maximum threshold is 1.
 random-detect-label 1 375 2423 1
 queue 1 20
 !--- Queue 1 gets MDRR weight of 20, thus gets more bandwidth.
 queue low-latency strict-priority
 !--- Low latency queue runs in strict priority mode.
To avoid head of line blocking, inbound interfaces on the Cisco 12000 Series maintain a virtual output queue per destination slot. To view these virtual output queues, attach to a line card with the attach command and run the show controllers tofab queue command, or directly enter the execute-on slot 0 show controllers tofab queue command from the route processor. Sample output captured directly from the LC console is provided below. See How To Read the Output of the show controller frfab | tofab queue Commands on a Cisco 12000 Series Internet Router.
LC-Slot1#show controllers tofab queues
Carve information for ToFab buffers
 SDRAM size: 33554432 bytes, address: 30000000, carve base: 30029100
 33386240 bytes carve size, 4 SDRAM bank(s), 8192 bytes SDRAM pagesize, 2 carve(s)
 max buffer data size 9248 bytes, min buffer data size 80 bytes
 40606/40606 buffers specified/carved
 33249088/33249088 bytes sum buffer sizes specified/carved
 Qnum   Head   Tail   #Qelem  LenThresh
 ----   ----   ----   ------  ---------
 5 non-IPC free queues:
 20254/20254 (buffers specified/carved), 49.87%, 80 byte data size
 1      17297  17296  20254   65535
 12152/12152 (buffers specified/carved), 29.92%, 608 byte data size
 2      20548  20547  12152   65535
 6076/6076 (buffers specified/carved), 14.96%, 1568 byte data size
 3      32507  38582  6076    65535
 1215/1215 (buffers specified/carved), 2.99%, 4544 byte data size
 4      38583  39797  1215    65535
 809/809 (buffers specified/carved), 1.99%, 9248 byte data size
 5      39798  40606  809     65535
 IPC Queue:
 100/100 (buffers specified/carved), 0.24%, 4112 byte data size
 30     72     71     100     65535
 Raw Queue:
 31     0      17302  0       65535
 ToFab Queues:
 Dest
 Slot
 0      0      0      0       65535
 1      0      0      0       65535
 2      0      0      0       65535
 3      0      0      0       65535
 4      0      0      0       65535
 5      0      17282  0       65535
 6      0      0      0       65535
 7      0      75     0       65535
 8      0      0      0       65535
 9      0      0      0       65535
 10     0      0      0       65535
 11     0      0      0       65535
 12     0      0      0       65535
 13     0      0      0       65535
 14     0      0      0       65535
 15     0      0      0       65535
 Multicast     0      0      0       65535
LC-Slot1#
Use the slot-table-cos command to map a named cos-queue-group to a destination virtual output queue. You can configure a unique cos-queue-group template for every output queue.
Router(config)#slot-table-cos table1
Router(config-slot-cos)#destination-slot ?
  <0-15>  Destination slot number
  all     Configure for all destination slots
Router(config-slot-cos)#destination-slot 0 oc48
Router(config-slot-cos)#destination-slot 1 oc48
Router(config-slot-cos)#destination-slot 2 oc48
Router(config-slot-cos)#destination-slot 3 oc48
Router(config-slot-cos)#destination-slot 4 oc12
Router(config-slot-cos)#destination-slot 5 oc48
Router(config-slot-cos)#destination-slot 6 oc48
Router(config-slot-cos)#destination-slot 9 oc3
Router(config-slot-cos)#destination-slot 15 oc48
Note: The above configuration uses three templates, named oc48, oc12, and oc3. The configuration for the cos-queue-group named oc12 is shown in Step 1. Configure oc3 and oc48 similarly. It is recommended that you apply a unique template to a set of interfaces based on the bandwidth and application.
Use the rx-cos-slot command to apply a slot-table-cos to an LC.
Router(config)#rx-cos-slot 0 ?
  WORD  Name of slot-table-cos
Router(config)#rx-cos-slot 0 table1
Router(config)#rx-cos-slot 2 table1
The Cisco 12000 Series maintains a separate queue per outbound interface. To view these queues, attach to the line card CLI. Use the attach command, and then execute the show controller frfab queue command, as illustrated here:
LC-Slot1#show controller frfab queue
========= Line Card (Slot 2) =======
Carve information for FrFab buffers
 SDRAM size: 16777216 bytes, address: 20000000, carve base: 2002D100
 16592640 bytes carve size, 0 SDRAM bank(s), 0 bytes SDRAM pagesize, 2 carve(s)
 max buffer data size 9248 bytes, min buffer data size 80 bytes
 20052/20052 buffers specified/carved
 16581552/16581552 bytes sum buffer sizes specified/carved
 Qnum   Head   Tail   #Qelem  LenThresh
 ----   ----   ----   ------  ---------
 5 non-IPC free queues:
 9977/9977 (buffers specified/carved), 49.75%, 80 byte data size
 1      101    10077  9977    65535
 5986/5986 (buffers specified/carved), 29.85%, 608 byte data size
 2      10078  16063  5986    65535
 2993/2993 (buffers specified/carved), 14.92%, 1568 byte data size
 3      16064  19056  2993    65535
 598/598 (buffers specified/carved), 2.98%, 4544 byte data size
 4      19057  19654  598     65535
 398/398 (buffers specified/carved), 1.98%, 9248 byte data size
 5      19655  20052  398     65535
 IPC Queue:
 100/100 (buffers specified/carved), 0.49%, 4112 byte data size
 30     77     76     100     65535
 Raw Queue:
 31     0      82     0       65535
 Interface Queues:
 0      0      0      0       65535
 1      0      0      0       65535
 2      0      0      0       65535
 3      0      0      0       65535
Use the tx-cos command to apply a cos-queue-group template to an interface queue. As shown here, you apply the parameter set directly to the interface; no tables are needed. In this example, pos48 is the name of a parameter set.
Router(config)#interface POS 4/0
Router(config-if)#tx-cos ?
  WORD  Name of cos-queue-group
Router(config-if)#tx-cos pos48
Use the show cos command to confirm your configuration:
Router#show cos
!--- Only some of the fields are visible if MDRR is configured on Inbound
!--- or Outbound interfaces.
Interface Queue cos Group
Gi4/0           eng2-frfab
!--- TX-cos has been applied.
Rx Slot  Slot Table
4        table1
!--- rx-cos-slot has been applied.
Slot Table Name - table1
 1  eng0-tofab
 3  eng0-tofab
!--- slot-table-cos has been defined.
cos Queue Group - eng2-tofab
!--- cos-queue-group has been defined.
Prec  Red Label [min, max, prob]  Drr Queue [deficit]
 0    0 [6000, 15000, 1/1]        0 [10]
 1    1 [10000, 20000, 1/1]       1 [40]
 2    1 [10000, 20000, 1/1]       1 [40]
 3    1 [10000, 20000, 1/1]       0 [10]
 4    2 [15000, 25000, 1/1]       2 [80]
 5    2 [15000, 25000, 1/1]       2 [80]
 6    no drop                     low latency
 7    no drop                     low latency
Note: The legacy CLI also uses the precedence syntax for Multiprotocol Label Switching (MPLS) traffic. The router treats the MPLS bits as though they are IP Type of Service (ToS) bits and puts the appropriate packets into the correct queues. This is not at all true for MQC. MPLS QoS is outside the scope of this document.
The objective of Cisco's Modular QoS CLI (MQC) is to connect all the different QoS features in a logical way, in order to simplify the configuration of Cisco IOS software Quality of Service (QoS) features. For instance, the classification is done separately from the queueing, policing, and shaping. It provides a single configuration framework for QoS that is template-based. Here are some points to remember about MQC configuration:
It can be easily applied to and removed from an interface.
It can be easily reused (the same policy can be applied to multiple interfaces).
It offers a single configuration framework for QoS that enables you to easily provision, monitor, and troubleshoot.
It provides a higher level of abstraction.
It is platform independent.
On the Cisco 12000 Series, MQC commands can be used instead of the legacy Class of Service (CoS) syntax.
MQC support on the Cisco 12000 Series does not imply that the same QoS feature set available on another platform, such as the Cisco 7500 Series, is now available on the Cisco 12000. The MQC provides a common syntax in which a command results in a shared function or behavior. For example, the bandwidth command implements a minimum bandwidth guarantee. The Cisco 12000 Series uses MDRR as the scheduling mechanism to make the bandwidth reservation, while the Cisco 7500 Series uses WFQ; the underlying algorithm is matched to each platform's architecture.
Importantly, only the FrFab queues support configuration of QoS features through the MQC. Because the ToFab queues are virtual output queues, and not true input queues, they are not supported by the MQC. They must be configured with legacy CoS commands.
Table 5 lists support for the MQC per L3 Engine type.
Table 5 – Support for MQC for L3 Engine Types

| L3 Engine Type | Engine 0 | Engine 1 | Engine 2 | Engine 3 | Engine 4 | Engine 4+ |
|---|---|---|---|---|---|---|
| MQC Support | Yes | No | Yes | Yes | Yes | Yes |
| IOS Release | 12.0(15)S | - | 12.0(15)S1 | 12.0(21)S | 12.0(22)S | 12.0(22)S |
1Remember these exceptions with MQC support on Engine 0 and 2 line cards (LCs):
2xCHOC3/STM1 - Introduced in 12.0(17)S.
1xOC48 DPT - Introduced in 12.0(18)S.
8xOC3 ATM - Planned for 12.0(22)S.
The MQC uses three steps to create a QoS policy, plus a fourth to monitor it:
Define one or more traffic classes with the class-map command.
Create a QoS policy with the policy-map command and assign QoS actions such as bandwidth or priority to a named traffic class.
Use the service-policy command to attach a policy-map to the FrFab queue of an outbound interface.
Use the show policy-map interface command to monitor your policy.
See Modular Quality of Service Command Line Interface Overview for further information.
The class-map command is used to define traffic classes. Internally, on the Cisco 12000 Series, the class-map command assigns a class to a specific CoS queue on the line card (see Step 4 for details).
The class-map command supports "match-any", which places packets that match any of the match statements into the class, and "match-all", which places packets into this class only when all statements are true. These commands create a class named "Prec_5", and classify all packets with an IP precedence of 5 to this class:
Router(config-cmap)#match ?
  access-group         Access group
  any                  Any packets
  class-map            Class map
  destination-address  Destination address
  fr-dlci              Match on fr-dlci
  input-interface      Select an input interface to match
  ip                   IP specific values
  mpls                 Multi Protocol Label Switching specific values
  not                  Negate this match result
  protocol             Protocol
  qos-group            Qos-group
  source-address       Source address
Router(config-cmap)#match ip precedence 5
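Put together, the "Prec_5" class described above would be configured like this (the class name is simply the one used in this document's example):

Router(config)#class-map match-all Prec_5
Router(config-cmap)#match ip precedence 5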
Table 6 lists the supported match criteria for each L3 Engine type.
Table 6 – Supported Match Criteria for L3 Engines

| | Engine 0, 2 | Engine 3 | Engine 4 | Engine 4+ |
|---|---|---|---|---|
| ip precedence | Yes | Yes | Yes | Yes1 |
| access-group | No | Yes | No | No |
| mpls exp | No | Yes | No | Yes (12.0.26S) |
| ip dscp | No | Yes | No | Yes (12.0.26S) |
| qos-group | No | Yes | No | No |
| match input-interface POS x/y | No | Yes (as receive policy only) | No | No |
1 ingress/egress since 12.0.26S
The policy-map command is used to assign packet handling policies or actions to one or more defined classes; for example, you can assign a bandwidth reservation or apply a random drop profile.
The Cisco 12000 Series supports a subset of MQC features, based on the high-speed architecture of the L3 Engines. Table 7 lists the commands that are supported:
Table 7 – Supported Commands

| Command | Description |
|---|---|
| bandwidth | Provides a minimum bandwidth guarantee during periods of congestion. It is specified as a percentage of the link speed or as an absolute value. If a class does not use or need bandwidth equal to the reserved kbps, available bandwidth can be used by other bandwidth classes. |
| police, shape | Limits the amount of traffic that a class can transmit. These commands are slightly different in function. The police command identifies traffic that exceeds the configured bandwidth, and drops or remarks it. The shape command buffers any excess traffic and schedules it for transmission at a constant rate, but does not drop or remark. |
| queue-limit | Assigns a fixed queue length to a given class of traffic. You can specify this in number of packets that can be held in the queue. |
| priority | Identifies a queue as a low-latency queue. MQC supports strict mode only for a PQ. Alternate mode is not supported through MQC. Use the priority command without a percentage value to enable strict priority mode. Note: The implementation of the priority command on the Cisco 12000 Series differs from the implementation on other routers that run Cisco IOS software. On this platform, the priority traffic is not limited to the configured kbps value during periods of congestion. Thus, you must also configure the police command to limit how much bandwidth a priority class can use and ensure adequate bandwidth for other classes. At this time, the police command is only supported on Engine 3 line cards. On the other engine line cards, only class-default is allowed when you configure a priority class. |
| random-detect | Assigns a WRED profile. Use the random-detect precedence command to configure non-default WRED values per IP precedence value. |
On Engine 3 LCs, you must configure the FrFab queues with the Modular QoS CLI (MQC); the legacy Command Line Interface (CLI) is not supported.
When you configure the bandwidth command, note that Engine 0 and 2 LCs support six bandwidth classes only. A seventh class can be used for low-latency service and an eighth class, which is class-default, is used for all the non-matching traffic. Therefore, you have a total of eight queues. Class-default is not used as a priority class.
On Engine 3 LCs, the bandwidth percent command is translated into a kbps value, which varies with the underlying link rate, and then configured directly on the queue. The precision of this minimum bandwidth guarantee is 64 kbps.
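As an illustration (assumed values; the exact reference rate is the L2 amount described in the Engine 3 notes later in this section), the conversion and rounding work roughly as follows:

bandwidth percent 30 on a link treated as 599040 kbps -> 599040 * 0.30 = 179712 kbps (already a multiple of 64 kbps)
A result such as 179700 kbps would be rounded up to the next 64-kbps multiple, 179712 kbps, before it is programmed in hardware.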
Although no conversion to a quantum value is made with the bandwidth command, all queues have a quantum. On Engine 3 LCs, the quantum value is set internally based on the maximum transmission unit (MTU) of the interface, and is set equally for all queues. There is no MQC CLI mechanism to modify this quantum value, either directly or indirectly. The quantum value must be greater than or equal to the interface MTU. Internally, the quantum value is expressed in units of 512 bytes. Thus, with an MTU of 4470 bytes, the minimum quantum value is 9 (4470 / 512 = 8.7, rounded up to the next whole unit).
This section provides configuration notes to implement WRED and MDRR on Engine 3 LCs.
MDRR bandwidth configured in the CLI is translated to an amount that corresponds to L2 (that is, the L1 overhead is removed). That amount is then rounded up to the next 64 kbps and programmed in hardware.
Three different WRED profiles are supported for one class.
The WRED (maximum threshold - minimum threshold) difference is approximated to the nearest power of 2. The minimum threshold is then adjusted automatically while the maximum threshold is kept unchanged (see the worked example after this list).
Mark probability value 1 is supported.
Exponential weighting constant configuration is not supported.
IP Precedence, MPLS EXP bits, and DSCP values are supported.
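For example (assumed values), if you configure a minimum threshold of 400 packets and a maximum threshold of 2423 packets, the difference of 2023 is approximated to 2048, the nearest power of two, and the minimum threshold is automatically adjusted to 2423 - 2048 = 375 packets while the maximum threshold remains 2423.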
Note: Each port or channel on the Tetra (4GE-SFP-LC= ) or CHOC12/DS1-IR-SC= Frostbite linecards has four queues allocated by default. The four queues consist of the following:
One priority queue (LLQ) class
One default queue class
Two normal non-priority classes
When applying a service-policy containing more than these four classes (1 HPQ, 2 LPQs and class-default) to the interface, the following error will be reported:
Router(config-if)#service-policy output mdrr-policy
% Not enough queuing resources available to satisfy request.
As of 12.0(26)S, a command has been added for the 4GE-SFP-LC= Tetra linecard that allows the configuration of eight queues/VLAN instead of four. The eight queues consist of the following:
One LLQ
One class-default queue
Six normal queues
The use of this command will require a microcode reload of the linecard and will result in the ability to configure only 508 VLANs instead of 1022. The command syntax is as follows:
[no] hw-module slot <slot#> qos interface queues 8
For example:
Router(config)#hw-module slot 2 qos interface queues 8
Warning: Please micro reload the linecard for this command to take effect
Router(config)#microcode reload 2
This command will be available for the CHOC12/DS1-IR-SC= Frostbite linecard in 12.0(32)S
Example #1 - bandwidth percent Command
This example allocates 20 percent of available bandwidth to class Prec_4 traffic and 30 percent to class Prec_3 traffic. It leaves the remaining 50 percent to the class-default class.
In addition, it configures WRED as the drop mechanism on all the data classes.
policy-map GSR_EXAMPLE
 class Prec_4
  bandwidth percent 20
  random-detect
  random-detect precedence 4 1498 packets 9690 packets 1
  !--- All data classes should have WRED configured.
 class Prec_3
  bandwidth percent 30
  random-detect
  random-detect precedence 3 1498 packets 9690 packets 1
 class class-default
  !--- Class-default uses any leftover bandwidth.
  random-detect
  random-detect precedence 2 1498 packets 9690 packets 1
  random-detect precedence 1 1498 packets 9690 packets 1
  random-detect precedence 0 1498 packets 9690 packets 1
Example #2 - bandwidth {kbps} Command
This example illustrates how to apply the bandwidth command as an absolute kbps value instead of a percentage.
policy-map GSR_EXAMPLE
 class Prec_4
  bandwidth 40000
  !--- Configures a minimum bandwidth guarantee of 40000 kbps or 40 Mbps in
  !--- times of congestion.
  random-detect
  random-detect precedence 4 1498 packets 9690 packets 1
 class Prec_3
  bandwidth 80000
  !--- Configures a minimum bandwidth guarantee of 80000 kbps or 80 Mbps in
  !--- times of congestion.
  random-detect
  random-detect precedence 3 1498 packets 9690 packets 1
 class class-default
  !--- Any remaining bandwidth is given to class-default.
  random-detect
  random-detect precedence 2 1498 packets 9690 packets 1
  random-detect precedence 1 1498 packets 9690 packets 1
  random-detect precedence 0 1498 packets 9690 packets 1
Example #3 - priority Command
This example is designed for service providers that use the Cisco 12000 Series router as an MPLS provider edge (PE) router and need to configure a QoS service policy on the link between the PE router and the customer edge (CE) router. It places IP precedence 5 packets in a priority queue, and limits the output of that queue to 64 Mbps. It then assigns a portion of the remaining bandwidth to the bandwidth classes.
All of the non-priority class queues are configured with the random-detect command to enable WRED as the drop policy. All bandwidth classes and class-default must have WRED configured explicitly.
policy-map foo
 class Prec_5
  police 64000000 conform-action transmit exceed-action drop
  !--- The police command is supported on Engine 3 line cards.
  priority
 class Prec_4
  bandwidth percent 30
  random-detect
  random-detect precedence 4 1498 packets 9690 packets 1
 class Prec_3
  bandwidth percent 10
  random-detect
  random-detect precedence 3 1498 packets 9690 packets 1
 class Prec_2
  bandwidth percent 10
  random-detect
  random-detect precedence 2 1498 packets 9690 packets 1
 class Prec_1
  bandwidth percent 10
  random-detect
  random-detect precedence 1 1498 packets 9690 packets 1
 class Prec_0
  bandwidth percent 25
  random-detect
  random-detect precedence 0 1498 packets 9690 packets 1
 class class-default
  random-detect
  random-detect precedence 6 1498 packets 9690 packets 1
  random-detect precedence 7 1498 packets 9690 packets 1
As mentioned above, the MQC works only with the FrFab queues on an outbound interface. To apply a defined policy-map, use the service-policy output command, as shown here:
Router(config)#interface POS 0/0
Router(config-if)#service-policy ?
  history  Keep history of QoS metrics
  input    Assign policy-map to the input of an interface
  output   Assign policy-map to the output of an interface
Router(config-if)#service-policy output ?
  WORD  policy-map name
Router(config-if)#service-policy output GSR_EXAMPLE
Use the show policy-map interface command to view the application of a policy. The show policy-map interface command displays the following:
Configured bandwidth and priority classes and the match-on criteria.
Any WRED profiles.
Shape and police parameters.
Traffic accounting and rates.
The internal CoS queue to which a particular class is mapped. These queues are referenced by the same index that is used in the output of the show controller frfab queue command.
Here is an example of a complete configuration and the show commands to monitor the policy:
Complete Configuration:
class-map match-all class1
  match ip precedence 1
class-map match-all class2
  match ip precedence 2
!--- Step 1 - Configure traffic classes.
!
policy-map policy1e
  class class1
   bandwidth percent 10
   random-detect
   random-detect precedence 1 375 packets 2423 packets 1
  class class2
   bandwidth percent 20
   random-detect
!--- Step 2 - Configure a policy-map.
!
interface POS6/0
 ip address 12.1.1.1 255.255.255.0
 no ip directed-broadcast
 no keepalive
 service-policy output policy1e
!--- Step 3 - Attach policy-map to the interface.
Use the show policy-map interface command to view the policy configured on the interface, along with all configured classes. Here is the output of the command:
Router#show policy-map int pos6/0
 POS6/0

  Service-policy output: policy1e (1071)

    Class-map: class1 (match-all) (1072/3)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: ip precedence 1  (1073)
      Class of service queue: 1
      Tx Queue (DRR configured)
          bandwidth percent    Weight
          10                   1
      Tx Random-detect:
          Exp-weight-constant: 1 (1/2)
          Precedence  RED Label  Min   Max   Mark
          1           1          375   2423  1

    Class-map: class2 (match-all) (1076/2)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: ip precedence 2  (1077)
      Class of service queue: 2
      Tx Queue (DRR configured)
          bandwidth percent    Weight
          20                   9
      Tx Random-detect:
          Exp-weight-constant: 1 (1/2)
          Precedence  RED Label  Min   Max   Mark

    Class-map: class-default (match-any) (1080/0)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any  (1081)
        0 packets, 0 bytes
        5 minute rate 0 bps
This section lists the commands you can use to monitor your congestion management and avoidance policy.
Table 8 lists the relevant commands for the Ingress and Egress line cards.
Table 8 – Commands for the Line Cards

| Ingress Line Card | Egress Line Card |
|---|---|
| show interfaces | show interfaces |
| execute-on slot <slot> show controllers tofab queue | show interfaces <interface> random |
| execute-on slot <slot> show controllers tofab QM stat | execute-on slot <slot> show controllers frfab queue |
| | execute-on slot <slot> show controllers frfab QM stat |
These commands are explained in the sections that follow.
Before you use the show interfaces command, confirm the correct "Queueing strategy." If the output displays First In, First Out (FIFO), ensure that the service-policy command appears in the running configuration (if MQC has been used to configure MDRR).
Monitor the number of output drops, which represents the total number of WRED FrFab drops that have occurred for outgoing traffic on this interface. The number of output drops in the show interfaces command output must be equal to or higher than the number of output drops in the show interfaces <number> random command output.
Note: On the Cisco 12000 Series Router, the interface output drops are updated after WRED drops are updated. There is a small chance that if you use a tool to query both drop counters, the interface drops are not yet updated.
Router#show interfaces POS 4/0
POS4/0 is up, line protocol is up
  Hardware is Packet over SONET
  Description: link to c12f9-1
  Internet address is 10.10.105.53/30
  MTU 4470 bytes, BW 622000 Kbit, DLY 100 usec, rely 255/255, load 82/255
  Encapsulation PPP, crc 32, loopback not set
  Keepalive set (10 sec)
  Scramble enabled
  LCP Open
  Open: IPCP, CDPCP, OSICP, TAGCP
  Last input 00:00:02, output 00:00:05, output hang never
  Last clearing of "show interface" counters 00:04:54
  Queueing strategy: random early detection (WRED)
  Output queue 0/40, 38753019 drops; input queue 0/75, 0 drops
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 200656000 bits/sec, 16661 packets/sec
     135 packets input, 6136 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
              0 parity
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     7435402 packets output, 11182627523 bytes, 0 underruns
     0 output errors, 0 applique, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions
When you use the show interfaces <number> random command, you must:
Verify that the correct cos-queue-group template is applied to this interface.
Check the MDRR weights. For each MDRR queue, you can check the weighted average for the queue length and the highest value reached (in packets). The values are calculated as a weighted average, and need not reflect the actual maximum queue depth ever reached.
Check the WRED minimum and maximum thresholds.
Check the number of random drops and threshold drops for each RED label ("To Fabric" drops indicate the total amount of drops for this label on all the line cards).
The "TX-queue-limit drops" counter is used only on Engine 1 LCs, which do not support WRED. Engine 1 cards enable you to set the limit of the MDRR queues with the TX-queue-limit interface command. Where WRED is supported, the WRED thresholds determine the depth of the MDRR queues.
Router#show interfaces POS 4/0 random
 POS4/0 cos-queue-group: oc12
    RED Drop Counts
                TX Link                 To Fabric
 RED Label  Random     Threshold    Random    Threshold
   0        29065142   73492        9614385   0
   1        0          0            0         0
   2        0          0            0         0
   3        0          0            0         0
   4        0          0            0         0
   5        0          0            0         0
   6        0          0            0         0

  TX-queue-limit drops: 0

 Queue Lengths
  TX Queue (DRR configured) oc12
 Queue         Average   High Water Mark   Weight
  0            0.000     2278.843          1
  1            0.000     0.000             73
  2            0.000     0.000             10
  3            0.000     0.000             10
  4            0.000     0.000             10
  5            0.000     0.000             10
  6            0.000     0.000             10
 Low latency   0.000     0.000             10

 TX RED config
 Precedence 0: 375 min threshold, 2423 max threshold, 1/1 mark weight
 Precedence 1: not configured for drop
 Precedence 2: not configured for drop
 Precedence 3: not configured for drop
 Precedence 4: 375 min threshold, 2423 max threshold, 1/1 mark weight
 Precedence 5: not configured for drop
 Precedence 6: 375 min threshold, 2423 max threshold, 1/1 mark weight
 Precedence 7: not configured for drop
 weight 1/2
The execute-on slot <slot> show controllers frfab queue <port> command displays the instantaneous queue depth for a given port on a given slot. The sample output in this section displays the MDRR queues on interface POS 4/1. You see a queue depth for MDRR queue 1 of 1964 packets. The weight is the number of bytes that can be served from this queue per round. This weight determines the percentage of bandwidth you want to give to this queue. The deficit is the value that tells the DRR algorithm how many bytes can still be served from this queue in the current round. You can see that there are no packets queued in the LLQ (DRR queue 7).
Router#execute-on slot 4 show controllers frfab queue 1
========= Line Card (Slot 4) =======
FrFab Queue Interface 1
 DRR#  Head    Tail    Length  Average    Weight  Deficit
  0    95330   40924   0       0.000      4608    0
  1    211447  233337  1964    1940.156   41472   35036
  2    0       0       0       0.000      9216    0
  3    0       0       0       0.000      9216    0
  4    0       0       0       0.000      9216    0
  5    0       0       0       0.000      9216    0
  6    0       0       0       0.000      9216    0
  7    0       0       0       0.000      9216    0
This command is used, in particular, to monitor the depth of the Priority Queue of the egress line card. When you see that packets start to wait on this LLQ, it is a good indication that you must divert some Voice over IP (VOIP) traffic to other egress line cards. In a good design, the length should always be 0 or 1. In a real life network, you will experience bursty traffic, even for voice data. The extra delay gets more serious when the total voice load exceeds 100% of the egress bandwidth for a short time. The router cannot put more traffic on the wire than what is allowed, and thus voice traffic gets queued on its own priority queue. This creates voice latency and voice jitter introduced by the burst of the voice traffic itself.
Router#execute-on slot 4 show controllers frfab queue 0
========= Line Card (Slot 4) =======
FrFab Queue Interface 0
 DRR#  Head    Tail    Length  Average    Weight  Deficit
  0    181008  53494   2487    2282.937   4608    249
  1    16887   45447   7       0.000      41472   0
  2    0       0       0       0.000      9216    0
  3    0       0       0       0.000      9216    0
  4    0       0       0       0.000      9216    0
  5    0       0       0       0.000      9216    0
  6    0       0       0       0.000      9216    0
  7    107818  142207  93      0.000      9216    -183600
Queue 7 is the LLQ, and the length tells you how many packets are in this LLQ.
Use the execute-on slot <slot> show controllers frfab QM stat command when you suspect that the packet memory of an LC starts to approach full capacity. An increasing value for the "no mem drop" counter suggests that WRED is not configured or that the WRED thresholds are set too high. This counter must not increment under normal conditions. See Troubleshooting Ignored Packets and No Memory Drops on the Cisco 12000 Series Internet Router for more information.
Router#execute-on slot 4 show controllers frfab QM stat
========= Line Card (Slot 4) =======
    68142538 no mem drop, 0 soft drop, 0 bump count
    0 rawq drops, 8314999254 global red drops, 515761905 global force drops
    0 no memory (ns), 0 no memory hwm (Ns)
    no free queue
    0        0        1968     88
    0        0        0        0
    0        0        0        0
    0        0        0        0
    0 multicast drops
    TX Counts
    Interface 0
    859672328848 TX bytes, 3908130535 TX pkts, 75431 kbps, 6269 pps
    Interface 1
    86967615809 TX bytes, 57881504 TX pkts, 104480 kbps, 8683 PPS
    Interface 2
    0 TX bytes, 0 TX pkts, 0 kbps, 0 PPS
    Interface 3
    0 TX bytes, 0 TX pkts, 0 kbps, 0 PPS
This section describes the commands used to monitor inbound congestion management.
Before you issue the show interfaces command, check whether the value in the ignored counter is on the increase. You will see ignored packets if you run out of memory on the ToFab side or if the line card does not accept the packets fast enough. For more information, see Troubleshooting Input Drops on the Cisco 12000 Series Internet Router.
Router#show interfaces POS 14/0
POS14/0 is up, line protocol is up
  Hardware is Packet over SONET
  Description: agilent 3b for QOS tests
  Internet address is 10.10.105.138/30
  MTU 4470 bytes, BW 2488000 Kbit, DLY 100 usec, rely 234/255, load 1/255
  Encapsulation HDLC, crc 32, loopback not set
  Keepalive not set
  Scramble disabled
  Last input never, output 00:00:03, output hang never
  Last clearing of "show interface" counters 00:34:09
  Queueing strategy: random early detection (WRED)
  Output queue 0/40, 0 drops; input queue 0/75, 0 drops
  5 minute input rate 2231000 bits/sec, 4149 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     563509152 packets input, 38318622336 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
              0 parity
     166568973 input errors, 0 CRC, 0 frame, 0 overrun, 166568973 ignored, 0 abort
     35 packets output, 12460 bytes, 0 underruns
     0 output errors, 0 applique, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions
This sample output of the exec slot <x> show controller tofab queue command was captured when there was no congestion on an egress line card in slot 3.
Router#execute-on slot 13 show controllers tofab queue
========= Line Card (Slot 13) =======
Carve information for ToFab buffers
!--- Output omitted.
 ToFab Queues:
 Dest
 Slot
  0     0      0      0     9690
  1     0      0      0     9690
  2     0      0      0     9690
  3     11419  16812  0     9690
  4     0      0      0     2423
  5     0      0      0     9690
  6     0      0      0     9690
  7     0      0      0     262143
  8     0      0      0     262143
  9     0      0      0     606
  10    0      0      0     262143
  11    0      0      0     262143
  12    0      0      0     262143
  13    0      0      0     262143
  14    0      0      0     262143
  15    0      0      0     9690
  Multicast    0      0     0     262143
The following output was captured when there was congestion on slot 3:
Router#execute-on slot 13 show controllers tofab queue
========= Line Card (Slot 13) =======
Carve information for ToFab buffers
!--- Output omitted.
 ToFab Queues:
 Dest
 Slot
  0     0       0      0     9690
  1     0       0      0     9690
  2     0       0      0     9690
  3     123689  14003  1842  9690
  4     0       0      0     2423
  5     0       0      0     9690
  6     0       0      0     9690
  7     0       0      0     262143
  8     0       0      0     262143
  9     0       0      0     606
  10    0       0      0     262143
  11    0       0      0     262143
  12    0       0      0     262143
  13    0       0      0     262143
  14    0       0      0     262143
  15    0       0      0     9690
  Multicast     0      0     0     262143
Use the execute-on slot <slot> show controllers tofab queue command to determine how much memory is used on the ToFab side. In particular, note the number in the "#Qelem" column. Notice that:
When no memory is used, the values are at their highest.
The value of the "#Qelem" column decreases as packets are buffered.
When the "#Qelem" column reaches zero, all carved buffers are in use. On Engine 2 LC, small packets can borrow buffer space from larger packets.
You can also use this command to determine the number of queued packets on a virtual output queue. The example here shows how to check slot 14 for the instantaneous number of packets on these queues for slot 4, port 1 (POS 4/1). We see 830 packets queued on MDRR queue 1.
Router# execute-on slot 14 show controllers tofab queue 4 1
========= Line Card (Slot 14) =======
ToFab Queue Slot 4 Int 1
 DRR#  Head    Tail    Length  Average    Weight  Deficit
  0    0       0       0       0.000      4608    0
  1    203005  234676  830     781.093    41472   37248
  2    0       0       0       0.000      9216    0
  3    0       0       0       0.000      9216    0
  4    0       0       0       0.000      9216    0
  5    0       0       0       0.000      9216    0
  6    0       0       0       0.000      9216    0
  7    0       0       0       0.000      9216    0
Use the execute-on slot <slot> show controllers tofab QM stat command to see the number of ToFab drops per line card. Also check whether the "no mem drop" counter increments; this counter increments when CoS is not configured on the ToFab side.
Router#execute-on slot 13 show controllers tofab QM stat
========= Line Card (Slot 13) =======
    0 no mem drop, 0 soft drop, 0 bump count
    0 rawq drops, 1956216536 global red drops, 6804252 global force drops
    0 no memory (Ns), 0 no memory hwm (Ns)
    no free queue
    0        0        0        0
    0        0        0        0
    0        0        0        0
    0        0        0        0
    Q status errors
    0        0        0        0
    0        0        0        0
    0        0        0        0
    0        0        0        0
This case study shows how to configure a typical policy for the network core of a service provider environment. It applies queue commands and enables you to use MDRR/WRED for active queue management. QoS policies in edge routers normally use traffic marking, conditioning, and so on, to enable routers in the core to sort traffic into classes based on IP precedence or DiffServ Code Point (DSCP) values. This case study uses Cisco IOS software QoS features to meet tight Service Level Agreements (SLAs) and different service levels for voice, video, and data services on the same IP backbone.
In this approach, a service provider has implemented three classes of traffic. The most important is the LLQ, or Low Latency Queueing, class. This is the class for voice and video. This class must experience minimal delay and jitter, and must never experience packet loss or reordered packets as long as the bandwidth of this class does not exceed the link bandwidth. This class is known as Expedited Forwarding Per-Hop Behavior (EF PHB) traffic in the DiffServ architecture. The Internet Service Provider (ISP) designed the network in such a way that this class does not exceed 30% of the link bandwidth on average. The other two classes are the business class and the best effort class.
In the design, we have configured the routers in such a way that the business class always gets 90% of the remaining bandwidth and the best effort class gets 10%. These two classes have less time sensitive traffic and can experience traffic loss, higher delay, and jitter. In the design, the focus is on Engine 2 line cards: 1xOC48 rev B, 4xOC12 rev B, and 8xOC3 line cards.
Rev B line cards are best suited to carry VoIP traffic because of a revised ASIC and hardware architecture, which introduces very little latency. With the revised ASIC, the transmit FIFO queue is resized by the line card driver to roughly two times the largest MTU on the card. Look for a "-B" appended to the part number, such as OC48E/POS-SR-SC-B=.
Note: Do not confuse the transmit FIFO queue with the FrFab queues that can be tuned on Engine 0 line cards with the tx-queue-limit interface command.
Table 9 lists the matching criteria for each class.
Table 9 – Matching Criteria for Each Class

| Class Name | Matching Criteria |
|---|---|
| Priority Queue - Voice traffic | Precedence 5 |
| Business Queue | Precedence 4 |
| Best Effort Queue | Precedence 0 |
The OC48 line cards can queue a large number of packets in the ToFab queues. Thus, it is important to configure MDRR/WRED on the ToFab queues, especially when the egress interface is a high speed interface such as OC48. The fabric can only switch traffic to the receiving line card at a theoretical maximum rate of 3 Gbps (1500-byte packets). If the total amount of traffic sent is larger than what the switching fabric can carry to its receiving card, many packets will be queued on the ToFab queues.
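Before you look at the configuration, here is a quick check (using the Engine 0/2 dequeue formula from the MDRR Overview section and assuming the 4470-byte MTU used on these POS interfaces) that the weights configured below (queue 0 weight 1, queue 1 weight 73) deliver the intended 90/10 split between the business and best effort classes:

Queue 0 (best effort): 4470 + (1 - 1) * 512 = 4470 bytes per round
Queue 1 (business):    4470 + (73 - 1) * 512 = 41334 bytes per round
Business share of the non-priority bandwidth ~ 41334 / (41334 + 4470) ~ 90%; best effort ~ 10%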
interface POS3/0
 description OC48 egress interface
 ip address 10.10.105.53 255.255.255.252
 no ip directed-broadcast
 ip router isis
 encapsulation ppp
 mpls traffic-eng tunnels
 tag-switching ip
 no peer neighbor-route
 crc 32
 clock source internal
 pos framing sdh
 pos scramble-atm
 pos threshold sf-ber 4
 pos flag s1s0 2
 tx-cos oc48
 isis metric 2 level-1
 isis metric 2 level-2
 ip rsvp bandwidth 2400000 2400000
!
interface POS4/1
 description OC12 egress interface
 ip address 10.10.105.121 255.255.255.252
 no ip directed-broadcast
 ip router isis
 encapsulation ppp
 mpls traffic-eng tunnels
 no peer neighbor-route
 crc 32
 clock source internal
 pos framing sdh
 pos scramble-atm
 pos threshold sf-ber 4
 pos flag s1s0 2
 tx-cos oc12
 isis metric 2 level-1
 isis metric 2 level-2
 ip rsvp bandwidth 600000 60000
!
interface POS9/2
 description OC3 egress interface
 ip address 10.10.105.57 255.255.255.252
 no ip directed-broadcast
 ip router isis
 crc 16
 pos framing sdh
 pos scramble-atm
 pos flag s1s0 2
 tx-cos oc3
 isis metric 200 level-1
 isis metric 2 level-2
!
interface POS13/0
 description agilent 3a for QOS tests - ingress interface.
 ip address 10.10.105.130 255.255.255.252
 no ip directed-broadcast
 no ip route-cache cef
 no ip route-cache
 no ip mroute-cache
 no keepalive
 crc 32
 pos threshold sf-ber 4
 tx-cos oc48
!
interface POS14/0
 description agilent 3b for QOS tests - ingress interface.
 ip address 10.10.105.138 255.255.255.252
 no ip directed-broadcast
 no keepalive
 crc 32
 pos threshold sf-ber 4
 tx-cos oc48
!
interface POS15/0
 description agilent 4A for QOS tests - ingress interface
 ip address 10.10.105.134 255.255.255.252
 no ip directed-broadcast
 no ip mroute-cache
 no keepalive
 crc 32
 pos threshold sf-ber 4
 tx-cos oc48
!
rx-cos-slot 3 StotTable
rx-cos-slot 4 StotTable
rx-cos-slot 9 StotTable
rx-cos-slot 13 StotTable
rx-cos-slot 14 StotTable
rx-cos-slot 15 StotTable
!
slot-table-cos StotTable
 destination-slot 0 oc48
 destination-slot 1 oc48
 destination-slot 2 oc48
 destination-slot 3 oc48
 destination-slot 4 oc12
 destination-slot 5 oc48
 destination-slot 6 oc48
 destination-slot 9 oc3
 destination-slot 15 oc48
!
cos-queue-group oc3
 precedence 0 random-detect-label 0
 precedence 4 queue 1
 precedence 4 random-detect-label 1
 precedence 5 queue low-latency
 precedence 6 queue 1
 precedence 6 random-detect-label 1
 random-detect-label 0 94 606 1
 random-detect-label 1 94 606 1
 queue 0 1
 queue 1 73
 queue low-latency strict-priority
 !--- Respect the tight SLA requirements.
 !--- No packets drop/low delay and jitter for the priority queue.
!
cos-queue-group oc12
 precedence 0 random-detect-label 0
 precedence 4 queue 1
 precedence 4 random-detect-label 1
 precedence 5 queue low-latency
 precedence 6 queue 1
 precedence 6 random-detect-label 1
 random-detect-label 0 375 2423 1
 random-detect-label 1 375 2423 1
 queue 0 1
 queue 1 73
 queue low-latency strict-priority
!
cos-queue-group oc48
 precedence 0 random-detect-label 0
 precedence 4 queue 1
 precedence 4 random-detect-label 1
 precedence 5 queue low-latency
 precedence 6 queue 1
 precedence 6 random-detect-label 1
 random-detect-label 0 1498 9690 1
 random-detect-label 1 1498 9690 1
 queue 0 1
 queue 1 73
 queue low-latency strict-priority
It is expected that the more VOIP traffic you have, the more business traffic has to wait before it gets served. However, this is not a problem because the tight SLA requires no packet drops, and very low latency and jitter for the priority queue.
| Revision | Publish Date | Comments |
|---|---|---|
| 1.0 | 02-Dec-2013 | Initial Release |