Modular QoS Configuration Guide for Cisco 8000 Series Routers, IOS XR Release 7.0.x
The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product.
Read this configuration guide to understand the architecture that powers Cisco Quality of Service (QoS) technology,
and to learn how to use its features to configure and manage traffic bandwidth and packet-loss parameters on your network.
Traditional Traffic Management
In traditional methods of traffic management, traffic packets are sent to the egress output queues without considering
whether the egress interface is available to transmit them.
Therein lies the problem. During traffic congestion, packets may be dropped at the egress port,
which means that the network resources spent getting those packets from the ingress input queues across the switch fabric to
the output queues at egress are wasted. That is not all: because input queues buffer traffic meant for different egress ports, congestion
on one egress port can affect traffic on another port, an event referred to as head-of-line blocking.
Traffic Management on Your Router
Your router's Network Processing Unit (NPU) uses a coupled ingress-egress virtual output queuing (VoQ)-based forwarding architecture
to manage traffic.
Here, each ingress traffic class has a one-to-one VoQ mapping from each ingress slice (pipeline) to each egress port. This
means that every egress interface (#5 in the figure) has earmarked buffer space on every ingress pipeline (#1 in the figure)
for each of its VoQs.
Here is how a packet travels through your router system in times of congestion:
#2: These packets are stored in separate buffer storage spaces in dedicated VoQs. This is where the queuing, VoQ transmit, and
drop packet and byte counters come into play. (For details, see Congestion Avoidance.)
#3: Depending on the bandwidth available on the egress interface, these packets are subjected to egress scheduling, where
egress credit and transmit schedulers are configured. In other words, which packets proceed towards the egress interface, and the sequence
in which they do so, are determined here. This is also where the fabric bandwidth is taken into consideration for scheduling.
#4: The packets are switched through the fabric.
#5: In the final phase, egress marking and classification take place, and congestion is managed in such a way that no
packets are dropped at this stage; all the packets are transmitted to the next hop.
Limitations of the VoQ Model
While the VoQ model of traffic management offers distinct advantages (reducing memory bandwidth requirements, providing end-to-end
QoS flow), it has this limitation:
The total egress queue scale is lower because each egress queue must be replicated as an ingress VoQ on each slice of each
NPU/ASIC in the system. This means that adding one NPU with 20 interfaces increases the number of VoQs used on every
NPU in the system by 20 interfaces x 8 queues/interface = 160. There is also an increase in the number of credit
connectors from each scheduler for each egress port on pre-existing NPUs to each slice in the newly inserted NPU.
Cisco Modular QoS CLI to Deploy QoS
The Cisco Modular QoS CLI (MQC) framework is the Cisco IOS QoS user language that enables:
a standard Command Line Interface (CLI) and semantics for QoS features.
simple and accurate configurations.
QoS provisioning within the context of an extensible language.
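As a minimal sketch of the MQC workflow, a class map classifies traffic, a policy map references the class and applies actions to it, and a service-policy statement attaches the policy map to an interface. All names and values below are illustrative placeholders, not prescribed configurations:

```
class-map match-any video
 match traffic-class 2
 end-class-map
!
policy-map egress-queuing-example
 class video
  bandwidth remaining ratio 10
 class class-default
 end-policy-map
!
interface HundredGigE0/0/0/0
 service-policy output egress-queuing-example
!
```

This three-step structure (class map, policy map, attachment) is the common pattern behind every MQC policy described in this guide.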
For your router, in the egress direction, two types of MQC policies are supported: queuing and marking. You use the queuing
policy to configure credit scheduling hierarchy, rates, priority, buffering, and congestion avoidance. You use the marking
policy to classify and mark packets that have been scheduled for transmission. Even when a queuing policy is not applied,
there is an implicit queuing policy with TC7 - P1, TC6 - P2, TC5 - TC0 (6 x Pn), so packets marked with TC7 and control inject
packets are always prioritized over other packets. In the ingress direction, only one policy is supported, for classification and marking.
You can apply the queuing and marking policies independently of each other or together in the egress direction. If you apply both
policies together, the queuing policy actions are provisioned first, followed by the marking policy actions.
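To illustrate applying both egress policy types together, a queuing policy and a marking policy might be attached to the same interface as shown below. The interface and policy names are placeholders; the queuing policy's actions are provisioned first, as noted above:

```
interface FourHundredGigE0/0/0/1
 service-policy output my-queuing-policy
 service-policy output my-marking-policy
!
```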
Important Points about MQC Egress Queuing Policy
These are important points that you must know about the MQC egress queuing policy:
The MQC queuing policy consists of a set of class maps, which are added to a policy map. You control the queuing and scheduling
parameters for a given traffic class by applying actions to its class in the policy map.
class-default always matches traffic-class 0. Also, no other class can match traffic-class 0.
If a traffic class has no matching class in the applied policy map, it always matches class-default. In other words, it uses
the traffic-class 0 VoQ.
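For example, in a hypothetical egress queuing policy (names and ratios are illustrative), traffic marked with traffic-class 1 matches its own class, while any traffic class with no matching class, including traffic-class 0, falls through to class-default and uses the traffic-class 0 VoQ:

```
class-map match-any tc1
 match traffic-class 1
 end-class-map
!
policy-map egress-queue-example
 class tc1
  bandwidth remaining ratio 5
 class class-default
  bandwidth remaining ratio 1
 end-policy-map
```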
Each unique combination of traffic classes that match class-default requires a separate traffic class (TC) profile. The number of TC profiles is limited to 8 for main interfaces and 8 for sub-interfaces.
You cannot configure multiple traffic classes with the same priority level.
Each priority level, when configured, must be assigned to the class that matches the corresponding TC (for example, priority level 1 to the class matching traffic-class 7).
If all the priority levels configured in a policy-map are sorted, they must be contiguous. In other words, you cannot skip
a priority level; for example, P1 P2 P4 (skipping P3) is not allowed.
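The two rules above can be sketched in a valid priority configuration, with contiguous levels P1, P2, and P3 assigned to the classes matching TC7, TC6, and TC5 respectively. The class-map and policy-map names are placeholders; configuring only P1 and P3 here would be rejected:

```
class-map match-any tc7
 match traffic-class 7
 end-class-map
!
class-map match-any tc6
 match traffic-class 6
 end-class-map
!
class-map match-any tc5
 match traffic-class 5
 end-class-map
!
policy-map egress-priority-example
 class tc7
  priority level 1
 class tc6
  priority level 2
 class tc5
  priority level 3
 class class-default
 end-policy-map
```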
MQC supports up to two levels (parent, child) of queuing policy. The parent level aggregates all the traffic classes, whereas
the child level differentiates traffic classes using MQC classes.
Only these actions are supported in the queuing policy:
bandwidth remaining ratio
Random Early Detection (RED)
Priority flow control
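A hedged sketch of a queuing policy combining two of these actions, bandwidth remaining ratio and RED (configured with the random-detect command), is shown below. The names, ratios, and thresholds are illustrative, and the exact random-detect threshold units (time or bytes) can vary by platform and release:

```
policy-map egress-actions-example
 class tc1
  bandwidth remaining ratio 10
  random-detect 4 ms 8 ms
 class class-default
  bandwidth remaining ratio 1
 end-policy-map
```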
You can have only one match traffic-class value in the class map.
You cannot apply a queuing policy to a main interface and its sub-interfaces.