
Cisco ASR 1000 Series Aggregation Services Routers

Cisco ASR 1000 Aggregation Services Routers Quality of Service (QoS) FAQ


The Cisco® ASR 1000 Aggregation Services Router platform has a robust and scalable quality-of-service (QoS) implementation. It adheres to the modular QoS CLI (MQC), so the configuration is familiar to users of Cisco IOS® and IOS XE Software on other platforms. Because QoS on the Cisco ASR 1000 is implemented in hardware, certain details of operation may vary from other Cisco platforms.

General

Q.    How does the Cisco ASR 1000 calculate packet sizes?
A.     Refer to Table 1 for general information about queuing policy maps, and Table 2 for policing policy maps, applied to physical interfaces, sub-interfaces, ATM virtual circuits, virtual templates, or tunnel interfaces.

Table 1.       Packet Size Calculation for Queuing Functions and Counters

Ethernet main and sub-interfaces
  What is not included: Inter-frame gap (IFG)/preamble and cyclic redundancy check (CRC); Layer 1 overheads
  What is included: Layer 2 headers and Layer 2 payload; 802.1q header; all Layer 3 and up payloads

ATM virtual circuits and ATM virtual paths
  What is not included: Layer 1 overheads
  What is included: 5-byte ATM cell headers; all ATM Adaptation Layer (AAL) headers; AAL CRC values; ATM cell tax and ATM cell padding; all Layer 3 and up payloads

Serial and Packet over SONET (PoS) main interfaces
  What is not included: CRC and High-Level Data Link Control (HDLC) bit stuffing
  What is included: Layer 2 headers and Layer 2 payload; all Layer 3 and up payloads

Virtual access, broadband virtual template, and sessions
  What is not included: IFG/preamble and CRC; Layer 1 overheads
  What is included: Layer 2 headers and Layer 2 payload; 802.1q header; Layer 2 Tunneling Protocol (L2TP) headers; Point-to-Point Protocol over X (PPPoX) headers; all Layer 3 and up payloads

Tunnels (generic routing encapsulation [GRE], Dynamic Multipoint VPN [DMVPN], Dynamic Virtual Tunnel Interface [dVTI], IPsec Site-to-Site VPN [sVTI], and IP Security [IPsec])
  What is not included: IFG/preamble and CRC; Layer 1 overheads; Layer 2 headers and Layer 2 payload; 802.1q header
  What is included: GRE headers; cryptographic headers and trailer; all Layer 3 and up payloads

PPP multilink bundle
  What is not included: IFG/preamble and CRC; Layer 1 overheads
  What is included: Layer 2 multilink PPP headers; Layer 2 PPP headers; L2TP headers; ATM cell tax and ATM cell padding

Table 2.       Packet Size Calculation for Classification and Policing Functions and Counters in the Egress Direction

Ethernet main and sub-interfaces
  What is not included: IFG/preamble and CRC; Layer 1 overheads
  What is included: Layer 2 headers, Layer 2 payload, 802.1q header, and all Layer 3 and up payloads

ATM virtual circuits and ATM virtual paths
  What is not included: 5-byte cell headers; all AAL headers; AAL CRC values; ATM cell tax and ATM cell padding
  What is included: All Layer 3 and up payloads

Serial and PoS main interfaces
  What is not included: CRC and HDLC bit stuffing
  What is included: Layer 2 headers, Layer 2 payload, and all Layer 3 and up payloads

Virtual access, broadband virtual template, and sessions
  What is not included: IFG/preamble and CRC; Layer 1 overheads
  What is included: Layer 2 headers* and Layer 2 payload; 802.1q header; L2TP headers; PPPoX headers; all Layer 3 and up payloads

Tunnel (GRE, DMVPN, dVTI, sVTI, and IPsec)
  What is not included: IFG/preamble and CRC; Layer 1 overheads; Layer 2 headers and Layer 2 payload; 802.1q header; cryptographic headers and trailers
  What is included: GRE headers; all Layer 3 and up payloads

PPP multilink bundle
  What is not included: IFG/preamble and CRC; Layer 1 overheads; ATM cell tax and ATM cell padding
  What is included: Layer 2 multilink PPP headers; Layer 2 PPP headers; L2TP headers

*Note that for broadband L2TP Network Server (LNS) scenarios, QoS policers configured on sessions will not observe the Layer 2 overhead; the 14 bytes of Layer 2 source/destination addresses and Layer 2 type, and any 802.1q headers, are not included. As a result, policers used for priority traffic do not include the overhead accounting offsets that are used for queuing and scheduling decisions.

 

Topic: Multilink PPP Support for the ASR 1000 Series Aggregation Services Routers
Reference: http://www.cisco.com/c/en/us/td/docs/routers/asr1000/configuration/guide/chassis/asrswcfg/multilink_ppp.html#pgfId-1097011

Q.    Is it possible to account for downstream changes in packet size?
A.     Yes. With the overhead accounting feature, all queuing functions can adjust the size of packets for the purposes of scheduling them for transmission by using the account keyword with the queuing feature. You can configure custom offsets ranging from -64 to 64 bytes, and you can also use some predefined offsets. Note that queuing features are supported only on egress; therefore, overhead accounting is supported only on egress policy maps with queuing functions. An example of the command-line interface (CLI) follows:

policy-map test

  class class-default

    shape average account user-defined -4

Additionally, with the atm keyword, queuing functions can compensate for ATM cell division and cell padding (sometimes called the ATM cell tax). This function compensates for the 5-byte header of each cell and the padding of the last cell to fill a full 48 bytes of payload. If additional AAL5, Subnetwork Access Protocol (SNAP), or other headers need to be accounted for, they should be included with the user-defined parameter or some of the predefined keywords.
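For example, a minimal sketch combining both forms of compensation (the policy name, shape rate, and 12-byte offset are illustrative, not recommendations):

policy-map atm-overhead
  class class-default
    shape average 10000000 account user-defined 12 atm

Here the scheduler adds 12 bytes per packet for downstream encapsulation and then also compensates for ATM cell segmentation and padding.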
There is no support at this time for overhead accounting for policing features, including priority queues that are rate limited with policers (conditional or strict).
For more information, please reference:

   MQC Traffic Shaping Overhead Accounting for ATM: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_plcshp/configuration/xe-3s/asr1000/qos-plcshp-mqc-ts-ohead-actg-atm.html

   Ethernet overhead accounting: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_plcshp/configuration/xe-3s/asr1000/qos-plcshp-xe-3s-asr-1000-book/qos-plcshp-ether-ohead-actg.html

Q.    Can QoS be configured on the management interface, GigabitEthernet0?
A.     No, you cannot configure QoS on the management interface. The management interface is handled entirely within the route processor, and traffic to and from the management interface does not move through the Cisco ASR 1000 Series Embedded Services Processor (ESP). Because all QoS functions are performed on the ESP, QoS cannot be applied.
Q.    Is there a difference in QoS behavior on shared port adapter (SPA)-based Ethernet ports compared to built-in Ethernet ports?
A.     For QoS behavior managed by MQC service-policy commands, there is no difference in QoS behavior. All advanced QoS processing is done on the ESP and is not affected by the type of ingress Ethernet port.
All ASR 1000 platforms have low- and high-priority queues on a per-port basis in the ingress and egress path. This is the same regardless of a modular or fixed platform design.
SPA Interface Processor (SIP10 and SIP40) line cards engage in slightly different behavior when scheduling ingress traffic and forwarding to the ESP for processing. This variation only comes into play if the SIP10 is oversubscribed with traffic (for example, two 10-GE SPAs installed in a SIP10 attempt to forward more than 10 Gbps of traffic to the ESP for processing). In undersubscribed scenarios, the behavior will be the same on SIP10 and SIP40. For the vast majority of customers, these subtle differences in behavior would not be observed in normal network behavior. It is not recommended to attempt to manipulate the SIP-based QoS behavior without specific instructions to do so.
Egress behavior is the same between SIP10 and SIP40. The Cisco ASR 1002 Router has a built-in SIP10.  The ASR 1002-X Router has a built-in SIP40. In both the ASR 1002 and ASR 1002-X Routers, the built-in SIP is always undersubscribed. The ASR 1001-X Router does not have a built-in SIP as Ethernet interfaces are managed directly by an integrated chipset. The ASR 1001-X has a reduced amount of ingress packet buffer compared to the other ASR 1000 platforms.
Q.    Can QoS manage control-plane traffic that is destined for Cisco IOS Software running on the route processor?
A.     Yes, a nonqueuing QoS policy map is supported on the control plane in Cisco IOS Software configuration mode. This feature is known as CoPP (Control Plane Policing). Usually, a policy map is applied to the control plane to protect the route processor from denial-of-service (DoS) attacks. A policy map applied in the input direction on the control plane will affect traffic that is destined for the route processor from regular interfaces. It is possible to classify packets such that some are rate limited and others are not.
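A minimal CoPP sketch along these lines follows (the ACL, class names, and police rate are illustrative only, not a recommended policy):

ip access-list extended copp-icmp
  permit icmp any any
!
class-map match-all copp-icmp-class
  match access-group name copp-icmp
!
policy-map copp-policy
  class copp-icmp-class
    police cir 64000 conform-action transmit exceed-action drop
!
control-plane
  service-policy input copp-policy

In this sketch, ICMP destined to the route processor is rate limited to 64 kbps while other control-plane traffic is unaffected.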
When using show plat hardware qfp commands on the control-plane interface, keep in mind that even though the policy map is configured as “ingress” to the control plane, it is egress from the ESP card. Thus, the show plat hardware qfp commands must use the output direction.
For more information about Control-Plane Policing (CoPP), please visit: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_plcshp/configuration/xe-3s/asr1000/qos-plcshp-ctrl-pln-plc.html.
Q.    How do QFP complexes map to physical interfaces for egress queuing with Cisco ASR 1000 Series 100- and 200-Gbps ESPs (ESP100 and ESP200, respectively)?
A.     For the purposes of egress queuing, a given QFP complex has responsibility for the queuing functions on certain shared-port-adapter (SPA) bays in the Cisco ASR 1000 chassis. For systems with one QFP complex, this is not a concern because all interfaces are handled by a single QFP complex. For systems with multiple QFPs, it is important to distribute interfaces among the QFPs if there will be a large number of queues or schedules or if there is concern about high packet-buffer-memory usage. Note that this queuing responsibility is independent of other feature processing. For example, a packet could have its ingress and egress features handled by QFP 0 while the egress queuing responsibility is handled by QFP 1.
Figures 1 and 2 show how interfaces are distributed in the Cisco ASR 1006 and ASR 1013 chassis:
Cisco ASR 1006 chassis with ESP100:

   SPA slots in green serviced by QFP 0

   SPA slots in blue serviced by QFP 1

It is not possible for multiple QFPs to service a Cisco ASR 1000 Series SPA Interface Processor 10 (SIP10) installed in any slot. If a SIP10 is used in a slot that is normally divided among QFPs, the QFP that normally owns the left side of the SIP will service all interfaces. SIP40 cards can be serviced by multiple QFPs.
For the Cisco ASR 1000 Series Fixed Ethernet Line Card (ASR1000-2T+20X1GE), the two 10 Gigabit Ethernet interfaces are owned by the right-side QFP and the twenty 1 Gigabit Ethernet interfaces are owned by the left-side QFP (Figure 1). For the Cisco ASR 1000 Series Fixed Ethernet Line Card (ASR1000-6TGE), the even-numbered ports are owned by the left-side QFP and the odd-numbered ports are owned by the right-side QFP (Figure 1).
Figure 1.      Cisco ASR 1006 QFP Distribution with ESP100


Cisco ASR 1013 chassis with ESP100 or ESP200*:

   SPA slots in green serviced by QFP 0

   SPA slots in blue serviced by QFP 1

   SPA slots in purple serviced by QFP 2

   SPA slots in orange serviced by QFP 3

Figure 2.      QFP Interface Ownership Distribution Using ESP100 and ESP200


*Note that Figure 2 assumes SIP40 line cards are used in a Cisco ASR 1013 chassis. If SIP10 line cards are used, all egress queues are handled by the QFP that owns the left side (even numbered SPA bays) in the figure. For example, if a SIP10 was installed in slot 2 (third from the bottom), all queues for all ports on that SIP10 would be serviced by QFP 0 (green) with ESP100 and QFP 1 (blue) with ESP200.
**For the Cisco ASR 1000 Series Fixed Ethernet Line Card (ASR1000-2T+20X1GE), the two 10 Gigabit Ethernet interfaces are owned by the right-side QFP and the twenty 1 Gigabit Ethernet interfaces are owned by the left-side QFP. For the Cisco ASR 1000 Series Fixed Ethernet Line Card (ASR1000-6TGE), the even-numbered ports are owned by the left-side QFP and the odd-numbered ports are owned by the right-side QFP (Figure 1).
Q.    How does the three-parameter scheduler used by the Cisco ASR 1000 differ from two-parameter schedulers used by other platforms?
A.     The Cisco ASR 1000 QoS scheduler uses three parameters: maximum, minimum, and excess. Most other platforms use only two parameters: maximum and minimum.
Both models handle maximum (shape) and minimum (bandwidth) the same way. The difference is how they distribute excess (bandwidth remaining). Maximum is an upper limit on the bandwidth of traffic that a class is allowed to forward. Minimum is a guarantee that the given amount of bandwidth will always be available, even if the interface or hierarchy is congested.
Excess is the difference between the maximum possible rate (parent shaper) and all the used minimums (priority and bandwidth-guaranteed traffic). A two-parameter scheduler distributes the excess bandwidth proportionally according to the minimum rates. A three-parameter scheduler has a programmable parameter to control that sharing. By default, the Cisco ASR 1000 uses equal sharing, or an excess value of 1 for every class. Because of restrictions in Cisco IOS XE Software, you cannot configure the minimum (bandwidth) and excess (bandwidth remaining) parameters at the same time in a class. This concurrent configuration was supported in classic Cisco IOS Software.
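As an illustration (class names, filters, and rates are hypothetical), the following policy gives the gold class four shares of excess bandwidth for every one share given to class-default once priority traffic has been serviced:

class-map match-all voice
  match dscp ef
class-map match-all gold
  match dscp af41
!
policy-map excess-sharing
  class voice
    police cir 10000000
    priority level 1
  class gold
    bandwidth remaining ratio 4
  class class-default
    bandwidth remaining ratio 1

If the parent is congested, whatever bandwidth is left after the priority class is divided approximately 80/20 between gold and class-default.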
For more information, please reference:

   Policing and shaping overview: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_plcshp/configuration/xe-3s/asr1000/qos-plcshp-oview.html

   Distribution of remaining bandwidth using ratio: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_plcshp/configuration/xe-3s/asr1000/qos-plcshp-dist-rem-bw.html

   Leaky bucket algorithm as a queue: http://en.wikipedia.org/wiki/Leaky_bucket#The_Leaky_Bucket_Algorithm_as_a_Queue
(Note: This document is not controlled or endorsed by Cisco. It is provided only as a convenience.)

Q.    What do the non-MQC bandwidth and bandwidth qos-reference commands do and where are they useful?
A.     Typically, the interface bandwidth command is used to influence the bandwidth metric that routing protocols use for their path decisions. In certain situations, however, the value given for the bandwidth command can also influence QoS. The bandwidth qos-reference interface command conveys to the QoS infrastructure how much bandwidth is available downstream for the tunnel. Table 3 details when bandwidth and bandwidth qos-reference are applicable; a configuration sketch follows the table.

Table 3.       Uses for Interface bandwidth and bandwidth qos-reference

Command: bandwidth
Target: Any generic physical main interface
Effect: Any top-level QoS MQC references for percent-based configuration will use this value for the interface throughput instead of the actual throughput. For example, if bandwidth 5000 is configured on a Gigabit Ethernet interface and a top-level class-default shaper is configured for shape average percent 50, the interface will be limited to 2.5 Mbps of traffic.

Command: bandwidth
Target: Any generic sub-interface of a physical interface
Effect: This command does not affect QoS. QoS applied on a sub-interface is affected by a bandwidth command configured on the corresponding main interface.

Command: bandwidth
Target: Multilink Point-to-Point Protocol (MLP) bundle
Effect: Configuring this on the actual bundle interface rate limits traffic even without the application of a QoS MQC configuration. Any percent-based configuration that is part of a policy map applied to the bundle uses the bandwidth value for calculations.

Command: bandwidth qos-reference
Target: GRE tunnel, sVTI tunnel, dVTI tunnel, and virtual template for broadband
Effect: Any top-level QoS MQC references for percent-based configuration use this value for the maximum throughput instead of the actual throughput of the underlying physical interface. For example, if bandwidth qos-reference 5000 is configured on an sVTI tunnel interface and a top-level class-default shaper is configured for shape average percent 50, the tunnel will be limited to 2.5 Mbps of traffic.

Command: bandwidth qos-reference
Target: Tunnel interface used for DMVPN
Effect: This command does not affect the QoS MQC configuration. It is essentially ignored for QoS purposes.
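For example, a sketch of the sVTI case above (interface numbers, names, and rates are illustrative): with bandwidth qos-reference 5000 on the tunnel, the percent-based shaper resolves to 2.5 Mbps.

policy-map tunnel-parent
  class class-default
    shape average percent 50
!
interface Tunnel100
  bandwidth qos-reference 5000
  service-policy output tunnel-parent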

Q.    What are PAK_PRIORITY packets and how are they handled?
A.     Certain packets are considered so important that they are treated as “no drop” and are given a special designation called PAK_PRIORITY. These packets are generated by Cisco IOS Software on the route processor. PAK_PRIORITY packets are typically associated with protocols where reliable delivery is highly desired and there is no retransmission or hold time built into the protocol. Not all packets for a given protocol will be PAK_PRIORITY.
In order to achieve the “no-drop” behavior, PAK_PRIORITY packets are not run through the queues created by MQC policy maps. With few exceptions, PAK_PRIORITY packets are run through the interface default queue. If a PAK_PRIORITY packet is classified to a priority (low-latency) queue by an MQC policy map, it will move through the user-defined priority queue instead of the interface default queue. Otherwise, the packet will increment the classification counters (but not the queuing counters) for the matching class and then be enqueued in the interface default queue.
For non-ATM interfaces, there is a single interface default queue per physical interface. It carries PAK_PRIORITY and non-PAK_PRIORITY traffic that does not move through an MQC policy map. For ATM interfaces, there is a single interface default queue, but in addition, each ATM virtual circuit has a default queue associated with it. The per-virtual-circuit default queue carries the non-PAK_PRIORITY traffic of a given virtual circuit without MQC applied. All PAK_PRIORITY traffic (not otherwise classified into a low-latency priority queue by an MQC policy map) moves through the ATM interface default queue.
The interface default queue exists outside of the queues created when a queuing QoS policy map is applied to an interface. The interface default queue has guaranteed minimum bandwidth to service PAK_PRIORITY packets. Moving the traffic through this queue helps to avoid (but does not guarantee avoidance of) starvation by user-defined priority packets. If better starvation avoidance is necessary in particular customer scenarios, then it is possible to classify that specific traffic (through class-map filters) to user-defined classes with priority (low-latency queuing [LLQ]), which will allow that particular PAK_PRIO traffic to flow through the user-defined priority queue instead of the default interface queue (as discussed). This traffic will then compete evenly with other priority traffic.
PAK_PRIORITY packets appear in the classification counters for a policy map applied to an egress interface.  The packets do not show up in the queuing counters, however, because they are actually enqueued through the interface default queue.  In order to observe the number of packets that have moved through the interface default queue, use the following command (note that the interface name must be fully expressed with matching capitalization):

show plat hard qfp active infra bqs int GigabitEthernet0/0/0

The values for tail drops and total_enqs give the number of packets that were dropped because of a full queue and the number of packets that were enqueued.
PAK_PRIORITY packets are not subject to tail drops, random-detect drops, or policer drops. For example, these packets are added to the interface default queue even if the queue depth is greater than the queue limit. Non-PAK_PRIORITY packets targeted for the interface default queue are tail dropped like any other packet if the queue limit is exceeded. PAK_PRIORITY packets classified to a low-latency queue are also protected from tail dropping by the same logic. Only if the overall ESP packet memory is very full (more than 98 percent) are PAK_PRIORITY packets tail dropped.
It is not possible to mark packets as PAK_PRIORITY through the CLI. This function is reserved for packets generated and marked by Cisco IOS Software. There are no Cisco IOS Software counters specific to PAK_PRIORITY packets. Some protocols, however, provide configuration control to mark their packets as PAK_PRIORITY. Address Resolution Protocol (ARP) is one example, through the following CLI:
arp packet-priority enable
Following is a list of protocols with packets that are marked as PAK_PRIORITY. This list is subject to change without notice and is not considered comprehensive or exhaustive:

   Layers 1 and 2

     ATM Address Resolution Protocol Negative Acknowledgement (ARP NAK)

     ATM ARP requests

      ATM host ping operations, administration, and management (OA&M) cells

     ATM Interim Local Management Interface (ILMI)

     ATM OA&M

     ATM ARP reply

     Cisco Discovery Protocol

     Dynamic Trunking Protocol (DTP)

     Ethernet loopback packet

     Frame Relay End2End Keepalive

     Frame Relay inverse ARP

     Frame Relay Link Access Procedure (LAPF)

     Frame Relay Local Management Interface (LMI)

      Hot Standby Connection-to-Connection Control (HCCP) packets

     High-Level Data Link Control (HDLC) keepalives

     Link Aggregation Control Protocol (LACP) (802.3ad)

      Port Aggregation Protocol (PAgP)

      PPP keepalives

      Link Control Protocol (LCP) messages

      PPP LZS-DCP

      Serial Line Address Resolution Protocol (SLARP)

      Some Multilink Point-to-Point Protocol (MLP) control packets (LCP)

   IPv4 Layer 3

     Protocol Independent Multicast (PIM) hellos 

     Interior Gateway Routing Protocol (IGRP) hellos 

     OSPF hellos 

     EIGRP hellos

      Intermediate System-to-Intermediate System (IS-IS) hellos, complete sequence number PDUs (CSNPs), partial sequence number PDUs (PSNPs), and link-state PDUs (LSPs)

      End System-to-Intermediate System (ES-IS) hellos

     Triggered Routing Information Protocol (RIP) Ack

     TDP and LDP hellos

     Resource Reservation Protocol (RSVP)

     Some L2TP control packets

     Some L2F control packets

     GRE IP Keepalive

     IGRP CLNS 

      Bidirectional Forwarding Detection (BFD)

Q.    Are packets marked as PAK_PRIO treated with priority or guaranteed not to drop?
A.     No, they are not treated with priority by default and they are subject to dropping under certain conditions. They are not subject to tail drop, random-detect drop, or policer drop unless the packet memory is very full (over 98%). They are given a minimum bandwidth associated with the interface default queue (which can sometimes be managed by a policy map on the main physical interface). However, they share this minimum bandwidth with all other traffic that flows through the default interface queue. Therefore, this traffic can still be dropped in congestion scenarios. If you want greater protection or priority handling for specific traffic marked as PAK_PRIO, then you should classify that traffic (with specific filters) to a user-defined class map that has LLQ (Low-Latency Queuing) enabled. It would also be a good practice to provision either strict or conditional policing in this class to manage any denial of service-type attacks.
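A sketch of that recommendation follows (class and policy names are illustrative; PAK_PRIORITY itself cannot be matched directly, so the class-map filters match the underlying markings or protocols instead):

class-map match-any routing-control
  match dscp cs6
!
policy-map wan-edge
  class routing-control
    police cir 1000000
    priority level 2
  class class-default
    fair-queue
!
interface GigabitEthernet0/0/0
  service-policy output wan-edge

With this policy, routing-protocol traffic marked CS6 (much of which is also PAK_PRIORITY) moves through a user-defined priority queue and is policed to manage denial-of-service conditions.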
Q.    The Cisco ASR 1000 isn’t showing class-map filter or access control entry (ACE) matches. How can I access the information?
A.     By default, the ASR 1000 does not track per class-map filter or per-ACE matches for QoS. However, you can access these statistics by enabling one of the following CLIs:

platform qos match-statistics per-filter     (supported in Cisco IOS XE Software 3.3)

platform qos match-statistics per-ace        (supported in Cisco IOS XE Software 3.10)

Note that these commands will not be effective if added to the configuration while any QoS policies are attached to any interfaces. To become effective, all QoS policies must be removed and then reapplied, or the router must be rebooted.
For more information about QoS packet-matching statistics configuration, please visit: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/asr1000/qos-match.html.
Q.    The Cisco ASR 1000 isn’t showing packet-marker statistics. How can I access the information?
A.     By default, the ASR 1000 does not track marking statistics for QoS. However, you can enable these statistics by configuring the following CLI:

platform qos marker-statistics        (supported in Cisco IOS XE Software 3.3)

Note that this command will not take effect if added to the configuration while any QoS policies are attached to any interfaces. To become effective, all QoS policies must be removed and then reapplied or the router must be rebooted.
For more information about QoS packet-marking statistics, please visit: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/asr1000/qos-mrkg.html.
Q.    How many class maps, policy maps, or match rules are supported?
A.     Support as of Cisco IOS XE Software 3.10 is listed in Table 4.

Table 4.       Number of Class Maps, Policy Maps, and Match Rules Supported

Cisco IOS XE Software Versions      2.0S-2.2S   2.3S    3.5S-3.9S   3.10S
Number of unique policy maps        1,024       4,096   4,096       16,000 or 4,096*
Number of unique class maps         4,096       4,096   4,096       4,096
Number of classes per policy map    8           256     1,000       1,000
Number of filters per class map     16          16      32          32

*16,000 for Cisco ASR 1000 Series Route Processor 2 (RP2) with ESP40, ESP100, or ESP200. All other platform combinations are 4,096.
For more information about applying QoS features using the MQC, please visit: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/asr1000/qos-apply.html.
Q.    What are the causes for FMFP_QOS-6-QOS_STATS_PROGRESS messages in the system log?
A.     The FMFP_QOS-6-QOS_STATS_PROGRESS message is simply an informational message indicating that the statistics upload from the ESP card to the RP card is not progressing as quickly as normally expected. There are no long-term ill effects from this condition other than that QoS statistics in Cisco IOS Software may not be updated as quickly as expected. This affects statistics gathered from the CLI as well as from SNMP. The message can occur during a heavy processing load on the RP, for example during a large BGP routing update or during a period of high-rate session bring-up.
Q.    How many policers are supported in the entire system?
A.     For conditional policing, the limits are dictated by the number of queues that the platform supports.
For strict policing, there is no set limit. The primary limiting factor for strict policers is available memory (both control plane and data plane).
Q.    How can the usage of control plane memory be determined?
A.     The command show platform software status control-processor brief can be used to check the amount of available control plane memory. The command show plat hard qfp act infra exmem stat can be used to check the amount of free data plane memory.
Q.    What is the burst profile associated with shapers on the ASR 1000?
A.     When configuring the shape command on the ASR 1000, the CLI will accept the bc and be parameters in order to maintain configuration compatibility with migration of configurations from prior platforms. Even though these parameters are accepted, they are ignored by the hardware that does the QoS processing. Classic Cisco IOS Software shapers were based on an interval (Tc). Whenever that interval arrived, the scheduler would send a burst of data ( bc and be) such that, over time, the desired shape rate would be achieved. The minimum interval of four msec was based on the Cisco IOS tick timer that fired periodically to trigger such time-based events.
On the ASR 1000, the shaper is implemented in hardware and will send packets as often as possible to help maintain the shape rate. There are two mechanisms that can make it appear as though the ASR 1000 bursts data at an interval, although that is not actually the case:

   When small packets are in the queue, the hardware may group them into a batch of about 512 bytes and send them as a group.

   The scheduler will generally send no fewer than two packets when a queue is cleared to transmit.

Both of these decrease required instructions and allow the hardware to service high-speed 10-GE interfaces without consuming extra CPU cycles. Neither of these small burst scenarios should cause a problem when looking at the overall rate.
Another way to view this shaper implementation is as a pure leaky bucket, whereas previous shapers could be considered token buckets. The pure leaky bucket algorithm avoids sending a burst of bc or be worth of packets, as some older platforms did, which required tuning of those parameters to protect downstream devices with limited buffering. Transmissions from the ASR 1000 should be much smoother overall, without the previously observed bursting that had to be managed. Bursting for downstream devices should not be considered a major concern.
Q.    Are there any restrictions on high data rates and low data rates used at the same level of a QoS hierarchy?
A.     There are no restrictions, but there are some best-practice guidelines. In general, there should not be elements in the same policy map (or at the same level of a QoS hierarchy in hardware) that are more than three orders of magnitude apart. If this guidance is not followed, the higher-speed elements will see more jitter and burstier traffic than would otherwise be anticipated. If such a mix is needed, the recommended solution is to insert an artificial level into the hierarchy. Adding this level of hierarchy can put the slow and fast shapers at different levels of the hierarchy, thus working around the restriction.
Note that this problem can be found if vastly different rates are used in the same policy map, or if different policy maps with vastly different rates are applied at sibling nodes (for instance, two Gigabit Ethernet sub-interfaces, two subscriber sessions on the same interface, etc.).
An example of a situation where this is required would be two sub-interfaces on a Gigabit Ethernet interface, one shaped at 512 kbps and the other at 600 Mbps. The 600-Mbps shaper is 1171 times the rate of the 512-kbps shaper and breaks the 1:1000 (three orders of magnitude) guidance. In this instance, the recommended solution would be to deploy policy maps that look like the following:
policy-map 512kb-shaper
  class class-default
    bandwidth remaining ratio 1
    service-policy 512kb-shaper-child
!
policy-map 512kb-shaper-child
  class class-default
    shape average 512000
!
policy-map 600Mb-shaper
  class class-default
    shape average 600000000
!
interface GigabitEthernet 0/0/0.100
  service-policy output 512kb-shaper
!
interface GigabitEthernet 0/0/0.101
  service-policy output 600Mb-shaper

Q.    What are the details of the packet counters in the show policy-map interface output?
A.     The output is divided into several different sections. Typically there are sections for each of the following:

   Classification

   Policing

   Queuing

   Weighted random early detection (WRED), random-detect

   Fair queue

   Marking

The following configuration was used to generate the output for the example being documented:

platform qos marker-statistics

platform qos match-statistics per-filter

platform qos match-statistics per-ace

!

policy-map reference

  class p12

    police cir 5000000 pir 75000000

      conform-action transmit

      exceed-action set-dscp-transmit 0

      violate-action drop

    shape average 40000000

    random-detect

    random-detect precedence 0 10 20 10

    random-detect precedence 1 12 20 10

    random-detect precedence 2 14 20 10

    fair-queue

  class class-default

!

class-map match-any p12

  match precedence 1

  match precedence 2

!

interface GigabitEthernet1/0/2

  service-policy output reference

Queue Memory

Q.    How is packet memory managed?
A.     On all Cisco ASR 1000 platforms, the packet buffer memory on the ESP is one large pool that is used on an as-needed basis for all interfaces in the chassis. Interfaces do not reserve sections of memory. If 85 percent of all packet memory is used, nonpriority packets are dropped. At 98-percent packet memory usage, priority packets are dropped. The remaining 2 percent is reserved for internal control packet information. It is recommended that no more than 50 percent of packet buffer memory be allocated with configured queue-limit commands. Although not enforced, this is a best-practice recommendation; for certain special applications it may not apply. Only under unusual circumstances would you expect to see the packet buffer memory highly used. When the 85- and 98-percent thresholds are crossed, Cisco IOS Software generates a console log message.
Q.    How can I monitor packet buffer memory usage?
A.     The following command can show how much of the packet buffer memory is used at any given time. Note that on systems with multiple QFP complexes (ESP100 and ESP200), you can vary the number after the bqs keyword to check the different QFP complexes.

ASR1000#show plat hard qfp active bqs 0 packet-buffer utilization

Packet buffer memory utilization details:

  Total:     256.00 MB

  Used :    2003.00 KB

  Free :     254.04 MB

 

  Utilization:    0 %

 

  Threshold Values:

    Out of Memory (OOM)    :     255.96 MB, Status: False

    Vital (> 98%)          :     253.44 MB, Status: False

    Out of Resource (OOR)  :     217.60 MB, Status: False

Q.    What is the scalability of packet memory, ternary content addressable memory (TCAM), and queues for various Cisco ASR 1000 hardware devices?
A.     Table 5 details that information:

Table 5.       Packet Memory, Queue, and TCAM Scalability

ESP Hardware   Packet Memory        Maximum Queues   TCAM Size
ASR1001        64 MB                16,000           5 Mb
ASR1001-X      512 MB               16,000           10 Mb
ASR1002-F      64 MB                64,000           5 Mb
ASR1002-X      512 MB               116,000          40 Mb
ESP5           64 MB                64,000           10 Mb
ESP10          128 MB               128,000          10 Mb
ESP20          256 MB               128,000          40 Mb
ESP40          256 MB               128,000          40 Mb
ESP100         1 GB (two 512-MB)    232,000*         80 Mb
ESP200         2 GB (four 512-MB)   464,000*         160 Mb

*Note that for ESP100 and ESP200, physical ports are associated with a particular QFP complex on the ESP card. In order to fully use all queues, the queues must be distributed among different slots and SPAs in the chassis. Additional information is included in this Q&A under the question: “How do QFP complexes map to physical interfaces for egress queuing with Cisco ASR 1000 Series 100- and 200-Gbps ESPs (ESP100 and ESP200, respectively)?”

Queue Limits

Q.    How are default queue limits calculated on the Cisco ASR 1000 when QoS is applied?
A.     By default, the ASR 1000 sets the default queue limit to the greater of the following two values:

   Sixty-four packets

   The number of packets of interface maximum-transmission-unit (MTU) size that would pass through the interface at the configured rate for 50 milliseconds. If only a shape average rate or shape percent value is used, then the rate is the shaper. If a bandwidth rate or bandwidth percent value is included, then it is used instead of the shaper rate. If bandwidth remaining ratio value is used, then the parent maximum rate (policy map or interface) is used.

Here are some examples with a Gigabit Ethernet interface with a default MTU of 1500 bytes:
For example, a class with a shape rate of 500 Mbps on a Gigabit Ethernet interface would give a default queue limit of:

500,000,000 bits/sec × 0.05 sec ÷ (1500 bytes × 8 bits/byte) = 2083 packets

A class with a shape rate of 300 Mbps on a Gigabit Ethernet interface would give a default queue limit of:

300,000,000 bits/sec × 0.05 sec ÷ (1500 bytes × 8 bits/byte) = 1250 packets

A class with a shape rate of 2 Mbps and a minimum bandwidth of 1000 kbps on a Gigabit Ethernet interface would use the minimum rate for the calculation:

1,000,000 bits/sec × 0.05 sec ÷ (1500 bytes × 8 bits/byte) ≈ 4 packets, which is below the 64-packet floor, so the default queue limit is 64 packets.

Q.    If QoS is not configured, what is the queue limit for the interface?
A.     Typically on Cisco IOS Software platforms, the output of show interface gives the number of packets in the output hold queue. On the Cisco ASR 1000, even if QoS is not configured, the QFP complex still manages the interface queuing, so the output hold-queue value does not apply. When QoS is not configured on an interface, all traffic for that physical interface moves through the interface default queue. The interface default queue is by default sized to handle 50 msec worth of traffic at 105 percent of the interface bandwidth for interfaces of 100 Mbps or faster. (There are two exceptions: interfaces slower than 100 Mbps are based on 100 percent of the interface bandwidth, and some hardware bases the calculation on 25 msec for all interface speeds.) For ESP5 through ESP40, if the calculation produces a value of less than 9280 bytes, the default queue size is set to 9280 bytes. For the Cisco ASR 1002-X and ESP100 and higher, if the calculation produces a value of less than 9218 bytes, the default queue size is set to 9218 bytes.
You can use the following command to check the actual interface queue limit for a given physical interface (note that the interface name must be fully expressed with matching capitalization):

show plat hard qfp active infra bqs queue output default interface GigabitEthernet1/1/0 | inc qlimit

Note that traffic for sub-interfaces with queuing QoS configured moves through the MQC-created queues, whereas traffic forwarded through other sub-interfaces or the main interface moves through the interface default queue.
The interface default queue is always handled in byte mode instead of packet mode, which is the default for MQC policy maps.
Q.    Can I change the units (packets, time, and bytes) of the queue limit in real time?
A.     No, you cannot change units used for a given policy map in real time. You would have to remove the policy map from any interfaces, reconfigure it, and then reattach it. If you have a feature such as WRED configured with a given type of units for the min’th and max’th values, you would have to remove WRED, change the queue-limit command units, and then reapply WRED. Also keep in mind that all classes in a given policy map must use the same units.
Q.    From time to time, drops are seen in various queues.  I do not suspect that the maximum rate is being overdriven.  How should I address this problem?
A.     The class showing the drops may be experiencing microbursts. Microbursts are small bursts of traffic that are long enough to fill up the queue for the class but not sustained long enough for network management to see the bandwidth as high enough to tail drop. The first thing to try is to increase the queue limit for the class. You can make this change in real time without affecting forwarding traffic. Try doubling the queue limit and then monitor for drops. If you still observe drops, you can increase the queue limit again. Eventually the drops should become less frequent or stop altogether. During nonburst times, traffic will have the same behavior. During the microbursts, there will be periods of higher latency as packets drain from the deeper queue. Note that if WRED is on the class, you will need to also adjust the min’th and max’th values accordingly or temporarily remove WRED and reapply it so that WRED can be installed with min’th and max’th values based on the increased queue limit.
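For instance, doubling a class queue limit from a default of 64 packets might look like the following sketch (names and values are illustrative):

policy-map wan-out
  class class-default
    queue-limit 128 packets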
Q.    When should I use time-, byte-, or packet-based queue limits?
A.     By default, queue limits are defined in units of packets, giving a predictable number of MTU-sized packets that can be queued for the class. However, the queue could also fill up with just as many very small packets that would start to tail drop packets while the overall latency of packets at the end of a full queue is quite small. For most applications, the use of packet-based queue limits works well. If you prefer to have a tightly controlled and predictable latency, you should switch to byte- or time-based queue limits. When you use time or bytes, the maximum latency is fixed and the number of packets that can be queued is variable. Note that all classes in a policy map must use the same units and WRED must be configured using the same units that the queue limit is specified in. Operationally, time- and byte-based configuration is the same. If you use time units, the system will use the maximum allowed bandwidth for the class to convert the time value into a number of bytes and use that value to program the QFP hardware.
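A sketch of a time-based configuration follows (names and values are illustrative); note that every class in the policy map must then use the same units:

class-map match-all video
  match dscp af41
!
policy-map latency-bounded
  class video
    bandwidth 20000
    queue-limit 20 ms
  class class-default
    queue-limit 20 ms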
Q.    When should I use small or large queue limits?
A.     You should use large queue limits as a mechanism to deal with bursty traffic. Having the available queue space minimizes the chance of dropping packets when there are short bursts of high-data-rate traffic in an otherwise slower stream of traffic. Queues that normally function well but occasionally show packet drops are good candidates for an increased queue limit. If a traffic class is constantly overdriven, a large queue limit does nothing other than increase latency for most of the packets delivered. It would be better to have a smaller queue limit, because just as many packets would be forwarded and they would spend less time sitting idly in a queue. Priority queues by default have a queue limit of 512 packets, helping keep latency low but allowing buffering if the need arises. Typically, there is no need to tune the priority queue limits, because only rarely are more than one or two packets waiting in the priority queue. If maximum latency and bursts of small packets are of concern, you should consider changing the queue limit to units of time or bytes.

WRED - Random-Detect

Q.    Why do WRED configurations ported to the Cisco ASR 1000 have restrictive queue limits?
A.     Cisco ASR 1000 calculates default queue limits differently from other platforms. Often older platforms have a higher default queue-limit value than the ASR 1000. You need to either manually increase the queue limit for the QoS class with the queue-limit value command or reconfigure your WRED min’th and max’th values according to the default ASR 1000 queue-limit value for the given class.
Q.    What are the default min’th and max’th values used by WRED?
A.     The default min’th and max’th values are based on the queue limit for the class. For all precedence and differentiated services code point (DSCP) values, the max’th values are by default half of the queue limit. Headroom between the max’th values and the hard queue limit is important because WRED is based on a mean average queue depth that trails the instantaneous queue depth; that headroom may be needed as the mean queue depth catches up with the instantaneous queue depth.
Table 6 presents the default min’th values for all precedence and DSCP values. It is easiest to think of min’th values as a fraction of the corresponding max’th value. The example values given are based on a queue limit of 3200.

Table 6.       WRED Defaults for Queue Limit (Example with Queue Limit of 3200)

DSCP or Precedence        Minimum   Maximum   Minimum as Fraction of Maximum
af11                      1400      1600      14/16
af12                      1200      1600      12/16
af13                      1000      1600      10/16
af21                      1400      1600      14/16
af22                      1200      1600      12/16
af23                      1000      1600      10/16
af31                      1400      1600      14/16
af32                      1200      1600      12/16
af33                      1000      1600      10/16
af41                      1400      1600      14/16
af42                      1200      1600      12/16
af43                      1000      1600      10/16
ef                        1500      1600      15/16
Default or precedence 0   800       1600      8/16
cs1/prec 1                900       1600      9/16
cs2/prec 2                1000      1600      10/16
cs3/prec 3                1100      1600      11/16
cs4/prec 4                1200      1600      12/16
cs5/prec 5                1300      1600      13/16
cs6/prec 6                1400      1600      14/16
cs7/prec 7                1500      1600      15/16

Q.    How is the average or mean queue depth calculated?
A.     The average or mean queue size is calculated according to the following formula, where n is the exponential constant value, current_queue_size is the instantaneous queue size when the drop decision is being made, and old_average_queue_size is the queue size the previous time this calculation was performed:

average_queue_size = (old_average_queue_size * (1 - 2^-n)) + (current_queue_size * 2^-n)

As n increases, the mean queue depth is slower to respond to changes in instantaneous queue depth.
Q.    What is the scalability for weighted random early detection (WRED)?
A.     The ASR 1000 does not have a hard limit on the number of WRED profiles that are available across the entire system. A WRED profile defined in a given policy map that is reused on multiple targets is only counted as a single profile. The primary limiting factor is available memory. In typical enterprise deployments, you should be able to scale up to 64 profiles without issue. The number of WRED profiles is not dependent upon time-, packet-, or byte-based queue-limit configurations.

Fair-Queue Behavior

Q.    What are the queue limits for the queues created by the fair-queue feature?
A.     By default, each of the 16 queues created by the fair-queue feature has a limit of 25 percent of the queue limit of the class. For example, if a class is configured to have a queue limit of 1000 packets and fair queue is configured, each of the 16 underlying queues has a limit of 250 packets. For this reason, it is important to consider the per-flow queue limit when manually adjusting the WRED min’th and max’th values.
Q.    Is it possible to specifically change the queue limit for the queues created by fair queuing?
A.     Yes, you can adjust the queue limits for the 16 queues created by fair queuing but only when using packet-based queue limits. As of Cisco IOS XE Software Release 3.11, the CLI is limited such that it is not possible to adjust the queue limits for the 16 queues using time- or byte-based queue-limit configurations. The workaround is to manipulate the overall class queue limit in byte or packet mode such that the fair queues are at the desired value. So if the desired per-flow queue limit is 100 ms, you should configure the class queue limit to be 400 ms.
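For illustration (policy names and values are hypothetical): with packet-based units the per-flow limit can be set directly, whereas with time-based units the class queue limit is sized at four times the desired per-flow value:

policy-map fq-packets
  class class-default
    fair-queue
    fair-queue queue-limit 128
!
policy-map fq-time
  class class-default
    fair-queue
    queue-limit 400 ms

In the second policy, each of the 16 flow queues ends up with roughly a 100-ms limit (25 percent of 400 ms).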
Q.    How does fair queue divide traffic into different flows?
A.     The Cisco ASR 1000 uses a 5-tuple from the packet’s contents to hash the traffic into a given queue. The 5-tuple consists of:

   Source and destination IP address

   Protocol (TCP, UDP, etc.)

   Source and destination protocol ports

There are some special considerations when using fair-queue with tunnel traffic. Specifically, fair-queue uses the outermost IP addresses as part of the tuple calculation. For tunnel traffic moving through a class with fair-queue, all the traffic for a given tunnel will use only one of the 16 fair queues, even if the inner IP addresses differ. If multiple tunnels use the class with fair-queue configured, the tunnels will be distributed among the 16 queues based on the tunnel source and destination addresses. Fair-queue may therefore not be the best choice on a main interface or sub-interface that carries a number of tunnel connections.
Q.    How does fair queuing interact with random detect?
A.     Adding fair queue to random detect introduces some additional checks and considerations for applying custom random-detect configurations. Figure 3 shows a flow diagram of the decision-making process when the two features are configured together.
Figure 3.      Decision-Making Process (WRED with Fair Queue)


FQD (flow-queue depth): Per-flow queue depth, which is the number of packets in a particular flow queue
FQL (flow-queue limit): Per-flow individual queue limit, set by the fair-queue queue-limit <x> command on the CLI
AQD (aggregate queue depth): Virtual queue depth, which is the sum of all individual flow-queue depths
AQL (aggregate queue limit): Virtual queue limit, set by the queue-limit <x> command on the CLI
Q.    How does fair queuing interact with queue limits when random detect is not configured?
A.     Having only fair queue configured without random detect significantly changes how the QFP decides when to drop a packet. The flow diagram in Figure 4 describes the process. The key difference in this scenario is that the decision to drop is based solely on the comparison with the per-flow queue limit; there is no comparison against the aggregate queue limit. This can be misleading, because it is still possible to manipulate the aggregate queue limit to effect changes to the per-flow queue limit (which is 25 percent of the aggregate).
Figure 4.      Decision-Making Process (Fair Queue without WRED)


FQD (flow-queue depth): Per-flow queue depth, which is the number of packets in a particular flow queue
FQL (flow-queue limit): Per-flow individual queue limit, set by the fair-queue queue-limit <x> command on the CLI

Cisco EtherChannel QoS

Please note that some documents refer to EtherChannel, while others refer to port channel, Gigabit EtherChannel (GEC), or Link Aggregation (LAG). All of these terms refer to the same technology. This document uses the term EtherChannel.

   For information about QoS policies aggregation, please visit: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/asr1000/qos-agg.html.

   For information about QoS for Cisco EtherChannel interfaces, please visit: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/asr1000/qos-eth-int.html.

   For information about Point-to-Point Protocol over Gigabit EtherChannel (GEC), please visit: http://www.cisco.com/en/US/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/asr1000/qos-pppgec.html.

Q.    What modes are supported for Cisco EtherChannel QoS?
A.     Cisco EtherChannel QoS on the Cisco ASR 1000 is supported in numerous configurations. There are requirements for coordinated configuration of VLAN load-balancing mode and QoS configurations. Following are the combinations of load balancing and QoS that are supported on a given port channel:

   With VLAN-based load balancing:

     Egress MQC queuing configuration on port-channel sub-interfaces

     Egress MQC queuing configuration on port-channel member

     Policy aggregation: Egress MQC queuing on sub-interface

     Ingress policing and marking on port-channel sub-interface

     Egress policing and marking on port-channel member link

     Policy aggregation for multiple queues

     Cisco IOS XE Software Release 2.6 and later

   Active/standby with LACP (1 + 1)

     Egress MQC queuing configuration on port-channel member link

     Cisco IOS XE Software Release 2.4 and later

     Egress MQC queuing configuration on Point-to-Point Protocol over Ethernet (PPPoE) sessions

     Policy map on session only, model D.2

      Cisco IOS XE Software Release 3.7 and later

     Policy maps on sub-interface and session, model F

      Cisco IOS XE Software Release 3.8 and later

   EtherChannel with LACP and load balancing (active/active)

     Egress MQC queuing configuration supported on port-channel member link

     Cisco IOS XE Software Release 2.5 and later

   Aggregate EtherChannel with flow-based load-balancing (active/active)

     Egress and ingress MQC queuing configurations are supported on the port-channel main interface

     Cisco IOS XE Software Release 3.12 and later

Q.    Can different port channels in the same router have different supported QoS combinations?
A.     Yes, each port channel is independent. If a global load-balancing method is configured, it could be necessary to configure a unique load-balancing method on a given port channel to allow certain QoS configurations. For example, if the global mode is configured to flow-based load balancing, you would need to configure VLAN-based load balancing on a specific port channel to configure ingress port-channel sub-interface policy maps.
Q.    Can I configure egress and ingress QoS simultaneously on a port-channel interface?
A.     With VLAN-based load-balancing, you can configure ingress QoS (non-queuing) on port-channel sub-interfaces and the egress policy map on the member links or port-channel sub-interfaces (but not both simultaneously). If a port channel is configured to use aggregate QoS (through the “ platform qos port-channel-aggregate X” command), then ingress and egress QoS commands may be configured on the port-channel main interface (but not on any sub-interfaces of that port channel.)
Q.    Is egress policing or marking supported on port-channel sub-interfaces?
A.     No, policing and marking for port-channel configurations are limited to ingress port-channel sub-interfaces and egress member-link interfaces.
Q.    Is QoS supported on the port-channel main interface?
A.     Yes, Cisco IOS XE Software Release 3.12 added support for aggregate EtherChannel. This allows configuration of a policy map on the port-channel main interface that manages all traffic moving through the logical interface before the load-balancing mechanism distributes traffic to the physical member interfaces. This functionality requires that platform qos port-channel-aggregate X be configured before the port-channel interface is created.
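A minimal sketch of the aggregate EtherChannel case follows (the port-channel number, member links, and shaper rate are illustrative):

platform qos port-channel-aggregate 1
!
policy-map pc-shaper
  class class-default
    shape average 1500000000
!
interface Port-channel1
  service-policy output pc-shaper
!
interface GigabitEthernet0/0/0
  channel-group 1 mode active
!
interface GigabitEthernet0/0/1
  channel-group 1 mode active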

Tunnel QoS


Q.    Are tunnels (GRE, IPsec, dVTI, sVTI) configured with queuing QoS supported over port-channel interfaces?
A.     Yes, for certain tunnel types. Cisco IOS XE Software Release 3.14 added support for sourcing GRE tunnels from an aggregate port-channel interface. Queuing and nonqueuing policy maps are supported on the tunnels, and a class-default-only shaping policy map is supported on the aggregate port-channel interface. This does not include DMVPN or any other type of dynamic tunnel with QoS sourced from an aggregate Gigabit EtherChannel (GEC) interface.
Q.    Are QoS policies supported on both the tunnel interface and the physical/sub-interface over which the tunnel is routed?
A.     Only in certain well-defined scenarios. See “When is it acceptable to configure multiple policy maps for traffic?” elsewhere in this FAQ.
Q.    Is GRE tunnel marking (marking the tunnel header) supported for IPsec tunnels?
A.     No. GRE tunnel marking is supported only for non-IPsec tunnels. It is not blocked by the CLI; however, it simply does not work when configured.
Q.    Is IPv6 supported together with DMVPN and NHRP?
A.     Yes, Cisco IOS XE Software Release 3.11 added support for IPv6 DMVPN. As a result, the “ip nhrp” commands used on the tunnel interface were changed so that the preceding “ip” keyword is no longer required.
Q.    Can DMVPN tunnels dynamically adjust QoS bandwidth based on changing network conditions?
A.     Yes, with support for adaptive QoS over DMVPN using the “shape adaptive” MQC directive, QoS rate-limiting can be adjusted dynamically as network conditions change.

Priority (Low-Latency) Behavior

Q.    What is the difference in strict priority (priority with policer) and conditional priority (priority with a rate)?
A.     Strict priority is always rate limited by the explicitly configured policer. The configuration looks like this:

policy-map test

  class voice

    police cir 1000000

    priority

With strict priority, even if there is available bandwidth from the parent (that is, it is not congested), the policed Low-Latency Queuing (LLQ) class forwards only up to the policer rate. The policer always rate limits the traffic.
Conditional priority configuration looks like this:

policy-map test

  class voice

    priority 1000

Conditional priority rate limits traffic with a policer only if there is congestion at the parent (policy map or physical interface). The parent is congested if more than the configured maximum rate of traffic attempts to move through the class (and/or interface). A conditional priority class can use more than its configured rate, but only if there is no contention with other classes in the same policy. As soon as there is congestion at the parent, the priority class(es) throttle back to the configured rate until there is no longer any congestion.
Q.    How many levels of priority does the Cisco ASR 1000 support?
A.     Two levels of high-priority traffic are supported. Priority level 1 is serviced first, then priority level 2. After all priority traffic is forwarded, nonpriority traffic is serviced.
Q.    How are queues for multiple priority classes in a single policy map managed?
A.     Individual queues are created for each class configured for priority treatment. Classification statistics and any related policer and marking statistics are reported on a per-class basis. The queuing statistics for all priority level 1 classes are reported in aggregate, and the same applies to all priority level 2 classes. The priority level 1 and priority level 2 queuing statistics are not aggregated together.

Hierarchical Policy Maps

Q.    How many levels of hierarchical policy maps are supported?
A.     In general, three levels of hierarchy are supported. If you mix queuing and nonqueuing policies together in a hierarchy, the nonqueuing policy maps must be at the leaf level of the policy map (child policy beneath grandparent and parent queuing policies, for example).
In a three-level queuing policy map, the highest level (grandparent) can consist only of class-default.
If the policy map is applied to a virtual interface (such as a tunnel or session), there may be additional restrictions limiting the hierarchy to two levels of queuing, depending on the configuration.
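A minimal sketch of a three-level queuing hierarchy is shown below; class names, match criteria, and rates are illustrative. The grandparent contains only class-default, and the leaf holds the per-class queuing and nonqueuing actions.

class-map match-any group-a
  match qos-group 1
! Leaf (child): per-class queuing and policing
policy-map CHILD
  class voice
    priority
    police cir 10000000
  class class-default
    bandwidth remaining ratio 4
! Middle (parent): per-group shapers
policy-map PARENT
  class group-a
    shape average 50000000
    service-policy CHILD
  class class-default
    shape average 50000000
    service-policy CHILD
! Top (grandparent): class-default only
policy-map GRANDPARENT
  class class-default
    shape average 200000000
    service-policy PARENT
interface GigabitEthernet0/0/0
  service-policy output GRANDPARENT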
Q.    How is bandwidth shared among sub-interfaces (or tunnels) when a parent node is oversubscribed?
A.     Sharing behavior is controlled with the bandwidth remaining value configured among the hierarchy nodes just below the congestion point. By default, all schedules have a bandwidth remaining ratio value of 1. Consider the example in Figure 5.
Figure 5.      Example of Sharing Behavior

In this example, the topmost node (the grandparent: a physical interface with a class-default shaper at 20 Mbps) is congested. Three tunnels egress the router through this physical interface. The leftmost and rightmost tunnels are not configured with a bandwidth remaining ratio (BRR) and thus use the default value of 1. The center tunnel has a BRR value of 2, configured in its parent policy map. Because the 20-Mbps shaper is congested, the tunnels have to share the available bandwidth. The center tunnel has access to at least half (2 / (1 + 2 + 1)) of the 20 Mbps available on the grandparent node. The left and right tunnels each have access to at least 25 percent of the grandparent’s overall bandwidth (1 / (1 + 2 + 1)). This is the simplest case, in which all of the tunnels are overdriven.

In a new scenario, assume that the leftmost tunnel has no traffic. In this case, the center tunnel would get access to 2 / (2 + 1), or 66.67 percent, of 20 Mbps while the rightmost tunnel receives 33.33 percent. As soon as the leftmost tunnel has traffic, it would potentially have access to up to 10 Mbps.
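A sketch of the policy maps behind this kind of example follows. The per-tunnel shaper rate and all names are illustrative assumptions (the example above specifies only the 20-Mbps grandparent shaper and the BRR values).

! Shaper on the physical interface (the 20-Mbps congestion point)
policy-map PHY-SHAPER
  class class-default
    shape average 20000000
! Common child policy for all tunnels
policy-map TUN-FQ
  class class-default
    fair-queue
! Parent policy for the center tunnel: twice the default share
policy-map CENTER-PARENT
  class class-default
    shape average 20000000
    bandwidth remaining ratio 2
    service-policy TUN-FQ
! Parent policy for the left and right tunnels: default share (ratio 1)
policy-map EDGE-PARENT
  class class-default
    shape average 20000000
    service-policy TUN-FQ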

Q.    What are the restrictions on the use of the bandwidth command in Cisco IOS XE Software?
A.     In classic Cisco IOS Software, it is permitted to configure bandwidth at the leaf and intermediate nodes of a hierarchy. In Cisco IOS XE Software, bandwidth is allowed only at the leaf node of a hierarchy. This is a software restriction and may be lifted in the future. For current deployments where a classic IOS QoS policy map is being moved to an IOS XE platform, the best option is to convert the intermediate-node bandwidth commands to bandwidth remaining commands. The bandwidth remaining percent or bandwidth remaining ratio commands can be used to achieve very similar behavior.
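The following sketch illustrates one such conversion; the class names, rates, and chosen percentage are illustrative, and the percentage should be selected to reflect the share of the parent rate that the original bandwidth value represented.

policy-map LEAF-Q
  class class-default
    fair-queue
! Classic IOS style: bandwidth at an intermediate node (rejected on IOS XE)
policy-map PARENT-CLASSIC
  class business
    bandwidth 30000
    service-policy LEAF-Q
! IOS XE style: express the intermediate-node share with bandwidth remaining
policy-map PARENT-XE
  class business
    bandwidth remaining percent 30
    service-policy LEAF-Q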
Q.    What is the impact of using very slow and fast rate shapers on the same physical interface?
A.     Neither Cisco IOS XE Software nor the Cisco ASR 1000 QFP hardware imposes any limitation on the range of rates that can be used on a given physical interface. However, the hardware uses the lowest rate configured at a given schedule level to decide how often to check whether traffic is permitted from that level of the schedule. As a general rule, if all the shapers at a given level of the hierarchy are within a 1000:1 ratio, the jitter profile of the transmitted traffic will be within normal parameters. If the range of shapers exceeds the 1000:1 ratio, the schedule is still checked based on the slowest configured rate. The slower that rate, the less often the hardware checks the schedule for transmission opportunities. This can cause the faster schedule nodes to transmit in a bursty manner, because there are fewer opportunities to transmit.
A workaround that avoids bursty transmission of the high-rate traffic is to place the slow and fast rates at different levels of the hierarchy. Consider a scenario in which multiple Ethernet sub-interfaces on a given physical interface are configured with two-level policy maps and parent shaper rates ranging from 500 Mbps down to 64 kbps. 500,000 kbps to 64 kbps is clearly beyond the 1000:1 ratio. The solution is to add a grandparent shaper to the slow-rate policy maps. The grandparent class-default-only shaper could shape at 500 Mbps, while the parent shaper keeps the original 64-kbps rate. By introducing this extra level, the topmost schedule nodes are all within the same order of magnitude. The grandparent shapers on the slow-rate sub-interfaces never actually rate-limit traffic, because the rate limiting is performed by the shaper at the parent (middle) level. This configuration allows the hardware to appropriately schedule the other, faster shapers that remain two-level hierarchies, while still providing the slow-rate sub-interfaces with the appropriate behavior. A sketch of this workaround follows.
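The sketch assumes two sub-interfaces on the same physical interface; the interface numbers, VLANs, and exact rates are illustrative.

! Common child policy
policy-map CHILD-Q
  class class-default
    fair-queue
! Fast sub-interface: unchanged two-level policy (500-Mbps parent shaper)
policy-map FAST-PARENT
  class class-default
    shape average 500000000
    service-policy CHILD-Q
! Slow sub-interface: the original 64-kbps shaper moved down one level,
! beneath a 500-Mbps class-default-only grandparent shaper
policy-map SLOW-PARENT
  class class-default
    shape average 64000
    service-policy CHILD-Q
policy-map SLOW-GRANDPARENT
  class class-default
    shape average 500000000
    service-policy SLOW-PARENT
interface GigabitEthernet0/0/1.100
  encapsulation dot1Q 100
  service-policy output FAST-PARENT
interface GigabitEthernet0/0/1.200
  encapsulation dot1Q 200
  service-policy output SLOW-GRANDPARENT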

Interaction with Cryptography

Q.    How is QoS low-latency priority queuing acknowledged as traffic is sent to the cryptography engine?
A.     There are high- and low-priority queues for traffic being sent to the cryptography engine. Any traffic that matches an egress high-priority QoS class is sent through the high-priority queue to the cryptography engine. Priority level 1 and priority level 2 traffic moves through a single high-priority queue to the cryptography hardware; all other traffic is sent through the low-priority queue. After the traffic has returned from the cryptography hardware, priority levels 1 and 2 are honored in independent queues, followed by nonpriority traffic. PAK_PRI traffic moves through the low-priority queue for cryptography by default; it uses the high-priority queue for cryptography only if it is classified into a high-priority class through an MQC policy map.
Q.    How does cryptography affect the size of packets that QoS observes?
A.     Queuing functions on physical interfaces or tunnel interfaces see the complete packet size, including any cryptography overhead that was added to the packet. If the policy map is applied to the tunnel interface, policers do not observe the Layer 2 or cryptography overhead. Note that if a policer is used on a priority class, it is advisable to adjust the policer rate down accordingly, because the rate observed by the priority policer differs from the rates used by classes configured with other queuing functions.
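As a hedged illustration of that adjustment, the sketch below polices the priority class of a tunnel policy below the intended on-the-wire rate; the 10-Mbps target and the amount of reduction are purely illustrative and should be derived from the expected per-packet Layer 2 and cryptography overhead in the actual deployment.

policy-map TUNNEL-PRI-CHILD
  class voice
    priority
    ! The policer on the tunnel does not see Layer 2 or cryptography
    ! overhead, so it is configured below the 10-Mbps on-the-wire target
    police cir 9000000
  class class-default
    fair-queue
policy-map TUNNEL-PRI-PARENT
  class class-default
    shape average 50000000
    service-policy TUNNEL-PRI-CHILD
interface Tunnel20
  service-policy output TUNNEL-PRI-PARENT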
Q.    Why do cryptographic connections sometimes fail when QoS is configured?
A.     Cryptography happens before egress QoS queuing. When encryption occurs, a sequence number is sometimes included in the encryption headers. If the packets are subsequently delayed significantly because of high queue depths, the remote router can declare the packets outside of the anti-replay window and drop the encrypted connection. Potential workarounds include increasing the available bandwidth with QoS (to decrease latency) or increasing the anti-replay window size.
For information about IPsec anti-replay window expanding and disabling, please visit: http://www.cisco.com/en/US/docs/ios-xml/ios/sec_conn_dplane/configuration/xe-3s/asr1000/sec-ipsec-antireplay.html.
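A minimal sketch of the window-size adjustment is shown below; the value is illustrative, and the supported range and per-crypto-map variants are covered in the linked document.

! Increase the global IPsec anti-replay window (illustrative value)
crypto ipsec security-association replay window-size 1024
! Or, where acceptable for the deployment, disable anti-replay checking
crypto ipsec security-association replay disable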
Q.    How can packet drops to the cryptography engine be monitored?
A.     There are high- and low-priority queues for traffic destined for the cryptography engine. Those queues can be monitored with platform hardware commands; the following is an example. Packet and byte drops are reported in the “tail drops” statistic.
In the output, plevel 0 is the low-priority queue and plevel 1 is the high-priority queue.

ASR1000#show plat hardware qfp active infrastructure bqs queue output default all   | inc crypto                                    

Interface: internal1/0/crypto:0 QFP: 0.0 if_h: 6 Num Queues/Schedules: 2

 

ASR1000#show plat hardware qfp active infrastructure bqs queue output default interface-string internal1/0/crypto:0                 

Interface: internal1/0/crypto:0 QFP: 0.0 if_h: 6 Num Queues/Schedules: 2

  Queue specifics:

    Index 0 (Queue ID:0x88, Name: i2l_if_6_cpp_0_prio0)

    Software Control Info:

      (cache) queue id: 0x00000088, wred: 0x88b168c2, qlimit (bytes): 73125056

      parent_sid: 0x261, debug_name: i2l_if_6_cpp_0_prio0

      sw_flags: 0x08000001, sw_state: 0x00000c01, port_uidb: 0

      orig_min  : 0                   ,      min: 0                  

      min_qos   : 0                   , min_dflt: 0                  

      orig_max  : 0                   ,      max: 0                   

      max_qos   : 0                   , max_dflt: 0                  

      share     : 1

      plevel    : 0, priority: 65535

      defer_obj_refcnt: 0

    Statistics:

      tail drops (bytes): 0           ,          (packets): 0                   

      total enqs (bytes): 0           ,          (packets): 0                  

      queue_depth (bytes): 0                  

  Queue specifics:

    Index 1 (Queue ID:0x89, Name: i2l_if_6_cpp_0_prio1)

    Software Control Info:

      (cache) queue id: 0x00000089, wred: 0x88b168d2, qlimit (bytes): 73125056

      parent_sid: 0x262, debug_name: i2l_if_6_cpp_0_prio1

      sw_flags: 0x18000001, sw_state: 0x00000c01, port_uidb: 0

      orig_min  : 0                   ,      min: 0                  

      min_qos   : 0                   , min_dflt: 0                  

      orig_max  : 0                   ,      max: 0                  

      max_qos   : 0                   , max_dflt: 0                  

      share     : 0

      plevel    : 1, priority: 0

      defer_obj_refcnt: 0

    Statistics:

      tail drops (bytes): 0           ,          (packets): 0                  

      total enqs (bytes): 0           ,          (packets): 0                  

      queue_depth (bytes): 0                  

 

General Recommendations

Q.    In what order should I add commands to a class map?
A.     Although there is no strict requirement that you add commands in a particular order, the following describes the best practice:
For queuing classes, add commands in this order:

   Queuing features (shape, bandwidth, bandwidth remaining, and priority)

   account

   queue-limit

   set actions

   police

   fair-queue

   random-detect

   service-policy

For nonqueuing classes, ordering is not as important, but the following order is preferred (a combined example follows this list):

   set actions

   police

   service-policy
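The following sketch shows one queuing class and one nonqueuing class with their commands in the recommended order; the class names, rates, and marking values are illustrative.

policy-map ORDER-EXAMPLE
  ! Queuing class: queuing feature first, then queue-limit, set, police
  class critical-data
    bandwidth remaining ratio 4
    queue-limit 512 packets
    set dscp af31
    police cir 50000000
  ! Nonqueuing class: set actions, then police
  class scavenger
    set dscp cs1
    police cir 1000000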

Q.    When is it acceptable to configure multiple policy maps for traffic?
A.     First, it is important to understand the difference between queuing and nonqueuing policy maps. Queuing policy maps include at least one of the following features in at least one class:

   shape

   bandwidth

   bandwidth remaining

   random-detect

   queue-limit

   priority

The practice of configuring multiple queuing policy maps that traffic must traverse is sometimes called multiple policy maps (MPOL). In general, the Cisco ASR 1000 accepts only one queuing policy map in the egress path of a given traffic flow. For example, if a Gigabit Ethernet sub-interface has a queuing policy map configured, it is not possible to configure another queuing policy map on the main interface.
Certain configurations do not carry this limitation, however. Here is a list of those scenarios where multiple queuing policy maps are supported:

   Broadband QoS, class default-only queuing policy map on Ethernet sub-interface, and two-level hierarchical queuing policy map on session (through virtual template or RADIUS configuration) (sometimes referred to as model F broadband QoS).

   Tunnels (GRE, DMVPN, sVTI, and dVTI) with a two-level hierarchical queuing policy map, where the targeted egress physical interface has a class-default-only flat queuing policy map with a maximum rate configured (shape): The tunnels may target the physical interface directly or rely on the routing table to select the egress interface. This feature is supported as of Cisco IOS XE Software Release 3.6 (a sketch follows this list).

   Policy aggregation where priority queues are configured on the sub-interfaces and nonpriority queues are configured on the main interface: This scenario requires the use of service fragments.

   Policy aggregation where priority queues are configured on the main interface and nonpriority queues are configured on the sub-interfaces: This scenario requires the use of service fragments.
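A minimal sketch of the tunnel scenario described above follows; interface numbers, class names, and rates are illustrative.

! Class-default-only flat queuing policy on the egress physical interface
policy-map PHY-FLAT
  class class-default
    shape average 500000000
! Two-level hierarchical queuing policy on the tunnel
policy-map TUN-CHILD
  class voice
    priority
    police cir 5000000
  class class-default
    fair-queue
policy-map TUN-PARENT
  class class-default
    shape average 50000000
    service-policy TUN-CHILD
interface GigabitEthernet0/0/2
  service-policy output PHY-FLAT
interface Tunnel10
  service-policy output TUN-PARENT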