
Understanding the Transmit Queue Limit With IP to ATM CoS

Document ID: 6190

Updated: Nov 15, 2007

Introduction

This document clarifies how a router calculates the size of the queue limit when per-VC queueing features are enabled on an ATM router interface that supports IP to ATM Class of Service (CoS). Cisco's modular Quality of Service (QoS) CLI (known as MQC) is used to configure service policies that you apply to a logical interface, whether a main interface, a subinterface, or a virtual circuit. These service policies implement a QoS action, from policing and shaping to marking and queueing.

Before You Begin

Conventions

For more information on document conventions, see the Cisco Technical Tips Conventions.

Prerequisites

There are no specific prerequisites for this document.

Components Used

This document is not restricted to specific software and hardware versions.

The information presented in this document was created from devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If you are working in a live network, ensure that you understand the potential impact of any command before using it.

Two Sets of Queues

Cisco router interfaces with per-VC queueing features enabled store packets for an ATM VC in one of two sets of queues depending on the congestion level of the VC:

Hardware queue or transmit ring
  • Location: Port adapter or network module
  • Queueing methods: FIFO only
  • Service policies apply: No
  • Command to tune: tx-ring-limit

Layer-3 queue
  • Location: Layer-3 processor system or interface buffers
  • Queueing methods: FIFO, CBWFQ, or LLQ
  • Service policies apply: Yes
  • Command to tune: Varies with queueing method: vc-hold-queue (FIFO) or queue-limit (CBWFQ)

Congestion is defined as filling the transmit ring (tx-ring-limit). See Understanding and Tuning the tx-ring-limit Value.
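
For illustration, here is a minimal sketch (the interface, VC, and ring size are only illustrative) of tuning the transmit ring on a per-VC basis:

      interface ATM2/0.10 point-to-point
       pvc 10/32
        ! A smaller transmit ring pushes congestion back into the layer-3
        ! queues, where service policies and WRED can act on the packets.
        tx-ring-limit 10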

Activating Layer-3 Queues

It is important to understand when your router uses the layer-3 queues, since service policies apply only to packets stored in the layer-3 queues. The ATM port adapter or network module and the layer-3 processor system collaborate in the following way:

  1. The ATM interface transmits cells on each ATM permanent virtual circuit (PVC) according to the ATM shaping rate.

  2. The ATM interface maintains a per-VC hardware queue or transmit ring, where it stores the packets waiting for transmission onto that VC.

  3. When the hardware queue or transmit ring fills, the ATM interface provides explicit back pressure to the layer-3 processor system. This per-VC back pressure prevents a single ATM PVC from consuming an unnecessary share of buffers. It notifies the layer-3 processor to stop dequeueing packets destined for that particular VC to the ATM interface's transmit ring because the per-VC queue has reached a certain occupancy level. The layer-3 processor now stores the excess packets in the layer-3 queues. During this time, the layer-3 processor continues to forward packets destined for other noncongested PVCs.

  4. When the ATM interface transmits the packets on the transmit ring and the ring drains, it again has sufficient buffers available to store new packets. It releases the back pressure, and the layer-3 processor dequeues new packets to the ATM interface.

  5. When the total number of packets buffered on the ATM interface for all PVCs reaches a certain level compared to the total available buffering space, the ATM interface provides back pressure at the aggregate all-VC level. This back pressure notifies the layer-3 processor to stop sending any packets to the ATM interface.

Importantly, with this communication system, the ATM interface recognizes that its transmit ring is full for a particular VC and throttles the receipt of new packets from the layer-3 processor system. Thus, when the VC is congested, the drop decision is moved from a random, last-in/first-dropped decision in the transmit ring's first in, first out (FIFO) queue to a differentiated decision based on IP-level service policies implemented by the layer-3 processor.

What is the Queue Limit?

The layer-3 queue always has a queue limit. This value defines the maximum number of packets that the queue can hold. When this queue fills, the router initiates a drop policy. This policy can be tail drop or Weighted Random Early Detection (WRED). In other words, the queue limit defines how many packets can be stored in the layer-3 queue before drops start to occur.

The router automatically assigns a default queue-limit value. The calculated value varies with the queueing method and with the platform. Importantly, the queue limit needs to be small enough to avoid introducing latency due to queueing, but large enough to avoid drops and the resulting impact on TCP-based flows.

On distributed platforms like the Cisco 7500 series and the FlexWAN, the default value varies with the number of interfaces in the system. Thus, classes in a system with only two interfaces may receive more buffers than in a system with hundreds of subinterfaces and VCs. The router gives each class a minimum value to ensure enough buffers to feed the interface at line rate. The queue limits represent a credit limit for the interface. In other words, the router allocates the buffers among interfaces, PVCs, and classes in proportion to the bandwidth of those interfaces, PVCs, and classes. By default, the queue-limit values do not oversubscribe the available buffers.

The following sections discuss the queue limits in more detail.

Queue Limit With FIFO

On ATM VCs on non-distributed platforms, per-VC queueing and the layer-3 queues are enabled by default on supporting Cisco IOS® software releases. FIFO is the default queueing method applied to the layer-3 queues when no specific queueing mechanism has been configured. The layer-3 queues use FIFO by default since the default queueing algorithm on an ATM interface is also FIFO. Originally, these queues supported a queue limit of only 40. We can see this in the output below:

router#show queueing interface atm 2/0.10    
        Interface ATM2/0.10 VC 10/32 
        Queueing strategy: FIFO 
        Output queue 0/40, 244 drops per VC 

As of Cisco IOS software release 12.1(5)T, you can tune the size of the per-VC FIFO queue to a value between 5 and 1024 with the vc-hold-queue command.
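
For example, a minimal sketch (the interface, VC, and value are only illustrative) that deepens the per-VC FIFO queue from the default of 40 packets:

      interface ATM2/0.10 point-to-point
       pvc 10/32
        ! Per-VC FIFO layer-3 queue depth; valid range is 5 to 1024 packets
        vc-hold-queue 500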

Queue Limit With CBWFQ

The queue-limit command applies only to classes configured with Class-Based Weighted Fair Queueing (CBWFQ) using the bandwidth command. The queue-limit command defines the number of packets that the layer-3 queues will store before drops begin to occur. In other words, it is the depth of the layer-3 queue.
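
As an illustration, here is a minimal configuration sketch (the class name, match criterion, and values are hypothetical) that sets an explicit queue limit on a bandwidth class and attaches the policy to a PVC:

      class-map match-all gold
       match ip dscp 26
      !
      policy-map cbwfq-example
       class gold
        ! queue-limit is accepted only on classes configured with the bandwidth command
        bandwidth 256
        queue-limit 100
      !
      interface ATM2/0.10 point-to-point
       pvc 10/32
        service-policy output cbwfq-example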

The default queue-limit value varies with the platform.

  • Cisco 2600, 3600, and 7200 series routers and the MC3810: The default value is 64. The following sample output was captured on an ATM network module in a 2600 series router.

    router#show queueing interface atm 2/0.10    
          Interface ATM2/0.10 VC 10/32 
          Queueing strategy: weighted fair 
          Total output drops per VC: 1539 
          Output queue: 0/512/64/1539 (size/max total/threshold/drops)    
             Conversations  0/37/128 (active/max active/max total)  
             Reserved Conversations 0/0 (allocated/max allocated) 
    
  • Cisco 7500 series and FlexWAN: The default value is calculated by giving each class its proportional share of the parent buffers. The proportion is based on the bandwidth allocated to the class as compared to the bandwidth of the parent. Specifically, the queue limit is sized for a maximum queueing delay of 500 ms, assuming an average packet size of 250 bytes. For example, a class with 1 Mbps of bandwidth is given a queue limit of 1000000 / (250 x 8 x 2) = 250; a worked check against the sample output appears after the example below. Importantly, the default is also based on the following:

    • The amount of available SRAM or packet memory.

    • The number of interfaces, since the available SRAM must be divided among the interfaces.

      interface ATM9/1/0.100 point-to-point 
       ip address 1.1.1.1 255.255.255.0 
       pvc 1/100 
        ubr 1000 
        service-policy out pmap 

      flexwan#show policy-map interface atm 9/1/0.100
         ATM9/1/0.100: VC 1/100
         service-policy output: pmap
         queue stats for all priority classes:        
                     queue size 0, queue limit 75 
                     packets output 0, packet drops 0 
                     tail/random drops 0, no buffer drops 0, other drops 0 
         class-map: e1 (match-all) 
                     0 packets, 0 bytes 
                     5 minute offered rate 0 bps, drop rate 0 bps 
                     match: ip dscp 10 
                     Priority: kbps 300, burst bytes 7500, b/w exceed drops: 0 
         class-map: e2 (match-all) 
                     0 packets, 0 bytes 
                     5 minute offered rate 0 bps, drop rate 0 bps 
                     match: ip dscp 20 
                     queue size 0, queue limit 75 
                     packets output 0, packet drops 0 
                     tail/random drops 0, no buffer drops 0, other drops 0 
                     bandwidth: kbps 300, weight 42 
         class-map: class-default (match-any)        
                     0 packets, 0 bytes 
                     5 minute offered rate 0 bps, drop rate 0 bps 
                     match: any 
                       0 packets, 0 bytes 
                       5 minute rate 0 bps 
                     queue size 0, queue limit 33 
                     packets output 2, packet drops 0 
                     tail/random drops 0, no buffer drops 0, other drops 0 
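
    As a worked check of this formula against the output above, the e2 class is allocated 300 kbps, so its default queue limit is 300000 / (250 x 8 x 2) = 75 packets, which matches the queue limit of 75 reported for that class (and for the 300-kbps priority class).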
      

Note: The Versatile Interface Processor (VIP) and the FlexWAN choose the default queue-limit value and send it to the main processor (such as the Route Switch Processor [RSP] on the 7500 series) with the first set of packet count statistics. Thus, until the ATM VC carries traffic, an incorrect value may appear in the output of show policy-map interface.

Queue Limit With LLQ

Low latency queueing (LLQ) implements both a minimum and a maximum bandwidth guarantee, which you configure with the priority command. During congestion, LLQ polices priority traffic to its allocated bandwidth to ensure that non-priority traffic, such as routing packets and other data, is not starved. Since excess priority packets are dropped by this policing mechanism rather than held against a queue limit, the queue-limit command cannot be used with the priority command.

Queue Limit and WRED

WRED can be configured as an optional drop policy on packets in the layer-3 queues. You can configure WRED together with a queueing mechanism such as CBWFQ or LLQ.

On the VIP and FlexWAN, the default WRED parameters are derived directly from the default queue-limit. Specifically, the max-threshold value is set to half of the default queue-limit, and the min-threshold values are scaled down proportionally.

In addition, the default WRED threshold values take into account the ATM shaping parameters associated with the VC: the higher the VC shaping rate, the larger the default min- and max-thresholds, which accommodates the larger bursts that can occur at higher rates. For instance, here are the default WRED parameters that a particular router applies to a VC shaped at 10 kbps:

nf-7505-1# show running-config 
     interface ATM1/1/0.47 point-to-point 
      atm pvc 47 0 47 aal5snap 10 10 1 random-detect wredgroup1    
     nf-7505-1# show queueing red    
     VC 0/47 - 
     random-detect group default:    
     exponential weight 9 
     precedence    min-threshold    max-threshold   mark-probability 
     ---------------------------------------------------------------    
     0:            20                    40                    1/10 
     1:            22                    40                    1/10 
     2:            24                    40                    1/10 
     3:            26                    40                    1/10 
     4:            28                    40                    1/10 
     5:            30                    40                    1/10 
     6:            32                    40                    1/10 
     7:            34                    40                    1/10 

In comparison, here are the default WRED parameters applied by the same router to a VC shaped at 9 Mbps of sustained cell rate (SCR) and 10 Mbps of peak cell rate (PCR):

   nf-7505-1#show running-config 
   interface ATM1/1/0.49 point-to-point 
    atm pvc 49 0 49 aal5snap 10000 9000 100 random-detect wredgroup3    
   nf-7505-1#show queueing red  
   VC 0/49 - 
   random-detect group default:  
   exponential weight 9 
   precedence    min-threshold    max-threshold       mark-probability 
   ---------------------------------------------------------------  
   0:            72                  144                 1/10 
   1:            81                  144                 1/10 
   2:            90                  144                 1/10 
   3:            99                  144                 1/10 
   4:            108                 144                 1/10 
   5:            117                 144                 1/10 
   6:            126                 144                 1/10 
   7:            135                 144                 1/10  
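
Note the pattern in both sets of defaults: the precedence 0 min-threshold is half of the max-threshold (20 of 40, and 72 of 144), and the min-thresholds for the higher precedence values step up evenly toward the max-threshold.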

The queue-limit defines the maximum number of packets that the layer-3 queues can store at any given moment in time. The max-threshold defines the maximum mean queue depth. When changing the queue limit, ensure that you also adjust the WRED thresholds and that the configured queue-limit is larger than the WRED max thresholds.

Even on a VC configured with WRED, all packets that arrive on the VC when the average queue size is above the queue limit are tail dropped. Thus, in the following configuration, the queue-limit of 400 and the minimum threshold of 460 for differentiated services code point (DSCP) 32 cause tail drop at an average queue size of 400 packets and effectively prevent WRED from ever taking effect.

 policy-map ppwe 
     class voip 
       priority 64 
     class bus 
       bandwidth 168 
       random-detect dscp-based 
       random-detect exponential-weighting-constant 10    
       random-detect dscp 8 11 66 1 
       random-detect dscp 32 460 550 1 
       queue-limit 400
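
A minimal corrected sketch (the new value is only illustrative) raises the queue-limit above the largest configured WRED max-threshold so that WRED can act before tail drop occurs:

 policy-map ppwe 
     class voip 
       priority 64 
     class bus 
       bandwidth 168 
       random-detect dscp-based 
       random-detect exponential-weighting-constant 10 
       random-detect dscp 8 11 66 1 
       random-detect dscp 32 460 550 1 
       ! queue-limit now exceeds the 550-packet max-threshold for DSCP 32
       queue-limit 600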

Note: See also Considerations on WRED Fine-Tuning in the IP to ATM Class of Service Phase 1 Design Guide when adjusting the default threshold values.
