Updated: September 30, 2016
This document describes the queue structure and buffers on the Catalyst 3650/3850 platform. It also provides examples of how output drops can be mitigated to a certain extent.
Output drops are generally the result of interface oversubscription caused by many-to-one or 10Gig-to-1Gig transfers. Interface buffers are a limited resource and can only absorb a burst up to a point, after which packets drop. Tuning the buffers can give you some cushion, but it cannot guarantee a zero output drop scenario.
It is recommended to run the latest 03.06 or 03.07 release in order to get appropriate buffer allocations, as older releases contain known buffer-allocation bugs.
Cisco recommends that you have basic knowledge of QoS on Catalyst platforms.
The information in this document is based on these software and hardware versions:
Cisco Catalyst 3850
Traditionally, buffers are statically allocated for each queue, and as the number of queues increases, the amount of reserved buffer per queue decreases. This is inefficient and can leave too few buffers to handle frames for all queues.
To get around that type of limitation, Catalyst 3650/3850 platform uses Hard buffers and Soft buffers.
Hard buffers: This is the minimum buffer reserved for a specific queue. If the queue does not use these buffers, they are not available to other queues.
Soft buffers: These buffers are assigned to a queue but can be shared by other queues and interfaces when they are not in use.
Default buffer allocation when no service-policy is applied:
The default buffer allocation for a 1Gig port is 300 buffers, and for a 10Gig port it is 1800 buffers (1 buffer = 256 bytes). With default settings, the port can use up to 400% of its default allocation from the common pool, which is 1200 buffers for a 1Gig interface and 7200 buffers for a 10Gig interface.
The default soft buffer limit is set to 400% (the max threshold). The threshold determines the maximum number of soft buffers that can be borrowed from the common pool.
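As a quick illustration of the numbers above (the buffer counts come from this document; the function is only illustrative arithmetic, not a Cisco API):

```python
# Default per-port buffer figures on the 3650/3850 (1 buffer = 256 bytes).
# The 400% figure is the default max threshold for borrowing from the
# common pool; all values are taken from the text above.

DEFAULT_BUFFERS = {"1G": 300, "10G": 1800}
MAX_THRESHOLD_PCT = 400

def max_port_buffers(speed):
    """Default dedicated buffers scaled by the max threshold (common pool)."""
    dedicated = DEFAULT_BUFFERS[speed]
    return dedicated * MAX_THRESHOLD_PCT // 100

print(max_port_buffers("1G"), "buffers")    # 1200
print(max_port_buffers("10G"), "buffers")   # 7200
```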
When no service-policy is applied, there are two default queues (queue 0 and queue 1). Queue 0 is used for control traffic (DSCP 32, 48, or 56) and queue 1 is used for data traffic.
By default, queue 0 is given 40% of the interface's buffers as its hardmax: 120 buffers on 1Gig ports and 720 buffers on 10Gig ports. The softmax (maximum soft buffers) for this queue is 480 (calculated as 400% of 120) on 1Gig ports and 2880 on 10Gig ports, where 400 is the default max threshold configured for any queue.
Queue 1 does not have any hard buffers allocated. Its softmax is calculated as 400% of the interface buffers remaining after the queue-0 allocation: 400% of 180 for a 1Gig interface and 400% of 1080 for a 10Gig interface.
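The two-queue default split described above can be sketched numerically (the function name is illustrative, not a Cisco API; all percentages and buffer counts come from this document):

```python
# Default two-queue split: queue 0 gets 40% of the port buffers as hardmax,
# queue 1 gets no hardmax; softmax is 400% (the default max threshold) of
# each queue's base allocation. Illustrative arithmetic only.

def default_two_queue_split(port_buffers, q0_pct=40, max_threshold_pct=400):
    """Return ((q0_hardmax, q0_softmax), (q1_hardmax, q1_softmax))."""
    q0_hard = port_buffers * q0_pct // 100
    q0_soft = q0_hard * max_threshold_pct // 100
    q1_soft = (port_buffers - q0_hard) * max_threshold_pct // 100
    return (q0_hard, q0_soft), (0, q1_soft)

print(default_two_queue_split(300))    # ((120, 480), (0, 720))   1Gig port
print(default_two_queue_split(1800))   # ((720, 2880), (0, 4320)) 10Gig port
```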
The show command that can be used to see this allocation is ‘show platform qos queue config <interface>’.
Hardmax or Hard Buffers is the amount of Buffer that is always reserved and available for this queue.
Softmax, or soft buffers, is the amount of buffer a queue can borrow from other queues or from the global pool. The total softmax per interface is 1200 (400% of 300) for a 1Gig interface and 7200 buffers for a 10Gig interface. When a service-policy is applied, one extra queue is created for class-default if it is not explicitly configured. All traffic that does not match any of the previously defined classes falls into this queue. There cannot be any match statement under this class.
Tweaking Buffer Allocation
In order to tweak the buffers on the 3650/3850 platform, attach a service-policy to the respective interface. The hardmax and softmax buffer allocation can then be tuned via the service-policy.
Hard buffer and Soft buffer calculations:
This is how the system allocates softmax and hardmax for each queue.
Total port buffers = 300 (1Gig) or 1800 (10Gig). If there is a total of 5 queues (5 classes), each queue gets 20% of the buffers by default.
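The per-queue share described above can be sketched as simple arithmetic (illustrative only; the helper name is not a Cisco API):

```python
# Per-queue default share when a policy creates N queues: each queue
# nominally gets (100 / N)% of the port's total buffers, per this document.

def per_queue_share(port_buffers, num_queues):
    share_pct = 100 // num_queues          # 5 queues -> 20% each
    return port_buffers * share_pct // 100

print(per_queue_share(300, 5))    # 60 buffers per queue on a 1Gig port
print(per_queue_share(1800, 5))   # 360 buffers per queue on a 10Gig port
```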
Using Service policy for Hardmax or Softmax buffer allocation
If a service-policy is applied, only the priority queues with level 1/2 get hardmax. The examples below help clarify the buffer allocation for a specific service-policy on a 1Gig interface and a 10Gig interface.
As noted earlier, with the default configuration (no service-policy applied), queue 0 gets a default hardmax of 120 buffers on a 1Gig link and 720 buffers on a 10Gig link.
Note: There are 5 classes present even though only 4 classes were created; the 5th is the default class. Each class represents a queue, and the order in which they are shown is the order in which they appear in the running configuration when checking "show run | sec policy-map".
For this third example, one extra class is added, so the total number of queues becomes 6. With 2 priority levels configured, each priority queue gets 51 buffers as hardmax. The math is the same as in the previous example.
For 1Gig interface:
policy-map MYPOL
 class ONE
  priority level 1 percent 20
 class TWO
  priority level 2 percent 10
 class THREE
  bandwidth percent 10
 class FOUR
  bandwidth percent 5
 class FIVE
  bandwidth percent 10
3850#show run int gig1/0/1
Current configuration : 67 bytes
!
interface GigabitEthernet1/0/1
 service-policy output MYPOL
end
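The hardmax arithmetic for this 6-queue policy can be sketched as follows. The rounding to 17% is inferred from the 51-buffer figure in this document, not a published Cisco formula:

```python
# Hardmax sketch for the 6-queue MYPOL policy above: with a service-policy
# applied, only the priority (level 1/2) queues receive hardmax. Each
# queue's nominal share is 100% / number-of-queues; the rounding behavior
# is an assumption inferred from the numbers in this document.

def priority_queue_hardmax(port_buffers, num_queues):
    share_pct = round(100 / num_queues)    # 6 queues -> 17% each (assumed)
    return port_buffers * share_pct // 100

print(priority_queue_hardmax(300, 6))   # 51 buffers per priority queue (1Gig)
```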
Note: Sometimes you may see fewer buffers allocated to some queues. This is expected, as the values cannot fit into the softmax calculation for the priority queue and non-priority queues in certain configuration combinations.
In summary, the more queues you create, the fewer buffers each queue gets in terms of hardmax and softmax (as hardmax also depends on the softmax value).
Note: Starting from 3.6.3 or 3.7.2, the maximum softmax value can be modified with the CLI command "qos queue-softmax-multiplier 1200", where 100 is the default value. When configured as 1200, the softmax for non-priority queues and non-primary priority queues (not level 1) is multiplied by 12 from its default value. This command takes effect only on ports where a policy-map is attached, and it does not apply to the level 1 priority queue.
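Using queue 1's default softmax of 720 from earlier as an example, the multiplier's effect is simply (illustrative arithmetic only):

```python
# Effect of "qos queue-softmax-multiplier" on a non-priority queue's softmax.
# The default multiplier is 100 (no change); 1200 means 12x. The 720 figure
# is queue 1's default softmax on a 1Gig port, per this document.

def scaled_softmax(default_softmax, multiplier=100):
    return default_softmax * multiplier // 100

print(scaled_softmax(720))          # 720  (default multiplier of 100)
print(scaled_softmax(720, 1200))    # 8640 (12x the default softmax)
```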
Using Service policy for manually changing the Softmax buffer value:
The service-policy configuration and the corresponding buffer allocation are shown below:
policy-map TEST_POLICY
 class ONE
  priority level 1 percent 40
 class TWO
  bandwidth percent 40
 class THREE
  bandwidth percent 10
3850#show run int gig1/0/1
Current configuration : 67 bytes
!
interface GigabitEthernet1/0/1
 service-policy output TEST_POLICY
end
Note: Sometimes you may see fewer buffers allocated to some queues. This is expected, as the values cannot fit into the softmax calculation for the priority queue and non-priority queues in certain configuration combinations; an internal algorithm takes care of this.
Allocating all the softmax buffers to the single default queue
 class class-default
  bandwidth percent 100
  queue-buffers ratio 100
There is no hardmax buffer, because the policy applied to the interface does not contain any priority queue with "level" set.
As soon as the policy-map is applied, the second queue is disabled, leaving only one queue in the system.
The caveat is that all packets now use this single queue, including control packets such as OSPF, EIGRP, and STP.
During congestion (a broadcast storm, for example), this can easily cause network disruption.
The same holds if you define other classes that do not match the control packets.
Case Study: Output Drops
For this test, an IXIA traffic generator is connected to a 1Gig interface, and the egress port is a 100Mbps interface. This is a 1Gbps-to-100Mbps connection, and a burst of traffic at 1Gbps is sent for 1 second, which causes output drops on the egress 100Mbps interface.
With the default configuration (no service-policy applied), the number of output drops after sending one stream is shown below:
3850#show interfaces gig1/0/1 | in output drop
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 497000
These drops occur at Th2, which is the default threshold. By default, the system uses the max threshold as the drop threshold, which is Drop-Th2.
3850#show interfaces gig1/0/1 | in output drop
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Now the softmax for queue 0 can go up to 10,000 buffers, and as a result the drops are zero.
Note: In real life this exact scenario may not be possible, as other interfaces may also use the buffers, but this can definitely help reduce packet drops to a certain level.
The maximum soft buffer available for an interface can be increased using this command; however, keep in mind that the extra buffers are available only if no other interface is using them.
1. When you create more queues, each queue gets fewer buffers.
2. The total number of soft buffers available can be increased using the "qos queue-softmax-multiplier X" command.
3. If you define only class-default in order to tweak the buffers, all traffic falls into that single queue, including control packets. Be advised that with all traffic in one queue there is no differentiation between control and data traffic, and during congestion control traffic can be dropped. It is therefore recommended to create at least one other class for control traffic. CPU-generated control packets always go to the first priority queue even if they are not matched in the class-map. If no priority queue is configured, they go to the first queue of the interface, which is queue 0.
4. Prior to CSCuu14019, interfaces do not display "output drop" counters; use "show platform qos queue stats" to check for drops.
5. An enhancement request, CSCuz86625, has been submitted to allow configuring the soft-max multiplier without a service-policy (resolved in 3.6.6 and later).