This document provides an understanding of the queue structure and buffers on the Catalyst 3650/3850 platform, and gives examples of how output drops can be mitigated to a certain extent.
Output drops are generally the result of interface oversubscription caused by a many-to-one or 10Gig-to-1Gig transfer. Interface buffers are a limited resource and can only absorb a burst up to a point, after which packets drop. Tuning the buffers can give you some cushion, but it cannot guarantee a zero output drop scenario.
Due to known bugs in older releases, it is recommended that you run the latest 03.06 or 03.07 version in order to get appropriate buffer allocations.
Cisco recommends that you have basic knowledge of QoS on the Catalyst platform.
The information in this document is based on the Cisco Catalyst 3650 and 3850 Series Switches.
Traditionally, buffers are statically allocated for each queue; as you increase the number of queues, the amount of reserved buffer per queue decreases. This is inefficient and can result in too few buffers to handle frames for all queues.
To get around that type of limitation, the Catalyst 3650/3850 platform uses two types of buffers: hard buffers and soft buffers.
Hard buffers: This is the minimum buffer reserved for a specific queue. If the queue does not use these buffers, they are not available to other queues.
Soft buffers: These buffers are assigned to a queue but can be shared by other queues and interfaces when they are not in use.
Default buffer allocation when no service-policy is applied:
The default buffer allocation for a 1Gig port is 300 buffers, and for a 10Gig port it is 1800 buffers (1 buffer = 256 bytes). With default settings, the port can use up to 400% of the default allocation from the common pool, which is 1200 buffers for a 1Gig interface and 7200 buffers for a 10Gig interface.
The default soft buffer limit is set to 400 (which is the max threshold). This threshold determines the maximum number of soft buffers that can be borrowed from the common pool.
When no service-policy is applied, there are 2 default queues (queue 0 and queue 1). Queue 0 is used for control traffic (DSCP 32, 48, or 56) and queue 1 is used for data traffic.
By default, queue 0 is given 40% of the buffers available on the interface as its hard buffers, that is, 120 buffers on 1Gig ports and 720 buffers on 10Gig ports. The Softmax (maximum soft buffers) for this queue is set to 480 (calculated as 400% of 120) on 1Gig ports and 2880 on 10Gig ports, where 400 is the default max threshold configured for any queue.
Queue 1 does not have any hard buffers allocated. Its soft buffer value is calculated as 400% of the interface buffer that remains after the allocation to queue 0: that is, 400% of 180 for a 1Gig interface and 400% of 1080 for a 10Gig interface.
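As a quick check against the outputs that follow, the default allocations work out as:

1Gig:  queue-0 Hardmax = 40% of 300 = 120;   queue-0 Softmax = 120*400/100 = 480
       queue-1 Hardmax = 0;                  queue-1 Softmax = (300-120)*400/100 = 720
10Gig: queue-0 Hardmax = 40% of 1800 = 720;  queue-0 Softmax = 720*400/100 = 2880
       queue-1 Hardmax = 0;                  queue-1 Softmax = (1800-720)*400/100 = 4320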
The show command used to verify this allocation is 'show platform qos queue config <interface>'.
For a 1Gig interface,
3850#show platform qos queue config gigabitEthernet 1/0/1
DATA Port:20 GPN:66 AFD:Disabled QoSMap:0 HW Queues: 160 - 167
DrainFast:Disabled PortSoftStart:1 - 1080
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 5 120 6 480 6 320 0 0 3 1440
1 1 4 0 7 720 3 480 2 180 3 1440
2 1 4 0 5 0 5 0 0 0 3 1440
3 1 4 0 5 0 5 0 0 0 3 1440
4 1 4 0 5 0 5 0 0 0 3 1440
5 1 4 0 5 0 5 0 0 0 3 1440
6 1 4 0 5 0 5 0 0 0 3 1440
7 1 4 0 5 0 5 0 0 0 3 1440
<<output omitted>>
For a 10Gig interface,
3850#show platform qos queue config tenGigabitEthernet 1/0/37
DATA Port:1 GPN:37 AFD:Disabled QoSMap:0 HW Queues: 8 - 15
DrainFast:Disabled PortSoftStart:2 - 6480
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 6 720 8 2880 7 1280 0 0 4 8640
1 1 4 0 9 4320 8 1920 3 1080 4 8640
2 1 4 0 5 0 5 0 0 0 4 8640
3 1 4 0 5 0 5 0 0 0 4 8640
4 1 4 0 5 0 5 0 0 0 4 8640
<<output omitted>>
Hardmax, or hard buffers, is the amount of buffer that is always reserved and available for the queue.
Softmax, or soft buffers, is the amount of buffer that can be borrowed from other queues or the global pool. The total Softmax per 1Gig interface is 1200 (400% of 300) and 7200 buffers for a 10Gig interface. When you apply a service-policy, one extra queue is created for "class-default" if it is not explicitly created. All traffic that does not match any of the previously defined classes falls under this queue. There cannot be any match statement under this class.
In order to tweak the buffers on the 3650/3850 platform, you need to attach a service-policy to the respective interface. You can tweak the Hardmax and Softmax buffer allocation with the service-policy.
Hard buffer and Soft buffer calculations:
This is how the system allocates softmax and hardmax for each queue.
Total Port buffer = 300 (1G) or 1800 (10G)
If there is a total of 5 queues (5 Classes), each queue gets 20% buffer by default.
Priority queue:
1Gig:
HardMax = Oper_Buff = 20% of 300 = 60.
qSoftMax = (Oper_Buff * Max_Threshold)/100=60*400/100=240
10Gig
HardMax = Oper_Buff = 20% of 1800 = 360
qsoftMax = (Oper_Buff * Max_Threshold)/100 = 360*400/100= 1440
Non-Priority queue:
1Gig:
HardMax = 0
Oper_Buff = 20% of 300 = 60; qSoftMax = (Oper_Buff * Max_Threshold)/100 = 60*400/100 = 240
10Gig:
HardMax = 0
Oper_Buff = 20% of 1800 = 360; qSoftMax = (Oper_Buff * Max_Threshold)/100 = 360*400/100 = 1440
If a service-policy is applied, only a priority queue with level 1 or 2 gets the Hardmax. The examples below help clarify the buffer allocation for specific service-policies on 1Gig and 10Gig interfaces.
As seen earlier, with the default configuration (no service-policy applied), queue 0 gets a default Hardmax of 120 buffers if the link is a 1Gig link and 720 buffers if the link is a 10Gig link.
3850#show platform qos queue config gig 1/0/1
DATA Port:0 GPN:119 AFD:Disabled QoSMap:0 HW Queues: 0 - 7
DrainFast:Disabled PortSoftStart:1 - 1080
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 5 120 6 480 6 320 0 0 3 1440
1 1 4 0 7 720 3 480 2 180 3 1440
2 1 4 0 5 0 5 0 0 0 3 1440
<<output omitted>>
3850#show platform qos queue config tenGigabitEthernet 1/0/37
DATA Port:1 GPN:37 AFD:Disabled QoSMap:0 HW Queues: 8 - 15
DrainFast:Disabled PortSoftStart:2 - 6480
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 6 720 8 2880 7 1280 0 0 4 8640
1 1 4 0 9 4320 8 1920 3 1080 4 8640
2 1 4 0 5 0 5 0 0 0 4 8640
<<output omitted>>
When you apply a service-policy, if you do not configure a priority queue or do not set a priority queue level, no Hardmax is assigned to that queue.
For a 1Gig interface:
policy-map MYPOL
class ONE
priority percent 20
class TWO
bandwidth percent 40
class THREE
bandwidth percent 10
class FOUR
bandwidth percent 5
3850#show run int gig1/0/1
Current configuration : 67 bytes
!
interface GigabitEthernet1/0/1
service-policy output MYPOL
end
3800#show platform qos queue config gigabitEthernet 1/0/1
DATA Port:21 GPN:65 AFD:Disabled QoSMap:1 HW Queues: 168 - 175
DrainFast:Disabled PortSoftStart:2 - 360
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 4 0 8 240 7 160 3 60 4 480
1 1 4 0 8 240 7 160 3 60 4 480
2 1 4 0 8 240 7 160 3 60 4 480
3 1 4 0 8 240 7 160 3 60 4 480
4 1 4 0 8 240 7 160 3 60 4 480
<<output omitted>>
Note: There are 5 classes present even though you created only 4 classes. The 5th class is
the default class.
Each class represents a queue, and the order in which they are shown is the order in which
they appear in the running configuration when you check "show run | sec policy-map".
For a 10Gig interface:
policy-map MYPOL
 class ONE
  priority percent 20
 class TWO
  bandwidth percent 40
 class THREE
  bandwidth percent 10
 class FOUR
  bandwidth percent 5

3850#show run int TenGig1/0/37
Current configuration : 67 bytes
!
interface TenGigabitEthernet1/0/37
 service-policy output MYPOL
end

3850#sh platform qos queue config te 1/0/40
DATA Port:2 GPN:40 AFD:Disabled QoSMap:1 HW Queues: 16 - 23
DrainFast:Disabled PortSoftStart:4 - 2160
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 4 0 10 1440 9 640 4 360 5 2880
1 1 4 0 10 1440 9 640 4 360 5 2880
2 1 4 0 10 1440 9 640 4 360 5 2880
3 1 4 0 10 1440 9 640 4 360 5 2880
4 1 4 0 10 1440 9 640 4 360 5 2880
5 1 4 0 5 0 5 0 0 0 5 2880
<<output omitted>>
When you apply "priority level 1", queue 0 gets 60 buffers as Hardmax. The math behind this was explained in the Hard buffer and Soft buffer calculations section earlier.
For a 1Gig interface:
policy-map MYPOL
class ONE
priority level 1 percent 20
class TWO
bandwidth percent 40
class THREE
bandwidth percent 10
class FOUR
bandwidth percent 5
3850#show run int gig1/0/1
Current configuration : 67 bytes
!
interface GigabitEthernet1/0/1
service-policy output MYPOL
end
BGL.L.13-3800-1#sh platform qos queue config gigabitEthernet 1/0/1
DATA Port:21 GPN:65 AFD:Disabled QoSMap:1 HW Queues: 168 - 175
DrainFast:Disabled PortSoftStart:2 - 360
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 6 60 8 240 7 160 0 0 4 480
1 1 4 0 8 240 7 160 3 60 4 480
2 1 4 0 8 240 7 160 3 60 4 480
3 1 4 0 8 240 7 160 3 60 4 480
4 1 4 0 8 240 7 160 3 60 4 480
<<output omitted>>
For a 10Gig interface:
policy-map MYPOL
 class ONE
  priority level 1 percent 20
 class TWO
  bandwidth percent 40
 class THREE
  bandwidth percent 10
 class FOUR
  bandwidth percent 5

3850#show run int Te1/0/37
Current configuration : 67 bytes
!
interface TenGigabitEthernet1/0/37
 service-policy output MYPOL
end

3850_1# sh platform qos queue config tenGigabitEthernet 1/0/37
DATA Port:2 GPN:40 AFD:Disabled QoSMap:1 HW Queues: 16 - 23
DrainFast:Disabled PortSoftStart:3 - 2160
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 7 360 10 1440 9 640 0 0 5 2880
1 1 4 0 10 1440 9 640 4 360 5 2880
2 1 4 0 10 1440 9 640 4 360 5 2880
3 1 4 0 10 1440 9 640 4 360 5 2880
4 1 4 0 10 1440 9 640 4 360 5 2880
5 1 4 0 5 0 5 0 0 0 5 2880
<<output omitted>>
For this third example, one extra class is added, so the total number of queues becomes 6. With 2 priority levels configured, each priority queue gets 51 buffers as Hardmax. The math is the same as in the previous example.
For a 1Gig interface:
policy-map MYPOL
class ONE
priority level 1 percent 20
class TWO
priority level 2 percent 10
class THREE
bandwidth percent 10
class FOUR
bandwidth percent 5
class FIVE
bandwidth percent 10
3850#show run int gig1/0/1
Current configuration : 67 bytes
!
interface GigabitEthernet1/0/1
service-policy output MYPOL
end
3850#show platform qos queue config gigabitEthernet 1/0/1
DATA Port:16 GPN:10 AFD:Disabled QoSMap:1 HW Queues: 128 - 135
DrainFast:Disabled PortSoftStart:3 - 306
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 7 51 10 204 9 136 0 0 5 408
1 1 7 51 10 204 9 136 0 0 5 408
2 1 4 0 10 204 9 136 4 51 5 408
3 1 4 0 10 204 9 136 4 51 5 408
4 1 4 0 11 192 10 128 5 48 5 408
5 1 4 0 11 192 10 128 5 48 5 408
6 1 4 0 5 0 5 0 0 0 5 408
<<output omitted>>
For a 10Gig interface:
policy-map MYPOL
 class ONE
  priority level 1 percent 20
 class TWO
  priority level 2 percent 10
 class THREE
  bandwidth percent 10
 class FOUR
  bandwidth percent 5
 class FIVE
  bandwidth percent 10

3850#show run int Te1/0/37
Current configuration : 67 bytes
!
interface TenGigabitEthernet1/0/37
 service-policy output MYPOL
end

3850_2#sh platform qos queue config te 1/0/37
DATA Port:2 GPN:40 AFD:Disabled QoSMap:1 HW Queues: 16 - 23
DrainFast:Disabled PortSoftStart:4 - 1836
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 8 306 12 1224 11 544 0 0 6 2448
1 1 8 306 12 1224 11 544 0 0 6 2448
2 1 4 0 12 1224 11 544 6 306 6 2448
3 1 4 0 12 1224 11 544 6 306 6 2448
4 1 4 0 13 1152 12 512 7 288 6 2448
5 1 4 0 13 1152 12 512 7 288 6 2448
6 1 4 0 5 0 5 0 0 0 6 2448
<<output omitted>>
Note: Sometimes you can see fewer buffers allocated to some queues. This is expected because, with certain combinations of configuration, the values do not fit evenly into the Softmax calculation for priority and non-priority queues.
In summary, the more queues you create, the fewer buffers each queue gets in terms of Hardmax and Softmax (since Hardmax is also dependent on the Softmax value).
Note: Starting from release 3.6.3 or 3.7.2, the maximum Softmax value can be modified with the global CLI command "qos queue-softmax-multiplier 1200", where 100 is the default value. If configured as 1200, the Softmax for non-priority queues and non-primary priority queues (not level 1) is multiplied by 12 from its default value. This command takes effect only on ports where a policy-map is attached, and it does not apply to priority queue level 1.
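To illustrate, a minimal configuration sketch of this global command is shown below (the value 1200 matches the example used later in this document; any value above the default of 100 raises the Softmax proportionally):

3850#configure terminal
3850(config)#qos queue-softmax-multiplier 1200
3850(config)#end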
The service-policy configuration and the corresponding buffer allocation are shown below:
policy-map TEST_POLICY
class ONE
priority level 1 percent 40
class TWO
bandwidth percent 40
class THREE
bandwidth percent 10
3850#show run int gig1/0/1
Current configuration : 67 bytes
!
interface GigabitEthernet1/0/1
service-policy output TEST_POLICY
end
3850#show platform qos queue config gigabitEthernet 1/0/1
DATA Port:21 GPN:65 AFD:Disabled QoSMap:1 HW Queues: 168 - 175
DrainFast:Disabled PortSoftStart:2 - 450
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 6 75 8 300 7 200 0 0 4 600
1 1 4 0 8 300 7 200 3 75 4 600
2 1 4 0 8 300 7 200 3 75 4 600
3 1 4 0 8 300 7 200 3 75 4 600
<<output omitted>>
The buffers are split equally across the queues. The bandwidth command only changes the weight of each queue and, correspondingly, how the scheduler acts on it; it does not change the buffer allocation.
To tweak the Softmax value, use the "queue-buffers ratio" command under the respective class.
policy-map TEST_POLICY
 class ONE
  priority level 1 percent 40
 class TWO
  bandwidth percent 40
  queue-buffers ratio 50    <---------------
 class THREE
  bandwidth percent 10
 class FOUR
  bandwidth percent 5
The new buffer allocations are:
For a 1Gig interface:
3850#show platform qos queue conf gigabitEthernet 1/0/1
DATA Port:21 GPN:65 AFD:Disabled QoSMap:1 HW Queues: 168 - 175
DrainFast:Disabled PortSoftStart:0 - 900
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 6 39 8 156 7 104 0 0 0 1200
1 1 4 0 9 600 8 400 3 150 0 1200
2 1 4 0 8 156 7 104 4 39 0 1200
3 1 4 0 10 144 9 96 5 36 0 1200
4 1 4 0 10 144 9 96 5 36 0 1200
Now, queue 1 gets 50% of the soft buffers, that is, 600 buffers. The remaining buffers are allocated to the other queues as per the internal algorithm.
The similar output for a 10Gig interface is:
3850#sh platform qos queue config te 1/0/37
DATA Port:2 GPN:40 AFD:Disabled QoSMap:1 HW Queues: 16 - 23
DrainFast:Disabled PortSoftStart:4 - 1836
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 7 234 10 936 9 416 0 0 5 7200
1 1 4 0 11 3600 10 1600 4 900 5 7200
2 1 4 0 10 936 9 416 5 234 5 7200
3 1 4 0 4 864 11 384 1 216 5 7200
4 1 4 0 4 864 11 384 1 216 5 7200
5 1 4 0 5 0 5 0 0 0 5 7200
<<output omitted>>
Note: Sometimes you can see fewer buffers allocated to some queues. This is expected because, with certain combinations of configuration, the values do not fit evenly into the Softmax calculation for priority and non-priority queues. An internal algorithm takes care of this.
Allocating all the Softmax buffers to the single default queue:
policy-map NODROP
 class class-default
  bandwidth percent 100
  queue-buffers ratio 100
The resulting QoS queue configuration is as follows:
3850#show platfo qos queue config GigabitEthernet 1/1/1
DATA Port:21 GPN:65 AFD:Disabled QoSMap:1 HW Queues: 168 - 175
DrainFast:Disabled PortSoftStart:0 - 900
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 4 0 8 1200 7 800 3 300 2 2400
1 1 4 0 5 0 5 0 0 0 2 2400
There is no Hardmax buffer because the policy applied to the interface does not have a priority queue with a "level" set.
As soon as you apply the policy-map, the second queue gets disabled, which leaves only one queue in the system.
The caveat here is that all packets use this single queue, including control packets such as OSPF, EIGRP, and STP.
During times of congestion (a broadcast storm, for example), this can easily cause network disruption.
This is also true if you have other classes defined that do not match the control packets.
For this test, an IXIA traffic generator is connected to a 1Gig interface and the egress port is a 100Mbps interface. This is a 1Gbps-to-100Mbps connection, and a burst of 1Gig of packets is sent for 1 second. This causes output drops on the egress 100Mbps interface.
With the default configuration (no service-policy applied), the number of output drops after one stream is sent is shown below:
3850#show interfaces gig1/0/1 | in output drop
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 497000
These drops are counted under Drop-TH2, which is the default threshold. By default, the system uses the max threshold as the drop threshold, which is Drop-TH2.
3800#show platform qos queue stats gigabitEthernet 1/0/1
<snip>
DATA Port:21 Drop Counters
-------------------------------
Queue Drop-TH0 Drop-TH1 Drop-TH2 SBufDrop QebDrop
----- ----------- ----------- ----------- ----------- -----------
0 0 0 497000 0 0
1 0 0 0 0 0
After you configure the following service-policy to tweak the buffers:
policy-map TEST_POLICY
class class-default
bandwidth percent 100
queue-buffers ratio 100
3850#show runn int gig1/0/1
Current configuration : 67 bytes
!
interface GigabitEthernet1/0/1
service-policy output TEST_POLICY
end
3850#sh platform qos queue config gigabitEthernet 2/0/1
DATA Port:21 GPN:65 AFD:Disabled QoSMap:1 HW Queues: 168 - 175
DrainFast:Disabled PortSoftStart:0 - 900
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 4 0 8 1200 7 800 3 300 2 2400 <-- queue 0 gets all the buffer.
3850#show interfaces gig1/0/1 | in output drop
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 385064
The drops reduced from 497000 to 385064 for the same traffic burst; however, there are still drops.
After you configure the "qos queue-softmax-multiplier 1200" global configuration command:
3850#sh platform qos queue config gigabitEthernet 1/0/1
DATA Port:21 GPN:65 AFD:Disabled QoSMap:1 HW Queues: 168 - 175
DrainFast:Disabled PortSoftStart:0 - 900
----------------------------------------------------------
DTS Hardmax Softmax PortSMin GlblSMin PortStEnd
--- -------- -------- -------- --------- ---------
0 1 4 0 8 10000 7 800 3 300 2 10000
3850#show interfaces gig1/0/1 | in output drop
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Now, the Softmax for queue 0 can go up to 10,000 buffers and, as a result, the drops are zero.
Note: In a real network, this kind of result might not be achievable because other interfaces can also use these buffers, but it can definitely help reduce packet drops to a certain level.
The maximum soft buffers available to an interface can be increased with this command; however, keep in mind that this amount is available only as long as no other interface is using these buffers.
1. When you create more queues, you get fewer buffers for each queue.
2. The total number of soft buffers available can be increased with the "qos queue-softmax-multiplier X" command.
3. If you define only class-default in order to tweak the buffers, all traffic falls under that single queue (including control packets). Be advised that when all traffic is put in one queue, there is no classification between control and data traffic, and during times of congestion control traffic could get dropped. Therefore, it is recommended that you create at least one other class for control traffic (see the example sketch after this list). CPU-generated control packets always go to the first priority queue even if they are not matched in the class-map. If there is no priority queue configured, they go to the first queue of the interface, which is queue 0.
4. Prior to the fix for CSCuu14019, interfaces do not display "output drop" counters; you have to check the "show platform qos queue stats" output for drops.
5. An enhancement request, CSCuz86625, was submitted to allow the soft-max multiplier to be configured without the use of any service-policy (resolved in release 3.6.6 and later).
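As an illustration of point 3, here is a minimal sketch of a policy that keeps control traffic in its own priority queue while most of the buffers are given to the data queue. The class name, matched DSCP values, and percentages are assumptions for illustration only; adapt them to your traffic profile.

class-map match-any CONTROL-TRAFFIC
 match dscp cs6 cs7
policy-map BUFFER_TUNE
 class CONTROL-TRAFFIC
  priority level 1 percent 10
 class class-default
  bandwidth percent 90
  queue-buffers ratio 90
!
interface GigabitEthernet1/0/1
 service-policy output BUFFER_TUNE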