New and Changed Information

Table 1. New Features and Changed Behavior in Cisco APIC

Release 4.0(1x)
  Feature: QoS support for priority levels 4, 5, and 6 (QoS configuration enabled on L3Out); custom QoS procedures; new DSCP level support for Multipod
  Description: Support for new QoS levels and L3Out configuration
  What Changed: QoS for L3Outs, Custom QoS, Multipod QoS

Release 4.0(1x)
  Feature: Support for RoCEv2 QoS settings
  Description: Support for new QoS settings to enable RoCEv2 technology in a Cisco APIC environment
  What Changed: QoS for RoCEv2

Release 3.1(2m)
  Feature: QoS for L3Outs
  Description: In this release, QoS policy enforcement on L3Out ingress traffic is enhanced.
  What Changed: QoS for L3Outs

Release 2.2(1n)
  Feature: Translating QoS Ingress Markings to Egress Markings
  Description: Added additional information
  What Changed: Translating QoS CoS Using the REST API

Release 2.1(1h)
  Feature: Translating QoS Ingress Markings to Egress Markings
  Description: In this release, you can enable the ACI fabric to classify the traffic for devices that classify the traffic based only on the CoS value.
  What Changed: Translating QoS Ingress Markings to Egress Markings

Release 2.0(2f)
  Feature: Multipod QoS
  Description: Support for preserving CoS and DSCP settings was added for multipod topologies.
  What Changed: Multipod QoS

Release 2.0(2f)
  Feature: QoS for L3Outs
  What Changed: QoS for L3Outs

To configure QoS policies for an L3Out, use the following guidelines:

  • To configure the QoS policy to be enforced on the border leaf where the L3Out is located, the VRF instance must be in egress mode (Policy Control Enforcement Direction must be "Egress").

  • To enable the QoS policy to be enforced, the VRF Policy Control Enforcement Preference must be "Enforced."

  • When configuring the contract governing communication between the L3Out and other EPGs, include the QoS class or target DSCP in the contract or subject.

    Note

    Only configure a QoS class or target DSCP in the contract, not in the external EPG (l3extInstP).


  • When creating a contract subject, you must choose a QoS priority level. You cannot choose Unspecified.

    Note

    The exception is a Custom QoS policy: a custom QoS policy sets the DSCP/CoS value even if the QoS class is set to Unspecified, so Unspecified is supported and valid in that case. When the QoS level is unspecified, traffic defaults to the Level 3 (default) queue.


  • Starting with release 4.0(1x), QoS supports new levels 4, 5, and 6 configured under Global policies, EPG, L3out, custom QoS, and Contracts. The following limitations apply:

    • Up to five classes can be configured with strict priority.

    • The three new classes are not supported on non-EX and non-FX switches.

    • If traffic flows between non-EX or non-FX switches and EX or FX switches, the traffic uses QoS level 3.

    • For the new classes, traffic sent to a FEX carries a Layer 2 CoS value of 0.

  • Starting with release 4.0(1x), you can configure QoS Class or create a Custom QoS Policy to apply on an L3Out Interface.
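
For orientation, the following is a minimal REST sketch of the L3Out interface attachment points described above, reusing the object classes that appear in the REST API examples later in this section (the tenant, L3Out, and policy names are placeholders):

<l3extOut name="l3out1">
    <l3extLNodeP name="nodep1">
        <!-- QoS class applied directly to the L3Out logical interface profile -->
        <l3extLIfP name="ifp1" prio="level5">
            <!-- Optionally, attach a previously created custom QoS policy instead -->
            <l3extRsLIfPCustQosPol tnQosCustomPolName="myCustomQos"/>
        </l3extLIfP>
    </l3extLNodeP>
</l3extOut>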

Configuring QoS for L3Outs Using the GUI

QoS for an L3Out is configured as part of the L3Out configuration.

Procedure


Step 1

Configure the VRF instance for the tenant consuming the L3Out to support QoS to be enforced on the border leaf switch that is used by the L3Out.

  1. On the menu bar, choose Tenants > tenant-name.

  2. In the Navigation pane, expand Networking, right-click VRFs, and choose Create VRF.

  3. Enter the name of the VRF.

  4. In the Policy Control Enforcement Preference field, choose Enforced.

    Note 

    For QoS or custom QoS applied on an L3Out interface, the VRF Policy Control Enforcement Direction can remain ingress. The direction needs to be egress only when the QoS classification is done in the contract, for traffic between an EPG and an L3Out or between two L3Outs.

  5. In the Policy Control Enforcement Direction field, choose Egress.

    Note 

    This step is required only when the QoS classification is done in the contract, as described in the previous note.

  6. Complete the VRF configuration according to the requirements for the L3Out.

Step 2

When configuring filters for contracts to enable communication between the EPGs consuming the L3Out, include a QoS class or target DSCP to enforce the QoS priority in traffic ingressing through the L3Out.

  1. In the Navigation pane, under the tenant that will consume the L3Out, expand Contracts, right-click Filters, and choose Create Filter.

  2. In the Name field, enter a filter name.

  3. In the Entries field, click + to add a filter entry.

  4. Enter the entry details, then click Update and Submit.

  5. Expand the previously created filter and click on a filter entry.

  6. Set the Match DSCP field to the desired DSCP level for the entry, for example, EF.

Step 3

Add a contract.

  1. Under Contracts, right-click Standard and choose Create Contract.

  2. Enter the name of the contract.

  3. In the QoS Class field, choose the QoS priority for the traffic governed by this contract. Alternatively, you can choose a Target DSCP value.

    Note 

    If the QoS classification is set in the contract and the VRF enforcement direction is egress, the contract QoS classification overrides the L3Out interface QoS or custom QoS classification. Configure one or the other, not both.

  4. Click the + icon on Subjects to add a subject to the contract.

  5. Enter a name for the subject.

  6. In the QoS Priority field, choose the desired priority level. You cannot choose Unspecified.

  7. Under Filter Chain, click the + icon on Filters and choose the filter you previously created, from the drop down list.

  8. Click Update.

  9. On the Create Contract Subject dialog box, click OK.

Step 4

To configure QoS for an L3Out interface, in the Navigation pane, expand External Routed Networks > routed-network-name > Logical Node Profiles > node-profile-name, right-click Logical Interface Profiles, choose Create Interface Profile, and perform the following steps:

  1. In the Name field, enter a name for the profile.

  2. In the QoS Priority field, select a priority level from 1 to 6 and click Next to configure protocols and interface type.

  3. (Optional) For custom priority levels, select a previously configured policy from the Custom QoS Policy field or Create Custom QoS Policy.


Configuring QoS for L3Outs Using the NX-OS Style CLI

QoS for L3Out is configured as part of the L3Out configuration.

Procedure


Step 1

When configuring the tenant and VRF, to support QoS priority enforcement on the L3Out, configure the VRF for egress mode and enable policy enforcement, using the following commands:

Example:

apic1# configure
apic1(config)# tenant t1
apic1(config-tenant)# vrf context v1
apic1(config-tenant-vrf)# contract enforce egress
apic1(config-tenant-vrf)# exit
apic1(config-tenant)# exit
apic1(config)#
Step 2

When creating filters (access lists), include the match dscp command, in this example with the target DSCP level EF. When configuring contracts, include the QoS class, for example, level1, for traffic ingressing on the L3Out. Alternatively, you can define a target DSCP value. QoS policies are supported on either the contract or the subject.

Note 

For QoS or custom QoS applied on an L3Out interface, the VRF enforcement direction can remain ingress. The direction needs to be egress only when the QoS classification is done in the contract, for traffic between an EPG and an L3Out or between two L3Outs.

Note 

If the QoS classification is set in the contract and the VRF enforcement direction is egress, the contract QoS classification overrides the L3Out interface QoS or custom QoS classification. Configure one or the other, not both.

Example:

apic1(config)# tenant t1
apic1(config-tenant)# access-list http-filter
apic1(config-tenant-acl)# match ip
apic1(config-tenant-acl)# match tcp dest 80
apic1(config-tenant-acl)# match dscp EF
apic1(config-tenant-acl)# exit
apic1(config-tenant)# contract httpCtrct
apic1(config-tenant-contract)# scope vrf
apic1(config-tenant-contract)# qos-class level1
apic1(config-tenant-contract)# subject http-subject
apic1(config-tenant-contract-subj)# access-group http-filter both 
apic1(config-tenant-contract-subj)# exit
apic1(config-tenant-contract)# exit
apic1(config-tenant)# exit
apic1(config)#
Step 3

To configure QoS priorities for an L3Out SVI:

Example:

interface vlan 19
      vrf member tenant DT vrf dt-vrf
      ip address 107.2.1.252/24
      description 'SVI19'
      service-policy type qos VrfQos006   // attaches the custom QoS policy
      set qos-class level6                // sets the QoS priority level
      exit
Step 4

To configure QoS priorities for a sub-interface:

Example:

interface ethernet 1/48.10
      vrf member tenant DT vrf inter-tentant-ctx2 l3out L4_E48_inter_tennant
      ip address 210.2.0.254/16
      service-policy type qos vrfQos002
      set qos-class level5
Step 5

To configure QoS priorities for a routed outside:

Example:

interface ethernet 1/37
      no switchport
      vrf member tenant DT vrf dt-vrf l3out L2E37
      ip address 30.1.1.1/24
      service-policy type qos vrfQos002
      set qos-class level5
      exit

Configuring QoS for L3Outs Using the REST API

QoS for L3Out is configured as part of the L3Out configuration.

Procedure


Step 1

When configuring the tenant, VRF, and bridge domain, configure the VRF for egress mode (pcEnfDir="egress") with policy enforcement enabled (pcEnfPref="enforced"). Send a post with XML similar to the following example:

Example:

<fvTenant  name="t1">
      <fvCtx name="v1" pcEnfPref="enforced" pcEnfDir="egress"/>
        <fvBD name="bd1">
            <fvRsCtx tnFvCtxName="v1"/>
            <fvSubnet ip="44.44.44.1/24" scope="public"/>
            <fvRsBDToOut tnL3extOutName="l3out1"/>
        </fvBD>
</fvTenant>
Step 2

When creating the filters and contracts to enable the EPGs participating in the L3Out to communicate, configure the QoS priority.

The contract in this example includes the QoS priority, level1, for traffic ingressing on the L3Out. Alternatively, it could define a target DSCP value. QoS policies are supported on either the contract or the subject.

The filter also includes the matchDscp="EF" criterion, so that traffic received by the L3Out with this specific tag is processed through the queue specified in the contract subject.

Note 

For QoS or custom QoS applied on an L3Out interface, the VRF enforcement direction can remain ingress. The direction needs to be egress only when the QoS classification is done in the contract, for traffic between an EPG and an L3Out or between two L3Outs.

Note 

If the QoS classification is set in the contract and the VRF enforcement direction is egress, the contract QoS classification overrides the L3Out interface QoS or custom QoS classification. Configure one or the other, not both.

Example:

<vzFilter name="http-filter">
     <vzEntry  name="http-e" etherT="ip" prot="tcp" matchDscp="EF"/>
</vzFilter>
<vzBrCP name="httpCtrct" prio="level1" scope="context">
     <vzSubj name="subj1">
          <vzRsSubjFiltAtt tnVzFilterName="http-filter"/>
     </vzSubj>
</vzBrCP>
Step 3

To configure QoS priorities for an L3Out SVI:

Example:

<l3extLIfP annotation="" descr="" dn="uni/tn-DT/out-L3_4_2_24_SVI17/lnodep-L3_4_E2_24/lifp-L3_4_E2_24_SVI_19" name="L3_4_E2_24_SVI_19" nameAlias="" ownerKey="" ownerTag="" prio="level6" tag="yellow-green">
                                <l3extRsPathL3OutAtt addr="0.0.0.0" annotation="" autostate="disabled" descr="SVI19" encap="vlan-19" encapScope="local" ifInstT="ext-svi" ipv6Dad="enabled" llAddr="::" mac="00:22:BD:F8:19:FF" mode="regular" mtu="inherit" tDn="topology/pod-1/protpaths-103-104/pathep-[V_L3_l4_2-24]" targetDscp="unspecified">
                                                <l3extMember addr="107.2.1.253/24" annotation="" descr="" ipv6Dad="enabled" llAddr="::" name="" nameAlias="" side="B"/>
                                                <l3extMember addr="107.2.1.252/24" annotation="" descr="" ipv6Dad="enabled" llAddr="::" name="" nameAlias="" side="A"/>
                                </l3extRsPathL3OutAtt>
                                <l3extRsLIfPCustQosPol annotation="" tnQosCustomPolName="VrfQos006"/>
                </l3extLIfP>
Step 4

To configure QoS priorities for a sub-interface:

Example:

<l3extLIfP annotation="" descr="inter-tenant to shared-tenant " dn="uni/tn-DT/out-L4E48_inter_tenant/lnodep-L4E48_inter_tenant/lifp-L4E48" name="L4E48" nameAlias="" ownerKey="" ownerTag="" prio="level4" tag="yellow-green">
                                <l3extRsPathL3OutAtt addr="210.1.0.254/16" annotation="" autostate="disabled" descr="" encap="vlan-20" encapScope="local" ifInstT="sub-interface" ipv6Dad="enabled" llAddr="::" mac="00:22:BD:F8:19:FF" mode="regular" mtu="inherit" tDn="topology/pod-1/paths-104/pathep-[eth1/48]" targetDscp="unspecified"/>
                                <l3extRsNdIfPol annotation="" tnNdIfPolName=""/>
                                <l3extRsLIfPCustQosPol annotation="" tnQosCustomPolName="vrfQos002"/>
                </l3extLIfP>
Step 5

To configure QoS priorities for a routed outside:

Example:

<l3extLIfP annotation="" descr="" dn="uni/tn-DT/out-L2E37/lnodep-L2E37/lifp-L2E37OUT" name="L2E37OUT" nameAlias="" ownerKey="" ownerTag="" prio="level5" tag="yellow-green">
                                <l3extRsPathL3OutAtt addr="30.1.1.1/24" annotation="" autostate="disabled" descr="" encap="unknown" encapScope="local" ifInstT="l3-port" ipv6Dad="enabled" llAddr="::" mac="00:22:BD:F8:19:FF" mode="regular" mtu="inherit" tDn="topology/pod-1/paths-102/pathep-[eth1/37]" targetDscp="unspecified"/>
                                <l3extRsNdIfPol annotation="" tnNdIfPolName=""/>
                                <l3extRsLIfPCustQosPol annotation="" tnQosCustomPolName="vrfQos002"/>
                </l3extLIfP>

QoS for RoCEv2

RoCEv2 and the Required APIC QoS Settings

Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) technology allows data to be transferred between servers, or from storage to a server, without having to pass through the CPU and main memory path of TCP/IP. The network adapter transfers data directly to and from application memory, bypassing the operating system and the CPU. This zero-copy and CPU-offloading approach ensures greater CPU availability for other tasks while providing low latency and reduced jitter. A single fabric can be used for both storage and compute. RoCEv2 provides additional functionality by allowing RDMA to be used with both Layer-2 and Layer-3 (UDP/IP) packets, enabling Layer-3 routing over multiple subnets.

Starting with Cisco Application Policy Infrastructure Controller Release 4.0(1), you can enable RoCEv2 functionality in your fabric by configuring specific QoS options for Layer-3 traffic in Cisco APIC, such as Weighted Random Early Detection (WRED) congestion algorithm and Explicit Congestion Notification (ECN).

The following sections describe how to configure the required QoS options using three different methods – the Cisco APIC GUI, the NX-OS style CLI, and the REST API – but regardless of which you choose, you'll have to configure the following:

  • Weighted Random Early Detection (WRED) congestion algorithm, which manages congestion on spine switches using the following configuration options:

    • WRED Min Threshold – if the average queue size is below the minimum threshold value, the arriving packets are queued immediately.

    • WRED Max Threshold – if the average queue size is greater than the maximum threshold value, the arriving packets are dropped.

    • WRED Probability – if the average queue size is between the Min and Max threshold, the Probability value determines whether the packet is dropped or queued.

    • WRED Weight – weight has a range of 0 to 7 and is used to calculate average queue length. Lower weight prioritizes current queue length, while higher weight prioritizes older queue lengths.

  • Explicit Congestion Notification (ECN), which is used for congestion notification. In case of congestion, ECN causes the transmitting device to reduce its transmission rate until the congestion clears, allowing traffic to continue without pausing. ECN along with WRED enables end-to-end congestion notification between two endpoints on the network.

  • Priority Flow Control (PFC), which is used to achieve Layer 2 flow control. PFC provides the capability to pause traffic in case of congestion.

RoCEv2 Hardware Support

The following Cisco hardware is supported for RoCEv2 in this release:

  • Cisco Nexus 9300-EX platform switches

  • Cisco Nexus 9300-FX platform switches

  • Cisco Nexus 9300-FX2 platform switches

  • N9K-X9700-EX line cards

  • N9K-C9504-FM-E fabric modules

Configuring Priority Flow Control (PFC) On Interfaces

Before you can configure the appropriate QoS settings for RoCEv2, you must enable PFC on each interface that is connected to RoCE devices. The PFC setting can be set to one of three values: on, off, or auto. If you set it to auto, the DCBX protocol negotiates the PFC state on the interface.

You can configure PFC on one or more interfaces using any of the following methods:

Configuring PFC On Interfaces Using the Cisco APIC GUI

You can use the Cisco APIC GUI to configure the PFC state on the interfaces connecting to RoCEv2 devices.

Procedure


Step 1

Log in to Cisco APIC.

Step 2

From the top navigation bar, choose Fabric > Inventory.

Step 3

In the left-hand sidebar, navigate to <pod> > <leaf-switch>.

Step 4

In the main pane, select the Interface tab.

Step 5

In the main pane, from the Mode dropdown menu, select Configuration.

Step 6

Choose an L2 port you want to configure.

Step 7

In the bottom pane, select the FCoE/FC tab.

Step 8

Set the PFC State of the port to On.


Configuring PFC On Interfaces Using the NX-OS Style CLI

You can use the NX-OS style CLI to configure the PFC state on the interfaces connecting to RoCEv2 devices.

Procedure


Step 1

Enter APIC configuration mode.

Example:

apic1# config
Step 2

Enter switch configuration.

Example:

apic1(config)# leaf 101
Step 3

Enable PFC for specific interfaces.

Example:

apic1(config-leaf)# interface ethernet 1/7-9
apic1(config-leaf-if)# priority-flow-control mode on

Configuring PFC On Interfaces Using the REST API

You can use the REST API to configure the PFC state on the interfaces connecting to RoCEv2 devices.

Procedure


Step 1

You can configure PFC state on a group of interfaces using a policy group.

Example:

<polUni>
  <infraInfra>
    <qosPfcIfPol name="testPfcPol1"  adminSt="on"/>
    <infraFuncP>
      <infraAccPortGrp name="groupName">
        <infraRsQosPfcIfPol tnQosPfcIfPolName="testPfcPol1"/>
      </infraAccPortGrp>
    </infraFuncP>
  </infraInfra>
</polUni>
Step 2

Alternatively, you can configure PFC state on individual interfaces.

Example:

<polUni>
  <infraInfra>
    <qosPfcIfPol name="testPfcPol"  adminSt="auto"/>
    <infraFuncP>
      <infraAccPortGrp name="testPortG">
        <infraRsQosPfcIfPol tnQosPfcIfPolName="testPfcPol"/>
      </infraAccPortGrp>
    </infraFuncP>
    <infraHPathS name="port20">
      <infraRsHPathAtt tDn="topology/pod-1/paths-102/pathep-[eth1/20]"/>
        <infraRsPathToAccBaseGrp tDn="uni/infra/funcprof/accportgrp-testPortG">  
      </infraRsPathToAccBaseGrp>
    </infraHPathS>
  </infraInfra>
</polUni>

Configuring QoS for RoCEv2

After you have enabled PFC on each interface that is connected to RoCE devices, you can configure the appropriate QoS settings for RoCEv2.

You can configure QoS for RoCEv2 using any of the following methods:

Configuring QoS for RoCEv2 Using the GUI

You can use the Cisco APIC GUI to configure the required QoS options to enable support for RoCEv2 in your fabric.

Procedure


Step 1

Log in to Cisco APIC.

Step 2

Navigate to Fabric > Access Policies > Policies > Global > QOS Class.

Step 3

Select the QoS class level for which you want to configure RoCEv2.

Step 4

For the Congestion Algorithm option, select Weighted random early detection.

Step 5

For the Congestion Notification option, select Enabled.

Enabling Congestion Notification causes the packets that would be dropped to be ECN-marked instead.

Step 6

For the Min Threshold (percentage) option, set the minimum queue threshold as a percentage of the maximum queue length.

If the average queue size is below the minimum threshold value, the arriving packets are queued immediately.

Step 7

For the Max Threshold (percentage) option, set the maximum queue threshold as a percentage of the maximum queue length.

If the average queue size is greater than the maximum threshold value, the arriving packets are dropped or marked if ECN is enabled.

Step 8

For the Probability (percentage) option, set the probability value.

The probability determines whether the packet is dropped or queued when the average queue size is between the minimum and the maximum threshold values.

Step 9

For the Weight option, set the weight value.

Weight has a range of 0 to 7 and is used to calculate average queue length. Lower weight prioritizes current queue length, while higher weight prioritizes older queue lengths.

Step 10

Check the PFC Admin State checkbox and specify a value for the No-Drop-CoS option to be used by PFC.

Step 11

For the Scope option, select Fabric-wide PFC.

Step 12

Optionally, you can choose to enable the Forward Non-ECN Traffic option, so that non-ECN traffic is not dropped even when the queue is congested. Congestion Notification must be enabled for this option to be configurable.


Configuring QoS for RoCEv2 Using the NX-OS Style CLI

You can use the NX-OS style CLI to configure the required QoS options to enable support for RoCEv2 in your fabric.

Procedure


Step 1

Enter configuration mode.

Example:

apic1# config
Step 2

Choose the QoS Level you want to configure.

In the following command, replace level2 with the QoS Level you want to configure:

Example:

apic1(config)# qos parameters level2
Step 3

Configure the congestion algorithm and its parameters.

Example:

apic1(config-qos)# algo wred
apic1(config-qos-algo)# ecn enabled
apic1(config-qos-algo)# maxthreshold 60
apic1(config-qos-algo)# minthreshold 40
apic1(config-qos-algo)# probability 0
apic1(config-qos-algo)# weight 1
apic1(config-qos-algo)# exit
Step 4

(Optional) Configure forwarding of the non-ECN traffic.

You can choose to enable forwarding of all non-ECN traffic, even when the queue is congested.

Example:

apic1(config-qos-algo)# fwdnonecn enabled
Step 5

Exit congestion algorithm configuration.

Example:

apic1(config-qos-algo)# exit
Step 6

Configure the CoS value for the QoS Level you chose.

Example:

apic1(config-qos)# pause no-drop cos 4 fabric

If you do not provide the fabric parameter, the default value is set to TOR.
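
For comparison, a leaf (ToR) scoped variant of the same command simply omits the fabric keyword; this is a sketch based on the command shown above:

apic1(config-qos)# pause no-drop cos 4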


Configuring QoS for RoCEv2 Using the REST API

You can use REST API to configure the required QoS options to enable support for RoCEv2 in your fabric.

Procedure


Step 1

Configure QoS for RoCEv2.

In the following example, replace level2 with the QoS class you want to configure and the WRED parameters with values appropriate for your environment.

POST URL: https://<apic-ip>/api/node/mo/uni.xml

Example:

<qosClass admin="enabled" dn="uni/infra/qosinst-default/class-level2" prio="level2">
    <qosCong algo="wred" wredMaxThreshold="60" wredMinThreshold="40" wredProbability="0"
             ecn="enabled"/>
    <qosPfcPol name="default" noDropCos="cos0" adminSt="yes" enableScope="fabric"/>
</qosClass>
Step 2

(Optional) Configure forwarding of the non-ECN traffic.

You can choose to enable forwarding of all non-ECN traffic, even when the queue is congested.

Example:

<qosInstPol dn="uni/infra/qosinst-default" FabricFlushInterval="450" FabricFlushSt="yes">
</qosInstPol>

Custom QoS

Configuring a Custom QoS Policy

You can configure a custom QoS policy.

Procedure


Step 1

On the menu bar, choose Tenants > tenant-name.

Step 2

In the Navigation pane, expand Application Profiles > Application Profile-name, right-click Application Profile-name, and choose Create Application EPG.

Step 3

In the Create Application EPG dialog box, choose Create Custom QOS Policy from the Custom QoS drop-down list. The Create Custom QOS Policy dialog box appears.

Step 4

Complete the following fields:

Name field: The name of the QoS policy.
Description field: The description of the QoS policy.
Step 5

In the DSCP to priority map section, click + to add a differentiated services code point (DSCP) to the priority map.

Step 6

Complete the following fields:

Priority drop-down list: Choose the priority of the DSCP.
DSCP Range From drop-down list: Choose the starting point of the DSCP range.
DSCP Range To drop-down list: Choose the ending point of the DSCP range.
DSCP Target drop-down list: Choose the desired DSCP value.
Target Cos drop-down list: Choose the desired CoS value.
Step 7

In the Dot1P Classifiers section, click + to add a dot1p classifier.

Step 8

Complete the following fields:

Priority drop-down list: Choose the priority of the dot1p classifier.
Dot1P Range From drop-down list: Choose the starting point of the Dot1P range.
Dot1P Range To drop-down list: Choose the ending point of the Dot1P range.
DSCP Target drop-down list: Choose the desired DSCP value.
Target Cos drop-down list: Choose the desired CoS value.
Step 9

Click SUBMIT. The Create Custom QOS Policy dialog box closes and the custom QoS policy is created.
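
The custom QoS policy created in this procedure corresponds to a qosCustomPol object. For reference, here is a minimal REST sketch that reuses the object classes shown in the REST API example later in this document; the tenant and policy names are placeholders, and the DSCP and dot1p ranges are illustrative only:

<qosCustomPol name="myCustomQos" dn="uni/tn-t001/qoscustom-myCustomQos">
    <!-- DSCP to priority map entry: DSCP range, priority, and optional target DSCP/CoS -->
    <qosDscpClass from="AF11" to="AF13" prio="level2" target="unspecified" targetCos="unspecified"/>
    <!-- Dot1P classifier entry: CoS range, priority, and optional target DSCP/CoS -->
    <qosDot1PClass from="2" to="3" prio="level2" target="unspecified" targetCos="unspecified"/>
</qosCustomPol>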


QoS Preservation

Preserving 802.1P Class of Service Settings

APIC enables preserving 802.1P class of service (CoS) settings within the fabric. Enable the fabric global QoS policy dot1p-preserve option to guarantee that the CoS value in packets which enter and transit the ACI fabric is preserved.

802.1P CoS preservation is supported in single pod and multipod topologies.

In multipod topologies, CoS preservation can be used where you want to preserve the QoS priority settings of 802.1P traffic entering POD 1 and egressing out of POD 2, but you are not concerned with preserving the CoS/DSCP settings in interpod network (IPN) traffic between the pods. To preserve CoS/DSCP settings when multipod traffic is transiting an IPN, use a DSCP policy. For more information, see Preserving QoS Priority Settings in a Multipod Fabric.

Observe the following 802.1P CoS preservation guidelines and limitations:

  • The current release can only preserve the 802.1P value within a VLAN header. The DEI bit is not preserved.

  • For VXLAN encapsulated packets, the current release will not preserve the 802.1P CoS value contained in the outer header.

  • 802.1P is not preserved when the following configuration options are enabled:

    • Contracts are configured that include QoS.

    • The outgoing interface is on a FEX.

    • Preserving QoS CoS priority settings is not supported when traffic is flowing from an EPG with isolation enforced to an EPG without isolation enforced.

    • A DSCP QoS policy is configured on a VLAN EPG and the packet has an IP header. DSCP marking can be set at the filter level on the following with the precedence order from the innermost to the outermost:

      • Contract

      • Subject

      • In Term

      • Out Term

      Note

      When specifying vzAny for a contract, external EPG DSCP values are not honored because vzAny is a collection of all EPGs in a VRF, and EPG specific configuration cannot be applied. If EPG specific target DSCP values are required, then the external EPG should not use vzAny.


Preserving QoS CoS Settings Using the GUI

To ensure that QoS priority settings are handled the same for traffic entering and transiting a single-pod fabric, or for traffic entering one pod and egressing another in a multi-pod fabric, you can enable CoS preservation using the GUI.

Note

Enabling CoS preservation applies a default mapping of the CoS priorities to DSCP levels to the various traffic types.


Procedure


Step 1

From the menu bar, navigate to Fabric > External Access Policies.

Step 2

In the left-hand navigation pane, select Policies > Global > QOS Class.

Step 3

In the Global - QOS Class main window pane, check the Preserve COS: Dot1p Preserve checkbox.

Step 4

Click Submit.


Preserving QoS CoS Settings Using the NX-OS Style CLI

To ensure that QoS priority settings are handled the same for traffic entering and transiting a single-pod fabric, or for traffic entering one pod and egressing another in a multi-pod fabric, you can enable CoS preservation using the NX-OS style CLI.

Note

Enabling CoS preservation applies a default mapping of the CoS priorities to DSCP levels to the various traffic types.


Procedure


Step 1

Enter APIC configuration mode.

Example:

apic1# configure
Step 2

Enable CoS preservation.

Example:

apic1(config)# qos preserve cos

Preserving QoS CoS Settings Using the REST API

To ensure that QoS priority settings are handled the same for traffic entering and transiting a single-pod fabric, or for traffic entering one pod and egressing another in a multi-pod fabric, you can enable CoS preservation using the REST API.

Note

Enabling CoS preservation applies a default mapping of the CoS priorities to DSCP levels to the various traffic types.


Procedure


You can use the following example to enable CoS preservation.

POST https://<apic-ip>/api/node/mo/uni/infra/qosinst-default.xml

Example:

<imdata totalCount="1">
    <qosInstPol ownerTag="" ownerKey="" name="default"
                dn="uni/infra/qosinst-default" descr="" ctrl="dot1p-preserve"/>
</imdata>

To disable CoS preservation, you can use the same POST message but with the ctrl field set to empty:

Example:

<imdata totalCount="1">
    <qosInstPol ownerTag="" ownerKey="" name="default"
                dn="uni/infra/qosinst-default" descr="" ctrl=""/>
</imdata>

Multipod QoS

Preserving QoS Priority Settings in a Multipod Fabric

This topic describes how to guarantee QoS priority settings in a multipod topology, where devices in the interpod network are not under APIC management, and may modify 802.1p settings in traffic transiting their network.

Note

You can alternatively use CoS Preservation where you want to preserve the QoS priority settings of 802.1p traffic entering POD 1 and egressing out of POD 2, but you are not concerned with preserving the CoS/DSCP settings in interpod network (IPN) traffic between the pods. For more information, see Preserving 802.1P Class of Service Settings.


Figure 1. Multipod Topology

As illustrated in this figure, traffic between pods in a multipod topology passes through an IPN, which may not be under APIC management. When an 802.1p frame is sent from a spine or leaf switch in POD 1, the devices in the IPN may not preserve the CoS setting in 802.1p frames. In this situation, when the frame reaches a POD 2 spine or leaf switch, it has the CoS level assigned by the IPN device, instead of the level assigned at the source in POD 1. Use a DSCP policy to ensure that the QoS priority levels are preserved in this case.

Configure a DSCP policy to preserve the QoS priority settings in a multipod topology, where there is a need to do deterministic mapping from CoS to DSCP levels for different traffic types, and you want to prevent the devices in the IPN from changing the configured levels. With a DSCP policy enabled, APIC converts the CoS level to a DSCP level, according to the mapping you configure. When a frame is sent from POD 1, its CoS (PCP) level is mapped to a DSCP level; when the frame reaches POD 2, the mapped DSCP level is mapped back to the original CoS (PCP) level.

DSCP Settings

Note

For traffic passing through the IPN, do not map any DSCP value to COS6 (except traceroute traffic).


Note

Starting with release 4.0(1x), custom DSCP values can be selected for class levels 4 through 6 for Multipod QoS policies.

Example:

apic1(config-qos-cmap)# set dscp-code control CS3
apic1(config-qos-cmap)# set dscp-code span CS5
apic1(config-qos-cmap)# set dscp-code level1 CS0
apic1(config-qos-cmap)# set dscp-code level2 CS1
apic1(config-qos-cmap)# set dscp-code level3 CS2
apic1(config-qos-cmap)# set dscp-code policy CS4
apic1(config-qos-cmap)# set dscp-code traceroute CS6

DSCP or TOS Level: Description

AF11: Assured Forwarding Class 1, low probability of dropping
AF12: Assured Forwarding Class 1, medium probability of dropping
AF13: Assured Forwarding Class 1, high probability of dropping
AF21: Assured Forwarding Class 2, low probability of dropping
AF22: Assured Forwarding Class 2, medium probability of dropping
AF23: Assured Forwarding Class 2, high probability of dropping
AF31: Assured Forwarding Class 3, low probability of dropping
AF32: Assured Forwarding Class 3, medium probability of dropping
AF33: Assured Forwarding Class 3, high probability of dropping
AF41: Assured Forwarding Class 4, low probability of dropping
AF42: Assured Forwarding Class 4, medium probability of dropping
AF43: Assured Forwarding Class 4, high probability of dropping
CS0: TOS Class Selector value 0 (the default)
CS1: TOS Class Selector value 1 (typically used for streaming traffic)
CS2: TOS Class Selector value 2 (typically used for OAM traffic such as SNMP, SSH, and Syslog)
CS3: TOS Class Selector value 3 (typically used for signalling traffic)
CS4: TOS Class Selector value 4 (typically used for Policy Plane traffic and to priority queue)
CS5: TOS Class Selector value 5 (typically used for broadcast video traffic)
CS6: TOS Class Selector value 6 (typically used for Network control traffic)
CS7: TOS Class Selector value 7
Expedited Forwarding: EF is dedicated to low-loss, low-latency traffic
Voice Admit: Similar to EF, but also admitted through CAC

Creating a DSCP Policy Using the GUI

Create a DSCP policy to enable guaranteeing QoS priority settings in a multipod topology and configure DSCP mappings for various traffic streams in the fabric. The mappings must be unique within the policy.

Procedure


Step 1

On the menu bar, click TENANTS > infra.

Step 2

In the Navigation pane, expand Protocol Policies > DSCP class-cos translation policy for L3 traffic.

Step 3

In the Properties panel, click Enabled to enable the DSCP policy.

Step 4

Map each traffic stream to one of the available levels. They must all be unique.

Step 5

Click Submit.


Creating a DSCP Policy Using the NX-OS Style CLI

Create a DSCP map (known as a DSCP policy in the APIC GUI) to guarantee QoS priority settings in a multipod topology. The mappings must be unique within the policy.

Configure a DSCP map with custom mappings for traffic streams with the following steps:

Procedure


Step 1

Enter global configuration mode.

Example:

apic1# configure
Step 2

Enter tenant configuration mode for the infra tenant.

Example:

apic1(config)# tenant infra
Step 3

Configure the DSCP map.

Example:

apic1(config-tenant)# qos dscp-map default
Step 4

Set the custom DSCP mappings, similar to the following example. The mappings must all be unique within a DSCP map.

Note 

For traffic passing through the IPN, do not map any DSCP value to COS6 (except traceroute traffic).

Example:

apic1(config-qos-cmap)# set dscp-code control CS3
apic1(config-qos-cmap)# set dscp-code span CS5
apic1(config-qos-cmap)# set dscp-code level1 CS0
apic1(config-qos-cmap)# set dscp-code level2 CS1
apic1(config-qos-cmap)# set dscp-code level3 CS2
apic1(config-qos-cmap)# set dscp-code level4 CS3
apic1(config-qos-cmap)# set dscp-code level5 CS4
apic1(config-qos-cmap)# set dscp-code level6 CS5
apic1(config-qos-cmap)# set dscp-code policy CS4
apic1(config-qos-cmap)# set dscp-code traceroute CS6
Step 5

Enable the DSCP map.

Example:

apic1(config-qos-cmap)# no shutdown

Creating a DSCP Policy Using the REST API

Procedure


Step 1

Configure and enable a DSCP policy with a post, such as the following:

POST https://192.0.20.123/api/node/mo/uni/tn-infra/dscptranspol-default.xml

Example:

<imdata totalCount="1">
    <qosDscpTransPol traceroute="AF43" span="AF42" policy="AF22" ownerTag=""
                     ownerKey="" name="default" level3="AF13" level2="AF12"
                     level1="AF11" dn="uni/tn-infra/dscptranspol-default"
                     descr="" control="AF21" adminSt="enabled"/>
</imdata>
Step 2

Disable the DSCP policy with a post such as the following:

POST https://192.0.20.123/api/node/mo/uni/tn-infra/dscptranspol-default.xml

Example:

<imdata totalCount="1">
    <qosDscpTransPol traceroute="AF43" span="AF42" policy="AF22" ownerTag=""
                     ownerKey="" name="default" level3="AF13" level2="AF12"
                     level1="AF11" dn="uni/tn-infra/dscptranspol-default"
                     descr="" control="AF21" adminSt="disabled"/>
</imdata>

Translating QoS Ingress Markings to Egress Markings

Translating QoS Ingress Markings to Egress Markings

APIC enables translating the 802.1P CoS field (Class of Service) based on the ingress DSCP value. 802.1P CoS translation is supported only if DSCP is present in the IP packet and dot1P is present in the Ethernet frames.

This functionality enables the ACI Fabric to classify the traffic for devices that classify the traffic based only on the CoS value. It allows mapping the dot1P CoS value based on the ingress dot1P value. It is mainly applicable for Layer 2 packets, which do not have an IP header.

Observe the following 802.1P CoS translation guidelines and limitations:

  • Enable the fabric global QoS policy dot1p-preserve option.

  • 802.1P CoS translation is not supported on external L3 interfaces.

  • 802.1P CoS translation is supported only if the egress frame is 802.1Q encapsulated.

802.1P CoS translation is not supported when the following configuration options are enabled:

  • Contracts are configured that include QoS.

  • The outgoing interface is on a FEX.

  • Multipod QoS using a DSCP policy is enabled.

  • Dynamic packet prioritization is enabled.

  • If an EPG is configured with intra-EPG endpoint isolation enforced.

  • If an EPG is configured with allow-microsegmentation enabled.

Translating QoS CoS Settings Using the GUI

Create a custom QoS policy and then associate the policy with an EPG.

Before you begin

Create the tenant, application, and EPGs that will consume the custom QoS policy.

Procedure


Step 1

On the menu bar, click Tenant > Tenant Name > Networking > Protocol Policies > Custom QoS.

Step 2

From the Actions drop-down list, choose Create Custom QoS Policy.

Step 3

In the Create Custom QoS Policy window, specify the Target CoS in the DSCP to priority map field.

This setting allows you to map ingress DSCP value to egress CoS value.

Step 4

In the Create Custom QoS Policy window, specify the Target CoS in the Dot1P Classifiers field.

This setting allows you to translate ingress CoS value to egress CoS value.

Step 5

Click Submit.

Step 6

On the menu bar, click Tenant > Tenant Name > Application Policies > Application Policy Name > Application EPGs > Application EPG Name.

Step 7

In the EPG panel, select the custom QoS policy you created in step 3.

Step 8

Click Submit.


Translating QoS CoS Settings Using the NX-OS CLI

Create a custom QoS policy and then associate the policy with an EPG using the following commands:

Before you begin

Create the tenant, application, and EPGs that will consume the custom QoS policy.

Procedure

  Command or Action Purpose
Step 1

configure

Example:

apic1#configure

Enters global configuration mode.

Note 
Enter the commands listed in steps 1-5 to create a custom QoS policy.
Step 2

tenant tenant-name

Example:

apic1(config)#tenant t001

Enters tenant configuration mode for the tenant.

Step 3

policy-map type qos QoS-policy-name

Example:

apic1(config-tenant)#policy-map type qos baz

Creates QoS policy.

Step 4

match dscp AF23 AF31 set-cos 6

Example:

apic1(config-tenant-pmap-qos)#match dscp AF23 AF31 set-cos 6

Sets the DSCP value and the target CoS value.

Step 5

exit

Example:

apic1(config-tenant-pmap-qos)#exit

Returns to the tenant configuration mode.

Step 6

application app-name

Example:

apic1(config-tenant)#application ap2

Creates an application profile.

Note 
Enter the commands listed in steps 6-9 to associate the custom QoS policy with an EPG.
Step 7

epg epg-name

Example:

apic1(config-tenant-app)# epg ep2

Creates an EPG in the application profile.

Step 8

service-policy policy-name

Example:

apic1(config-tenant-app-epg)#service-policy baz

Associates the EPG to the policy.

Step 9

exit

Example:

apic1(config-tenant-app-epg)#exit

Returns to the tenant configuration mode.

Step 10

external-l2 epg epg-name

Example:

apic1(config-tenant)#external-l2 epg myout:12

Creates an external layer 2 EPG.

Note 
Enter the commands listed in steps 10-12 to associate the custom QoS policy with an external L2 EPG.
Step 11

service-policy policy-name

Example:

apic1(config-tenant-l2ext-epg)#service-policy baz

Associates the EPG to the policy.

Step 12

exit

Returns to the tenant configuration mode.

Translating QoS Ingress Markings to Egress Markings Using the REST API

Create a custom QoS policy and then associate the policy with an EPG.

Before you begin

Create the tenant, application, and EPGs that will consume the custom QoS policy. The example creates the vrfQos001 custom QoS policy and associates it with the ep2 EPG, which will consume it.

Procedure


Step 1

Create a custom QoS policy by sending a post with XML such as the following example:

Example:

<qosCustomPol name="vrfQos001" dn="uni/tn-t001/qoscustom-vrfQos001">
    <qosDscpClass to="AF31" targetCos="6" target="unspecified"
                  prio="unspecified" from="AF23"/>
    <qosDot1PClass to="1" targetCos="6" target="unspecified"
                   prio="unspecified" from="0"/>
</qosCustomPol>
Step 2

Associate the policy with an EPG that will consume it by sending a post with XML such as the following example:

Example:

<fvAEPg prio="unspecified" prefGrMemb="exclude" pcEnfPref="unenforced"
        name="ep2" matchT="AtleastOne" isAttrBasedEPg="no" fwdCtrl=""
        dn="uni/tn-t001/ap-ap2/epg-ep2">
    <fvRsDomAtt tDn="uni/vmmp-VMware/dom-vs1" resImedcy="lazy" 
                primaryEncap="unknown" netflowPref="disabled"
                instrImedcy="lazy" encapMode="auto" encap="unknown"
                delimiter="" classPref="encap"/>
    <fvRsCustQosPol tnQosCustomPolName="vrfQos001"/>
    <fvRsBd tnFvBDName="default"/>
</fvAEPg>

Configuring QoS for Multipod

Use this procedure to configure QoS on a multipod setup.

Before you begin

You must have configured Multipod.

Procedure


Step 1

Preserve the QoS CoS settings to ensure that QoS priority settings are handled the same for traffic through the ACI fabric.

  1. On the menu bar, click Fabric > Access Policies.

  2. In the Policies pane, expand Global Policies and click QOS Class Policies.

  3. In the Global Policies - QOS Class Policies panel, click the Preserve COS Dot1p Preserve check box.

    Note 
    By configuring Multipod QoS along with DPP, 802.1p is preserved.
  4. Click Submit.

Step 2

Match the QoS Class Policy-Level 1, QoS Class Policy-Level 2, and QoS Class Policy-Level 3 according to the policy determined in the IP network (IPN to IPN).

  1. On the menu bar, click Fabric > Access Policies.

  2. In the Policies pane, click Global Policies > QOS Class Policies > Level 1.

  3. In the QOS Class Policy - Level1 panel, update the Scheduling Algorithm and Bandwidth Allocated (in %) drop-down lists.

  4. Click Submit.

  5. Repeat the steps for QoS Class Policy-Level 2 and QoS Class Policy-Level 3.

Step 3

Create a DSCP policy to enable guaranteeing QoS priority settings in a multipod topology and configure DSCP mappings for various traffic streams in the fabric.

  1. On the menu bar, click TENANTS > infra.

  2. In the Navigation pane, expand Protocol Policies > DSCP class-cos translation policy for L3 traffic.

  3. In the Properties panel, click Enabled to enable the DSCP policy.

  4. Map each traffic stream to one of the available levels. They must all be unique.

    Note 

    The traffic in the IP network (from IPN to IPN) is treated as priority traffic.

  5. Click Submit.

Example:

Sample DSCP mappings:
  • User Level 1 traffic is mapped to Expedited Forwarding, since it carries voice and real time traffic.

  • User Level 2 traffic is mapped to CS3, as it is often used for traffic marked for precedence 3 treatment.

  • User Level 3 traffic is mapped to CS0, as it is the default traffic.

  • User Level 4

  • User Level 5

  • User Level 6

  • Control Plane traffic is mapped to CS7 and assigned to the priority queue.

  • Policy Plane traffic is mapped to CS4 and assigned to the priority queue.

  • Span Traffic is mapped with CS1, as it is traditionally treated as background or scavenger class traffic.

  • Traceroute Traffic is mapped with CS5.

Note 

For traffic passing through the IPN, do not map any DSCP value to COS6 (except traceroute traffic).
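
As a reference point, the sample mapping above could be expressed as a DSCP translation policy similar to the qosDscpTransPol example shown earlier in this document. The following sketch assumes that EF and the CS values are accepted as attribute values and omits levels 4 through 6:

<qosDscpTransPol dn="uni/tn-infra/dscptranspol-default" name="default" adminSt="enabled"
                 level1="EF" level2="CS3" level3="CS0"
                 control="CS7" policy="CS4" span="CS1" traceroute="CS5"/>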

Step 4

Create class maps to match the markings configured on the APIC.

Example:

class-map type qos match-all UserLevel1
  match dscp 46
class-map type qos match-all UserLevel2
  match dscp 24
class-map type qos match-all UserLevel3
  match dscp 0
class-map type qos match-all SpanTraffic
  match dscp 8
class-map type qos match-all iTraceroute
  match dscp 40
class-map type qos match-all CONTROL-TRAFFIC
  match dscp 48,56

Step 5

Create a policy map to label the ingress Control Plane and Policy Plane traffic with a QoS group.

Example:

policy-map type qos ACI-CLASSIFICATION
  class CONTROL-TRAFFIC
    set qos-group 7
  class UserLevel1
    set qos-group 6
  class UserLevel2
    set qos-group 3
  class UserLevel3
    set qos-group 0
  class SpanTraffic
    set qos-group 1
  class iTraceroute
    set qos-group 5

Step 6

Configure priority queue for the QoS group.

Example:

policy-map type queuing IPN-8q-out-policy
  class type queuing c-out-8q-q7
    priority level 1
  class type queuing c-out-8q-q6
    priority level 2
  class type queuing c-out-8q-q5
    bandwidth remaining percent 0
  class type queuing c-out-8q-q4
    bandwidth remaining percent 0
  class type queuing c-out-8q-q3
    bandwidth remaining percent 40
  class type queuing c-out-8q-q2
    bandwidth remaining percent 0
  class type queuing c-out-8q-q1
    bandwidth remaining percent 1
  class type queuing c-out-8q-q-default
    bandwidth remaining percent 58

Step 7

Apply the policy map to system level QoS.

Example:

system qos
  service-policy type queuing output IPN-8q-out-policy

Step 8

Associate the interfaces connected to the spine switch with the service policy.

Example:

interface Ethernet1/49.4
    description POD2-Spine-401 e1/5
  mtu 9150
  encapsulation dot1q 4
  vrf member IPNACISJC
  service-policy type qos input ACI-CLASSIFICATION
  ip address 10.149.195.106/30
  ip ospf network point-to-point
  ip router ospf IPNACISJC area 0.0.0.0
  ip pim sparse-mode
  ip dhcp relay address 10.0.0.1 
  ip dhcp relay address 10.0.0.2 
  ip dhcp relay address 10.0.0.3 
  no shutdown

interface Ethernet1/50.4
  description POD2-Spine-402 e1/5
  mtu 9150
  encapsulation dot1q 4
  vrf member IPNACISJC
  service-policy type qos input ACI-CLASSIFICATION
  ip address 10.149.195.110/30
  ip ospf network point-to-point
  ip router ospf IPNACISJC area 0.0.0.0
  ip pim sparse-mode
  ip dhcp relay address 10.0.0.1 
  ip dhcp relay address 10.0.0.2 
  ip dhcp relay address 10.0.0.3 
  no shutdown

Step 9

Verify the ingress interface on IPN.

Example:

IPNPOD2# show policy-map interface ethernet 1/50.4 input

Global statistics status :   enabled

Ethernet1/50.4

  Service-policy (qos) input:   ACI-CLASSIFICATION 
    SNMP Policy Index:  285215377

    Class-map (qos):   CONTROL-TRAFFIC (match-all)

     Slot 1
        1434 packets 
     Aggregate forwarded :
        1434 packets 
      Match: dscp 48,56
      set qos-group 7

    Class-map (qos):   UserLevel1 (match-all)
     Aggregate forwarded :
        0 packets 
      Match: dscp 46
      set qos-group 6

    Class-map (qos):   UserLevel2 (match-all)
     Aggregate forwarded :
        0 packets 
      Match: dscp 24
      set qos-group 3

    Class-map (qos):   UserLevel3 (match-all)

     Slot 1
        25 packets 
     Aggregate forwarded :
        25 packets 
      Match: dscp 0
      set qos-group 0

    Class-map (qos):   SpanTraffic (match-all)
     Aggregate forwarded :
        0 packets 
      Match: dscp 8
      set qos-group 1

    Class-map (qos):   iTraceroute (match-all)
     Aggregate forwarded :
        0 packets 
      Match: dscp 40
      set qos-group 5
 
 
IPNPOD2# show policy-map interface ethernet 1/49.4 input
Global statistics status :   enabled

Ethernet1/49.4

  Service-policy (qos) input:   ACI-CLASSIFICATION 
    SNMP Policy Index:  285215373

    Class-map (qos):   CONTROL-TRAFFIC (match-all)

     Slot 1
        5149 packets 
     Aggregate forwarded :
        5149 packets 
      Match: dscp 48,56
      set qos-group 7

    Class-map (qos):   UserLevel1 (match-all)
     Aggregate forwarded :
        0 packets 
      Match: dscp 46
      set qos-group 6

    Class-map (qos):   UserLevel2 (match-all)
     Aggregate forwarded :
        0 packets 
      Match: dscp 24
      set qos-group 3

    Class-map (qos):   UserLevel3 (match-all)

     Slot 1
        960 packets 
     Aggregate forwarded :
        960 packets 
      Match: dscp 0
      set qos-group 0

    Class-map (qos):   SpanTraffic (match-all)
     Aggregate forwarded :
        0 packets 
      Match: dscp 8
      set qos-group 1

    Class-map (qos):   iTraceroute (match-all)
     Aggregate forwarded :
        0 packets 
      Match: dscp 40
      set qos-group 5
 
Step 10

Verify the egress interface on IPN.

Example:

IPNPOD1# show queuing interface e 1/3 | b "GROUP 7"

slot  1
=======


Egress Queuing for Ethernet1/3 [System]
------------------------------------------------------------------------------
QoS-Group# Bandwidth% PrioLevel                Shape                   QLimit
                                   Min          Max        Units   
------------------------------------------------------------------------------
      7             -         1           -            -     -            9(D)
      6             -         2           -            -     -            9(D)
      5             0         -           -            -     -            9(D)
      4             0         -           -            -     -            9(D)
      3            20         -           -            -     -            9(D)
      2             0         -           -            -     -            9(D)
      1             1         -           -            -     -            9(D)
      0            59         -           -            -     -            9(D)
+-------------------------------------------------------------+
|                              QOS GROUP 0                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |          125631|              70|
|                   Tx Byts |        42902871|            8836|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 1                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |               0|               0|
|                   Tx Byts |               0|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 2                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |               0|               0|
|                   Tx Byts |               0|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 3                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |               0|               0|
|                   Tx Byts |               0|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 4                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |               0|               0|
|                   Tx Byts |               0|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 5                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |               0|               0|
|                   Tx Byts |               0|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 6                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |          645609|             217|
|                   Tx Byts |       115551882|           25606|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 7                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |           23428|               9|
|                   Tx Byts |         4132411|            1062|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                      CONTROL QOS GROUP                      |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |            6311|               0|
|                   Tx Byts |          809755|               0|
|            Tail Drop Pkts |               0|               0|
|            Tail Drop Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                         SPAN QOS GROUP                      |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |               0|               0|
|                   Tx Byts |               0|               0|
|            Tail Drop Pkts |               0|               0|
|            Tail Drop Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+


Ingress Queuing for Ethernet1/3
-----------------------------------------------------
QoS-Group#                 Pause                     
           Buff Size       Pause Th      Resume Th   
-----------------------------------------------------
      7              -            -            - 
      6              -            -            - 
      5              -            -            - 
      4              -            -            - 
      3              -            -            - 
      2              -            -            - 
      1              -            -            - 
      0              -            -            - 


Per Port Ingress Statistics
--------------------------------------------------------
     Hi Priority Drop Pkts                           0
    Low Priority Drop Pkts                           0
Ingress Overflow Drop Pkts                           0



PFC Statistics
------------------------------------------------------------------------------
TxPPP:                    0,   RxPPP:                    0
------------------------------------------------------------------------------
PFC_COS QOS_Group   TxPause             TxCount   RxPause             RxCount
      0         0  Inactive                   0  Inactive                   0
      1         0  Inactive                   0  Inactive                   0
      2         0  Inactive                   0  Inactive                   0
      3         0  Inactive                   0  Inactive                   0
      4         0  Inactive                   0  Inactive                   0
      5         0  Inactive                   0  Inactive                   0
      6         0  Inactive                   0  Inactive                   0
      7         0  Inactive                   0  Inactive                   0
------------------------------------------------------------------------------
 
 
 
IPNPOD2# show queuing interface e 1/4 
 
 slot  1
=======


Egress Queuing for Ethernet1/4 [System]
------------------------------------------------------------------------------
QoS-Group# Bandwidth% PrioLevel                Shape                   QLimit
                                   Min          Max        Units   
------------------------------------------------------------------------------
      7             -         1           -            -     -            9(D)
      6             -         2           -            -     -            9(D)
      5             0         -           -            -     -            9(D)
      4             0         -           -            -     -            9(D)
      3            20         -           -            -     -            9(D)
      2             0         -           -            -     -            9(D)
      1             1         -           -            -     -            9(D)
      0            59         -           -            -     -            9(D)
+-------------------------------------------------------------+
|                              QOS GROUP 0                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |           63049|               0|
|                   Tx Byts |        15968783|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 1                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |               0|               0|
|                   Tx Byts |               0|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 2                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |               0|               0|
|                   Tx Byts |               0|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 3                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |               0|               0|
|                   Tx Byts |               0|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 4                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |               0|               0|
|                   Tx Byts |               0|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 5                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |               0|               0|
|                   Tx Byts |               0|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 6                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |         1141418|               0|
|                   Tx Byts |       237770324|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                              QOS GROUP 7                    |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |           32440|               0|
|                   Tx Byts |         6986806|               0|
| WRED/AFD & Tail Drop Pkts |               0|               0|
| WRED/AFD & Tail Drop Byts |               0|               0|
|              Q Depth Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                      CONTROL QOS GROUP                      |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |            6275|               0|
|                   Tx Byts |          804748|               0|
|            Tail Drop Pkts |               0|               0|
|            Tail Drop Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+
|                         SPAN QOS GROUP                      |
+-------------------------------------------------------------+
|                           |  Unicast       |Multicast       |
+-------------------------------------------------------------+
|                   Tx Pkts |               0|               0|
|                   Tx Byts |               0|               0|
|            Tail Drop Pkts |               0|               0|
|            Tail Drop Byts |               0|               0|
|       WD & Tail Drop Pkts |               0|               0|
+-------------------------------------------------------------+


Ingress Queuing for Ethernet1/4
-----------------------------------------------------
QoS-Group#                 Pause                     
           Buff Size       Pause Th      Resume Th   
-----------------------------------------------------
      7              -            -            - 
      6              -            -            - 
      5              -            -            - 
      4              -            -            - 
      3              -            -            - 
      2              -            -            - 
      1              -            -            - 
      0              -            -            - 


Per Port Ingress Statistics
--------------------------------------------------------
     Hi Priority Drop Pkts                           0
    Low Priority Drop Pkts                           0
Ingress Overflow Drop Pkts                           0



PFC Statistics
------------------------------------------------------------------------------
TxPPP:                    0,   RxPPP:                    0
------------------------------------------------------------------------------
PFC_COS QOS_Group   TxPause             TxCount   RxPause             RxCount
      0         0  Inactive                   0  Inactive                   0
      1         0  Inactive                   0  Inactive                   0
      2         0  Inactive                   0  Inactive                   0
      3         0  Inactive                   0  Inactive                   0
      4         0  Inactive                   0  Inactive                   0
      5         0  Inactive                   0  Inactive                   0
      6         0  Inactive                   0  Inactive                   0
      7         0  Inactive                   0  Inactive                   0
------------------------------------------------------------------------------


Troubleshooting Cisco APIC QoS Policies

The following table summarizes common troubleshooting scenarios for Cisco APIC QoS policies.

Problem: Unable to update a configured QoS policy.

Solution:

  1. Invoke the following API to ensure that the qospDscpRule managed object is present on the leaf switch. (A scripted version of this check is sketched after these steps.)

    GET https://192.0.20.123/api/node/class/qospDscpRule.xml
  2. Ensure that the QoS rules are accurately configured and associated with the EPG ID to which the policy is attached.

    Use the following NX-OS style CLI commands to verify the configuration.

    leaf1# show vlan
    leaf1# show system internal aclqos qos policy detail

    apic1# show running-config tenant tenant-name policy-map type qos custom-qos-policy-name
    apic1# show running-config tenant tenant-name application application-name epg epg-name
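
    For environments where this check is scripted, the following is a minimal Python sketch of step 1. It assumes the third-party requests library, placeholder credentials, and the APIC address from the example URL above, and queries the JSON form of the same qospDscpRule class endpoint:

    import requests

    # Hypothetical APIC address and credentials; replace with your own values.
    APIC = "https://192.0.20.123"
    USER = "admin"
    PASSWORD = "password"

    session = requests.Session()

    # Log in to the APIC REST API; the authentication token is kept as a session cookie.
    login_payload = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
    resp = session.post(APIC + "/api/aaaLogin.json", json=login_payload, verify=False)
    resp.raise_for_status()

    # Query all qospDscpRule objects; an empty result means no DSCP rules were
    # pushed to the fabric.
    resp = session.get(APIC + "/api/node/class/qospDscpRule.json", verify=False)
    resp.raise_for_status()
    rules = resp.json()["imdata"]
    print("qospDscpRule objects found: %d" % len(rules))
    for rule in rules:
        attrs = rule["qospDscpRule"]["attributes"]
        # dn is always present; other attribute names may vary by release.
        print(attrs["dn"])

    A non-empty imdata list confirms that the qospDscpRule objects are present on the fabric, which is the condition step 1 checks for.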

Problem: Show QoS interface statistics.

Solution:

If you do not use the detail option, the CLI displays statistics for eth1/1 only for the QoS classes level1, level2, level3, level4, level5, level6, and policy-plane.

NX-OS ibash CLI:

    tor-leaf1# show queuing interface ethernet 1/1 [detail]

To display statistics for the control-plane and span classes of an interface, use the CLI with the detail option.

Example: fabric 107 show queuing interface ethernet 1/1 detail

APIC CLI:

    swtb123-ifc1# fabric node_id show queuing interface ethernet 1/1
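
Where this check must be repeated across nodes or collected periodically, the same APIC CLI command can be driven over SSH. The following is a minimal Python sketch assuming the third-party paramiko library and a placeholder hostname and credentials; node ID 107 and the interface are taken from the example above:

    import paramiko

    # Hypothetical APIC management address and credentials; substitute your own.
    APIC_HOST = "192.0.20.123"
    USER = "admin"
    PASSWORD = "password"

    # Fabric node IDs to check; 107 matches the example above.
    NODE_IDS = [107]

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(APIC_HOST, username=USER, password=PASSWORD)

    for node_id in NODE_IDS:
        # Use the detail form so control-plane and span class counters are included.
        cmd = "fabric %d show queuing interface ethernet 1/1 detail" % node_id
        stdin, stdout, stderr = client.exec_command(cmd)
        print("===== node %d =====" % node_id)
        print(stdout.read().decode())

    client.close()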