Priority Flow Control Watchdog and Monitoring Metrics

Stages of Priority Flow Control watchdog and monitoring

After configuring Priority Flow Control (PFC), use this reference to understand the key stages of monitoring and optimizing PFC. These include watchdog operation, pause-duration analysis, and global-counter review.

Table 1. Stages of Priority Flow Control Watchdog and Monitoring

  • Understand watchdog monitoring: Learn how the PFC watchdog monitors queues, detects stuck conditions, and prevents pause storms by transitioning through monitor, shutdown, and restore states. See Priority Flow Control watchdog.

  • Configure and verify watchdog operation: Enable or disable the watchdog globally or per interface, adjust interval multipliers, and verify its operation. See Configure a Priority Flow Control watchdog interval.

  • Review global counters: Display aggregated PFC and watchdog counters to assess network-wide pause and congestion activity. See Global statistics counters for Priority Flow Control and Priority Flow Control watchdog.

  • Analyze queue pause duration: View transmit (Tx) and receive (Rx) pause percentages and durations to identify whether congestion is temporary or persistent. See Traffic class queue pause duration.

Priority Flow Control watchdog

The Priority Flow Control (PFC) watchdog is a PFC fault-detection and recovery mechanism that

  • identifies queue-stuck conditions (PFC storms) caused by sustained pause frames,

  • prevents pause-frame propagation and looping across the network, and

  • restores normal traffic flow automatically after congestion clears.

PFC watchdog parameters and their role in queue monitoring

The PFC watchdog reduces persistent pause conditions by enforcing time-based monitoring of queues and managing state transitions:

  • Watchdog interval: The polling interval, in milliseconds, at which the watchdog checks PFC queues to determine whether they are stalled due to excessive PFC pause frames.

  • Shutdown multiplier: The multiple of the watchdog interval after which the queue is shut down if congestion persists.

  • Auto-restore multiplier: Wait period, in multiples of the watchdog interval, before the watchdog restores the queue after congestion clears.
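The relationship between these parameters can be illustrated with a small calculation. This is an explanatory sketch only, not device code; the function and parameter names are hypothetical, and values are in milliseconds:

```python
# Hypothetical illustration of how the watchdog timers relate.
# Names are not actual CLI parameters; values are in milliseconds.

def effective_timers(interval_ms, shutdown_multiplier, auto_restore_multiplier):
    """Return the derived shutdown and auto-restore wait times."""
    shutdown_ms = interval_ms * shutdown_multiplier          # queue declared stuck after this
    auto_restore_ms = interval_ms * auto_restore_multiplier  # wait after pause frames stop
    return shutdown_ms, auto_restore_ms

# With an interval of 100 ms, a shutdown multiplier of 2, and an
# auto-restore multiplier of 10 (the defaults used later in this document):
print(effective_timers(100, 2, 10))  # (200, 1000)
```

That is, the queue is shut down after 200 ms of sustained congestion and restored 1000 ms after pause frames clear.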

How the Priority Flow Control watchdog works

The Priority Flow Control (PFC) watchdog monitors queues receiving pause frames. It enforces queue transitions between monitored, waiting, shutdown, and restored states to maintain network stability.

Summary

The key components involved in the PFC watchdog process are:

  • Watchdog module: detects and reacts to prolonged pause frames.

  • Line card hardware: monitors queue drain states and notifies the Watchdog module.

  • System timers: govern queue shutdown, monitoring, and restoration intervals.

Workflow

These stages describe how the PFC watchdog process works.

  1. Monitoring of pause activity: The watchdog module observes PFC-enabled queues for excessive pause frames during the configured watchdog interval.
  2. Detection of sustained congestion: When hardware detects that packets are not draining due to persistent pause frames, it notifies the watchdog module.
  3. Queue transition to wait-to-shutdown: The watchdog starts the shutdown timer. If the queue remains blocked, the module transitions it to the wait-to-shutdown state.
  4. Queue shutdown and packet drop: If the queue doesn’t drain before the shutdown timer expires, the watchdog changes its state to “drop.” All outgoing packets on that queue are dropped, halting pause propagation.
  5. Queue monitoring during shutdown: The watchdog continues to poll the queue. If pause frames persist, the queue remains in the drop state.
  6. Auto-restore evaluation: When pause frames stop, the auto-restore timer starts. If pause frames reappear during the timer interval, the timer resets. If no pause frames are detected, the queue transitions to the “restored” state and resumes traffic.
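The stages above amount to a small state machine driven by the watchdog poll. The following is an illustrative model of that behavior, not device firmware; class and method names are invented, and time is counted in watchdog-interval ticks:

```python
# Illustrative model of the PFC watchdog queue states described above.
# Not device code: names are hypothetical, time is in watchdog-interval ticks.

MONITORED, WAIT_TO_SHUTDOWN, DROP, RESTORED = (
    "monitored", "wait-to-shutdown", "drop", "restored")

class WatchdogQueue:
    def __init__(self, shutdown_multiplier=2, auto_restore_multiplier=10):
        self.state = MONITORED
        self.shutdown_multiplier = shutdown_multiplier
        self.auto_restore_multiplier = auto_restore_multiplier
        self.blocked_ticks = 0   # consecutive intervals with pause frames
        self.clear_ticks = 0     # consecutive intervals without pause frames

    def poll(self, paused):
        """Advance one watchdog interval; `paused` means pause frames persist."""
        if paused:
            self.clear_ticks = 0          # auto-restore timer resets
            self.blocked_ticks += 1
            if self.state != DROP:
                self.state = WAIT_TO_SHUTDOWN
                if self.blocked_ticks >= self.shutdown_multiplier:
                    self.state = DROP     # queue shut down; packets dropped
        else:
            self.blocked_ticks = 0
            if self.state == DROP:
                self.clear_ticks += 1     # auto-restore timer running
                if self.clear_ticks >= self.auto_restore_multiplier:
                    self.state = RESTORED # queue resumes traffic
        return self.state
```

With the defaults sketched here (shutdown multiplier 2, auto-restore multiplier 10), two consecutive paused polls drive the queue to the drop state, and ten consecutive clean polls restore it.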

Best practices for Priority Flow Control watchdog configuration

Use these best practices to configure and operate the Priority Flow Control watchdog effectively.

  • Configure PFC before enabling the watchdog: Configure Priority Flow Control (PFC) and its associated policies before enabling the PFC watchdog. This approach ensures that pause-frame behavior and queue-management policies are already defined before the watchdog begins monitoring.

  • Understand global watchdog behavior: Disabling the global watchdog mode overrides all interface-level configurations. Even if the watchdog is configured on individual interfaces, it remains inactive until the global mode is enabled.

  • Leverage interface-level overrides: When the global watchdog mode is enabled, interface-level attributes such as interval, shutdown multiplier, and auto-restore multiplier override global settings. This flexibility allows finer control for high-priority interfaces with unique traffic profiles.

  • Tune watchdog parameters per interface: Use appropriate values for interval, shutdown multiplier, and auto-restore multiplier for each interface. This configuration helps avoid premature shutdown or delayed recovery of queues. Incorrectly set timers can result in unnecessary packet drops or slow queue restoration.

Configure a Priority Flow Control watchdog interval

Use this task to configure and verify Priority Flow Control (PFC) watchdog parameters at either the global or interface level. The watchdog detects queue-stuck conditions caused by sustained PFC pause frames and automatically restores normal queue operation after congestion clears.

The PFC watchdog monitors queues for pause storms and takes corrective actions when queues fail to drain within a specified interval. You can configure watchdog timers globally for all interfaces. Alternatively, override timers individually for specific interfaces.

Before you begin

Ensure that PFC and its policies are configured and active.

Follow these steps to configure and verify the PFC watchdog interval.

Procedure


Step 1

Enable the watchdog globally.

Example:

Router(config)#priority-flow-control watchdog mode on

This step activates the watchdog feature for all PFC-enabled interfaces.

Step 2

Configure global watchdog parameters.

Example:

Router(config)#priority-flow-control watchdog interval 100
Router(config)#priority-flow-control watchdog shutdown-multiplier 2
Router(config)#priority-flow-control watchdog auto-restore-multiplier 10
Router(config)#commit

This step sets watchdog timing behavior:

  • checks queues every 100 milliseconds

  • declares queues stuck after 200 milliseconds (100 milliseconds x 2), and

  • restores queues 1000 milliseconds (100 milliseconds x 10) after pause frames stop

Step 3

(Optional) Configure interface-level overrides.

Example:

Router(config)#interface HundredGigE0/2/0/0
Router(config-if)#priority-flow-control watchdog mode on
Router(config-if)#commit

This step enables the watchdog for a specific interface. Interface-level settings override global values if defined.

Step 4

Verify watchdog configuration.

  • Verify global watchdog configuration.
    Router#show run priority-flow-control watchdog
    priority-flow-control watchdog interval 100
    priority-flow-control watchdog shutdown-multiplier 2
    priority-flow-control watchdog auto-restore-multiplier 10
    priority-flow-control watchdog mode on
  • Verify active watchdog configuration on an interface.
    Router#show controllers HundredGigE0/2/0/0 priority-flow-control
    
    Priority flow control information for interface HundredGigE0/2/0/0:
    
    Priority flow control watchdog configuration:
    (D) : Default value
    U : Unconfigured
    --------------------------------------------------------------------------------
    Configuration Item         Global  Interface  Effective
    --------------------------------------------------------------------------------
    PFC watchdog state :        Enabled   U        Enabled
    Poll interval :             100(D)    U        100(D)
    Shutdown multiplier :       2(D)      U        2(D)
    Auto-restore multiplier :   10(D)     U        10(D)
    
    • Global column shows configured values.

    • Interface column shows interface-specific overrides (if any).

    • Effective column shows which values are actively applied to the interface.

  • Verify watchdog operational statistics.
    Router#show controllers HundredGigE0/2/0/0 priority-flow-control watchdog-stats
    Priority flow control information for interface HundredGigE0/2/0/0:
    
    Priority flow control watchdog statistics:
    SAR: Auto restore and shutdown
    ---------------------------------------------------------------------------------------------------
    Traffic Class            :       0        1        2        3        4        5        6        7
    ---------------------------------------------------------------------------------------------------
    Watchdog Events          :       0        0        0        3        3        0        0        0
    Shutdown Events          :       0        0        0        3        3        0        0        0
    Auto Restore Events      :       0        0        0        3        3        0        0        0
    SAR Events               :       0        0        0     3510     3510        0        0        0
    SAR Instantaneous Events :       0        0        0     1172     1172        0        0        0
    Total Dropped Packets    :       0        0        0 941505767 941488166        0        0        0
    Dropped Packets          :       0        0        0 314855466 314887161        0        0        0
    
    • Watchdog Events displays the number of times the watchdog detected a stalled queue.

    • Shutdown Events displays the number of times the watchdog shut down a queue because it remained stuck.

    • Auto Restore Events displays the number of times the watchdog restored the queue after congestion cleared.

    • Total Dropped Packets displays the cumulative virtual output queue (VOQ) and output queue drops across all watchdog events since the last clear.

    • Dropped Packets displays the cumulative VOQ and output queue drops in the most recent watchdog shutdown event.

    • Ignore the SAR Events and SAR Instantaneous Events entries. These numbers are not relevant to your operations.


Global statistics counters for Priority Flow Control and Priority Flow Control watchdog

A global statistics counter for Priority Flow Control (PFC) and PFC watchdog is an observability mechanism that

  • consolidates per-interface counters into a unified, system-wide view,

  • collects statistics from all line cards and aggregates them in the local statistics infrastructure, and

  • provides fast, tabular summaries of PFC and PFC watchdog behavior across all interfaces.

This feature enables you to quickly monitor overall PFC and watchdog activity without running per-interface commands. It improves troubleshooting speed and provides a holistic view of congestion control across the chassis.

Table 2. Feature History Table

  • Global Statistics Counters for Priority Flow Control and Priority Flow Control Watchdog (Release 25.1.1): Introduced in this release on: Fixed Systems (8700 [ASIC: K100]) (select variants only*). This feature is now supported on Cisco 8712-MOD-M routers.

  • Global Statistics Counters for Priority Flow Control and Priority Flow Control Watchdog (Release 24.4.1): Introduced in this release on: Fixed Systems (8200 [ASIC: P100], 8700 [ASIC: P100]) (select variants only*); Modular Systems (8800 [LC ASIC: P100]) (select variants only*).

    *This feature is supported on:

      • 8212-48FH-M

      • 8711-32FH-M

      • 88-LC1-36EH

      • 88-LC1-12TH24FH-E

      • 88-LC1-52Y8H-EM

  • Global Statistics Counters for Priority Flow Control and Priority Flow Control Watchdog (Releases 7.5.5 and 24.2.11): You can now view statistics for Priority Flow Control (PFC) and PFC watchdog for all interfaces in a consolidated, compact, tabular, and easy-to-read format. We’ve also made the display of these global statistics faster: data is collected from all line cards for their interfaces and sent cumulatively to the local statistics infrastructure, from where the show commands collect it. Previously, you could view PFC and PFC watchdog statistics only per interface, with the show commands getting the data from each interface.

This feature modifies the following command:

View global statistics for Priority Flow Control and Priority Flow Control watchdog

Use this task to view the consolidated statistics for all interfaces on the router, enabling faster validation of Priority Flow Control (PFC) or PFC watchdog behavior.

Procedure


Step 1

View global PFC statistics.

Example:

Router#show controllers all priority-flow-control statistics location all
Interface               TC   RxPFC   TxPFC   RxDropped
FourHundredGigE0/0/0/0  0    0       0       NA
FourHundredGigE0/0/0/0  7    0       0       NA
FourHundredGigE0/0/0/0  all  0       0       0

This example displays traffic-class-based PFC statistics for all interfaces where:

  • Rx PFC displays the number of received PFC frames

  • Tx PFC displays the number of transmitted PFC frames

  • Rx Dropped displays the number of PFC frames dropped. 0 indicates that no frames were dropped, while NA indicates the information is not available because the line card has not sent the data.
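If you collect these counters programmatically (for example, over SSH or from a telemetry pipeline), the tabular output can be turned into records. The following is an illustrative parser based on the column layout shown above, not an official API; the function name and record fields are invented:

```python
# Minimal, illustrative parser for the global PFC statistics table above.
# Assumes the five-column layout: Interface, TC, RxPFC, TxPFC, RxDropped.

def parse_pfc_stats(output):
    rows = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) == 5 and parts[0] != "Interface":
            intf, tc, rx, tx, dropped = parts
            rows.append({
                "interface": intf,
                "tc": tc,
                "rx_pfc": int(rx),
                "tx_pfc": int(tx),
                # 'NA' means the line card has not sent the data yet
                "rx_dropped": None if dropped == "NA" else int(dropped),
            })
    return rows

sample = """Interface               TC   RxPFC   TxPFC   RxDropped
FourHundredGigE0/0/0/0  0    0       0       NA
FourHundredGigE0/0/0/0  all  0       0       0"""
rows = parse_pfc_stats(sample)
```

Records with `rx_dropped` set to `None` can then be retried later, once the line card has reported its data.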

Step 2

View global PFC watchdog statistics.

Example:

Router#show controllers all priority-flow-control watchdog-stats location all 
 
Interface                TC  Watchdog  Shutdown  Auto Restore  SAR     SAR Instantaneous  Total    Dropped
                             Events    Events    Events        Events  Events             Dropped
FourHundredGigE0/0/0/0    0      0        0         0           0          0                0              0
FourHundredGigE0/0/0/0    7      0        0         0           0          0                0              0
FourHundredGigE0/0/0/1    0      0        0         0           0          0                0              0
FourHundredGigE0/0/0/1    7      0        0         0           0          0                0              0
FourHundredGigE0/0/0/2    0      0        0         0           0          0                0              0
FourHundredGigE0/0/0/2    7      0        0         0           0          0                0              0
FourHundredGigE0/0/0/3    0      0        0         0           0          0                0              0
FourHundredGigE0/0/0/3    7      0        0         0           0          0                0              0
FourHundredGigE0/0/0/4    0      0        0         0           0          0                0              0

This example displays statistics across all interfaces for watchdog activity for every traffic class, where:

  • Watchdog Events indicates how many times the watchdog module received notifications that excess PFC frames were received.

  • Shutdown Events displays the number of times the PFC watchdog moved the queue to the shutdown state.

  • Auto Restore Events displays the number of times the PFC watchdog set the auto-restore timer.

  • Total Dropped displays the cumulative virtual output queue (VOQ) drops and output queue drops across all watchdog events since you last ran the clear controller priority-flow-control watchdog-stats command to clear the statistics counters.

  • Dropped indicates the cumulative VOQ drops and output queue drops in the most recent watchdog shutdown event.

Disregard the SAR Events and SAR Instantaneous Events counters because they have no operational impact.


You can verify that PFC frames, pause activity, and watchdog operations are visible across all interfaces. This visibility helps confirm overall congestion health.

Supported commands for global statistics for Priority Flow Control and Priority Flow Control watchdog

Use these commands to display or clear global statistics for Priority Flow Control (PFC) and PFC watchdog.

  • View global PFC statistics: Run the show controllers all priority-flow-control statistics location all command. Displays consolidated PFC statistics, including pause-frame counts and flow-control events across all interfaces and NPUs.

  • View global PFC watchdog statistics: Run the show controllers all priority-flow-control watchdog-stats location all command. Displays cumulative PFC watchdog statistics, including watchdog events, shutdowns, and auto-restore counts for each traffic class.

  • Clear PFC statistics: Run the clear controller priority-flow-control statistics command. Clears all PFC global counters and resets the corresponding per-interface counters that the show controllers all priority-flow-control statistics location all command displays.

  • Clear PFC watchdog statistics: Run the clear controller priority-flow-control watchdog-stats command. Clears all PFC watchdog global counters and resets the per-interface watchdog statistics that the show controllers all priority-flow-control watchdog-stats location all command displays.

After running a clear command, all counter values reset to zero and begin incrementing again as new pause or watchdog events occur.

Traffic class queue pause duration

A traffic class queue pause duration is a Priority Flow Control (PFC) congestion monitoring functionality that

  • reports how long per-traffic-class queues remain paused, presented as the percent of time and the absolute duration (in microseconds), for both transmitting (Tx) and receiving (Rx) paths

  • correlates pause activity across adjacent devices (Tx on the receiver versus Rx on the transmitter) to distinguish short-lived bursts from sustained congestion, and

  • supports multi-window sampling, such as one minute and five minutes, and offers per-interface granularity to facilitate troubleshooting and buffer PFC threshold tuning.

Table 3. Feature History Table

  • Enhanced PFC pause duration monitoring (Release 25.2.1): Introduced in this release on: Modular Systems (8800 [LC ASIC: Q200]). This feature improves network congestion analysis by providing granular visibility into Priority Flow Control (PFC) pause durations. It introduces the ability to monitor both transmit (Tx) and receive (Rx) pause durations in microseconds, allowing for precise identification of congestion patterns. Previously, only pause percentages were available, but this enhancement provides a more detailed and accurate understanding of PFC pause events. The feature introduces these changes:

    CLI:

    YANG data models:

  • View Traffic Class Queue Pause Duration (Release 25.1.1): Introduced in this release on: Fixed Systems (8700 [ASIC: K100]) (select variants only*). *This feature is now supported on Cisco 8712-MOD-M routers.

  • View Traffic Class Queue Pause Duration (Release 24.4.1): Introduced in this release on: Fixed Systems (8200 [ASIC: P100], 8700 [ASIC: P100]) (select variants only*); Modular Systems (8800 [LC ASIC: P100]) (select variants only*).

    *This feature is supported on:

      • 8711-32FH-M

      • 8212-48FH-M

      • 88-LC1-36EH

      • 88-LC1-12TH24FH-E

      • 88-LC1-52Y8H-EM

  • View Traffic Class Queue Pause Duration (Release 24.2.11): Introduced in this release on Cisco 8000 Series Routers with Cisco Silicon One Q200 network processors that support the PFC buffer-extended mode function. For traffic flows between routers, you can view the pause duration of output and input queues in the transmitting and receiving routers, respectively. The pause duration values of the impacted traffic class queues are displayed at regular intervals within a specified time duration. With this information, you can view the extent of congestion on PFC-enabled interfaces over a period of time and identify whether traffic congestion is due to small bursts of traffic or other causes. The feature introduces these changes:

    CLI:

    YANG data models:

      • Cisco-IOS-XR-platforms-ofa-oper (see GitHub, YANG Data Models Navigator)

How traffic class queue pause duration works

In PFC-enabled networks, routers exchange PAUSE and NO-PAUSE (X-on) frames to manage congestion without packet drops. When queue occupancy crosses the configured pause-threshold, the receiver issues PAUSE frames upstream. When congestion clears, it sends X-on frames.

The router records how long each queue remains paused during each sampling window and reports this as a pause percentage or pause duration.
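The pause percentage is simply the paused time divided by the length of the sampling window. A minimal sketch of the arithmetic, assuming microsecond counters (the function name is illustrative, not a device API):

```python
# Illustrative calculation only; not device code.
# Both inputs are in microseconds.

def pause_percent(paused_us, window_us):
    """Percent of the sampling window during which the queue stayed paused."""
    return 100.0 * paused_us / window_us

# A queue paused for 30 s of a one-minute (60 s) window:
print(pause_percent(30_000_000, 60_000_000))  # 50.0
```

A value that stays at or above roughly 70 percent across windows points to sustained congestion, while lower, intermittent values point to short bursts.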

Summary

This summary refers to Figure 1 to clarify the workflow.

  • Transmitting router (R2): Sends traffic and pauses output queues when PAUSE frames are received.

  • Receiving router (R3): Detects congestion, issues PAUSE or X-on frames, and measures pause duration.

  • Upstream router (R1): Feeds traffic into R2 and can receive propagated PAUSE frames.

  • Downstream router (R4): Receives traffic forwarded from R3.

These components work together to regulate traffic flow and record how long queues remain paused.

Workflow

Figure 1. Traffic class queue pause duration scenario

These stages describe how traffic class queue pause duration works.

  1. R1 transmits: R1 sends packets to R2 at a line rate of 400 Gbps.
  2. R2 transmits normally: R2 forwards 400 G of traffic toward R3 across two 200 G PFC-enabled ECMP links.
  3. R3 queues begin to fill: R3 forwards 399 G to R4 and buffers the remaining 1 G in input queues. Mild congestion starts but remains below the pause-threshold.
  4. R3 crosses the pause-threshold: R3 detects the threshold breach and sends PFC PAUSE frames to R2. R2 halts transmission and buffers packets in High Bandwidth Memory (HBM).
  5. Pause propagates upstream: R2 sends PAUSE frames further upstream to R1. Traffic slows from R1 to R2 to R3, and queues stabilize.
  6. R3 drains queues and resumes: As congestion clears, R3 sends X-on frames to R2. Transmission resumes at line rate.
  7. R2 releases backlog: R2 transmits its buffered packets at full 400 G, causing a brief traffic burst. R3’s queues refill temporarily, triggering another PAUSE.
  8. Pause cycles repeat: R2 and R3 alternate between PAUSE and X-on states. For example, both routers record approximately 50 percent pause-percentage, which indicates short, periodic congestion.
  9. Administrator monitors pause duration: Run show controllers npu packet-memory interface <interface> to display pause statistics for each traffic class and interval. High pause percentages (greater than or equal to 70 percent) suggest sustained congestion; lower values indicate normal operation.

Result

This process quantifies queue pause behavior in terms of time percentages or durations. This helps you distinguish transient bursts from persistent congestion and tune thresholds.

Best practices for using traffic class queue pause duration

Use these guidelines to interpret and validate traffic class queue pause-duration data accurately.

  • Select sampling intervals appropriately:

    Select a short observation window (30 seconds to 1 minute) to detect micro-bursts or transient queue spikes.

    Select a longer observation window (5 minutes) to capture congestion trends and recurring pause patterns over time.

  • Avoid over-sampling with the detail option: Avoid using the detail option in automation scripts or telemetry integrations because it generates a large volume of records.

  • Verify Priority Flow Control (PFC) configuration: Ensure that PFC buffer-extended mode is enabled on the transmitting and receiving router nodes.

  • PFC mode support for viewing pause duration in microseconds: Use buffer-internal or buffer-extended mode to view pause duration in microseconds.

View traffic class queue pause duration

Use the show controllers npu packet-memory interface command to view pause statistics per interface and traffic class to determine how long queues were paused, as a percentage or in microseconds, for selected intervals.

Before you begin

Enable PFC buffer-extended mode on the transmitting and receiving router nodes.

Follow these steps to view queue pause duration on a PFC-enabled router.

Procedure


Step 1

Select Tx or Rx view.

  • rx-pause-percent—Specifies the input queues on the receiving router.

  • tx-pause-percent—Specifies the output queues on the transmitting router.

Step 2

(Optional) Choose observation window or detail level.

  • one-minute—Specifies the average pause duration of the queues, in percentage, for the last minute.

  • five-minute—Specifies the average pause duration of the queues, in percentage, for the last five minutes.

  • detail—Displays a maximum of 120 records at a frequency of one record per 250-millisecond interval.

  • Default—Displays 30 records for each traffic class at a frequency of one record per second.

  • verbose—Displays the time stamp in raw mode.

Step 3

Run the show command.

Router#show controllers npu packet-memory interface <intf> tx-pause-percent <observation window in time or detail> location <node>

OR

Router#show controllers npu packet-memory interface <intf> rx-pause-percent <observation window in time or detail> location <node>

  • View the average pause duration for output queues for a specific observation window.
    Router#show controllers npu packet-memory interface all tx-pause-percent one-minute location 0/6/CPU0 
    
    -------------------------------------------------------------------------
    Node ID: 0/6/CPU0
    Source Queue Pause Percentage Info for interface(s) all
    Intf               TC          Pause-Percentage 
    name                                            
    -------------------------------------------------------------------------
      FH0/6/0/10       2           0.00000 
      FH0/6/0/11       2           0.00000 
      FH0/6/0/13       2           0.00000 
      FH0/6/0/14       2           0.00000 
      FH0/6/0/15       2           0.00000 
      FH0/6/0/16       2           0.00000 
      FH0/6/0/18       2           0.00000 
      FH0/6/0/21       2          53.01604 
      FH0/6/0/22       2           0.00000 
      FH0/6/0/23       2          53.13991
  • View the average pause duration of input queues for a specific observation window.
    Router#show controllers npu packet-memory interface all rx-pause-percent one-minute location 0/6/CPU0 
    -------------------------------------------------------------------------
    Node ID: 0/6/CPU0
    Out Queue Pause Percentage Info for interface(s) all
    Intf               TC          Pause-Percentage 
    name                                            
    -------------------------------------------------------------------------
      FH0/6/0/10       2           0.00000 
      FH0/6/0/11       2           0.00000 
      FH0/6/0/13       2           0.00000 
      FH0/6/0/14       2           0.00000 
      FH0/6/0/15       2           0.00000 
      FH0/6/0/16       2           0.00000 
      FH0/6/0/18       2           0.00000 
      FH0/6/0/21       2          53.01604 
      FH0/6/0/22       2           0.00000 
      FH0/6/0/23       2          53.13991 
    
  • View pause duration for Tx pause frames in microseconds.
    Router#show controllers npu packet-memory interface all tx-pause-duration location 0/2/CPU0 
    
    -------------------------------------------------------------------------
    Node ID: 0/2/CPU0
    Out Queue Pause Duration Info for interface(s) all
    
    Intf               TC          Pause-Duration 
    name                                            
    -------------------------------------------------------------------------
       FH0/2/0/0       3                 0 
       FH0/2/0/0       4                 0 
      FH0/2/0/35       3                 0 
      FH0/2/0/35       4                 0 
      FH0/2/0/34       3                 0 
      FH0/2/0/34       4                 0 
      FH0/2/0/33       3                 0 
      FH0/2/0/33       4                 0 
      FH0/2/0/31       3       33282645110 
      FH0/2/0/31       4       31821853078 
      FH0/2/0/30       3       33304272074 
      FH0/2/0/30       4       31835301237 
      FH0/2/0/29       3                 0 
      FH0/2/0/29       4                 0 
      FH0/2/0/28       3       33227075106 
      FH0/2/0/28       4       31802440395 
      FH0/2/0/27       3       33241333365 
      FH0/2/0/27       4       31740514268 
      FH0/2/0/26       3       33256748403 
      FH0/2/0/26       4       32711889077 
      FH0/2/0/25       3       33344738029 
      FH0/2/0/25       4       31827152759 
      FH0/2/0/24       3       33407963260 
      FH0/2/0/24       4       32542402762 
       FH0/2/0/1       3                 0 
       FH0/2/0/1       4                 0 
       FH0/2/0/2       3                 0 
       FH0/2/0/2       4                 0 
       FH0/2/0/3       3                 0 
       FH0/2/0/3       4                 0 
       FH0/2/0/4       3                 0 
       FH0/2/0/4       4                 0 
       FH0/2/0/5       3                 0 
       FH0/2/0/5       4                 0 
       FH0/2/0/6       3                 0 
       FH0/2/0/6       4                 0 
       FH0/2/0/7       3                 0 
       FH0/2/0/7       4                 0 
       FH0/2/0/8       3                 0 
       FH0/2/0/8       4                 0 
       FH0/2/0/9       3                 0 
       FH0/2/0/9       4                 0 
      FH0/2/0/10       3                 0 
      FH0/2/0/10       4                 0 
      FH0/2/0/11       3                 0 
      FH0/2/0/11       4                 0
  • View pause duration for Rx pause frames in microseconds.
    Router#show controllers npu packet-memory interface all rx-pause-duration location 0/2/CPU0 
    Fri Feb  7 05:02:28.551 UTC
    
    -------------------------------------------------------------------------
    Node ID: 0/2/CPU0
    Source Queue Pause Duration Info for interface(s) all
    
    Intf               TC          Pause-Duration 
    name                                            
    -------------------------------------------------------------------------
       FH0/2/0/0       3          66113289 
       FH0/2/0/0       4       20373867793 
      FH0/2/0/35       3                 0 
      FH0/2/0/35       4                 0 
      FH0/2/0/34       3                 0 
      FH0/2/0/34       4                 0 
      FH0/2/0/33       3                 0 
      FH0/2/0/33       4                 0 
      FH0/2/0/31       3                 0 
      FH0/2/0/31       4                 0 
      FH0/2/0/30       3                 0 
      FH0/2/0/30       4                 0 
      FH0/2/0/29       3                 0 
      FH0/2/0/29       4                 0 
      FH0/2/0/28       3                 0 
      FH0/2/0/28       4                 0 
      FH0/2/0/27       3                 0 
      FH0/2/0/27       4                 0 
      FH0/2/0/26       3                 0 
      FH0/2/0/26       4                 0 
      FH0/2/0/25       3                 0 
      FH0/2/0/25       4                 0 
      FH0/2/0/24       3                 0 
      FH0/2/0/24       4                 0 
       FH0/2/0/1       3          34701848 
       FH0/2/0/1       4       19824055492 
       FH0/2/0/2       3           5594691 
       FH0/2/0/2       4       19013361290 
       FH0/2/0/3       3          19015180 
       FH0/2/0/3       4       19924428067 
       FH0/2/0/4       3          27727815 
       FH0/2/0/4       4       19841945429 
       FH0/2/0/5       3                 0 
       FH0/2/0/5       4                 0 
       FH0/2/0/6       3          29833476 
       FH0/2/0/6       4       19835847699 
       FH0/2/0/7       3          29578038 
       FH0/2/0/7       4       19837595706 
       FH0/2/0/8       3                 0 
       FH0/2/0/8       4                 0 
       FH0/2/0/9       3                 0 
       FH0/2/0/9       4                 0 
      FH0/2/0/10       3                 0 
      FH0/2/0/10       4                 0 
      FH0/2/0/11       3                 0 
      FH0/2/0/11       4                 0