Traffic Monitoring

Monitoring traffic is essential for a router to function smoothly. Traffic monitoring gives you counts of the packets entering and leaving the router.

This chapter describes how to monitor packet drops as part of the troubleshooting process.

In routing, information is passed around in the form of packets, which carry fixed-size units of data. Sometimes, due to network congestion or outdated software or hardware, packets fail to reach their destination. You can identify packet loss as incomplete or missing information.

Traffic Statistics with Packet Drop Location

Table 1. Feature History Table

Feature Name

Release

Description

Traffic Statistics with Packet Drop Location

Release 25.1.1

Introduced in this release on: Fixed Systems (8010 [ASIC: A100])(select variants only*)

*This feature is now supported on Cisco 8011-4G24Y4H-I routers.

Traffic Statistics with Packet Drop Location

Release 24.4.1

Introduced in this release on: Fixed Systems (8200 [ASIC: P100], 8700 [ASIC: P100, K100])(select variants only*); Modular Systems (8800 [LC ASIC: P100])(select variants only*)

*This feature is supported on:

  • 8212-48FH-M

  • 8711-32FH-M

  • 8712-MOD-M

  • 88-LC1-36EH

  • 88-LC1-12TH24FH-E

  • 88-LC1-52Y8H-EM

Traffic Statistics with Packet Drop Location

Release 24.2.11

Introduced in this release on: Fixed Systems (8200); Centralized Systems (8600); Modular Systems (8800 [LC ASIC: Q100, Q200])

This feature saves debugging time by automatically detecting nonzero traffic drops through commands that run in the background, and by reporting the exact location of the packet drop.

In earlier releases, you used multiple show commands with their respective locations to detect packet drops.

This feature introduces the show drops all command.

Earlier, finding the exact location of a packet drop was a long and tedious process because there are multiple node locations. You had to execute show commands for each location to detect where the packet was dropped.

Starting with Cisco IOS XR Software Release 24.2.11, finding a packet drop location is quick and easy. Use the show drops all command to find the exact packet drop location. This command shows all nonzero traffic drops in the node in one place, automatically running the required IOS XR debug commands in the background and removing insignificant information from the output.

The outputs of the following commands are integrated into the show drops all command:

  • show arp traffic

  • show controllers npu stats traps-all instance instance location location

  • show controllers npu stats voq ingress interface interface name instance all location location

  • show cef drops location location

  • show lpts pifib hardware police location location

  • show spp node-counters location location | inc drop

  • show controllers npu stats counters-drop instance instance location location
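As an illustration only, the aggregated output is plain text made of "name : value" counter lines; the short sketch below (the sample text and parsing logic are assumptions for demonstration, not part of the feature) shows how a collection script could pull out the nonzero counters that the command surfaces:

```python
import re

# Hypothetical sample modeled on the "show drops all" output in this chapter.
sample = """
MODULE cef
No route drops       : 18
MODULE spp
      Local Linux Packet drops:             683
         Total Socket RX Drops:             683
"""

def nonzero_drops(text):
    """Return (counter-name, value) pairs for nonzero drop counters."""
    drops = []
    for line in text.splitlines():
        # Match "  <counter name> : <integer>" lines and skip zero values.
        m = re.match(r"\s*(.+?)\s*:\s*(\d+)\s*$", line)
        if m and int(m.group(2)) > 0:
            drops.append((m.group(1), int(m.group(2))))
    return drops

print(nonzero_drops(sample))
```

A script like this could run against the saved output of each node to build a single report of all nonzero drops.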

Monitor Packet Drops

Use show drops all ongoing location all to see ongoing drops on the system. The output shows ongoing drops since the last time the command was executed.

Router# show drops all ongoing location all

-----------------------------------------------
Printing Drop Counters for node 0/RP0/CPU0
-----------------------------------------------

-------------------
MODULE cef
-------------------

cef ipv4 drops
------------------
No route drops       : 18

-------------------
MODULE spp
-------------------

      Local Linux Packet drops:             683
         Total Socket RX Drops:             683

-----------------------------------------------
Printing Drop Counters for node 0/0/CPU0
-----------------------------------------------

-------------------
MODULE npu
-------------------

Trap Type                                     NPU  Trap  Punt       Punt  Punt  Punt Configured Hardware   Policer Avg-Pkt Packets              Packets
                                              ID   ID    Dest       VoQ   VLAN  TC   Rate(pps)  Rate(pps)  Level   Size    Accepted             Dropped
====================================================================================================================================================================

INJECT_UP_L3_LOOKUP_FAIL                      0    109  BOTH_RP_CPU 310   1538  6    5000       4878       IFG     1520    18                   0

-------------------
MODULE spp
-------------------

      Local Linux Packet drops:              86
         Total Socket RX Drops:             104
   Drop wrong Mcast pkt on SIM:              18

Use show drops all location all to see the drops that have occurred on all locations or nodes in the system.

Router# show drops all location all

-----------------------------------------------
Printing Drop Counters for node 0/RP0/CPU0
-----------------------------------------------

-------------------
MODULE cef
-------------------

cef ipv4 drops
------------------
No route drops       : 18

-------------------
MODULE spp
-------------------

      Local Linux Packet drops:             683
         Total Socket RX Drops:             683

-----------------------------------------------
Printing Drop Counters for node 0/0/CPU0
-----------------------------------------------

-------------------
MODULE npu
-------------------

Trap Type                                     NPU  Trap  Punt       Punt  Punt  Punt Configured Hardware   Policer Avg-Pkt Packets              Packets
                                              ID   ID    Dest       VoQ   VLAN  TC   Rate(pps)  Rate(pps)  Level   Size    Accepted             Dropped
====================================================================================================================================================================

INJECT_UP_L3_LOOKUP_FAIL                      0    109  BOTH_RP_CPU 310   1538  6    5000       4878       IFG     1520    18                   0

-------------------
MODULE spp
-------------------

      Local Linux Packet drops:              86
         Total Socket RX Drops:             104
   Drop wrong Mcast pkt on SIM:              18

Aggregated drop counters for software packet path node counters

The Software Packet Path (SPP) handles the movement and management of packets in software across network devices.

Table 2. Feature History Table

Feature Name

Release

Description

Aggregated drop counters for software packet path node counters

Release 25.4.1

Introduced in this release on: Fixed Systems (8200 [ASIC: Q200, P100], 8700 [ASIC: P100, K100], 8010 [ASIC: A100]); Centralized Systems (8600 [ASIC: Q200]); Modular Systems (8800 [LC ASIC: Q100, Q200, P100])

You can use the show spp node-counters drops command to view aggregated drop counters for the SPP. This command displays a summary of dropped packets at the node level, helping you identify and troubleshoot drop issues efficiently.

The feature introduces these changes:

Modified Command:

CLI

The show spp node-counters command was modified to introduce the drops keyword.

The show spp node-counters command displays node counters for the SPP. These counters include forwarded packets, injected packets, and dropped packets for different nodes and clients.

From Release 25.4.1, the show spp node-counters command includes the drops keyword. You can use this keyword to filter the output and display aggregated dropped packets for each node in the SPP.

Enter the show spp node-counters drops command to view aggregated drop counters for the SPP.

Router# show spp node-counters drops
0/0/CPU0:
socket/rx
aggr-drop-counter: 371
-------------------------------
socket/tx
aggr-drop-counter: 372

You can also collect aggregated drop counter statistics programmatically using the Cisco-IOS-XR-spp-oper.yang data model.

Sample NETCONF get request:

<rpc message-id="9a7097c8-f1b4-44d3-9e42-38158dd33b41" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get>
    <filter>
      <spp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-spp-oper">
        <nodes>
          <node>
            <node-name>0/RP0/CPU0</node-name>
            <spp-node-drop-counters>
              <spp-node-drop-counter>
              </spp-node-drop-counter>
            </spp-node-drop-counters>
          </node>
        </nodes>
      </spp>
    </filter>
  </get>
</rpc>

Sample NETCONF get response:

<?xml version="1.0"?>
<rpc-reply message-id="9a7097c8-f1b4-44d3-9e42-38158dd33b41" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <data>
    <spp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-spp-oper">
      <nodes>
        <node>
          <node-name>0/RP0/CPU0</node-name>
          <spp-node-drop-counters>
            <spp-node-drop-counter>
              <spp-node-entry>0</spp-node-entry>
              <spp-node-name>pd_span_drop</spp-node-name>
              <aggr-drop-counters>1</aggr-drop-counters>
            </spp-node-drop-counter>
            <spp-node-drop-counter>
              <spp-node-entry>1</spp-node-entry>
              <spp-node-name>socket/rx</spp-node-name>
              <aggr-drop-counters>194</aggr-drop-counters>
            </spp-node-drop-counter>
            <spp-node-drop-counter>
              <spp-node-entry>2</spp-node-entry>
              <spp-node-name>socket/tx</spp-node-name>
              <aggr-drop-counters>3</aggr-drop-counters>
            </spp-node-drop-counter>
            <spp-node-drop-counter>
              <spp-node-entry>3</spp-node-entry>
              <spp-node-name>device/classify</spp-node-name>
              <aggr-drop-counters>1</aggr-drop-counters>
            </spp-node-drop-counter>
          </spp-node-drop-counters>
        </node>
      </nodes>
    </spp>
  </data>
</rpc-reply>

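Once retrieved over a NETCONF session, the reply is ordinary namespaced XML and can be post-processed with standard tooling. This sketch assumes the element names shown in the sample response above and extracts the per-node aggregated counters with Python's xml.etree.ElementTree:

```python
import xml.etree.ElementTree as ET

NS = {"spp": "http://cisco.com/ns/yang/Cisco-IOS-XR-spp-oper"}

# Abbreviated reply modeled on the sample response in this section.
reply = """<spp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-spp-oper">
  <nodes><node>
    <node-name>0/RP0/CPU0</node-name>
    <spp-node-drop-counters>
      <spp-node-drop-counter>
        <spp-node-name>socket/rx</spp-node-name>
        <aggr-drop-counters>194</aggr-drop-counters>
      </spp-node-drop-counter>
      <spp-node-drop-counter>
        <spp-node-name>socket/tx</spp-node-name>
        <aggr-drop-counters>3</aggr-drop-counters>
      </spp-node-drop-counter>
    </spp-node-drop-counters>
  </node></nodes>
</spp>"""

root = ET.fromstring(reply)
# Map each SPP node name to its aggregated drop counter.
counters = {
    c.findtext("spp:spp-node-name", namespaces=NS):
        int(c.findtext("spp:aggr-drop-counters", namespaces=NS))
    for c in root.iter(
        "{http://cisco.com/ns/yang/Cisco-IOS-XR-spp-oper}spp-node-drop-counter")
}
print(counters)  # {'socket/rx': 194, 'socket/tx': 3}
```

The same traversal works on the full reply; only the namespace URI and element names from the data model are required.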

Restrictions for aggregated drop counters for software packet path node counters

  • The Cisco-IOS-XR-spp-oper.yang data model does not support get requests at the spp-node-name level.

System Log Alerts for Packet Loss

Table 3. Feature History Table

Feature Name

Release Information

Description

System Log Alerts for Packet Loss

Release 25.1.1

Introduced in this release on: Fixed Systems (8700 [ASIC: K100], 8010 [ASIC: A100])(select variants only*)

*This feature is supported on:

  • 8712-MOD-M

  • 8011-4G24Y4H-I

System Log Alerts for Packet Loss

Release 24.4.1

Introduced in this release on: Fixed Systems (8200 [ASIC: P100], 8700 [ASIC: P100])(select variants only*); Modular Systems (8800 [LC ASIC: P100])(select variants only*)

*This feature is supported on:

  • 8212-48FH-M

  • 8711-32FH-M

  • 88-LC1-36EH

  • 88-LC1-12TH24FH-E

  • 88-LC1-52Y8H-EM

System Log Alerts for Packet Loss

Release 24.1.1

You can quickly get notified about any traffic-impacting errors within the router's Network Processing Unit (NPU). These notifications are error log messages on the router console for NPU interrupts that affect traffic. To diagnose traffic loss, follow the recommended actions in the log.

Previously, the only way to identify NPU errors that impacted traffic was to run the show asic-error command.

This feature introduces the following changes:

CLI:

This feature introduces the hw-module profile packet-loss-alert command.

YANG Model:

New XPaths for the Cisco-IOS-XR-npu-hw-profile-cfg.yang data model

(see GitHub, YANG Data Models Navigator)

Network packet loss can significantly impact the overall experience of end-users, particularly when using real-time applications such as voice or video. Therefore, network administrators need to address such issues promptly to ensure the integrity of these services.

You can now configure your router to raise immediate alerts in the event of packet loss, and configure the duration of packet loss that triggers the alert. This feature enables network administrators to quickly identify the specific router within the network that is experiencing the problem, facilitating a swift and precise response to rectify the issue and maintain optimal network performance.


Note


Only line cards and routers with the Q100, Q200, P100, or G100 based Silicon One ASIC support this feature.


Configuration Example

Execute the hw-module profile packet-loss-alert command to enable this feature, as shown below:

Router# configure
Router(config)# hw-module profile packet-loss-alert 3Min 
Router(config)# commit

You can configure the duration of packet loss to raise the alert to either 3 minutes or 5 minutes.

Running Configuration

Router# show running-config hw-module profile
hw-module profile packet-loss-alert 3Min

System Log Alert Generated for Packet Loss

When you enable this feature, the router generates a system log message whenever there’s a packet loss for the configured duration. To diagnose the reason for the packet loss, follow the recommended action in the log.

LC/0/3/CPU0:Nov 4 21:12:47.062 UTC: npu_drvr[213]: %FABRIC-NPU_DRVR-3-ASIC_ERROR_TRAFFIC_IMPACT : [10118] : npu[2]: Potential PACKET_LOSS due to error, please check configuration to see if drop is expected; if not, collect showtech fabric link-include and follow the TAC guideline for this message
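If you forward these system logs to a collector, a simple filter keyed on the message mnemonic can surface packet-loss alerts automatically. The sketch below is an assumption for illustration (the regex and the collector context are not part of the feature); it anchors only on the facility/mnemonic string from the sample log above:

```python
import re

# Match the ASIC_ERROR_TRAFFIC_IMPACT mnemonic and capture the NPU number.
# The severity digit and message body can vary, so only the mnemonic is fixed.
ALERT = re.compile(
    r"%FABRIC-NPU_DRVR-\d-ASIC_ERROR_TRAFFIC_IMPACT\s*:.*npu\[(\d+)\]")

line = ("LC/0/3/CPU0:Nov 4 21:12:47.062 UTC: npu_drvr[213]: "
        "%FABRIC-NPU_DRVR-3-ASIC_ERROR_TRAFFIC_IMPACT : [10118] : npu[2]: "
        "Potential PACKET_LOSS due to error")

m = ALERT.search(line)
if m:
    print("packet-loss alert on NPU", m.group(1))  # packet-loss alert on NPU 2
```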

Monitor interface

Table 4. Feature History Table

Feature Name

Release

Description

Monitor interface

Release 25.1.1

Introduced in this release on: Fixed Systems (8010 [ASIC: A100])(select variants only*)

*This feature is now supported on Cisco 8011-4G24Y4H-I routers.

Monitor interface

Release 24.4.1

Introduced in this release on: Fixed Systems (8200 [ASIC: P100], 8700 [ASIC: P100, K100])(select variants only*); Modular Systems (8800 [LC ASIC: P100])(select variants only*)

The filter physical keyword was introduced, along with new columns InDrops and OutDrops in the output, to provide enhanced monitoring capabilities for physical interfaces.

CLI:

*This feature is supported on:

  • 8212-48FH-M

  • 8711-32FH-M

  • 8712-MOD-M

  • 88-LC1-36EH

  • 88-LC1-12TH24FH-E

  • 88-LC1-52Y8H-EM

Monitor interface

Release 7.5.4

The keyword full-name is added, allowing users to display the full names of interfaces, which is particularly useful for interfaces with long or descriptive names.

CLI:

Monitor interface

Release 7.0.12

Introduced in this release on: Fixed Systems (8200); Centralized Systems (8600); Modular Systems (8800 [LC ASIC: Q100, Q200])

Monitor interface is introduced to enable real-time monitoring of interface counters on Cisco routers.

This feature introduces the monitor interface command.

The monitor interface command is used to monitor network interface counters in real-time.

This command provides valuable insights into the performance and status of both physical and virtual interfaces on a router.

By using various arguments, you can customize the output to display

  • detailed statistics,

  • full interface names, and

  • only physical interfaces.

The monitor interface command is essential for network administrators to diagnose and troubleshoot interface-related issues effectively.

Enhancing FIB hardware programming failure recovery in non-OOR scenarios

Forwarding Information Base (FIB) hardware programming failure recovery in Cisco network devices refers to the process by which the router

  • detects and responds to failures encountered when updating or programming entries in the hardware FIB

  • ensures the integrity and reliability of the packet forwarding process by attempting corrective actions when hardware programming fails, and

  • minimizes disruption to data traffic by systematically managing and recovering from such failures.

Table 5. Feature History Table

Feature Name

Release Information

Description

Enhancing FIB hardware programming failure recovery in non-OOR scenarios

Release 25.3.1

Introduced in this release on: Cisco 8000 with ASIC Q100, Q200, P100, K100, and A100: Fixed Systems (8200 [ASIC: Q100, Q200, P100], 8700 [ASIC: P100, K100], 8010 [ASIC: A100]); Centralized Systems (8600 [ASIC: Q200]); Modular Systems (8800 [LC ASIC: Q100, Q200, P100])

FIB hardware programming failure recovery enhancement increases network stability and reduces performance impact during hardware programming issues in non-OOR situations by taking these actions:

  • Reduces churn by limiting recovery attempts to two, instead of retrying every 15 seconds.

  • Removes the errored object from the forwarding tree.

  • Displays syslog messages with details about the hardware failure and its cause.

  • Attempts recovery by deleting and then re-creating the affected hardware programming entry.

The enhancement to FIB’s hardware programming failure recovery reduces churn by limiting recovery attempts, which eases traffic congestion. It also provides detailed syslog messages for debugging purposes.

Before Release 25.3.1, the router attempted to recover from hardware programming failures every 15 seconds, leading to excessive churn.

From Release 25.3.1, the recovery process is limited to two attempts:

  • First attempt:

    • The router triggers the first recovery attempt 15 seconds after it detects the failure.

    • It removes the errored object from the forwarding tree.

    • It deletes and re-creates the hardware programming entry.

    • It generates a syslog message detailing the failure and the recovery attempt.

  • Second attempt:

    • The router triggers the second recovery attempt at the third minute if the first attempt fails.

    • It performs another deletion and re-creation of the entry.

    • It generates another syslog message.

If both attempts fail, the router enters an unrecoverable state for the affected entry and remains in this state until routing protocols remove the errored entry from the FIB.

After each recovery attempt, the Cisco IOS XR software displays a syslog message detailing the hardware failure:

  • First attempt: The router retries to recover from the failure and displays the following system log message:

    %ROUTING-3-PLATF_UPD_FAIL: FIB platform update failed:
    Obj=FIB_DATA_TYPE_LEAF[ptr=0x22336399dc28,refc=0x2,flags=0x1000003]
    Action=FAIL-RETRY Proto=ipv4. Cerr='FIB' detected the 'try again' condition
    'Temporary failure. Try again later'
  • Second attempt: The router stops retrying and displays the following system log message:

    %ROUTING-3-PLATF_UPD_FAIL: FIB platform update failed:
    Obj=FIB_DATA_TYPE_LEAF[ptr=0x22336399dc28,refc=0x2,flags=0x1000003]
    Action=REPEATED-FAIL-STOP-RETRY Proto=ipv4. Cerr='FIB' detected the 'try again' condition
    'Temporary failure. Try again later'
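The two-attempt schedule described above can be sketched as a simple decision function. The timings and action strings mirror the text of this section; the function itself is illustrative and does not reflect actual IOS XR internals:

```python
# Illustrative sketch of the two-attempt FIB recovery schedule.
def recovery_action(seconds_since_failure, attempts_made):
    """Return the action this section describes for a failed FIB entry."""
    if attempts_made == 0 and seconds_since_failure >= 15:
        # First attempt: 15 seconds after the failure is detected.
        return "retry-1: remove from forwarding tree, delete and re-create entry"
    if attempts_made == 1 and seconds_since_failure >= 180:
        # Second attempt: at the third minute, if the first attempt failed.
        return "retry-2: delete and re-create entry"
    if attempts_made >= 2:
        # Both attempts failed: wait for routing protocols to remove the entry.
        return "unrecoverable until routing protocols remove the entry"
    return "wait"

print(recovery_action(15, 0))
print(recovery_action(180, 1))
print(recovery_action(999, 2))
```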

Traffic class latency histogram

The Transmit Packet Processor (TXPP) is a specialized hardware component in Cisco network devices that

  • processes and manages the transmission of packets through the Network Processing Unit (NPU)

  • calculates, updates, and maintains key packet metrics such as latency and transmission statistics, and

  • enables precise monitoring and analysis of traffic behavior for various Traffic Classes (TCs).

Table 6. Feature History Table

Feature Name

Release Information

Description

Traffic class latency histogram

Release 25.3.1

Introduced in this release on: Fixed Systems (8200 [ASIC: Q200]); Centralized Systems (8600 [ASIC: Q200]); Modular Systems (8800 [LC ASIC: Q200])

You can now monitor packet delays between ingress and egress on specific ports using a new visual representation featuring latency histograms.

These histograms provide detailed insights into packet delays and jitter, enabling you to identify bottlenecks, optimize traffic flows, and enhance network efficiency.

The feature offers latency analysis for each TC across multiple levels, including Network Processing Unit (NPU), slice, Inter-Frame Gap (IFG), and specific TCs.

CLI:

This feature introduces the following commands:

YANG Model:

  • Cisco-IOS-XR-8000-platforms-npu-latency-histogram-oper.yang

  • Cisco-IOS-XR-8000-platforms-npu-latency-histogram-oper-sub1.yang

  • Cisco-IOS-XR-8000-platforms-npu-latency-histogram-act.yang

(see GitHub, YANG Data Models Navigator)

Within the NPU architecture, TXPP provides essential functions such as

  • measuring packet delays between ingress and egress points to support latency histograms

  • enabling precise analysis of latency distribution for each TC across multiple levels, including NPU, slice, IFG, and specific TCs

  • updating real-time metrics for network performance monitoring, and

  • integrating with other monitoring tools to facilitate efficient troubleshooting and performance optimization.

From Release 25.3.1, the TC latency histogram is enabled by default and provides detailed insights into latency distribution and network performance.

The Cisco IOS XR software collects aggregated packet latency every 30 seconds and exports 30 records every 30 seconds, ensuring timely and detailed telemetry.

You can view the cumulative packet count for each TC since either system bootup or the last clear using the show controllers npu packet-latency instance command.

You can manually clear the cumulative packet count for each TC since either system bootup or the last clear using the clear controllers npu packet-latency instance command.

Limitations for traffic class latency histogram

These are the limitations of the traffic class latency histogram:

  • The Cisco IOS XR Software does not support SNMP-based monitoring.

  • You must manually clear histogram data using the clear controllers npu packet-latency instance command, as the system does not automatically reset data over time.

How traffic class latency histogram works

Workflow

  1. TXPP calculates the per-packet latency, which is the difference between the ingress and egress timestamps, and categorizes the latency into one of the eight histogram bins based on the configured range.

  2. The Cisco IOS XR Software aggregates packet latency every 30 seconds and exports 30 records every 30 seconds to collect data for all enabled NPUs, slices, IFGs, and TCs.

  3. When you run the show controllers npu packet-latency instance command, the Cisco IOS XR Software displays the TC latency histogram data at the desired granularity.
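The binning in step 1 can be sketched as a search over bin upper bounds. The edge values below are taken from the sample histogram output later in this section; treating them as inclusive upper bounds is an assumption for illustration:

```python
from bisect import bisect_left

# Bin upper bounds in nanoseconds, from the sample histogram output in this
# section; the actual ranges are platform-configured.
BIN_EDGES = [5000, 6000, 7000, 8000, 9000, 10000, 15000, 4000000000]

def bin_index(latency_ns):
    """Index of the first bin whose upper bound covers the latency."""
    return bisect_left(BIN_EDGES, latency_ns)

# Categorize a few hypothetical (ingress, egress) timestamp pairs.
counts = [0] * (len(BIN_EDGES) + 1)  # last slot catches overflow
for ingress_ts, egress_ts in [(100, 5600), (0, 6400), (50, 7050)]:
    counts[bin_index(egress_ts - ingress_ts)] += 1

print(counts)
```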

View Traffic class latency histogram

Perform these steps to view or clear the cumulative packet count in the latency histogram bin:

Procedure


Step 1

Run the show controllers npu packet-latency instance command to view the cumulative packet count in the latency histogram bin for all TCs or each TC since either system bootup or the last clear.

Example:

You can view the cumulative packet count in the latency histogram bin with granularity for slice, IFG, location, and traffic class.

The following output displays cumulative packet count in the latency histogram bin for all TCs on slice 2, IFG 1, and location 0/2/CPU0:

Router# show controllers npu packet-latency instance 0 slice 2 ifg 1 location 0/2/CPU0 
 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Packet Latency Histogram Bin Range In nanosecond: Node ID: 0/2/CPU0
Time             NPU  Slice IFG TC        5000        6000        7000        8000        9000        10000       15000    4000000000  Max_delay
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
14:43:50.043 EDT   0   2    1    0            0           0           0           0           0           0           0           0           0
14:43:50.043 EDT   0   2    1    1            0           0           0           0           0           0           0           0           0
14:43:50.043 EDT   0   2    1    2            0  1627324790      387009           0           0           0           0           0        6400
14:43:50.043 EDT   0   2    1    3            0           0           0           0           0           0           0           0           0
14:43:50.043 EDT   0   2    1    4            0           0           0           0           0           0           0           0           0
14:43:50.043 EDT   0   2    1    5            0           0           0           0           0           0           0           0           0
14:43:50.043 EDT   0   2    1    6            0   102965743    28788249       10708           0           0           0           0        7936
14:43:50.043 EDT   0   2    1    7        38680          15           1           1           0           0           0           0        7680

Step 2

Run the clear controllers npu packet-latency instance command to manually clear the cumulative packet count in the latency histogram bin for all TCs or each TC since either system bootup or the last clear.

Example:

The following example manually clears the cumulative packet count for all TCs on slice 2, IFG 1, and location 0/2/CPU0 since either system bootup or the last clear:

Router# clear controllers npu packet-latency instance 0 slice 2 ifg 1 location 0/2/CPU0

Port rate histogram

A port rate histogram is a network analytics tool that:

  • divides port traffic rates into discrete bins representing specific utilization thresholds,

  • displays the frequency of port traffic for each threshold over defined time intervals, and

  • enables granular monitoring, analysis, and troubleshooting of network port utilization.

Table 7. Feature History Table

Feature Name

Release Information

Feature Description

Port rate histogram

Release 25.4.1

Introduced in this release on: Fixed Systems (8200 [ASIC: Q200, P100])(select variants only*); Centralized Systems (8600 [ASIC: Q200])(select variants only*); Modular Systems (8800 [LC ASIC: Q200, P100])(select variants only*)

Port rate histogram enables rapid detection of network performance issues by providing detailed, real-time port utilization analysis.

The feature displays port traffic rates in a histogram, breaking down data into specific time intervals and utilization bins. Data is collected and displayed using the CLI, allowing you to monitor and compare all network ports on supported line cards, and detect anomalies and performance issues efficiently.

This feature is supported on:

  • 8202-32FH-M

  • 8608-SYS

  • 88-LC0-36FH

  • 88-LC0-36FH-M

  • 88-LC0-34H14FH

  • 88-LC1-36EH

  • 88-LC1-12TH24FH-E

  • 88-LC1-52Y8H-EM

The feature introduces these changes:

CLI

The port rate histogram view provides a detailed display of port rate utilization. This view helps monitor traffic usage on high-speed network ports and analyze it at a granular level. The histogram breaks down data into specific time intervals, allowing for fine-grained observation and in-depth analysis.

Benefits of port rate histogram

The port rate histogram offers several benefits.

  • Microburst monitoring: It provides microburst monitoring capabilities and identifies rapid and short-duration spikes in traffic.

  • Congestion visibility: It offers a better histogram view of bursts and congestion in the network.

  • Identification of low port utilization: It helps identify underutilized ports.

  • Load balancing improvement: It improves load balancing by checking port utilization at a granular level for bundle members and Equal-Cost Multi-Path (ECMP) members. The histogram identifies congestion that may require load balancing or proper traffic management.

  • Efficient data handling: The histogram holds up to one snapshot per port with different bin thresholds. The system updates the local data structure every 30 seconds.

Configuration guidelines for port rate histogram

Consider these guidelines and best practices for configuring port rate histogram:

  • The system automatically clears histogram data when you remove or swap a port.

  • Histogram data is not polled if a port’s link is down.

  • Ensure that you clear histogram data when the router or the line card is reloaded.

Restrictions for port rate histogram

These restrictions apply to port rate histogram:

  • Hardware limitation: The port rate histogram is supported only on Q200 and P100 based ASICs and is not supported on Q100, A100, and K100 based ASICs.

  • The histogram does not support virtual interfaces.

  • Process restart is not supported.

  • Fabric card is not supported.

  • Due to memory limitations, the system maintains only a single snapshot per port for both transmit (Tx) and receive (Rx) counters.

Configuring port rate histogram

Use this task to enable and monitor port rate histogram data to analyze granular port utilization and traffic patterns on supported routers and line cards.

Procedure


Step 1

Enable the port rate histogram feature for the desired ports or line cards by using this command:

hw-module profile port-rate-histogram location node-id enable

Example:

Router# hw-module profile port-rate-histogram location 0/7/CPU0 enable

Step 2

View detailed port rate histogram data for specific interfaces or port ranges.

Use the show command to display average samples and bin data:

show controllers npu port-rate-histogram avg-samples location node-id

Example:

Router# show controllers npu port-rate-histogram avg-samples location 0/0/CPU0

Sample Output:
Tue Nov 25 16:50:20.173 EST
==============================================
Port rate Histogram
==============================================

HW Polling Period     : 1  Seconds 
HW Snapshot Interval  : 32 Microseconds 
Description:
==============================================
Cumulative  : Accumulated samples since the port is up
Delta       : Delta samples between the cli's
avg-samples    - aggregation of samples from various time window
avg-percentage - aggregation of samples percentage from various time window
+=====================================================================================================================================================================+
Interface     Directions  Durations    5<-35%           35-<60%         60-<80%         80-<90%         90-<95%         95-<99%         >99%
+=====================================================================================================================================================================+
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
last Rx snapshot timestamp:1764107377549798104 [Local time: 2025-11-25 16:49:37]
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
0/0/0/0         Rx         30sec          0               0               0               0               0               0               0               
0/0/0/0         Rx         5mins          0               0               0               0               0               0               0               
0/0/0/0         Rx         1hour          0               0               0               0               0               0               0               
0/0/0/0         Rx         24hours        0               0               0               0               0               0               0               
0/0/0/0         Rx         Delta          0               0               0               0               0               0               0               
0/0/0/0         Rx         Cumm           0               0               0               0               0               0               0               
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
last Tx snapshot timestamp:1764107377549798104 [Local time: 2025-11-25 16:49:37]
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
0/0/0/0         Tx         30sec          0               0               0               0               0               0               0               
0/0/0/0         Tx         5mins          0               0               0               0               0               0               0               
0/0/0/0         Tx         1hour          0               0               0               0               0               0               0               
0/0/0/0         Tx         24hours        0               0               0               0               0               0               0               
0/0/0/0         Tx         Delta          0               0               0               0               0               0               0               
0/0/0/0         Tx         Cumm           0               0               0               0               0               0               0               
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
last Rx snapshot timestamp:1764107377549782105 [Local time: 2025-11-25 16:49:37]
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
0/0/0/1         Rx         30sec          0               0               0               0               0               0               0               
0/0/0/1         Rx         5mins          0               0               0               0               0               0               0               
0/0/0/1         Rx         1hour          0               0               0               0               0               0               0               
0/0/0/1         Rx         24hours        0               0               0               0               0               0               0               
0/0/0/1         Rx         Delta          0               0               0               0               0               0               0               
0/0/0/1         Rx         Cumm           0               0               0               0               0               0               0               
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------+
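The utilization columns in the output above can be read as half-open percentage bins. The classifier below is an assumption for illustration (the exact bin-membership semantics are not specified here); it maps a utilization sample to the column it would be counted in:

```python
# Utilization bins from the histogram header above (percent of port rate).
# Bin-membership logic is an assumption for illustration.
BINS = [(5, 35), (35, 60), (60, 80), (80, 90), (90, 95), (95, 99), (99, 101)]
LABELS = ["5-<35%", "35-<60%", "60-<80%", "80-<90%", "90-<95%", "95-<99%", ">99%"]

def classify(utilization_pct):
    """Return the histogram column a utilization sample lands in, or None."""
    for (lo, hi), label in zip(BINS, LABELS):
        if lo <= utilization_pct < hi:
            return label
    return None  # samples below 5% are not counted in any column

print(classify(42))    # 35-<60%
print(classify(99.5))  # >99%
print(classify(2))     # None
```

Reading a row of the table, each duration (30sec, 5mins, 1hour, 24hours) aggregates how many hardware snapshots fell into each of these columns.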