
Borderless Networks Security - Cisco Catalyst 6500 Series Control Plane Protection Techniques for Maximum Uptime White Paper


What You Will Learn

The goal of this white paper is to help network design engineers and network operators understand the tools and techniques available to protect Cisco® Catalyst® 6500 Series systems from abnormal control plane traffic events. Without the use of these mechanisms, abnormal control-plane traffic events can negatively impact the operation of any network device, as well as overall network availability.

This document will focus primarily on the control plane protection capabilities available in the new Cisco Catalyst 6500 Series Supervisor Engine 2T, including:

New default service policy for classifying and policing common control-plane traffic types

Classification enhancements using modular quality-of-service (QoS) command-line interface (MQC) policies

Advanced classification using hardware rate limiters (HRL)

The use of Cisco IOS® Flexible NetFlow to monitor the control-plane interface

Overview

There is an increasing need to protect the network infrastructure from both malicious and nonmalicious network events that can negatively impact the control plane of network devices. In many cases, these events are malicious denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks.[1] In other cases, these network events can be completely unintentional or accidental, resulting from network moves and changes or possibly component or device failures. In all cases, network operators need to protect, or harden, the infrastructure from these events so that the infrastructure and its applications remain viable.

The Control Plane and the Data Plane

In the context of this white paper, the term “control plane” refers to network control protocols and other control traffic processed by the network device (switch or router). A network device will process control-plane traffic using its CPU subsystem.

There are many different types of control-plane traffic. Network routing protocols are the most obvious example: routing protocols can send traffic directly to the router’s IP address or use a Layer 3 multicast address. Either way, these messages are received into the CPU subsystem for processing by the relevant routing process.

Other examples of control-plane traffic include “data plane” forwarding exceptions. The data plane refers to the hardware forwarding engine and switching fabric components that forward traffic through the device. Today’s multilayer switches all use application-specific integrated circuits (ASICs), which process packet lookup decisions in the forwarding engine and switch data from one interface to another using the switching fabric. Typically, the forwarding engine is capable of forwarding data at Layers 2 through 4 without any involvement from the CPU subsystems. However, there can be exceptions. These exceptions cause the hardware forwarding engine to divert or punt the traffic to the CPU subsystem to be processed and switched in software.

To sum up, the term “control plane” refers to software processing and software-based switching, while “data plane” refers to hardware-based forwarding and switching. Some examples of control-plane traffic types are shown in Table 1.

Table 1. Sample Control-Plane Traffic Types

Network Protocols | Exception Traffic Requiring CPU Processing
Open Shortest Path First (OSPF), Routing Information Protocol (RIP), and Border Gateway Protocol (BGP) messages | IP packets with IP Options set
Spanning Tree Bridge Protocol Data Units (BPDUs) and messages | IP packets that require Address Resolution Protocol (ARP) resolution
ARP requests and replies | IP packets with a Time-To-Live (TTL) value of 1
Simple Network Management Protocol (SNMP) messages | IP packets that fail a Reverse Path Forwarding (RPF) check
Internet Control Message Protocol (ICMP) messages | Packets that require logging

Network devices process control traffic using CPU and software resources. If a switch or router’s control plane is not protected, it is possible to oversubscribe the control plane’s resources, causing a wide variety of negative results. Some symptoms of control-plane oversubscription include:

Slow response to CLI sessions

Routing protocol keepalive messages becoming inconsistent, causing links to flap

Memory overflows or lack of memory available for certain processes

A critical point here is that control-plane traffic requires CPU processing and resources, of which there is a finite amount. Therefore, these resources must be protected.

Given the requirement to protect the finite amount of CPU resources, it is still undesirable to simply block or drop all control-plane traffic once a given threshold is reached or once a problem is detected. This is because some amount of the control-plane traffic is very likely legitimate traffic, and processing this traffic is imperative to maintaining network stability.

For all these reasons, a more intelligent means of classifying, measuring, and limiting the amount of control-plane traffic is desired. The Cisco Catalyst 6500 Series Supervisor Engine 2T and Policy Feature Card 4 (PFC4) were designed with specific control-plane traffic classification capabilities to protect the CPU from being oversubscribed while also ensuring that critical traffic continues to be serviced.

The Cisco Catalyst 6500 Series Control Plane

The Cisco Catalyst 6500 Series Multilayer Switch Feature Card (MSFC) provides the main CPU subsystem of the 6500 Series switches. The MSFC5 uses a dual-core PowerPC CPU, with each core running at 1.5 GHz, and provides 2 GB of memory. The MSFC5 also includes a Connectivity Management Processor (CMP), which is a true out-of-band management CPU subsystem separate from the main CPU subsystem. The CMP is capable of downloading a system image and performing other critical management functions.

The MSFC5’s dual-core CPU subsystem runs a single Cisco IOS Software image providing all the Layer 2 and Layer 3 software processing functionality, as well as system management functionality.

Protecting the Control Plane

Protecting the CPU begins with detecting and classifying traffic destined for the CPU. This occurs in the data-plane forwarding ASICs, which are part of the PFC4 and Distributed Forwarding Card 4 (DFC4) forwarding engines.

The PFC4 and DFC4 forwarding engines provide enhanced traffic classification capabilities, with traffic policing and monitoring that can be applied to traffic sent to and from the CPU. These new enhancements are configured and monitored via two primary features:

Modular QoS Command Line Interface (MQC) service policies applied to the control-plane interface

Hardware rate limiters

As traffic enters the switch, the relevant forwarding engine (PFC4 or DFC4) determines the destination at Layer 2 or Layer 3 per the configuration applied. If the traffic is sent directly or indirectly to the switch’s own IP address or if the traffic requires some kind of exception handling, the forwarding engine will direct the traffic to the CPU. This is where the PFC4 can be used to classify and use QoS mechanisms, such as policing, to limit or throttle the amount of traffic sent to the control plane. Control-plane traffic is sent to and from the dual-core CPU subsystem via a 1-Gbps full duplex connection to the data-plane switching fabric, as shown in Figure 1.

Figure 1. Logical Diagram of the Control Plane

Distributed Forwarding Engines and Control Plane Protection

The Cisco Catalyst 6500 Series is a distributed-forwarding, multilayer switch. This means that data-plane packet lookup decisions are accelerated in hardware for Layer 2 through 4 services. The distributed-forwarding aspect comes from the fact that multiple forwarding engines are supported, including the Policy Feature Card (PFC), which is the primary forwarding engine, and any line cards that support a Distributed Forwarding Card (DFC).

The PFC and DFC forwarding engines operate in a master-slave relationship, where the Supervisor Module PFC is the master forwarding engine, synchronizing its forwarding tables with the DFC forwarding tables located on the line cards. The Supervisor PFC also serves as the forwarding engine for any line cards that do not contain a DFC module. The DFC modules perform Layer 2 through 4 forwarding for their local line card and thereby increase the overall system forwarding capacity.

The distributed-forwarding architecture allows for CPU-bound traffic to be sourced from any of the forwarding engines (PFC or DFCs). As described earlier, the control-plane mechanisms within the PFC and DFC provide the control plane protection. Therefore, each individual forwarding engine implements control plane protection on a per-forwarding-engine basis.

For example, if a given policy provisions a QoS policer[2] rate for 1000 bps for a given traffic class, each forwarding engine will police 1000 bps of matching traffic. The aggregate amount of traffic sent from the forwarding engines is therefore potentially 1000 bps multiplied by the number of forwarding engines. As a final step, the control plane itself implements a software-based queuing and policing mechanism that will police the sum of all traffic from the forwarding engines at the configured rate of 1000 bps. Figure 2 illustrates distributed-forwarding engines and control-plane policing (CoPP).

Figure 2. Distributed-Forwarding Engines and Control Plane Policing

The Cisco Catalyst 6500 Series implements QoS policers on a per-forwarding-engine basis; this applies to the control plane protection policies as well. Both MQC service policies and hardware rate limiters are implemented on a per-forwarding-engine basis.

Distributed Policing and Control Plane Protection

Systems based on the Cisco Catalyst 6500 Series Supervisor Engine 2T support an optional distributed policing capability in which multiple forwarding engines can communicate and synchronize the amount of traffic transmitted for a specific policer. The Supervisor Engine 2T supports up to 16,384 (16 K) policers; 4096 (4 K) of these can be distributed policers. The distributed-policing option is useful when implementing control plane protection policies on systems with one or more DFC4 forwarding engines installed.

With distributed policing enabled, whenever a traffic policer policy is applied across multiple forwarding engines, the system will dynamically create a policing cluster. The cluster will include a traffic policer from each relevant forwarding engine. The cluster will also dynamically elect one of the forwarding engines to become the master policer for the cluster. Each forwarding engine implements a local traffic policing policy and communicates the amount of traffic forwarded by the local policer to the system elected master policer. The master policer then communicates the aggregate traffic amount transmitted to the relevant forwarding engines.

As Figure 3 illustrates, the communication between the policers on the different forwarding engines is performed via policer update packets (PUPs). The PUPs are sent via the in-band switching fabric at specific intervals or when traffic thresholds are reached. Each of the forwarding engines maintains counters and threshold values for locally transmitted traffic and the aggregate traffic transmitted for the cluster. The result is that the forwarding rate configured in the policy can be fully enforced across all forwarding engines. Ultimately, this means that the aggregate rate is enforced in hardware with little or no additional policing required by the final software policer.

Figure 3. Distributed Policing Capability of the Systems Based on Cisco Catalyst 6500 Series Supervisor Engine 2T

The following CLI examples show how to enable the distributed policing functionality and verify the current status.

Use the following command line to enable distributed policing:
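A minimal sketch follows, assuming the distributed-policing option is exposed under the platform qos command tree in Cisco IOS 12.2(50)SY; verify the exact keyword sequence for your release with the CLI help (platform qos ?):

router(config)# platform qos police distributed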

Use the following to verify distributed policing status:
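As a further sketch, again assuming 12.2(50)SY keywords, the platform QoS status output can be checked and filtered for the distributed-policing state; the exact show command may differ by release:

router# show platform qos | include [Dd]istributed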

Applying an MQC Service Policy to the Control-Plane Interface

The Cisco IOS Modular QoS Command Line Interface (MQC)[3] provides a hierarchical command set to define and apply QoS policies. Using MQC, a user-defined service policy can be applied to the control-plane interface in the same way a service policy can be applied to any other interface. Using an MQC-based service policy on the control-plane interface provides a familiar method to classify and control traffic, including using MQC-related CLI show commands and SNMP MIB data to monitor traffic counters.

Using an MQC service policy on the control-plane interface provides the following important benefits:

Traffic visibility and monitoring using CLI or SNMP-based applications

Traffic QoS policing and marking actions using MQC policy maps

Consistent configuration with QoS settings

The first step in creating an MQC service policy is creating a class map or set of class maps. The class map defines a specific set of traffic and is created using Cisco IOS access control lists (ACLs) or other match statements. Cisco IOS ACLs typically define traffic using Layer 2 through 4 addresses or port numbers in the frame or packet; ACLs may also match on other properties, such as IP options or Time-To-Live (TTL) values.

Once the class map is defined, the next step is to create a policy map that defines a specific action to apply to a class map. For control-plane policing (CoPP) policies, the typical action will be to apply a QoS policer for a specific rate in either packets per second or bits per second.

Finally, a service policy is created using the previously defined class maps and policy maps, and the service policy is applied to an interface with a specific direction, either input or output. For CoPP policies, the input direction is the most applicable direction, as this will filter traffic coming into the CPU.

Table 2 provides an example MQC service-policy configuration applied to the control-plane interface.

Table 2. Example MQC Service-Policy Configuration

Step 1. Define the ACL:
  router(config)# access-list 101 permit icmp any any

Step 2. Define the traffic class:
  router(config)# class-map reporting
  router(config-cmap)# match access-group 101

Step 3. Define the policy:
  router(config)# policy-map cpp-policy
  router(config-pmap)# class reporting
  router(config-pmap-c)# police 100000 conform-action transmit exceed-action drop violate-action drop

Step 4. Attach the policy to the control plane:
  router(config)# control-plane
  router(config-cp)# service-policy input cpp-policy

Supervisor 2T and Enhanced Traffic Classification for MQC

Cisco Catalyst 6500 Series Supervisor Engine 2T provides enhanced traffic classification capabilities in hardware. Many of the new classification capabilities can now be defined using MQC class maps. Using these enhanced classification capabilities in an MQC class map reduces reliance on the special-case hardware rate limiters. The following match criteria are matched in hardware using the Supervisor Engine 2T:

Layer 2 MAC ACLs

ARP traffic

Multicast and broadcast traffic

IP options

TTL failures

Maximum transmission unit (MTU) failures

ICMP unreachable (FIB miss and Layer 3 ACL deny)

ICMP redirect

Default MQC Policy

Cisco Catalyst 6500 Series Supervisor Engine 2T provides a system-generated service policy that is applied to the control-plane interface as part of the default configuration. The default service policy includes a policy map named policy-default-autocopp[4]. The policy map uses class maps to classify some of the most common types of control-plane traffic, along with QoS policers, to set an appropriate traffic rate. The policer rates are set by default at a conservative level. Customers can tune the policer levels to better match their individual environment.

The default service policy, policy-default-autocopp, uses system-generated class maps. The system-generated class maps are unique compared to user-defined class maps in that some of the class maps do not use any ACLs. Instead, they trigger hardware filtering on exception traffic, which is not definable using ACLs. Table 3 provides a comparison between two system-generated class maps, one using the internal trigger and one using an ACL.

Table 3. Examples of Reserved Class Maps Using Only Internal Traffic Exception Filters or Access Control Lists

Internal traffic filtering:
  Class map:
    Class Map match-all class-copp-unknown-protocol (id 14)
      Match any
  Filter type: No ACL needed, uses internal hardware triggers

Access control list filtering:
  Class map:
    Class Map match-any class-copp-match-igmp (id 13)
      Match access-group name acl-copp-match-igmp
  Filter type:
    Extended IP access list acl-copp-match-igmp
      10 permit igmp any any

One important note regarding the system-generated class maps is that you should not add any additional match statements to them. This is especially important for the class maps that use internal exception traffic filtering without an ACL statement. Additional match statements will not be programmed into hardware and will instead be enforced in software, separately from the hardware enforcement.

All system-generated class maps use reserved class map names and automatically configure the forwarding engine with the necessary match criteria for the given traffic class. All of the system-generated class map names start with class-copp.

Hardware Rate Limiters

Hardware rate limiters, also known as “special case hardware rate limiters” or “MLS rate limiters,” are traffic filters built into the hardware forwarding ASICs of the PFC3 or PFC4. These filters detect a variety of traffic types that require control-plane processing, including traffic types that cannot be classified using ACLs.

The Supervisor Engine 2T improves upon the hardware rate limiters available in the Supervisor Engine 720 in many ways, including the number of hardware rate limiters that can be active at a given time[5]. Another big improvement is that the Supervisor Engine 2T provides counters for the hardware rate limiters. The counters are available in both packets per second and bits per second.

Table 4. Supervisor Engine 2T and Supervisor Engine 720 PFC3/PFC4 Hardware Rate Limiter Support

Feature | Supervisor 720-PFC3 | Supervisor 2T-PFC4
Number of active rate limiters | Layer 3: 8 active; Layer 2: 4 active | Layer 3: 31 active; Layer 2: 26 active
Configurable in packets per second and bits per second | No, pps only | Yes, both packets and bits per second
Priority among rate limiters | No | Yes
Ability to leak the first packet above the rate-limit threshold | No | Yes
Counters for forwarded, dropped, and leaked packets | None | Yes
Configuration CLI | Router(config)# mls rate-limit | Router(config)# platform rate-limit

The following code shows how to configure hardware rate limiters on the Supervisor Engine 2T.
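The example below is an illustrative sketch that rate-limits ICMP redirect traffic; it assumes the platform rate-limit keyword tree on the Supervisor Engine 2T mirrors the earlier mls rate-limit syntax, and the rate (packets per second) and burst values are arbitrary:

router(config)# platform rate-limit unicast ip icmp redirect 1000 10

For comparison, the equivalent Supervisor Engine 720 (PFC3) form is:

router(config)# mls rate-limit unicast ip icmp redirect 1000 10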

Using MQC Service Policies Versus Hardware Rate Limiters

In the event that both a hardware rate limiter and an MQC service policy match a particular frame or packet, the hardware rate limiter will take precedence over the MQC service policy.

Certain hardware rate limiters should be used with caution because they affect a wide variety of control-plane traffic types:

CEF RECEIVE: Rate-limits all traffic bound to the router IP address.

LAYER2 PDU: Rate-limits Layer 2 Protocol Data Units (PDUs) such as Bridge Protocol Data Units (BPDUs), Cisco Discovery Protocol (CDP), Link Aggregation Control Protocol (LACP), Port Aggregation Protocol (PAgP), Dynamic Trunking Protocol (DTP), VLAN Trunking Protocol (VTP), and other Layer 2 PDUs.

As a best practice, we recommend that you use an MQC service policy rather than a hardware rate limiter whenever possible. Using an MQC service policy provides greater granularity and visibility compared to a hardware rate limiter. Also, the number of active hardware rate limiters is limited, although this is no longer a significant limitation with the Supervisor Engine 2T. If the traffic cannot be classified with an MQC service policy, use the hardware rate limiter.

For example, if the goal is to classify or rate-limit SNMP traffic bound for the switch itself, consider adding a class map and policy map to the control-plane interface service policy rather than using the CEF RECEIVE hardware rate limiter.
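For instance, a hedged sketch of such a configuration might look like the following; the ACL name, class-map name, and the 300-pps rate are illustrative choices rather than recommended values:

router(config)# ip access-list extended acl-copp-snmp
router(config-ext-nacl)# permit udp any any eq snmp
router(config-ext-nacl)# exit
router(config)# class-map match-any class-snmp-to-switch
router(config-cmap)# match access-group name acl-copp-snmp
router(config-cmap)# exit
router(config)# policy-map policy-default-autocopp
router(config-pmap)# class class-snmp-to-switch
router(config-pmap-c)# police rate 300 pps burst 60 packets conform-action transmit exceed-action drop

This keeps SNMP in its own class with its own counters, whereas the CEF RECEIVE rate limiter would throttle all traffic destined to the switch's own addresses.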

Adjusting CoPP Policies

Understanding the tools available to classify and rate-limit traffic is obviously important, but it's just as important to understand what rates to use and how the rates will affect the control plane. For example, what is a normal rate for OSPF traffic? BGP traffic? ARP requests? The answer will vary depending on multiple metrics. For the routing protocols, it would include the number of neighbors, timer settings, the number of routes in the network, and also the state of the network: is the network at steady state, or is there a convergence event in progress? There is no single rate or policy that can fit all scenarios.

As discussed earlier, the Supervisor Engine 2T provides a default configuration and service policy with conservative settings for control-plane traffic. Given that the effect on the CPU is based on an aggregate of all control-plane traffic and not just the single dimension from a specific class, there may be a need to tune and adjust some of the control-plane service-policy settings over time.

For existing MQC policies and hardware rate limiters, you can use the CLI- or SNMP-based tools to monitor the counters for any traffic exceeding the prescribed rates. It’s also important to consider “typical” periods of network traffic—that is, specific network events and how they might affect the counters for a specific traffic class. For example, in a Layer 2 environment with some 500 directly connected Layer 2 hosts, consider setting an ARP rate limiter to accommodate the 500 hosts. There is no need for a threshold higher than, for example, 1500 ARP requests per second. The more environment data you gather over time, combined with any network events occurring during this time, the more accurate your baseline of control-plane traffic will be. Table 5 provides some of the most relevant commands to monitor control-plane traffic.

Table 5. Commands for Monitoring Control-Plane Traffic

Command | Purpose
show interface <interface> stats | Provides summary of software switching statistics
show policy-map control-plane input class <class-map> | Provides counters for hardware and software CoPP on a per-class-map basis
show platform hardware statistics | Provides counters for hardware rate limiters

The following code shows the results for show interface stats indicating traffic on interface Gigabit Ethernet 3/1 being software switched via the control plane:
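In that output, packets counted against the “Processor” switching path (as opposed to “Route cache”) indicate traffic being switched in software via the control plane; the command takes the following form:

router# show interface gigabitethernet 3/1 stats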

The following code shows the abbreviated output for displaying counters for hardware rate limiters:
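As listed in Table 5, the rate-limiter counters are displayed with the following command; per Table 4, the Supervisor Engine 2T output includes forwarded, dropped, and leaked packet counts for each rate limiter:

router# show platform hardware statistics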

Control-Plane Traffic Monitoring with Flexible NetFlow

Supervisor Engine 2T supports Cisco IOS Flexible NetFlow for IP flow monitoring. Using Flexible NetFlow (FNF), you can monitor control-plane traffic on a per-flow basis to develop realistic traffic rates, which can then be used in developing custom control-plane service policies.

For example, consider a situation where CPU utilization is becoming abnormally high and CPU-bound traffic processing is seen as the source of the high CPU rates. In this case, the exact source of the traffic is not clear. One option to reduce the CPU-bound traffic is to enable the existing hardware rate limiter for CEF RECEIVE. Alternatively, you can use the MQC reserved class map for CPU-bound traffic, class-copp-receive. However, both of these techniques would classify all traffic destined to the CPU or the switch's IP addresses, which may include quite a variety of traffic, including some critical traffic. Ultimately, these rate limiters would help to reduce CPU utilization at the expense of all CPU-bound traffic, and would still not identify the source of any abnormal traffic flows.

A more intelligent solution would be to use Flexible NetFlow on the control plane interface and let it gather granular traffic statistics, including source and destination IP addresses with Layer 4 port numbers. In this way, you could develop a more detailed understanding of the traffic. Using this information, you can then create a new class map and policy map to provide an appropriate level of policing and control plane protection, without affecting legitimate control plane traffic.

The following command-line examples show how a Cisco Catalyst 6500 Series Switch with Supervisor Engine 2T can be used to develop a service policy for the scenario just described:

Consider an FNF flow record, as shown next. The record matches the IPv4 source address and destination address, as well as the Layer 4 port numbers, and collects statistics for packets on input.
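A sketch of such a record follows; the record name is an illustrative choice:

router(config)# flow record fnf-copp-record
router(config-flow-record)# match ipv4 source address
router(config-flow-record)# match ipv4 destination address
router(config-flow-record)# match transport source-port
router(config-flow-record)# match transport destination-port
router(config-flow-record)# collect counter packets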

You can create a Flexible NetFlow flow monitor using the previously created flow record:
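For example (the monitor name again being an illustrative choice):

router(config)# flow monitor fnf-copp-monitor
router(config-flow-monitor)# record fnf-copp-record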

The next step is to apply the flow monitor to the control plane interface, as shown here:
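A sketch follows, assuming the same ip flow monitor ... input form used on physical interfaces is accepted under control-plane configuration mode in this release:

router(config)# control-plane
router(config-cp)# ip flow monitor fnf-copp-monitor input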

With the Flexible NetFlow flow monitor in place on the control plane interface, you can use the Flexible NetFlow show commands to display the data. The following shows a show flow monitor command being used to display the NetFlow entries sorted by packet count, which quickly shows the top talkers first. The command is repeated after a few seconds. The counters indicate that the flow from IP address 9.1.0.2 has transmitted more than 53,000 packets, all sent to the switch’s IP address:
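A sketch of the command follows; the sort and top keywords are assumed to be available in this release (show flow monitor <name> cache alone displays the unsorted cache):

router# show flow monitor fnf-copp-monitor cache sort counter packets top 10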

Now that you’ve identified the traffic flow that is the source of the abnormal increase in CPU utilization, you can make a decision as to how to mitigate the traffic. One alternative would be to use an MQC policy map, which classifies and polices the offending traffic to a more reasonable rate.

The first step is to create an access list to classify the offending traffic in as granular a way as possible. In this case, a User Datagram Protocol (UDP) extended access list is used:
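A sketch follows, using the 9.1.0.2 source address identified in the NetFlow cache; the ACL name is an illustrative choice, and ideally the destination address and UDP port would be narrowed to the exact values observed:

router(config)# ip access-list extended acl-udp-offender
router(config-ext-nacl)# permit udp host 9.1.0.2 any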

The next step is to create an MQC class map that will match on the access list previously created:
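For example:

router(config)# class-map match-any class-udp-offender
router(config-cmap)# match access-group name acl-udp-offender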

With the class map configured, the next step is to add the new class map to the system-generated policy map policy-default-autocopp. The policy map will classify traffic based on the class map and then apply the desired action. In this case, we will use a QoS policer and police the traffic to 50 packets per second, allowing a burst of 10 packets beyond the prescribed rate:
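A sketch follows, assuming the packets-per-second form of the police command available in 12.2(50)SY:

router(config)# policy-map policy-default-autocopp
router(config-pmap)# class class-udp-offender
router(config-pmap-c)# police rate 50 pps burst 10 packets conform-action transmit exceed-action drop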

The new class map is applied to the system-generated policy map. The system configuration already has the policy map in use and applied to the control-plane interface. With the updated policy map in place, CPU utilization returns to normal, as shown here:
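Overall and per-process CPU load can be rechecked with the standard command:

router# show processes cpu sorted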

The MQC show commands can now be used to confirm the policer action. Note that counters are shown for the policer applied on each forwarding engine, followed by a separate set of counters for the final software-based policer. In this case, the only forwarding engine installed in the system is the PFC4 on the Supervisor Engine 2T.
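Following the command format given in Table 5, the per-class counters can be displayed as follows (using the illustrative class name from the previous steps):

router# show policy-map control-plane input class class-udp-offender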

Conclusion

Protecting the control plane of network devices continues to be a challenge for network operators because of both malicious and nonmalicious control traffic events. The control plane must be hardened so that critical control traffic processing is maintained, thereby maintaining the integrity of the network.

The new intelligent traffic classification capabilities available with Cisco Catalyst 6500 Series Supervisor Engine 2T provide a versatile and complete set of tools for protecting the control plane. The two main tools are the MQC-based service policy and the control-plane hardware rate limiters. In addition, the Supervisor Engine 2T now supports Flexible NetFlow for analyzing traffic on the control-plane interface. Flexible NetFlow provides granular traffic classification and allows for policies that target specific traffic flows without affecting other legitimate control-plane traffic.

For More Information

For more information on Cisco IOS NetFlow, visit: http://www.cisco.com/go/netflow.

For more information on Cisco Modular QoS Command Line Interface, visit: http://www.cisco.com/en/US/docs/ios/12_2t/12_2t13/feature/guide/ft3level.html.

Appendix A Default Control Plane Interface Policy Map
(Applicable to Cisco IOS 12.2(50)SY, may change in future releases)

Policy Map policy-default-autocopp
  Class class-copp-mcast-v4-data-on-routedPort
    police rate 10 pps, burst 1 packets
      conform-action drop
      exceed-action drop
  Class class-copp-mcast-v6-data-on-routedPort
    police rate 10 pps, burst 1 packets
      conform-action drop
      exceed-action drop
  Class class-copp-icmp-redirect-unreachable
    police rate 100 pps, burst 10 packets
      conform-action transmit
      exceed-action drop
  Class class-copp-ucast-rpf-fail
    police rate 100 pps, burst 10 packets
      conform-action transmit
      exceed-action drop
  Class class-copp-vacl-log
    police rate 2000 pps, burst 1 packets
      conform-action transmit
      exceed-action drop
  Class class-copp-mcast-punt
    police rate 1000 pps, burst 256 packets
      conform-action transmit
      exceed-action drop
  Class class-copp-mcast-copy
    police rate 1000 pps, burst 256 packets
      conform-action transmit
      exceed-action drop
  Class class-copp-ip-connected
    police rate 1000 pps, burst 256 packets
      conform-action transmit
      exceed-action drop
  Class class-copp-ipv6-connected
    police rate 1000 pps, burst 256 packets
      conform-action transmit
      exceed-action drop
  Class class-copp-match-pim-data
    police rate 1000 pps, burst 1000 packets
      conform-action transmit
      exceed-action drop
  Class class-copp-match-pimv6-data
    police rate 1000 pps, burst 1000 packets
      conform-action transmit
      exceed-action drop
  Class class-copp-match-mld
    police rate 10000 pps, burst 10000 packets
      conform-action set-discard-class-transmit 48
      exceed-action transmit
  Class class-copp-match-igmp
    police rate 10000 pps, burst 10000 packets
      conform-action set-discard-class-transmit 48
      exceed-action transmit
  Class class-copp-match-ndv6
    police rate 1000 pps, burst 1000 packets
      conform-action set-discard-class-transmit 48
      exceed-action drop

Appendix B System Reserved Class-Maps
(Applicable to Cisco IOS 12.2(50)SY, may change in future releases)

Class Map match-all class-copp-ingress-acl-reflexive (id 1)
Class Map match-all class-copp-icmp-redirect-unreachable (id 3)
Class Map match-all class-copp-glean (id 4)
Class Map match-all class-copp-receive (id 5)
Class Map match-all class-copp-options (id 6)
Class Map match-all class-copp-broadcast (id 7)
Class Map match-all class-copp-mcast-acl-bridged (id 8)
Class Map match-all class-copp-slb (id 9)
Class Map match-all class-copp-mtu-fail (id 10)
Class Map match-all class-copp-ttl-fail (id 11)
Class Map match-all class-copp-arp-snooping (id 12)
Class Map match-all class-copp-mcast-copy (id 13)
Class Map match-all class-copp-ip-connected (id 14)
Class Map match-any class-copp-match-igmp (id 16)
Class Map match-all class-copp-unknown-protocol (id 17)
Class Map match-all class-copp-vacl-log (id 18)
Class Map match-all class-copp-mcast-ipv6-control (id 19)
Class Map match-any class-copp-match-pimv6-data (id 20)
Class Map match-all class-copp-mcast-punt (id 21)
Class Map match-all class-copp-unsupp-rewrite (id 22)
Class Map match-all class-copp-ucast-egress-acl-bridged (id 23)
Class Map match-all class-copp-ip-admission (id 24)
Class Map match-all class-copp-arp-acl (id 25)
Class Map match-all class-copp-service-insertion (id 26)
Class Map match-all class-copp-mac-acl (id 27)
Class Map match-all class-copp-mac-pbf (id 30)
Class Map match-any class-copp-match-mld (id 31)
Class Map match-all class-copp-ucast-ingress-acl-bridged (id 32)
Class Map match-all class-copp-dhcp-snooping (id 33)
Class Map match-all class-copp-egress-acl-reflexive (id 34)
Class Map match-all class-copp-wccp (id 35)
Class Map match-all class-copp-nd (id 37)
Class Map match-all class-copp-ipv6-connected (id 38)
Class Map match-all class-copp-mcast-rpf-fail (id 39)
Class Map match-all class-copp-igmp (id 40)
Class Map match-all class-copp-ucast-rpf-fail (id 41)
Class Map match-all class-copp-mcast-ip-control (id 42)
Class Map match-any class-copp-match-pim-data (id 43)
Class Map match-any class-copp-match-ndv6 (id 44)
Class Map match-any class-copp-mcast-v4-data-on-routedPort (id 45)
Class Map match-all class-copp-bridge (id 46)
Class Map match-any class-copp-mcast-v6-data-on-routedPort (id 47)

Appendix C Hardware Rate Limiter Comparison PFC3 Versus PFC4

Table 6 compares the PFC3 and PFC4 hardware rate limiters.

Table 6. PFC3 Versus PFC4

Hardware Rate Limiter | PFC3 | PFC4 | Comments
CEF RECEIVE | Yes | Yes | Unicast to router
CEF RECEIVE SECONDARY | Yes | Yes | Unicast traffic to router on a secondary IP address
CEF GLEAN | Yes | Yes | Unicast traffic which requires an ARP
IP ERRORS | Yes | Yes | Unicast
UCAST IP OPTION | Yes | Yes | Unicast
ICMP ACL-DROP | Yes | Yes | Unicast
ICMP NO-ROUTE | Yes | Yes | Unicast
ICMP REDIRECT | Yes | Yes | Unicast
RPF FAILURE | Yes | Yes | Unicast
ACL VACL LOG | Yes | Yes | Unicast ACL
ACL BRIDGED IN | Yes | Yes | Unicast ACL
ACL BRIDGED OUT | Yes | Yes | Unicast ACL
ARP Inspection | Yes | Yes | Unicast
DHCP Snooping IN | Yes | Yes | Unicast
DHCP Snooping OUT | Yes | Not shown | Unicast; enabled via the same config as DHCP Snooping IN (PFC3/PFC4)
IP FEATURES | Yes | Yes | Unicast
MAC PBF IN | Yes | Yes | Unicast ACL
CAPTURE PKT | Yes | Yes | PFC3 uses mls rate-limit to configure this rate limiter; PFC4 configures it via the Optimized ACL Logging feature (ip access-list cache rate-limit)
IP ADMIS. ON L2 PORT | Yes | Yes | Layer 2
MCAST IPV4 DIRECTLY C | Yes | Yes | Name change in PFC4; PFC3 name is MCAST DIRECT CON
MCAST IPV4 FIB MISS | No | Yes | Multicast
MCAST IPV4 IGMP | Yes | Yes | Multicast
MCAST IPV4 OPTIONS | Yes | Yes | Multicast
MCAST IPV4 PIM | Yes | Yes | Multicast
MCAST IPV6 DIRECTLY C | Yes | Yes | Name change in PFC4; PFC3 name is MCAST IPv6 DIRECT CON
MCAST IPV6 MLD | Yes | Yes | Multicast
MCAST IPV6 CONTROL PK | Yes | Yes | Multicast
MCAST IPv6 DFLT DROP | Yes | N/A | PFC4 does not need this rate limiter
MCAST IPv6 *G M BRIDG | Yes | No | For PFC4, use MQC classification instead
MCAST IPv6 SG BRIDGE | Yes | No | For PFC4, use MQC classification instead
MCAST IPv6 SECOND. DR | Yes | N/A | PFC4 does not need this rate limiter
MCAST IPv6 *G BRIDGE | Yes | N/A | PFC4 does not need this rate limiter
MTU FAILURE | Yes | Yes | All
TTL FAILURE | Yes | Yes | All
MCAST BRG FLD IP CNTR | Yes | Yes | Multicast
MCAST BRG FLD IP | Yes | Yes | Multicast
MCAST BRG | Yes | Yes | Multicast
MCAST BRG OMF | Yes | Yes | Multicast
MCAST Non RPF | Yes | No | Non-RPF leak in PFC4 is enhanced; use CoPP for non-RPF leak when there are multiple streams
MCAST Default Adjacency | Yes | No | In PFC3 this is configured using the FIB MISS CLI; for PFC4, use MCAST IPV4 FIB MISS
MCAST PARTIAL SHORTCUT | Yes | N/A | PFC4 does not need this rate limiter
UCAST UNKNOWN FLOOD | Yes | Yes | Layer 2
LAYER_2 PDU | Yes | Yes | Layer 2
LAYER_2 PT | Yes | Yes | Layer 2
LAYER_2 PORTSEC | Yes | Yes | Layer 2
LAYER_2 SPAN PCAP | No | Yes | Configurable via monitor session CLI, not via platform rate-limit CLI
MAC PBF IN | Yes | Yes | mls rate-limit unicast acl mac-pbf
Layer_2 MINIPROTOCOL | Yes | No | Configurable via monitor session CLI, not via mls rate-limit CLI
DIAG RESERVED 0 | Yes | Yes | Reserved for Cisco internal use
DIAG RESERVED 1 | Yes | Yes | Reserved for Cisco internal use
DIAG RESERVED 2 | Yes | Yes | Reserved for Cisco internal use
DIAG RESERVED LIF 0 | Yes | Yes | Reserved for Cisco internal use
MCAST REPL RESERVED | Yes | Yes | Reserved for Cisco internal use



[1] In a denial-of-service (DoS) attack, an attacker attempts to prevent legitimate users from accessing information or services. More on DoS and DDoS attacks is available from the US-CERT website http://www.us-cert.gov/cas/tips/ST04-015.html.
[2] A QoS policer is a mechanism to rate-limit traffic using the packet lookup process. For the Cisco Catalyst 6500 Series, the QoS policer is a hardware function performed in the Supervisor Engine PFC or line card DFC.
[3] A complete explanation of MQC is beyond the scope of this paper. More information on MQC is available on the Cisco.com website.
[4] Appendix A lists the default policy map policy-default-autocopp.
[5] See Appendix C for a list of the hardware rate limiters available in the Sup720/PFC3 and the Sup2T/PFC4.