Cisco Catalyst 6500 Series Switches

Cisco Catalyst 6500-E Series Switch as the Backbone of a Unified Access Campus Architecture White Paper


Overview

Unified Access Campus Design

Services Integration

Wireless Services Module 2 (WiSM2)

Adaptive Security Appliance Services Module (ASA-SM)

Network Analysis Module 3 (NAM-3)

Smart Operations

Smart Install

Generic Online Diagnostics (GOLD)

Embedded Event Manager (EEM)

Security

Cisco TrustSec

Security Group Access Control Lists (SGACLs)

Network Device Admission Control (NDAC)

MACsec Encryption

Easy Virtual Networks (EVNs)

Control Plane Policing (CoPP)

Application Visibility and Control

Mini-Protocol Analyzer (MPA)

Flexible NetFlow (FnF)

Medianet

Performance Monitor

Mediatrace

Resiliency

Nonstop Forwarding with Stateful Switchover (NSF/SSO)

OSPF Nonstop Routing

Virtual Switching System (VSS)

Multichassis EtherChannel

Quad-Supervisor SSO

Conclusion


The Cisco® Catalyst® 6500-E Series Switch has been a strategic platform for more than a decade, traditionally providing services in the access, distribution, and core areas of campus, data center, and WAN networks for companies in every possible vertical. As market trends have changed to meet evolving customer demands, the Cisco Catalyst 6500-E Series Switch has adapted to support these new trends.

The influx of mobile devices, both corporately and personally owned, into the corporate campus network environment has forced IT departments to examine their network infrastructure to support these additional collaboration, video, and mobility needs. To address these requirements, the Cisco Catalyst 6500-E Series Switch has once again advanced its capabilities in the areas of smart operations, security, application visibility and control, and resiliency. With these enhancements, the Cisco Catalyst 6500-E Series Switch with Supervisor Engine 2T is the best choice for the backbone of the unified access campus architecture, delivering the services required to support an enterprisewide bring your own device (BYOD) infrastructure supporting video and collaboration services.

Unified Access Campus Design

Figure 1 shows a unified access campus architecture that will be referenced throughout this document.

Figure 1.      Unified Access Campus Design

Let us examine the different layers of the unified access campus architecture in Figure 1. Starting at the access layer are the Cisco Aironet® 2600 and 3600 Series Access Points. These connect to (from left to right) access layer switches from the Cisco Catalyst 3850, 4500-E, and 3750-X Series of switches. The Cisco Catalyst 3850 is a new concept in switching, offering converged wired and wireless in a single platform so that organizations can scale the wireless infrastructures that will be needed to support the proliferating BYOD requirements that are emerging in the industry.

The highlighted area illustrates where the Cisco Catalyst 6500-E with Supervisor Engine 2T (shown with integrated Wireless Services Module 2 [WiSM2]) will be positioned in the unified access campus architecture. The distribution and core layers of the network form the backbone of the unified access campus architecture and require a platform that is highly available, rich in services, and scalable enough to support the trends of BYOD, video, and collaboration being seen in today’s enterprise networks.

The Cisco Catalyst 6500-E with Supervisor Engine 2T is capable of supporting up to 4 terabits per second of data forwarding in a virtual switching system (VSS) configuration while maintaining a level of availability that can deliver 99.999 percent uptime to ensure operational continuity. The Supervisor Engine 2T supports advanced features that allow an organization to build a highly scalable, secure, converged wired and wireless campus network. This paper focuses on five primary areas in which a Cisco Catalyst 6500-E with Supervisor Engine 2T delivers unmatched feature functionality to enable a unified access campus architecture:

   Services Integration

   Smart Operations

   Security

   Application Visibility and Control

   Resiliency

Services Integration

With the innovative Cisco integrated service modules, network managers can deploy a broad range of LAN interfaces, security services, and content and network analysis services within the same platform. The modules are designed to take full advantage of the functionality and intelligence of the Cisco Catalyst 6500-E with Supervisor Engine 2T.

The integrated service module architecture simplifies infrastructure complexity through system and services integration, network virtualization, and simplified management and high availability, which all lead to a lower total cost of ownership (TCO). The current portfolio of services modules supported by the Supervisor Engine 2T includes, but is not limited to, the WiSM2, Network Analysis Module 3 (NAM-3), and Adaptive Security Appliance Service Module (ASA-SM). These three represent the newest generation of service modules in their respective areas of wireless, application visibility/control, and security and provide primary capabilities to support a unified access campus architecture.

Wireless Services Module 2 (WiSM2)

The Cisco Wireless Services Module 2 (WiSM2) Controller for Cisco Catalyst 6500-E Series Switches is a highly scalable and flexible platform that enables systemwide services for mission-critical wireless networking in medium-sized to large enterprises and campus environments. Designed for 802.11n performance and maximum scalability, the Cisco WiSM2 controller supports a higher density of clients and delivers more efficient roaming, with at least nine times the throughput of existing 802.11a/g networks. The WiSM2 controller has the ability to simultaneously manage up to 1000 access points, providing up to 20 Gbps of bandwidth and subsecond stateful failover of all access points from primary to standby controller.

The proliferation of wireless devices in enterprise campus networks as a result of BYOD is promoting the need for a converged wired and wireless infrastructure to provide ease of management as well as high availability to support delay-sensitive applications such as voice and video. The introduction of the Cisco Catalyst 3850 switch is a prime example of the convergence of wired and wireless, but there will be use cases where the Cisco Catalyst 3850 does not apply. Take, for example, the different options in Figure 2.

Figure 2.      Campus Wireless Deployment Scenarios

In the hybrid deployment model, an organization will have a mix of Cisco Catalyst 3850 Series (shown as the two switches on the left) in addition to Cisco Catalyst 4500-E (shown) or 3750-X Series in the access layer. This could be in a network where there is a mix of highly mobile users, who will need some of the advanced capabilities of the Cisco Catalyst 3850 Series, and back-office users, who will be more stationary and will not need those services. In this case, the Cisco Catalyst 6500 Series with Supervisor Engine 2T and WiSM2 is used to terminate the sessions of the back-office users, while the Cisco Catalyst 3850 Series will terminate the sessions for the mobile users. This means that an organization that has already made an investment in WiSM2 can protect that investment while at the same time enhancing its infrastructure with Cisco Catalyst 3850 technology.

In the traditional deployment, the organization has not yet deployed the Cisco Catalyst 3850 Series, or it might have no plans to do so for whatever reason (budget, technology requirements, and so on). In this case, the Cisco Catalyst 6500-E Series with Supervisor Engine 2T and WiSM2 is used to terminate all wireless sessions for the organization, providing the most scalable and highly available wireless infrastructure to meet the organization’s BYOD, video, and collaboration needs.

Adaptive Security Appliance Services Module (ASA-SM)

The Cisco Catalyst 6500-E Series ASA Services Module (ASA-SM) delivers advanced technology that transparently integrates with the Cisco Catalyst 6500-E with Supervisor Engine 2T to provide sophisticated security, virtualization, reliability, and performance. The ASA-SM supports up to 16 Gbps of multiprotocol firewalling, up to 2 million access control entries (ACEs), and up to 250 virtual contexts, making it the perfect firewall solution to support the scalability and network virtualization required in a unified campus architecture supporting BYOD, video, and collaboration, as shown in Figure 3.

Figure 3.      Virtual Firewall Contexts to Support a BYOD Infrastructure

As Figure 3 demonstrates, the ASA-SM working in a virtualized mode works in conjunction with other network elements to provide isolated domains for trusted and untrusted devices and users. If you have ever been to a Cisco office and requested access to the wireless network, this is how it is done. The wireless infrastructure presents different Service Set Identifiers (SSIDs) based upon user type. After the user is associated and authenticated, that user is placed into a virtual LAN (VLAN) for that user type, with a Virtual Routing and Forwarding (VRF) instance and firewall context to maintain isolation between the two groups.

With the addition of the Cisco Identity Services Engine (ISE), this can now be done at the device level using Device Sensor, so that even company employees would be put into separate security domains depending on the type of device they are using (personally owned versus corporate issued). The scalability of the ASA-SM's virtual context feature allows an organization to be very flexible in how it secures its network given the proliferation of devices in the enterprise campus environment.
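On the Supervisor Engine 2T side, traffic is steered to the ASA-SM by grouping VLANs and binding the group to the module's slot. A minimal sketch (the group number, VLAN range, and slot are illustrative; the contexts and security policies themselves are configured on the ASA-SM):

```
! Group the VLANs that the ASA-SM should service
firewall vlan-group 10 100-110
! Bind the VLAN group to the ASA-SM in slot 4
firewall module 4 vlan-group 10
```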

Network Analysis Module 3 (NAM-3)

One of the biggest challenges of BYOD in a unified access campus architecture is network analysis and monitoring. An organization has to monitor both its traditional traffic and corporate-owned infrastructure as well as employee-owned devices that are allowed onto the network. Network administrators need multifaceted visibility into the network and applications to help ensure consistent delivery of service to end users. Understanding who is using the network, knowing what applications are running on the network, assessing how the applications are performing, and characterizing how traffic is being used are the foundation for managing and improving the delivery of business-critical applications.

Integrated with the Cisco Catalyst 6500-E with Supervisor Engine 2T, the Network Analysis Module 3 (NAM-3) helps enable high-performance traffic monitoring, deep packet captures, and accurate performance analytics at 10 Gbps+ traffic speeds. The NAM-3 can collect information from across the unified access campus architecture using Switch Port Analyzer (SPAN), Remote SPAN (RSPAN), and Encapsulated RSPAN (ERSPAN); can act as a NetFlow collector for local or remote devices; and can integrate with Cisco Prime Infrastructure, which offers integrated network and application visibility, as shown in Figure 4.

Figure 4.      NAM-3 with Cisco Prime Network Analysis Module Software

The software delivers granular traffic analysis, rich application performance metrics, comprehensive voice analytics, and deep packet captures to help an organization manage and improve the operational effectiveness of the unified access campus architecture supporting BYOD, video, and collaboration.
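As a sketch of the local SPAN collection option mentioned above, the Supervisor Engine 2T can direct a copy of traffic to the NAM-3 data ports (the session number, source VLAN, and slot are illustrative):

```
! Copy ingress traffic from VLAN 100 to the NAM-3 in slot 3
monitor session 1 source vlan 100 rx
monitor session 1 destination analysis-module 3 data-port 1
```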

Smart Operations

Cisco Catalyst Smart Operations are a set of tools, capabilities, and management applications that network administrators can use to simplify deployment, management, and troubleshooting of the unified access campus architecture. The Cisco Catalyst 6500-E with Supervisor Engine 2T supports the latest smart operations capabilities, including Smart Install, Generic Online Diagnostics (GOLD), and Embedded Event Manager (EEM). Figure 5 shows the importance of smart operations.

Figure 5.      Importance of Smart Operations

Figure 5 shows that more than half of the average network administrator’s time is spent with network configuration, troubleshooting, monitoring, and installation. The tools offered as part of smart operations are meant to reduce that time so that network administrators can have more time to optimize the network to deliver the best possible experience to their end users.

Smart Install

Smart Install is a plug-and-play configuration and image-management feature that provides zero-touch deployment for the Cisco Catalyst 3850, 3750, 3560, 2975, and 2960 Series of switches. This means that a customer can ship a switch to a location, place it in the network, and power it on with no configuration required on the device. With Cisco IOS® Software Release 15.1(1)SY and newer, the Cisco Catalyst 6500-E with Supervisor Engine 2T acts as the Smart Install director, as shown in the architecture in Figure 6.

Figure 6.      Smart Install Architecture and Operation

The Smart Install operation requires no technical expertise of the person installing the new switch. After the switch is connected, the system will dynamically detect what type of switch it is and then begin the image load and configuration processes automatically. If a Cisco IOS Software upgrade of existing switches is needed, the director can push down a new software version to a single client or to all clients in a group (for example, all Cisco Catalyst 3850 switches).

To enforce security of the environment, network administrators can set a join window on the director so that no clients can be brought online during unauthorized times. The director can act as both the Dynamic Host Configuration Protocol (DHCP) and Trivial File Transfer Protocol (TFTP) servers for the clients, eliminating the need for services external to the local network infrastructure.
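A minimal director-side sketch follows (the IP address is illustrative, and exact options such as the join window vary by Cisco IOS release):

```
! Identify this switch's address as the Smart Install director
vstack director 10.10.10.1
! Enable the director function
vstack basic
```

Client switches require no configuration at all; they discover the director through the network, which is the essence of the zero-touch model.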

Generic Online Diagnostics (GOLD)

The Cisco Catalyst 6500-E with Supervisor Engine 2T supports diagnostic capabilities that allow a network administrator to test and verify the hardware functionality of the switch while the switch is connected to a live network or before deploying the switch in the production network. The online diagnostics contain packet switching tests that check different hardware components and verify the data path and control signals. These tests can prevent future network issues by taking corrective actions before a catastrophic failure and can provide valuable information when troubleshooting a network issue.

GOLD tests are categorized as bootup, on-demand, scheduled, or health-monitoring diagnostics. Bootup diagnostics run during bootup, on-demand diagnostics run from the command-line interface (CLI), scheduled diagnostics run at user-designated intervals or specified times when the switch is connected to a live network, and health-monitoring diagnostics run in the background. The nondisruptive online diagnostic tests run as part of background health monitoring. Either disruptive or nondisruptive tests can be run at the user's request (on demand). Figure 7 shows an example of a health-monitoring diagnostic test.

Figure 7.      GOLD Health Monitoring of Forwarding Path

In this example, the system is sending health-monitoring diagnostic packets every 6 seconds to test the data and control path between the Supervisor Engine 2T and any DFC4-equipped modules. The test also verifies Layer 2 MAC address consistency across the system's Layer 2 MAC address tables. If the test fails 10 consecutive times, the module is reset.

GOLD also has the ability to run a full system check before deploying the switch in the live network. This can be accomplished with the diagnostic start system test all command, which runs all possible GOLD tests for a particular hardware configuration.
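The commands below sketch this workflow (module and test numbers are illustrative; available tests depend on the hardware configuration):

```
! Run the complete diagnostic suite before production deployment
diagnostic start system test all
! Run an on-demand test against a single module
diagnostic start module 3 test all
! Review the results
show diagnostic result module 3 detail
! Adjust a health-monitoring interval (hh:mm:ss, milliseconds, days)
diagnostic monitor interval module 3 test 2 00:00:06 0 0
```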

Embedded Event Manager (EEM)

The ability to quickly react to events within the system is a critical piece to maintaining the kind of stable, reliable infrastructure required by BYOD, video, and collaboration. The Cisco Catalyst 6500-E with Supervisor Engine 2T meets this requirement through the support of the Embedded Event Manager (EEM). Cisco IOS Software EEM is a powerful and flexible subsystem that provides real-time network event detection and onboard automation that gives the network administrator the ability to adapt the behavior of network devices to align with their business needs.

EEM supports more than 20 event detectors that are highly integrated with different Cisco IOS Software components to trigger policies in response to network events. These policies are programmed using either the simple CLI applet syntax or a scripting language called Tool Command Language (Tcl). Figure 8 shows the EEM architecture and operational model.

Figure 8.      EEM Architecture and Operational Model

An event in the system (such as the generation of a syslog message) is seen by an event detector, which then triggers the configured policy, which in turn takes some action as defined by the network administrator. These actions can be in the form of notifications (such as custom syslog messages, Simple Network Management Protocol [SNMP] traps, or emails), customized configurations, or other system actions (such as reloading the system or failing over to a standby Supervisor Engine 2T). Figure 9 shows a use case of EEM using the GOLD event detector.

Figure 9.      EEM Using the GOLD Event Detector

The previous section talked about GOLD tests helping to prevent future network issues. When using GOLD with EEM, network administrators can be alerted to a GOLD health-monitoring test failure before the system would normally send the notification. The EEM script will see the failure of the test and send notification by any means possible (syslog, SNMP, email, text, and so on). If the test was scheduled for a period of low network activity, the EEM policy could be configured to force the module to reload and to collect detailed data, using simple show commands and exporting the output to a file, in order to gather information that can allow the root cause of the problem to be determined more quickly, leading to a lower mean time to repair and higher availability.

For those who may not be as comfortable with scripting or who need assistance with building a script, an online EEM community is available. The site contains scripts that have been built by other users, helpful “how to” examples, and a discussion forum in which EEM technical experts from Cisco answer questions.


Security

When it comes to building a unified access campus architecture to support BYOD, the number-one issue that comes to mind is usually security. With the influx of personally owned devices on the network, network administrators must build an infrastructure that is both flexible and secure enough to allow users access to their work environment regardless of the device they are using. The Cisco Catalyst 6500-E with Supervisor Engine 2T supports features such as Cisco TrustSec®, Easy Virtual Networks (EVNs), and Control Plane Policing (CoPP) to provide user access control, network segmentation, and infrastructure protection in a BYOD environment.

Cisco TrustSec

Cisco TrustSec offers a superior experience on a Cisco infrastructure, using features such as security group access control lists (SGACLs) for security policy enforcement, network device admission control (NDAC) for infrastructure protection, and 802.1AE MAC Security (MACsec) encryption for data integrity. The Cisco Catalyst 6500-E with Supervisor Engine 2T supports all of these capabilities and more, giving network administrators a highly flexible suite of features with which they can secure the backbone of the unified access campus architecture.

Security Group Access Control Lists (SGACLs)

The Cisco Catalyst 6500-E with Supervisor Engine 2T can act as both a security group tag (SGT) imposition point and an SGACL enforcement point. SGTs are usually applied at the access layer of the unified access campus architecture, using an ISE to assign the tags based on user authentication, device profiling, or a combination of the two. Figure 10 shows an example of the flexibility that Cisco ISE has in assigning SGTs.

Figure 10.    SGTs at the Access Layer

Figure 10 shows how the Cisco ISE can communicate with the access layer switch to apply SGTs based on user and device type. After the SGTs are assigned by the access layer switch, the Cisco Catalyst 6500-E with Supervisor Engine 2T can enforce the access policies that the network administrator configures in the Cisco ISE. If the access layer switch is unable to apply the SGTs, then the Cisco Catalyst 6500-E with Supervisor Engine 2T has the ability to apply SGTs in the backbone based on the IP subnet, the VLAN, or the Layer 3 port in which the user is located. Figure 11 shows examples of both the SGT imposition and SGACL enforcement capabilities.

Figure 11.    SGTs in the Unified Access Campus Backbone

After the SGT is assigned either at the access layer or in the backbone, the tagged traffic is passed through the network to an enforcement point. Figure 11 shows an example of an SGACL where traffic with SGT 1110 has access to resources in group 3200 on the allowed TCP ports, whereas any other IP traffic is denied. Because the SGACL is based on group memberships, changes in the underlying IP infrastructure do not require changes in the SGACL. For example, if 10 new subnets are added to the user access infrastructure, no change is needed in the SGACL, because all of the new users would be getting existing SGTs. This makes an SGT/SGACL infrastructure much easier to manage and much more flexible.
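A configuration sketch of this kind of enforcement follows. The tag values mirror Figure 11; the ACL name and TCP ports are illustrative. Note that role-based ACL entries omit source and destination addresses, since the source and destination groups are named in the permissions statement:

```
! Define a role-based ACL (no addresses; groups supply the scope)
ip access-list role-based ALLOW-WEB
 permit tcp dst eq 443
 permit tcp dst eq 80
 deny ip
! Apply it to traffic from SGT 1110 destined to group 3200
cts role-based permissions from 1110 to 3200 ALLOW-WEB
! Turn on SGACL enforcement
cts role-based enforcement
```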

Cases arise in which an organization wants to enact an enterprisewide SGT/SGACL infrastructure but has remote locations that are separated from the main campus by Layer 3 networks. The Cisco Catalyst 6500-E with Supervisor Engine 2T supports the ability to transmit SGT traffic from remote locations to a centralized enforcement site. Figure 12 shows the concept of connecting Cisco TrustSec domains across a domain without Cisco TrustSec.

Figure 12.    Connecting Cisco TrustSec Domains Across Domains Without Cisco TrustSec

The packet traversing a domain without Cisco TrustSec on the path to another Cisco TrustSec domain has its SGT preserved by using the Cisco TrustSec Layer 3 SGT transport feature. With this feature, the egress Cisco TrustSec device encapsulates the packet with an ESP header that includes a copy of the SGT. When the encapsulated packet arrives at the next Cisco TrustSec domain, the ingress Cisco TrustSec device removes the ESP encapsulation and propagates the packet with its SGT.

To support Cisco TrustSec Layer 3 SGT transport, the Cisco Catalyst 6500-E with Supervisor Engine 2T that will act as a Cisco TrustSec ingress or egress Layer 3 gateway must maintain a traffic policy database that lists eligible subnets in remote Cisco TrustSec domains as well as any excluded subnets within those regions. This database can be configured manually on each device if it cannot be downloaded automatically from the Cisco ISE.

Network Device Admission Control (NDAC)

One of the challenges faced by network administrators in any environment is guaranteeing that the physical infrastructure is secure. The Cisco Catalyst 6500-E with Supervisor Engine 2T supports the NDAC capability as part of its support of the broader Cisco TrustSec suite of features. Using NDAC, Cisco TrustSec authenticates a device before allowing it to join the network, thereby ensuring that no unauthorized devices are connected to the backbone of the unified access campus architecture. Figure 13 shows how an infrastructure using NDAC is built.

Figure 13.    NDAC Infrastructure Overview

Seed devices/authenticators are the first or closest devices to the ISE. In this case, the connectivity between the seed device and the ISE does not have authentication, encapsulation, or encryption enabled. Seed devices require manual configuration using traditional CLIs to define a shared secret with the ISE. Communication between the seed device and ISE uses RADIUS over IP.

Nonseed devices/supplicants are those that do not have direct IP connectivity to the ISE and require seed devices/authenticators to enroll and authenticate/authorize them onto the network. After the link between the supplicant and authenticator becomes activated, a protected access credential (PAC) will be provisioned to the supplicant, and ISE reachability information will also be downloaded. The PAC contains a shared key and an encrypted token to be used for future secure communications with the ISE.
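The manual configuration on a seed device can be sketched as follows. The device ID, addresses, and list name are illustrative placeholders; `cts credentials` is entered at the exec prompt rather than in configuration mode:

```
! Exec command: define this switch's Cisco TrustSec identity
cts credentials id SEED-6500 password <password>

! Global configuration: RADIUS with PAC provisioning toward the ISE
aaa new-model
radius-server host 10.1.1.10 pac key <shared-secret>
aaa authentication dot1x default group radius
aaa authorization network CTS-LIST group radius
cts authorization list CTS-LIST

! Enable NDAC (802.1X) on the link toward a nonseed device
interface TenGigabitEthernet1/5
 cts dot1x
```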

MACsec Encryption

Data integrity and security are requirements for organizations where sensitive information is being passed between areas of the network that might be out of the control of the organization. To protect this information from being accessed by unauthorized users, the Cisco Catalyst 6500-E with Supervisor Engine 2T supports 802.1AE MACsec 128-bit AES encryption on the uplinks of the Supervisor Engine 2T as well as on all 6900 Series Module ports (1G/10G/40G). MACsec provides hop-by-hop encryption between directly connected devices, all without affecting the performance of the underlying traffic.
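Between two directly connected switches, link-layer encryption can be brought up in Cisco TrustSec manual mode with a statically configured pairwise master key. A sketch, where the interface and the hexadecimal key are illustrative:

```
interface TenGigabitEthernet5/4
 cts manual
  ! Negotiate MACsec with AES-GCM encryption using a pre-shared key
  sap pmk 1234abcd mode-list gcm-encrypt
```

The same configuration must be applied on the peer interface with a matching key for the security association to form.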

A common example where MACsec encryption is used is between buildings on a campus. In many instances an organization might have a contiguous campus environment with its own dark fiber connections between the buildings, but those connections might exist in a publicly accessible space or at least one not totally controlled by the organization, as shown in Figure 14.

Figure 14.    MACsec Encryption in the Campus

This example of video surveillance traffic is one of the many use cases where MACsec encryption plays a vital role in the backbone of the unified access campus architecture. If this were a medical organization, financial institution, government agency, or any other organization whose data is highly confidential, then encrypting the traffic traversing the public space becomes critical to maintaining compliance with government regulations concerning data integrity.

In some cases an organization’s footprint is such that it has geographically separated locations separated by an ISP network, and yet the need for data integrity and security is the same as if the locations were on the same physical campus. For these cases, the Cisco Catalyst 6500-E with Supervisor Engine 2T offers the ability to pass 802.1AE MACsec encrypted traffic across a provider’s Multiprotocol Label Switching (MPLS) backbone, as seen in Figure 15.

Figure 15.    MACsec Encryption Across an MPLS Backbone

Figure 15 shows the same use case as Figure 14, except now the encrypted traffic is being passed across an ISP’s MPLS backbone instead of between buildings at the same physical site. This effectively extends the backbone of the unified access campus architecture to the entire enterprise even when that enterprise is composed of geographically disparate locations. The ability to pass encrypted traffic across an MPLS backbone gives the network administrator the confidence to be able to extend the same policies and capabilities to remote site users as exist for local site users while remaining assured that data security is maintained.

Easy Virtual Networks (EVNs)

The logical separation of forwarding instances (or segmentation) over a single physical infrastructure is a primary concept when considering network security. The addition of personally owned devices into the enterprise campus environment means that organizations that previously never had to deal with this issue will suddenly find themselves needing to implement segmentation to ensure that security or compliance guidelines are followed.

Organizations most commonly use VLANs, Multiprotocol Label Switching with virtual private networks (MPLS VPNs), and/or Virtual Routing and Forwarding Lite (VRF-Lite) to achieve network segmentation. The Cisco Catalyst 6500-E with Supervisor Engine 2T supports all of these methods with a very rich feature set to support each.

With Cisco IOS Software Release 15.0(1)SY1 and newer software, the Cisco Catalyst 6500-E with Supervisor Engine 2T supports the EVN feature. EVN simplifies deployment and management of MPLS VPNs and VRF-Lite to allow network administrators to more easily and quickly adopt these technologies, which can sometimes seem daunting to implement. The primary piece of EVN is the virtual network trunk (VNET trunk) capability, which vastly simplifies the deployment of VRF-Lite segmentation. Many organizations choose to deploy VRF-Lite VPNs because VRF-Lite does not require Border Gateway Protocol (BGP) or Label Distribution Protocol (LDP), and often the scalability of VRF-Lite (up to 32 VPNs) is more than what is needed.

Figures 16 and 17 show the benefit of using EVN in a VRF-Lite environment.

Figure 16.    VRF-Lite Configuration Without EVN

Figure 17.    VRF-Lite Configuration with EVN

In Figure 16, every subinterface on every switch carrying the VRF-Lite VPNs must be manually configured, so as the number of VRFs grows, the interface configuration becomes harder to work with and more prone to errors. An infrastructure with 6 nodes and 20 VRFs would require 6 main interface and 120 subinterface configurations.

In Figure 17 the benefits of the VNET trunk can be plainly seen in the massive reduction and simplicity of the interface configuration. When the trunk between the switches is established as a VNET trunk, all VRFs configured with the vnet tag command are automatically sent over the trunk. The only configuration steps a network administrator has to undertake are for the VNET trunk interface itself and the VNET tag assignment within the VRF definition. The network with 6 nodes and 20 VRFs would require only 6 main interface configurations, making it much easier to deploy and manage.

In addition to the VNET trunk capability, EVN introduces two other functions that ease the support and deployment of both MPLS VPNs and VRF-Lite. The first is a routing context that allows the network administrator to use the routing-context vrf <vrf-name> command to create a context in which exec-level commands (show, ping, traceroute, and so on) can be executed without adding the VRF name every time. The second is the ability to share services between VRFs using route leaking without the need for BGP, import/export statements, route distinguishers, and route targets, all of which are needed without this new capability.
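The EVN pieces described above can be sketched together in a few lines (the VRF name, tag value, interface, and addressing are illustrative):

```
! Define a VRF and bind it to a VNET tag
vrf definition red
 vnet tag 100
 address-family ipv4
 exit-address-family
!
! One command makes this interface carry every tagged VRF
interface TenGigabitEthernet1/1
 vnet trunk
 ip address 10.1.1.1 255.255.255.0
!
```

After configuration, entering `routing-context vrf red` at the exec prompt scopes subsequent show, ping, and traceroute commands to VRF red without repeating the VRF name.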

Control Plane Policing (CoPP)

The most vulnerable part of any switching infrastructure is the CPU, or control plane, which manages the hardware and maintains the Layer 2 and Layer 3 topologies. The CPU is usually not capable of operating at the speeds required of today’s switched networks, so network vendors have created higher performance application-specific integrated circuits (ASICs) to provide required features at speeds of tens of millions of packets per second. However, certain types of traffic still require CPU processing, and this traffic can potentially be sent to the CPU at ASIC speeds. Therefore, a mechanism must be put into place to protect the CPU from being overrun by traffic that it must process but that could be sent at a rate much higher than it can process.

The Cisco Catalyst 6500-E with Supervisor Engine 2T supports hardware-based CoPP, which increases security by protecting the CPU from unnecessary or denial-of-service (DoS) traffic and by giving priority to important control plane and management traffic. CoPP uses a dedicated control plane configuration through the modular quality-of-service (QoS) CLI (MQC) to provide filtering and rate-limiting capabilities, enforced by the PFC4 and DFC4, for the control plane packets. Figure 18 shows the operation of CoPP with the Supervisor Engine 2T.

Figure 18.    CoPP with the Supervisor Engine 2T

In this example, 410,000 bits per second are being sent toward the CPU. However, the CoPP policy is configured to allow only 10,000 bps of this type of traffic to reach the CPU. This rate is enforced across all forwarding engines (PFC4s/DFC4s) in the system, thereby making sure that the maximum amount of traffic that will reach the CPU is 10,000 bps. CoPP can also be configured to enforce limitations based on the number of packets per second of a specific traffic type, and a diverse set of counters is available to show how much traffic is being forwarded and dropped by a particular policy. This allows the network administrator to see where changes in the policies might need to be made if they are too restrictive or too open.
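As a sketch of the MQC-based configuration, the example below rate-limits one class of CPU-bound traffic to 10,000 bps, matching the figure; the ACL, class, and policy names are illustrative:

```
ip access-list extended COPP-MATCH
 permit icmp any any
!
class-map match-all COPP-CLASS
 match access-group name COPP-MATCH
!
policy-map COPP-POLICY
 class COPP-CLASS
  police 10000 conform-action transmit exceed-action drop
!
control-plane
 service-policy input COPP-POLICY
```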

All of the previously highlighted security features demonstrate why the Cisco Catalyst 6500-E with Supervisor Engine 2T is the best choice for the backbone of the unified access campus architecture. When it comes to the user security and segmentation (SGT/SGACL), infrastructure security (NDAC, CoPP), data security and integrity (MACsec, MACsec over MPLS), and infrastructure segmentation (MPLS, VRF-Lite, EVN) requirements of BYOD, video, and collaboration, no other backbone platform provides the scalability and feature functionality needed to support these enablers of such an architecture.

Application Visibility and Control

With all of the different types of devices, users, and traffic that will be traversing networks supporting BYOD, video, and collaboration, it becomes even more critical to have insight into that information so that the network administrator can properly support the requirements of the user community. Figure 19 shows several use cases in which the need for application visibility and control arises.

Figure 19.    Use Cases for Application Visibility and Control

Figure 19 shows many of the reasons why application visibility and control are so crucial to maintaining the unified access campus architecture. Whether it is for capacity planning, security, corporate compliance, or other reasons, it is vital that network administrators have an understanding of the users and traffic in their infrastructure. The Cisco Catalyst 6500-E with Supervisor Engine 2T supports a wide array of features that enable the network administrator to gain the necessary visibility into the network to make sure of delivery of a consistent end-to-end user experience. These features include, but are not limited to, Mini-Protocol Analyzer, Flexible NetFlow, and medianet, all of which are discussed in further detail in this section.

Mini-Protocol Analyzer (MPA)

The ability to inspect the entire content of a packet, also known as “packet capture” or “sniffing,” is sometimes a crucial part of troubleshooting a network problem, and that is the ability delivered by the Mini-Protocol Analyzer (MPA). The MPA captures network traffic from a SPAN session and stores the captured packets in PCAP format in a local memory buffer. The captured packets can be either locally analyzed or exported to another device for analysis. Filtering options allow the network administrator to limit the capture to packets from selected VLANs, ACLs, or MAC addresses; packets of a specific EtherType; or packets of a specified size. Captures can be started and stopped on demand or can be scheduled for a specific date and time. An MPA session can also be started from an EEM script that runs as the result of another event in the system.

The captured data can be displayed on the console, stored to a local file system, or exported to an external server using normal file transfer protocols. The format of the captured file is libpcap, which is supported by many packet analysis and sniffer programs (such as Wireshark). Figure 20 shows some of the configuration options for the MPA.
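The commands below sketch a typical MPA session; the VLAN, buffer size, and exact keywords may vary by software release:

```
! Define the capture session and a filter
Switch(config)# monitor session 1 type capture
Switch(config-mon-capture)# buffer-size 32
Switch(config-mon-capture)# source vlan 10
!
! Start the capture on demand, stop it, and inspect the buffer
Switch# monitor capture start
Switch# monitor capture stop
Switch# show monitor capture buffer detail
```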

Figure 20.    Mini-Protocol Analyzer Configuration Options

Flexible NetFlow (FnF)

Flexible NetFlow is the next generation in flow analysis technology. It optimizes the network infrastructure, reducing operation costs and improving capacity planning and security incident detection with increased flexibility and scalability. It gives the network administrator the ability to characterize IP traffic and identify its source, traffic destination, timing, and application information, which is critical for network availability, performance, and troubleshooting. The monitoring of IP traffic flows increases the accuracy of capacity planning and makes sure that resource allocation supports organizational goals.

The Cisco Catalyst 6500-E with Supervisor Engine 2T supports Flexible NetFlow with Cisco IOS Software Release 12.2(50)SY and newer. The gathering of flow information is done by all forwarding engines (PFC4s/DFC4s) individually for both IPv4 and IPv6 traffic, allowing the system to collect up to 13 million flow entries in a 6513-E system. Additional Flexible NetFlow capabilities such as per-VRF NetFlow, per-SGT NetFlow, Egress NetFlow, and MPLS NetFlow are also supported. Flexible NetFlow uses the NetFlow V9 header format, which gives the network administrator more control over the types of flows that are collected in the system. Figure 21 demonstrates the Flexible NetFlow model.

Figure 21.    The Flexible NetFlow Model

As Figure 21 shows, the Flexible NetFlow model is composed of three main components: flow exporters, flow records, and flow monitors. The flow exporter is simply the destination to which the NetFlow V9 encapsulated records will be sent. Notice that multiple flow exporters can be defined for the system, that multiple flow exporters can be used with a single flow monitor, and that flow exporters can be defined for every VRF in the system. This gives organizations with varying customer bases the ability to meet the needs of those customers to have independent flow collectors relevant to their own requirements.

Flow records contain the information that the network administrator wants to gather about each flow traversing the interface. Flow records contain two different types of fields: primary fields and nonprimary fields. Primary fields are unique attributes that help the system determine if the packet information is unique or similar to other packets. If a packet is unique, then a new entry is created in the NetFlow ternary content-addressable memory (TCAM) of the forwarding engine. If an entry is not unique, then no new entry is created, and the existing entry is updated. Figure 22 shows an example of this operation.

Figure 22.    Flexible NetFlow Operation

After the first packet, the NetFlow cache has one entry that is built based upon the primary fields (source IP, destination IP, source port, destination port, Layer 3 protocol, and TOS byte) in the flow record definition. When the second packet enters the system, the forwarding engine sees that it is identical to the first packet, so it simply increments the packet count to 2 for the entry previously created in the NetFlow cache. When the third packet enters the system, the forwarding engine builds a new entry in the NetFlow cache because the source IP address has changed (although nothing else has).

After the flow exporters and flow records are defined, the next step is to define the flow monitor. Referring back to Figure 21, notice that the flow monitor is simply the combination of a flow record and one or more flow exporters. It is important to note that flow monitors can share exporters, can be applied in different directions (ingress or egress), and that a single flow monitor can reference multiple exporters. If the network administrator wants to turn on NetFlow sampling, the flow monitor is where the sampler is applied. With the Supervisor Engine 2T, all sampling is done in hardware and provides the granularity to sample one packet out of a pool of 2 to 32,768 packets.
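Putting the three components together, a Flexible NetFlow configuration might look like the sketch below; the collector address, object names, interface, and 1-in-100 sampling rate are all illustrative:

```
flow exporter EXPORT-1
 destination 192.168.100.10
 transport udp 2055
!
flow record RECORD-1
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 match ipv4 protocol
 match ipv4 tos
 collect counter bytes
 collect counter packets
!
sampler SAMPLE-1
 mode random 1 out-of 100
!
flow monitor MONITOR-1
 record RECORD-1
 exporter EXPORT-1
!
interface TenGigabitEthernet1/1
 ip flow monitor MONITOR-1 sampler SAMPLE-1 input
```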


Medianet

The introduction of new devices into the enterprise campus architecture as a result of BYOD means that there will be more traffic with which existing applications will have to contend. Many organizations have made heavy investments in video and collaboration infrastructures and need to make sure that these functions are not degraded as a result of the increased traffic. Cisco Medianet is an end-to-end network architecture comprising advanced, intelligent technologies and devices optimized for the delivery of rich-media experiences. A medianet architecture helps IT organizations deliver the best possible user experience, with exceptional efficiency, across a range of use cases.

As the primary component of the backbone of the unified access campus architecture, the Cisco Catalyst 6500-E with Supervisor Engine 2T and Cisco IOS Software Release 15.0(1)SY and newer support many of the medianet capabilities applicable to that area of the network and will continue to add new medianet functions as they become available. This section will focus on two of the major medianet functions that are critical to the assessment of the network infrastructure’s ability to handle rich media services: Performance Monitor and Mediatrace.

Performance Monitor

Cisco Performance Monitor provides the ability to monitor the flow of packets in the network and to become aware of any issues that might affect the flow before they significantly degrade the performance of the application in question. Performance monitoring is especially important for video traffic because high-quality interactive video is highly sensitive to network issues. Performance Monitor is focused on Real-time Transport Protocol (RTP) headers and provides real-time flow statistics for jitter, latency, and loss.

Cisco Performance Monitor uses software components and commands similar to those of Flexible NetFlow and QoS MQC, as shown in Figure 23.

Figure 23.    Performance Monitor Overview

The configuration example in Figure 23 shows how Performance Monitor uses the flow exporter, flow record, and flow monitor configurations found in Flexible NetFlow (discussed in the previous section) to gather the flow information. Then, using the QoS MQC configuration, it looks for traffic matching a particular metric, “rtp” flows in this case, to determine whether there are any issues in the network. Based on the analysis of the flows against the configured metric, Performance Monitor can display very detailed information that a network administrator can use to determine what changes need to be made to guarantee high-quality media communication.
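The sketch below illustrates the Performance Monitor variants of the Flexible NetFlow and MQC constructs; the record fields, names, and interface are illustrative, and the exact collect keywords may vary by release:

```
flow record type performance-monitor PERF-REC
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 collect transport rtp jitter mean
 collect transport packets lost counter
!
flow monitor type performance-monitor PERF-MON
 record PERF-REC
!
class-map match-all RTP-CLASS
 match protocol rtp
!
policy-map type performance-monitor PERF-POLICY
 class RTP-CLASS
  flow monitor PERF-MON
!
interface TenGigabitEthernet1/1
 service-policy type performance-monitor input PERF-POLICY
```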


Mediatrace

Cisco Mediatrace helps to isolate and troubleshoot network degradation problems, such as jitter, latency, and loss, by enabling a network administrator to discover an IP flow’s path, dynamically enable monitoring capabilities on the nodes along the path, and collect information on a hop-by-hop basis. This information includes flow statistics; utilization information for incoming and outgoing interfaces, CPUs, and memory; and any changes to IP routes or the Cisco Mediatrace monitoring state.

Mediatrace is enabled on each network node from which flow information is collected. The Mediatrace Initiator is enabled on the network node that will be used to control the Mediatrace sessions or polls. The Mediatrace Responder is enabled on each of the network nodes from which information will be collected. Figure 24 shows an example of how Mediatrace is used to collect information about an infrastructure.

Figure 24.    Using Mediatrace to Assess an Infrastructure

In this example, the Mediatrace Initiator has gathered information about specific endpoints in the unified access campus architecture to assess their readiness for telepresence communications. High-level information for each hop in the Mediatrace path is displayed in the middle box, while more granular information about a specific node in the path is displayed in the right box. The box on the far left shows the type of information that this particular profile is configured to gather. Mediatrace sessions can be run on demand, at a specific time or date, and within the body of an EEM script.
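In outline, deploying Mediatrace involves enabling the responder on each node in the path, defining the initiator, and then running a poll; the addresses below are hypothetical, and the exact poll keywords may vary by release:

```
! On every node along the path
Switch(config)# mediatrace responder
!
! On the node controlling the sessions or polls
Switch(config)# mediatrace initiator source-ip 10.10.10.1
!
! On-demand hop-by-hop poll toward the flow destination
Switch# mediatrace poll path-specifier destination ip 10.20.20.2 system
```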

Cisco Prime Collaboration Monitor provides GUI-based control for medianet to complement the CLI-based statistics and control available on the individual medianet-capable nodes.


Resiliency

The introduction of more delay-sensitive and mission-critical applications into the unified access campus architecture as a result of BYOD, video, and collaboration means that the infrastructure must achieve the highest possible level of availability and reliability to guarantee that these applications function properly. The Cisco Catalyst 6500-E with Supervisor Engine 2T delivers more resilient capabilities than any other backbone platform. These capabilities include, but are not limited to, Nonstop Forwarding with Stateful Switchover (NSF/SSO); VSS functions, including multichassis EtherChannel (MEC) and quad-supervisor SSO; and Nonstop Routing for Open Shortest Path First Version 2 (OSPFv2), all of which are discussed in this section.

Nonstop Forwarding with Stateful Switchover (NSF/SSO)

The Cisco Catalyst 6500-E with Supervisor Engine 2T mitigates hardware malfunction by allowing a redundant supervisor engine, either within the same chassis or in a second chassis in VSS mode, to take over if the primary supervisor engine fails. SSO (frequently used with NSF) minimizes the time a network is unavailable to its users following a switchover while continuing to forward IP packets.

NSF works with SSO to minimize the amount of time a network is unavailable to its users following a switchover. The main objective of Cisco NSF is to prevent an unnecessary change in the routing topology as a result of a control-plane failure.

Usually, when a networking device restarts, all routing peers of that device detect that the device went down and then came back up. This transition results in what is called a routing flap, which could spread across multiple routing domains. Routing flaps caused by routing restarts create routing instabilities, which are detrimental to the overall network performance. NSF helps to suppress routing flaps in SSO-enabled devices, thus reducing network instability.

A primary element of NSF is packet forwarding. In a Cisco networking device, packet forwarding is provided by Cisco Express Forwarding. Cisco Express Forwarding is always enabled in Cisco Catalyst 6500-E Series Switches and cannot be disabled. Cisco Express Forwarding maintains the forwarding information base (FIB) and uses the FIB information that was current at the time of the switchover to continue forwarding packets during a switchover. This feature reduces traffic interruption during the switchover.

When working with NSF, there are two possible operational roles for each node: NSF-capable and NSF-aware. NSF-capable devices are those that have dual control planes and are configured to perform an NSF restart should the active control plane fail. NSF-capable devices can be one physical device with two control planes, such as a Cisco Catalyst 6500-E with dual Supervisor Engine 2Ts, or they can be one logical device with two control planes, such as a VSS with one Supervisor Engine 2T in each chassis.

NSF-aware devices are those devices that are running NSF-compatible routing protocols (Enhanced Interior Gateway Routing Protocol [EIGRP], OSPF, BGP, and Intermediate System-to-Intermediate System [IS-IS]) and are capable of assisting an NSF-capable device in performing a restart of the routing process. If a device is NSF-capable and is running a routing protocol with NSF enabled, then all of the neighbor devices running that routing protocol must be at least NSF-aware, but they can be NSF-capable as well. Figure 25 shows the OSPF communication between the NSF-capable device and the NSF-aware device during an NSF operation.

Figure 25.    OSPF NSF Communication Example

Notice that the first communication between the NSF-capable device and NSF-aware device is a “grace Link State Advertisement (LSA)” (NSF is referred to as graceful restart in IETF standards). If the NSF-aware device did not have NSF awareness configured, then it would not understand the grace LSA and would undergo a routing flap, which could result in routing instability within the entire OSPF topology. After the standby control plane on the NSF‑capable device comes online, the link state database and FIB are rebuilt. A comparison of the new FIB contents is made to determine if any of the last-known-good entries need to be updated. If so, then just those entries requiring an update are changed, while the rest are left unchanged.
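In configuration terms, a minimal sketch of enabling NSF/SSO on an NSF-capable device is shown below; the OSPF process ID is illustrative, and the nsf ietf keyword selects the IETF graceful-restart (grace-LSA) mechanism described above, while nsf cisco selects the Cisco mechanism:

```
redundancy
 mode sso
!
router ospf 1
 nsf ietf
```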

OSPF Nonstop Routing

Starting with Cisco IOS Software Release 15.1(1)SY and newer, the Cisco Catalyst 6500-E with Supervisor Engine 2T supports the OSPFv2 Nonstop Routing (NSR) feature, which increases the availability of any infrastructure running OSPFv2 (OSPFv3 NSR will be added in a later code release). Although OSPF NSR serves a function similar to that of OSPF NSF, it works differently. With NSF, OSPF on the newly active standby control plane initially has no state information, so it uses extensions to the OSPF protocol to recover its state from neighboring OSPF routers. For this to work, the neighbors must support the NSF protocol extensions and be willing to act as "helpers" to the restarting router. They must also continue forwarding data traffic to the restarting router while this recovery is taking place.

With NSR, by contrast, the router performing the switchover preserves its state internally, and in most cases the neighbors are unaware that anything has happened. Because no assistance is needed from neighboring routers, NSR can be used in situations where NSF cannot; for example, in networks where not all of the neighbors implement the NSF protocol extensions.
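Because no protocol extensions or neighbor assistance are involved, the configuration is correspondingly simple; a minimal sketch (process ID illustrative):

```
router ospf 1
 nsr
```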

Virtual Switching System (VSS)

The VSS combines a pair of physical switches into a single logical network element. VSS was developed to address the ever-increasing adoption of delay-sensitive applications, such as voice, video, and collaboration, that are appearing in enterprise networks. Traditional network topologies relied on protocols such as Spanning Tree Protocol and HSRP to manage loops and first-hop gateway redundancy, but those protocols were proving unable to handle the delay sensitivity of these newer applications. With its support for MEC and quad-supervisor SSO, VSS with the Supervisor Engine 2T provides the most highly available and reliable solution to address these requirements in the backbone of a unified access campus architecture.
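As a minimal sketch, forming a VSS involves assigning both chassis a common virtual switch domain, building the virtual switch link (VSL), and converting each chassis to virtual mode; the domain, port-channel, and interface numbers are illustrative:

```
! On chassis 1 (chassis 2 is configured similarly with "switch 2"
! and its own VSL port channel)
switch virtual domain 100
 switch 1
!
interface Port-channel1
 switch virtual link 1
!
interface TenGigabitEthernet1/1
 channel-group 1 mode on
!
! Exec command to convert the chassis to virtual switch mode
Switch# switch convert mode virtual
```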

Multichassis EtherChannel

MEC is an EtherChannel with ports that terminate on both chassis of the VSS. A VSS MEC can connect to any network element that supports EtherChannel by using manual ON mode, Link Aggregation Control Protocol (LACP), or Port Aggregation Protocol (PAgP). At the VSS, an MEC is an EtherChannel with an additional capability: the VSS balances the load across the ports in each chassis independently. For example, if traffic enters the active chassis, the VSS will select an MEC link on the active chassis. This capability makes sure that data traffic does not unnecessarily traverse the VSL. Figure 26 shows how an MEC changes the traditional campus design.

Figure 26.    Changing Traditional Campus Design with VSS MEC

In the traditional campus design, the use of Spanning Tree Protocol results in one of the uplinks from the access layer being blocked to prevent a network loop. If the active link fails, then Spanning Tree Protocol has to go through a multisecond process to unblock the blocked link. With VSS MEC, both of the uplinks from the access layer are actively forwarding traffic even though the two links are still physically connected to two separate chassis. The access layer switch sees just a single switch in the backbone above it because VSS makes the two physical chassis appear as one from a protocol standpoint. As a result, the access layer can form what it thinks is a regular EtherChannel with the backbone switch.

Where MEC delivers the biggest effect is in response to a link failure. Flows in an MEC are assigned according to a hashing mechanism determined by the hardware of the device sending the flow into the MEC. If a link in the MEC fails, the convergence time is on the order of milliseconds. Even the most advanced Spanning Tree Protocol variant, Rapid Spanning Tree, takes a few seconds (at best) to recover from a link failure, and in many cases the delay-sensitive applications traversing the enterprise cannot withstand multisecond outages without restarting. Deploying MEC in the backbone of the unified access campus architecture is essential to maintaining the level of availability and reliability demanded by voice, video, and collaboration applications.
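From a configuration standpoint, an MEC on the VSS is built like any other EtherChannel; the only difference is that the member ports live in different chassis, as reflected in the switch/slot/port interface numbering. The interface and channel numbers below are illustrative:

```
! One member link in each chassis of the VSS joins the same port channel
interface range TenGigabitEthernet1/2/1 , TenGigabitEthernet2/2/1
 channel-group 10 mode active
!
interface Port-channel10
 switchport
 switchport mode trunk
```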

Quad-Supervisor SSO

Most customers who follow the best practice of using MECs for all of the devices connected to a VSS will usually not require more than a single Supervisor Engine 2T in each chassis of the VSS. With a single Supervisor Engine 2T in each chassis of a VSS, the failure of the Supervisor Engine 2T will result in the affected chassis being offline until the failed Supervisor Engine 2T can be replaced. This primarily affects any devices that are not dual-attached (such as a single NIC), as well as service modules only installed in the affected chassis. For many customers that require the highest level of network availability, the ability to use two Supervisor Engine 2Ts in each chassis of a VSS, also known as quad-supervisor VSS SSO, is a must. The Cisco Catalyst 6500-E with Supervisor Engine 2T and Cisco IOS Software Release 15.1(1)SY1 and newer software support this capability. Figure 27 shows a comparison in recovery times between a VSS with a single Supervisor Engine 2T in each chassis versus a VSS with two Supervisor Engine 2Ts in each chassis.

Figure 27.    VSS Recovery Comparison: One Supervisor Engine 2T Compared to Two Supervisor Engine 2Ts in Each Chassis

In a traditional VSS, a failure of the single Supervisor Engine 2T in one of the chassis will result in the available bandwidth of the VSS being cut in half for an indeterminate amount of time. While there is a subsecond SSO switchover between the two chassis, there is no in-chassis redundancy. This means that the VSS will operate on the single remaining chassis until the failed Supervisor Engine 2T can be replaced. How long that will be depends on whether or not a replacement is on site, how far away the site is, what the Cisco SMARTnet® Service contract replacement details are, and so on.

In a VSS with quad-supervisor SSO capabilities, there is an in-chassis standby hot Supervisor Engine 2T in each of the chassis of the VSS. This means that SSO is now supported within the chassis as well as between the two chassis of the VSS. If the active Supervisor Engine 2T in the chassis fails, the in-chassis standby hot Supervisor Engine 2T will take over via a subsecond SSO failover. None of the modules in the chassis will go offline, and the available bandwidth for the VSS will be affected only during the subsecond SSO failover. In the case in which all DFC4s are used in a chassis and traffic is being locally switched on the module, there might be no traffic effect at all.


Conclusion

The trends of BYOD, video, and collaboration are forcing IT organizations to rethink how they architect their infrastructures. The proliferation of wireless devices and speeds in the enterprise is causing a shift in how Cisco approaches the design of a campus network, moving toward a converged wired/wireless architecture referred to as a unified access campus architecture. The backbone of the unified access campus architecture is a critical piece of the design, because it must have the ease of management, application visibility, security, and resilience to allow delay-sensitive and mission-critical applications to function as expected to meet the business needs of the user community.

When the requirements of the backbone in the unified access campus architecture are considered, the Cisco Catalyst 6500-E with Supervisor Engine 2T becomes the outstanding option to support those requirements. When it comes to the combination of scalability and feature richness, no other platform can match the capabilities of the Cisco Catalyst 6500-E with Supervisor Engine 2T. Without sacrificing performance, the Cisco Catalyst 6500-E with Supervisor Engine 2T allows network administrators to simplify their jobs through the support of smart operations while at the same time deploying the security, application visibility and control, and resiliency capabilities that the newest types of applications demand.