Network Virtualization with the Cisco Catalyst 6500 Supervisor Engine 2T White Paper
Introduction

Network virtualization is a cost-efficient way to provide traffic separation. A virtualized network offers multiple individual networks superimposed on a single physical infrastructure.

Within the campus, these individual networks can be used to transport traffic belonging to different departments or to third-party vendors. Another example of how traffic separation is used is for different security and/or routing policies. Network virtualization can also be used to transport both IPv4 and IPv6 on the same infrastructure.

As a concept, network virtualization is similar to server virtualization, where a physical server can host multiple virtual machines. The Cisco® Catalyst® 6500/6800 Series with network virtualization enables network resources to be deployed and managed as logical services rather than as physical resources. As a result, companies can:

   Enhance enterprise agility

   Improve network efficiency

   Reduce capital and operational costs

   Maintain high standards of security, scalability, manageability, and availability throughout the campus design

The Cisco Catalyst 6500/6800 Series Supervisor Engine 2T is the ideal platform on which to build a virtual network for the campus. The Supervisor Engine 2T introduces many capabilities that its predecessor did not support, such as LISP, Virtual Private LAN Services (VPLS) and H-VPLS, L2omGRE natively in hardware, 4000 VRFs, L3VPNomGRE, higher Multiprotocol Label Switching (MPLS) throughput, VPN routing and forwarding (VRF)-aware services such as Web Cache Communication Protocol (WCCP) for application acceleration, and VLAN Reuse for scalability.

The Cisco Catalyst 6500/6800 Series platform is in a unique place in the network because it supports the Cisco Wireless Services Module 2 (WiSM2) Wireless Controller as well as the Cisco Adaptive Security Appliance (ASA) Service Module. These modules make it possible to apply a consistent security policy to both wireless and wired users using network virtualization segmentation.

It is very important to take a holistic approach when designing a virtualized infrastructure. You must consider the main components that enable an end-to-end virtualization solution: access to the virtualized infrastructure, path isolation across the transport, network services, and WAN access (Figure 1).

Figure 1.      Virtualization Topology

   Network access control and segmentation of classes of users: This component identifies users who are authorized to access the network and then places them into the appropriate logical partition.

   Path isolation: This component maintains traffic partitioned over a routed infrastructure and transports traffic over and between isolated partitions. The function of mapping isolated paths to virtual LANs (VLANs) and to virtual services is also performed in this component.

   Network services virtualization: This component provides access to shared or dedicated network services such as address management (Dynamic Host Configuration Protocol [DHCP] and Domain Name System [DNS]). It also applies policy per partition and isolates application environments, if required.

   WAN access: This component provides inbound policies, restricts outbound traffic, and merges the different partitions for Internet access. In this component, traffic separation can be maintained to connect to other sites that share the same virtualization scheme.

Virtual Private Networks

A virtual private network (VPN) can be defined as a private network within a shared infrastructure. Each VPN has its own routing and forwarding table inside the Cisco Catalyst 6500/6800 Series Supervisor Engine 2T, and prefixes that belong to a VPN are given access only to the set of routes contained within that VPN. This removes the requirement that prefixes be globally unique; the address space needs to be unique only within a VPN.

The Supervisor 2T contains a routing table per VPN and a global routing table that is used to reach other routes inside the network, as well as external globally reachable destinations (for example, the rest of the Internet).

More structures are associated with each virtual router than just the routing table:

   A forwarding information base (FIB) that is derived from the routing table

   An adjacency table that contains the next hop addresses

   Rules that control the import and export of routes from and into the VPN routing table

   A set of routing protocols that create the routing information base (RIB) within the VPN

The combination of the VPN IP routing table and the associated VPN IP forwarding table is called a VPN routing and forwarding (VRF) instance.
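To make the VRF concept concrete, the following is a minimal sketch of a multiprotocol VRF definition in Cisco IOS; the VRF name and route distinguisher are illustrative values, not taken from this document:

  ! Define a VRF with its own RD; each address family gets its own RIB and FIB
  vrf definition RED
   rd 100:1
   !
   address-family ipv4
   exit-address-family
   !
   address-family ipv6
   exit-address-family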

The Supervisor 2T offers several virtualization solutions that span different needs. The criteria that are used to select one set of solutions over another are usually the scalability of the specific technology, ease of deployment, and manageability. This document will review different technologies and will highlight the benefits and drawbacks of using one over another.

Access Control

Wireless Clients

Wireless clients associate to the access point using a service set identifier (SSID). Each defined SSID can use a different authentication method; for example, guest users can be associated using a broadcast SSID with open authentication, while managed users may benefit from a separate SSID with a dynamic wireless authentication mechanism (Extensible Authentication Protocol [EAP] and so on), static Wired Equivalent Privacy (WEP) keys, or even open authentication.

The Supervisor Engine 2T supports the latest two generations of wireless services module (WiSM). The WiSM is a wireless LAN controller services module that, among multiple functionalities, uses the Control and Provisioning of Wireless Access Points Protocol (CAPWAP) to encapsulate original Ethernet frames from the wireless access point and transport them across Layer 3 boundaries. Using a combination of CAPWAP and VLANs, the administrator can logically isolate traffic for different user groups.

Wired Clients

At the access level, VLAN assignment is the preferred mechanism for associating a user with a logical segment. The VLAN assignment can be performed statically or dynamically using one of the identity technologies. Because the Supervisor Engine 2T includes Identity 4.1 support, static VLAN assignment is less desirable because of its lack of mobility, potential security hazards, and port provisioning issues.

Path Isolation

The logical isolation provided by VLANs at the access level ceases to exist at the first Layer 3 hop device (usually the distribution layer device), and we need to extend this isolation into the routed network domain. This is usually performed by defining a VRF on the first hop device and mapping a single or multiple VLANs to the defined VRF instance (see the configuration example in Figure 2).

Since the distribution block usually aggregates multiple access devices, it is important to maintain a good redundancy mechanism. The Supervisor Engine 2T supports all the first-hop redundancy protocols: Hot Standby Router Protocol (HSRP) and Gateway Load Balancing Protocol (GLBP) for both IPv4 and IPv6, and Virtual Router Redundancy Protocol (VRRP) for IPv4. The Supervisor Engine 2T also supports Virtual Switching System (VSS), which offers a simpler configuration that does not require a first-hop redundancy protocol.

Figure 2.      VLAN to VRF Mapping
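As an illustration of the mapping shown in Figure 2, a switched virtual interface (SVI) for the access VLAN can be placed into the VRF; the VLAN number and addressing below are hypothetical:

  ! Map VLAN 10 to VRF RED at the first Layer 3 hop
  interface Vlan10
   vrf forwarding RED
   ip address 10.1.10.1 255.255.255.0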

VRF-Lite

VRF-Lite is a simple, elegant solution for performing path isolation with the Supervisor Engine 2T. The concept was already present with the Cisco Catalyst 6500 Series Supervisor Engine 720, but the enhanced scalability, performance, and manageability of the new supervisor engine bring big improvements. VRF-Lite slices the RIB and the FIB into multiple partitions by adding a VPN identifier to each entry. Unlike MPLS/VPN, VRF-Lite does not take into account the transport of the VPN information to other switches or routers.

Two methods can be used to transport the VRF information throughout the network. If all the devices within the network support VRF-Lite, a hop-by-hop solution can be used. This method maintains traffic separation between switches by using an 802.1q trunk and associating each VLAN carried by the trunk with a VRF (see Figure 3).

Figure 3.      802.1q VRF Transportation

If not all devices in the path support VRF-Lite, the VRF traffic can be transported using Generic Routing Encapsulation (GRE) tunnels, with each VRF mapped to a specific tunnel interface. Depending on the topology, point-to-point or point-to-multipoint tunnels can be used (see Figure 4).

Figure 4.      GRE Tunnel VRF Transportation

Hop-by-Hop VRF Transport

When using a hop-by-hop method to propagate the VRF, the administrator usually creates a subinterface on the connection between neighboring switches and associates it with a specific VRF.

The hop-by-hop propagation is facilitated by the new logical interface (LIF) concept, which allows the configuration of the same VLAN identification on two different primary interfaces (for more information on LIF, refer to the appendix). In the topology illustrated in Figure 5, SUP2T-2 assigned VLAN10 to the interfaces connecting to SUP2T-1 and SUP2T-3 for VRF-1.

Figure 5.      Hop-by-Hop VRF Configuration Example
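A hop-by-hop link along the lines of Figure 5 might be configured as follows; the interface numbers, VLAN, and addressing are assumptions for illustration:

  ! One 802.1q subinterface per VRF on the link between neighboring switches
  interface TenGigabitEthernet1/1.10
   encapsulation dot1Q 10
   vrf forwarding VRF-1
   ip address 10.255.10.1 255.255.255.252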

Tunnel Transport

If any device within the network does not support the VRF feature, or there is no need to make a VRF available on certain devices, a GRE tunnel can be created to transport the VRF information. Thanks to the LIF technology (see the appendix), the Supervisor Engine 2T supports terminating multiple tunnels on a single loopback interface. By comparison, the Supervisor Engine 720 required that each tunnel terminate on a separate loopback interface (see Figure 6).

Figure 6.      Tunnel Configuration Example
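A sketch of a per-VRF GRE tunnel follows; the tunnel endpoints resolve in the global routing table, while the passenger traffic stays in the VRF (names and addresses are illustrative):

  ! GRE tunnel dedicated to VRF-1; source and destination live in the global table
  interface Tunnel100
   vrf forwarding VRF-1
   ip address 10.254.1.1 255.255.255.252
   tunnel source Loopback0
   tunnel destination 192.168.100.2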

Routing and VRF-Lite

In order to propagate route information within each VRF instance, the routing protocol needs to be instantiated by using either a separate routing process (Open Shortest Path First [OSPF], Intermediate System-to-Intermediate System [IS-IS]) or an address family (Enhanced Interior Gateway Routing Protocol [EIGRP] or Routing Information Protocol Version 2 [RIPv2]). This is often referred to as the "VRF awareness" of the routing protocol. All IPv4 routing protocols are VRF-aware, including static routes and policy-based routing (PBR). The Supervisor Engine 2T adds the capability to match on packet length as a condition within a PBR policy and will support setting the next hop as a policy decision (even if the next hop is not directly connected) in a later release.
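For example, a VRF-aware routing instance can be created either as a dedicated OSPF process or as an EIGRP address family; the process numbers, VRF names, and networks below are illustrative:

  ! OSPF: one process per VRF
  router ospf 10 vrf RED
   network 10.1.0.0 0.0.255.255 area 0
  !
  ! EIGRP: one address family per VRF within a single process
  router eigrp 100
   address-family ipv4 vrf BLUE autonomous-system 100
    network 10.2.0.0 0.0.255.255
   exit-address-family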

VRF-Lite Design Consideration

VRF-Lite transport is based on either IPv4 or IPv6 and does not require any additional protocol. The drawback of this technology is that adding a new VRF requires the creation of either a new tunnel interface or a new 802.1q subinterface. As such, VRF-Lite is manageable for networks with a small number of VPNs and a small number of hops in a VPN path.

The Supervisor Engine 2T does not support per-packet, dynamic-path maximum-transmission-unit (MTU) checking based on the IP destination address, but it propagates the "DF" bit to the outer header when packets are sent over a tunnel. If the original packet is equal to or smaller than the tunnel MTU, the original packet is encapsulated, and the resulting tunneled packet may be subsequently fragmented if it exceeds the MTU of the physical output interface. The fragmentation process is performed in software.

If the encapsulated traffic is fragmented at the output physical interface or within the tunnel path, the fragments will not be reassembled by the forwarding engine; rather, they will be punted to the control plane for reassembly.
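To keep tunneled packets below the physical MTU and avoid software fragmentation and reassembly, a common practice is to lower the tunnel payload MTU and clamp the TCP MSS; the values below are a typical sketch, not a mandated setting:

  interface Tunnel100
   ! Leave headroom for the GRE/IP encapsulation overhead
   ip mtu 1400
   ! Clamp TCP MSS so most TCP flows never trigger fragmentation
   ip tcp adjust-mss 1360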

Easy Virtual Network (EVN)

Easy Virtual Network (EVN) is a simplified LAN virtualization solution that enables network managers to provide service separation on a shared network infrastructure. It uses existing technology to increase the effectiveness of VRFs. Existing enterprise network architecture and protocols, as well as concepts such as trunk and access interface, are preserved in the EVN architecture. EVN builds on VRF-Lite concepts and capabilities and provides additional benefits:

   Increased enterprise scalability

   Simplified configuration and management

   Routing contexts for ease of operations in a VRF

   Better monitoring and troubleshooting

   Shared services between groups

Traffic Separation in EVN

Path isolation can be achieved by using a unique tag for each virtual network (VN). This tag is called the VNET tag. All traffic in a VN carries over the virtual network the tag value assigned by the network administrator. An EVN device in the virtual path uses the tags to provide traffic separation among different VNs. This removes the dependency on physical or logical interfaces to provide traffic separation.

Provisioning in EVN

EVN requires the Supervisor Engine 2T and a minimum of Cisco IOS Software Release 15.0(1)SY1. EVN introduces additional configuration concepts, summarized here:

   Basic VRF Provisioning (a configuration sketch follows Figure 7 below)

1.     Provision VRFs with the new "vnet tag <>" command.

2.     Associate user-facing (access) interfaces and trunk (core-facing) interfaces with the VRF using the new "vnet trunk" command.

3.     Define a routing instance for each VRF, using the same configuration as in VRF-Lite (Multi-VRF or MPLS VPNs on the access side).

   Advanced VRF Provisioning

1.     Customize attributes for each VRF (override inheritance).

2.     Filter VRFs on some links but allow them on others with the new "vrf list <>" command.

3.     Set up inter-VRF communication (shared services/extranet services) with the new "route-replicate from vrf <>" command.

Note:    When configuring EVN on a Cisco Catalyst 6500 Family networking device, we recommend that you assign a vnet tag in the range 2 to 1000. Beginning with Cisco IOS Release 15.1(1)SY on the Sup2T platform of the Cisco Catalyst 6500/6800 product lines, if the vlan internal allocation policy descending command is configured, the vnet tag range is 2 to 3900.

Figure 7.      Interface Configuration Comparison between VRF-Lite and EVN
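The following sketch contrasts the two approaches in the spirit of Figure 7. With VRF-Lite, each VRF needs its own 802.1q subinterface on every core-facing link; with EVN, a single vnet trunk carries every virtual network, with a subinterface created automatically per VNET tag. All names, tags, and addresses are illustrative:

  ! VRF-Lite: one subinterface per VRF on every core-facing link
  interface TenGigabitEthernet1/1.10
   encapsulation dot1Q 10
   vrf forwarding RED
   ip address 10.255.10.1 255.255.255.252
  interface TenGigabitEthernet1/1.20
   encapsulation dot1Q 20
   vrf forwarding BLUE
   ip address 10.255.20.1 255.255.255.252

  ! EVN: tag each VRF once, then carry all of them over a single trunk
  vrf definition RED
   vnet tag 100
   address-family ipv4
   exit-address-family
  !
  interface TenGigabitEthernet1/1
   vnet trunk
   ip address 10.255.1.1 255.255.255.252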

For more information on EVN, please refer to the white paper and configuration guides: http://www.cisco.com/en/US/products/ps11783/products_ios_protocol_option_home.html.

LISP Enhancements to Network Virtualization

LISP-based network virtualization enhancements are also supported. For more information, please refer to the following white paper: http://www.cisco.com/en/US/prod/collateral/iosswrel/ps6537/ps6554/ps6599/ps10800/solution_overview_c22-650815.html.

MPLS VPN

MPLS is an infrastructure technology used by service providers and large enterprises, allowing an easy integration of services such as VPN, traffic engineering (TE), quality of service (QoS), and fast convergence (Fast ReRoute [FRR]).

The MPLS terminology defines three types of nodes. The first type of node is the provider edge (PE), which sits at the border of the MPLS network and faces the customer edge (CE) on one side and the provider (P) node on the other. P nodes may also be referred to as label switching routers (LSRs) because they base their forwarding decisions on the MPLS label (see Figure 8) rather than the IP header.

A packet enters the MPLS network at the ingress PE and is label-switched up to the egress PE. The path followed by a specific packet is called a label switched path (LSP) and is set up by a control-plane protocol such as Label Distribution Protocol (LDP) or Resource Reservation Protocol (RSVP). For detailed information about MPLS, refer to the book MPLS and VPN Architectures by Ivan Pepelnjak and Jim Guichard.

The Supervisor Engine 2T provides all the features necessary to support MPLS switching at both the PE and P level, including Layer 2 services such as Ethernet over MPLS (EoMPLS) and virtual private LAN service (VPLS).

Figure 8.      MPLS Label

MPLS VPN Configuration

Like VRF-Lite, MPLS VPN deployment requires mapping the VLAN to a Layer 3 interface at the first-hop device, now referred to as a PE router; that Layer 3 interface belongs to a previously defined VRF. All Layer 3 interfaces in the core, including the core-facing interface on the PE, have MPLS forwarding enabled. Multiprotocol BGP (MP-BGP) needs to be enabled on the PE devices to exchange the VPN routes; all the PE devices within the network become BGP neighbors within a single autonomous system (iBGP). VPN traffic is then carried end-to-end across the network, maintaining logical isolation between the defined groups, and two MPLS labels are added to each frame: one to forward the packet within the MPLS network and a second to identify the packet's VPN.
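A minimal PE-side sketch combining these steps might look like the following; the AS number, neighbor addresses, and route targets are illustrative assumptions:

  ! VRF with import/export policy for MP-BGP
  vrf definition RED
   rd 100:1
   address-family ipv4
    route-target export 100:1
    route-target import 100:1
   exit-address-family
  !
  ! Core-facing interface with MPLS forwarding enabled
  interface TenGigabitEthernet2/1
   ip address 10.0.0.1 255.255.255.252
   mpls ip
  !
  ! MP-BGP session to the other PE, exchanging VPNv4 routes
  router bgp 65000
   neighbor 10.255.255.2 remote-as 65000
   neighbor 10.255.255.2 update-source Loopback0
   address-family vpnv4
    neighbor 10.255.255.2 activate
    neighbor 10.255.255.2 send-community extended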

MPLS VPN is a highly scalable solution that can take advantage of the Supervisor Engine 2T capabilities to provide up to 16K VRFs within each system; the Supervisor Engine 720 supported up to 512 VRFs without performance degradation and up to 1024 VRFs with performance degradation on the additional 512 VRFs (see Figure 9).

Figure 9.      MPLS/VPN Topology

Figure 10 illustrates a sample MPLS VPN configuration that supports both IPv4 and IPv6 in each VRF. SUP2T-1 is connected to the CEs through a Layer 3 interface, whereas SUP2T-3 is connected through a Layer 2 trunk.

Figure 10.    MPLS VPN Configuration Example

The Supervisor Engine 2T includes the ability to transport MPLS over a GRE tunnel. This feature allows network administrators to join multiple MPLS domains together over an IPv4 backbone or an IPv4-only service provider. With this feature, MPLS packets are encapsulated within a GRE tunnel, and the encapsulated packets traverse the IPv4 network. When GRE tunnel packets are received at the other side of the IPv4 network, the GRE tunnel header is removed, and the inner MPLS packet is forwarded to its final destination.

Support for MPLS over GRE and L3VPN over mGRE is added with Cisco IOS Software Release 15.0(1)SY. These operations are performed in hardware inside the PFC4 for both point-to-point and point-to-multipoint tunnels. Since this functionality requires an internal recirculation of the packet, performance at the tunnel endpoints is reduced.
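In practice, enabling MPLS forwarding on a GRE tunnel interface is enough to stitch two MPLS domains together over an IPv4 core; the addressing below is hypothetical:

  ! Labeled packets entering Tunnel200 are encapsulated in GRE/IPv4
  interface Tunnel200
   ip address 10.200.0.1 255.255.255.252
   mpls ip
   tunnel source Loopback0
   tunnel destination 172.16.2.2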

MLDP-Based MVPN

The MLDP-based MVPN feature provides extensions to Label Distribution Protocol (LDP) for the setup of point-to-multipoint (P2MP) and multipoint-to-multipoint (MP2MP) label switched paths (LSPs) for transport in the Multicast Virtual Private Network (MVPN) core network.

Benefits of MLDP-Based MVPN

   Enables the use of a single MPLS forwarding plane for both unicast and multicast traffic.

   Enables existing MPLS protection (for example, MPLS Traffic Engineering/Resource Reservation Protocol [TE/RSVP] link protection) and MPLS Operations, Administration, and Maintenance (OAM) mechanisms to be used for multicast traffic.

   Reduces operational complexity due to the elimination of the need for PIM in the MPLS core network.
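On the PE, MLDP-based MVPN is enabled per VRF by pointing the default MDT at MLDP rather than at a PIM-built tunnel; the root address and all values below are illustrative assumptions:

  vrf definition MCAST-RED
   rd 100:2
   address-family ipv4
    ! Root of the default MP2MP LSP that replaces the PIM-built default MDT
    mdt default mpls mldp 10.255.255.3
    ! Allow up to 64 data MDT P2MP LSPs for high-rate flows
    mdt data mpls mldp 64
    route-target export 100:2
    route-target import 100:2
   exit-address-family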

For more information, please refer to:
http://www.cisco.com/en/US/docs/ios-xml/ios/ipmulti_lsm/configuration/15-sy/imc_mldp-based_mvpn.html#GUID-C041E24A-EF77-40AE-B5A6-A1248215AE35.

MPLS VPN Quality of Service (QoS)

In MPLS VPN, a 3-bit field (EXP bits) within the label can be used to convey QoS information. This 3-bit field is a one-to-one match with the IP Precedence field of the IPv4 header, but if the IPv4 QoS is based on the Differentiated Services Code Point (DSCP), a translation is required. The Supervisor Engine 2T supports the following QoS modes:

   Uniform mode

   Short pipe mode

   Pipe mode

Uniform Mode

In uniform mode, all changes made to the Layer 3 QoS value (IP precedence, DSCP, MPLS EXP) are continuously maintained as the packet traverses the MPLS network. The IP packet's IP precedence value is copied onto the imposed label EXP value when the packet enters the MPLS network. Similarly, when the label is removed, the topmost label EXP value is copied onto the IP precedence value of the IP packet. In uniform mode, as the packet traverses the MPLS network, each operation that imposes an extra label (push operation) copies the EXP value of the already imposed label. Similarly, every time a label is swapped along the LSP (swap operation), the EXP value of the previous label is copied to the new label.

Short Pipe Mode

In short pipe mode, the egress LSR does not maintain a copy of the ingress labeled packet's EXP value. Instead, the egress LSR uses the IP QoS field (IP precedence, DSCP) to classify the IP packet for outbound queuing after the MPLS label is removed (MPLS2IP).

Pipe Mode

Pipe mode is similar to uniform mode except that when the last label is removed, the EXP value of the topmost label is not copied as the IP precedence value of the IP packet. This mode is used to make the QoS strategy within the MPLS network independent of the IP QoS policy. In pipe mode, the IP precedence of the underlying IP packet is unchanged. The IP packet’s IP precedence is not copied onto the MPLS EXP value when the packet enters the MPLS network. During label disposition, the egress LSR maintains a copy of the EXP value in memory as the QoS value of the packet. This QoS value is then used to define the QoS policies on the egress LSR.

The Supervisor Engine 720 already had support for short pipe and uniform mode; the Supervisor Engine 2T adds the support of pipe mode, which requires an extra recirculation of the packets inside the PFC4.
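Because pipe mode decouples MPLS QoS from IP QoS, the EXP value is typically set explicitly at label imposition with a Modular QoS CLI policy; the class names, marking values, and interface below are a hedged sketch, not a prescribed design:

  class-map match-any VOICE
   match dscp ef
  !
  ! Set the EXP of the imposed label(s) without touching the IP header
  policy-map SET-EXP
   class VOICE
    set mpls experimental imposition 5
  !
  ! Applied inbound on the CE-facing interface of the ingress PE
  interface GigabitEthernet3/1
   service-policy input SET-EXP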

MPLS Performance

The Supervisor Engine 2T is capable of performing all the MPLS operations in one pass. Those operations are:

   Label imposition (IP2MPLS)

   Label swap (MPLS2MPLS)

   Label disposition (MPLS2IP)

Each pass is performed at a rate of 60 million packets per second (mpps), irrespective of packet size.

The Supervisor Engine 2T is capable of pushing five labels in one pass. This can be useful when a combination of FRR and TE is used for VPN traffic. By comparison, the Supervisor Engine 720 can push only three labels in a single pass.

Likewise, the Supervisor Engine 2T can swap one label and push four labels in a single pass, while the Supervisor Engine 720 can swap one label and push two labels.

In the same way, the Supervisor Engine 2T can pop one non-null label, or pop one explicit-null plus one non-null label, in a single pass, whereas the Supervisor Engine 720 can pop two non-null labels.

MPLS Manageability

Within the PFC4, the forwarding engine is capable of maintaining separate statistics for IPv4, IPv6, and MPLS for all traffic switched through the system. As Figure 11 shows, these counters are separated from the interface counters that are accumulated by the port application-specific integrated circuit (ASIC) itself.

Figure 11.    Interface Statistics

The administrator also has the ability to probe the forwarding engine FIB and adjacency capacity (Figure 12):

Figure 12.    Forwarding Engine Statistics
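On the Catalyst 6500 family, this kind of capacity probe is available through the platform capacity commands; a sketch follows (the exact output fields vary by release):

  ! Summarizes FIB TCAM and adjacency table utilization in the forwarding engine
  Router# show platform hardware capacity forwarding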

Appendix: Logical Interface (LIF) and Bridge Domain (BD)

The Supervisor Engine 2T uses a new forwarding engine (EARL8) on its policy feature card (PFC). It introduces the LIF and BD concepts to scale both physical and logical interfaces beyond the 4K limit imposed by the Supervisor Engine 720 forwarding engine (EARL7). This functionality, called VLAN Reuse, makes deploying network virtualization technologies such as VRF-Lite easier; without it, VRF-Lite deployments became complicated and did not scale to a high number of VRFs. LIF enables a new per-port, per-VLAN interface type and helps scale Layer 3 interfaces up to 128K. With LIFs, Layer 3 interfaces no longer consume an internal VLAN; thanks to this separation, network administrators do not need to reserve VLANs for tunnels or Layer 3 interfaces, and all VLANs are available for Layer 2 purposes.

With LIFs, the scope of a VLAN is local to a physical port, whereas with EARL7 the scope of a VLAN was systemwide. This allows a Layer 3 interface to have VLANs as subinterfaces, with the VLANs being meaningful only to that port, and the same VLANs can be reused on another Layer 3 interface.

Bridge domains allow VLAN (broadcast domain) scaling inside the switch to 16K so that, for example, trunks carrying the same VLAN identification can be treated separately. BDs also enable the concept of virtual bridges, where a single system can support multiple bridges.