Cisco Unified Access (UA) and Bring Your Own Device (BYOD) CVD
Campus and Branch Network Design for BYOD

Table of Contents

Campus and Branch Network Design for BYOD

Campus Network Design

Centralized (Local Mode) Wireless Design

Security Group Tag Overview

ACL Complexity and Considerations

Security Group Tag

SGT Deployment Scenarios in this CVD

Campus Wired Design

Converged Access Campus Design

Campus Migration Path

Initial Overlay Model

Centralized/Local Mode Only

Hybrid Converged Access and Local Mode

Full Converged Access

Link Aggregation (LAG) with the CT5508 WLC

Wireless LAN Controller High Availability

Cisco Unified Wireless Network (CUWN) Controllers

1:1 Active/Standby Redundancy with AP and Client SSO

Configuring 1:1 Active/Standby Redundancy with AP and Client SSO

Converged Access Controllers

Catalyst Switch Stack Resiliency

Cisco CT5760 Wireless Controller 1:1 Stack Resiliency

Branch Wide Area Network Design

Branch WAN Infrastructure

Branch WAN Bandwidth Requirements

Encryption Requirement

Transport

Branch LAN Network Design

FlexConnect Wireless Design

Branch Wired Design

Converged Access Branch Design

Campus and Branch Network Design for BYOD

Revised: March 6, 2014

What’s New: A new section, Link Aggregation (LAG) with the CT5508 WLC, has been added. Also, the Wireless LAN Controller High Availability section has been rewritten to include 1:1 active/standby redundancy with AP and client SSO on CUWN platforms, Catalyst 3850 switch stack resiliency, and Cisco CT5760 wireless controller 1:1 stack resiliency.

Campus Network Design

As with the branch design, policy enforcement is effective only when a well-designed campus network infrastructure is in place. This section discusses the key high-level design elements of the campus LAN.

The two wireless LAN designs for the campus discussed within this design guide are the Centralized (Local Mode) and Converged Access designs.

Centralized (Local Mode) Wireless Design

Cisco Unified Wireless Network (CUWN) Local Mode designs refer to wireless LAN designs in which all data and control traffic is backhauled from the access point to a wireless controller before being terminated and placed on the Ethernet network. This type of design is also referred to as a centralized wireless design or centralized wireless overlay network. A typical recommended design within a large campus is to place all of the wireless controllers into a separate services module connected to the campus core.

The potential advantages of this design are:

  • Centralized access control of all wireless traffic from a single point within the campus network.
  • Less complexity for wireless roaming, since the wireless controllers can share a larger IP address pool for wireless clients.

The potential disadvantages of this design are:

  • Potential for scalability bottlenecks at the wireless controllers or the network infrastructure connecting to the wireless controllers. This is because all wireless traffic is backhauled to a central point within the campus network where the wireless controllers are deployed, before being terminated on the Ethernet network. Note however, that this may be alleviated by deploying additional centralized wireless controllers, by upgrading to newer platforms such as the Cisco CT5760 wireless controller, and/or by moving wireless controllers out to the building distribution modules.
  • Less visibility of wireless traffic, since the wireless traffic is encapsulated within a CAPWAP tunnel as it crosses the campus network infrastructure.

With a Local Mode design, access points that are connected to the access-layer switches within the building distribution modules are configured and controlled via one or more centralized Wireless LAN Controllers. In the case of this design guide, these controllers are a set of Cisco CT5508 wireless controllers—dedicated for the campus—since they provide greater scalability for supporting Local Mode access points than Cisco Flex 7500 wireless controllers. As mentioned previously, all data and control traffic is backhauled from the access points to wireless controllers before being terminated and placed onto the Ethernet network. Guest wireless traffic is backhauled across the campus infrastructure to a dedicated CT5508 guest anchor controller located on a DMZ segment within the campus.

In order to implement the BYOD use cases, two separate methods of providing differentiated access control for campuses utilizing a Local Mode wireless design are examined. These methods are:

  • Applying the appropriate dynamic ACL after the device is authenticated and authorized.
  • Applying the appropriate Security Group Tag (SGT) to the device after it is authenticated and authorized.

When implementing access control via dynamic ACLs, the particular form of dynamic ACL chosen for this design guide is the RADIUS-specified local ACL, otherwise known as a named ACL. These named ACLs must be configured on each CT5508 wireless controller. For example, a personal device which is granted full access to the network is statically assigned to the same VLAN as a personal device which is granted partial access. However, different named ACLs are applied to each device, granting different access to the network.
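As an illustrative sketch only (the ACL name, rule numbers, and addresses below are hypothetical, not taken from this design), a named ACL might be defined on the CT5508 via the AireOS CLI and then referenced by name from the ISE authorization profile using the Airespace-ACL-Name RADIUS attribute:

```
config acl create ACL_Partial_Access
config acl rule add ACL_Partial_Access 1
config acl rule action ACL_Partial_Access 1 permit
config acl rule destination address ACL_Partial_Access 1 10.230.1.0 255.255.255.0
config acl apply ACL_Partial_Access
```

Traffic not matching a permit rule is dropped by the ACL's implicit deny. Because ISE only returns the ACL name, the ACL itself must already exist, with identical rules, on every controller that may service the client.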

Figure 5-1 shows at a high level how a centralized (Local Mode) wireless BYOD design using named ACLs for access control is implemented in the campus.

Figure 5-1 High-Level View of the Centralized (Local Mode) Wireless Campus BYOD Design

 

When implementing access control via Security Group Association (SGA), various source and destination Security Group Tags (SGTs) must be configured within Cisco ISE. A personal device which is granted full access to the network is statically assigned to the same VLAN as a personal device which is granted partial access. However, different SGTs are applied to each device, thereby granting different access to the network.

Security Group Tag Overview

Throughout all versions of the BYOD CVD, policy enforcement has been accomplished through the use of access control lists (ACLs) and VLANs to restrict user traffic as appropriate upon successful authentication and subsequent authorization. The use of ACLs can become a daunting administrative burden when factoring in the number of devices to which they are applied and the continual maintenance required to securely control network access.

This design guide also uses a complementary technology known as TrustSec and the use of Security Group Tags (SGTs). Security Group Tags offer a streamlined alternative approach to enforcing role-based policies with minimal ACLs, or in some cases none at all, if TCP/UDP port-level granularity is not required.

Security Group Tags are used as an alternative to ACLs for campus wireless users and devices where the Cisco wireless controllers have been centrally deployed in a shared services block and configured for operation in local mode.

ACL Complexity and Considerations

To date, variations of named ACLs on wireless controllers, static and downloadable ACLs on various routing and switching platforms, as well as FlexACLs for FlexConnect wireless traffic in the branch, have been used as a means of enforcing traffic restrictions and policies. Statically configured ACLs have required either command line (CLI) access to each device via Telnet/SSH or a network management application such as Prime Infrastructure, while the Cisco Identity Services Engine (ISE) has been used to centrally define and push downloadable ACLs (DACLs) to switching platforms. Several considerations arise with this approach:

  • Unique ACLs may be required for different locations such as branches or regional facilities, where user permissions may need to be enforced for local resources such as printers, servers, etc.
  • The operational complexity of ACLs may be impacted by changes in business policies.
  • The risk of security breaches increases with potential device misconfigurations.
  • ACL definitions become more complex when policy enforcement is based on IP addresses.
  • Platform capabilities, such as processor memory, scalability, or TCAM resources may be impacted by complex ACLs.

Cisco TrustSec provides a scalable and centralized model for policy enforcement by implementing Cisco's Security Group Access architecture and the use of Security Group Tags.

Security Group Tag

Security Group Tags, or SGTs, allow a host's IP address to be abstracted through assignment to a closed user group represented by an administratively defined SGT. These tags are centrally created, managed, and administered by Cisco ISE. The Security Group Tag is a 16-bit value that is transmitted in the Cisco Meta Data field of a Layer 2 frame, as depicted in Figure 5-2.

Figure 5-2 Layer 2 SGT Frame Format

 

The Security Group Tags are defined by an administrator in Cisco ISE and are represented by an arbitrary name and a decimal value between 1 and 65,535, where 0 is reserved for “Unknown”. Security Group Tags allow an organization to create policies based on a user's or device's role in the network, providing a layer of abstraction in security policies based on a Security Group Tag as opposed to IP addresses in ACLs.

For a complete overview of the Security Group Access architecture and Security Group Tags, and how they are incorporated within the CVD, refer to Chapter 23, “BYOD Policy Enforcement Using Security Group Access.”

SGT Deployment Scenarios in this CVD

Security Group Tags will be used as a means of policy enforcement in both the Limited and Enhanced Access use cases, where a campus wireless user/device can be terminated either centrally at a wireless controller in Local Mode or at a Converged Access Catalyst 3850 switch, and is granted either full or partial access to the network. Different classes of servers will be defined to which those users may or may not have access. The CVD also defines a class that has access to the Internet only, through the use of an ACL on the wireless controller to deny access to all internal addresses. Converged Access products such as the Catalyst 3850 and CT5760 are addressed relative to SGT in this CVD, as IOS-XE 3.3.0 introduced support for Security Group Tags and Security Group ACL enforcement. More about SGT and the Enhanced Access use case is discussed in the ensuing sections covering the actual authorization policies.

Two deployment scenarios will be depicted within this CVD. The first makes use of Security Group ACLs (SGACLs) to enforce policies at the Nexus 7000 data center switches, as well as at a Catalyst 6500 VSS switch in the services block, the Catalyst 3850, and the CT5760 wireless controller. The second scenario enforces policies configured at a Cisco ASA configured as a Security Group Firewall (SGFW). SGACLs are role-based policies enforced on Catalyst switching platforms that specifically define whether traffic is permitted or denied based on source and destination SGT values. These deployment scenarios are not mutually exclusive and can be used together. The first scenario can be seen in Figure 5-3 and the second scenario in Figure 5-4.
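As a hedged sketch of what SGACL enforcement looks like on an IOS/IOS-XE enforcement point (the tag values, role names, and ACL name below are hypothetical; in this CVD the role-based policies themselves are defined centrally in ISE and pushed to the enforcement devices):

```
! Enable role-based (SGACL) enforcement on the switch
cts role-based enforcement
!
! A role-based ACL. Note that SGACL entries match only on
! protocol/ports, never on IP addresses; the source and
! destination are implied by the SGT/DGT pairing below.
ip access-list role-based Permit_Web_Only
 permit tcp dst eq 443
 permit tcp dst eq 80
 deny ip
!
! Apply the SGACL to traffic from SGT 10 (e.g., a hypothetical
! BYOD_Partial group) to SGT 20 (e.g., Internal_Servers)
cts role-based permissions from 10 to 20 Permit_Web_Only
```

Defining the same matrix cell (source SGT 10, destination SGT 20) in ISE instead keeps the policy in one place and avoids per-switch configuration drift.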

Figure 5-3 Policy Enforcement Using SGACL

 

Figure 5-4 Policy Enforcement Using SG-FW

 

Campus Wired Design

Figure 5-5 shows the wired design for a campus which does not implement Converged Access Catalyst 3850 Series switches. In other words, this is the wired design for a campus which implements switches such as the Catalyst 3750-X and 4500 Series at the access layer of building distribution modules, along with a centralized (Local Mode) wireless design.

Figure 5-5 High-Level View of Non-Converged Access Wired Campus Design

 

This design guide assumes Catalyst switches deployed as Layer 2 devices within the access-layer of the campus building modules. Wired devices authenticate using 802.1X against the ISE server located within the campus data center. For this design, wired devices are all statically assigned to a single VLAN, the Employee VLAN. Differentiated access control for wired devices is provided by different RADIUS downloadable ACLs applied to the access-layer switch, which override a pre-configured static ACL on each Catalyst switch port.
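The wired access-layer behavior described above might be sketched as follows. The interface, VLAN, and ACL names are hypothetical, and the global AAA/RADIUS configuration pointing at ISE is assumed to already be in place. The static port ACL permits only what is needed before authentication (DHCP and DNS here); upon successful authorization, the DACL downloaded from ISE overrides it:

```
! Static pre-authentication ACL applied to every access port
ip access-list extended PRE-AUTH-ACL
 permit udp any any eq bootps
 permit udp any any eq domain
 deny ip any any
!
interface GigabitEthernet1/0/10
 description Wired BYOD access port (hypothetical)
 switchport access vlan 10
 switchport mode access
 ip access-group PRE-AUTH-ACL in
 authentication port-control auto
 dot1x pae authenticator
```

The DACL returned by ISE is applied per-session on top of the static ACL, so different devices on identically configured ports can receive different network access.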

Converged Access Campus Design

The Converged Access campus BYOD design highlights multiple Catalyst 3850 Series switches or switch stacks deployed at the access layer of each building distribution module of a large sized campus. Switch stacks form Switch Peer Groups (SPGs) in which all switches contain the Mobility Agent (MA) function. Roaming within a SPG is handled through a full mesh of mobility tunnels between MAs within the SPG. Multiple SPGs exist within the large sized campus.

This design guide will assume Catalyst 3850 Series switches deployed as Layer 2 access switches within the campus location. Layer 3 connectivity within each campus building distribution module is provided by Catalyst 6500 distribution switches. In keeping with campus design best practices for minimizing spanning-tree issues, VLANs are assumed not to span multiple Catalyst 3850 Series switch stacks deployed in separate wiring closets. Future design guidance may address Catalyst 3850 Series switches deployed as Layer 3 switches within the branch location.

Cisco CT5760 wireless controllers deployed within a centralized services module within the campus contain the Mobility Controller (MC) function. Multiple SPGs connecting to a single MC form a Mobility Sub-Domain. Multiple Mobility Sub-Domains exist within the large sized campus. Roaming between SPGs within a Mobility Sub-Domain is done through the Cisco CT5760 wireless controller. The CT5760 wireless controllers also manage Radio Resource Management (RRM), wIPS, etc.

Multiple Cisco CT5760 wireless controllers form a Mobility Group. Hence a Mobility Group also consists of multiple Mobility Sub-Domains. Roaming between Mobility Sub-domains is done through the Cisco CT5760 wireless controllers within the Mobility Group. The design within this design guide assumes a single Mobility Group and hence a single Mobility Domain extends across and is entirely contained within the large campus.
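The MA/MC/SPG hierarchy described above might be configured along these lines. The IP addresses and SPG name are hypothetical, and the syntax shown is from IOS-XE 3.3.0-era converged access; verify it against the release in use:

```
! On the CT5760 (Mobility Controller): define a Switch Peer Group
! and add the Catalyst 3850 Mobility Agents as members
wireless mobility controller peer-group SPG-BLDG1
wireless mobility controller peer-group SPG-BLDG1 member ip 10.10.1.2
wireless mobility controller peer-group SPG-BLDG1 member ip 10.10.1.3

! On each Catalyst 3850 (Mobility Agent): point at the MC
wireless mobility controller ip 10.10.0.5
```

Once the MAs join, roams between the two 3850s in SPG-BLDG1 are handled over the full mesh of MA-to-MA tunnels without involving the MC.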


Note: Cisco CT5508 wireless controllers can also implement the Mobility Controller (MC) function within the Converged Access campus design. However, the CT5508, being an older platform, has less overall throughput than the newer CT5760 platform. This version of the design guide only discusses the CT5760 wireless controller functioning as the Mobility Controller within a Converged Access campus deployment. Future versions of this design guide may include the CT5508 wireless controller deployed in this manner.


Access points within the campus building distribution modules are configured and controlled via the wireless controller Mobility Agent (MA) functionality integrated within the Catalyst 3850 Series switch. Guest wireless traffic is still backhauled to a dedicated CT5508 guest anchor controller located on a DMZ segment within the campus. Provisioning traffic (i.e., traffic from devices attempting to on-board with ISE) is terminated locally on the Catalyst 3850 Series switch with the Converged Access campus design. When implementing a dual-SSID design, provisioning traffic is terminated on a separate VLAN. All on-boarded devices terminate on a single VLAN with this design.


Note: This design guide only discusses wireless guest access. Wired guest access may be discussed within future revisions of this design guide.


The potential advantages of this design are as follows:

  • Increased scalability of the wireless deployment, since wireless traffic is terminated on every access-layer Catalyst 3850 Series switch within the campus, instead of being backhauled to one or more centralized wireless controllers.
  • Increased visibility of the wireless traffic, since wireless traffic is terminated on every access-layer Catalyst 3850 Series switch within the campus.

The potential disadvantages of this design are as follows:

  • Less centralized access control of wireless traffic from a single point within the campus network. Access control is spread out to each Catalyst 3850 Series access switch. Note however, that with Converged Access designs, traffic from a particular WLAN can still be backhauled to a centralized CT5760 wireless controller and switched centrally. This is touched upon in Campus Migration Path.
  • Increased potential for more complexity for wireless roaming, since each Catalyst 3850 Series switch implements the Mobility Agent (MA) functionality, effectively functioning as a wireless controller.

In order to implement the BYOD use cases, the method adopted in this design guide for a campus utilizing a Converged Access design is to apply the appropriate named ACL after the device is authenticated and authorized. This applies to both wired and wireless devices. These named ACLs, which must be configured on each Catalyst 3850 Series switch, provide differentiated access control. For example, a personal device which is granted full access to the network is statically assigned to the same VLAN as a personal device which is granted partial access. However, different named ACLs are applied to each device, granting different access to the network.

Figure 5-6 shows at a high level a simplified Converged Access BYOD design with a single Catalyst 3850 Series switch functioning as a Mobility Agent (MA) and a single CT5760 wireless controller functioning as a Mobility Controller (MC) in the campus.

Figure 5-6 High-Level View of the Converged Access Campus BYOD Design

 


Note: The Converged Access campus BYOD design may also be referred to as the External Controller Large Campus BYOD design within this document. Future versions of this design guide may address small campus and/or large branch Converged Access designs, in which multiple Catalyst 3850 switch stacks implement both the Mobility Controller (MC) and Mobility Agent (MA) functionality. In such a design, referred to as the Integrated Controller Small Campus / Large Branch design, no external CT5760 wireless controllers are needed.


Note that in the case of this design guide, on-boarded wired devices are also statically assigned to the same VLAN as wireless devices. Hence on-boarded wired and wireless devices share the same VLAN and therefore the same IP subnet addressing space. It is recognized that customers may implement separate subnets for wired and wireless devices due to issues such as additional security compliance requirements for wireless devices. This is not addressed within this version of the design guidance. Dynamically assigned named ACLs provide differentiated network access for wired devices.

Assuming all campus switches implement the same set of ACLs for access control, RADIUS downloadable ACLs may alternatively be deployed within the campus. The benefit of implementing a downloadable ACL within the campus is that changes to the access control entries only have to be configured once within the Cisco ISE server, versus having to touch all campus Catalyst 3850 Series switches. However, this option also requires separate ISE policy rules for campus and branch Converged Access deployments, assuming named ACLs are still deployed within branch locations.

Implementing downloadable ACLs within branch locations presents scaling issues if access to local branch servers is required within the ACL. In such scenarios, each branch would require a separate downloadable ACL and, therefore, a separate Cisco ISE policy rule to identify that ACL for that branch. This becomes administratively un-scalable as the number of deployed branches increases.

Hence this design guide only discusses the use of named ACLs for access control of on-boarded devices within both the Converged Access branch and campus designs. Because named ACLs are used for both designs, the same Cisco ISE policy rules can be used for both Converged Access campus and branch deployments. One set of policy rules can therefore be used for Converged Access designs regardless of where the device is located. This reduces the administrative complexity of the Cisco ISE policy, albeit at the expense of having to configure and maintain ACLs at each campus Catalyst 3850 Series switch.


Note: Management applications such as Cisco Prime Infrastructure may ease the burden of ACL administration by providing a point of central configuration and deployment of named ACLs for the Converged Access BYOD branch and campus designs.


Campus Migration Path

For large campus designs, a migration path from a traditional CUWN centralized (Local Mode) wireless overlay network design to a Converged Access design is necessary. It is considered infeasible for a customer to simply “flash cut” a large campus over to a Converged Access design. There are many potential migration paths from a traditional CUWN centralized design to a Converged Access design. This section discusses one possible migration path. The steps of the migration path from the initial overlay model are as follows:

1. Centralized/Local Mode Only

2. Hybrid Converged Access and Local Mode

3. Full Converged Access

Each is discussed in the following sections.

Initial Overlay Model

Figure 5-7 shows the logical components for the initial state in the migration path, the Initial Overlay Model.

Figure 5-7 Initial State in the Migration Path—Initial Overlay Model

 

The initial overlay model consists of access points, operating in Local Mode, connected to Catalyst 3750-X series switches at the access-layer of individual building modules within the campus. The access points are controlled by a CT5508 wireless controller located within a services module within the campus. CAPWAP tunnels extend from individual access points to the CT5508 wireless controller. A second CT5508 wireless controller on a DMZ segment within the Internet edge module functions as a dedicated wireless guest anchor controller. A mobility tunnel extends from the campus (foreign) CT5508 wireless controller to the guest (anchor) CT5508 wireless controller.

This is the campus BYOD design which is discussed in Centralized (Local Mode) Wireless Design.

Centralized/Local Mode Only

Figure 5-8 shows the logical components for the first step in the migration path—Centralized/Local Mode Only.

Figure 5-8 First Step in Migration Path—Centralized/Local Mode Only

 


Note: The term “Local Mode” is used with CUWN controllers, while the term “Centralized Mode” is used with Converged Access controllers within Cisco documentation. Both refer to the same model, with a centralized data and control plane for wireless traffic. In other words, all traffic is backhauled to the wireless controller before being placed on the Ethernet network.


In this step of the migration path, the customer simply adds more wireless controller capacity. Since the CT5760 is a newer platform and offers higher aggregate throughput, the customer may decide to begin transitioning to this platform by adding CT5760 controllers to the existing campus wireless overlay design. The CT5760 supports up to 1,000 access points and up to 12,000 clients, with up to 60 Gbps of throughput per wireless controller.


Note: The wireless capabilities of the CT5760 are not identical to those of Cisco Unified Wireless Network controllers running software version 7.6. The network administrator must ensure that all the necessary features exist in the CT5760 before migrating access points from existing CT5508 wireless controllers to CT5760 wireless controllers. For a list of supported features, refer to the CT5760 Controller Deployment Guide at: http://www.cisco.com/en/US/docs/wireless/technology/5760_deploy/CT5760_Controller_Deployment_Guide.html.


At this point, it is assumed that the access-layer switches within the building module wiring closets have not reached their replacement cycle. Hence the access points, operating in local mode, are still connected to Catalyst 3750-X series switches at the access-layer of individual building modules within the campus. The access points are controlled by either the CT5508 or the CT5760 wireless controller located within a services module within the campus. Both are members of the same Mobility Group. CAPWAP tunnels extend from individual access points to either the CT5508 or CT5760 wireless controller. A mobility tunnel extends between the CT5508 and CT5760.

A logical choice for migration to the CT5760 wireless controller would initially be at the building level. In other words, one building of a campus could be migrated—potentially floor by floor—from an existing CT5508 to a CT5760 wireless controller.

In order to maintain mobility across the campus, the existing CT5508 wireless controllers need to be upgraded to CUWN software version 7.5 or higher. CUWN software versions 7.5 and higher support the new mobility tunneling method, which uses CAPWAP within UDP ports 16666 and 16667, instead of Ethernet-over-IP. This is compatible with IOS XE 3.2.0 and higher software running on CT5760 wireless controllers. Note that this includes upgrading the CT5508 wireless controller dedicated for wireless guest access. Mobility tunnels extend from the foreign CT5508 and CT5760 wireless controllers to the anchor CT5508 wireless controller.
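On an AireOS CT5508 running 7.5 or higher, converting to the new mobility architecture might look like the following sketch. This change requires a controller reboot and is therefore disruptive; verify the exact behavior for the specific release in use:

```
config mobility new-architecture enable
save config
reset system
```

After the controller reloads, `show mobility summary` should reflect the new mobility state, and CT5760 peers can then be added to the mobility group so the CAPWAP-based mobility tunnels (UDP 16666/16667) can form.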


Note: Centralized management of CT5760 wireless LAN controllers and Catalyst 3850 Series switches running IOS XE software 3.3.0SE and higher currently requires Cisco Prime Infrastructure 2.0.1. Centralized management of CUWN wireless LAN controllers running software version 7.6 currently requires Cisco Prime Infrastructure 1.4.1. In other words, two instances of Cisco Prime Infrastructure may be required currently if the customer wishes to support a model in which both CUWN and Converged Access infrastructure is deployed within the network and centralized management via Cisco Prime Infrastructure is a requirement.


Hybrid Converged Access and Local Mode

Figure 5-9 shows the logical components for the second step in the migration path—a Hybrid Converged Access and Local Mode model.

Figure 5-9 Second Step in Migration Path—Hybrid Converged Access and Local Mode

 

At this point in the migration path, it is assumed that the access-layer switches within the building module wiring closets have begun to reach their replacement cycle. In this scenario, the customer has chosen to deploy Catalyst 3850 Series switches at the access-layer of their building modules and begin migrating to a converged access model. Again, a logical choice for migration would be at the building level. In other words, one building of a campus would be migrated—potentially floor by floor—from access points operating in centralized mode connected to a Catalyst 3750-X Series switch and controlled by the CT5760, to access points operating in converged mode connected to and controlled by a Catalyst 3850 Series switch.

With this design, the Catalyst 3850 Series switches function as the Mobility Agent (MA), while the CT5760 wireless controller functions as the Mobility Controller (MC) and possibly the Mobility Oracle (MO). However, during the migration of floors, the CT5760 wireless controller still has to function in centralized mode as well, for access points still connected to Catalyst 3750-X Series switches. Hence the design is a “hybrid” of centralized and converged access designs.

CAPWAP tunnels extend from individual access points connected to Catalyst 3750-X Series switches to either the CT5508 or CT5760 wireless controller. CAPWAP tunnels also extend from individual access points connected to Catalyst 3850 Series switches to those Catalyst 3850 Series switches. Mobility tunnels extend from the MA within the Catalyst 3850 Series switches to the MC within the CT5760 wireless controller. Mobility tunnels also extend between MAs within the Catalyst 3850 switches which are part of a Switch Peer Group (SPG). SPGs offload mobility traffic for groups of switches in which a large amount of mobility is expected. When roaming between access points connected to Catalyst 3850 Series switches which are part of the same SPG, the MC located within the CT5760 is not involved in the roam. An SPG may extend across part of a floor within a building, the entire floor, or in some cases multiple floors. A mobility tunnel (using the new mobility architecture) also extends between the CT5508 and CT5760. Finally, mobility tunnels (using the new mobility architecture) extend from the foreign CT5508 and CT5760 back to the anchor CT5508 for wireless guest access.

Full Converged Access

Figure 5-10 shows the logical components for the third step in the migration path—the Full Converged Access model.

Figure 5-10 Third Step in Migration Path—Full Converged Access

 

This design assumes the customer has retired existing CT5508 wireless controllers operating in Local Mode and moved to a converged access design with CT5760 wireless controllers. At this point in the migration path, it is assumed that the access-layer switches within the building module wiring closets have completed their replacement cycle. In this scenario, the customer has chosen to deploy only Catalyst 3850 Series switches at the access-layer of their building modules and completely migrate to a converged access model.


Note: We realize that some customers may never fully migrate to a full Converged Access model, while others may take years to reach a full Converged Access deployment.


With this design, the Catalyst 3850 Series switches function as the Mobility Agent (MA), while the CT5760 wireless controller functions as the Mobility Controller (MC) and possibly the Mobility Oracle (MO).

CAPWAP tunnels extend from individual access points which are connected to Catalyst 3850 Series switches to the Catalyst 3850 Series switches. Mobility tunnels extend from the MA within the Catalyst 3850 Series switches to the MC within the CT5760 wireless controller. Mobility tunnels extend between MAs within the Catalyst 3850 Switches which are part of a Switch Peer Group (SPG). A mobility tunnel also extends between the two CT5760 wireless controllers. Finally, mobility tunnels (using the new mobility architecture) extend from the foreign CT5760 wireless controllers back to the anchor CT5508 for wireless guest access.


Note: Roaming between sub-domains (i.e., roaming between two CT5760 wireless controllers functioning as MCs) has not been validated with this version of the design guide.


Link Aggregation (LAG) with the CT5508 WLC

Cisco CT5508 wireless controllers have eight Gigabit Ethernet distribution system ports, for a maximum platform throughput of approximately 8 Gbps. Typically one or more WLANs—which correspond to SSIDs—are mapped to a dynamic interface, which is then mapped to a physical distribution system port. In a campus centralized (local mode) deployment, wireless traffic is backhauled across the campus network infrastructure and terminated on the Gigabit Ethernet distribution ports of the CT5508 WLC. With the use of a single physical distribution system port per WLAN, the throughput of each WLAN is limited to the throughput of the 1 Gbps physical distribution system port. Hence an alternative is to deploy link aggregation (LAG) across the distribution system ports, bundling them into a single high speed interface, as shown in Figure 5-11.

Figure 5-11 Link Aggregation (LAG) Between the CT5508 WLC and Attached Catalyst 6500 VSS Pair

 

Cisco CT5508 wireless controllers support the ability to configure all eight Gigabit Ethernet distribution system ports into a single LAG group. This is the load-balancing mechanism validated within this design guide for CT5508 WLCs deployed as centralized (local mode) controllers within the campus.
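Enabling LAG on the CT5508 itself is a one-time change that requires a controller reboot; as a sketch via the AireOS CLI:

```
config lag enable
save config
reset system
```

After the reload, `show lag summary` can be used to confirm the LAG state. Note that the CT5508 does not negotiate the bundle via LACP or PAgP, so the attached switch EtherChannel must use channel-group mode on, as in the Catalyst 6500 VSS configuration shown later in this section.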


Note: This discussion of LAG does not include CT5508 wireless LAN controllers deployed in a 1:1 active/standby redundancy pair at this time.


An example of the configuration of LAG on a CT5508 wireless controller is shown in Figure 5-12.

Figure 5-12 Configuration of Link Aggregation (LAG) on a CT5508 Wireless LAN Controller

 

When using LAG, the switch or switches (in the case of a VSS group) to which the CT5508 wireless controller is attached must be configured for EtherChannel support. The following shows an example configuration on a Catalyst 6500 Series VSS pair in which the eight GigabitEthernet interfaces are split across both switches within the VSS pair.

!
vlan 2
name BYOD-Employee
!
vlan 3
name BYOD-Provisioning
!
vlan 45
name ua28-wlc5508-3-mgmt
!
vlan 450
name ua28-5508-3-users
!
interface Port-channel45
description LAG to ua28-wlc5508-3
switchport
switchport trunk allowed vlan 2,3,45,450
switchport mode trunk
load-interval 30
!
interface GigabitEthernet1/2/45
description ua28-wlc5508-3
switchport
switchport trunk allowed vlan 2,3,45,450
switchport mode trunk
load-interval 30
channel-group 45 mode on
!
interface GigabitEthernet1/2/46
description ua28-wlc5508-3
switchport
switchport trunk allowed vlan 2,3,45,450
switchport mode trunk
load-interval 30
channel-group 45 mode on
!
interface GigabitEthernet1/2/47
description ua28-wlc5508-3
switchport
switchport trunk allowed vlan 2,3,45,450
switchport mode trunk
load-interval 30
channel-group 45 mode on
!
interface GigabitEthernet1/2/48
description ua28-wlc5508-3
switchport
switchport trunk allowed vlan 2,3,45,450
switchport mode trunk
load-interval 30
channel-group 45 mode on
!
interface GigabitEthernet2/2/45
description ua28-wlc5508-3
switchport
switchport trunk allowed vlan 2,3,45,450
switchport mode trunk
load-interval 30
channel-group 45 mode on
!
interface GigabitEthernet2/2/46
description ua28-wlc5508-3
switchport
switchport trunk allowed vlan 2,3,45,450
switchport mode trunk
load-interval 30
channel-group 45 mode on
!
interface GigabitEthernet2/2/47
description ua28-wlc5508-3
switchport
switchport trunk allowed vlan 2,3,45,450
switchport mode trunk
load-interval 30
channel-group 45 mode on
!
interface GigabitEthernet2/2/48
description ua28-wlc5508-3
switchport
switchport trunk allowed vlan 2,3,45,450
switchport mode trunk
load-interval 30
channel-group 45 mode on
!
interface Vlan2
description BYOD-Employee VLAN for Functional Testing
ip address 1.231.2.1 255.255.255.0
ip helper-address 1.230.1.61
ip helper-address 1.225.42.15
ip helper-address 1.225.49.15
!
interface Vlan3
description BYOD-Provisioning VLAN for Functional Testing
ip address 1.231.3.1 255.255.255.0
ip helper-address 1.230.1.61
ip helper-address 1.225.42.15
!
interface Vlan45
description AP-Manager IP for ua28-wlc5508-3
ip address 1.225.45.1 255.255.255.0
!
interface Vlan450
ip address 1.228.128.1 255.255.192.0
ip helper-address 1.230.1.61
!
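Once both the controller and the switches are configured, the state of the bundle can be verified from the Catalyst side. As a sketch, using the port-channel number from the example above:

show etherchannel 45 summary
show interfaces port-channel45

All eight member GigabitEthernet interfaces should appear as bundled in the port-channel (flag P) in the EtherChannel summary output.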

Wireless LAN Controller High Availability

High availability of the wireless infrastructure is becoming increasingly important as more devices with critical functions move to the wireless medium. Real-time audio, video, and text communication rely on the corporate wireless network, and an expectation of zero downtime is becoming the norm. Outages of the wireless network are now just as disruptive as outages of the wired network.

Implementing high availability within the wireless infrastructure involves multiple components and functionality deployed throughout the overall network infrastructure, which itself must be designed for high availability. This section discusses wireless LAN controller platform level high availability specific to the implementation of wireless controller platforms within the Cisco BYOD design. Platform-level (box-to-box) redundancy refers to the ability to maintain wireless service when connectivity to one or more physical wireless LAN controller platforms within a site is lost. Figure 5-13 shows the WLC platforms within the Cisco BYOD design.

Figure 5-13 Wireless LAN Controller Platform High-Availability

 

The platforms highlighted in Figure 5-13 are as follows:

  • Cisco CT5508 wireless LAN controllers (Circle 1) servicing campus APs operating in centralized (local) mode and/or functioning as dedicated guest controllers.
  • Cisco Flex 7510 wireless LAN controllers (Circle 2) servicing branch APs operating in FlexConnect mode.
  • Cisco CT5760 wireless LAN controllers (Circle 3) servicing campus APs operating in centralized mode and/or functioning as Mobility Controllers (MCs) in a campus Converged Access design.
  • Catalyst 3850 Series switches (Circle 4) functioning as Mobility Agents (MAs) servicing APs in a campus converged access design and/or functioning as Mobility Agents (MAs) and Mobility Controllers (MCs) in a branch Converged Access design.

Table 5-1 shows the methods of providing platform level redundancy of Cisco WLC platforms discussed within this design guide.

 

Table 5-1 Wireless Controller Platform Redundancy

  • Cisco CT5508 Wireless LAN Controller: 1:1 Active/Standby Redundancy with AP and Client SSO
  • Cisco Flex 7510 Wireless LAN Controller: 1:1 Active/Standby Redundancy with AP and Client SSO
  • Cisco CT5760 Wireless LAN Controller: 1:1 Stack Resiliency (Cisco IOS Software SSO)
  • Cisco Catalyst 3850 Series Switch Stack: Stack Resiliency (Cisco IOS Software SSO)

The following sections discuss the deployment of platform-level high availability on specific Cisco wireless LAN controllers as they are deployed within the Cisco BYOD design.

Cisco Unified Wireless Network (CUWN) Controllers

This section discusses platform high availability mechanisms for the following CUWN wireless LAN controller platforms:

  • Cisco CT5508 WLC platforms deployed within the campus of the Cisco BYOD design servicing campus APs operating in centralized (local) mode.
  • Cisco Flex 7510 WLC platforms deployed within the campus of the Cisco BYOD design servicing remote branch APs operating in FlexConnect mode.

CUWN platforms support two forms of platform redundancy:

  • 1:1 active/standby redundancy with AP and client SSO
  • The older form of high availability known as N+1 redundancy

This design guide has not validated N+1 redundancy as a means of achieving platform high availability. Instead, it utilizes 1:1 active/standby redundancy with AP and client SSO. The N+1 High Availability Deployment Guide provides guidance around N+1 redundancy: http://www.cisco.com/en/US/docs/wireless/technology/hi_avail/N1_High_Availability_Deployment_Guide.pdf

1:1 Active/Standby Redundancy with AP and Client SSO

In CUWN software release 7.3, the ability to have a 1:1 active/standby pair of wireless LAN controllers with AP stateful switchover (SSO) was introduced. This capability allows access points to perform a rapid stateful switchover to a hot-standby wireless LAN controller—with an identical configuration to the primary WLC—in the event of a failure of the active WLC. All unique configuration parameters and groupings specific to individual APs and AP groups are retained. An example of retained configuration is FlexConnect grouping, which applies different restrictions and settings to sub-sets of APs based on branch location.

An example of 1:1 active/standby redundancy (using a single physical distribution system port) with AP SSO is shown in Figure 5-14.

Figure 5-14 Example of 1:1 Active/Standby Redundancy with AP SSO

 

In CUWN software release 7.5, the ability to have a 1:1 active/standby pair of wireless controllers was extended to allow both APs and wireless clients to perform a rapid stateful switchover. As with the previous version of SSO, unique configuration parameters and groupings specific to individual APs and AP groups are retained. With CUWN software releases 7.5 and higher, wireless clients in the RUN state also remain associated when a failover occurs.

An example 1:1 active/standby redundancy with AP and client SSO (using a single physical distribution system port) is shown in Figure 5-15.

Figure 5-15 Example of 1:1 Active/Standby Redundancy with AP and Client SSO

 


Note: In Figure 5-15, the redundant ports are connected to the same switch as the distribution system ports. However, the redundant ports can be connected via a completely different switch (or switches), depending on the deployment.


1:1 active/standby redundancy with AP and Client SSO is supported on the following CUWN WLC platforms:

  • Cisco 5500 Series
  • Cisco Flex 7500 Series
  • Cisco 8500 Series
  • Cisco WiSM2

Note: 1:1 active/standby redundancy with AP SSO is not supported on the virtual wireless LAN controller (vWLC) platform or the Cisco 2500 Series wireless LAN controller platform. 1:1 active/standby redundancy with AP and client SSO is also currently not supported by the new (hierarchical) mobility architecture. Hence 1:1 active/standby with AP and client SSO cannot be supported in the hybrid design discussed in Hybrid Converged Access and Local Mode.


With 1:1 active/standby redundancy, the active and standby WLCs use a dedicated redundant port (RP). In CUWN software releases 7.3 and 7.4, it is highly recommended that the redundant ports (RPs) of both WLCs be directly connected by an Ethernet cable. With CUWN software releases 7.5 and higher, this requirement has been removed, and the redundant ports (RPs) can instead be connected via one or more Layer 2 switches. The following are the requirements for connectivity between the WLCs in such a 1:1 HA remote configuration:

  • Redundant port (RP) round-trip time (RTT) must be less than 80 milliseconds when the keepalive timer is left at its default of 100 milliseconds, or less than 80% of the keepalive timer when it is configured within the range of 100-400 milliseconds.
  • Failure detection time with the default keepalive timer is (3 × 100 ms) + 60 ms + ~12 ms of jitter, or approximately 400 milliseconds.
  • Bandwidth between redundant ports (RPs) must be 60 Mbps or higher
  • MTU: 1500 bytes or larger

Note: Because the direct connectivity requirement has been removed, 1:1 active/standby redundancy with AP and client SSO could be used for platform (box-to-box) redundancy and/or for site-to-site redundancy, since the active and standby controllers no longer need to be in physical proximity to each other. Site-to-site redundancy, in which the 1:1 active/standby CUWN wireless controllers are located in separate data centers, has not been validated as part of the Cisco BYOD design guide.


UDP keepalive messages are sent every 100 milliseconds by default from the standby WLC to the active WLC via the redundant port (RP). Configuration, operational data synchronization, and role negotiation information are also synchronized between the active and standby WLCs via the redundant port (RP). The IP address of the RP is not user-configurable. The first two octets are always “169.254”. The last two octets are the same as the redundancy management interface (RMI).

The RMI is an additional interface which must be configured to be on the same IP subnet as the management interface. The active WLC checks to see if the gateway is available by sending an ICMP ping on the management interface every second. Likewise, the standby WLC checks to see if the gateway is available by sending an ICMP ping on the RMI every second. The standby WLC will also check the health of the active WLC via the RMI if the active WLC stops responding to keepalive messages sent via the redundant port (RP).

Failovers are triggered by loss of keepalive messages as well as network faults. Hence the rate at which UDP keepalive messages are sent has a direct influence on how fast failover occurs. The loss of three UDP keepalive messages (along with three ICMP packets which are immediately sent across the RMI when packet loss is detected across the RP) causes the standby controller to assume the active role. The UDP keepalive interval can be configured from 100 to 400 milliseconds, in 50 millisecond increments.

CUWN WLCs implement a 1:1 active/standby model for both the control plane and the data plane. Only the active WLC is up from a control and data plane perspective until a failure occurs. APs do not go into the DISCOVERY state and therefore do not need to establish a new CAPWAP connection or download new configuration before accepting wireless client associations. When the previous active WLC recovers, it will negotiate with the current active WLC to become the standby WLC. In other words, there is no preempt functionality.

Within the BYOD campus local mode design, with CUWN software releases 7.3 and 7.4, all wireless clients connected to APs managed by a local Cisco 5508 WLC were de-authenticated and disassociated upon failover to the standby WLC. Wireless clients had to re-associate and re-authenticate, since client state information was not maintained. Thus the overall recovery time was dependent upon the number of wireless clients and the authentication mechanism. With CUWN software releases 7.5 and higher, existing wireless clients in the RUN state remain authenticated and associated, since client state information is maintained between the active and standby WLCs. Therefore the overall recovery time can be much faster.

Within the BYOD branch FlexConnect design, with CUWN software releases 7.3 and 7.4, APs operating in FlexConnect mode managed by a remote Flex 7510 wireless controller went into standalone mode when the connection to the wireless controller was lost. Existing wireless clients were not de-authenticated and disassociated, unlike wireless clients connected to APs operating in centralized (local) mode managed by a Cisco 5508 wireless controller. However, new wireless clients cannot associate and authenticate to the branch wireless network when centralized authentication is configured for the WLAN and the access point is in standalone mode. Hence 1:1 active/standby redundancy with AP and client SSO may also provide benefits to a branch FlexConnect wireless deployment. However, unless the 1:1 active/standby pair of Flex 7510 wireless controllers is deployed in separate sites with a Layer 2 connection between them, site-to-site redundancy will not be accomplished. In this case, N+1 redundancy may provide an alternative form of high availability (both platform and site-to-site) for branch FlexConnect designs.

Configuring 1:1 Active/Standby Redundancy with AP and Client SSO

The steps for configuring 1:1 active/standby redundancy with AP and client SSO are the same as for configuring 1:1 active/standby redundancy with only AP SSO. There is only a single configuration option which enables both AP and client SSO. There is no option for enabling one without the other.

Before enabling SSO, the management interfaces of the primary (active) and secondary (hot standby) wireless LAN controllers must be configured to be on the same subnet. Figure 5-16 shows an example of the configuration of the IP address of the management interface of a CUWN wireless controller.

The IP address for the management interface in the example in Figure 5-16 is configured to be 10.225.147.3/24. Assuming this is to be the primary (active) wireless controller of the HA pair, the IP address of the management interface of the secondary (standby) wireless controller would need to be configured to also be on the 10.225.147.0 subnet. For example, the management interface of the secondary (standby) wireless controller could be configured to be 10.225.147.4/24.

Figure 5-16 Configuring the IP Address of the Management Interface

 

Next, the IP address of the Redundancy Management Interface (RMI) of each wireless LAN controller (the primary and the secondary) in the HA pair must be configured to be in the same IP subnet as the Management Interface. This is done through the Redundancy-->Global Configuration screen. Figure 5-17 shows an example of the configuration of the IP address of the RMI and the IP address of the peer RMI on the primary CUWN wireless controller.

Figure 5-17 Configuring the IP Addresses of the Redundancy Management Interface (RMI) and the Peer RMI

 

The IP address for the RMI in the example in Figure 5-17 is configured to be 10.225.147.103. Assuming this is to be the primary (active) wireless controller of the HA pair (as selected in the figure), the IP address of the RMI of the peer (standby) wireless controller would need to be configured to also be on the 10.225.147.0 subnet. For example, the RMI of the peer (standby) wireless controller is shown as 10.225.147.104.

Note that the configuration of the peer RMI shown above simply informs the wireless controller of the IP address of the RMI of the peer wireless controller; it does not configure the IP address of the peer's RMI. The same configuration step needs to be done on the secondary (standby) wireless controller. However, on the secondary (standby) wireless controller, its RMI would be configured with an IP address of 10.225.147.104 given the example above. Likewise, the secondary (standby) wireless controller's peer RMI would be configured with an IP address of 10.225.147.103.

After configuring the IP addresses of the RMI and the peer RMI on both wireless controllers and selecting one unit as the primary (active) wireless controller and the other unit as the secondary (standby) wireless controller, the network administrator must click the Apply button before enabling SSO.
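The same steps can be approximated from the AireOS CLI on each controller. The following sketch uses the example addresses above; exact syntax may differ slightly between CUWN releases. On the primary (active) controller:

(Cisco Controller) > config interface address redundancy-management 10.225.147.103 peer-redundancy-management 10.225.147.104
(Cisco Controller) > config redundancy unit primary

On the secondary (standby) controller, the RMI and peer RMI addresses are reversed and the unit role is set to secondary.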


Note: As of CUWN software release 7.3, a factory-ordered HA SKU is available. If a factory-ordered HA SKU is part of the HA pair, it will automatically default to the role of the standby wireless controller when paired with an active wireless controller with a valid AP count license.


After clicking the Apply button, the network administrator can then enable SSO by selecting Enabled from the drop-down menu next to the SSO field, as shown in Figure 5-18.

Figure 5-18 Enabling AP and Client SSO on CUWN Wireless Controllers

 

The wireless controllers will reboot upon clicking the Apply button and negotiate their respective HA roles based upon their configuration. Once the wireless controllers have rebooted, the secondary (standby) WLC will proceed to download its configuration from the primary (active) WLC. Upon downloading its configuration, the secondary (standby) WLC will reboot again. Once the secondary (standby) WLC has rebooted for the second time, it will verify its configuration is synchronized with the primary (active) WLC, and assume the role of the standby wireless controller. Note that default information such as the IP address of the Redundancy Port (RP) and the IP address of the peer RP will be automatically populated once SSO has been enabled.
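SSO can likewise be enabled, and the resulting HA roles verified, from the AireOS CLI. A sketch follows (the enable command triggers the reboot sequence described above):

(Cisco Controller) > config redundancy mode sso
(Cisco Controller) > show redundancy summary

Once pairing completes, the redundancy summary should report the local state as ACTIVE on the primary controller and STANDBY HOT on the secondary controller.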

Finally, Figure 5-19 shows how the keepalive timer can be modified in order to influence the failover time.

Figure 5-19 Configuring the Keepalive Timer to Influence Failover Time

 

As mentioned previously, the UDP keepalive interval can be configured from 100 to 400 milliseconds, in 50 millisecond increments.
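The keepalive timer shown in Figure 5-19 can also be set from the CLI. As a sketch (value in milliseconds, valid range 100-400 in 50 millisecond steps; verify the command against the CUWN release in use):

(Cisco Controller) > config redundancy timer keep-alive-timer 100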

For more information regarding active/standby redundancy with AP SSO, refer to the High Availability (AP SSO) Deployment Guide: http://www.cisco.com/en/US/products/ps10315/products_tech_note09186a0080bd3504.shtml

Converged Access Controllers

This section discusses platform high availability mechanisms for the following Converged Access (IOS XE based) WLC platforms:

  • Catalyst 3850 switches functioning as Mobility Agents (MAs) and Mobility Controllers (MCs) deployed within a branch Converged Access design.
  • Catalyst 3850 switches functioning as MAs along with Cisco 5760 WLCs functioning as MCs deployed within a campus Converged Access design.
  • Cisco 5760 wireless controllers servicing APs operating in local mode within a campus centralized (non-Converged Access) design.

Catalyst Switch Stack Resiliency

Catalyst 3850 Series switches support StackWise technology along with Cisco IOS software SSO for providing resiliency within the switch stack. Catalyst switch stack resiliency is the method of providing high availability for Catalyst 3850 Series switches deployed in a Converged Access branch design. This is shown in Figure 5-20.

Figure 5-20 Catalyst 3850 Switch Stack Resiliency

 

In the Converged Access campus design, Catalyst switch stack resiliency also provides high availability for the Catalyst 3850 Series switches functioning as MAs. Cisco CT5760 wireless controller 1:1 stack resiliency provides high availability for the MC function. This is discussed in the next section.

Catalyst switch stack resiliency has been supported for Catalyst 3850 Series switches since IOS XE software release 3.2.0SE. Catalyst 3850 Series switches support Cisco StackWise-480 stacking ports along with copper-based Cisco StackWise cabling for a stack bandwidth of approximately 480 Gbps.


Note: N+1 platform redundancy is not supported for access points connected to Catalyst 3850 switches operating in a Converged Access deployment.


With IOS XE software release 3.3.0SE and higher, the number of Catalyst 3850 Series switches which can be supported in a single switch stack has been increased from four to nine. The stack behaves as a single switching unit that is managed by an "active" switch elected by the member switches. The active switch automatically elects a standby switch within the stack. The active switch creates and updates all the switching, routing, and wireless information and constantly synchronizes that information with the standby switch. If the active switch fails, the standby switch assumes the role of the active switch and continues to keep the stack operational. Access points remain connected during an active-to-standby switchover. Wireless clients, however, are disassociated and need to re-associate and re-authenticate. Hence the recovery time is dependent upon how many wireless clients need to be re-associated and re-authenticated, as well as the method of authentication. No configuration commands are required in order to enable switch stack resiliency on Catalyst 3850 and/or 3650 Series switches; it is enabled by default when the switches are connected via stack cables.
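Although no configuration commands are required, the stack membership and SSO state can be confirmed from the Catalyst 3850 IOS XE CLI. As a sketch:

Switch# show switch
Switch# show redundancy states

The show switch command lists each stack member with its role (active, standby, or member), while show redundancy states indicates whether the standby switch is fully synchronized (STANDBY HOT).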

Cisco CT5760 Wireless Controller 1:1 Stack Resiliency

As of IOS XE software release 3.3.0SE and higher, Cisco CT5760 wireless controllers support 1:1 stack resiliency, similar to that supported by Catalyst 3850 Series switches. However only two CT5760 wireless controllers can be connected in a high availability stack. An example of 1:1 stack resiliency on the CT5760 is shown in Figure 5-21.

Figure 5-21 Cisco CT5760 WLC 1:1 Stack Resiliency

 

1:1 stack resiliency is the method of providing platform-level high availability for CT5760 wireless controllers servicing APs operating in a centralized campus design within this design guide because it can provide faster overall recovery for wireless clients. Prior to IOS XE software release 3.3.0SE, N+1 redundancy defined at the access point (which is still supported) was the only method of providing platform-level redundancy when the CT5760 was operating as a centralized controller within a campus design.

Note that high availability on the CT5760 wireless controller is different from high availability on CUWN wireless controllers. With CT5760 1:1 stack resiliency, the data planes of both WLCs are active, although the maximum throughput of the stack is still approximately 80 Gbps. The control plane of one of the CT5760s is active, while the control plane of the other is in standby. The active WLC creates and updates all the switching, routing, and wireless information and constantly synchronizes that information with the standby WLC. If the active CT5760 fails, the standby CT5760 assumes the role of the active WLC and continues to keep the stack operational. Access points remain connected during an active-to-standby switchover. Wireless clients, however, are disassociated and need to re-associate and re-authenticate. Hence the recovery time is dependent upon how many wireless clients need to be re-associated and re-authenticated, as well as the method of authentication.

1:1 stack resiliency is also the method of providing high availability of CT5760 wireless controllers which function as Mobility Controllers (MCs) in the Converged Access campus design, as shown in Figure 5-21. Prior to IOS XE software release 3.3.0SE, the only way of providing high availability of the MC function in a Converged Access campus design was to manually re-configure each Catalyst 3850 Series switch stack (functioning as an MA) to point to a different CT 5760 wireless controller (functioning as an MC) upon failure of the original CT5760. Hence IOS XE software release 3.3.0SE and higher provides a significant step forward in providing high availability in a Converged Access campus design.

No configuration commands are required in order to enable 1:1 stack resiliency on CT5760 wireless controllers; it is enabled by default when two WLCs are connected via a stack cable.

Branch Wide Area Network Design

Many network administrators will re-examine the wide area network (WAN) prior to deploying a BYOD solution at the branch. Guest networks in particular can increase loads to a rate that consumes WAN bandwidth and compromises corporate traffic. While wired rates have increased from 10 Mbps to 1 Gbps and cellular networks have increased in bandwidth from roughly 30 Kbps for GPRS to around 20 Mbps for LTE, traditional branch WAN bandwidths have not experienced the same increase in performance. Employees and guests expect bandwidth, delay, and jitter on the corporate network to be at least as good as what they experience at home or on the cellular network.

Furthermore, because WiFi access is typically free for corporate users and because most handheld devices prefer WiFi over cellular, corporate users will likely continue using the guest or corporate SSID for Internet access, even when the LTE network offers faster speeds. As a result, the WAN is experiencing increased loads. This is forcing network administrators to explore new WAN transport mechanisms such as Metro Ethernet and VPN-over-cable to meet user expectations. Another approach is to offload guest Internet traffic at the branch in an effort to preserve WAN bandwidth for corporate traffic; corporate security policy will need to be considered, however, before providing direct Internet access from the branch. While there are no new WAN requirements for branch BYOD services, some areas such as transport technology, access speeds, and encryption should be reviewed.

Branch WAN Infrastructure

The branch WAN infrastructure within this design includes Cisco ASR 1006 routers as the head-end routers. Two different WAN connections are terminated on these devices: the first router terminates a service provider MPLS circuit and the second router terminates an Internet connection. These head-end routers are both placed in a "WAN edge" block that exists off of the campus core. The ASR that terminates the Internet connection also makes use of the Cisco IOS Zone-Based Firewall (ZBFW), and only tunneled traffic towards the branch is permitted.

Within the branch, two different designs have been validated. The first design consists of two Cisco 2921 ISR-G2 routers. One of the two routers terminates the SP MPLS circuit, while the second router terminates the Internet connection which can be utilized as a branch backup exclusively or as an alternate path for corporate traffic. The second design consists of a single Cisco 2921 ISR-G2 router that terminates both circuits.

In both deployment modes, the Cisco IOS Zone-Based Firewall (ZBFW) has been implemented to protect the branch's connection to the Internet. Although entirely feasible, local Internet access from the branch is not permitted. For this traffic, as well as for corporate data, DMVPN has been implemented, and only tunneled access is granted for secure connectivity back to the campus head-end routers. This provides access to the data center, while Internet access is available through the corporate firewall/gateway. DMVPN is additionally used to secure traffic across the service provider's MPLS circuit.

It is beyond the scope of this document to provide configuration information and design guidance around DMVPN, ZBFW configuration, QoS, and other aspects of the WAN infrastructure.

For detailed reference information around Next Generation Enterprise WAN (NGEW) design, refer to the documentation on Design Zone: http://www.cisco.com/en/US/netsol/ns816/networking_solutions_program_home.html .

For additional QoS design guidance, refer to the Medianet Design Guide at: http://www.cisco.com/en/US/solutions/ns340/ns414/ns742/ns819/landing_vid_medianet.html .

Branch WAN Bandwidth Requirements

This design guide presents two branch wireless LAN designs—FlexConnect and Converged Access. In FlexConnect designs, branch access points are managed by a wireless LAN controller in the campus data center or services module. A CAPWAP tunnel is established between the wireless controller and each of the access points within the branch locations. This CAPWAP tunnel is used for control traffic and possibly data traffic during the on-boarding process in some designs. This traffic is transported over the WAN. Even though devices may use a FlexConnect design to locally terminate traffic onto local VLANs within the branch, a large percentage of traffic will continue to flow over the WAN to the corporate data center.

In Converged Access designs, branch access points are managed by the integrated wireless LAN controller functionality within the Catalyst 3850 Series switch. A CAPWAP tunnel is established between the Catalyst 3850 Series switch and the access points within the branch locations. This CAPWAP tunnel is used for all wireless control and data traffic. However, even though devices may use a Converged Access design to locally terminate traffic onto local VLANs within the branch, a large percentage of traffic will again continue to flow over the WAN to the corporate data center.

Since both branch wireless LAN designs presented in this document utilize a centralized AAA server (such as Cisco ISE), there may be an increase in authentication and authorization traffic as more employee managed devices are on-boarded. These new endpoints may also generate additional new traffic. Further, guest Internet access is carried back to an anchor controller in the campus DMZ with both branch wireless LAN designs. All of this may result in increased loads on the WAN circuit as a result of the BYOD deployment.

It may be difficult to forecast the additional traffic load because the level of participation may not be well known prior to deploying BYOD. Wireless guest traffic in particular can be difficult to budget and may vary substantially depending on local events. A reasonable design goal is to provision a minimum of 1.5 Mbps at each branch that offers BYOD. The head-end WAN aggregation circuits should be provisioned to follow traditional oversubscription ratios (OSR) for data. This will allow adequate bandwidth for smaller deployments. Larger branch locations will likely need additional bandwidth, especially if the guest users are likely to expect the use of high bandwidth applications such as streaming video. The WAN architecture should offer enough flexibility to adjust service levels to meet demand. Sub-rate MPLS access circuits or a dedicated WAN router with incremental bandwidth capabilities can accomplish this. Adequate address space for each branch should also be considered, because both FlexConnect and Converged Access designs can allow wireless DHCP clients to pull from local scopes. Additional information concerning bandwidth management techniques such as rate-limiting is discussed in Chapter 21, "BYOD Guest Wireless Access."

Encryption Requirement

Another component of both BYOD-enabled branch wireless LAN designs is local termination of branch wireless traffic. This allows branch wireless devices to directly access resources located on the branch LAN without the need to traverse a CAPWAP tunnel to a centralized wireless controller. This reduces the amount of traffic that needs to be carried by the WAN by eliminating the hair-pinning of traffic from the branch location, back to the wireless controller within the campus, and then back to the branch server. The effect reduces load in both directions: upstream within a CAPWAP tunnel and downstream outside of the CAPWAP tunnel. The benefits are realized when a wireless branch device is connecting to a server located in the same branch. If the traffic is destined for the data center, it still transits the WAN, but outside of a CAPWAP tunnel, benefiting from the same level of security and performance as wired traffic. Depending on the application, this traffic may not be encrypted, so additional WAN security might be needed. If the branch is using a broadband connection as either the primary or backup path, then encryption technologies such as DMVPN should be deployed. However, even if an MPLS VPN service is being used, the enterprise may still want to consider encrypting any traffic that passes off premise.
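For branches reached over broadband, a DMVPN spoke tunnel of the kind referenced above might be sketched as follows. All addresses, the key placeholder, and interface names are hypothetical for this example; a production deployment would also consider certificate-based authentication rather than a wildcard pre-shared key.

```
crypto isakmp policy 10
 encryption aes 256
 authentication pre-share
 group 2
crypto isakmp key <pre-shared-key> address 0.0.0.0
!
crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha-hmac
 mode transport
crypto ipsec profile DMVPN-PROF
 set transform-set DMVPN-TS
!
interface Tunnel0
 ip address 10.255.0.2 255.255.255.0
 ip nhrp network-id 1
 ip nhrp nhs 10.255.0.1
 ip nhrp map 10.255.0.1 192.0.2.1
 ip nhrp map multicast 192.0.2.1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel protection ipsec profile DMVPN-PROF
```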

Transport

With both the FlexConnect and Converged Access designs, not all wireless traffic is terminated locally. In this design guide, guest traffic is still tunneled within a mobility tunnel to a central controller at a campus location. Also, depending upon the on-boarding design implemented (single SSID versus dual SSID), traffic from devices which are in the process of being on-boarded may also remain in the CAPWAP tunnel to the central controller with the FlexConnect design. This tunneled traffic may compete for bandwidth with the corporate traffic that also uses the WAN link outside of a CAPWAP tunnel. These concerns are addressed with a mix of traditional QoS services and wireless rate-limiting. In some situations, the transport will determine what is appropriate.

If Layer 2 MPLS tunnels are in place, destination routing can be used to place CAPWAP traffic on a dedicated path to the wireless controllers. This may be useful as an approach to isolate guest traffic from the branch towards the campus since FlexConnect with local termination will pass most corporate traffic outside of a CAPWAP tunnel directly to its destination. Return traffic from the campus towards the branch is more difficult to manage without more complex route policies, but may be possible with careful planning.
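A minimal sketch of this destination-based steering, assuming the Flex 7500 controllers reside in a hypothetical 10.225.50.0/24 subnet reachable through an assumed dedicated next hop of 192.0.2.1: because all CAPWAP control and data traffic (UDP 5246 and 5247) is addressed to the controllers, a simple static route is sufficient in the branch-to-campus direction.

```
ip route 10.225.50.0 255.255.255.0 192.0.2.1
```

Return traffic from the campus toward the branch would require policy routing or similar route policies at the WAN aggregation layer, as noted above.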

Figure 3-2 illustrates at a high level a typical WAN architecture.

Branch LAN Network Design

The anywhere, any device requirement of BYOD implies that employees can use either corporate or personal devices at either campus or branch locations. When they do, the pertinent component of the BYOD architecture is the ability to enforce policies on these devices at either the branch or the campus location. Policy enforcement is effective if and only if there is a well-designed branch network infrastructure in place. This branch network infrastructure can be categorized into WAN and LAN components. This section discusses the high-level key design elements of branch LAN design.

Cisco access points can currently operate in one of two implementation modes in the Cisco Unified Wireless Network (CUWN) architecture:

  • Local mode (also referred to as a centralized controller design)
  • FlexConnect mode

In addition, Cisco has recently integrated wireless LAN controller functionality directly into the latest generation access-layer switches—the Catalyst 3850. Hence there is now a third implementation choice:

  • Converged Access

FlexConnect is a wireless design which primarily applies to branch locations and is discussed in this section. Local mode is a wireless design which primarily applies to campus locations within this design guide and is discussed in Campus Network Design. Converged Access designs apply to both wired and wireless designs within both the branch and campus and hence are discussed in both sections of this chapter.


Note: Local mode can be deployed within branches which are large enough to justify the requirement for wireless controllers deployed within the branch itself. In such cases, the BYOD design for the large branch is similar to the campus design.


FlexConnect Wireless Design

FlexConnect is an innovative Cisco technology which provides more flexibility in deploying a wireless LAN. For example, the wireless LAN may be configured to authenticate users using a centralized AAA server, but once the user is authenticated the traffic is switched locally on the access point Ethernet interface. Alternatively, the traffic may be backhauled and terminated on the wireless controller Ethernet interface if desired. The local switching functionality provided by FlexConnect eliminates the need for data traffic to go all the way back to the wireless controller when access to local resources at the branch is a requirement. This may reduce the Round Trip Time (RTT) delay for access to applications on local branch servers, increasing application performance. It can also reduce unnecessary hair-pinning of traffic when accessing resources local to the branch.
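On an AireOS controller, FlexConnect local switching is enabled per WLAN; the sketch below assumes a hypothetical WLAN ID of 17 and is illustrative only (the WLAN must be disabled before the setting can be changed).

```
config wlan disable 17
config wlan flexconnect local-switching 17 enable
config wlan enable 17
```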

Access points connected to the access-layer switches within branch locations are still configured and controlled via one or more centralized wireless LAN controllers. In the case of this design guide, these controllers are a set of Cisco Flex 7500 wireless controllers—dedicated for branches—since they provide greater scalability for supporting access points in FlexConnect mode than Cisco CT5508 wireless controllers. Note also that with this design, guest wireless traffic is backhauled across the WAN to a dedicated CT5508 guest anchor controller located on a DMZ segment within the campus. Provisioning traffic (i.e., traffic from devices attempting to on-board with ISE) may also be backhauled across the WAN to the Flex 7500 wireless controllers located within the campus.

Figure 5-22 shows at a high level how FlexConnect is implemented in the branch design.

Figure 5-22 High-Level View of the FlexConnect Wireless Branch Design

 

To implement the BYOD use cases for on-boarded devices, the method presented in this design guide for branch locations utilizing a FlexConnect wireless design is to place the device into an appropriate VLAN after it is authenticated and authorized. Statically configured FlexConnect ACLs applied per access point (or access point group) and per VLAN, provide differentiated access control for wireless devices. For example, a personal device which needs full access to the network is placed into a VLAN in which a FlexConnect ACL is configured on the access point with the right permissions. Personal devices that are granted partial access are placed in a different VLAN which has a different FlexConnect ACL.
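The following AireOS CLI sketch illustrates the approach, assuming a hypothetical FlexConnect ACL named BYOD_PARTIAL that permits traffic only to an assumed 10.230.0.0/16 internal range, mapped to VLAN 15 on an access point named AP-Branch-1. The command syntax and all values are assumptions and should be verified against the controller release in use; the same mapping can also be applied per FlexConnect AP group through the controller GUI.

```
config flexconnect acl create BYOD_PARTIAL
config flexconnect acl rule add BYOD_PARTIAL 1
config flexconnect acl rule action BYOD_PARTIAL 1 permit
config flexconnect acl rule destination address BYOD_PARTIAL 1 10.230.0.0 255.255.0.0
config flexconnect acl apply BYOD_PARTIAL
config ap flexconnect vlan add 15 acl BYOD_PARTIAL BYOD_PARTIAL AP-Branch-1
```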

Branch Wired Design

Figure 5-23 shows the wired design for a branch which does not implement Converged Access Catalyst 3850 Series switches. In other words, this is the wired design for a branch which implements switches such as the Catalyst 3750X, along with a FlexConnect wireless design.

Figure 5-23 High-Level View of Non-Converged Access Wired Branch Design

 

This design guide assumes that Catalyst switches are deployed as Layer 2 devices within the branch location. Wired devices authenticate using 802.1X against the ISE server centrally located within the campus. For this design, wired devices are also dynamically assigned to separate VLANs based on their access control requirements. A RADIUS downloadable ACL applied to the Catalyst 3750X Series switch overrides a pre-configured static ACL on each Catalyst switch port. Differentiated access control for the wired devices is provided by statically configured ACLs applied to the Cisco ISR G2 router Layer 3 sub-interfaces.
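A hedged IOS sketch of this wired model on a Catalyst 3750X follows. The ISE address, shared-secret placeholder, ACL contents, and port number are assumptions for this example; the static PRE-AUTH ACL permits only DHCP and DNS until ISE returns a downloadable ACL that overrides it.

```
aaa new-model
aaa authentication dot1x default group radius
aaa authorization network default group radius
radius-server host 10.225.49.15 auth-port 1812 acct-port 1813 key <shared-secret>
dot1x system-auth-control
!
ip access-list extended PRE-AUTH
 permit udp any any eq bootps
 permit udp any any eq domain
 deny ip any any
!
interface GigabitEthernet1/0/10
 switchport mode access
 authentication port-control auto
 dot1x pae authenticator
 ip access-group PRE-AUTH in
```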

Converged Access Branch Design

The Converged Access branch BYOD design assumes a single Catalyst 3850 Series switch or switch stack deployed within a branch location. Hence this design applies to small to mid-sized branches only. This is shown in Figure 5-24.

Figure 5-24 Converged Access Branch Design Hardware

 

Up to nine Catalyst 3850 Series switches may be deployed within a switch stack. The maximum number of access points supported per switch stack is 50, with up to a maximum of 2,000 wireless clients. The Catalyst 3850 Series supports up to 40 Gbps wireless throughput per switch (48-port models). Note that wireless performance requirements and physical distance limitations will often dictate the actual number of wireless access points and clients which can be deployed with this design. When a switch stack is implemented, APs should be deployed across the switches for wireless resilience purposes. This design guide will assume Catalyst 3850 Series switches deployed as Layer 2 switches within the branch location. Layer 3 connectivity within the branch is provided by the ISR routers which also serve as the WAN connectivity point for the branch. Future design guidance may address Catalyst 3850 Series switches deployed as Layer 3 switches within the branch location.


Note: The Converged Access branch BYOD design may also be referred to as the Integrated Controller Branch BYOD design within this document.


As mentioned previously, Cisco has integrated wireless LAN controller functionality directly in the Catalyst 3850 Series switch. When access to local resources at the branch is a requirement, this allows for the termination of wireless traffic on the Catalyst 3850 switch itself, rather than backhauling traffic to a centralized wireless controller. As with FlexConnect designs, Converged Access designs can reduce Round Trip Time (RTT) delay, increase application performance, and reduce unnecessary hair-pinning of traffic when accessing resources local to the branch.

For the Converged Access branch BYOD design, the single Catalyst 3850 Series switch stack will implement the following wireless controller functionality:

  • Mobility Agent (MA)—Terminates the CAPWAP tunnels from the access points (APs), and maintains the wireless client database.
  • Mobility Controller (MC)—Manages mobility within and across sub-domains. Also manages radio resource management (RRM), WIPS, etc.

Since there is only a single switch stack, there is only a single Switch Peer Group (SPG). The Mobility Group, Mobility Sub-Domain, and Mobility Domain are entirely contained within the branch. No additional centralized wireless controllers are needed at the campus location, except for the Cisco CT5508 wireless controllers which function as the dedicated anchor controllers for wireless guest traffic. The access points within the branch locations are configured and controlled via the wireless LAN controller functionality integrated within the Catalyst 3850 Series switch. Guest wireless traffic is still backhauled to a dedicated CT5508 guest anchor controller located on a DMZ segment within the campus. With the Converged Access branch design, provisioning traffic (i.e., traffic from devices attempting to on-board with ISE) is terminated locally on the Catalyst 3850 Series switch. When implementing a dual-SSID design, provisioning traffic is terminated on a separate VLAN. All on-boarded devices terminate on a single VLAN with this design.
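As a minimal illustration, enabling the integrated MC role on the Catalyst 3850 requires only a couple of global configuration commands (a reload is required after enabling the mobility controller role). The VLAN number is an assumption, and the exact syntax should be verified against the IOS-XE release in use.

```
wireless mobility controller
wireless management interface vlan 100
```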


Note: When deploying converged access wireless designs in which the Catalyst 3850 Series switch functions as the Mobility Controller (MC) and Mobility Agent (MA), it should be noted that the mobility tunnel for wireless guest access initiates from the Catalyst 3850 to the guest anchor controller located within the DMZ. Hence, each branch will initiate a mobility tunnel for wireless guest access with this design. The maximum number of mobility controllers within a mobility domain is 72 for the CT5508 wireless controller; therefore, the maximum number of mobility anchor tunnels is limited to 71 for the CT5508 wireless controller. As a result, the network administrator may need to deploy additional CT5508 guest anchor controllers. Alternatively, the network administrator may consider providing direct Internet access from the branch for guest access. Future versions of this design guide may address such designs.


In order to implement the BYOD use cases, the method adopted in this design guide for branch locations utilizing a Converged Access design is to apply the appropriate dynamic ACL after the device is authenticated and authorized. This applies to both wired and wireless devices. The particular form of dynamic ACL is a RADIUS-specified local ACL, otherwise known as a named ACL. These named ACLs, which must be configured on each Catalyst 3850 Series switch, provide differentiated access control. For example, a personal device which is granted full access to the network is statically assigned to the same VLAN as a personal device which is granted partial access. However, different named ACLs are applied to each device, granting different access to the network. Since the named ACL is configured on the Catalyst 3850 switch specific to the particular branch, a single Cisco ISE policy can be implemented across multiple branches, while the Access Control Entries (ACEs) within the ACL for each branch can be unique to the IP addressing of the branch. This reduces the administrative complexity of the Cisco ISE policy, albeit at the expense of having to configure and maintain ACLs at each branch Catalyst 3850 Series switch.
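A hypothetical branch-specific named ACL of this kind might look as follows; ISE returns only the ACL name (for example, via the Airespace-ACL-Name RADIUS attribute for wireless sessions), so each branch switch can tailor the ACEs to its own addressing. The ACL name and all addresses are assumptions for this sketch.

```
ip access-list extended ACL_Partial_Access
 permit udp any any eq domain
 permit tcp any host 10.230.4.50 eq 80
 permit tcp any host 10.230.4.50 eq 443
 deny ip any 10.0.0.0 0.255.255.255
 permit ip any any
```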

Figure 5-25 shows at a high level how a Converged Access BYOD design is implemented in the branch.

Figure 5-25 High-Level View of the Converged Access Branch BYOD Design

 

Note that in the case of this design guide, on-boarded wired devices are also statically assigned to the same VLAN as wireless devices. On-boarded wired and wireless devices therefore share the same VLAN, and hence the same IP subnet addressing space. It is recognized that customers may implement separate subnets for wired and wireless devices due to issues such as additional security compliance requirements for wireless devices. This will not be addressed within this version of the design guidance. Dynamically assigned named ACLs provide differentiated network access for wired devices.

The reason for the two methods of providing differentiated access between the FlexConnect and Converged Access branch designs is that prior to CUWN software version 7.5, FlexConnect did not allow the dynamic assignment of an ACL to an access point; it only allowed the dynamic assignment of a VLAN. The FlexConnect wireless design in this design guide is carried forward from the previous version of the design guide and continues to require a separate VLAN for each separate level of access control. This can increase the administrative burden of managing the branch network configuration. Converged Access designs are more consistent with the campus wireless designs, requiring a single VLAN for multiple levels of access control.