The documentation set for this product strives to use bias-free language. For the purposes of this documentation set, bias-free is defined as language that does not imply discrimination based on age, disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in the user interfaces of the product software, language used based on RFP documentation, or language that is used by a referenced third-party product. Learn more about how Cisco is using Inclusive Language.
This Cisco Connected Communities Infrastructure (CCI) Solution Release 2.1 Cisco Validated Design (CVD) Implementation Guide provides a comprehensive explanation of the Cisco Connected Communities Network infrastructure implementation, including Wi-Fi Access network, along with Smart Cities and Roadways vertical solution use cases such as Cisco Safety and Security, Cisco Smart Street Lighting, Supervisory Control and Data Acquisition (SCADA) Water, LoRaWAN Lighting, and Edge Computing.
This implementation document includes information about the solution architecture, possible deployment models, and guidelines for deployment. It also recommends best practices and potential issues when deploying the reference architecture.
The document covers the following topics:
■Discusses the CCI Solution network topologies, along with the IP addressing used at every layer of the topologies. Includes the Virtual Network and Scalable Group names used in the solution overlay network.
■Discusses the CCI solution components' hardware models and software versions validated.
■Explains the steps to implement network underlay routing for the CCI Solution network topologies with Ethernet network backhaul and MPLS network backhaul.
■Explains the steps to implement CCI Solution shared services such as Cisco Digital Network Architecture Center (Cisco DNA Center), Cisco Identity Services Engine (ISE), Cisco Wireless LAN Controller (WLC), and Cisco Prime Infrastructure (PI).
■Explains the implementation details to set up Cisco DNA Center for the CCI Solution with network design, device discovery, fabric provisioning, and Industrial Ethernet switches as Extended Nodes.
■Explains the implementation details for Ethernet network backhaul and MPLS network backhaul for the solution network topologies. Also includes the fabric overlay provisioning for the IP transit and SD-Access transit methods of fabric site interconnection, as applicable.
■Explains the steps to implement the fusion router routing configuration required to access the shared services network, other fabric sites via IP transit, and the Internet.
■Describes implementation details of the various access networks and technologies in CCI.
■Describes the steps to implement the Field Area Network for CR-Mesh. Explains the implementation in various places in the network, such as the headend network, onboarding the Connected Grid Router (CGR) as a gateway for CR-Mesh endpoints, and the CR-Mesh network.
■Explains the detailed steps to implement the Remote Point-of-Presence (RPoP) network for connecting the remote LoRaWAN and CR-Mesh access networks to the CCI Network headend infrastructure. Note: Although the RPoP network can be used for connecting various other devices, only LoRaWAN and CR-Mesh have been validated.
■Describes the steps to implement vertical solution-specific Cisco application servers in the data center or headquarter site. Also covers the implementation of various partner applications (on-premises or cloud) required for the Cities and Roadways verticals.
■Explains the detailed steps for implementing CCI network security, such as macro- and micro-segmentation, Scalable Group Tag (SGT)-based classification and propagation, policy enforcement, device and endpoint security, and Firepower.
■Discusses the steps to deploy CCI network QoS on CCI fabric devices and IE access rings.
■Discusses the steps to configure SD-Access Multicast in a PoP site and between PoP sites.
■Captures the detailed implementation steps and procedures of SCADA communication with multiple backhaul types and protocols. This implementation focuses on the Distributed Network Protocol 3 (DNP3) and MODBUS SCADA protocols.
■Explains the detailed steps for implementing LoRaWAN-based FlashNet Lighting using Actility ThingPark Enterprise (TPE) as the network server.
■Explains the detailed steps for secure onboarding of Axis cameras.
■Describes how to extend network services out to a train network when a CCI network is being built out.
■Captures supplementary configurations used for the CCI network topologies validated in this CVD.
The audience for this guide comprises, but is not limited to, system architects, network/compute/systems engineers, field consultants, Cisco Solution Support specialists, and customers.
Readers should be familiar with networking protocols and IP routing, basic network security and QoS, and have exposure to server virtualization using hypervisors and to the Cisco Connected Communities Infrastructure (CCI) Solution architecture, which is described in the Cisco Connected Communities Infrastructure CVD Solution Design Guide at the following URL:
■ https://www.cisco.com/c/en/us/td/docs/solutions/Verticals/CCI/CCI/DG/cci-dg.html
This implementation guide provides comprehensive details of the Cisco Connected Communities Infrastructure (CCI) horizontal network infrastructure implementation leveraging the Cisco Digital Network Architecture Center (Cisco DNA Center) Software Defined Access (SD-Access) Fabric. The CCI solution horizontal access network infrastructure implementation is based on the Cisco Software Defined Access Deployment Guide, which can be found at the following URL:
■ https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/sda-sdg-2019oct.html
This document also provides details about implementing CCI vertical use cases, such as Cities Safety and Security and Cisco Smart Street Lighting, and CCI overlay network use cases, such as transportation/roadways intersection. While the implementation steps detailed in this document can be used as a reference for deploying other CCI vertical use cases, the detailed implementation of specific vertical use cases on the CCI network that are not validated in this solution is beyond the scope of this document.
This document covers example network underlay routing configurations and Multiprotocol Label Switching (MPLS) network backhaul configuration for the deployment models and network topologies validated in the solution. Detailed implementation of network routing protocols and configuring MPLS network backhaul is beyond the scope of this document.
The Cisco CCI solution is a multi-service network architecture for a City Campus or a Metropolitan area and Roadways that leverages Cisco's Intent-Based Networking and SD-Access with Cisco DNA Center management to bring the latest developments in network segmentation, automation, and endpoint authentication.
The CCI solution architecture also includes ruggedized access network devices such as Cisco Industrial Ethernet (IE) Switches, Connected Grid Routers (CGR), Cisco Industrial Routers (IR), Cisco Long Range Wide Area Network (LoRaWAN) gateway, and the Cisco® IC3000 Industrial Compute Gateway along with other network infrastructure components to provide a scalable and secure network for CCI vertical solution use cases. The CCI solution implementation is based on the design recommended in the Cisco Connected Communities Infrastructure Solution Design Guide that can be found at the following URL:
■ https://www.cisco.com/c/en/us/td/docs/solutions/Verticals/CCI/CCI/DG/cci-dg/cci-dg.html
This guide details the implementation of the Cisco CCI horizontal network, which includes the implementation of the CCI network underlay, shared services, backhaul network (Ethernet and MPLS), SD-Access Fabric overlay network, access networks like Ethernet Access Rings using Cisco IE switches and CR Mesh, and access technologies like DSRC and LoRaWAN.
However, some CCI deployments may consist only of Remote Point-of-Presence (RPoP) sites comprising CGR, IR1101, and IR1800 Series routers, typically connected to the public Internet (for example, over a cellular network), over which secure FlexVPN tunnels are established to the headend in the CCI headend network in the Demilitarized Zone (DMZ). Such RPoP-only CCI deployments, which do not require Cisco SD-Access, can be implemented by following the steps described in Implementing Remote Point-of-Presence (RPoP) Sites.
■Cisco Ultra Reliable Wireless Backhaul (CURWB) for CCI backhaul and wireless access networks
■Enhanced Ethernet Access Ring & Provisioning
–IE-3300 10G Access Ring in CCI PoPs
–Daisy Chaining Automation of Extended and Policy Extended Nodes using Cisco DNA Center
–REP Ring Automation using Cisco DNA Center
■Cisco CyberVision OT Device and Flow Detection
–CyberVision Sensor deployment on IE-3400, IE-3300 10G and IR-1101 Platform
–OT Device and Protocols (DNP3 and MODBUS) Flow Detection using Cisco Cyber Vision Center
■Enhanced End-to-End QoS design on IE3400 and IE3300 10G
■Enhanced Remote Point-of-Presence (RPoP) Management design
–IR-1800 as RPoP gateway with multi-service and macro-segmentation at RPoP
–RPoP Management Design using Cisco DNA Center and Cisco IoT Operations Dashboard (IoTOD)
This document also provides implementation details for overlaying CCI vertical use cases like Cities Safety and Security, Cisco Smart Street Lighting, SCADA use cases, LoRaWAN Lighting, and Rail and Roadways intersection on the CCI network. It is recommended to implement the CCI network and vertical use cases in the order depicted in Figure 1, which shows the flow of the material in this implementation guide:
Figure 1 CCI Solution Implementation Flow
The document addresses the implementation of the following CCI network horizontal and vertical use cases:
■CCI Underlay Network implementation for basic network (Layer 3) IP forwarding and connectivity.
■Implementation of shared services like Cisco DNA Center, Identity Service Engine (ISE), Cisco Wireless LAN Controller (WLC) and Cisco Prime Infrastructure (PI), Cisco CyberVision Center, DHCP, and DNS servers, as well as other shared IoT devices management applications such as Field Network Director (FND).
■Configuring Cisco SD-Access Fabric Site (Point-of-Presence aka PoP) as overlay network and Interconnection of the Fabric Sites leveraging Cisco DNA Center.
■Implementation of Cisco Industrial Ethernet switches—Cisco Industrial Ethernet (IE) 4000, Cisco Industrial Ethernet (IE) 5000, Cisco Catalyst Industrial Ethernet and Embedded Services 3300 and 3400 series switches—as fabric extended nodes, policy extended nodes, in Ethernet access network rings.
■Implementation of Cisco Unified Wireless Network (CUWN) Wi-Fi Mesh and SD Access Wireless Wi-Fi access networks.
■Implementation of headquarter site data center applications for vertical use cases and services on the fabric overlay network. It covers Cities Safety and Security, LoRaWAN, ThingPark Enterprise, and applications such as Certificate Authority (CA) services needed for Cisco Smart Street Lighting solution use cases along with Public Wi-Fi use cases.
■Deployment details for LoRaWAN in Remote PoP.
■Deployment details for Remote PoP over cellular network backhaul for multi-services and macro-segmentation.
■Implementation of end-to-end network security, which covers macro- and micro-segmentation of CCI networks using Virtual Networks (VNs) and Scalable Group Tags (SGT) and Scalable Group Access Control Lists (SGACL), network devices and endpoints security, and network firewall implementation in the DMZ and Stealthwatch.
■End-to-end network QoS implementation for traffic classification, prioritization, queuing, and policing.
■Implementation of multicast network forwarding in CCI. Enabling multicast in CCI is optional; it is needed only if you implement a vertical use case that requires multicast traffic forwarding in CCI.
■Implementation of SCADA communication use cases with CR-Mesh network.
■Implementation of LoRaWAN-based FlashNet lighting use case.
This section, which discusses the various topologies used for solution validation and implementation, includes the following major topics:
■Solution Virtual Networks and Scalable Groups
This section describes the different deployment network topologies that have been validated in the CCI Solution Implementation.
Figure 2 depicts the CCI high level validation topology, including the endpoints for vertical use cases validated in this solution implementation:
Figure 2 CCI High Level Solution Validation Topology
The multiple layers of topology include:
1. Internet Cloud and Data Center layer, which includes:
–Network connectivity to Demilitarized Zone (DMZ) to access Internet/Applications on the cloud.
–A Headquarter or Data Center Site (HQ/DC Site), also known as the Application Servers Site.
2. Network backhaul layer interconnects PoPs and the Internet Cloud/Data Center layer with either the private enterprise Ethernet network or private MPLS network backhaul. Remote PoPs connect to the CCI network via cellular or private/public network backhaul.
3. Aggregation layer aggregates all PoPs traffic to the upper layers.
4. Ethernet access ring provides network access to gateways/endpoints validated in the solution.
5. Internet of Things (IoT) gateways and endpoints layer includes Access Points (AP) for Wi-Fi access, CURWB Access Points for Rail, access gateways based on access technologies (such as DSRC, LoRaWAN, and CR-Mesh), and their endpoints validated in this solution.
Two deployment models of the CCI solution have been validated during this implementation:
1. CCI network deployment topology with Cisco SD-Access Transit, henceforth referred to as SDA Transit interconnection of all sites. The validation is done over the Enterprise Ethernet Network backhaul (using Cisco Catalyst 9500 switches as the network core). This topology is depicted in Figure 3.
2. CCI network deployment topology with IP Transit interconnection of PoPs and headquarter sites with validation done over Private MPLS network backhaul, as shown in Figure 4.
Figure 3 SD-Access Transit with Enterprise Backhaul Network Topology
Note: In Figure 3, PoP1 with the C9500 SVL also supports connecting IE switches to only the nearest Catalyst 9500 stack member. This can occur when there are insufficient fiber pairs between the two physical locations where the stack members are housed; in this case, a Port Channel with a single member link is still used, automated by Cisco DNA Center.
Figure 4 IP Transit with MPLS Backhaul Network Topology
Network topologies validated in this CVD include FlexVPN tunnels that are configured for securing the communication between the Cisco 1240 Connected Grid Router and the HER in Cisco Smart Street Lighting solution use cases implemented on the CCI network.
For more details about fabric device roles (B-Border, CP-Control Plane, E-Edge, T-Transit, X-Extended Node) in the network topology, refer to the Cisco Connected Communities Infrastructure CVD Solution Design Guide.
This section captures the example IP addressing prefixes used in the solution lab topology, as shown in Figure 3.
Note: The IP addresses captured in this section are examples used only for the solution validation as internal sub-networks in the CVD lab. They provide a reference for selecting subnets for the solution implementation. It is recommended to choose a private network prefix/IP addressing scheme depending on the solution deployment and the devices connected to the CCI network.
Addressing Convention followed in the IP Subnet Selection
Four prefixes are used for the network subnets in the network topology (where X is the site ID chosen for a PoP site/transit site and the underlay network devices, if any):
■192.0.X.YY—Devices Loopback IP addresses prefix
■172.10.X.YY—Virtual Network (VN) subnets prefix
■192.168.X.YY—Fabric Overlay Border Handoff Network prefix
■192.100.X.YY—Fabric Extended Nodes IP Pool prefix
Note: Refer to IP Addressing of Solution Components for more details about IP addresses, including IP addresses used for underlay network connectivity for the network topologies, as shown in Figure 3, Figure 4, and Figure 19.
In the CCI implementation, a Virtual Network (VN) is used per vertical service. This macro-segmentation provides complete separation between services. One VN can communicate with another only by leaking routes between the VRFs at the fusion router. Table 2 provides an example list of VNs used in the CCI solution validation.
Example VNs for the Cities and Roadways applications include Safety and Security, Cisco Smart Street Lighting, Iteris, Schneider, and LoRaWAN. Further micro-segmentation within a virtual network is possible by using Scalable Group Tags (SGT). Table 2 also provides an example list of SGTs for micro-segmentation of the VN.
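As a point of reference, inter-VN route leaking at the fusion router can be sketched as follows. This is a minimal example with hypothetical VRF names (SnS_VN, Lighting_VN) and route-target values; on IOS XE fusion routers, route-target leaking takes effect under the BGP process.
vrf definition SnS_VN
 rd 1:100
 address-family ipv4
  route-target export 1:100
  route-target import 1:100
  ! leak routes from the Lighting VN into this VRF
  route-target import 1:200
 exit-address-family
!
vrf definition Lighting_VN
 rd 1:200
 address-family ipv4
  route-target export 1:200
  route-target import 1:200
  ! leak routes from the Safety and Security VN into this VRF
  route-target import 1:100
 exit-address-family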
This section covers the Cisco hardware and software component versions validated in this CCI solution implementation, for the CCI horizontal network and for CCI vertical-specific use cases such as Cities Safety and Security, Street Lighting, and Roadways, for the system validation topology shown in Figure 2.
It also captures the CCI vertical solution partner hardware and software components along with other third-party applications validated in this implementation.
Table 3 and Table 4 provide the list of Cisco components and the corresponding versions validated in the CCI Horizontal Network and Cities Safety and Security vertical use case applications:
Table 5 and Table 6 provide the list of Cisco components and their versions validated for the Street Lighting solution on the CCI network for the Cities vertical.
Table 7 and Table 8 provide the list of CIMCON components and their versions validated for the Street Lighting solution on the CCI network for the Cities vertical, along with other third-party applications used in this solution implementation:
Communication module, including CIMCON application middleware
Cloud application for integration with ThingPark Enterprise (TPE)
Note : Make sure to install licenses for each of the products in the CCI solution. Refer to the respective product’s installation/licensing guide for more details on product license activation.
Train Radio (1)
1. The Train Radio is not part of the trackside infrastructure. The FM 4500 resides on the train to communicate with the FM 3500 on the trackside.
The underlay network is defined by the switches and routers in the network that are used to deploy the SD-Access network. In CCI, the underlay must establish IP connectivity via the use of a routing protocol. Instead of using arbitrary network topologies and protocols, the underlay implementation for SD-Access uses a well-designed Layer 3 foundation inclusive of the campus edge switches (also known as a routed access design) to ensure performance, scalability, and high availability of the network. Before Cisco DNA Center can discover and manage the fabric devices, it must have this underlay network through which to reach them. This section covers example configurations for implementing the underlay network for CCI when CCI PoPs are interconnected via either Enterprise Ethernet backhaul or MPLS backhaul.
Note: The underlay network and routing configurations discussed in this section are example configurations used in the solution validation for the network topologies shown in Figure 3 and Figure 4 only. Depending on the CCI network deployment, you can choose to implement either or both of the network backhauls.
This section includes the following major topics:
■Configuring Enterprise Ethernet Network Underlay
■Configuring Network Underlay for MPLS Backhaul Network
Ethernet backhaul is one of the enterprise network backhaul deployment methods that can be implemented in the CCI horizontal network, as shown in Figure 3. The underlay network connectivity between shared services and all devices in each PoP site (including the HQ/DC site) is provided through the backhaul network. The underlay network configuration is a basic network connectivity prerequisite for implementing the fabric overlay network for the CCI solution using the Cisco DNA Center.
Many protocols are available to configure IP routing, but in this implementation EIGRP is used as an example routing protocol for configuring underlay network connectivity and IP routing across PoP Sites and shared services. Cisco DNA Center uses Border Gateway Protocol (BGP) as the routing protocol when a border node connects to an IP transit, which means the configuration co-exists with the underlay configuration.
In the CCI Solution, all fabric/PoP sites leverage the Cisco Catalyst 9300 switch stack as an aggregation/distribution layer switch for aggregating traffic from access rings. Switch stack ensures redundancy. A stack of Cisco Catalyst 9300 switches appears to the operator and the rest of the network as one single switch, making it easier to manage and configure. Newer switch models add stateful failover capability, providing similar behavior as a chassis with dual supervisors in case of a failure or the need to update software on the stack.
Cisco Catalyst 9300 switch stack configuration is the initial step for provisioning a PoP site network (along with redundancy) for the access rings and backhaul network connectivity in the CCI topology. Refer to the following URL for configuring Cisco Catalyst 9300 switches in a stack:
■ https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9300/software/release/17-6/configuration_guide/stck_mgr_ha/b_176_stck_mgr_ha_9300_cg/managing_switch_stacks.html
Alternatively, Cisco Catalyst 9500 Series switches can be used as the PoP site aggregation/distribution layer switch for aggregating traffic from access rings in CCI. Cisco Catalyst 9500 platform StackWise Virtual (SVL) technology allows the clustering of two physical switches, which may be geographically separated, into a single logical entity. The two switches operate as one; they share the same configuration and forwarding state. This technology allows for enhancements in all areas of network design, including high availability, scalability, management, and maintenance.
Cisco Catalyst 9500 switch SVL configuration is the initial step for provisioning a PoP site network (along with redundancy) for the access rings and backhaul network connectivity in the CCI topology.
One switch in the StackWise Virtual domain is elected as the central management point for the entire system when accessed via the management IP or console. The switch acting as the single management point is referred to as the SV active switch; its peer chassis is referred to as the SV standby switch. The SV standby switch is also considered a hot-standby switch, since it is ready to become the active switch and takes over all functions of the active switch if the active switch fails.
When the Catalyst 9500 SVL is used in the role of the Fabric-in-a-Box (FiaB) (Border + Control Plane + Edge), the connection to a Transit Site (for example, SD Access Transit switches) must be done with interfaces configured as a switchport trunk. A Switched Virtual Interface (SVI) is used for the Layer 3 configuration.
Refer to the section “How to Configure Cisco StackWise Virtual” for configuring Cisco Catalyst 9500 switches in a SVL Mode at the following URL:
https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9500/software/release/17-6/configuration_guide/ha/b_176_ha_9500_cg/configuring_cisco_stackwise_virtual.html
Cisco Catalyst 9500 switches are used to provide the Ethernet network backhaul for interconnecting PoP sites and Shared Services, and Data Center applications in the HQ PoP Site, as shown in Figure 3. The following configuration provides an example configuration to enable Cisco Catalyst 9500 switches for the underlay network routing (Layer 3) for the network topology, as shown in Figure 3.
Configure the Layer 3 interface for the underlay network on Cisco Catalyst 9500 switches:
Example Interfaces Configuration on Cisco Catalyst 9500-1 (Transit Site)
1. Loopback interface is configured on the device for Cisco DNA Center discovery:
2. Configure an interface as a trunk to a PoP Site Cisco Catalyst 9300 stack:
3. Configure an SVI interface (example: VLAN 200) for underlay reachability between Fusion Router 1 and Cisco Catalyst 9300 stack, which is FiaB:
4. Configure Layer 3 Port Channel between C9500 switches in Transit sites. On 9500-1:
5. EIGRP routing protocol is configured between fusion routers and Cisco Catalyst 9300 stack network devices to form neighbors:
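A minimal sketch of the corresponding 9500-1 CLI follows, assuming EIGRP AS 100 and addressing consistent with the conventions in IP Addressing of Solution Components; interface numbers and prefixes are placeholders.
! 1. Loopback for Cisco DNA Center discovery
interface Loopback0
 ip address 192.0.2.1 255.255.255.255
!
! 2. Trunk toward the PoP site Catalyst 9300 stack
interface TenGigabitEthernet1/0/10
 switchport mode trunk
 switchport trunk allowed vlan 200
!
! 3. SVI for underlay reachability between Fusion Router 1 and the FiaB
interface Vlan200
 ip address 192.168.200.1 255.255.255.0
!
! 4. Layer 3 port channel toward 9500-2 (member links shown later)
interface Port-channel13
 no switchport
 ip address 192.168.13.1 255.255.255.252
!
! 5. EIGRP neighbors with the fusion routers and the 9300 stack
router eigrp 100
 network 192.0.2.1 0.0.0.0
 network 192.168.200.0 0.0.0.255
 network 192.168.13.0 0.0.0.3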
Note : EIGRP is chosen as an example routing protocol for the underlay network routing configuration. Refer to the Cisco Connected Communities Infrastructure Design Guide for more details on recommended routing protocol for the underlay network routing configuration.
Example Interfaces Configuration on 9500-2 (Transit Site)
1. Loopback is configured on the device for Cisco DNA Center discovery:
2. Configure an interface on 9500-2 as a trunk to the Cisco Catalyst 9300 stack:
3. Configure an SVI interface (Example VLAN201) for underlay reachability between Fusion Router 2 and the Cisco Catalyst 9300 stack which is FiaB:
4. Configure Layer 3 Port Channel between C9500 switches in Transit sites. On C9500-2:
interface TenGigabitEthernet1/0/1
no switchport
no ip address
channel-group 13 mode active
end
!
interface TenGigabitEthernet1/0/2
no switchport
no ip address
channel-group 13 mode active
end
5. EIGRP routing configuration between fusion routers and Cisco Catalyst 9300 stack network devices to form neighbors:
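A matching sketch on C9500-2, mirroring the 9500-1 example with 9500-2's own loopback and the VLAN 201 subnet (all values are placeholders):
router eigrp 100
 network 192.0.2.2 0.0.0.0
 network 192.168.201.0 0.0.0.255
 network 192.168.13.0 0.0.0.3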
An example Layer 3 routing configuration on the PoP site network device (Cisco Catalyst 9300 stack or 9500 SVL) to reach the fusion routers and shared services network follows:
1. Loopback interface on the Cisco Catalyst 9300 stack for Cisco DNA Center discovery:
2. Configure interfaces on the Cisco Catalyst 9300 stack as trunk ports to fusion routers:
3. Configure SVI interfaces (example: VLAN200 and VLAN201) on the Cisco Catalyst 9300 stack to reach the fusion routers:
4. Configure EIGRP neighbors between Cisco Catalyst 9300 Stack and Cisco Catalyst 9500 switches (fusion routers):
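A minimal sketch of the FiaB side, under the same assumptions (EIGRP AS 100, VLANs 200/201, placeholder addressing):
! 1. Loopback for Cisco DNA Center discovery
interface Loopback0
 ip address 192.0.3.1 255.255.255.255
!
! 2. Trunks toward the fusion routers, one per stack member for resiliency
interface TenGigabitEthernet1/1/1
 switchport mode trunk
 switchport trunk allowed vlan 200
interface TenGigabitEthernet2/1/1
 switchport mode trunk
 switchport trunk allowed vlan 201
!
! 3. SVIs to reach the fusion routers
interface Vlan200
 ip address 192.168.200.2 255.255.255.0
interface Vlan201
 ip address 192.168.201.2 255.255.255.0
!
! 4. EIGRP neighbors with the Catalyst 9500 switches
router eigrp 100
 network 192.0.3.1 0.0.0.0
 network 192.168.200.0 0.0.0.255
 network 192.168.201.0 0.0.0.255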
Note: The configurations above are examples for the PoP1 site, as shown in Figure 3. The same must be applied to all PoP sites, including the HQ/DC site, to reach the shared services network so that devices can be successfully discovered in the Cisco DNA Center.
For all the network devices in a PoP site and fusion routers to reach the shared services network, configure the basic underlay routing between the fusion routers and shared services network. Refer to Figure 3 for the physical topology between the fusion router, Nexus, and the shared services network.
1. A pair of Nexus 5672UP switches in the HQ/DC site connecting to application servers is used for connecting the Cisco DNA Center appliance and the Cisco UCS server where other shared services applications are hosted. The following provides an example configuration (Layer 3) on the Nexus switches for connecting the shared services network to the fusion routers, as shown in Figure 3.
a. Configure an SVI interface (example: shared service VLAN1000) in Nexus-1 to reach the shared services network:
b. Configure interface for connectivity to Cisco DNA Center appliance enterprise network interface:
c. Configure interface for connectivity to CSR1KV:
a. Configure an SVI interface (VLAN1000) in Nexus-2 to reach the shared services network:
b. Configure Nexus-2 interface for connectivity to the CSR1KV:
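A minimal sketch of the Nexus-1 side, assuming shared services subnet 10.10.100.0/24 on VLAN 1000 (interface numbers are placeholders); Nexus-2 mirrors this with its own SVI address:
feature interface-vlan
!
vlan 1000
  name shared-services
!
! a. SVI for the shared services subnet
interface Vlan1000
  no shutdown
  ip address 10.10.100.2/24
!
! b. Access port to the Cisco DNA Center enterprise interface
interface Ethernet1/1
  switchport access vlan 1000
  no shutdown
!
! c. Trunk toward the CSR1000v fusion router
interface Ethernet1/2
  switchport mode trunk
  switchport trunk allowed vlan 1000
  no shutdown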
2. Configure Cisco CSR1000V (fusion routers) to reach the shared services network.
a. For the shared services network (10.10.100.X), configure sub-interfaces to reach Cisco DNA Center, DHCP, DNS, and ISE.
b. Cisco CSR1000v routers are configured as default routers for the shared services subnet with Next Hop Redundancy using the HSRP protocol. Configure HSRP to create gateway redundancy between the fusion routers for the shared services subnet. Example HSRP configuration on fusion routers:
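A minimal sketch on fusion router 1, assuming dot1Q sub-interfaces on GigabitEthernet2, VLAN 1000, and HSRP group 100 with virtual gateway 10.10.100.1 (all placeholders); fusion router 2 mirrors this with its own address and a lower priority:
! a. Sub-interface toward the shared services VLAN
interface GigabitEthernet2.1000
 encapsulation dot1Q 1000
 ip address 10.10.100.3 255.255.255.0
 ! b. HSRP virtual gateway for the shared services subnet
 standby 100 ip 10.10.100.1
 standby 100 priority 110
 standby 100 preempt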
3. Add the shared services network to the underlying EIGRP routing configuration on both fusion routers, as shown in the example below.
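For example, assuming EIGRP AS 100 and the 10.10.100.0/24 shared services subnet:
router eigrp 100
 ! advertise the shared services subnet into the underlay
 network 10.10.100.0 0.0.0.255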
Once the underlay routing configuration is complete for the Catalyst 9300 FiaB and fusion routers, connectivity to the shared services network (Cisco DNA Center, ISE, DHCP, WLC, Prime, etc.) must be verified.
Transit Control Plane (C9500-1) IP Routing Verification:
Ping Devices in Shared Services:
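For example (addresses are placeholders for the shared services hosts; output is omitted):
show ip route eigrp
ping 10.10.100.10    ! Cisco DNA Center
ping 10.10.100.20    ! ISE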
After successfully verifying the underlay connectivity from the Catalyst 9300 FiaB to the shared services, the edge fabric can start being provisioned.
In addition to a Layer 3 enterprise network deployment, an edge fabric site can also be connected to the data center fabric site through an MPLS backhaul network. This network could be deployed by the city operator or a separate service provider. In either case, the fabric border device will act as a customer edge (CE) router and the connecting router in the MPLS core will act as the provider edge (PE) router. For this testing, a Layer 3 Virtual Private Network (L3VPN) was implemented. Explaining the differences in MPLS implementations is outside the scope of this document. This implementation is one of many ways a service provider can separate one customer’s traffic from another.
Many ways exist for configuring a VRF-aware routing protocol between a PE and CE, but in this implementation eBGP was used. Cisco DNA Center only supports BGP as the routing protocol when a border node connects to an IP transit, which means the configuration can be combined with the underlay configuration. When the Catalyst 9300 is used in the role of the FiaB (Border + Control Plane + Edge), the connection to the PE must be done with an interface configured as a switchport trunk. An SVI is used for the Layer 3 configuration. For resiliency, another port on a different stack member can be connected to a different PE router.
Example Catalyst 9300 Configuration:
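A minimal sketch, assuming VLAN 300 for the VRF handoff, CE AS 65001, PE AS 65000, and a VN named SnS_VN; Cisco DNA Center automates much of the border handoff when the IP transit is provisioned:
! Trunk toward the MPLS provider edge
interface TenGigabitEthernet1/1/2
 switchport mode trunk
 switchport trunk allowed vlan 300
!
! Handoff SVI in the VN VRF
interface Vlan300
 vrf forwarding SnS_VN
 ip address 192.168.30.1 255.255.255.252
!
! eBGP to the PE, per VRF
router bgp 65001
 address-family ipv4 vrf SnS_VN
  neighbor 192.168.30.2 remote-as 65000
  neighbor 192.168.30.2 activate
 exit-address-family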
Example Provider Edge Configuration:
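A matching sketch on the PE under the same assumptions (the L3VPN route targets are placeholders):
! VRF for the customer VN
vrf definition SnS_VN
 rd 65000:100
 address-family ipv4
  route-target export 65000:100
  route-target import 65000:100
 exit-address-family
!
! Sub-interface toward the CE (FiaB)
interface GigabitEthernet0/0/1.300
 encapsulation dot1Q 300
 vrf forwarding SnS_VN
 ip address 192.168.30.2 255.255.255.252
!
router bgp 65000
 address-family ipv4 vrf SnS_VN
  neighbor 192.168.30.1 remote-as 65001
  neighbor 192.168.30.1 activate
 exit-address-family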
Note : Example VRF configuration is shown above for one VN. The configuration must be repeated if you add more VNs in the network.
Once the routing configuration is in place on the Catalyst 9300 FiaB and provider edge, connectivity to the shared services (Cisco DNA, ISE, DHCP, etc.) must be verified.
■Ping devices in shared services:
After successfully verifying the underlay connectivity from the Catalyst 9300 FiaB to the shared services, the edge fabric can start being provisioned.
When using Cisco Ultra-Reliable Wireless Backhaul (CURWB) in the backhaul to connect edge PoPs to the headquarters, it takes on the role of the underlay. Because the links act as invisible wires between the PoPs and the headquarters, they can be used as an SDA Transit. However, because they are wireless devices, additional consideration and configuration is needed for deployment. The inherent challenges of an RF environment necessitate a complete site survey before deploying the CURWB radios. Details of the site survey are outside the scope of this document. Using two different RF paths to provide higher throughput and resiliency for each PoP site is recommended, as is configuring the radios prior to the physical installation.
An example testbed is depicted below.
Figure 5 Multiple Wireless Backhaul Paths
In this deployment, two wireless paths are used to provide higher throughput and resiliency. Each PoP uses a routing protocol supporting Equal Cost Multipath (ECMP) which enables load balancing between the links. The effectiveness of the load balancing is dependent on the type of traffic and the load balancing algorithm chosen in the PoP border switch.
Plug-ins are the licenses installed on the radios that enable specific features. The plug-ins needed to enable the fixed infrastructure depend on the model chosen, the throughput needed, and whether the radios are in bridge mode or point-to-multipoint mode. The radios also require the VLAN plug-in to enable the correct VLAN processing and the AES plug-in to secure the wireless traffic. MPLS fast failover is enabled by installing the TITAN plug-in.
The radios can be configured in three different ways: 1) through RACER, 2) using the built-in web configuration tool, and 3) using the CLI. RACER and the CLI permit full configuration of all options, unlike the web configuration tool. RACER is the preferred tool for configuration because it can manage all the CURWB radios' configurations in a single dashboard.
Each radio is configured to operate in a specific mode based on its role in the network. In this deployment, the radios at the headquarters are configured as Mesh Ends and the radios installed at the PoP sites are Mesh Points. The Mesh End radio is responsible for connecting the mesh network to the LAN-connected backbone. Because the radios are configured as part of the network underlay, the management interface on all the Mesh Ends and Mesh Points must be configured in the same subnet. The configured passphrases must also match on a Mesh End and all its associated Mesh Points. This passphrase must be different from that of the other Mesh End and its Mesh Points, ensuring that the wireless networks are kept separate.
Figure 6 Mesh End Wireless Path A - General
Figure 7 Mesh End Wireless Path B - General
Figure 8 Mesh Point Wireless Path A – General
Figure 9 Mesh Point Wireless Path B – General
The wireless part of the radio is a separate configuration, and each path is configured on a separate non-overlapping frequency as determined by the site survey. Because the radios are operating in Point-to-Multipoint mode, there is the chance that Mesh Points could communicate at the same time, causing a collision. The FM3200 can operate in Time Division Multiple Access (TDMA) mode, which increases efficiency in the communication by reducing collisions, but the FM3500 can only operate in Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) mode. To reduce collisions, it is necessary to enable RTS/CTS on the FM3500 Mesh End radios.
Figure 10 Mesh End/Mesh Point – Wireless Radio
Because the Mesh Ends communicate with numerous PoP sites, they are also configured using FluidMAX. This allows the unit configured as “Master” (Mesh End in this case) to dictate the operating frequency to the radio units configured as “Slave” (Mesh Points).
Note: In the Advanced Radio Settings UI shown below, the Primary is called Master and the Secondary is called Slave. This feature cannot be configured in RACER for the FM3500; it can only be configured using the web Configurator or the CLI.
Figure 11 FluidMAX Primary/Master
Figure 12 FluidMAX Secondary/Slave
For this deployment, EIGRP was used as the underlay routing protocol, which uses the well-known reserved multicast address 224.0.0.10. To forward these messages to the other radios, the Mesh Ends must be configured with multicast routes. The following configuration sends the EIGRP update messages to all units in the mesh network.
Because the radios are all in the underlay network, the management VLAN can be common across all the radios. The other configurable VLAN option on the radio is the native VLAN. The native VLAN must be configured the same on the Mesh End and Mesh Points, while using the PoP border node to set the desired native VLAN. This ensures that any untagged packets coming into the wireless network do not inadvertently leave the radio with a VLAN tag. In the examples below, VLAN 145 is used for management and VLAN 555 is used as the native VLAN; VLAN 555 is not used elsewhere in the network. Note that if the native VLAN is set to 0, any untagged traffic will be dropped.
QoS can only be enabled through the RACER configuration or the CLI, not the web Configurator. Enabling QoS on the radio and leaving the marking and queueing to the connected switch is recommended. Enable this configuration on all Mesh Ends and Mesh Points. When 802.1p is enabled, the CURWB radio inspects the CoS value in the VLAN header as opposed to the DSCP value in the Layer 3 header.
Using multiple parallel wireless paths increases throughput and resiliency. Each radio network is therefore treated as a separate network path to a PoP site. In this deployment, wireless path A is assigned to VLAN 200 and wireless path B is assigned to VLAN 201. Each radio is connected to a trunk port that disallows the other path's PoP VLAN. The MTU is configured for the maximum size that the radios can pass. The MTU is also required on the SVI because EIGRP sends updates up to the maximum size allowed on the link. VLAN 145 is included to enable management of the radios.
Each VLAN has an associated SVI for Layer 3 reachability.
VLAN 200 and 201 are added to EIGRP to form neighbors with the other PoP sites connected wirelessly.
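A minimal sketch of the headquarters side, assuming example subnets 10.20.200.0/24 (path A) and 10.20.201.0/24 (path B), management VLAN 145, and EIGRP AS 100; interface numbers are placeholders:
system mtu 2044
!
! Trunk toward the path A radio (the path B trunk allows VLAN 201 instead)
interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 145,200
!
! SVIs for Layer 3 reachability over each wireless path
interface Vlan200
 ip address 10.20.200.1 255.255.255.0
 ip mtu 2044
interface Vlan201
 ip address 10.20.201.1 255.255.255.0
 ip mtu 2044
!
router eigrp 100
 network 10.20.200.0 0.0.0.255
 network 10.20.201.0 0.0.0.255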
At each PoP site, the VLAN for each wireless path must be configured. For sites with dual paths, this is VLAN 200 and 201. The interfaces facing the radio must also be set as trunks. When using the 9x00 as the border node, the MTU can be configured system wide for 2044.
Cisco DNA Center also requires a loopback for onboarding and management.
The underlay subnets are then added to the EIGRP process.
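A minimal sketch of a dual-path PoP border node, under the same assumptions as the headquarters example:
system mtu 2044
!
vlan 145,200,201
!
! Trunks facing the CURWB radios
interface GigabitEthernet1/0/23
 switchport mode trunk
 switchport trunk allowed vlan 145,200
interface GigabitEthernet1/0/24
 switchport mode trunk
 switchport trunk allowed vlan 145,201
!
! Loopback for Cisco DNA Center onboarding and management
interface Loopback0
 ip address 192.0.5.1 255.255.255.255
!
interface Vlan200
 ip address 10.20.200.2 255.255.255.0
 ip mtu 2044
interface Vlan201
 ip address 10.20.201.2 255.255.255.0
 ip mtu 2044
!
router eigrp 100
 network 192.0.5.1 0.0.0.0
 network 10.20.200.0 0.0.0.255
 network 10.20.201.0 0.0.0.255
!
! verify neighbors over both paths with: show ip eigrp neighbors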
Viewing the EIGRP neighbors confirms that the underlay is functioning correctly.
After the underlay network is functional and all required configuration for discovery is in place, the Discovery workflow can be used to onboard the device.
Onboarding and provisioning the newly-discovered switch is the same process as for a wired switch and requires no special configuration to support the CURWB connection. After the switch is provisioned to the fabric site, the border interfaces must be configured if an IP Transit is used. Each interface facing the CURWB radio is used as the External Interface.
Figure 16 Border External Interfaces
Because the PoP switch is connected to the headquarters through Layer 2, each VLAN configured for a VN must be unique at the headquarters site.
Figure 17 Border Interface-1 VN Configuration
Figure 18 Border Interface-2 VN Configuration
Through the use of multiple interfaces, the routing protocol can be configured for fast failover and load balancing. Bidirectional Forwarding Detection (BFD) is configured on the interfaces and within the BGP instance for the VRF associated with the VN. Load balancing is achieved using the maximum-paths command. These additions depend on having multiple interfaces.
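A minimal sketch of these additions, assuming a VN named SnS_VN, placeholder handoff SVIs, and eBGP neighbors at the headquarters:
! BFD on each handoff SVI toward the CURWB paths
interface Vlan3001
 bfd interval 300 min_rx 300 multiplier 3
interface Vlan3002
 bfd interval 300 min_rx 300 multiplier 3
!
router bgp 65001
 address-family ipv4 vrf SnS_VN
  neighbor 10.30.1.1 fall-over bfd
  neighbor 10.30.2.1 fall-over bfd
  ! install both eBGP paths for per-VRF load balancing
  maximum-paths 2
 exit-address-family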
IP Routing table with multiple paths
The headquarters core switch needs the complementary configuration on the interfaces and the BGP address family configuration. Upon completion, multiple paths are available for traffic between the edge PoP and headquarters, which can be used for load balancing and failover.
This section covers the implementation of services common to all fabric sites (PoPs) in CCI network, also called shared services. Shared services like Cisco DNA Center, ISE, Centralized Wireless LAN Controller (WLC), DHCP, and DNS, along with other CCI vertical-specific applications such as FND and Fog Director, must be reachable from each fabric/PoP site underlay network and overlay VN provisioned using the Cisco DNA Center.
This section includes the following major topics:
■Cisco DNA Center Installation and Initial Configuration
■Preparing Cisco Identity Service Engine for SD-Access
■Configuring DHCP and DNS Services
■Implementing Field Network Director for CCI
■Implementing Centralized Wireless LAN Controller for Cisco Unified Wireless Network
■Cisco Prime Infrastructure Installation and Configuration
■Cisco Cyber Vision Center Installation and Configuration
Cisco DNA Center offers centralized, intuitive management that makes it fast and easy to design, provision, and apply policies across your network environment. The Cisco DNA Center provides a centralized management dashboard for complete control of the CCI horizontal network.
Cisco DNA Center, which is a dedicated hardware appliance powered through a software collection of applications, processes, services, packages, and tools, is the centerpiece for Cisco® Digital Network Architecture (Cisco DNA™). This software provides full automation capabilities for provisioning and change management, reducing operations by minimizing the touch time required to maintain the network.
This section covers the installation and basic network configuration needed on the Cisco DNA Center for accessing its GUI in CCI deployment.
For step-by-step instructions for installing and configuring Cisco DNA Center, refer to the Cisco DNA Center Installation Guide, Release 2.2.3 at the following URLs:
Cisco DNA Center First Generation Appliance Installation Guide
■ https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/install_guide/1stgen/b_cisco_dna_center_install_guide_2_2_3_1stGen.html
Cisco DNA Center Second Generation Appliance Installation Guide
■ https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/install_guide/2ndgen/b_cisco_dna_center_install_guide_2_2_3_2ndGen.html
Cisco Identity Services Engine (ISE) is a policy-based access control system that enables enterprises, smart cities, and similar organizations to enforce compliance and infrastructure security. ISE is an integral part of Cisco SD-Access, acting as the authentication, authorization, and accounting (AAA) server for device identity management, access control, and enforcement of access policies on fabric devices.
In the CCI solution, ISE is coupled with the Cisco DNA Center for dynamic mapping of users and devices to scalable groups, which simplifies end-to-end security policy management and enforcement at a greater scale than traditional network policy implementations relying on IP access lists.
A centralized standalone deployment of ISE is configured with the Cisco DNA Center in the shared services network as shown in the network topology that is depicted in Figure 3. ISE can be installed in various ways; OVA deployment of ISE as a virtual machine is used in this implementation. Refer to the URL below for step-by-step instructions on installing ISE:
■ https://www.cisco.com/c/en/us/td/docs/security/ise/2-4/install_guide/b_ise_InstallationGuide24/b_ise_InstallationGuide24_chapter_011.html
If you prefer to deploy the latest compatible version of ISE, refer the following URL for ISE v3.0 Installation:
■ https://www.cisco.com/c/en/us/td/docs/security/ise/3-0/install_guide/b_ise_InstallationGuide30/b_ise_InstallationGuide30_chapter_3.html
Once the ISE installation is complete, install Patch 13 on ISE v2.4, which is compatible with Cisco DNA Center SD-Access, by completing the following steps:
1. Download the ISE patch bundle ise-patchbundle-2.4.0.357-Patch13-20080314.SPA.x86_64.tar.gz.
Note: Software downloads from the Cisco website require a registered Cisco account and Cisco software download access.
2. Log in to the ISE GUI and navigate to Administration-> Maintenance-> Patch Management.
3. Click Install, upload the patch file, and then click Install again. The installation takes about one hour, during which time ISE will not be available.
4. To verify that the patch is installed successfully, check Patch Management to see whether Patch 13 is listed, as shown in Figure 19.
Figure 19 Cisco ISE Patch Installation View
This completes the installation and relevant patch upgrade of ISE compatible with Cisco DNA Center Release 2.3.2.
Note: Refer to the Cisco SD-Access 2.3.2.x Hardware and Software Compatibility Matrix at the following URL for more details: https://www.cisco.com/c/dam/en/us/td/docs/Website/enterprise/sda_compatibility_matrix/index.html
Once ISE installation and basic configuration is complete, it has to be integrated with the Cisco DNA Center. Refer to the section Integrate Cisco ISE with Cisco DNA Center in the Cisco DNA Center Installation Guide Release 2.2.3 at the following URL:
■ https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/install_guide/2ndgen/b_cisco_dna_center_install_guide_2_2_3_2ndGen/m_complete_first_time_setup_2_2_3_2ndgen.html#task_ikj_pg3_sfb
Note: Before integrating ISE with the Cisco DNA Center, ensure that PxGrid services are online on ISE and that the cluster node is up in Cisco DNA Center.
Once integrated with Cisco DNA Center using PxGrid, information sharing between the two platforms is enabled, including device information and group information. This allows the Cisco DNA Center to define policies that are pushed to ISE and then rendered into the network infrastructure by the ISE Policy Service Nodes (PSNs). When integrating the two platforms, a trust is established through mutual certificate authentication. This authentication is completed seamlessly in the background during integration and requires both platforms to have accurate NTP time synchronization.
A DHCP Server is a network server that automatically provides and assigns IP addresses, default gateways, and other network parameters to client devices. It relies on the standard protocol known as Dynamic Host Configuration Protocol (DHCP) to respond to broadcast queries by clients.
DHCP services can be configured in the network in many ways. In this implementation, a centralized DHCP service running on a Windows 2016 server in the CCI network shared services is used. This section covers the example DHCP scope and IP pool definitions and discusses other scope options that are required for implementing SD-Access in the CCI network.
Refer to the step-by-step instructions on Microsoft Windows Server 2016: DHCP Server Installation & Configuration at the following URL:
■ https://social.technet.microsoft.com/wiki/contents/articles/51170.microsoft-windows-server-2016-dhcp-server-installation-configuration.aspx
After the DHCP server is successfully configured on the Windows 2016 server, create scopes in the DHCP server for all the IP pools configured on the Cisco DNA Center, with option 43 (example pools are the extended node and host node pools), as shown in Figure 20:
Figure 20 Example IP Scope and Scope Options in CCI Network
For more information on DHCP option 43, refer to the section DHCP Controller Discovery in the Cisco Digital Network Architecture Center User Guide, Release 2.2.3 at the following URL:
■ https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/user_guide/b_cisco_dna_center_ug_2_2_3/b_cisco_dna_center_ug_2_2_3_chapter_01101.html?bookSearch=true#id_90877
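For reference, for Cisco Plug and Play discovery of Cisco DNA Center, option 43 is typically an ASCII string of the following form, where the I field carries the Cisco DNA Center enterprise IP address (the address below is a placeholder):
option 43 ascii "5A1N;B2;K4;I10.10.100.10;J80"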
In this implementation, Domain Name System (DNS) servers in the CCI network shared services, running on a Windows 2016 server (co-located with the DHCP server), are used.
Refer to the following URL for step-by-step instructions and configuration of the DNS on the Windows 2016 server for the CCI network:
■ https://www.microsoftpressstore.com/articles/article.aspx?p=2756482
Cisco Field Network Director (FND) is an essential component for IoT solution deployments. In CCI, FND provides easier deployment and management of devices such as Field Area Routers (CGR), Connected Grid Endpoints (CGEs), and the IC3000 Industrial Compute Gateway. FND is a critical component of the FAN solution and interacts with most of the other components in the FAN solution.
For information about installing/configuring FND, refer to the following URL:
■ https://www.cisco.com/c/en/us/td/docs/routers/connectedgrid/iot_fnd/install/oracle/iot_fnd_oracle/installation_rpm_new_oracle.html
Note: FND with the Oracle database, which is used in this implementation, is needed for CGR mesh support.
■In the CCI network, the FND OVA (which includes Oracle for mesh management of CGR and IR5x devices) can be downloaded from the following link:
– https://software.cisco.com/download/home/286287993/type/286320249/release/4.5.1
Note: The image containing -v in its name should be used for mesh deployment.
Note: After download, use the iot-fnd-oracle-4.4.0-79.ova file to install the FND Application.
■FND is installed in the shared services network in CCI so that it is accessible by the FAR and other headend components. The installation steps can be found at the following link (refer to the sections "Prerequisites" and "Installing the OVA").
– https://www.cisco.com/c/en/us/td/docs/routers/connectedgrid/iot_fnd/install/ova/installation-ova-fnd-4-3-1.html#pgfld-1544292
■RHEL needs an active account with access to subscription management, needed for performing yum updates, yum install, and so on. Addressing these prerequisites is beyond the scope of this document. Please refer to Red Hat documentation.
■IP address configuration on two interfaces:
–One interface configured with the IP address of the FND.
–Another temporary interface providing Internet connectivity.
■The section “Implementing Field Network Director” in the FND implementation guide has detailed implementation information (you can skip the sections “Integrating FND with TPS Proxy” and “Integrating FND with FND-DB”) at the following URL:
– https://salesconnect.cisco.com/#/content-detail/da249429-ec79-49fc-9471-0ec859e83872
■After successful implementation, you should check the status of FND in the CLI:
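For example, assuming the default cgms service name used by FND on RHEL:
# verify that the FND (cgms) service is running
service cgms status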
In CCI, the IC3000 Industrial Compute Gateway connected to the edge switch via the management port learns about the FND via the DHCP server through option 43 and connects to the FND. Registration succeeds assuming the CSV file is uploaded to the FND and connectivity exists between the FND and the IC3000 Industrial Compute Gateway. As part of registration, FND enables the data ports for data traffic if enabled from the IC3000 Industrial Compute Gateway template under the FND.
For information about managing/deploying IC3000 Industrial Compute Gateway, refer to the Cisco IC3000 Industrial Compute Gateway Deployment Guide at the following URL:
■ https://www.cisco.com/c/en/us/td/docs/routers/ic3000/deployment/guide/DeploymentGuide.html
In CCI, Cisco Catalyst 9800 Series Wireless Controller (C9800-40) is configured as a Centralized Wireless LAN Controller (WLC) with High Availability (HA) for managing Cisco Unified Wireless Network (CUWN) with Wi-Fi mesh deployments in PoPs. Refer to the “Cisco Unified Wireless Network (CUWN) with Mesh” section in the Connected Communities Infrastructure Design Guide at the following URL for more details on the design:
■ https://www.cisco.com/c/en/us/td/docs/solutions/Verticals/CCI/CCI/DG/cci-dg.html
This section covers the initial installation and HA configuration of the C9800-40 WLC in the CCI shared services network. It applies if you are deploying CUWN wireless with the WLC centralized in shared services.
The Cisco Catalyst 9800-40 Wireless Controller is a 40-G wireless controller that offers a compact form factor, consuming less rack space and power while offering 40 Gbps of forwarding throughput. This section covers the installation and Day-0 configuration required to set up the C9800 WLC.
Refer to the following URL for rack mounting and installing the C9800-40 hardware:
■ https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/9800-40/installation-guide/b-wlc-ig-9800-40/installing-the-controller.html
Once the WLC is rack mounted, verify the following:
1. The network interface cable or the optional Management port cable is connected.
2. The chassis is securely mounted and grounded.
3. The power and interface cables are connected.
4. Terminal server is connected to the console port.
There are two modes in which an IOS XE software image on a Catalyst 9800 WLC can run: install mode and bundle mode.
Install mode uses files pre-extracted from the binary image into flash to boot the controller. The controller uses the packages.conf file created during the extraction as the boot variable.
The system works in bundle mode if the controller boots with the binary image (.bin) as the boot variable. In this mode, the controller extracts the .bin file into RAM and runs from there. Bundle mode uses more memory than install mode because the packages extracted during bootup are copied into RAM.
Note: Install mode is the recommended mode to run the wireless controller.
Boot the Controller in Install Mode:
Step 1: Make sure the controller boots from flash:packages.conf and that no other boot files are specified in the configuration.
Step 2: Install the software image to flash. The install add file bootflash:<image.bin> activate commit command moves the controller from bundle mode to install mode, where image.bin is the base image.
Step 3: Type yes to all the prompts. Once the installation is completed the controller proceeds to reload.
Step 4: After the controller boots up, verify the current installation mode of the controller by running the show version command.
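For reference, the commands involved look like the following; the image filename is a placeholder for the base image in use:
! Step 1: boot variable
WLC(config)# boot system flash:packages.conf
! Step 2: move from bundle mode to install mode
WLC# install add file bootflash:C9800-40-universalk9_wlc.17.06.01.SPA.bin activate commit
! Step 4: confirm the mode after reload
WLC# show version | include Installation mode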
For more details on WLC power up and initial configuration, refer to the following URL:
■ https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/9800-40/installation-guide/b-wlc-ig-9800-40/power-up-and-initial-configuration.html
Day-0 Manual Configuration Using the Cisco IOS-XE CLI:
The C9800-40 WLC is connected to the shared services network with a 10G link. The steps to access the WLC CLI and perform the initial configuration on the controller are provided below.
Step 1: Terminate the configuration wizard (this wizard is not specific to the wireless controller):
Step 2: Press Return and continue with the manual configuration.
Step 3: Press Return to bring up the WLC> prompt and type enable to enter privileged EXEC mode.
Step 4: Enter the config mode and set the hostname:
Step 5: Configure login credentials:
Step 6: Configure the VLAN for wireless management interface and shared services VLAN in CCI network.
Step 7: Configure the SVI for wireless management interface.
Step 8: Configure the interface TenGigabitEthernet0/0/1 as trunk:
Step 9: Configure a default route (or a more specific route) to reach the box:
Step 10: Disable the wireless network to configure the country code:
Step 11: Configure the AP country domain. This configuration triggers the GUI to skip the Day-0 flow, as the C9800 needs a country code to be operational:
Step 12: Specify the interface to be the wireless management interface:
Step 13: For the controller to be discovered by Cisco DNA Center or Prime Infrastructure, CLI, SSH, and SNMP credentials should be configured on the device along with NETCONF:
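A consolidated sketch of Steps 4 through 13, assuming shared services VLAN 1000 and placeholder addresses and credentials:
hostname C9800-40-WLC
username admin privilege 15 secret <password>
!
! wireless management VLAN and SVI
vlan 1000
interface Vlan1000
 ip address 10.10.100.50 255.255.255.0
 no shutdown
!
! uplink trunk to the shared services network
interface TenGigabitEthernet0/0/1
 switchport mode trunk
 switchport trunk allowed vlan 1000
!
ip route 0.0.0.0 0.0.0.0 10.10.100.1
!
! disable the wireless network before setting the country code
ap dot11 5ghz shutdown
ap dot11 24ghz shutdown
ap country US
!
wireless management interface Vlan1000
!
! credentials for discovery by Cisco DNA Center / Prime Infrastructure
netconf-yang
snmp-server community <ro-community> RO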
Verify that you can ping the wireless management interface, and then browse to https://<IP of the device wireless management interface>. Use the credentials you entered earlier. Because the box has a country code configured, the GUI skips the Day-0 page and you get access to the main dashboard for Day-1 configuration.
Access the C9800 Web UI using https://<IP_addr_of_C9800-40-WLC>. The username and password configured during the Day-0 configuration of the WLC must be used to log on to the WLC Web UI. Figure 21 shows the C9800-40 WLC Web UI dashboard view after a successful login.
Figure 21 Cisco 9800-L WLC Web UI Dashboard View
High availability (HA) has been a requirement on wireless controllers to minimize downtime in live networks. This section provides information on the theory of operation and configuration for the Catalyst 9800 Wireless Controller as it pertains to supporting stateful switchover of access points and clients (AP and Client SSO).
The redundancy explained in this document is 1:1: one of the boxes is in Active state while the other is in Hot Standby. If the active box is detected to be unreachable, the Hot Standby unit becomes Active, and all the APs and clients keep their service through the new active box.
Once both boxes are synchronized with each other, the standby 9800 WLC mirrors the configuration of the primary box. Any configuration change made on the active unit is replicated to the standby unit via the Redundancy Port (RP). Configuration changes are no longer allowed on the standby 9800 WLC.
Besides synchronizing the configuration, the pair also synchronizes APs in the UP state (not APs in downloading state or in DTLS handshaking), clients in the RUN state (meaning that if a client is in Web Authentication Required state when a switchover occurs, that client will have to restart its association process), and RRM configuration, along with other settings.
For more details on deployment and configuration, refer to the following URL:
■ https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/17-6/config-guide/b_wl_17_6_cg/m_vewlc_high_availability.html
High Availability Prerequisites:
■An HA Pair can only be formed between two wireless controllers of the same form factor
■Both controllers must be running the same software version in order to form the HA Pair
■Maximum RP link latency = 80ms RTT, minimum bandwidth = 60 Mbps and minimum MTU = 1500
Configure HA on 9800 WLC Hardware:
C9800-40-K9 Wireless controller has two RP Ports as shown in Figure 22.
Figure 22 C9800-40 WLC Front View
In Figure 22:
1. RJ-45 Ethernet Redundancy port
2. SFP Gigabit Redundancy port
The HA Pair always has one active controller and one standby controller. If the active controller becomes unavailable, the standby assumes the role of the active. The active wireless controller creates and updates all the wireless information and constantly synchronizes that information with the standby controller. If the active wireless controller fails, the standby wireless controller assumes the role of the active wireless controller and continues to keep the HA Pair operational. Access points and clients remain connected during an active-to-standby switchover.
Figure 23 C9800-40 WLC High Availability Network Topology
Redundancy SSO is enabled by default, but you still need to configure the communication between the boxes. Follow the step-by-step instructions below for deploying WLC in HA.
Step 1: Make sure both the C9800 WLCs are reachable to each other. Wireless management interface from both boxes must belong to the same VLAN and subnet (in our case connected to Nexus 5000).
Step 2: Connect both 9800 WLCs to each other through their RP ports.
There are two options to connect both 9800 WLCs to each other; choose the one that best fits your deployment. In this example implementation, the RJ45 Ethernet ports are connected.
1. Redundancy Port—RJ45 10/100/1000 redundancy Ethernet port, as shown in Figure 24.
Figure 24 C9800-40 WLC RJ45 Redundancy Ports Connection
2. Redundancy Port—SFP Gigabit redundancy ports, as shown in Figure 25:
Figure 25 C9800-40 WLC SFP Redundancy Ports Connection
Step 3: Provide the required redundancy configurations to both 9800 WLCs.
Step 4: On the WLC Web UI, navigate to Administration-> Device-> Redundancy. Enable "Redundancy Configuration", check "RP" for "Redundancy Pairing Type", and enter the desired IP address along with the Active and Standby chassis priorities. Each box should have its own IP address, and both should belong to the same subnet.
On the active controller, the priority is set to a higher value than on the standby controller. The wireless controller with the higher priority value is selected as the active during the active-standby election process. If you do not choose a specific box to be active, the boxes elect the active based on the lowest MAC address. The Remote IP is the standby controller's redundancy port IP address.
C9800-40 WLC1 and C9800-40 WLC2:
Figure 26 Redundancy Pairing on both C9800-40 WLCs
Step 5: Alternatively, switch to the C9800 WLC CLI and configure the chassis HA interface.
Step 6: Configure the priority of the specified device.
Step 7: Configure the peer keepalive timeout value.
Step 8: Configure the peer keepalive retry value before claiming the peer is down.
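A sketch of the equivalent CLI for Steps 5 through 8; the RP addressing, priority, and keepalive values are illustrative, not validated values:
WLC1#chassis redundancy ha-interface local-ip 169.254.10.1 255.255.255.0 remote-ip 169.254.10.2
WLC1#chassis 1 priority 2
WLC1#chassis redundancy keep-alive timer 5
WLC1#chassis redundancy keep-alive retries 3
! On WLC2, mirror the configuration with the RP IPs swapped and a lower priority:
WLC2#chassis redundancy ha-interface local-ip 169.254.10.2 255.255.255.0 remote-ip 169.254.10.1
WLC2#chassis 1 priority 1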
Step 9: Save the configurations on both 9800 WLCs and reboot both boxes at the same time.
Step 10: On the WLC Web UI, navigate to Administration-> Reload, select Save Configuration and Reload, and click Apply.
Step 11: Alternatively, switch to the WLC CLI and type reload at the CLI prompt.
Step 12: Verify the HA configuration on both WLCs. Once both 9800 WLCs have rebooted and are synchronized with each other, console into them and verify their current state with the CLI commands shown below.
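For example:
WLC1#show chassis
! Both chassis should be listed, one as Active and one as Standby
WLC1#show redundancy
! The current and peer software states should read ACTIVE and STANDBY HOT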
Enable Console Access to Standby 9800 WLC
Once HA is enabled and one of the boxes is assigned as active and the other as standby hot, by default you are not allowed to reach exec mode (enable) on the standby box. To enable it, log in via SSH or console to the active 9800 WLC and enter the following commands:
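A minimal sketch of the required commands:
WLC1#configure terminal
WLC1(config)#redundancy
WLC1(config-red)#main-cpu
WLC1(config-r-mc)#standby console enable
WLC1(config-r-mc)#end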
If you want to force a switchover between boxes, you can either manually reboot the active 9800 WLC or run the following command:
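From the active unit:
WLC1#redundancy force-switchover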
Cisco Prime Infrastructure (PI) will act as a dedicated Network Management Server (NMS) providing network device and client monitoring and reporting services. The solution will integrate WLCs and APs with the existing virtual PI deployment. All configuration for WLCs and APs can be deployed using PI with the aid of configuration templates.
This section describes how to configure and integrate Catalyst 9800 Series Wireless Controllers with Prime Infrastructure (3.7) which uses CLI, Simple Network Management Protocol (SNMP) and NETCONF. Configuration details for SNMPv2 and SNMPv3 are included.
PI 3.7 Virtual Appliance (VA) is installed in Shared Services network. Refer to the installation guide at the following URL which describes how to install Cisco Prime Infrastructure 3.7 as an OVA on VMware. Download the OVA file PI-VA-3.7.0.0.159.ova from Cisco.com. Verify the integrity of the OVA file using its checksum listed on Cisco.com.
■https://www.cisco.com/c/en/us/td/docs/net_mgmt/prime/infrastructure/3-7/quickstart/guide/bk_Cisco_Prime_Infrastructure_3_7_0_Quick_Start_Guide.html
Figure 27 Prime Infrastructure 3.7 Verification
Access the PI Web UI with the IP address configured:
Figure 28 Cisco Prime Infrastructure Web UI—Dashboard View
Managing Catalyst 9800 WLC with Prime Infrastructure Using SNMP v3 and NETCONF
In order for Prime Infrastructure to configure, manage, and monitor Catalyst 9800 Series Wireless LAN Controllers, it needs to be able to access Catalyst 9800 via CLI, SNMP, and NETCONF. When adding Catalyst 9800 to Prime Infrastructure, telnet/SSH credentials as well as SNMP community string, version, etc. will need to be specified. PI uses this information to verify reachability and to inventory Catalyst 9800 WLC. It will also use SNMP to push configuration templates as well as support traps for AP and client events. However, in order for PI to gather Access Point (AP) and client statistics, NETCONF is leveraged. NETCONF is not enabled by default on Catalyst 9800 WLC and needs to be manually configured.
For more details, refer to the following URL:
■ https://www.cisco.com/c/en/us/support/docs/wireless/catalyst-9800-series-wireless-controllers/214286-managing-catalyst-9800-wireless-controll.html
SNMPv2 Configuration on Catalyst 9800 WLC
Step 1. Navigate to Administration -> Management -> SNMP -> Slide to Enable SNMP.
Step 2. Click on Community Strings and create a Read-Only and a Read-Write community name.
SNMPv3 Configuration on Catalyst 9800 WLC
Note: As of IOS-XE 17.1, the web UI only allows creating read-only v3 users. Follow the CLI procedure to create a read-write v3 user.
Click V3 Users. Create a user; choose the AuthPriv, SHA, and AES protocols; and choose long passwords, as shown in Figure 29.
Figure 29 Cisco 9800-40 WLC SNMP Configuration
Note: The SNMPv3 user configuration is not reflected in the running configuration; only the SNMPv3 group configuration is seen.
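To create the read-write v3 user mentioned above from the CLI, a minimal sketch; the view, group, and user names and the passwords are illustrative placeholders:
C9800-40-WLC(config)#snmp-server view V3VIEW iso included
C9800-40-WLC(config)#snmp-server group SNMPGROUPRW v3 priv read V3VIEW write V3VIEW
C9800-40-WLC(config)#snmp-server user snmprw SNMPGROUPRW v3 auth sha <auth-password> priv aes 128 <priv-password>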
NETCONF Configuration on the Catalyst 9800 WLC:
Navigate to Administration -> Management -> HTTP/HTTPS/NetConf.
Note: If aaa new-model is enabled on the Cat9800, AAA login authentication and exec authorization also need to be configured for NETCONF, as sketched below.
NETCONF on the 9800 uses the default method (and this cannot be changed) for both aaa authentication login and aaa authorization exec. To define a different method for SSH connections, do so under the line vty command line; NETCONF keeps using the default methods.
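A minimal sketch of the NETCONF-related configuration, assuming local authentication:
C9800-40-WLC(config)#aaa new-model
C9800-40-WLC(config)#aaa authentication login default local
C9800-40-WLC(config)#aaa authorization exec default local
C9800-40-WLC(config)#netconf-yang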
Navigate to Configuration -> Interface -> Wireless.
Step 1. Capture the Wireless Management IP address configured on the Catalyst 9800 WLC.
Navigate to Administration -> User Administration.
Step 2. Capture the privilege 15 user credentials as well as enable password.
Step 3. Get the SNMPv2 community strings and/or SNMPv3 user as applicable.
For SNMPv2, Navigate to Administration-> Management-> SNMP-> Community Strings.
For SNMPv3, Navigate to Administration-> Management-> SNMP-> V3 Users.
Step 4. On the Prime Infrastructure GUI, navigate to Configuration-> Network: Network Devices, click the drop-down beside "+", and select Add Device.
Step 5. On the Add Device pop-up, enter the interface IP address on 9800 that will be used to establish communication with Prime Infrastructure.
Step 6. Navigate to SNMP tab and provide SNMPv3 details configured on Cat9800 WLC. From Auth-Type Drop-down match the previously configured authentication type and from Privacy Type Drop-Down select the encryption method configured on Cat9800 WLC.
Step 7. Navigate to the Telnet/SSH tab of Add Device and provide the privilege 15 username and password along with the enable password. Click Verify Credentials to ensure the CLI and SNMP credentials work. Then click Add, as shown in Figure 30.
Figure 30 Adding C9800 WLC to the PI
Step 1. Verify that NETCONF is enabled on Cat9800:
Step 2. Verify the telemetry connection to Prime Infrastructure from the Cat9800:
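A hedged verification sketch for Steps 1 and 2:
C9800-40-WLC#show netconf-yang status
! Output should include: netconf-yang: enabled, and netconf-yang ssh port: 830
C9800-40-WLC#show telemetry internal connection
! The connection to the Prime Infrastructure server IP should show state Active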
Step 3. On Prime Infrastructure, navigate to Inventory-> Network Devices-> Device Type and verify the status as shown in Figure 31.
Figure 31 C9800 WLC on the PI as a Managed Device
The Cisco Secure Network Analytics (Stealthwatch) system collects and analyzes flow telemetry generated by the network infrastructure for network and security visibility. The Flow Collector ingests enterprise telemetry such as NetFlow, IPFIX (Internet Protocol Flow Information Export), and other types of flow data from existing infrastructure such as routers, switches, firewalls, endpoints, and other network devices. Using flow telemetry, host behavior is monitored through continuous, automated behavioral analysis. The intelligence generated by Stealthwatch can be reported to both security and network operations staff to provide quick access to, and detailed analysis of, security and network events.
The main components of Cisco Stealthwatch system are:
■Stealthwatch Management Console (SMC)
■Stealthwatch Flow Collector (SFC)
For more information, see the Cisco Secure Network Analytics web page:
https://www.cisco.com/c/en/us/products/security/stealthwatch/index.html
The Stealthwatch Management Console (SMC) is an enterprise-level security management system that allows network administrators to define, configure, and monitor multiple distributed Stealthwatch Flow Collectors from a single location. This system provides flow-based security, network, and application performance monitoring across physical and virtual environments. With Stealthwatch, network operations and security teams can see who is using the network, what applications and services are in use, and how well they are performing. The SMC client software allows you to access the SMC’s user-friendly graphical user interface (GUI) from any local computer with access to a web browser.
Through the client GUI, you can easily access real-time security and network information about critical segments throughout your network.
The Stealthwatch Flow Collector (SFC) is responsible for collecting all NetFlow telemetry generated by a network’s flow-capable devices. This is the heart of the Stealthwatch system and where data normalization and analysis occurs.
The SMC and SFC are deployed as virtual appliances in the CCI shared services VLAN in the underlay network on an ESXi host. This section describes how to initialize the SMC and add a Flow Collector to the SMC.
For installing the SMC and SFC virtual appliances using VMware, refer to "Installing a Virtual Appliance using VMware" in:
■ https://www.cisco.com/c/dam/en/us/td/docs/security/stealthwatch/system_installation_configuration/SW_7_1_Installation_and_Configuration_Guide_DV_1_0.pdf
Step 1: Configuring IP addresses.
After you install the Stealthwatch VE appliances (both SMC and SFC) using VMware, you are ready to configure the basic virtual environment for them. In CCI network, we deployed the OVA file and powered up the VM. After the initial boot, it will ask you to enter the IP address, subnet, broadcast address, and gateway you would like to use. After you configure this, it will restart again.
For IP address configuration refer to “Configuring the IP Addresses” in:
■ https://www.cisco.com/c/dam/en/us/td/docs/security/stealthwatch/system_installation_configuration/SW_7_1_Installation_and_Configuration_Guide_DV_1_0.pdf
Figure 32 Stealthwatch System Configuration
After the VM restarts, you are shown a login prompt. The default username/password is sysadmin/lan1cope. Log in and change the default password if you want.
Note: You'll have to do the following setup for both the SMC and the SFC.
Step 2: Configuring the appliances.
Open up a browser and navigate to https://<ip-addr-of-SMC>.
You can log in to this page with the default username/password of admin/lan411cope. After initially signing in, you see the welcome screen shown in Figure 33.
Figure 33 Stealthwatch Appliance Setup Tool
To configure the appliance, refer to “Configuring Your Appliances” in:
■ https://www.cisco.com/c/dam/en/us/td/docs/security/stealthwatch/system_installation_configuration/SW_7_1_Installation_and_Configuration_Guide_DV_1_0.pdf
Figure 34 Stealthwatch Management Console Appliance Configuration
Note: You will have to do the appliance configurations for both the SMC and the SFC.
Step 3: Configure your Flow Collectors for Central Management.
To configure your Flow Collector so it communicates with your primary SMC/Central Manager, refer to “Configure your Flow Collectors for Central Management” in:
■ https://www.cisco.com/c/dam/en/us/td/docs/security/stealthwatch/system_installation_configuration/SW_7_1_Installation_and_Configuration_Guide_DV_1_0.pdf
Figure 35 Stealthwatch Flow Collector Appliance Configuration
After you configure an appliance in the Appliance Setup Tool and configure the SFC for Central Management, confirm the appliance status in Central Management: log in to your primary SMC, click the Global Settings icon, and select Central Management.
Confirm the appliance is shown in the inventory and the status for the appliance is shown as Up.
In the CCI network, NetFlow is enabled on the Cisco IE switches (IE4000, IE5000, IE3400, and IE3300) in the ring to monitor network flows. Using Cisco DNA Center templates, NetFlow can be enabled on the CCI devices (a representative template body is sketched below).
Refer to the following URL for details about the Cisco DNA Center Template Editor:
https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/user_guide/b_cisco_dna_center_ug_2_2_3/b_cisco_dna_center_ug_2_2_3_chapter_01000.html
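A Flexible NetFlow template for the IE switches could look like the following sketch; the record fields, the collector address (an assumed shared services IP), the UDP port, and the interface are illustrative assumptions, not the validated template:
flow record CCI_FNF_RECORD
 match ipv4 protocol
 match ipv4 source address
 match ipv4 destination address
 match transport source-port
 match transport destination-port
 collect counter bytes long
 collect counter packets long
!
flow exporter CCI_FNF_EXPORTER
 destination 10.10.100.50
 transport udp 2055
!
flow monitor CCI_FNF_MONITOR
 record CCI_FNF_RECORD
 exporter CCI_FNF_EXPORTER
 cache timeout active 60
!
interface GigabitEthernet1/1
 ip flow monitor CCI_FNF_MONITOR input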
You can verify the traffic flow monitoring on the SMC dashboard.
Figure 37 Stealthwatch Management Console Dashboard
This section describes the steps to integrate the Cisco Stealthwatch Management Console (SMC) and Cisco Identity Services Engine (ISE) using pxGrid. Once integrated with ISE, the SMC learns the user session information (IP address/username bindings), static TrustSec mappings, and Adaptive Network Control (ANC) mitigation actions for quarantining endpoints.
Step 1: Generating certificates.
To connect Stealthwatch and Cisco ISE, certificates must be deployed correctly for trusted communication between the two systems. Deploying certificates requires that you use several different product or application interfaces: the SMC Web App, the Central Management interface, and the Cisco ISE Server management portal. Starting with v7.0, Stealthwatch only imports client certificates created with a Certificate Signing Request (CSR) generated from Stealthwatch Central Management to connect to ISE pxGrid node.
The recommended method of deploying certificates is to use the ISE internal Certificate Authority (CA). This option is only available with ISE 2.2 and above.
To deploy certificates using the ISE internal CA, refer to “Using ISE Internal CA” in:
■ https://community.cisco.com/t5/security-documents/deploying-cisco-stealthwatch-7-0-with-cisco-ise-2-4-using-pxgrid/ta-p/3793357?attachment-id=165804
Figure 38 Client Identity in SMC
Step 2: Configuring ISE pxGrid integration.
To configure Stealthwatch to successfully connect, register, and subscribe to the ISE pxGrid node, refer to “Configuring ISE pxGrid Integration” in:
■ https://community.cisco.com/t5/security-documents/deploying-cisco-stealthwatch-7-0-with-cisco-ise-2-4-using-pxgrid/ta-p/3793357?attachment-id=165804
Figure 39 Stealthwatch Integration with ISE
Step 3: Applying ISE Adaptive Network Control (ANC) policies.
ISE ANC policies align with an organization's security policies. For example, when malware or a breach is detected, the organization may investigate further by providing segmented network access or, if the threat is more severe and capable of propagating through the network, the IT administrator may want to shut down the port.
Possible ANC actions are: quarantine (Change of Authorization), port shut, and port bounce. These ANC policies are then used as condition rules in ISE authorization policies to enforce the organization's security policy.
To create ISE ANC policies and associate them with Stealthwatch, refer to "ISE Adaptive Network Control (ANC) Policies" in:
■ https://community.cisco.com/t5/security-documents/deploying-cisco-stealthwatch-7-0-with-cisco-ise-2-4-using-pxgrid/ta-p/3793357?attachment-id=165804
Figure 40 ISE ANC Policy on Stealthwatch
Cisco Stealthwatch provides comprehensive network visibility and threat detection for accelerated incident response. For more information, see:
■ https://community.cisco.com/t5/security-documents/stealthwatch-use-cases/ta-p/3611837
Use the Stealthwatch Downloading and Licensing Guide to activate licenses on your appliances:
■ https://www.cisco.com/c/en/us/support/security/stealthwatch/products-licensing-information-listing.html
This section describes the deployment of Cisco Cyber Vision Center (CVC) in Shared Services.
The Cyber Vision Center can be deployed as a virtual machine (VM) or as a hardware appliance. In this deployment, the standalone Cyber Vision Center is deployed as a VM on a Cisco Unified Computing System (UCS) in the CCI Shared Services network.
For step-by-step instructions on installation and resource recommendations for CVC, refer to the Cisco Cyber Vision Center VM Installation Guide at the following URL:
https://www.cisco.com/c/dam/en/us/td/docs/security/cyber_vision/Cisco_Cyber_Vision_Center_VM_Installation_Guide_4_0_0.pdf
It is recommended to install the Cyber Vision Center application in the CCI Shared Services network with dual interfaces: one for management and the other for sensor communication. An example of the IP addressing schema used in the CVC installation is shown below.
■Admin Interface (eth0): 10.104.206.225 (Routable IP address for CVC UI access)
■Collection interface (eth1): 10.10.100.33 (shared services network IP)
■Collection network gateway: 10.10.100.1 (shared services gateway)
Refer to the section “Cisco Cyber Vision Operational Technology (OT) Flow and Device Visibility Design” in the CCI General Solution Design Guide for the detailed design and deployment considerations for CVC, Network Sensors on IE3400 and IE3300-X series switches, and the IR1101 for RPoP in a CCI deployment.
https://www.cisco.com/c/en/us/td/docs/solutions/Verticals/CCI/CCI/DG/General/cci-dg/cci-dg.html
This section covers the implementation of a CCI PoP site (also known as a fabric site) with Cisco DNA Center SD-Access. A fabric overlay network is provisioned on the underlay network of each PoP/fabric site, following the PoP/fabric site construct defined in the CCI Solution design.
This section includes the following major topics:
■Preparing Cisco DNA Center for PoP Site Provisioning
■Discovering Devices in the Network
■Provisioning Devices in SD-Access
■Provisioning Fabric Overlay Network
■Implementing Wireless LAN Controller in a PoP
Note: The implementation steps for the SD-Access Network deployment that are covered in this section provide a summary of steps to be followed along with example configurations used for implementing fabric sites for CCI network topologies discussed in the section Deployment Topology Diagrams. For detailed step-by-step instruction for SD-Access deployment, refer to the following URL:
■ https://www.cisco.com/c/dam/en/us/td/docs/solutions/CVD/Campus/sda-fabric-deploy-2019oct.pdf
In the Cisco DNA Center, the “Design” area is where you create the structure and framework of your network, including the physical topology, network settings, and device type profiles that you can apply to devices throughout your network. Create a network hierarchy of areas, buildings, and floors that reflect the physical deployment. In later steps, discovered devices are assigned to respective PoP sites in Cisco DNA Center GUI, so that they are displayed hierarchically in the topology maps.
To prepare to design your Cisco DNA Center for CCI network fabric implementation, refer to the chapter “Design Network Hierarchy & Settings” in the Cisco Digital Network Architecture Center User Guide, Release 2.2.3 at the following URL:
■ https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/user_guide/b_cisco_dna_center_ug_2_2_3/b_cisco_dna_center_ug_2_2_3_chapter_0110.html
1. Creating network hierarchy.
In this implementation, three fabric sites are created, as shown in Figure 3 and Figure 4, for the various deployment models of the network topologies with IP transit and SD-Access transit fabric interconnection. Example fabric sites with the names MGRoad, Hebbal, and Elex City are configured for PoP1 Site, PoP2 Site, and PoP3 Site, respectively. In Figure 41, the sites named Cessna and Koramangala are configured as the HQ/DC site and the SDA transit site, respectively.
Note: In CCI deployment, a PoP site can be mapped to an area with a building under that area in Cisco DNA Center network hierarchy. By creating buildings, you can apply settings to a specific area or a PoP site.
For more details about Network Hierarchy and steps to configure the hierarchy of PoP sites and HQ/DC Site, refer to the following URL:
■ https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/user_guide/b_cisco_dna_center_ug_2_2_3/b_cisco_dna_center_ug_2_2_3_chapter_0110.htm
Figure 41 shows an example Network Hierarchy under the Design tab in the Cisco DNA Center user interface for the CCI network implementation:
Figure 41 Example CCI Network Hierarchy View in Cisco DNA Center
2. Configuring network settings.
Set up network properties such as AAA, DHCP, DNS, and NTP for the CCI network. Cisco DNA Center will configure the network settings on the devices while provisioning the discovered devices in the fabric.
Refer to the following sections to configure network settings:
–Manage Global Network Settings:
–Configure Global Network Servers:
Figure 42 shows example network settings configured in the Cisco DNA Center for the CCI network topology:
Figure 42 Example Global Network Settings View in Cisco DNA Center
3. Setting device credentials for device discovery.
Device credentials refer to the CLI, SNMP, and HTTPS credentials that are configured on network devices. Cisco DNA Center uses these credentials to discover and collect information about the devices in your network. Configure global or site-level device credentials to discover all the network devices in the CCI network for fabric/PoP site provisioning.
Refer to the following sections for configuring device credentials in the Cisco DNA Center:
–About Global Device Credentials:
–Configure Global CLI Credentials:
–Configure SNMPv3 Credentials:
4. Configuring IP address pools.
IP address pools that will be used for fabric infrastructure provisioning, extended nodes in the CCI network, and the data network are manually defined and configured in Cisco DNA Center, which reserves the pools as a visual reference for use in fabric sites (PoPs). In this implementation, a Windows DHCP server is used.
Alternatively, you can integrate third-party IP Address Manager (IPAM) servers with Cisco DNA Center to reduce IP address management tasks. IPAM integration with Cisco DNA Center provides:
–Access to existing IP address scopes, referred to as IP address pools in Cisco DNA Center.
–When configuring new IP address pools in Cisco DNA Center, the pools populate to the IPAM server automatically.
To integrate IPAM server to Cisco DNA Center, refer to the “Configure an IP Address Manager” section at the following URL:
Refer to the following sections for adding and reserving IP address pools as needed in CCI network deployment:
Figure 43 and Figure 44 show example IPv4 address pools with global network prefixes and reserved IP pools in a site for fabric border network handoff, extended node, and data networks on fabric overlay VNs.
Figure 43 Example Global IPv4 Address Pools View in Cisco DNA Center
Figure 44 Example IPv4 Address Pools Reserved in a Fabric/PoP Site
This completes the initial preparation of Cisco DNA Center for devices discovery and fabric site provisioning.
Cisco DNA Center is used to discover and manage the SD-Access underlay network devices that are compatible with the Cisco DNA Center. For the list of the devices supported by Cisco DNA Center, refer to the following URL:
■ https://www.cisco.com/c/en/us/solutions/enterprise-networks/software-defined-access/compatibility-matrix.html
To discover equipment in the network, the appliance must have IP reachability to these devices, and CLI and SNMP management credentials must be configured on the devices. Once discovered, the devices are added to Cisco DNA Center's inventory, allowing the controller to make configuration changes through provisioning.
1. For the network devices to be discovered by Cisco DNA Center, CLI and SNMP credentials matching those configured in Cisco DNA Center in the previous section should be configured on the devices.
a. Configure CLI SSH user credentials on the network device.
b. Configure SNMPv3 credentials on the network device.
c. Enable SSH version 2 access on the network device.
A consolidated example for a Cisco Catalyst 9300 switch stack used in this implementation is shown below.
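In the following sketch, the usernames, group names, and passwords are illustrative placeholders:
C9300(config)#username dna privilege 15 secret <password>
C9300(config)#enable secret <enable-password>
C9300(config)#snmp-server group DNAGROUP v3 priv
C9300(config)#snmp-server user dnauser DNAGROUP v3 auth sha <auth-password> priv aes 128 <priv-password>
C9300(config)#ip ssh version 2
C9300(config)#line vty 0 15
C9300(config-line)#login local
C9300(config-line)#transport input ssh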
Repeat the above configurations on all the network devices in the network to be discovered by the Cisco DNA Center.
2. For detailed step-by-step instructions on discovering all the devices in the CCI network in Cisco DNA Center, refer to the chapter "Discover Your Network" in the Cisco Digital Network Architecture Center User Guide, Release 2.2.3 at the following URL:
– https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/user_guide/b_cisco_dna_center_ug_2_2_3/b_cisco_dna_center_ug_2_2_3_chapter_010.html
Once device discovery is successful, all the devices are added to Cisco DNA Center Inventory, as shown in the example in Figure 45:
Figure 45 Example List of Discovered Devices in Cisco DNA Center Inventory
Once the devices are discovered and managed in the Cisco DNA Center inventory, devices have to be provisioned to the sites for SD-Access Deployment.
For more details and step-by-step instruction for provisioning devices in SD-Access site, refer to the following section in the Software-Defined Access for Distributed Campus Deployment Guide at the following URL:
■ https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/SD-Access-Distributed-Campus-Deployment-Guide-2019JUL.html#_Toc13487388
To assign devices to sites and provision them, click "Process 5: Deploying SD-Access with the Provision Application" and follow Procedures 1 and 2 at the following URL:
■ https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/SD-Access-Distributed-Campus-Deployment-Guide-2019JUL.html#_Toc13487382
Once the devices are provisioned to the sites, all the details added in the network settings like AAA, NTP, DHCP, and DNS are configured on the devices by Cisco DNA Center.
Once devices are provisioned to a site, the fabric overlay workflows can begin. This starts through the creation of transits, the formation of a fabric domain, and the assignment of sites, buildings, and/or floors to this fabric domain.
A fabric domain is configured in Cisco DNA Center for a fabric overlay network. After adding the sites to the network hierarchy, the network sites have to be made part of a fabric domain. Once the fabric domain is added, add the transit networks (IP transit, SDA transit, or both) for interconnecting multiple fabric sites (PoPs). In this implementation, both transit network types (IP transit and SD-Access transit) are validated for the network topologies, as shown in Figure 3 and Figure 4.
Depending on your network deployment and backhaul network for interconnecting fabric sites, you can choose to deploy either IP Transit or SD-Access Transit as applicable:
1. For provisioning the fabric domain and creating an IP-based transit network in Cisco DNA Center, click "Process 6: Provisioning the Fabric Overlay" and follow Procedures 1, 3, and 4 in the Software-Defined Access for Distributed Campus Deployment Guide at the following URL:
– https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/SD-Access-Distributed-Campus-Deployment-Guide-2019JUL.html#_Toc13487387
2. Optionally, follow Procedure 2 to create an SD-Access transit network for an example network topology, as shown in Figure 3.
Note: For the SD-Access transit network, it is required to assign the transit control plane nodes (example: Cisco Catalyst 9500 switches) to a site that will be provisioned as a SD-Access transit site, by completing the steps mentioned in Provisioning Devices in SD-Access.
Figure 46 shows an example fabric domain, with the IP-based transit and SD-Access transit networks created for the network topologies in Figure 3 and Figure 4.
Figure 46 Example Fabric Domain, IP-based, and SD-Access Transit Networks
Once the sites are added to the fabric domain in Cisco DNA Center, the Cisco Catalyst 9300 stack added to a site is provisioned with fabric roles. A fabric overlay consists of three different fabric nodes: control plane node, border node, and edge node. To function, a fabric must have an edge node and a control plane node; this allows endpoints to send packets across the overlay to communicate with each other (policy permitting). The border node allows communication from endpoints inside the fabric to destinations outside of the fabric, along with the reverse flow from outside to inside.
In the CCI network fabric site (PoP), a switch stack (Cisco Catalyst 9300) is configured with all the fabric roles (border, control plane, and edge), called Fabric in a Box (FiaB). The fabric is provisioned with an overlay VN; that is, macro-segmentation for the overlay network is defined. (Note that the overlay network is not fully created until the host onboarding stage.) This process virtualizes the overlay network into multiple self-contained VNs.
In the CCI network, VNs and SGTs are created for each vertical use case overlaid on the CCI network. An example list of VNs created in this implementation is available in Table 2.
1. Create VNs. To create VNs in the fabric as needed, refer to Procedure 1 under "Process 4: Creating Segmentation with the Cisco DNA Center Policy Application" in the Software-Defined Access for Distributed Campus Deployment Guide:
https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/SD-Access-Distributed-Campus-Deployment-Guide-2019JUL.html#_Toc13487379
2. Associate VN to Fabric Site
IP address pools enable host devices to communicate within the fabric site. Associate IP address pools for endpoint data traffic in the overlay VN.
Follow the steps in Virtual Network section under the chapter “Provision Fabric Networks” to create VNs and associate IP address pools to a VN:
https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/user_guide/b_cisco_dna_center_ug_2_2_3/b_cisco_dna_center_ug_2_2_3_chapter_01110.html#id_50854
Note: Select No Authentication as the default Authentication Template for the fabric site.
Figure 47 shows an example VN and overlay IP pools associated with a VN (SnS_VN) in the CCI network:
Figure 47 Example Virtual Network and IP Pools Association in CCI Network
Similarly, associate an IP pool in the fabric default INFRA_VN for Extended Node IP addressing.
3. Provisioning Fabric-in-a-Box (FiaB)
Once the VNs are associated with the fabric sites, provision a Cisco Catalyst 9300 switch stack as FiaB in the CCI fabric (PoP) site. Configure a Layer 3 handoff that extends the fabric VNs to the next hop (the fusion router in the case of IP-based transit, or the transit control plane node or intermediate network device in the case of SD-Access transit). This allows the endpoints in the fabric to access shared services once the fusion router configuration is completed.
Complete the following steps to provision a Cisco Catalyst 9300 switch stack in a fabric/PoP site as FiaB for the SD-Access transit network topology, as shown in Figure 3:
a. In Cisco DNA Center, navigate to Provision-> Fabric.
b. Select the Fabric Enabled Site (Bangalore) that was created.
c. Select the PoP site (MGRoad) from the fabric-enabled sites in the left pane.
d. Select the device to be provisioned as a FiaB. A slide pane appears.
e. On the slide pane, select the roles Edge node, control plane, and border node, as shown in Figure 48.
Figure 48 Example FiaB Roles Provisioning View
f. Click Configure next to the Border Role, configure the local autonomous system number for the site, and select the Layer 3 handoff network pool associated with the site.
g. Under the Transit/Peer networks, enable the option Default to all Virtual networks and then select the transit site. In this case, we used SD Access Transit.
h. Select the Transit Control Plane devices and then click Add.
Example provisioning of the border external interfaces for SD-Access transit is shown in Figure 49.
Figure 49 Example FiaB Border Configuration for SD-Access Transit in CCI Network
i. Once done, click Add and Save to provision the FiaB, and wait for the successful fabric provisioning message in the Cisco DNA Center UI.
j. Verify the FiaB provisioning in Cisco DNA Center UI for the network site. No errors should be reported in the Fabric Infrastructure map view of the FiaB.
Alternatively, if you are deploying an IP transit-based network topology, as shown in Figure 4, you need to configure the FiaB border to connect to the IP transit network created in Step 1 of Configuring Fabric Domain and Transit Network(s).
Refer to the following URL for steps to create the IP Transit network in Cisco DNA Center:
– https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/user_guide/b_cisco_dna_center_ug_2_2_3/b_cisco_dna_center_ug_2_2_3_chapter_01110.html#id_75992
This completes the FiaB provisioning or fabric role assignment in the fabric overlay network for a PoP site connected to either the SD-Access Transit or IP-based Transit network.
In CCI, either a Cisco Catalyst 9800 Series Wireless Controller (C9800-L) or a Cisco Catalyst 9300 Series switch stack with embedded wireless controller can be configured. The C9800-L WLC manages Cisco Unified Wireless Network (CUWN) Wi-Fi access mesh and non-mesh deployments. Alternatively, an embedded WLC on a C9300 switch stack can be deployed for managing SD-Access Wireless (Wi-Fi) networks. Refer to the "CCI Wi-Fi Access Network Solution" section in the Connected Communities Infrastructure Design Guide at the following URL for more details on the CCI Wi-Fi design.
■ https://www.cisco.com/c/en/us/td/docs/solutions/Verticals/CCI/CCI/DG/cci-dg.html
Cisco Catalyst 9800-L Wireless Controller can be configured as a per-PoP Wireless LAN Controller (WLC) with High Availability (HA) for managing CUWN Wi-Fi networks within a PoP. The Cisco Catalyst 9800-L is the first low-end controller that provides a significant boost in performance and features over the Cisco 3504 Wireless Controller. This section covers the initial installation and HA configuration of the C9800-L WLC in a CCI PoP.
For more details on C9800-L controller, refer to the following URL:
■ https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/9800-L/installation-guide/b-wlc-ig-9800-L/overview.html
For rack mounting and installing the C9800-L hardware, refer to the following URL:
■ https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/9800-L/installation-guide/b-wlc-ig-9800-L/Installing-the-Cisco-Catalyst-9800-L-Wireless-Controller.html
Once the WLC is rack mounted, verify the following:
1. The network interface cable or the optional management port cable is connected.
2. The chassis is securely mounted and grounded.
3. The power and interface cables are connected.
4. The terminal server is connected to the console port.
Note: Install mode is the recommended mode to run the wireless controller.
Boot the controller in INSTALL mode:
Step 1: Make sure to boot from flash:packages.conf (and that no other boot files are specified in your configuration).
Step 2: Install the software image to flash. The install add file bootflash:<image.bin> activate commit command moves the controller from bundle mode to install mode, where image.bin is the base image.
Step 3: Type yes to all prompts. When the installation is complete, the controller reloads.
After the controller reboots, you can verify the current installation mode of the controller. Run show version to confirm.
For more details on WLC power up and initial configuration, refer to the following URL:
■ https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/9800-L/installation-guide/b-wlc-ig-9800-L/Power-Up-and-Initial-Configuration.html
Day-0 Manual Configuration Using the Cisco IOS-XE CLI:
The C9800-L WLC is connected to the MGRoad PoP site C9300 FiaB switch stack over a 10G link.
This section shows how to access the CLI to perform the initial configuration on the controller.
Step 1: Terminate the configuration wizard (this wizard is not specific for wireless controller):
Step 2: Press Return and continue with the manual configuration.
Step 3: Press Return to bring up the WLC> prompt and type enable to enter privileged EXEC mode:
Step 4: Enter the config mode and set the hostname:
Step 5: Configure login credentials:
Step 6: Configure the underlay VLAN for the wireless management interface. For example, VLAN 199, IP address 10.10.199.199, with gateway 10.10.199.1 in the underlay EIGRP 2000 AS:
Step 7: Configure the SVI for wireless management interface:
Step 8: Configure the interface TenGigabitEthernet0/0/0 as trunk:
Step 9: Configure a default route (or a more specific route) to reach the box:
Step 10: Disable the wireless network to configure the country code:
Step 11: Configure the AP country domain. This configuration is what will trigger the GUI to skip the DAY 0 flow as the C9800 needs a country code to be operational:
Step 12: Specify the interface to be the wireless management interface:
Step 13: For the controller to be discovered by Cisco DNA Center or Prime Infrastructure, CLI, SSH, and SNMP credentials should be configured on the device along with NETCONF:
Verify that you can ping the wireless management interface and then browse to https://<IP of the device wireless management interface>. Use the credentials you entered earlier. Since the box has a country code configured, the GUI skips the Day 0 page and you get access to the main Dashboard for Day 1 configuration.
Access the C9800 Web UI using https://<C9800-L-WLC-IP>. The username and password configured during the Day-0 configuration of the WLC are used to log on to the WLC Web UI.
Figure 50 Cisco 9800-L WLC Web UI Dashboard View
The HA Pair always has one active controller and one standby controller. If the active controller becomes unavailable, the standby assumes the role of the active. The active wireless controller creates and updates all the wireless information and constantly synchronizes that information with the standby controller. If the active wireless controller fails, the standby wireless controller assumes the role of the active wireless controller and continues to keep the HA Pair operational. Access points and clients remain connected during an active-to-standby switchover. Follow the steps below to configure the C9800-L WLC with HA in a PoP.
Note: Redundancy SSO is enabled by default but you still need to configure the communication between the boxes.
Step 1: Make sure both C9800 WLCs are reachable to each other. The wireless management interfaces of both boxes must belong to the same VLAN and subnet; in this case, they are connected to the C9300 FiaB at one of the PoP sites.
Step 2: Connect both 9800 WLCs to each other through their RP ports. Connect the C9800-L wireless controllers using the RJ-45 RP port for SSO:
Figure 51 C9800-L WLC Redundancy Port Connections
Step 3: Provide the required redundancy configurations to both 9800 WLCs.
Navigate to Administration-> Device-> Redundancy. Enable "Redundancy Configuration", check "RP" for "Redundancy Pairing Type", and enter the desired IP address along with the Active and Standby chassis priorities. Each box should have its own IP address, and both should belong to the same subnet.
On the active controller, the priority is set to a higher value than on the standby controller. The wireless controller with the higher priority value is selected as the active during the active-standby election process. If you do not choose a specific box to be active, the boxes elect the active based on the lowest MAC address. The Remote IP is the standby controller's redundancy port IP address.
Figure 52 Redundancy Pairing on both C9800-L WLCs
Configuring Chassis HA interface:
Configure the priority of the specified device:
Configure the peer keepalive timeout value:
Configure the peer keepalive retry value before claiming the peer is down:
Step 4: Save the configurations on both 9800 WLCs and reboot both boxes at the same time.
Navigate to Administration-> Reload.
Once both 9800 WLCs have rebooted and are synchronized with each other, console into them and verify their current state with the show chassis and show redundancy commands, as shown earlier.
Enable Console Access to Standby 9800 WLC:
Once HA is enabled and one of the boxes is assigned as active and the other as standby hot, by default you are not allowed to reach exec mode (enable) on the standby box. To enable it, log in via SSH or console to the active 9800 WLC and enable the standby console under the redundancy main-cpu configuration, as shown earlier.
If you want to force a switchover between WLCs, you can either manually reboot the active 9800-L WLC or run the redundancy force-switchover command.
Integrating wireless with SD-Access brings the best of both architectures: a simplified control and management plane, an optimized data plane, and end-to-end integration of policy and segmentation. This section covers the installation of the Cisco Catalyst 9800 Embedded Wireless Controller (eWLC) on Catalyst 9000 series switches and its bring-up with Cisco DNA Center.
In CCI, the Cisco Catalyst 9800 Embedded Wireless Controller (eWLC) is installed on the C9300 FiaB switch stack in PoP sites that require SD-Access Wireless (Wi-Fi). Follow the steps below to configure eWLC on a C9300 switch stack.
We will categorize this section into two parts:
■ Installation of eWLC (c9800-sw) on C9300 FiaB PoP Site
■Enable Embedded SDA-Wireless through DNA Center Provisioning and AP Onboarding
Installation of eWLC (c9800-sw) on C9300 FiaB PoP Site:
The steps to install eWLC on the C9300 FiaB switch are the following:
1. Check that license is dna-advantage.
2. Boot the switch in install mode.
Step 1: Check that the license is dna-advantage.
For the eWLC package to install properly, the dna-advantage license must be active on the switch. You can check this with the show version command.
Step 2: Boot the switch in install mode.
When the switch is booted directly from the .bin image, this is called bundle mode. Packaging only works when the switch is booted in install mode. To verify the mode, run show version.
Make sure to boot from flash:packages.conf (there are no other boot files specified in our configuration):
a. Install the software image to flash. The install add file bootflash:<image.bin> activate commit command moves the switch from bundle mode to install mode, where image.bin is the base image.
b. Type yes to all the prompts. Once the installation is complete, the switch reloads.
After the switch reboots, you can verify the current installation mode. Run the show version command to confirm.
Step 3: Install the eWLC package.
After downloading the eWLC image to the switch, you can install the wireless package using a single command line.
In this CCI implementation, the eWLC version installed is C9800-SW-iosxe-wlc.17.01.01s.SPA.bin.
Here, flash:ewlc_pkg.bin is the eWLC package. Alternatively, you can install it directly from TFTP.
Answer yes to all questions. The switch should then reload and come up with the eWLC package installed.
After reloading, you can confirm the installation with the show install summary command.
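A sketch of the package installation and verification, using the package filename from this implementation:
Switch#install add file flash:C9800-SW-iosxe-wlc.17.01.01s.SPA.bin activate commit
! Answer yes to the prompts; the switch reloads with the wireless package active
Switch#show install summary
! The activated and committed (state C) entries should include the wireless package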
To enable NETCONF on the switch, three commands need to be present on the switch; they are shown in the sketch below, together with a command to check that NETCONF is running.
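A minimal sketch, assuming local AAA; the three commands are most likely aaa new-model, exec authorization, and netconf-yang itself:
C9300(config)#aaa new-model
C9300(config)#aaa authorization exec default local
C9300(config)#netconf-yang
C9300(config)#end
C9300#show platform software yang-management process
! All listed processes (confd, dmiauthd, nesd, and so on) should show status Running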
Enable Embedded SDA-Wireless through DNA Center Provisioning and AP Onboarding
–Make sure there is no other WLC in the site where you plan to enable embedded SDA-Wireless.
–It is important to run the discovery after the eWLC package has been installed; otherwise, Cisco DNA Center will not display the "embedded wireless" option in the fabric view.
–Configure the AP IP pool and attach it to INFRA_VN. In that DHCP scope, point DHCP Option 43 at Cisco DNA Center; with this, PnP will discover the AP.
Reserving IP Pool for Access Points:
Navigate to Design-> Network Settings-> IP Address Pools, for MG Road PoP Site reserve IP Pool for SDA Wireless Access Points, as shown in Figure 53.
Figure 53 AP IP Pool Reservation on Cisco DNA Center
Attaching AP IP Pool to INFRA_VN:
Attach an AP IP pool to the MGRoad fabric site by following the steps under the "Add a Gateway to a Layer 3 Virtual Network" section at the following URL:
https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/user_guide/b_cisco_dna_center_ug_2_2_3/b_cisco_dna_center_ug_2_2_3_chapter_01110.html#task_zm2_sfl_1qb
Figure 54 Attaching AP Pool to INFRA_VN on Cisco DNA Center
6. Provision the AP (it will be added to Cisco DNA Center automatically after joining the Catalyst 9800 WLC).
7. Configure onboarding SSID (refer to Implementing Wi-Fi Access Network).
a. When configuring the discovery properties, click Add Credentials and configure the NETCONF port to 830.
Figure 55 eWLC Discovery on Cisco DNA Center
b. Assign the switch to MGRoad PoP Site.
c. Provision the device. Refer to Provisioning Devices in SD-Access for the device provisioning steps.
d. Add the device as Fabric in a Box (configure as Border, Control, and Edge Node) and Enable Embedded Wireless.
Figure 56 Enabling Embedded Wireless on FiaB Switch
e. Connect the SDA Wireless APs to either Fabric Edge (FE) ports or Extended Node (EN) ports. It is recommended to resync the switch for it to add the AP. Go to Provision-> Inventory, select the switch from the site, and resync it. The APs appear in the Devices tab, where they can be assigned to the eWLC site and provisioned.
Note: Latency between the AP and the WLC needs to be less than 20 ms.
Figure 57 AP IP Pool Reservation on Cisco DNA Center
Note: To assign the APs to the Site, Floors should be created under the Building under Network Hierarchy.
Figure 58 Attaching AP Pool to INFRA_VN on Cisco DNA Center
Note: By default, the RF profile that is marked as default under Design > Network Settings > Wireless > Wireless Radio Frequency Profile is selected in the RF Profile drop-down list. You can change the default RF Profile value for an AP by selecting a value from the RF Profile drop-down list. The options are High, Typical, and Low. The AP group is created based on the RF profile selected.
To verify successful provisioning of SD-Access Wireless on the C9300 stack in a PoP site, navigate to the Provision-> SD-Access-> Fabric Infrastructure view, as shown in Figure 59.
Figure 59 SD Access Wireless AP View on Fabric Infrastructure
This section covers the implementation of the backhaul network for interconnecting fabric sites (PoPs). It is mandatory to configure the underlay network connectivity between the Fabric Border (FiaB) and the backhaul network (Enterprise Ethernet or MPLS) as mentioned in Underlay Network Implementation. Fabric sites can be interconnected either using SD-Access Transit or IP-based Transit, which is implemented depending on the CCI backhaul network.
Note: This section provides example configurations for Private Ethernet and MPLS-based network backhauls implemented in this solution validation, as shown in Figure 3 and Figure 4.
This section includes the following major topics:
■PoP Interconnection over Ethernet Network Backhaul
■PoP Interconnection via IP Transit over MPLS Network Backhaul
This section covers the example configuration of fabric interconnection for the SD-Access Transit-based network topology shown in Figure 3.
When configuring the interfaces on a fabric border to communicate with the SD-Access transit, Cisco DNA Center configures a VRF for each VN on the fabric site border (i.e., FiaB) and the Transit Control Plane (T-CP) nodes. BGP peering is configured between the T-CP nodes and the FiaB to enable overlay routing. In this implementation, two Cisco Catalyst 9500 switches in the Ethernet network backhaul are provisioned as SD-Access transit T-CP nodes, as shown in Figure 19. When connecting fabric sites to an SD-Access transit network, each VN with subnets configured for data traffic is created as a VRF on the FiaB, and the VN subnet network prefixes for data traffic are registered with the T-CP nodes in the SD-Access transit site.
Example FiaB VRF Configuration:
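A representative sketch of the auto-generated VRF for one VN; the RD and route-target values are illustrative, not the validated configuration:
vrf definition SnS_VN
 rd 1:4099
 !
 address-family ipv4
  route-target export 1:4099
  route-target import 1:4099
 exit-address-family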
Cisco DNA Center automatically configures the BGP peering between the FiaB border and the SD-Access transit control plane nodes (the Cisco Catalyst 9500 switches in this implementation) using the loopback interfaces configured on these devices (routing is enabled in the underlay network). It leverages the existing underlay physical interfaces and network connectivity to the backhaul network, so no separate physical interface selection is required.
Note: IP subnet pools configured for extended nodes are added to the Global Routing Table (GRT) address family in the BGP routing configuration, outside of the VRF address family.
Example FiaB border BGP routing automatically configured by Cisco DNA Center:
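A representative sketch only; the AS number, loopback peering addresses, and pool prefixes are illustrative assumptions, not the validated values:
router bgp 65004
 bgp router-id 192.168.10.1
 neighbor 192.168.10.101 remote-as 65004
 neighbor 192.168.10.101 update-source Loopback0
 !
 address-family ipv4
  ! Extended node (INFRA_VN) pool advertised in the GRT, per the note above
  network 10.101.1.0 mask 255.255.255.0
  neighbor 192.168.10.101 activate
 exit-address-family
 !
 address-family ipv4 vrf SnS_VN
  ! VN data subnet registered with the transit control plane
  network 10.101.20.0 mask 255.255.255.0
 exit-address-family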
Example SD-Access transit control plane node BGP routing automatically configured by Cisco DNA Center:
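A matching sketch for the transit control plane side, with the same illustrative values:
router bgp 65004
 bgp router-id 192.168.10.101
 neighbor 192.168.10.1 remote-as 65004
 neighbor 192.168.10.1 update-source Loopback0
 !
 address-family ipv4
  neighbor 192.168.10.1 activate
 exit-address-family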
When configuring the interfaces on a fabric border to communicate with an IP transit, the Cisco DNA Center will configure a VRF for each VN in the fabric site. This is known as VRF-lite because the VRFs are only locally significant. When connecting to an MPLS backhaul, the provider will use its own VRFs to keep different customers' traffic separated. Using a VRF-aware routing protocol within the service provider gives them the ability to keep the VRF configuration at the service provider edge instead of every single device in the core. These VRFs, however, are not related to the VRFs configured on the fabric border. To maintain the macro-segmentation provided by a VN's use of VRFs between fabric sites over an IP transit, the service provider must also provide a VRF for each VN configured at a fabric site.
Example Provider Edge VRF Configuration:
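A hedged sketch; the RD/RT values are provider-assigned and purely illustrative:
vrf definition SnS_VN
 rd 100:1
 !
 address-family ipv4
  route-target export 100:1
  route-target import 100:1
 exit-address-family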
When configuring the border services, Cisco DNA Center automatically configures a VLAN interface on the border node. When configuring the provider edge node, there must be a matching VLAN configuration to enable connectivity. The border configuration is shown in Figure 60.
Figure 60 Border Node External Interface
On the provider edge interface facing the edge fabric border, the services are separated using a different service instance. Each service instance is then associated with a bridge domain interface. For ease of administration, the VLAN encapsulation and bridge-domain should match. If the IP transit is owned by a different operator, they will have to ensure the encapsulation matches the VLAN configured on the fabric border.
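A sketch of the provider edge service-instance configuration described above; VLAN 3001 and the addressing are illustrative assumptions:
interface GigabitEthernet0/0/1
 description To fabric border (FiaB)
 service instance 3001 ethernet
  encapsulation dot1q 3001
  rewrite ingress tag pop 1 symmetric
  bridge-domain 3001
!
interface BDI3001
 vrf forwarding SnS_VN
 ip address 10.50.1.2 255.255.255.252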
The VRF is also added to the service provider's BGP configuration:
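For example (AS numbers illustrative):
router bgp 65000
 address-family ipv4 vrf SnS_VN
  neighbor 10.50.1.1 remote-as 65004
  neighbor 10.50.1.1 activate
 exit-address-family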
A service provider interface will be connected to the data center (the fusion router in this implementation), and it must have all the VRFs configured to maintain segmentation end to end. Because these devices are not part of the fabric, the configuration must be done manually.
Provider edge VRF facing the fusion router: since the VLAN encapsulation is not automatically generated by Cisco DNA Center for this connection, there are no mandates on the VLAN other than what the service provider may require. The VRF is then added to the service provider's BGP configuration, and a complementary configuration exists on the customer edge device (the fusion router in this implementation). All three pieces are shown in the consolidated sketch below.
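The following sketch is illustrative only; VLAN 210, the addressing, and the AS numbers are assumed values, not the validated configuration:
! Provider edge, facing the fusion router
interface GigabitEthernet0/0/2
 service instance 210 ethernet
  encapsulation dot1q 210
  rewrite ingress tag pop 1 symmetric
  bridge-domain 210
!
interface BDI210
 vrf forwarding SnS_VN
 ip address 10.60.1.1 255.255.255.252
!
router bgp 65000
 address-family ipv4 vrf SnS_VN
  neighbor 10.60.1.2 remote-as 65002
  neighbor 10.60.1.2 activate
 exit-address-family
!
! Customer edge (fusion router) complementary configuration
interface GigabitEthernet0/0/0.210
 encapsulation dot1Q 210
 vrf forwarding SnS_VN
 ip address 10.60.1.2 255.255.255.252
!
router bgp 65002
 address-family ipv4 vrf SnS_VN
  neighbor 10.60.1.1 remote-as 65000
  neighbor 10.60.1.1 activate
 exit-address-family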
Because the VRF separation is maintained within the IP transit network, the VN will maintain its macro-segmentation from one fabric site to another.
When fabric traffic needs to cross between user-defined VRFs, or reach services that are shared by fabric and non-fabric devices, it must be manually routed by a non-fabric device. These shared services include, but are not limited to, Cisco DNA Center, ISE, DHCP, WLC, and NTP. The shared services can be in the GRT or a separate VRF. This routing device is known as a fusion router because it fuses together traffic from different VRFs, or from a VRF and the GRT. This process involves leaking the appropriate routes between VRFs or the GRT; VRF import/export statements and route maps can limit the routes leaked between services.
This section covers the following two example implementations of the fusion router for the network topologies, as shown in Figure 4 and Figure 19. Depending on the deployment topology/backhaul network, you can choose to implement either of the configurations:
■Configuring a Fusion Router in IP-Based Transit Network
■Configuring a Fusion Router in SD-Access Transit Network
For more details about fusion routers, route leaking, and step-by-step instructions for configuring a fusion router, refer to the section "About Fusion Routers" in the Software-Defined Access for Distributed Campus Deployment Guide at the following URL:
■ https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Campus/SD-Access-Distributed-Campus-Deployment-Guide-2019JUL.html#_Toc13487404
For the IP Transit scenario, a Cisco ASR 1000 Series Router was used as the fusion router, but the only requirement is that the router must support route leaking between VRFs. In this implementation, the shared services were part of the global routing table, but they could also be part of a separate shared services VRF.
1. The fusion router configuration is outside the scope of Cisco DNA Center and must therefore be done manually. The first step is to configure a VRF for every VN configured in Cisco DNA Center.
2. The fusion router must then have interfaces configured in the VRF, which can connect to a fabric border node or other non-fabric router. In the case of a fabric border node, Cisco DNA Center configures the interface and BGP settings as part of the border configuration; the fusion router side must be done manually. The automatically generated border node interface configuration is shown, together with the fusion router side, in the sketch after step 3.
3. The following is the complementary interface configuration manually entered on the fusion router:
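A hedged sketch of both sides; VLAN 3001, the /30 addressing, and the interface names are illustrative assumptions:
! Border node (generated by Cisco DNA Center)
interface Vlan3001
 description vrf interface to External router
 vrf forwarding SnS_VN
 ip address 10.50.2.1 255.255.255.252
!
! Fusion router (manually configured)
interface GigabitEthernet0/0/1.3001
 encapsulation dot1Q 3001
 vrf forwarding SnS_VN
 ip address 10.50.2.2 255.255.255.252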
4. Cisco DNA Center also automatically generates the BGP configuration for the VRF on the border node; it is shown together with the fusion router side in the sketch after step 5.
5. The fusion router must be manually configured to successfully neighbor with the border node:
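A matching sketch of the BGP peering (AS numbers illustrative):
! Border node (generated by Cisco DNA Center)
router bgp 65004
 address-family ipv4 vrf SnS_VN
  neighbor 10.50.2.2 remote-as 65002
  neighbor 10.50.2.2 activate
 exit-address-family
!
! Fusion router (manually configured)
router bgp 65002
 address-family ipv4 vrf SnS_VN
  neighbor 10.50.2.1 remote-as 65004
  neighbor 10.50.2.1 activate
 exit-address-family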
6. Because the VRF creates a routing table separate from the GRT, routes must be shared between them for the VRF to have access to the shared services, and vice versa. One way to achieve this is with prefix lists and route maps.
7. The route-map must then be imported into the target VRF:
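A minimal sketch of the leak from the GRT into the VRF; the prefix-list name, route-map name, and shared services prefix are illustrative placeholders:
ip prefix-list SHARED_SERVICES seq 5 permit 10.10.100.0/24
!
route-map SHARED_SERVICES_MAP permit 10
 match ip address prefix-list SHARED_SERVICES
!
vrf definition SnS_VN
 address-family ipv4
  import ipv4 unicast map SHARED_SERVICES_MAP
 exit-address-family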
8. Verifying the routes on a fabric site:
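For example, on the fabric border (prefix illustrative):
FiaB#show ip route vrf SnS_VN
! The shared services prefix (for example, 10.10.100.0/24) should appear as a BGP-learned route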
9. Additionally, routes from the VRF must be exported to the GRT so the shared services can reach interfaces in the VRF:
10. The route map must then be exported from the target VRF:
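A matching sketch for the reverse direction, again with illustrative prefixes and hypothetical names:

ip prefix-list CCI_VN_SUBNETS seq 5 permit 10.70.0.0/16 le 24
!
route-map VRF_TO_GRT permit 10
 match ip address prefix-list CCI_VN_SUBNETS
!
! Export the VRF routes to the global routing table
vrf definition CCI_VN
 address-family ipv4
  export ipv4 unicast map VRF_TO_GRT
 exit-address-family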
11. Verifying the routes on the fusion router:
Implementation of fusion routers in the SD-Access Transit-based network topology (Figure 3) is similar to IP-based Transit, since both network topologies connect to fusion routers via an IP Transit network. In this implementation, an IP Transit network interconnects the HQ/DC site with an external network outside of the fabric overlay in order to provide access to shared services. Therefore, the steps to configure the fusion router are similar to what was described in the previous section.
This section discusses an example implementation of redundant fusion routers in HQ/DC site, as shown in Figure 19, for a CCI implementation (with an SD-Access Transit-based network topology). A couple of Cisco Cloud Services Routers 1000V are used as redundant fusion routers in this implementation.
1. Configure VRF for every VN configured in Cisco DNA Center on the fusion router. Example VRF configuration:
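The example was not captured here; a minimal sketch for one VN, using the SnS_VN name referenced later in this section (the RD and route-target values are illustrative assumptions):

vrf definition SnS_VN
 rd 1:4099
 address-family ipv4
  route-target export 1:4099
  route-target import 1:4099
 exit-address-family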
Figure 61 shows an example VLAN automatically created by Cisco DNA Center on the border during FiaB role provisioning.
Figure 61 Example Border Configuration for Connecting to IP Transit
2. In Figure 61, GigabitEthernet2/0/6 is a physical link connecting to a fusion router (CSR1000V-1, used as the fusion router) and GigabitEthernet1/0/6 is a physical link to the redundant (secondary) fusion router (CSR1000V-2). Example VLAN configurations automatically configured by Cisco DNA Center on the HQ/DC site FiaB border:
3. Configure complementary interface configurations matching these VLAN interfaces on the fusion router:
4. Cisco DNA Center automatically generates the BGP config for the VRF (SnS_VN) and INFRA_VN on the border node:
5. The fusion router must be manually configured to successfully neighbor with the border node:
6. Configure prefix-lists to match shared services network routes:
7. Configure route-map to import shared services network into the target VRF:
8. The route-map must then be imported into the target VRF. Example configuration for a VN (SnS_VN):
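This mirrors the IP transit example earlier in this section; a minimal sketch, assuming a hypothetical SHARED_SERVICES prefix-list and route-map name:

route-map SHARED_TO_SNS permit 10
 match ip address prefix-list SHARED_SERVICES
!
vrf definition SnS_VN
 address-family ipv4
  import ipv4 unicast map SHARED_TO_SNS
 exit-address-family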
9. Verifying the routes on a fabric site (for example, on the HQ/DC site):
10. Additionally, routes from the VRF must be exported to the GRT so that the shared services can reach interfaces in the VRF:
11. The route map must then be exported from the target VRF:
12. Verify the routes on the fusion router:
13. This completes the fusion routing configuration on CSR1000v-1. Repeat the same steps for the secondary fusion router (CSR1000v-2) in the network.
Note: Shared services network prefixes are advertised to the other fabric (PoP) sites through the BGP neighborship between each PoP site border and the SD-Access Transit control plane nodes.
Regardless of how the rest of the network is designed or deployed outside of the fabric, a few things are common across deployments due to the configuration provisioned by Cisco DNA Center. Providing Internet access to PoP (fabric) site devices is one such common use case. In the CCI network, Internet access for PoP sites is configured on the fusion router that connects to the DMZ network as the Internet edge.
Refer to the following URL for more details on the different types of fabric border:
■ https://community.cisco.com/t5/networking-documents/guide-to-choosing-sd-access-sda-border-roles-in-cisco-dnac-1-3/ta-p/3889472
In the SD-Access Transit-based network topology shown in Figure 3, the fusion routers (CSR1000V) act as the Internet edge for the HQ/DC site FiaB. In the IP-based Transit network topology shown in Figure 4, a pair of Catalyst 9500 switches acting as fusion routers are the Internet edge.
This section covers an example implementation of configuring Internet access for PoP sites via the HQ/DC site, which is connected to the Internet edge, as shown in Figure 3. The FiaB border in the HQ/DC site will have the SD-Access network prefixes in its VRF routing tables. As a prerequisite for being "connected-to-Internet," it will also have a default route to its next hop (the fusion router as Internet edge) in its Global Routing Table.
Note: To provide Internet access to other SD-Access Transit-connected PoP (fabric) sites, make sure the fabric border that connects to your network Internet edge is configured with the Connected to the Internet checkbox enabled.
In this implementation, the HQ/DC site border (FiaB) connects to the Internet edge and provides Internet access to other PoP sites via SD-Access network. Therefore, the border is configured with the Connected to the Internet checkbox enabled.
Figure 62 Example Border Configuration for Internet Connectivity
The fusion router acting as the Internet edge has the default route in its GRT pointing to the Internet next hop (the Firepower 2140 in the DMZ network, in this implementation).
Default static route in underlay network on fusion router:
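A one-line sketch of this static default route, assuming an illustrative Firepower inside address of 10.80.1.1:

ip route 0.0.0.0 0.0.0.0 10.80.1.1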
This default route must be advertised from the GRT to the VRFs. This allows packets to egress the fabric domain towards the Internet. In addition, the SD-Access prefixes in the VRF tables on the border nodes must be advertised to the external domain (outside of the fabric domain) to draw (attract) packets back in.
These SD-Access network prefixes are already configured in the fusion routers; however, they must be added to the Firepower configuration. For detailed Firepower implementation in the DMZ network for this implementation, refer to Configure Static and Dynamic Routing, which includes the configuration required to enable Internet access for endpoints/devices in the PoP sites.
VRF and BGP configurations have already been provisioned by Cisco DNA Center, along with the Layer 3 handoff. All fabric domain prefixes are learned in the GRT of the Internet edge routers. Configure the default route on the fusion router (Internet edge) so that it can be advertised. The default route is injected into the BGP RIB of the VRFs needing Internet access, resulting in a general advertisement to all BGP neighbors via SD-Access Transit for the VRF.
There are several methods of advertising a default route in BGP, each with its own caveats. In this implementation, the "network 0.0.0.0" method is used as an example.
–This will inject the default route into BGP if there is a default route present in the GRT.
–The route is then advertised to all configured neighbors.
Example BGP Configuration on Fusion Router (Internet Edge)
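The example configuration was not captured here; a minimal sketch, assuming the GRT default route is leaked into the VRF via the import prefix-list (the names, AS number, and SnS_VN VRF are illustrative):

! Permit the default route in the shared services import (hypothetical prefix-list)
ip prefix-list SHARED_SERVICES seq 15 permit 0.0.0.0/0
!
router bgp 65002
 address-family ipv4 vrf SnS_VN
  network 0.0.0.0
 exit-address-family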
1. Verify the default route is injected on the border (FiaB) VRF:
2. Once the Firepower in the DMZ is configured for Internet access, verify the Internet access from the border (FiaB) via the VRF as shown below:
This completes the Internet access configuration for the PoP sites in overlay VNs. For more details on Internet access for fabric sites, refer to the section "Configuring Internet Connectivity" in the Software-Defined Access for Distributed Campus Deployment Guide.
This section covers the implementation of various last mile access networks like Ethernet Access, CR-Mesh, DSRC, and LoRaWAN in each PoP site, as per the solution design validated in this CVD.
This section includes the following major topics:
■Implementation of Ethernet Access Network
■Implementing Cisco Resilient Mesh Access Network
■Implementing LoRaWAN Access Network
■Implementing Wi-Fi Access Network
The Ethernet network access in a PoP site is provided by connecting Cisco Industrial Ethernet (IE) switches in a ring topology. This section covers the implementation of Ethernet access ring(s) in a PoP site to provide network access to wired endpoints or gateways (examples: IP Camera, Cohda RSU, ICS300, and CGR) connected to the CCI network. Follow the steps covered in this section to complete the implementation of Ethernet access rings in PoP sites.
This section details the steps required for onboarding Extended Nodes or Policy Extended Nodes into a linear daisy chain topology, as discussed in the CCI design guide at the following URL:
https://www.cisco.com/c/en/us/td/docs/solutions/Verticals/CCI/CCI/DG/General/cci-dg/cci-dg.html#pgfId-457899.
To create a linear daisy chain topology of IE switches in a CCI PoP site, the prerequisites for EN and PEN onboarding (see Prerequisites for extended node onboarding later in this section) must be met. Additionally, ensure the following:
■Ensure that there is only one upstream switch through which the switch being onboarded can reach Cisco DNA Center for PnP.
■The physical topology connecting the devices that are to be onboarded as ENs & PENs must be completed.
Begin the following steps once the setup meets all the above pre-requisites:
1. Connect the EN/PEN devices to the fabric edge device (FiaB in this case) in the form of a daisy chain topology. You can have multiple links from the extended node device to the fabric edge for redundancy. If there are multiple links between the node and the FiaB, Cisco DNA Center bundles them into a port-channel as part of the onboarding process.
2. Power-up the first extended node in the daisy chain and execute the following CLI commands:
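The CLI commands were not captured here; the sketch below shows a typical sequence for returning a previously configured IE switch to a PnP-ready state (verify the exact commands against your platform documentation before use):

delete /force flash:vlan.dat
delete /force nvram:*.cer
write erase
reload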
After the switch reboots, PnP is triggered and the device appears under Provision->Plug and Play in the Unclaimed state, which then changes to Planned, Onboarding, and finally Provisioned. After successful onboarding, the device appears in the fabric topology under Provision->Fabric Sites->Site_Name, as shown in Figure 63.
Figure 63 Onboarding first node of Daisy chain
3. After the onboarding completes for the first node, power up the second node connected to the first node and repeat the above steps to onboard it onto Cisco DNA Center.
Multiple IE switches can be added to this chain by repeating the above steps. Once daisy chain onboarding of all required IE switches is complete, verify the fabric topology. The fabric topology should appear as shown in Figure 64:
Figure 64 Linear Daisy chain containing two nodes
This completes the linear daisy chaining of Extended nodes or Policy Extended nodes.
Refer to the following URL for more details on daisy-chaining topology limitations and restrictions:
https://www.cisco.com/c/en/us/td/docs/solutions/Verticals/CCI/CCI/DG/General/cci-dg/cci-dg.html#pgfId-457899
To onboard an STP ring of ENs and PENs, the IE switches that are to be members of the ring must first be onboarded as a linear daisy chain, as described in the previous section. The linear daisy chain for the final ring topology can be obtained by breaking the ring at any desired point. For optimization, it is recommended to break the ring in the middle and onboard the two halves of the ring as two separate linear daisy chains. For example, for the intended final ring shown in Figure 66, the two linear daisy chains can be chosen as shown in Figure 65.
Figure 65 Recommended Linear-daisy chains to form an STP ring
Figure 66 Intended Final STP ring
Follow these steps to obtain the STP ring of ENs or PENs described above:
1. Onboard the member devices of the ring in the form of two daisy chains as described previously. DO NOT connect the interfaces of the last nodes of the two chains before the onboarding process of both linear chains is complete. Doing so would create two upstream links for some of the member devices, violating the prerequisite of having exactly one upstream switch for Cisco DNA Center to discover the device via PnP, and causing onboarding to fail.
2. Close the ring by bringing up the interfaces connecting the last nodes of the two daisy chains (for example, the devices SN-FOC2429V0SZ and SN-FOC2401V0A0 from Figure 66 above).
3. Create a template with the configuration for converting the interfaces brought up in Step 2 above into a port-channel interface.
For detailed steps on how to configure using Templates refer to the chapter “Create Templates to Automate Device Configuration Changes” at the following URL:
https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/user_guide/b_cisco_dna_center_ug_2_2_3/b_cisco_dna_center_ug_2_2_3_chapter_01000.html
4. To create the template, navigate to Tools -> Template Editor and click the Add icon. The content to be added in the template for a Policy Extended Nodes ring is as follows:
The content to be added in the template for an Extended Nodes ring is as follows:
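The exact template bodies were not captured in this section. The sketch below shows the general shape, assuming Velocity-style template variables ($interface, $pcNumber) that are bound in the next step; the channel-group modes are assumptions (PAgP for PENs, static for ENs), so verify them against your release documentation:

! Policy Extended Nodes ring (PAgP assumed)
interface $interface
 switchport mode trunk
 channel-group $pcNumber mode desirable
!
! Extended Nodes ring (static port-channel assumed)
interface $interface
 switchport mode trunk
 channel-group $pcNumber mode on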
5. Click the Input Form pane next to the Template System Variables and check Bind to Source under Content in the right pane. Select Source as Inventory and Entity as interface from the drop-down, as shown in Figure 67.
Figure 67 Creating Template for STP ring
6. Click on Actions->Save->Commit.
7. Associate the template to a network profile by going to Design->Network Profile->Add Profile->Day N Template->Add Template, selecting the device type as Switches and Hubs, and choosing the template created earlier. Finally, click Add.
8. Associate this Network Profile to the site name where the daisy chain has been onboarded.
9. Navigate to Provision->Inventory and enable the checkbox for the two devices followed by Actions->Provision device and complete the steps as shown in Figure 68 & Figure 69 below. Choose the interface on each of the two nodes and assign a port-channel number for both devices and then click on Next->Deploy.
Figure 68 Provisioning Template for creating Port-channel between the two last nodes of daisy chains
Figure 69 Provisioning Template for creating Port-channel between the two last nodes of daisy chains
Figure 70 Provisioning Template for creating Port-channel between the two last nodes of daisy chains
This will close the two linear chains into an STP ring.
Cisco switches run STP by default, so the only STP configuration required in the ring is assigning the FiaB switch as the root bridge. For this, create another template, associate it with a Network Profile matching the FiaB device type, and assign it to the site. The same steps as described in the section above have to be followed for applying the template to the device. The configuration to be added in the template is shown below.
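This is a minimal sketch; the VLAN range is an illustrative assumption covering the access VLANs in the ring:

spanning-tree vlan 1-2045 root primary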
10. After the template is ready for deployment, go to Provision->Inventory->select the FiaB switch->Actions->Provision Device->Next->Deploy to deploy the template to the FiaB switch.
11. Verify that the FiaB switch has become the root bridge for all configured VLANs by issuing the show spanning-tree CLI command on the switch. The root ID will match the bridge ID for all VLANs in the output, as shown below:
----some outputs have been omitted-------
This completes the STP ring creation of ENs or PENs in a CCI PoP site.
Note: For a ring size of more than 20 nodes, the spanning-tree max age timer must be changed. The STP max age timer should be increased from the default value of 20 to a maximum value of 40 depending on the number of nodes. Following is the command to set the timer using CLI:
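Consistent with the REP guidance later in this section, the command takes the following form, where the VLAN is the INFRA_VN VLAN and the max-age value depends on the number of nodes:

spanning-tree vlan <infra_VN_VLAN> max-age 40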
Extended Nodes (EN) and Policy Extended Nodes (PEN) in SD-Access extend the fabric edge for IoT devices and provide SD-Access connectivity to IE switches. ENs and PENs run in Layer 2 switch mode and do not natively support fabric technology. An EN/PEN is configured by an automated workflow. After configuration, the extended node device is displayed on the fabric topology view. Port assignment on the extended nodes is done on the Host Onboarding window.
The following are the supported hardware and minimum supported software versions on the EN/PEN:
■Cisco Industrial Ethernet 4000, 4010, 5000 series switches: 15.2(7)E0s with LAN base license enabled
■Cisco Catalyst IE 3400, 3400 Heavy Duty (X-coded and D-coded) series switches: IOS XE 17.1.1s
■Cisco Catalyst IE 3300 series switches: IOS XE 16.12.1s
Note: Both a Network Advantage and a DNA Advantage license are required on IE3400 switches to onboard them as Policy Extended Nodes (PENs).
This section discusses the steps to onboard an EN or PEN in an Ethernet access ring.
Prerequisites for extended node onboarding:
■Configure a network range for the extended node. Refer to Step 4. Configure IP Address Pools for steps to configure the IP address pool. This configuration comprises adding an IP pool and reserving the IP pool at the site level. Ensure that the CLI and SNMP credentials are configured.
■Assign the extended IP address pool to INFRA_VN under the Fabric > Host Onboarding tab. Select Extended Node as the pool type. Cisco DNA Center configures the extended IP address pool and VLAN on the supported fabric edge device. This enables the onboarding of extended nodes.
■Ensure that the fabric site is configured with "No Authentication" mode for onboarding IE switches into the SD-Access fabric as an EN or PEN.
Configure the DHCP server with the extended IP address pool and Option-43. Refer to section "DHCP Controller Discovery" in the Cisco Digital Network Architecture Center User Guide, Release 2.2.3 at the following URL:
https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center/2-2-3/user_guide/b_cisco_dna_center_ug_2_2_3/b_cisco_dna_center_ug_2_2_3_chapter_01101.html?bookSearch=true#id_90877
Ensure that the FiaB is provisioned and that the extended node IP pool default gateway configured on the FiaB (Edge) is reachable from the Cisco DNA Center.
Complete the following steps to onboard EN or PEN:
1. Connect the EN/PEN devices to the fabric edge device (FiaB in this case) in the form of a daisy chain. You can have multiple links from the extended node device to the fabric edge.
2. Power-up the extended node device if it has no previous configuration. If the extended node switch has any previous configurations, execute the following steps on the extended node switch before starting the onboarding process:
The Cisco DNA Center adds the EN or PEN device to the Inventory and assigns it to the same site as the fabric edge. The EN or PEN is then added to the fabric. Now the EN or PEN is onboarded and ready to be managed.
After the configuration is complete, the EN or PEN appears in the Fabric topology with a tag (X) indicating that it is an extended node, as shown in Figure 71.
Figure 71 Cisco DNA Center Fabric Infrastructure View of Extended Node
Note: If any errors exist in the workflow while configuring an EN or PEN, an error notification is displayed as a banner on the topology window. Click See more details on the interface to check the errors.
Configure REP Ring topology for Extended Nodes & Policy Extended Nodes:
To enable redundancy on the extended nodes, configure a Resilient Ethernet Protocol (REP) Ring for a fabric site. The Resilient Ethernet Protocol (REP) is a Cisco proprietary protocol that provides an alternative to the Spanning Tree Protocol (STP). REP provides a way to control network loops, handle link failures, and improve convergence time. It controls a group of ports connected in a segment, ensures that the segment does not create any bridging loops, and responds to link failures within the segment.
A REP segment is a chain of ports connected to each other and configured with a segment ID. Each segment consists of standard (non-edge) segment ports and two user-configured edge ports. A switch can have no more than two ports that belong to the same segment, and each segment port can have only one external neighbor. An example Closed REP ring topology configuration validated in this implementation is described in this section.
REP Ring Configuration using REP Workflow:
■A REP ring can be created in a CCI PoP site using Cisco DNA Center REP Workflow feature.
Note: REP Workflow for creating a REP ring in a CCI PoP site/fabric site is supported from Cisco DNA Center 2.3.2.x release onwards. You must upgrade the Cisco DNA Center to the release 2.3.2.x or higher to use this feature for creating REP rings.
Limitations of REP Ring Workflow:
■All switches must be physically connected in a ring topology before using the REP workflow.
■A device connected in a REP Ring can’t be deleted from the fabric until the REP Ring that it’s a part of is deleted.
■To delete or insert a member into the REP Ring, first delete the REP ring, add, or delete a member (as required) and then create the REP Ring again.
■Multiple rings within a REP ring are not supported.
■A ring of rings is not supported.
■A node in a REP ring can have other nodes connected to it in a daisy chain manner, but a node in a daisy chain cannot have a ring of nodes connected to it.
■A mix of extended node (EN) devices and policy extended node (PEN) devices in a REP ring is not supported. A REP ring can have all devices either as extended nodes or as policy extended nodes.
■By default, a maximum of 18 devices can be onboarded in a single REP ring. To onboard more than 18 devices, increase the BPDU timer using the spanning-tree vlan <infra_ VN_ VLAN> max-age 40 command. Use the Cisco DNA Center templates to configure the command.
Follow these steps to configure the REP ring using the workflow:
1. In the Cisco DNA Center GUI, click the Menu icon and choose Workflows > Create REP Ring.
Alternatively, you can navigate to the Fabric Site topology view, and then select the Fabric Edge node or the FIAB node on which you want to create the REP ring and click Create REP Ring under the REP Rings tab.
2. In the workflow wizard, click Let's Do it.
3. Select a Fabric Site from the drop-down list and then click Next.
4. Select a fabric edge node in the topology view and then click Next.
Figure 72 Cisco DNA Center REP workflow – Fabric Edge selection
5. Select the extended nodes that connect to the fabric edge device and then click Next.
You can select two extended nodes to connect to the fabric edge (One would be the beginning of the REP Ring and the other would end the REP Ring).
6. Review and edit (if required) your fabric site, edge, and extended node selections.
Figure 73 Cisco DNA Center REP Workflow - REP Ring Review
7. To initiate the REP ring configuration, click Provision.
8. A REP Ring Configuration Status window shows a detailed configuration progress.
9. A REP Ring Summary window displays the details of the REP ring that is created along with the discovered devices. Click Next.
Figure 74 Cisco DNA Center REP workflow – REP Ring Summary
10. After the creation of the REP ring, a success message is displayed.
To verify the creation of the REP ring, go to the Fabric Site window and click on the fabric edge. In the slide-in window, under the REP Ring tab, you can see the list of all REP rings that exist on that device. Click on a REP Ring name in the list to view its details like the devices present in the ring, ports of each device that connect to the ring, and so on.
Figure 75 shows a REP ring fabric topology view once the REP ring is provisioned successfully using REP Ring workflow feature in Cisco DNA Center UI.
Figure 75 REP Ring Topology View in Cisco DNA Center SD-Access Fabric
The Assurance features of Cisco DNA Center provide a detailed view of network health. The overall network health can be viewed, as well as individual device health in Device 360. Assurance focuses on network visibility by identifying issues and trends in the network, and on operational efficiency through faster troubleshooting. Assurance provides the following benefits:
–Provides actionable insights into network, client, and application related issues. These issues consist of basic and advanced correlation of multiple pieces of information, thus eliminating white noise and false positives.
–Provides both system-guided as well as self-guided troubleshooting. For a large number of issues, Assurance provides a system-guided approach, where multiple Key Performance Indicators (KPIs) are correlated, and the results from tests and sensors are used to determine the root cause of a problem, after which possible actions are provided to resolve the problem. The focus is on highlighting the issue rather than monitoring data. Quite frequently, Assurance performs the work of a Level 3 support engineer.
–Provides in-depth health scores for a network and its devices, clients, applications, and services. Client experience is assured both for access (onboarding) and connectivity.
In a CCI network, where there are IE nodes in a fabric site, it is important to have a single view of network health. Some examples of the network health views are shown below:
Figure 77 Device 360 Network Health
For more detailed information about using Cisco DNA Assurance, refer to the Cisco DNA Assurance User Guide, Release 2.2.3 at the following URL:
■ https://www.cisco.com/c/en/us/td/docs/cloud-systems-management/network-automation-and-management/dna-center-assurance/2-2-3/b_cisco_dna_assurance_2_2_3_ug.html
This section describes the initial configuration of CURWB radios, telemetry monitoring with FM Monitor, and the integration with DNA Center. This deployment uses the FM3500 Endo in a point-to-point (PTP) capacity to establish wireless connectivity between infrastructure Extended Nodes (ENs) within the access layer. The IE switches connected behind them can be onboarded and managed using Cisco DNA Center. The following reference topology was used in this deployment.
This deployment uses RACER to perform the initial configuration. RACER is a centralized, Internet-based configuration software platform that is accessed from the Partner Portal; devices can be configured with it online only. If a device must be configured offline, a separate configuration file can be uploaded to the device using the offline configurator. Refer to your device-specific guide for instructions on this process. The General Mode window contains controls to monitor/enable configuration of the following settings:
Figure 79 CURWB General Settings
The CURWB devices used in this deployment are part of the network underlay. The management interface on all bridge units is configured on the same subnet. All units that are part of the same network should also have the same passphrase.
The frequency between the local and remote units must be the same. If configuring multiple bridge pairs, each pair should be on a separate frequency.
This setting is on by default; disabling it is recommended only if deemed necessary.
The screenshot above shows that QoS 802.1p is enabled. This allows the CURWB radio to read the CoS value from the VLAN tag; otherwise, the DSCP/ToS value is read from the Layer 3 IP packet.
If the VLAN plug-in is assigned, the VLAN settings tab becomes configurable and allows the unit to be connected to one or more virtual networks. Even without the plug-in, the CURWB radios can connect to a VLAN access network. The plug-in gives you the option to specify the management VLAN and native VLAN while also preserving the existing VLAN tags. With VLANs enabled, ensure the management subnet VLAN ID is added to the configuration. Note: this plug-in is required for integration with DNA Center for extended node onboarding via CURWB.
In this deployment, the CURWB radio management VLAN ID is 222 and this VLAN is not used anywhere else within the network. Configure the VLAN ID and SVI on the fabric edge PoP as part of the network underlay. In this scenario, the native VLAN is set to 1 which matches what is configured on the fabric edge. See the following configuration example taken from the Fabric PoP N Edge:
CURWB management subnet added to underlay EIGRP
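The underlay example was not captured here; a minimal sketch of the fabric edge side, where the addressing and EIGRP AS number are illustrative assumptions (VLAN 222 and native VLAN 1 are the values from this deployment):

! Layer 2 VLAN and SVI for CURWB management on the fabric edge
vlan 222
 name CURWB_MGMT
!
interface Vlan222
 ip address 10.222.0.1 255.255.255.0
!
! Advertise the CURWB management subnet in the underlay EIGRP
router eigrp 100
 network 10.222.0.0 0.0.0.255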
In this deployment, the FM-VLAN plug-in was installed because it is required for connection to multiple virtual networks. Refer to your device user guide for other required plug-ins. Plug-ins can be added individually, through CSV, or via the RACER template.
After CURWB radios have been configured as bridge links, an IE switch can be connected to the CURWB ethernet port, onboarded through PNP and managed through DNA Center as an Extended Node. The wireless connection between bridge units acts as a transparent relay in lieu of ethernet or fiber links.
Onboarding and provisioning a newly-discovered switch follow the same process as with a wired switch and require no special configuration to support the CURWB connection. The Extended Node requires an IP address from DHCP to start the PnP process.
Option 43 includes three type-length-values (TLVs). The first value is 5A1D;B2;K4;, which specifies the PnP option. The second is the Cisco DNA Center IP address. The third is the port, which can be 80 (HTTP) or 443 (HTTPS). Here is an example:
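A minimal sketch of an IOS DHCP pool carrying this option; the pool name, subnet, and the Cisco DNA Center address 10.10.100.10 are illustrative assumptions:

ip dhcp pool EXTENDED_NODE_POOL
 network 10.101.1.0 255.255.255.0
 default-router 10.101.1.1
 ! I<ip> is the Cisco DNA Center address, J<port> the PnP port
 option 43 ascii "5A1D;B2;K4;I10.10.100.10;J80"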
To onboard an Extended node over the wireless bridge, connect the ethernet port of the local CURWB unit to the Fabric Edge Node or an existing Extended Node.
Connect the Ethernet port of the remote CURWB unit to the switch port of the IE switch to be onboarded. If the configuration settings on the CURWB radios (local and remote) are correct, the zero-touch provisioning script starts the onboarding process of the IE switch behind the radio. After the onboarding is complete, verify that the port-channel configuration has been pushed down to the connected interface and that the IE switch appears in the DNA Center inventory.
Figure 83 Extended Node with CURWB connection
Figure 84 CURWB connected interfaces
The same port channel and interface configuration are displayed on the CLI of the Extended node.
The CURWB management VLAN must be configured on the Extended Node and allowed on interfaces carrying management VLAN traffic. This can be done manually or, optionally, via a template in DNA Center. By default, all VLANs (1 to 4094) are forwarded on trunk interfaces, so unless pruning is desired, only creation of the Layer 2 VLAN is required on the switch. See the following example:
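A two-line sketch of the Layer 2 VLAN creation on the Extended Node (the VLAN name is an illustrative assumption; 222 is the management VLAN used in this deployment):

vlan 222
 name CURWB_MGMT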
Figure 85 CURWB template created optionally via Cisco DNA Center with VLAN variable
Figure 86 Optional CURWB management VLAN template deployed via Cisco DNA Center Inventory
Select the newly-onboarded switch within the device inventory. In the screenshot above it is the Extended Node with device name SN-FDO2025U0QH.
Figure 87 CURWB DayN template - Advanced configuration
After configuring the management VLAN on the Extended Node manually or via template deployment, the CURWB radio MAC address displays in the MAC table with VLAN 222.
Figure 88 CURWB layer 2 MAC Addresses
In the screen capture above, the CURWB radio is connected to interface Gigabit Ethernet 1/3 with port-channel 2. The MAC address table displays the corresponding radio MAC address with the radio management VLAN tag, VLAN 222 in this case.
QoS can be enabled only through the RACER configuration or the CLI, not the web Configurator. Enabling QoS on the radio is recommended. Marking and queuing are best left to the connected switch.
It is important to note that although the current IE switching platforms support Gigabit Ethernet speeds, the CURWB radios have a maximum throughput capacity of 500 Mbps, which is a best-case figure. Actual throughput may vary due to the nature of the wireless environment in which the radios are deployed. Plan to shape the traffic to 10% below the maximum capacity to increase stability over the wireless bridged nodes. In the following example, a traffic policy was implemented at 150 Mbps, based on a link capacity of 166 Mbps, and configured on the switch connecting to the CURWB radios.
A parent shaper using the Default class map is used to match all traffic.
A Service policy is applied in the Egress direction on CURWB facing interfaces.
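The policy itself was not captured here; a minimal sketch matching the description above, where the policy-map name and the Port-channel2 interface are illustrative assumptions (150 Mbps is the rate from this deployment):

policy-map CURWB_SHAPE_PARENT
 class class-default
  shape average 150000000
!
interface Port-channel2
 service-policy output CURWB_SHAPE_PARENT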
CURWB radios can transmit telemetry traffic and situational alerts to the FM Monitor dashboard in real time. For QoS, this traffic is sent as best effort from the management VLAN. The following configuration is used in this deployment to help prioritize the telemetry traffic leaving the radio and reduce latency and delay in reaching the destination FM Monitor application.
Access list permitting the CURWB management network to FM Monitor dashboard
Class map must match the access-group defined in the access list
Policy map must contain the class previously defined and marked according to desired DSCP/COS value
Service policy to be applied in the Ingress direction on all CURWB facing interfaces
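The four pieces described above were not captured here; a consolidated minimal sketch, where the subnet, FM Monitor address, names, and DSCP value are illustrative assumptions:

! Access list permitting the CURWB management network to the FM Monitor host
ip access-list extended CURWB_TELEMETRY
 permit ip 10.222.0.0 0.0.0.255 host 10.10.100.50
!
! Class map matching the access list above
class-map match-all CURWB_TELEMETRY_CLASS
 match access-group name CURWB_TELEMETRY
!
! Policy map marking the class to the desired DSCP value
policy-map CURWB_TELEMETRY_MARK
 class CURWB_TELEMETRY_CLASS
  set dscp cs5
!
! Applied ingress on the CURWB-facing interface
interface Port-channel2
 service-policy input CURWB_TELEMETRY_MARK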
The process above describes the steps to configure CURWB and onboard a single EN over wireless bridge links. To form Ring and Daisy Chain Topologies, connect additional ENs in daisy chain format and repeat the steps as needed to onboard additional ENs over the wireless bridge.
Figure 89 DNAC Fabric Edge topology view
The CURWB-connected interfaces and the wired Ethernet interfaces form the five-node Extended Node ring topology shown in the above screen capture from DNA Center.
Cisco FM Monitor is a network-wide, on-premises monitoring dashboard that allows any CURWB customer to proactively maintain and monitor one or more CURWB networks. The dashboard displays situational alerts and telemetry data in real time from every CURWB device in a network. It can work as a standalone system or in parallel with a Simple Network Management Protocol (SNMP) monitoring tool. For more details, refer to your device user guide.
Figure 90 FM Monitor Dashboard
FM Monitor displays and tracks real-time Key Performance Indicators (KPIs) within each administrative cluster, including the number of active radios, number of connected IP edge devices, end-to-end latency, jitter, upload/download throughput, and system uptime. The following table view displays the CURWB units used in this deployment.
Figure 92 More Device Telemetry
To add a CURWB radio for monitoring, click the Settings icon and then click the Devices widget. The add new device button appears in the upper right of the display window. Click this field and input the CURWB IP address. If the device is reachable, a success message is displayed and the status displays as green (online).
Figure 94 Adding a Device to Monitor
Figure 95 FM Monitor added CURWB radios
Refer to Implementation of the Field Area Network for more details about CR-Mesh access network implementation.
LoRaWAN is a media access control (MAC) protocol for wide area networks, defined by the LoRa Alliance (https://www.lora-alliance.org) on top of the LoRa radio physical layer. It is designed to allow low-powered devices to communicate with Internet-connected applications over long-range wireless connections. The LoRa Alliance is an open, nonprofit standards association that includes hundreds of registered members from service providers, solution providers, service integrators, application developers, and sensor and chipset manufacturers.
Cisco Wireless Gateway for LoRaWAN is a module from Cisco Internet of Things (IoT) extension module series (IXM Gateway). It can be connected to the Cisco 809 and 829 Industrial Integrated Services Routers (IR800 series) or be deployed as standalone for low-power wide-area (LPWA) access. It is a carrier-grade gateway for indoor and outdoor deployment, including harsh environments.
■ https://www.cisco.com/c/en/us/solutions/internet-of-things/lorawan-solution.html
There are two LoRaWAN gateway deployment modes:
■Virtual interface mode—IR800 series including the LoRaWAN module as a virtual interface
■Standalone mode—The LoRaWAN module working alone as an Ethernet backhaul gateway or attached to a cellular router through Ethernet.
FND can manage the IXM Gateway in both virtual and standalone modes. The deployment options of IXM are shown in Table 16.
Table 16 LoRaWAN Deployment Options
■IXM connected to the CCI access ring for Ethernet backhaul and PoE
■IXM connected to IR1101 for PoE and cellular connectivity
■IR829 as Remote PoP gateway with IXM as extended LPWA interface
The LoRaWAN access network implementation workflow is shown in Figure 96:
Figure 96 LoRaWAN Access Network Implementation Workflow
The transport of LoRa traffic from LoRaWAN (IXM) gateway to reach ThingPark Enterprise (TPE) and FND is via CCI Backhaul (local PoP) or Cellular Backhaul (Remote PoP). IXM is deployed at local PoP and remote PoP (discussed in later section) and forwards LoRa traffic from sensors in range towards TPE with the help of the Long Range Relay (LRR) packet forwarder that is installed on the gateway.
This section discusses the installation and configuration of TPE and the onboarding of the IXM Gateway in TPE and FND in a local PoP. The LoRa gateway is operated in standalone mode and connected to the CCI network. Here, the CCI network provides the reachability to TPE and FND. (Users can connect the IXM to an IR1101 or IR829 for cellular backhaul, which is discussed in the Remote PoP section.)
Note: LoRaWAN operating in virtual mode behind the IR8x9 is discussed in Remote PoP with LoRaWAN Access Network.
TPE is used for managing IXM gateway, sensors, and applications. TPE helps configure RF channels on the IXM gateway and allows coupling sensors and applications so that sensor data gets forwarded to their respective application.
In CCI, IXM Gateway is connected to the CCI network over cellular backhaul and TPE is installed in the Data Center (obtain installation and configuration guide from Actility). After installing TPE, IXM needs to be configured to connect it to TPE (obtain IXM installation and configuration guide from TPE dashboard download link). Sensors and applications are configured on TPE. The IXM gateway in range of sensors transports data to application via TPE.
Note: Currently, TPE supports only Over The Air Activation (OTAA).
For details about ThingPark Enterprise, refer to the following URL:
■ https://www.actility.com/enterprise-iot-connectivity-solutions/
Onboarding Cisco IXM Gateways includes the following steps:
1. Bring up the Cisco IXM Gateway.
2. Perform the initial configuration on Cisco IXM Gateway.
3. Install the packet forwarder.
4. Perform the LRR packet forwarder configuration.
For details on how to perform each of these steps, refer to:
■ https://www.thethingsnetwork.org/docs/gateways/cisco/setup.html
Refer to the sample Cisco IXM gateway configuration below:
To bring up the connectivity between the Cisco IXM and TPE, follow these steps:
1. Ensure reachability between IXM and TPE exists.
2. Edit the credentials.txt file (found in $ROOTACT/usr/etc/lrr in standalone mode of IXM) to reflect the configured credentials as below:
2nd line: user account on IXM LoRaWAN GW
3rd line: password for the user account
A sample is shown in the example below:
3. Edit the $ROOTACT/usr/etc/lrr/lrr.ini file to reflect the TPE address and set up the FTP address as shown in the example below:
4. Set up the base station following the steps given in the installation guide TP_Enterprise_BS_Installation_Guide_cisco_CISCO_cixm.1_v2.2 (downloaded from the TPE dashboard when setting up the prerequisite).
5. Push the RF region file from the TPE dashboard to the IXM. Confirm and wait for a successful push.
6. Check that the lgw.ini and channels.ini files are now in $ROOTACT/usr/etc/lrr.
7. Restart the packet forwarder.
8. After completion of the steps, the base station should show as active in the connection status of the TPE dashboard, as shown in Figure 97.
Figure 97 Actility Base Station Detailed View—Connected
An application must be created before provisioning a LoRa sensor.
Follow these steps in TPE to create the application:
1. On the TPE go to Applications-> Create-> Generic application.
2. Fill in the details of the application to be created and click Save.
3. The application is now set up and will appear as shown in Figure 98 when navigating to Application -> List.
Setting up an example PNI sensor in TPE
Before beginning to set up the PNI sensor, make sure the sensor is installed and activated.
Refer to the following URL for the steps:
■ https://www.pnicorp.com/wp-content/uploads/PNI-PlacePod-Vehicle-Detection-Sensor-User-Manual-1.pdf
To set up the PNI sensor in TPE, perform the following steps.
2. Fill in the sensor-related details.
3. In Application, select the application created in the previous step.
The sensor is now set up, and sensor data should now be flowing to the application.
IoT FND supports the following configurations for the Cisco Wireless Gateway for LoRaWAN:
–Hardware monitoring and events report.
–IP networking configuration and operations (for example, IP address and IPsec).
–Initial installation of the Thingpark LRR software.
In the CCI scenario, the IXM Gateway is onboarded without using a Tunnel Provisioning Server (TPS); the IGMA-based configuration is provisioned on the gateway manually. After the IGMA-based configuration is provisioned, the gateway triggers a registration request. SCEP enrollment is used for certificate-based authentication.
Step 1: Add the Actility LRR image and public key to FND by clicking the import button on the File Management page. On the FND UI, select Config -> Device File Management -> Actions and click Upload. Select the Add File option, upload the Actility LRR image and public key, and select the Upload File option.
Figure 99 Uploading LRR Image and Public Key into FND
Step 2: On the FND UI, go to the Config -> Device Configuration page, select default-lorawan, select Edit Configuration Template, update the Device Configuration group with the following parameters, and save the changes. Figure 100 shows a sample configuration.
Figure 100 Default Configuration Template in FND for IXM
Step 3: On the FND UI, go to the Config -> Device Configuration page, select Default-Lorawan, select Edit Group Properties, and select the LRR Image and LRR Public Key uploaded in Step 1, as shown in Figure 101.
Figure 101 Default Configuration with Group Properties for LRR Image and Public Key Upload in FND for IXM
Step 4: The Provisioning Settings page will have the FND common name populated in IoT-FND URL as shown in Figure 102 (not mandatory to use this step for verification).
Figure 102 Provisioning Settings in FND
Step 5: Add the IXM Gateway into FND as a LoRaWAN device using a CSV file. Select Devices -> Field Devices -> Add Devices and import a CSV file with the following details:
Figure 103 Adding Devices into FND
Step 6: Provision the configuration on the IXM Gateway to trigger the registration request. Make sure the firewall allows ports 9120, 9121, and 9122, as well as the SSH, Telnet, and DHCP ports. Obtain certificates from the CA (the same one used to issue certificates for FND). Execute the show ipsec certs command to verify.
1. Basic Reachability to FND and RSA CA Server and IP Addressing.
2. Configure Username, NTP, and Enabling SSH.
3. SCEP Enrollment to obtain CA certificates from CA server.
Certificates can be obtained in two ways:
–Manually install the CA server certificate using USB.
–Use SCEP enrollment. In this guide, SCEP is used to obtain the certificates.
Note: In the SCEP enrollment above, it is a best practice to use the device ID as the name of the certificate.
4. The IGMA profile has to be provisioned after SCEP enrollment.
The configuration below is used to trigger the registration request from the device.
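The configuration was not captured here; a minimal sketch based on the IXM software configuration guide, where the profile name and FND URL are illustrative assumptions (verify the exact syntax against your release):

configure terminal
 ! Enables secure IGMA communication (likely the command the note below refers to)
 igma secure enable
 igma profile iot-fnd-register
  url https://fnd.example.local:9121/igma/register
  active
 exit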
Note: If you are unable to provision the IGMA profile, enter enable mode and configure the following command to enable IGMA.
5. Add the HER configuration manually, for example, the tunnel crypto profiles and transform sets. Refer to the following URL for HER-based configuration (this step is not mandatory for IXM Gateway registration):
■ https://www.cisco.com/c/en/us/td/docs/routers/interface-module-lorawan/software/configuration/guide/b_lora_scg/b_lora_scg_chapter_01010.html
Step 7: Once the modem is registered, the IXM shows as up in FND. Check the following events if there are issues during provisioning.
Figure 104 Registration Request from Device in FND
Step 8: Detailed IXM Gateway information can be viewed by clicking on the IXM Gateway tab.
Figure 105 IXM Gateway Dashboard Tab
Step 9: If configuration update is required, follow the same procedure in Step 2, but in this case you invoke a configuration push. Select Push Configuration tab. On the drop-down menu, select Push GATEWAY Configuration and select Start.
Figure 106 IXM Gateway Configuration Push Tab
After the configuration push, the tab will show if the configuration is successfully pushed on to the device.
Figure 107 IXM Gateway Configuration Push Successful in FND
Step 1: Load the Firmware Image into FND.
On FND UI, Select Config -> Firmware Update and select Upload Image.
Figure 108 IXM Gateway Image Upload Tab in FND
Step 2: Push the firmware to the IXM Gateway by selecting LORAWAN on the Select Type drop-down menu and select a firmware image on the Select an Image drop-down menu. If you want to erase the LRR or pubkey, select the clean install option as shown in Figure 109.
Figure 109 IXM Gateway Image Upload Tab in FND-2
Step 3: After upload is complete, install the image by clicking the Install Image button.
Figure 110 IXM Gateway Image Install
When the upgrade starts, a screen similar to Figure 111 is displayed.
Figure 111 IXM Gateway Successful Image Install
Enable the debug categories shown in Figure 112 on FND before troubleshooting.
1. FND does not have any messages from the IXM.
–Make sure the IGMA profile is pointing to the correct FND profile and the name resolution is correct.
–Make sure the FND can be pinged.
–Check the FND configuration template for command accuracy.
For more details refer to the following URL:
■ https://www.cisco.com/c/en/us/td/docs/routers/interface-module-lorawan/software/configuration/guide/b_lora_scg.pdf
CCI covers two different Wi-Fi deployment types: Cisco Unified Wireless Network (CUWN) with Mesh, and SDA Wireless. This section covers the implementation of both CUWN Wi-Fi Mesh and SDA Wireless Wi-Fi (non-mesh) access networks.
■For a CUWN deployment with a centralized WLC, the WLC should be deployed in the shared services network, as covered in Implementing Centralized Wireless LAN Controller for Cisco Unified Wireless Network.
The CUWN solution supports client data services, client monitoring and control, and rogue access point detection, monitoring, and containment functions. CUWN uses lightweight access points (APs) and Cisco Wireless LAN Controllers (WLCs). In CCI, CUWN is deployed "Over the Top" (OTT) as a non-native service. In this mode, the SD-Access fabric is simply a transport network for the wireless traffic. CUWN also leverages Cisco Prime Infrastructure for managing the OTT Wi-Fi access network.
In a wireless mesh deployment, multiple APs (with or without Ethernet connections) communicate over wireless interfaces to form a mesh access network. The Flex+Bridge mode is used in CCI Wi-Fi Mesh network.
Refer to the following URLs for more details on Wi-Fi mesh:
■ https://www.cisco.com/c/en/us/td/docs/wireless/controller/technotes/8-7/b_mesh_87.html
■ https://www.cisco.com/c/en/us/td/docs/wireless/controller/technotes/8-8/b_mesh_88.html
This section covers the CUWN implementation with the C9800 WLC. The configuration steps are the same for both the centralized WLC deployment model and the per-PoP WLC deployment model.
For C9800 configuration guidance, refer to the following URL:
■ https://www.cisco.com/c/en/us/support/docs/wireless/catalyst-9800-series-wireless-controllers/213911-understand-catalyst-9800-wireless-contro.html#anc34
In CUWN Mesh, the Root Access Point (RAP) Ethernet port is connected to the ring IE switches. In Cisco DNA Center 1.3.x, a dedicated AP VLAN with the name AP_VLAN and VLAN ID 2045 is created with a corresponding SVI interface. After you perform the VN-to-IP pool assignment under INFRA_VN for an AP pool, the IP address is assigned to the SVI interface.
In this example, VLAN ID 2045 is the SDA INFRA_VN VLAN, which is associated with the AP infra pool.
Refer to the section “Provisioning Devices using Cisco DNA Center Templates” for the steps to create and apply Day-N configuration templates in Cisco DNA Center.
Configure the following CLIs (example configurations for IOS XE) on the switch port to which the RAP is connected. This can be done either manually or using Day-N templates; it is recommended to use Day-N configuration templates to configure these commands on the IE switch ports to which RAPs are connected.
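The CLIs themselves were not captured here; a minimal sketch of one plausible RAP-facing port configuration, where the interface, client VLAN IDs, and trunking choice are illustrative assumptions:

interface GigabitEthernet1/0/10
 description Connection to Mesh RAP
 switchport mode trunk
 ! Native VLAN carries AP CAPWAP traffic (AP VLAN 2045 from this section)
 switchport trunk native vlan 2045
 switchport trunk allowed vlan 2045,1021,1022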
This section provides the configuration steps required to join a mesh Access Point (AP) as a Root AP (RAP) or Mesh AP (MAP) to the Catalyst 9800 Wireless LAN Controller (WLC) in Flex+Bridge mode.
A mesh AP needs to be authenticated to join the 9800 controller. The AP first joins the WLC in local mode and is then converted to Flex+Bridge mode, also known as mesh mode.
For configuration guidance, refer to the following URLs:
■ https://www.cisco.com/c/en/us/support/docs/wireless/catalyst-9800-series-wireless-controllers/215100-join-mesh-aps-to-catalyst-9800-wireless.html
Configure RAP/MAP MAC addresses under Device Authentication:
1. Navigate to Configuration -> Security -> AAA -> AAA Advanced -> Device Authentication, select Device Authentication and select Add. Type in the Base Ethernet MAC address of the AP to join to the WLC, leave the Attribute List Name blank, and finally select Apply to Device.
Configure the authentication and authorization method list:
2. Navigate to Configuration -> Security -> AAA -> AAA Method List -> Authentication and select Add. The AAA Authentication pop-up appears. Type in a name in the Method List Name, select 802.1x from the Type* drop-down and local for the Group Type, and select Apply to Device.
3. Navigate to Configuration -> Security -> AAA -> AAA Method List -> Authentication and select Add. The AAA Authentication pop-up appears. Type in a name in the Method List Name, select credential download from the Type* drop-down and local for the Group Type, and select Apply to Device.
4. Navigate to Configuration -> Wireless -> Mesh -> Profiles and select Add. The Add Mesh Profile pop-up appears. In the General tab set a name and description for the Mesh profile and check Backhaul Client Access.
5. Under the Advanced tab, select EAP for the Method field. Select the Authorization and Authentication profiles created earlier, uncheck Vlan Transparent, and check Ethernet Bridging (optional). Create a Bridge Group Name (BGN), check Strict Match, and select Apply to Device, as shown in Figure 113.
Figure 113 Mesh Profile on C9800 WLC
6. Navigate to Configuration -> Tag & Profiles -> AP Join -> Profile and select Add. The AP Join Profile pop-up appears. Set a name and description for the AP Join profile.
7. Navigate to the AP tab and select the Mesh Profile created earlier from the Mesh Profile Name drop-down. Ensure EAP-FAST and CAPWAP DTLS are set for the EAP Type and AP Authorization Type fields respectively and finally select Apply to Device.
8. Navigate to Configuration -> Tag & Profiles -> Tags -> Site and select Add. The Site Tag pop up appears. Type in a name and description for the Site Tag, select the AP Join Profile created earlier from the AP Join Profile drop-down. At the bottom of the Site Tag popup, uncheck the Enable Local Site checkbox to enable the Flex Profile dropdown. From the Flex Profile drop-down, select the Flex Profile you want to use for the AP.
Connect the AP to the network and ensure the AP is in local mode. To set the AP to local mode, issue the command capwap ap mode local on the AP.
Note: The AP must have a way to find the controller with either Layer 2 broadcast, DHCP Option 43, DNS resolution, or manual setup.
In the CCI deployment, DHCP Option 43 is used on the AP pool to help the AP obtain the controller IP address from the DHCP server. In addition to offering the AP an IP address, the DHCP server may also return one or more controller IP addresses to the AP.
Refer to the following URL for information on configuring Option 43 on the DHCP server:
■ https://www.cisco.com/c/en/us/support/docs/wireless-mobility/wireless-lan-wlan/97066-dhcp-option-43-00.html
After the AP joins the WLC, ensure it is listed under the AP list by navigating to Configuration -> Wireless -> Access Points -> All Access Points.
1. Select the AP; the AP popup appears. Select the Site Tag created earlier under the General -> Tags -> Site tab. Within the AP popup, select Update and Apply to Device.
Figure 114 Applying the Site Tag to AP
The AP reboots and should join back the WLC in Flex+Bridge mode.
We can now define the role of the AP: either root AP or mesh AP. The root AP is the one with a wired connection to the switch, while the mesh AP joins the WLC via its radio, which tries to connect to a root AP. For provisioning purposes, a mesh AP can join the WLC via its wired interface once it has failed to find a root AP via its radio.
2. Select the AP; the AP popup appears, Under Mesh -> Role, from the drop-down menu choose Root for RAP and Mesh for MAP, and then select Update and Apply to Device.
Figure 115 Selecting the Role of AP in Mesh
For more details on C9800 WLC configuration guidelines, refer to the following URL:
■ https://www.cisco.com/c/en/us/support/docs/wireless/catalyst-9800-series-wireless-controllers/213911-understand-catalyst-9800-wireless-contro.html
Step 1: Declare client VLANs. Add to the WLC the VLANs to which the wireless clients are assigned.
a. Navigate to Configuration -> Layer2 -> VLAN -> VLAN -> + Add. Add all the required VLANs and change the State to Activated.
Note: If you do not specify a Name, the VLAN automatically gets assigned the name VLANXXXX, where XXXX is its VLAN ID.
Repeat this step to create all the required VLANs.
In CCI network, to get the VLAN information of the VN networks:
b. Navigate to Provision -> Fabric, select desired PoP Site, and click on FiaB C9300 switch. On Run Commands, type show vlan brief to fetch the VLAN details.
c. Verify the VLANs are allowed in your data interfaces.
–If you are using port channels, navigate to Configuration -> Interface -> Logical -> PortChannel name -> General. Make sure it is configured as Allowed Vlan = All.
–If you are not using port channels, navigate to Configuration -> Interface -> Ethernet -> Interface Name -> General. Make sure it is configured as Allowed Vlan = All.
Figure 116 Visual Representation of WLAN Configuration Elements
Recommended flow of configuration:
1. Create/Modify a WLAN Profile.
2. Create/Modify a Policy Profile.
3. Create/Modify a Policy Tag (link the SSID to the desired Policy Profile).
4. Assign the Policy Tag to the AP.
Step 1. Create/Modify a WLAN Profile:
Navigate to Configuration-> Tags & Profiles-> WLANs-> + Add. Enter all the needed information (SSID name, security type, and so on) and then click Apply to Device.
Step 2. Create/Modify a Policy Profile:
Navigate to Configuration-> Tags & Profiles-> Policy. Either select the name of a pre-existing one or click + Add to add a new one. Ensure it is enabled, set the needed VLAN and any other parameter we want to customize. Once done click on Update & Apply to Device.
Step 3. Create/Modify a Policy Tag:
Navigate to Configuration-> Tags & Profiles-> Tags-> Policy. Either select the name of a pre-existing one or click + Add to add a new one. Inside the Policy Tag, click +Add, from the drop down list select the WLAN Profile name you want to add to the Policy Tag and Policy Profile to which you want to link it. Then click the checkmark Update & Apply to Device.
Step 4. Assigning the Policy Tag to the AP:
Navigate to Configuration-> Wireless-> Access Points-> AP name-> General-> Tags. From the Policy dropdown list select the desired Policy Tag and click Update & Apply to Device.
Note: After changing the policy tag on an AP, the AP loses its association to the 9800 WLC and joins back within about 1 minute.
Recommended flow of configuration:
1. Create/Modify the RF profiles for 2.4GHz / 5GHz.
2. Create/Modify an RF Tag.
3. If needed, assign the RF Tag to the AP.
Step 1. Create/Modify the RF profiles for 2.4GHz / 5GHz:
Navigate to Configuration-> Tags & Profiles-> RF. Either select the name of a pre-existing profile or click + Add to add a new one. Modify the profile as desired, one per band (802.11a/802.11b), and then click Apply to Device. In CCI, the pre-configured RF profiles are used.
Step 2. Create/Modify an RF Tag:
The RF tag is the setting that allows you to specify which RF Profiles are assigned to the APs.
Navigate to Configuration-> Tags & Profiles-> Tags-> RF. Either select the name of a pre-existing one or click + Add to add a new one. Inside the RF Tag, select the RF Profile that we want to add. After that click Update & Apply to Device.
Step 3. RF Tag Assignment (optional):
You can assign a RF Tag directly to an AP.
Navigate to Configuration-> Wireless-> Access Points-> AP name-> General-> Tags. From the RF drop-down list, select the desired RF Tag and click Update & Apply to Device.
Figure 117 WLAN Verification on C9800
Other important verification commands:
You can alternatively use these commands to verify the configuration.
VLANs/Interfaces Configuration:
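The command list was not captured here; the following show commands on the C9800 are a reasonable sketch for verifying the elements configured above:

show vlan brief
show interfaces trunk
show wlan summary
show ap summary
show ap tag summary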
Ethernet bridging should be enabled for the following scenarios:
1. Use the mesh nodes as bridges.
2. Connect Ethernet devices, such as a video camera, on a MAP using its Ethernet port.
An Ethernet Bridging feature can provide a wireless infrastructure connection for Ethernet-enabled devices. Devices that do not have a wireless client adapter in order to connect to the wireless network can be connected to the AP through the Ethernet port. The MAP AP associates to the root AP through the wireless interface. In this way, wired clients obtain access to the wireless network. Wired clients with different VLANs behind the AP are also supported. To use an Ethernet-bridged application, enable the bridging feature on the RAP and on all the MAPs in that sector.
For more details on Ethernet Bridging, refer to:
■ https://www.cisco.com/c/en/us/td/docs/wireless/controller/technotes/8-7/b_mesh_87.html
a. Navigate to Configuration->Wireless->Mesh->Profiles, click the existing Mesh Profile. On the Advanced tab, uncheck VLAN Transparent and check Ethernet Bridging, as shown in Figure 118. Then click Update & Apply to Device.
Figure 118 Ethernet Bridging on Wireless Mesh
b. Navigate to Configuration -> Wireless -> Access Points -> AP name -> Mesh and configure the Ethernet port as shown below.
Access Ethernet ports in access mode:
The AP Ethernet port is configured in access mode for use cases where specific application traffic must be segmented within the wireless mesh network and then forwarded (bridged) to the wired LAN, for example, when connecting a security camera.
Access Ethernet ports in trunk mode:
The AP Ethernet port is configured as a trunk port when you want to connect an L2 switch to increase port density and bridge multiple VLANs to the wired LAN over the Wi-Fi mesh.
Figure 119 Mesh AP Ethernet Port configuration example
The Authentication, Authorization, and Accounting (AAA) server provides authentication, authorization, and accounting services for wireless clients and for infrastructure administrator access control. This section provides the steps to configure the C9800 WLC to work with ISE. For more information on the Cisco Catalyst 9800 series, refer to the following URL:
■ https://www.cisco.com/c/en/us/products/wireless/catalyst-9800-series-wireless-controllers/index.html
This section assumes the C9800 WLC is accessible and an AP is associated to the C9800. It also assumes the underlying network elements are already configured, including VLANs, SVIs, subnets, DHCP, routing, and DNS.
The following flow diagram shows the C9800 WLC configuration at a high level. Each box represents an individual configuration profile with its relevant options and shows how each profile feeds into other profiles to make a working configuration. The bullet points in bold within a profile represent sub-profiles being fed into that profile. The diagram also includes the suggested order in which to create the profiles, which maps to the main sections of this document.
Figure 120 C9800 WLC Configuration Flow for ISE
a. Go to Configuration-> Security-> AAA-> Servers / Groups-> Servers, Click Add.
Enter the following information (any configuration not defined in the table assumes default settings):
b. Click Server Groups, Click Add.
Enter the server group Name and Group Type as required for your deployment.
c. Go to Configuration-> Security-> AAA-> AAA Method List-> Authentication, Click Add.
Create an authentication list using the following information; it is used for both the OPEN SSID and the SECURE SSID:
d. Go to Configuration-> Security-> AAA-> AAA Method List-> Authorization, Click Add.
Note: The authorization name 'default' is significant here because no authorization list can be defined within the 802.1X WLAN. By using 'default' as the name, the C9800 can use ISE to get additional authorization details, such as for dACL operation. If the default authorization list cannot be used or is not desired, a named authorization list can be created and referenced via the RADIUS server as a Cisco VSA. The Cisco VSA to use is 'Method-List={authorization-method-list}', which can be configured in the ISE Advanced Attribute Settings.
e. Go to Configuration-> Security-> AAA-> AAA Method List-> Accounting, Click Add.
Create Webauth Parameter Map (Required for Guest Access)
1. Go to Configuration-> Security-> Webauth-> Webauth Parameter Map, Click Add.
2. Enter the Name 'Captive-Bypass-Portal' and click Apply to Device.
3. Click ‘Captive-Bypass-Portal’ parameter map from the list.
4. Check Captive Bypass Portal, Click Update & Apply.
Step 6: Create VLANs. Go to Configuration-> Layer 2-> VLAN-> VLAN, and click Add to add the required access VLANs for the SSIDs.
Step 7: Create Policy Profiles
Go to Configuration-> Tags & Profiles-> Policy, Click Add.
Add policy profiles for the WLANs using the following table. The policy profile covers device sensor, default VLAN, CoA, and RADIUS accounting. These profiles are mapped to the WLANs using tags.
Figure 121 C9800 Policy Profile Configuration
Step 8: Create Policy Tag
Go to Configuration-> Tags & Profiles-> Tags, and under Policy click Add.
Within the 'ISE Enabled' tag window, click Add to map the following WLANs to their matching policy profiles. This ties each WLAN to its respective policy profile.
Step 9: Assign Policy Tag to AP
Finally, apply the tag to the AP. This section shows how to tie it to a single AP; using the Advanced Wireless Setup wizard on the C9800, the same tag can be applied to multiple APs at once.
1. Go to Configuration -> Wireless -> Access Points.
2. Click on the AP Name or MAC address.
3. Under General-> Tags, Select 'CCI_Hebbal'.
Figure 122 C9800 Policy Tag assignment to AP
Add WLC as network device on ISE
Step 1. Navigate to Administration-> Network Resources-> Network Devices - > Add.
Step 2. Enter WLC Name, check the RADIUS Authentication Settings option and enter the Shared Secret.
Figure 123 WLC and ISE Integration Verification
Step 1. Navigate to Administration -> Identity Management -> Identities -> Users -> Add.
Step 2. Enter the user information. In this example, the user belongs to a group called ALL_ACCOUNTS, but this can be adjusted as needed, as shown in the image.
Authentication rules verify that the credentials of a user are correct (that is, that the user really is who they claim to be) and limit the authentication methods the user is allowed to use.
Navigate to Policy-> Policy Elements-> Results-> Authentication-> Allowed Protocols as shown in Figure 124.
Add an authentication rule by selecting the protocols as shown in Figure 124.
Figure 124 Authentication Rule Configuration on ISE
The authorization profile determines whether the client is granted access to the network, and can push Access Control Lists (ACLs), a VLAN override, or other parameters. The authorization profile shown in this example sends an access-accept for the client and assigns the client to VLAN 1028.
Add a new Authorization Profile.
Navigate to Policy-> Policy Elements-> Results-> Authorization-> Authorization Profiles as shown in Figure 125.
Enter the values as shown in the image. Here you can return AAA override attributes, such as the VLAN. The C9800 WLC accepts tunnel attributes 64, 65, and 81 with either the VLAN ID or name, and also accepts the Airespace-Interface-Name attribute.
Figure 125 Authorization Profile Configuration on ISE
Create Policy Set (Authentication and Authorization rules)
Navigate to Policy-> Policy Sets as shown in the image, and click '+' to create a policy set named CUWN_PolicySet.
Add the conditions that cause the authorization process to fall into this rule. In this example, the rule is hit if the request uses Wireless 802.1X and its Called-Station-ID ends with CCI_OTT_SnSH, as shown in Figure 126.
Figure 126 Policy Set Authorization Conditions
To view the authentication/authorization rules, click the arrow on the right side to go into that specific policy set:
In the Allowed Protocols field, select the 'CUWN_auth' list created earlier from the drop-down. For the Authentication Policy, choose the default rule with 'All_User_ID_Stores'; for the Authorization Policy, choose the default rule with the 'CUWN_AuthorizationProf' created earlier.
Figure 127 Policy Set Configuration in ISE
For details about SDA eWLC deployment and SDA AP onboarding, refer to the section “Configuring SD Access Wireless Embedded WLC on C9300 Stack”
1. On DNA Center, navigate to DESIGN-> Network Settings-> Wireless. In the left hierarchy pane, select the Global level. In the Enterprise Wireless section, click + Add. Create an SSID with the required information as shown in the image below and click Next to continue.
2. Enter a Wireless Profile Name, select Yes under Fabric, choose the site where the SSID is broadcast, and click Finish, as shown in the image below.
3. Provision the PoP site C9300 switch with eWLC to push the changes. Verify that the newly created SSID gets configured.
Even though the SDA AP is in local mode, data traffic is not forwarded to the WLC over CAPWAP; instead, the AP encapsulates traffic in VXLAN and forwards it to the Fabric Edge switch. Micro-segmentation for wireless clients therefore works the same as for wired clients.
For more details on micro-segmentation using SGTs refer to the following URL:
■ https://www.cisco.com/c/dam/en/us/td/docs/solutions/CVD/Campus/sda-fabric-deploy-2019oct.pdf
1. On DNA Center, Navigate to Policy -> Group-Based Access Control -> Scalable Groups and create SGTs. In this example, as shown in Figure 128, two SGTs CCI_SSID1_SnS_VN and CCI_SSID2_SnS_VN are created and assigned to the SnS_VN and deployed.
Figure 128 SGT Creation on Cisco DNA Center
On DNA Center, Navigate to Policy -> Group-Based Access Control -> Policies and create policies. In this example as shown in Figure 129, a deny policy is created between CCI_SSID1_SnS_VN and CCI_SSID2_SnS_VN SGTs and deployed.
Figure 129 Policy (SGACL) Creation on Cisco DNA Center
The status changes to DEPLOYED, and the policies are available to be applied to the SD-Access fabrics that Cisco DNA Center creates. They are also available in ISE, viewable using the Cisco TrustSec policy matrix.
1. On ISE, navigate to Work Centers-> TrustSec-> TrustSec Policy, and then select Matrix on the left side. Verify that the policy has been created in the ISE TrustSec policy matrix.
Figure 130 SGACL verification on ISE policy Matrix
2. On DNA Center, navigate to Provision -> Fabric and choose the Bangalore Fabric. Navigate to MG Road PoP site and under Host Onboarding assign the SGTs to the Address Pools, then click Save and Apply.
show cts role-based permissions - Shows SGACL configured in ISE and pushed to the edge device
When clients on the SSIDs CCI_SSID1 (SGT 22) and CCI_SSID2 (SGT 23) try to communicate with each other, we observe on the Fabric Edge that packets are denied.
show cts role-based counters—Provides information on the exit edge node about SGACL being applied.
This section covers the implementation of the Field Area Network (FAN) on the CCI network, with Cisco Resilient Mesh (CR-Mesh) as one of the access networks in the PoP site access rings or in an RPoP site. It also discusses in detail the implementation of the headend network infrastructure for secure communication of CR-Mesh gateway (CGR1240) and node data traffic over the CCI network to the headend router.
This section includes the following major topics:
■Secure Onboarding of Field Area Router—CGR1240
■Implementing CR-Mesh Access Network
The headend is a combination of components that helps in authentication, certificate enrollment, provisioning, and management of legitimate FARs and Field devices.
Table 17 lists the headend infrastructure components:
Table 18 shows the headend components and operating system requirements:
Table 19 shows the headend components hardware requirements according to scale requirements:
Table 20 shows the headend components license requirements:
Multiple components interact with each other in the headend. Considering the dependencies between components, the following sequence is followed while implementing the headend: for example, the RSA CA server should be installed and configured first, followed by the FND, and so on. Components 1-4 are mandatory for building the headend infrastructure. Component 5 is required for securely onboarding endpoints like CGEs.
The Root CA provides the certificate for RSA certificate-based authentication, which is required by multiple components such as the HER, FAR, and FND, for enhanced security. Interacting components first authenticate each other using the RSA CA certificate. Components of the headend that require the RSA CA server are shown in Table 21:
For installing/configuring of the RSA Server, refer to the section “Implementing RSA Certificate Authority” on page 35 at the following URL:
■ https://salesconnect.cisco.com/-/content-detail/da249429-ec79-49fc-9471-0ec859e83872
■If you do not have access to any of these Cisco SalesConnect links, ask your Cisco account team to help provide you with the documentation. However, some of the documents require a signed non-disclosure agreement (NDA) with Cisco.
■In the above Implementation Guide, the installation is described for Windows Server 2012; the steps are the same for Windows Server 2016.
After installation of the RSA CA server, the following certificates were exported from it:
■FND certificate:
–Contains the properties representing the FND
–Contains the private key of the FND
–The exported FND certificate is password protected
■RSA CA server certificate:
–Represents the RSA CA server
–Does not contain the private key of the RSA CA server
–It is a public certificate and is not protected with any password
The ECC CA server is used to implement authentication between the NPS server and the CGE. The NPS server integrates the Certificate Authority with RADIUS and issues CGE certificates, which are programmed into the CGEs for authentication. When the CGR receives an authentication request from a CGE, it forwards the request to the NPS server for authentication.
■Configure the system time and date on the Windows Server 2016 Enterprise machine (to install the ECC CA) to the correct time and date, or enable the Windows Time service to sync time with an authoritative time source.
■For each configuration page mentioned in the following steps, any settings/options that are not mentioned can remain at their default value.
■Each server machine configured with Active Directory Certificate Services (either Root or Subordinate CA (Sub-CA)) can only be configured with one specific Cryptographic Service Provider (CSP). For this installation, the CSP is ECDSA P256#Microsoft Software Key Storage Provider.
Note: The ECDSA P256 Algorithm is used for authenticating the CGEs.
■In the following procedure to install the ECC CA, it is assumed that you want to install the Active Directory Certificate Services on a server machine that has successfully joined the Active Directory Domain as a member server. The server on which the ADCS is to be enabled needs to be part of an Active Directory Domain (either as a member server or as a domain controller).
■It is recommended to appropriately rename the computer name of the Windows 2016 server to something meaningful according to the role played by the server. While doing so, the server might reload. Once the server comes back up, verify that the computer name has changed.
Time synchronization plays a crucial role while using certificate-based authentication, which provides stronger security compared to pre-shared keys. For installing/configuring of NTP Synchronization of RSA CA Server, refer to section “NTP Synchronization for RSA CA Server,” page 36, at the following URL:
■ https://salesconnect.cisco.com/#/content-detail/da249429-ec79-49fc-9471-0ec859e83872
Creating Active Directory Domain Services, DNS Server, and NPS
1. In Windows 2016, click Start and then click Server Manager. If Server Manager is not in the menu items, click Start, click the Smart Search box and type Server manager.
2. In the Select installation type section, choose the default Role-based or feature-based installation, click Next, leave default on the Server Selection section, and then click Next again.
3. In the Server Roles section, check Active Directory Domain Services, DNS Server and Network Policy and Access Services (in the pop-up window, click Add Features after each selection) and then click Next.
4. In the Features section, leave default values and click Next. In the Active Directory Domain Services section, leave default values and click Next. In the DNS server section, leave default values and click Next. In the Network Policy and Access server section, leave the default values and then click Next.
5. In the Confirm installation services section, select Restart the destination server automatically if required, and then click Install. Once the server role installation is completed, the Installation Results dialog displays. Check all the relevant parameters.
Configuring Active Directory Domain Services, DNS Server, and NPS
6. On the Server Manager page, select AD DS (Active Directory Domain Services), click More, and then select Promote this server to a domain controller.
7. On the Deployment Configuration panel, choose Add a new forest, set a Root domain name like iot.cisco.com, and then click Next. In the Domain Controller Options section, set the password and click Next. In the DNS Options section, leave default value for Create DNS delegation and then click Next.
8. Under Additional Options, set the NETBIOS domain name and click Next. In the Paths section, leave values as default and then click Next.
9. In the Review Options section, verify all the desired values and then click Next. In the Prerequisites Check section, make sure all the prerequisite checks are passed successfully and click Install.
1. Open Server Manager, click Add roles and features, click Next, choose the default Role-based or feature-based installation, click Next, leave the default on the Server Selection section, and then click Next.
2. On the Select Server Roles page, choose Active Directory Certificate Services, click Add Features in the pop-up window, and then click Next.
3. On the Select Role Services page, check the following role services, and then click Next.
–Certification Authority
–Certification Authority Web Enrollment
–Online Responder (Microsoft's OCSP responder, an alternative to Certificate Revocation Lists)
4. On Web Server Role (IIS) page, click Next. On the Select Role Services page, click Next to accept all the default role services for Web Server (IIS).
5. On the Confirm Installation Options page, review all selected configuration settings and (select Restart the destination server automatically if required). To accept these options, click Install and wait until the setup process complete. Once the server role installation is completed, the Installation Results dialog displays.
6. Click Server Manager, click AD CS, and then click More. On the All Servers Task Details and Notifications page, select Configure Active Directory Certificate Services and then click Next.
7. On the Credentials page, click Next. On the Select Role Services page, check the following role services, and then click Next.
–Certification Authority
–Certification Authority Web Enrollment
–Online Responder (Microsoft's OCSP responder, an alternative to Certificate Revocation Lists)
8. On the CA Type page, keep the default selection, Root CA, and then click Next.
9. On the Set Up Private Key page, click Create a new private key, and then click Next.
10. On the Configure Cryptography for CA page, select the following CSP, key length, and hash algorithm:
a. Cryptographic Service Provider (CSP): ECDSA_P256#Microsoft Software Key Storage Provider.
b. Key length: 256; hash algorithm: SHA256.
11. On the CA Name page, leave all default values and then click Next. On the Set Validity Period page, specify the number of years or months for which the CG-Mesh node certificate is valid. You can choose the validity period according to your requirements; in this implementation, 5 years is used as an example. Click Next.
12. On the Confirm Installation Options page, review all selected configuration settings. To accept these options, click Install and wait until the setup process completes. Once the server role installation is completed, the Installation Results dialog displays.
13. Verify that all desired server roles and role services that are shown with Installation succeeded. Click the Close option and reboot the server.
Disable Certificate Extensions
14. Open a Command prompt console and type the following commands to disable some certificate extensions:
Modify Default Name Curve for Server Key Exchange Message
15. Click Start, search for gpedit.msc (Local Group Policy Editor), select Local Computer policy, select Computer Configuration, expand Administrative Template, select the drop-down list for Network, select SSL configuration setting, and then click ECC Curve Order.
16. In ECC Curve Order page, click Enabled, and then add secp256r1 in ECC Curve Order.
Complete the following steps to create and configure the template for CGE on the NPS:
1. Launch Certsrv (Certification Authority console): click Server Manager, select Tools, and then in the drop-down list, select Certification Authority.
2. In the Certsrv window on the Certificate Authority / Sub-CA Server (under Certificate Authority) running ECC Algorithm, right-click and select Properties.
3. In the Properties window, select General tab, select View Certificate, and then click the Details tab. Scroll down and check the Signature algorithm used is SHA256ECDSA. The Public key should be ECC (256 Bits).
4. In the Certification Authority Console, select CA (Local)-> Sub CA. Right-click Certificate Templates in the left plane, and then right-click and select Manage.
5. Select and duplicate the Computer from the Certificates Templates Console. In the Compatibility tab, select Windows Server 2016 for Certification Authority and Certificate Recipient.
6. In the General tab, specify the Template display name (for example, CGE_Template) and that the validity period is 5 years and the Renewal period is 6 weeks. Then select the Publish certificate in Active Directory check box.
7. On the Request Handling tab, choose Signature from the Purpose drop-down list. Select Yes in the Certificate Templates warning dialog. To allow certificate private key exports in the Request Handling tab, select Allow private key to be exported.
8. On the Cryptography tab, choose Key Storage Provider for the Provider Category and choose ECDSA_P256 for the algorithm name. Enter 256 in the Minimum key size field. For the Request hash, choose SHA256.
9. On the Subject Name tab, select Supply in the request to enter the Subject Name and Common Name. This can be the EUI64 MAC address string of a CGE Node and is used for additional user authentication against the RADIUS server.
10. On the Security tab, for all listed group or user names, ensure that the Enroll and Autoenroll permissions are selected.
11. Select Apply and OK, close the Certificate Template Console, and then select the Certificate Template folder from the Certification Authority (certsrv).
Figure 131 Creation of Certificate Template for CGE
12. Select New, select Certificate Template to Issue, and then select the new certificate template, for example CGE_Template, which the user generated earlier. The new certificate template should be listed within the Certificate Templates folder of the Certification Authority Console.
Figure 132 CGE Template to Issue Certificates
The following steps guide the administrator of the NPS servers to generate a certificate from the CA using the Template that was created above (CGE_Template).
1. Open the Microsoft Management Console (MMC) application on Windows Server 2016 (Run> mmc) and make sure the Local Computer Certificates snap-in is loaded. If this is the first time MMC is being configured, click File and Add/Remove Snap-in..., and in the pop-up window, select and add the Certificate Authority in the left pane. Click OK and then click Finish.
2. Click File and Add/Remove Snap-in...; in the pop-up window, select Certificates in the left pane, and then click Add. Click OK, select My user account, and then click Finish.
3. In the Add or Remove Snap-ins window, select Certificates in the left pane and click Add. Click OK, select Computer account, click Next, select Local Computer, and then click Finish. The items are added in the left pane.
4. In the Certificates (Local Computer), go to the Personal drop-down list. Select Certificates, right-click and select All tasks, select Request New Certificate, then click Next (Certificates (Local Computer)-> Personal-> Certificates-> All Tasks-> Request New Certificate).
5. Select Active Directory Enrollment Policy and then click Next.
6. Select CGE_Template and then click More information is required to enroll link below it.
7. In the Certificate Properties Dialog, in the Subject tab, choose Common name from the Type drop-down list. After filling in EUID in Value, click Add, and then click OK.
8. Click Enroll and then click Finish when enroll is completed.
Three certificates need to be exported: CGE certificate with private key, CGE certificate with public key, and ECC CA Server Root Certificate with the public key only:
■CGE Certificate with Public Key will be added as an entry in Active Directory.
■CGE Certificate with Private Key will be programmed into CGE, which is used for authentication purposes.
■ECC CA Root Certificate will be programmed into CGE, which is used for identifying the valid root CA.
Exporting Certificate with Private Key
1. Return to the MMC application, highlight the newly created certificate (example: 00173b0b0039003c), right-click and select All Tasks and then select Export.
2. Follow the export wizard to the next screen. Select Yes, export the private key.
3. In Certificate Export Wizard, select Include all certificates in the certification path if possible and select Next. This includes the CA certificate.
4. Enter the password for certificate, which will be used in CGE. For default settings, use the password Cisco123 and select Next.
Figure 133 Certificate Export of CGE
5. Specify the name and location of the file to export, click Next, and then click Finish.
6. After exporting, the Certificate Export Wizard looks as depicted in Figure 134:
Figure 134 Successful Export of Certificate with Private Key
Exporting Certificate with Public Key
1. Return to the MMC application, highlight the newly created certificate (example: 00173b0b0039003c), right-click and select All Tasks, and then select Export.
2. Follow the export wizard to the next screen. Select No, do not export the private key.
3. Select the export file format DER encoded binary X.509 (.CER). Click Next and save it as a .cer file.
Figure 135 Successful Export of Certificate with Public Key
Exporting CA Server Certificate
1. Open the link on the NPS server and then click the link of Download a CA certificate, certificate chain, or CRL.
Figure 136 Exporting CA Certificate on ECC-CA Server
2. Click Download CA certificate, and then choose the DER format. This is the root certificate for the ECC CA server.
1. Launch Certsrv (Certificate Authority Console), click Server Manager, select Tools and, in the drop-down list, select Certification Authority.
2. In Certsrv window on the Certificate Authority / Sub-CA Server (under Certificate Authority) running ECC Algorithm, right-click Certificate Template, and then select Manage.
3. Select and duplicate the Web Server certificate template from the Certificates Templates Console. In the Compatibility tab, select the Windows Server 2016 for Certification Authority and Certificate Recipient.
4. In the General tab, specify the Template display name (for example, Radius_Template), choose a validity period of 5 years and Renewal period of 6 weeks, and then click the Publish certificate in Active Directory check box.
5. On the Request Handling tab, choose Signature from the Purpose drop-down list. Select Yes in the Certificate Templates warning dialog. To allow certificate private key exports in the Request Handling tab, select Allow private key to be exported.
6. On the Cryptography tab, choose Key Storage Provider for the Provider Category and choose ECDSA_ P256 for the algorithm name. Enter 256 in the Minimum key size field. For the Request hash, choose SHA256.
7. On the Subject Name tab, select Supply in the request to enter the Subject Name and Common Name.
8. On the Security tab, for all listed group or user names, ensure that the Enroll and Autoenroll permissions are selected.
9. On the Extensions tab, user should ensure that only Server Authentication is present. Click Apply and then OK. Close the Certificate Template Console.
Note: The user needs to add the newly created Radius_Template.
10. Select the Certificate Template folder from the Certification Authority (certsrv).
11. Select New, select Certificate Template to Issue and the new certificate template (for example, Radius_Template, which the user generated earlier). The new certificate template should be listed within the Certificate Templates folder of the Certification Authority Console.
Figure 137 Configuring RADIUS Template
Note: In the CCI deployment, we are using two AAA servers: one is Microsoft NPS and the other is Cisco ISE. For CGE authentication, we are relying on Microsoft NPS since the CGE authentication is tightly coupled with Microsoft NPS server as per the current implementation.
The following steps guide the administrator of the NPS servers to generate a certificate from the CA using the template that was created above (Radius_Template):
1. Open the Microsoft Management Console (MMC) application on Windows Server 2016 (Run> mmc) and make sure the Local Computer Certificates snap-in is loaded. If this is the first time MMC is being configured, click File and Add/Remove Snap-in..., select Certificates in the left pane, and then click Add. Click OK, select Computer account, click Next, select Local Computer, and then click Finish. The items are added in the left pane.
2. In the certificates (Local Computer), from the Personal drop-down list, select Certificates, right-click and select All tasks, select Request New Certificate, click Next (Certificates (Local Computer)-> Personal-> Certificates-> All Tasks-> Request New Certificate).
3. Select Active Directory Enrollment Policy and then click Next.
4. Select Radius_Template and click the More information is required to enroll link below it.
5. In the Certificate Properties dialog, in the Subject tab, choose Common name from the Type drop-down list. After filling in RADIUS in the Value field, click Add, and then click OK.
6. Click Enroll and then click Finish when Enroll is completed.
Exporting RADIUS Private Certificate
1. Return to the MMC application and highlight the newly created certificate (example: Radius_Template), right-click and select All Tasks, and then select Export.
2. Follow the export wizard to the next screen. Select Yes, export the private key.
3. In the Certificate Export Wizard, select Include all certificates in the certification path if possible and then select Next. This includes the CA certificate.
4. Enter the password for the certificate that will be used in CGE. For default settings, use the password Cisco123 and then select Next.
Figure 138 Configuring and Creating RADIUS Template
Figure 139 Exporting RADIUS Certificate with Private Key
Exporting RADIUS Public Certificate
1. Return to the MMC application and highlight the newly created certificate (example: Radius_Template), right-click and select All Tasks and then select Export.
2. Follow the export wizard to the next screen. Select No, do not export the private key.
3. Select the export file format DER encoded binary X.509 (.CER). Click Next and save it as a .cer file.
Figure 140 Exporting RADIUS Certificate with Public Key
CGE Configuration in NPS Server
Adding CGE to Active Directory of NPS
1. From Start-> Administrative tools, open Active Directory Users and Computers.
2. Select domain iot.cisco.com and then click Computers (example shown below).
Figure 141 Adding Computer (Node) to Active Directory Users and Computers
3. Click Action, select Computer, enter EUI64 as computer name, and then click OK.
4. Click View, select Advanced Features, select the new computer, and then click Action and select Name Mappings.
5. In Security Identity Mapping, click Add, and navigate to the new public key cert (00173b0b0039003c.cer) above. Verify details and then click OK.
Modify the Active Directory Services Interface (ADSI) of CGE
1. Click Start-> ADSI Edit. Navigate to the iot.cisco.com and its computers.
2. Select the new node you added. Click Action, select Properties, and then scroll down to servicePrincipalName. Click to highlight and edit it.
3. Type the string HOST/<EUI64> (in this example, HOST/00173b0b0039003c) and click Add. Then click OK.
Figure 142 Configuring ADSI Parameters
Figure 143 Adding Host EUID to ADSI
4. Close the ADSI edit window.
1. From Start, click Administrative Tools, and then select Network Policy Server. Right-click the NPS (Local) icon and select Register Server in Active Directory.
2. Click RADIUS Clients and Server and select RADIUS Clients.
3. Click Action and select New. Select Enable this RADIUS Client, enter the details of your CGR, and then select OK. Note that the shared secret (for example, cisco-123) must be the same as the one configured on the CGR, and the IP address is the CGR's loopback IP address.
Figure 144 Adding CGR to NPS for Authentication
Configuring the Policies on the NPS Server
Configure the Microsoft Network Policy Server (for example, Windows Server 2016) for the CR-Mesh network. Both connection request policies and network policies must be configured for the CR-Mesh network.
1. Launch Network Policy Server, expand Policies, and select Connection Request Policies. Add a new Connection Request Policy by selecting Action and then selecting New. In the Overview tab:
a. Enter a policy name (for example, CRDC CGR Authorization Request) and then click Next.
b. In the Specific Conditions tab, click Add and in the pop-up window select NAS Port Type. In the next pop-up window, select Virtual (VPN) and click Next.
Figure 145 Configuring Policy for NPS Server
c. In the Specific Request forwarding tab, leave the default and click Next.
d. In the Specific Authentication method tab, leave the default and click Next.
e. In the Configure settings tab, leave the default and click Next.
f. Review all the parameters in the Completing Connection Request Policy Wizard and click Finish.
Figure 146 Configuring and Verifying NPS Policy Parameters
2. Launch Network Policy Server, expand Policies and select Network Policies. Add a new Policy by selecting Action and then selecting New.
a. In the New Network Policy window, enter Policy name (e.g., CGR Authorization Request) in the tab Specify Network Policy Name and Connection Type and then click Next.
b. In the Specific Conditions tab, click Add and in the pop-up window select NAS Port Type. Then in the next pop-up window, select Virtual (VPN) and click Next.
c. In the Specify Access Permission tab, choose the option Access granted and click Next.
d. In the Configure Authentication Methods tab, in the right pane, under EAP Types, click Add and select the Microsoft: Smart Card or other certificate option.
e. Select Microsoft: Smart Card or other certificate and click Edit. In the pop-up window “Certificate issued to” drop-down, select RADIUS (RADIUS Server Certificate) and click OK. Do not select the certificate issued to the CA if that certificate is also running on the same machine.
f. On the Configure Constraints tab, leave everything as default and click Next.
g. On the Configure Settings tab, under Standard RADIUS Attributes, select Framed-MTU, add the value 700, and then select OK. Leaving Termination-Action at its default is optional.
h. Click Apply and then OK to save all the properties of the Network Policy.
i. Restart the Network Policy Server.
Figure 147 Successful Dot1x Completion Wizard
After installation of ECC CA Server, the following certificates were exported from the ECC CA server:
■Root Certificate of the ECC CA Server—This certificate is used to program in CGEs.
■Private and Public certificate of CGEs—These certificates are used to program in CGEs for Dot1x authentication.
The Field Network Director (FND) is a prerequisite for this section; it is assumed that FND is already installed. If not, refer to Implementing Field Network Director for CCI for FND installation and configuration.
Software Security Module (SSM) is a low-cost alternative to a Hardware Security Module (HSM). IoT FND uses the CSMP protocol to communicate with CGE endpoints. SSM uses CiscoJ to provide cryptographic services such as signing and verifying CSMP messages, and CSMP Keystore management. SSM ensures Federal Information Processing Standards (FIPS) compliance while providing services. The user needs to install SSM on the IoT FND application server or another remote server. SSM remote-machine installations use HTTPS to securely communicate with IoT FND.
This section describes SSM installation and setup, including:
1. Get the IoT FND configuration details for the SSM. SSM ships with the following default credentials:
2. Enter 5 at the prompt, and complete the following when prompted:
3. To connect to this SSM server, copy and paste the output from the previous step into the cgms.properties file, completing the values when prompted.
Note: You must include the IPv4 address of the interface for IoT FND to use to connect to the SSM server.
Note: You must install and start the SSM server before switching to SSM.
To switch from using the Hardware Security Module (HSM) for CSMP-based messaging to using the SSM:
2. Run the ssm_setup.sh script on the SSM server.
3. Select Option 3 to print IoT FND SSM configuration.
4. Copy and paste the details into cgms.properties to connect to that SSM server; an example is shown after this list.
5. Ensure that the SSM is up and running and the user can connect to it.
New FND releases change the Web certificate every time FND is upgraded. Therefore, the trust entry in the SSM web keystore needs to be updated by adding the newly generated Web certificate into it. Keystore location: /opt/cgms-ssm/conf/ssm_web_keystore.
1. From the FND Web UI, go to the Admin tab (in the top right corner)-> SYSTEM MANAGEMENT, and select Certificate for Web. Download the base64 version of the Certificate for Web from the FND GUI. In this example, the file is downloaded and saved as certForWeb.txt.
2. Transfer this file to the FND server (RHEL OS) through the command line; for example, store the file under /root/certForWeb.txt.
3. Navigate to the SSM configuration directory /opt/cgms-ssm/conf/. View the content of the ssm_web_keystore using the following command:
4. Update the keystore by adding the current certificate as a new trusted CA certificate in the SSM web keystore. Instead of replacing the existing nms_trusted alias, a new entry can be added to the trusted CA certificate list. The following command imports the newly downloaded certForWeb.txt file into the ssm_web_keystore under the alias name fnd; from this point onward, it is treated as a trusted CA certificate.
5. The keystore should now contain three trusted CA certificates; observe the newly added fnd alias as the third entry.
6. Restart the SSM server for the change to take effect; there is no need to restart FND (cgms).
7. Along with the two other entries, the fnd entry is now present in the keystore.
8. FND now displays the certificate under Certificate for CSMP.
Figure 148 CSMP Certificate in FND after Installing SSM
The Headend Router (HER) is the converging point for the Headend. The HER provides routing connectivity between the components located in the DMZ and the components located in the data center area.
The HER also provides the routing connectivity between the FARs as well as the Headend components. As the traffic from the FAR is crossing an untrusted WAN, the traffic can be encrypted (optional, but highly recommended) for secure transmission over the WAN. The HER can terminate the secure tunnels from FAR and enable the communication between the FARs and Headend components like the FND, DHCP server, RSA CA server, and ECC CA server.
Note: The HER is located in the DMZ area. The HER provides routing connectivity for the FARs with Headend components located in both the DMZ and the data center, as well as with application servers. Unlike other Headend components, which interact among themselves at the application layer, the HER's interactions are only at the routing/transport layer. There should be IPv6 reachability between FND, which is present in shared services, and the HER.
Prerequisite: IP Address of all the components must be reachable from the HER.
In this implementation, the Cisco CSR 1000v is used as the HER (install two CSRs configured in HSRP for redundancy). In addition, most components in this implementation synchronize their time with the HER using NTP.
This section covers the following processes:
1. Configuring the HER interfaces.
2. NTP configuration:
a. Configure the HER as the NTP primary for other Headend components.
b. Configure network time source for the HER.
3. Integrating the HER with FND:
a. Verify that the HER is reachable from the FND.
b. Import the details of the HER into FND.
c. Verify the HER/FND communication.
4. Certificate enrollment of the HER:
a. Verify RSA CA server reachability from the HER.
b. Receive a copy of the RSA CA server certificate.
c. Receive the certificate of HER, signed by the RSA CA server.
5. Secure the communication with HER.
6. Selective route advertisement from the HER to the FAR:
–Route advertisement using IKEv2, post-tunnel establishment with the FAR.
The HER has the following types of interfaces:
DMZ interfaces are used to receive communication from the FARs and field devices such as CGEs.
HER Configuration for the Field-facing WAN Interface (located in DMZ)
The HER uses this field-facing DMZ interface for communication with the FAR. The interface is also configured with a virtual IP address to facilitate redundancy across multiple HERs.
Note: The FAR initiates the secure tunnel to this virtual IP address (y.y.y.y).
Using the field-facing DMZ interface, an overlay tunnel is established between the loopback interfaces of the HER and the FAR.
The sections NTP Configurations, Integrating HER with FND, and Certificate enrollment of the HER are available at the following URL:
■ https://salesconnect.cisco.com/#/content-detail/da249429-ec79-49fc-9471-0ec859e83872
A FlexVPN tunnel is used to secure the traffic between the HER and the FAR. FlexVPN is a robust, standards-based encryption technology that uses IKEv2. The tunnel configurations should be mapped to the correct security configurations. After the configurations are complete, communication between the HER and the FAR is validated. For this communication to be successful, the encryption algorithm, hashing algorithm, and Diffie-Hellman group must match between the HER and the FAR.
This configuration shows a virtual template configuration on the hub that allows multiple spoke configurations to be established.
The following configurations are important for the FlexVPN tunnel to be established. The IKEv2 proposal lists the hashing algorithm, encryption algorithms, and Diffie-Hellman group to be used in establishing the tunnel; this proposal is attached to the IKEv2 policy. Here, authentication is certificate based. The IKEv2 profile identifies the virtual-template (tunnel) to which the security configuration applies. The IKEv2 profile is attached to the IPsec profile, and the IPsec profile is attached to the virtual template.
This section covers the IKEv2 configuration required for certificate-based authentication:
The issuer common name used is IOT-RSA-ROOT-CA, which is the common name entered during the subject name configuration of RSA CA server.
This section covers the IPSec configuration required for certificate-based authentication:
The IPv4 and IPv6 addresses configured under the loopback interface are used in establishing the tunnels at the HER. Tunnels from multiple field routers can terminate on the same virtual-template interface; a virtual-access interface is cloned from the virtual-template to serve as the tunnel endpoint.
The virtual-template is configurable only when no active virtual-access exists or when the virtual-template interface is in the shutdown state. Traffic flowing through the virtual-template is secured by the FlexVPN tunnel.
This section covers the route advertisement using IKEv2, instead of using routing protocol.
1. Once the tunnel is established, routes can be advertised over it using IKEv2. Advertising routes using IKEv2 instead of a routing protocol has the following benefits:
–The lowest bandwidth consumption for route exchange.
–In turn, a lower cost to maintain communication between the field element and the Headend.
2. This implementation advertises default route to the tunnel peers by implementing the IPv4 and IPv6 access lists.
3. To advertise specific routes instead of a default route, the IPv4 and IPv6 access lists need to be modified to permit specific prefixes only. Advertising specific prefixes, instead of a default route, is recommended.
4. In this case, access lists are used to advertise the specific prefixes of the FND, CPNR, ECC CA server, RSA CA server (if needed), and use case-based IPv4/IPv6 addresses, as sketched below.
A centralized DHCPv6 server is required to be provisioned in the network to assign IPv6 addresses to CGEs. A DHCP server setup in the shared services network can be configured to enable DHCPv6 service with required scope options. In this implementation, an example configuration to provision a DHCPv6 server leveraging Cisco Prime Network Registrar (CPNR) for CGE IP addressing is discussed.
Note: The main purpose of the DHCPv6 server is to allocate the IPv6 address/prefix dynamically to the field devices (CGEs), not for any Headend components.
Use Case—To allocate IPv6 addresses to CGEs
1. Optionally, an IPv6 prefix can also be delegated along with the IPv6 address (allocated to endpoint).
2. This delegated IPv6 prefix can be used to enable IPv6 address auto-configuration of applications located behind the endpoint.
This section has been implemented using the following flow:
1. Obtain the CPNR license by mentioning the features needed (like DNS or DHCP).
2. Obtain the CPNR license to suit the scale requirement.
3. Download the latest CPNR X.Y.Z files from www.cisco.com. For example:
4. The server that hosts the Headend components should run ESXi as a Type-1 hypervisor.
5. Deploy both the regional and local OVAs on the ESXI server:
a. Ensure both OVAs are successfully deployed as VMs.
b. Power on both the local and regional VMs.
c. Open console of both the VMs using the vSphere client and set the root password.
d. Accept the end user license agreement on both VMs.
The sections CPNR Regional Server Setup, CPNR Local Server Setup, and Integrating CPNR(DHCP) with FND are available at the following URL (refer to “Implementing DHCP Server”):
■ https://salesconnect.cisco.com/#/content-detail/da249429-ec79-49fc-9471-0ec859e83872
1. Log in to the local CPNR (10.x.x.x:8080), click Settings (top right), and choose Advanced. From Operate-> Manage Servers, select the Local DHCP server (in the left panel) and select Network Interfaces (in the middle panel). The IPv6 address of CPNR is 2001:a:b:c::d, which is used for configuring the FAR relay interface. Therefore, click Configure for the 2001:a:b:c::d interface (the last entry in Figure 149, for eth160).
Figure 149 CPNR Ethernet Interface Configuration
2. From Design-> DHCPv6, select Options.
Figure 150 CPNR DHCPv6 Options
3. Click the Add Option (+) icon under the Options menu in the left panel, as shown in Figure 151.
Figure 151 DHCPv6 Option Definition Creation
4. In the pop-up window, enter the corresponding values: Name = CGE_OptionDefinition, Type = DHCPv6, vendor option enterprise ID = 26484, and then click Add Option Definition Set. The option definition set is created.
Figure 152 Setting the Values for Option Definition 258057
5. In the left panel, choose Options-> CGE_OptionDefinition, and then in the middle panel choose Option Definitions. Click the Add (+) icon and enter the corresponding values: Number: 17, Name: opt17, and select type vendor-opts from the drop-down list. Click Add Option Definition, and then click Save. You should receive a "Saved Successfully" message.
Figure 153 Setting opt-17 Value for Option Definition
6. Click Option Definitions and select opt17 that has just been created.
Figure 154 Successful Creation of opt 17
a. Click Add sub-option definition to add the NMS IPv6 address. Enter the following fields in the sub-option definition: Number=1, Name=NMS, type=IPv6 address (from the drop-down list). Leave the repeat field as is. Click Add Sub-Option Definition, and then click Save.
b. Click opt17 again and then click Add sub-option definition again to add the CE IPv6 address. Enter the following fields in the sub-option definition: Number=2, Name=Lightingale, type=IPv6 address (from the drop-down list). Leave the repeat field as is. Click Add Sub-Option Definition, and then click Save. After saving both values, the result looks like Figure 155.
Figure 155 Creation of sub-option for opt-17
Figure 156 DHCP v6 Definition Set with opt-17 and Sub-options
7. To create a DHCP policy, from Design-> DHCP Settings, select Policies. Click '+' under Policies. In the Add a DHCP Policy pop-up, enter Name: CGE_DHCP_Policy and select Add DHCP Policy.
8. Choose DHCPv6 Vendor Options as CGE_OptionDefinition and sub-option as opt17*[17] (vendor-opts) and then click Add Option.
Figure 157 Configuring a New DHCP Policy
9. Click opt17 to edit the option 17 settings; this gives you the option to edit the values. Click Modify Values, enter the following in the New Value field, and then click Save: (enterprise-id 26484 ((NMS 1 2001:abc::123) (Lightingale 2 ce-ipv6-address))).
Figure 158 Editing/Modifying DHCP Policy
10. Confirm that the DHCPv6 Settings on CGE_DHCP_POLICY are as shown in Figure 159:
Figure 159 Policy Values for CGE_DHCP_POLICY -1
Figure 160 Policy Values for CGE_DHCP_POLICY-2
11. Click Save and the message Saved Successfully should display after completion.
12. Click Design-> DHCPv6 and select Prefixes. In the left panel, under Prefixes, create a new prefix by clicking the '+' icon. Enter Name: CGE_Prefix and address and range as desired and then select Add IPv6 Prefix.
Figure 161 Adding IPv6 CGE Prefixes
13. In the Non-Parents Setting tab, select Policy as CGE_DHCP_Policy and Allocation-algorithms as interface-identifier. Then click Save.
Figure 162 Selecting Policies for the Prefixes
14. For Link Configuration, create a new link. Select Design-> DHCPv6 and then select Links. In the pop-up window, enter Name as CGE_Link_Details and the remainder of the values as default. Then select Add Link.
15. Under Select existing un-associated prefixes for this link, click Add. Select the prefix configured above and click Add in the Available List pop-up window.
16. Add a prefix for prefix delegation: enter the address/range and select DHCP type = prefix-delegation. Click Add Prefix and then click Save.
17. Define and create a DHCP policy for prefix delegation. From Design-> DHCP Settings, select Policies.
18. In POLICY_PD, select DHCPv6 settings. Select the following options:
–Allow-non-temporary-addresses: true
–Allow-temporary-addresses: false
Figure 163 Settings for DHCPv6 Settings
20. Click the policy defined for prefix delegation (Design-> DHCPv6, select Prefixes), in our case Prefix_Delegation1, choose the POLICY_PD policy under Non-parent settings, and then click Save.
Figure 164 Prefixes for POLICY_PD
21. Verify the policy associations: CGE_DHCP_Policy is associated with cge_prefix_final, and POLICY_PD is associated with Prefix_Delegation1.
22. Restart the DHCP server to apply the changes. From Operate-> Servers, select Manage Servers. Then click the DHCP server and, in the right corner, select Restart Server.
The Cisco Connected Grid Router (CGR) serves as a horizontal platform for various industrial services. It also provides services for street lighting applications and substation automation using data from intelligent electronic devices (IEDs). With features such as VLAN, VRF-Lite, and QoS, the CGR 1000 provides true multi-service capability to IoT industries.
The CGR 1240 series acts as the Field Area Router (FAR), which aggregates traffic from CGEs and routes it to the HER via the WAN. The CGR forms a tunnel with the HER to secure the data traffic flowing through it. The two WAN interface options are Ethernet and cellular:
The CGR 1240 series router provides the network connection between Neighbor Area Network and WAN.
CGR has the following types of interfaces:
■Cellular Interface (Remote PoP is covered in Remote PoP with CR-Mesh over Cellular Network Backhaul.)
The CGR, acting as a Field Area Router, has its uplink Ethernet interface connected to the CCI access network, and in turn forms a secure tunnel to the HER for communication.
Using the field-facing interface, an overlay tunnel is established between the loopback interface of FAR and HER.
Role of Wireless WPAN Interface
WPAN interface is used to communicate with CGEs like IR510 and Street Light Controllers.
Pre-staging is the process in which the CGR is pre-configured at the customer office premises with certificates, tunnel-based configurations, CGNA and WSMA profiles, and EEM script-based configurations. The pre-staging steps are:
3. Secure Tunnel Establishment
For SCEP enrollment, the CGR connects to the CA server to load certificates. Before certificate enrollment, configure the LAN interface of the CGR to communicate with the CA server.
CGR Configuration for the HER-facing LAN Interface
Note: The default gateway of the CA server is the CGR interface IP address.
Simple Certificate Enrollment Protocol (SCEP)
SCEP is a Cisco-developed enrollment protocol that uses HTTP to communicate with the CA or registration authority (RA). It is the most commonly used method for sending and receiving certificate requests and certificates.
Certificate enrollment, which is the process of obtaining a certificate from a certification authority (CA), occurs between the end host that requests the certificate and the CA. Each peer that participates in the public key infrastructure (PKI) must enroll with a CA.
Prerequisites for PKI Certificate Enrollment
Before configuring peers for certificate enrollment, you should have the following items:
A Windows Server 2016 machine acts as the Certificate Authority server, supporting both auto enrollment and auto approval.
Enable NTP on the device so that PKI services such as auto enrollment and certificate rollover function correctly (the device should be time-synchronized with the CA server).
Steps to Enroll CGR with the RSA CA Server
1. Creation of a 2048-bit RSA key-pair named LDevID.
2. Definition of certificate authority details, trusted by the HER/CGR (that is, trustpoint definition):
a. Enrollment profile (with Enrollment URL defined) to reach the certificate authority for certificate enrollment.
b. Communication restricted only to the Authentic certificate authority, by performing a fingerprint check.
c. Communications are accepted only from the RSA CA server whose advertised SHA1 fingerprint/thumbprint matches the configured fingerprint.
d. The serial number to be part of the certificate.
e. The IP address does not need to be part of the certificate.
f. No password is needed during certificate enrollment.
g. The key pair created above in this section is used.
3. Receiving a copy of the RSA CA server's certificate (with public key).
4. Receiving the certificate of HER signed by RSA CA server:
a. The signed certificate should contain the above details, which are configured under the trust point definition.
Note: Ensure that no blank space exists after the password in the Trustpoint configuration.
Verifying the Certificate Enrollment Status of CGR
Note: The enrollment URL differs according to the type of RSA CA server:
a. For the Windows CA server, the URL path is http://rsaca.iot.cisco.com/certsrv/mscep/mscep.dll.
b. The fingerprint should be extracted from the RSA CA server's certificate. The Subject Name contents appear on the issued certificates.
This section shows the configurations that must be executed on the Cisco CGR to establish a tunnel with the HER. The security configurations are the same as the HER security configurations; if a mismatch exists between the HER and CGR configurations, the tunnel between them is not established.
FAR advertises routes of IPv6 CGEs to HER by advertising specific prefixes through IKEv2 prefix injection:
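For example, the IPv6 mesh prefixes can be injected through the IKEv2 authorization policy (the access-list name and prefix are placeholders):

crypto ikev2 authorization policy FlexVPN_Author_Policy
 route set interface
 route set access-list ipv6 MESH_PREFIXES
!
ipv6 access-list MESH_PREFIXES
 permit ipv6 2001:DB8:ABCD::/48 any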
To monitor the CGR using FND, the CGR first needs to register with FND. The CGR registration steps are shown below:
Note: cg-nms.odm should be the latest version; otherwise, CGR registration fails.
Verify the Reachability from CGR to FND
Verify that the CGR has IPv4 reachability to the FND, for example by pinging the FND address from the CGR:
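For example (the FND address shown is a placeholder):

CGR1240# ping 172.16.103.100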
This action needs to be performed in the FND. An entry must be added in FND for each FAR that is to go through registration. The following section captures the csv method for adding FAR entries in the FND: details about one or more FARs can be captured in a csv file and imported into the FND in one go.
The first row of the csv holds the ordered list of device properties (comma separated). Each subsequent row represents a FAR as an ordered list of comma-separated values corresponding to the device properties in the first row.
The following is sample content showing the structure of a csv file:
Note: Do not leave any extra spaces before/after comma while creating the csv file.
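A hypothetical FAR.csv sketch is shown below; the property names and values are illustrative only, and the exact set of required properties depends on the FND release:

deviceType,eid,adminUsername,adminPassword,ip
cgr1000,CGR1240/K9+FTX2150G0AB,cg-nms-admin,<encrypted-password>,172.16.107.2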
Device type: Helps identify the type of device. Examples of device type: ir800, cgr1000.
Admin password: The password in encrypted form, derived as described in Generating the Encrypted Password below. FND uses the unencrypted form of this password to interact with the FAR.
Generating the Encrypted Password
Log in to the FND via SSH and perform the following steps to get the encrypted password that needs to be populated into the FAR.csv file.
Note: For security reasons, it is recommended to have unique passwords for each FAR.
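A sketch of the procedure follows; the signature tool path and argument order are assumptions and may differ by FND release:

[root@fnd ~]# echo 'Str0ngPassw0rd' > /tmp/pwd
[root@fnd ~]# /opt/cgms-tools/bin/signature-tool encrypt cgms_keystore cgms /tmp/pwd
[root@fnd ~]# rm /tmp/pwd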
In the above snippet, the password that should be used for accessing the FAR is stored in a temporary file named /tmp/pwd. The signature tool is then run to encrypt the password stored in the file /tmp/pwd with the key (with alias cgms) stored in the cgms_keystore. Finally, remove the password file /tmp/pwd for security reasons.
This section describes the steps for importing the FAR.csv into the FND.
1. From Devices, choose Field Devices, select Inventory, and in the drop-down list, select Add Devices.
Figure 165 Importing FAR into FND
2. Choose the FAR.csv file and then click Add.
Figure 166 Insert FAR csv into FND
3. The FND performs a validation of the FAR.csv file and successful validation results in importing FAR details into the FND. If any failures exist, click the number under the column Failure# corresponding to the latest import attempt; this opens a window that displays the failures encountered.
Figure 167 Successful Addition of CGR into FND
4. After the FAR.csv import, the status must be successful before proceeding further. After a successful import, the FAR will be in the Unheard state. Click the FAR PID to verify the device/config properties of the FAR imported into the FND.
Figure 168 Dashboard Displaying CGR after Upload
5. After importing the FAR.csv into the FND, navigate to the Config Properties section of the corresponding FAR and verify the accuracy of the device parameters.
Figure 169 FND UI Displaying Properties of CGR
Config Provisioning Settings on FND
6. To enable the CGR to communicate with FND, provide the FND URL in the provisioning settings; it must match the FND URL in the CGR CGNA configuration.
Figure 170 Configuration of FND Provisional Settings to Communicate with CGR
On the CGR, the WPAN configuration, along with dot1x, AAA, and mesh security, is pushed from FND after the CGR is successfully registered. This section describes the steps to push the configuration from FND to CGR after registration.
1. From the FND UI, select the Config drop-down list in the top panel and then select Device Configuration.
2. Select the Router option from the left panel and then select the group in which the user needs to apply configuration after CGR registration.
3. Go to the Edit Configuration Template tab, remove the default template, and insert the WPAN configuration. For WPAN configuration, please refer to Sample Cisco Resilient Mesh Security Configuration.
4. Enrollment configuration of CGR.
To enroll the CGR into FND, the following configuration for AAA, HTTP, CGNA profiles, and WSMA must be applied on the CGR:
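A representative sketch of this onboarding configuration follows; the username, password, and FND URL are placeholders, and the exact template is supplied during pre-staging:

aaa new-model
aaa authentication login default local
aaa authorization exec default local
username cg-nms-admin privilege 15 secret <password>
!
ip http secure-server
ip http authentication aaa login-authentication default
!
wsma agent exec
 profile exec_profile
wsma agent config
 profile config_profile
!
wsma profile listener exec_profile
 transport https path /wsma/exec
wsma profile listener config_profile
 transport https path /wsma/config
!
cgna gzip
cgna profile cg-nms-register
 add-command show hosts | format flash:/managed/odm/cg-nms.odm
 add-command show interfaces | format flash:/managed/odm/cg-nms.odm
 add-command show version | format flash:/managed/odm/cg-nms.odm
 interval 10
 url https://fnd.iot.cisco.com:9121/cgna/ios/registration
 active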
After CGR is on-boarded, the user can see the configuration parameters in FND. In the FND Dashboard, the user is able to see CGR status.
Figure 171 CGR Successful On-boarding
Figure 172 CGR Properties after Successful On-board
Verification on FAR for Successful Registration with FND
The following CGNA profiles can be checked on the FAR to verify successful registration:
1. Profile Name: cg-nms-register:
a. Observe that the profile is disabled.
b. With a successful last response.
2. Profile Name: cg-nms-periodic:
a. Observe that the profile is Active, waiting on timer for next action.
b. With a successful last response.
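For example:

CGR1240# show cgna profile name cg-nms-register
CGR1240# show cgna profile name cg-nms-periodic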
FND can perform CGR management tasks such as device maintenance, monitoring, and operations. In this section, the CGR is upgraded using FND. The CGR upgrade has the following steps:
a. Load the image into FND Firmware Images.
b. Upload the image to the CGR group members.
c. Install the image and reload the device.
1. Go to FND UI and on the top right, select the CONFIG drop-down list and then select Firmware Update.
2. Go to Images, select IOS-CGR, and click +. In the pop-up window, select the CGR image and then select Add File.
Figure 173 CGR Image Upload into FND
3. After uploading the image under Firmware Images, select Groups, where you can create groups by selecting Assign devices to group. Select the group and then select Upload Image; this uploads the image to the CGR group members.
Figure 174 Creation of Groups and Upload of Image into Device
4. Once upload is completed, click Install Image to install the image on the router. It will take some time to install the latest image on the router.
Figure 175 Completion of Image Upload in FND
5. Once the Reload is completed, the FND UI will display Installation completed.
Figure 176 Image Upgrade Completion
The CGE communication module performs a secure 802.1X network join through neighboring CG-Endpoints or the FAR, authenticating to the AAA RADIUS server in the data center. The CGR serves as the authenticator and communicates with a standard AAA server using RADIUS. The CGE uses a stateless EAP proxy that forwards EAP messages between the CGR and a joining interface, because the joining interface might be multiple mesh hops away from the CGR. The MTU setting on the AAA server must be set to 800 bytes or lower, because the IEEE 802.1X implementation in CGEs limits the MTU to 800 bytes. RADIUS servers can use auth-port 1812 and acct-port 1813.
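A hypothetical RADIUS server definition on the CGR illustrating these ports (the server name, address, and key are placeholders):

radius server CCI-AAA
 address ipv4 172.16.106.35 auth-port 1812 acct-port 1813
 key <shared-secret>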
Cisco supports Radio Frequency (RF) mesh communication technology in the CGE space for last-mile connectivity. A Cisco CGE needs to implement the RF protocol stacks and must be appropriately configured to join and communicate with a Neighborhood Area Network (NAN) rooted at a Cisco Connected Grid Router (CGR) 1000 Series router.
A CGE connected to a NAN/CG mesh (RF) must be capable of end-to-end Layer 3 communication using IPv6. When a CGE attempts to join a CR-Mesh network, it must authenticate itself to the network, obtain link layer security credentials, join the RPL routing domain, obtain an IPv6 address along with options and prefix delegation if required, register itself to network management services (FND) using CoAP Simple Management Protocol (CSMP), and communicate with required application servers (LightingGale Application) to deliver grid functionalities.
As we know, the CGR 1000 series acts as a Field Area Router (FAR). Each FAR advertises a unique Personal Area Network (PAN), which is recognized by a combination of an SSID and a PAN ID. CGEs are programmed to join a PAN with a given SSID. CGEs can migrate between PANs based on a set of metrics for the PAN (very rarely) and for fault tolerance. CR-Mesh is embedded in CGEs using IP Layer 3 mesh networking technology that performs end-to-end IPv6 networking functions on the communication module. CGEs support an IEEE 802.15.4e/g interface and standards-based IPv6 communication stack, including security and network management.
CR-Mesh supports a frequency-hopping radio link, network discovery, link-layer network access control, network-layer auto configuration, IPv6 routing and forwarding, firmware upgrade, and power outage notification. The CGR runs the IPv6 Routing Protocol over Low Power and Lossy Networks, also known as RPL. The IPv6 Layer-3 RPL protocol is used to build the mesh network.
The installation of WPAN with CGR1240 can be found at the following URL:
■ https://www.cisco.com/c/en/us/td/docs/routers/connectedgrid/cgr1000/ios/modules/wpan_cgmesh/b_wpan_cgmesh_IOS_cfg.html
Note: The CGR1000 router must be running Cisco IOS Release 15.8(3)M0a (cgr1000-universalk9-bundle.SPA.158-3.M0a.bin) or greater to support the CGM WPAN-OFDM Module. Cisco WPAN version must be 5.7.27.
This section of the document covers only the WPAN configuration of the Cisco CGR WPAN module. Before deployment in the field, pre-staging configurations are done on the CGE. The pre-staging configurations are provided by the operator, and the CGE provider configures them on the CGE device during the manufacturing process.
The pre-staging configurations include CGE certificate with private key, CSMP certificate, ECC CA Root Certificate, and XML config file. Apart from several other configurations, the XML config includes SSID and Phy mode.
All configurations and management of CGR WPAN are done by IoT FND using Cisco IOS commands (Release 15.4(2)CG and greater).
At the CGR 1000, configure the WPAN module interface as described in the following subsections.
Note: If the WPAN module is inserted in slot 5, the interface is numbered 5/1 (slot numbers are marked inside the CGR).
Enabling Dot1x, Mesh-security, and DHCPv6
You must enable the dot1x (802.1X), mesh-security, and DHCPv6 features to configure the WPAN interface. To enable these features, use the following commands:
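A sketch of the global enablement; the exact set of commands varies by IOS release, and the mesh-security key itself is set in privileged EXEC mode, as shown later in this section:

CGR1240(config)# dot1x system-auth-control
CGR1240(config)# service dhcp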
For dot1x, the WPAN interface configuration requires:
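A sketch of the 802.1X authenticator commands on the WPAN interface:

CGR1240(config-if)# dot1x pae authenticator
CGR1240(config-if)# authentication host-mode multi-auth
CGR1240(config-if)# authentication port-control auto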
To configure your IEEE 802.15.4 Personal Area Network Identifier (PAN ID), use the following WPAN command:
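For example (the PAN ID value is a placeholder):

CGR1240(config-if)# ieee154 panid 1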
The Service Set Identifier (SSID) should be consistent across the CGR WPAN interface and the CGEs.
To configure the SSID, use the ieee154 ssid <ssid_name> command, for example:
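For example (the SSID value is a placeholder):

CGR1240(config-if)# ieee154 ssid cci_mesh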
The txpower value in the configuration specifies the transmit power setting of the physical hardware (chip). However, the radio signal out of the hardware chip must travel through the amplifier, front end, antenna, and so on, which causes the output power of the chip to be less than the actual electromagnetic signal emitted into the air. Values range from 2 (high) down to the default of -34 dBm (low, suitable for lab testing).
To configure the transmit power for outdoor usage, specify a higher transmit power, such as:
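For example:

CGR1240(config-if)# ieee154 txpower 2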
A notch is a list of disabled channels in the 902-928 MHz range. If no notch exists, all channels are enabled. If there is a notch [x, y], then channels between x and y are disabled.
CLI interface commands define the CGR phy-mode. In this deployment, only PHY mode 98 is used:
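For example:

CGR1240(config-if)# ieee154 phy-mode 98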
Setting the Minimum Version Increment
To set the minimum time between RPL version increments, use the version-incr-time command:
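For example (the value, in minutes, is a placeholder):

CGR1240(config-if)# rpl version-incr-time 10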
Setting the DODAG Lifetime Duration
To set the Destination-Oriented Directed Acyclic Graph (DODAG) lifetime duration, use the DAG lifetime command. Each node uses the lifetime duration parameter to drive its own operation (such as Destination Advertisement Object or DAO transmission interval). Also, the CGR uses this lifetime value as the timeout duration for each RPL routing entry:
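For example (the value is a placeholder):

CGR1240(config-if)# rpl dag-lifetime 60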
Configuring the DODAG Information Object Parameter
To configure the DODAG Information Object (DIO) parameter per the RPL IETF specification, use the rpl dio-min command:
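For example (the value is a placeholder):

CGR1240(config-if)# rpl dio-min 16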
To set the DIO double parameter as per the RPL IETF specification, use the dio-dbl command. DIO double is a doubling factor parameter used by the RPL protocol:
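For example (the value is a placeholder):

CGR1240(config-if)# rpl dio-dbl 2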
To determine the available IPv6 functions, query the ipv6 commands. To enable IPv6 on an interface, use:
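For example (the address is a placeholder):

CGR1240(config-if)# ipv6 address 2001:DB8:ABCD:1::1/64
CGR1240(config-if)# ipv6 enable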
IPv6 address leases for end nodes are managed by CPNR (a centralized DHCP server). To configure the IPv6 DHCP relay, use the ipv6 dhcp relay command:
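For example (the CPNR address is a placeholder):

CGR1240(config-if)# ipv6 dhcp relay destination 2001:DB8::705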
Configuring the Power Outage Server
You can configure the power outage server with the outage server command. We recommend an IPv6 address or an IPv6-resolvable FQDN of a server. In most cases, the outage server is your IoT FND server:
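A hypothetical example, assuming the interface-level outage server command (the FND IPv6 address is a placeholder):

CGR1240(config-if)# outage server 2001:DB8::100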
CGEs use the IEEE 802.1X protocol, known as Extensible Authentication Protocol over LAN (EAPOL), for authentication.
To set the mesh key, use the mesh-security set mesh-key command in privileged mode:
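A hypothetical example; the key value is a placeholder, and the exact syntax may vary by release:

CGR1240# mesh-security set mesh-key interface wpan 5/1 key 00112233445566778899AABBCCDDEEFF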
Note: Mesh-security config and keys do not appear in the CGR configuration as shown by show running-config or show startup-config.
The following example shows what is required for CGR WPAN, dot1x and mesh-security:
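A consolidated sketch combining the commands above is shown below; all identifiers, addresses, and values are placeholders, and the validated template is in Sample Cisco Resilient Mesh Security Configuration:

interface Wpan5/1
 no ip address
 ip broadcast-address 0.0.0.0
 ieee154 panid 1
 ieee154 ssid cci_mesh
 ieee154 txpower 2
 ieee154 phy-mode 98
 rpl dag-lifetime 60
 rpl dio-dbl 2
 rpl dio-min 16
 rpl version-incr-time 10
 authentication host-mode multi-auth
 authentication port-control auto
 dot1x pae authenticator
 ipv6 address 2001:DB8:ABCD:1::1/64
 ipv6 enable
 ipv6 dhcp relay destination 2001:DB8::705
 outage server 2001:DB8::100
!
! Mesh key, set in privileged EXEC mode:
CGR1240# mesh-security set mesh-key interface wpan 5/1 key 00112233445566778899AABBCCDDEEFF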
This action needs to be performed in the FND. The following section captures the csv method for adding an entry for the CGE in the FND. Just like the CGR, a similar CSV must be uploaded to FND to onboard CGEs.
The following is sample content showing the structure of a csv file:
Note: Do not leave any extra spaces before/after comma while creating the csv file.
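A hypothetical CGE csv sketch (the property names are illustrative; the EID shown matches the device referenced later in this section):

eid,deviceType
00173B14002200,cgmesh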
Device type: Helps identify the type of device.
This section describes the steps for importing the CR-Mesh.csv into the FND:
1. Open the FND UI, click the Devices drop-down list, select Field Devices, select the Inventory tab, click the Add Devices tab, select the CR-Mesh csv, and then click Add, which imports the CGEs into FND.
Figure 177 Importing CGE into FND
2. After uploading the csv of the devices, once the RPL tree is formed in the CGR, the devices show up in FND, where the user can monitor their status. If a CGE does not come up, check the reachability from FND to the CGE and vice versa.
4. The user can verify reachability by using the traceroute and ping commands from the FND UI (Devices > click Device (00173B14002200) > Ping/Traceroute).
Figure 179 Ping from FND to CGE
Figure 180 Traceroute from FND to CGE
The application firmware of a CGE can be upgraded from the FND. The application firmware image has to be obtained from third-party vendors. The steps for performing the upgrade are the following:
1. Upload Image into the Firmware repository.
2. Load Application Firmware Image into CGE.
3. Schedule an upgrade and verify upgrade.
Note: Make sure the Application Firmware Image is compatible with the WPAN version; otherwise, the user will lose connection to CGE.
4. Go to the FND UI, select the Config drop-down list, and select Firmware Update. Select Images, select RF, and click the '+' icon to upload the firmware image. Then click Add File.
Figure 181 Application Firmware Images of CGEs in FND
5. Go to Groups and select the group to upgrade. From Firmware Management, select Upload Image; in the pop-up window, set the select-type to RF, and then select the image to upload.
Figure 182 Upload of Application Firmware Image to CGEs
6. After the firmware image is uploaded into the device, schedule an upgrade by clicking Schedule as shown in Figure 183:
Figure 183 Scheduling an Upgrade in FND
7. Set the Install and Reload time; the CGE then installs the image and reloads automatically. After the upgrade, when the node comes up, verify that it is running the latest version.
Figure 184 Image Upgrade Successful
CGRs can also be installed with a CGM-WPAN-OFDM module, which provides a low-cost, low-power solution for CCI. The CGM-WPAN-OFDM module is designed to operate within an RF900 wireless network to provide control over Cisco Resilient Mesh Endpoints (CR-Mesh) with serial (RS232/RS485), USB (LS/FS), or Fast Ethernet (10/100) ports.
Table 24 WPAN Module Models Used in CCI
CGM-WPAN-OFDM-FCC: WPAN RF 900 plug-in module for CGR 1000 routers. Provides access to 900 MHz mesh networks.
Table 25 LED Indicators of the CGM WPAN-OFDM-FCC WPAN Module
Table 26 shows the CLI interface commands for the CGM WPAN-OFDM Module. In the CCI scenario, phy-mode 98 is used.
Table 26 Summary of CLI Interface Commands for the CGM WPAN-OFDM Module
–The minimum supported firmware version for OFDM WPAN is 5.7.27.
–CGR1000 router must be running Cisco IOS Release 15.7(3)M1 (cgr1000-universalk9-bundle.SPA.157-3.M1.bin) or greater to support the CGM WPAN-OFDM Module.
This section covers the implementation of Remote Point-of-Presence (RPoP) sites in the CCI network, as discussed in the CCI Solution Design Guide. Example RPoP sites with LoRaWAN or CR-Mesh access networks configured over a wireless (cellular) backhaul network, as validated in this CVD, are discussed in this section.
This chapter includes the following major topics:
■ Implementing RPoP IR1101 with Cellular Backhaul to CCI Headend
■ Remote PoP with Cellular Backhaul to CCI Headend
■ Remote PoP with LoRaWAN Access Network
■ Remote PoP with CR-Mesh over Cellular Network Backhaul
■ Remote PoP with Digital Subscriber Line (DSL) Backhaul
■ Remote PoP Management using Cisco DNA Center
This section covers Cisco IR1101 as Remote PoP gateway implementation in CCI. It discusses different services that RPoP offers with the capabilities of IR1101 and how the CCI multiservice network with macro-segmentation is extended to RPoP endpoints/assets via the CCI headend (HE) network in the DMZ.
CCI provides network macro-segmentation through SD-Access Virtual Networks (VNs). The same VNs are extended to the RPoP gateways via FlexVPN, isolating each service from the others for network security and thereby offering isolated and secure multiservice deployments at the RPoP gateways.
Figure 185 RPoP IR1101 Implementation Flow
Pre-staging is the process in which the IR1101s are preconfigured with certificates, tunnel-based configurations, CGNA and WSMA profiles, and EEM script-based configurations. Pre-staging is done in a facility by connecting the IR1101 to the LAN. Once pre-staging is done, the remote gateways are shipped to the deployment locations. The pre-staging steps are:
1. Certificate Enrollment using SCEP
2. Secure Tunnel Establishment
3. 4G SIM Installation and Configuration
For SCEP enrollment, the IR1101 is connected to the CA server for loading certificates. Before certificate enrollment, configure the LAN interface of the IR1101 to communicate with the CA server. Connect the FastEthernet port of the IR1101 to any trusted CCI access switch that has reachability to the CA server.
Create an SVI, configure the VLAN, and assign an IP address via DHCP:
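A minimal sketch (the VLAN number and switchport are placeholders):

vlan 100
!
interface Vlan100
 ip address dhcp
 no shutdown
!
interface FastEthernet0/0/1
 switchport mode access
 switchport access vlan 100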
Prerequisites for PKI Certificate Enrollment
Before configuring peers for certificate enrollment, you should have the following items:
A Windows Server 2016 machine acts as the certificate authority (CA) server, providing both auto-enrollment and auto-approval.
Enable NTP on the device so that PKI services such as auto-enrollment and certificate rollover function correctly (the device must be time-synchronized with the CA server).
Steps to Enroll IR1101 with the RSA CA Server
The following steps need to be performed:
1. Creation of a 2048-bit RSA key-pair named LDevID.
2. Definition of certificate authority details, trusted by the HER/IR1101 (that is, trust point definition):
a. Enrollment profile (with Enrollment URL defined) to reach the certificate authority for certificate enrollment.
b. Communication restricted only to the Authentic certificate authority, by performing a fingerprint check.
c. Communications accepted only from the RSA CA server, whose advertised SHA1 fingerprint/thumbprint matches with the configured fingerprint.
d. The serial number to be part of the certificate.
e. The IP address is NOT needed to be part of the certificate.
f. No password is needed during certificate enrollment.
g. The key pair created above in this section is used.
3. Receiving a copy of the RSA CA server's certificate (with public key).
4. Receiving the certificate of HER signed by RSA CA server:
a. The signed certificate should contain the above details, which are configured under the trust point definition.
Note: Ensure that no blank space exists after the password in the Trustpoint configuration.
Verifying the Certificate Enrollment Status of IR1101:
Note: The enrollment URL differs according to the type of RSA CA server.
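As with the CGR, the enrollment status can be verified with the following show command:

IR1101# show crypto pki certificates LDevID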
Refer to the following to install SIM on IR1101:
https://www.cisco.com/c/en/us/td/docs/routers/access/1101/b_IR1101HIG/b_IR1101HIG_chapter_010.html
■IR1101 SIM installation (requires a pluggable LTE module installed on the gateway)
IR1101 Cellular Interface Example Configuration:
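A hedged sketch of a basic IR1101 cellular configuration; APN provisioning and interface numbering depend on the pluggable module and carrier:

interface Cellular0/1/0
 ip address negotiated
 dialer in-band
 dialer idle-timeout 0
 dialer-group 1
!
dialer-list 1 protocol ip permit
ip route 0.0.0.0 0.0.0.0 Cellular0/1/0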
This section covers the configurations that must be executed on the Cisco IR1101 to establish a FlexVPN tunnel with the HER. The security configurations should match the HER security configurations to form the FlexVPN tunnel.
Selective Route Advertisement from IR1101 to HER:
The IR1101 advertises selected routes to the HER by injecting specific prefixes through IKEv2, as shown below:
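For example (the policy and access-list names and the prefix are placeholders):

crypto ikev2 authorization policy FlexVPN_Author_Policy
 route set interface
 route set access-list FLEXVPN_ROUTES
!
ip access-list standard FLEXVPN_ROUTES
 permit 192.168.20.0 0.0.0.255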
For IR1101 registration with FND and management, refer to FAR Registration into FND (NMS).
In CCI SDA deployment, Virtual Networks (VN) provide the isolation of networks by segmenting the overall network into multiple logically separate networks as needed. In RPoP deployments the CCI SDA VNs are extended to the RPoP Gateways (IR1101s).
Stretching the SDA VNs to the RPoP gateways involves two steps:
1. Extending the SDA Multi-VRF routes to HER from FR
2. Multi-VRF routes extension from HER to RPoP Gateway
Extending the SDA Multi-VRF Routes to HER from FR
Because the fusion router learns all the prefixes available inside each VRF through route peering with the borders of the different PoP sites, the intended VRFs can be extended to the RPoP gateway using VRF-Lite with BGP.
In CCI, Firepower is positioned between the Fusion Router (FR) and the HER and is deployed in routed mode. To use VRF-Lite between the FR and HER to exchange multi-VRF route prefixes, the FR and HER should be in the same network. To overcome this constraint, a point-to-point (P2P) Generic Routing Encapsulation (GRE) tunneling mechanism is used. The configuration steps are shown below.
Figure 186 VN/VRF Extension from Fusion Router to HER
Step 1: Configuring VRF definitions:
Configure the VRF definitions on the HER for the VRFs/VNs that are intended to be stretched to the RPoP. Each VRF is assigned a Route Distinguisher.
Note: VRF-lite configuration does not need the route-target.
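A minimal sketch for one VRF (the VN name and RD value are placeholders):

vrf definition VN1
 rd 100:1
 !
 address-family ipv4
 exit-address-family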
Step 2: GRE Interfaces reachability:
Reachability between the GRE source and destination interfaces can be achieved by advertising them via static routes or the underlay routing protocol, in this case EIGRP.
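A hedged sketch of the underlay advertisement, applied on both the FR and HER (the EIGRP AS number and network are placeholders):

router eigrp 100
 network 10.60.60.0 0.0.0.255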
Step 3: Configuring GRE Tunnels for Each VN/VRF:
The tunnels behave as virtual point-to-point links with two endpoints identified by the tunnel source and tunnel destination addresses. Configuring a GRE tunnel involves creating a tunnel interface, which is a logical interface. Below is an example configuration on the FR and HER for two of the VRFs.
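A hedged sketch for the FR side is shown below; the VRF names, tunnel numbers, and addresses are placeholders, and the HER side mirrors this configuration with the tunnel source/destination and IP addressing reversed.

! FR: one GRE tunnel per VN/VRF toward the HER
interface Tunnel101
 vrf forwarding VN1
 ip address 10.255.101.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 10.60.60.2
!
interface Tunnel102
 vrf forwarding VN2
 ip address 10.255.102.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 10.60.60.2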